Impacts of frequency increment errors on frequency diverse array beampattern
NASA Astrophysics Data System (ADS)
Gao, Kuandong; Chen, Hui; Shao, Huaizong; Cai, Jingye; Wang, Wen-Qin
2015-12-01
Different from a conventional phased array, which provides only an angle-dependent beampattern, a frequency diverse array (FDA) employs a small frequency increment across its antenna elements and thus produces a range-angle-dependent beampattern. However, because of imperfect electronic devices, accurate frequency increments are difficult to guarantee, and unavoidable frequency increment errors consequently degrade array performance. In this paper, we investigate the impacts of frequency increment errors on the FDA beampattern. We derive the beampattern errors caused by deterministic frequency increment errors. For stochastic frequency increment errors, the corresponding upper and lower bounds of the FDA beampattern error are derived and verified by numerical results. Furthermore, the statistical characteristics of the FDA beampattern with random frequency increment errors obeying Gaussian and uniform distributions are also investigated.
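The range-angle coupling described above, and its degradation by increment errors, can be sketched numerically. The narrowband array-factor model below is a minimal illustration; all parameters (element count, carrier, increment, spacing, error sizes) are assumptions, not values from the paper.

```python
import numpy as np

def fda_beampattern(theta, R, t, N=16, f0=10e9, df=30e3, d=0.015, dferr=None):
    """Normalized FDA array-factor magnitude at angle theta (rad),
    range R (m) and time t (s); dferr holds optional per-element
    frequency-increment errors (Hz)."""
    c = 3e8
    n = np.arange(N)
    fn = n * df                     # nominal increment of element n
    if dferr is not None:
        fn = fn + np.asarray(dferr)
    # range/angle/time-dependent element phases (common carrier term removed)
    phase = 2 * np.pi * (fn * (t - R / c) + f0 * n * d * np.sin(theta) / c)
    return abs(np.exp(1j * phase).sum()) / N

# at the nominal mainlobe the pattern reaches 1; random increment
# errors (here 10% rms, Gaussian) pull the peak below 1
t0 = 300.0 / 3e8 + 1.0 / 30e3       # aligns all nominal phases at theta = 0
rng = np.random.default_rng(1)
err = rng.normal(0.0, 3e3, 16)
peak_ideal = fda_beampattern(0.0, 300.0, t0)
peak_err = fda_beampattern(0.0, 300.0, t0, dferr=err)
```

Sweeping `theta` and `R` with and without `dferr` reproduces, qualitatively, the range-angle-dependent pattern and its error-induced distortion.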
Spatial frequency domain error budget
Hauschildt, H; Krulewich, D
1998-08-27
The aim of this paper is to describe a methodology for designing and characterizing machines used to manufacture or inspect parts with spatial-frequency-based specifications. At Lawrence Livermore National Laboratory, one of our responsibilities is to design or select the appropriate machine tools to produce advanced optical and weapons systems. Recently, many of the component tolerances for these systems have been specified in terms of the spatial frequency content of residual errors on the surface. We typically use an error budget as a sensitivity analysis tool to ensure that the parts manufactured by a machine will meet the specified component tolerances. Error budgets provide the formalism whereby we account for all sources of uncertainty in a process and sum them to arrive at a net prediction of how "precisely" a manufactured component can meet a target specification. Using the error budget, we are able to minimize risk during the initial stages by ensuring that the machine will produce components that meet specifications before the machine is actually built or purchased. However, the current error budgeting procedure provides no formal mechanism for designing machines that can produce parts with spatial-frequency-based specifications. The output of the current error budgeting procedure is a single number estimating the net worst-case or RMS error on the workpiece. This procedure has limited ability to differentiate between low-spatial-frequency form errors and high-frequency surface-finish errors. Therefore, the current error budgeting procedure can lead us to reject a machine that is adequate or accept a machine that is inadequate. This paper describes a new error budgeting methodology to aid in the design and characterization of machines used to manufacture or inspect parts with spatial-frequency-based specifications. The output of this new procedure is the continuous spatial frequency content of the errors that result on a machined part. If the machine…
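The spatial-frequency error budget argued for above can be sketched as a sum of per-source error power spectral densities, reported band by band rather than as one RMS number. The error sources, their PSD shapes, and all numbers below are entirely hypothetical illustrations of the bookkeeping, not the paper's method.

```python
import numpy as np

def band_rms(freqs, psd, f_lo, f_hi):
    """RMS error in a spatial-frequency band from a one-sided PSD
    (freqs in cycles/mm, psd in um^2 per cycle/mm); trapezoid rule."""
    m = (freqs >= f_lo) & (freqs <= f_hi)
    f, p = freqs[m], psd[m]
    return np.sqrt(np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(f)))

# hypothetical, uncorrelated error sources: their PSDs simply add
freqs = np.linspace(1e-3, 10.0, 2000)                       # cycles/mm
psd_spindle = 1e-4 / (1 + (freqs / 0.05) ** 2)              # slow form error
psd_vibration = 5e-6 * np.exp(-((freqs - 2.0) / 0.5) ** 2)  # tooling vibration
psd_total = psd_spindle + psd_vibration

form_rms = band_rms(freqs, psd_total, 1e-3, 0.1)    # figure band
finish_rms = band_rms(freqs, psd_total, 1.0, 10.0)  # finish band
```

Unlike a single net RMS number, the band-resolved output shows whether a given machine fails on figure, on finish, or on neither.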
Derivational Morphophonology: Exploring Errors in Third Graders' Productions
ERIC Educational Resources Information Center
Jarmulowicz, Linda; Hay, Sarah E.
2009-01-01
Purpose: This study describes a post hoc analysis of segmental, stress, and syllabification errors in third graders' productions of derived English words with the stress-changing suffixes "-ity" and "-ic." We investigated whether (a) derived word frequency influences error patterns, (b) stress and syllabification errors always co-occur, and (c)…
Operational Single-Frequency GPS Error Maps
NASA Astrophysics Data System (ADS)
Bishop, G. J.; Doherty, P.; Decker, D.; Delay, S.; Sexton, E.; Citrone, P.; Scro, K.; Wilkes, R.
2001-12-01
The Air Force Research Laboratory and Detachment 11, Space & Missile Systems Center have implemented a new system of graphical products that provide easy-to-visualize displays of space weather effects on theater-based radio systems operating through the ionosphere. This system, the Operational Space Environment Network Display (OpSEND), is now producing its first four products at the 55th Space Weather Squadron (55SWXS) in Colorado Springs. One of these products, the OpSEND Estimated GPS Single-Frequency Error Map, provides a current specification (nowcast) and one-hour forecast of estimated positioning errors that result from inaccurate ionospheric correction and GPS constellation geometry. Two-frequency GPS receivers can measure ionospheric range errors due to ionospheric total electron content (TEC), but single-frequency receivers depend on a built-in Ionospheric Correction Algorithm (ICA) for ionospheric error mitigation. The ICA, developed at AFRL in the 1970s, corrects for roughly half of the ionospheric error. In the OpSEND GPS Single-Frequency Error Map, position error due to the ionosphere is based on the differences between ionospheric estimates from the ICA and those generated by a more accurate global ionospheric specification from the PRISM model, updated with real-time TEC data from a global set of monitor stations. Details and examples of the OpSEND system and the GPS Error Map will be presented, as well as results of initial GPS Error Map validation studies comparing GPS error predictions and PRISM TEC specifications with observational data.
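The size of the single-frequency problem can be sketched from the standard first-order relation, range error ≈ 40.3·TEC/f². The 50% correction factor below reflects the "roughly half" figure quoted in the abstract; the TEC value is an illustrative assumption.

```python
def iono_range_error(tec, freq_hz):
    """First-order ionospheric group-delay range error in meters
    (tec in electrons/m^2; 1 TECU = 1e16 el/m^2)."""
    return 40.3 * tec / freq_hz ** 2

L1 = 1575.42e6                      # GPS L1 carrier frequency, Hz
raw = iono_range_error(50e16, L1)   # illustrative 50 TECU slant path, ~8 m
residual = 0.5 * raw                # "roughly half" remains after the ICA
```

The residual, not the raw delay, is what an error map of this kind must specify for single-frequency users.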
Frequency of pediatric medication administration errors and contributing factors.
Ozkan, Suzan; Kocaman, Gulseren; Ozturk, Candan; Seren, Seyda
2011-01-01
This study examined the frequency of pediatric medication administration errors and contributing factors. This research used the undisguised observation method and Critical Incident Technique. Errors and contributing factors were classified through the Organizational Accident Model. Errors were made in 36.5% of the 2344 doses that were observed. The most frequent errors were those associated with administration at the wrong time. According to the results of this study, errors arise from problems within the system.
The Relative Frequency of Spanish Pronunciation Errors.
ERIC Educational Resources Information Center
Hammerly, Hector
Types of hierarchies of pronunciation difficulty are discussed, and a hierarchy based on contrastive analysis plus informal observation is proposed. This hierarchy is less one of initial difficulty than of error persistence. One feature of this hierarchy is that, because of lesser learner awareness and very limited functional load, errors…
Assessment of Errors in AMSR-E Derived Soil Moisture
NASA Astrophysics Data System (ADS)
Champagne, C.; McNairn, H.; Berg, A.; de Jeu, R. A.
2009-05-01
Soil moisture derived from passive microwave satellites provides information at a coarse spatial scale, but with temporally frequent, global coverage that can be used for monitoring applications over agricultural regions. Passive microwave satellites measure surface brightness temperature, which at low frequencies is largely a function of vegetation water content (directly related to the vegetation optical depth), surface temperature, and surface soil moisture. Retrieval algorithms for global soil moisture data sets by necessity require limited site-specific information to derive these parameters and as such may show variations in local accuracy. The objective of this study is to examine the errors in passive microwave soil moisture data over agricultural sites in Canada to provide guidelines on data quality assessment for using these data sets in monitoring applications. Global gridded soil moisture was acquired from the AMSR-E satellite using the Land Parameter Retrieval Model, LPRM (Owe et al., 2008). The LPRM model derives surface soil moisture through an iterative optimization procedure, using a polarization difference index to estimate vegetation optical depth and surface dielectric constant at frequencies of 6.9 and 10.7 GHz. The LPRM model requires no a priori information on surface conditions, but retrieval errors are expected to increase as the amount of open water and dense vegetation within each pixel increases (Owe et al., 2008). Satellite-derived LPRM soil moisture values were used to assess changes in soil moisture retrieval accuracy over the 2007 growing season for a largely agricultural site near Guelph (Ontario), Canada. Accuracy was determined by validating LPRM soil moisture against a network of 16 in-situ monitoring sites distributed at the pixel scale for AMSR-E. Changes in squared error and pairwise correlation coefficient between satellite and in-situ surface soil moisture were assessed against changes in satellite orbit and…
Compensation Low-Frequency Errors in TH-1 Satellite
NASA Astrophysics Data System (ADS)
Wang, Jianrong; Wang, Renxiang; Hu, Xin
2016-06-01
The topographic mapping products at 1:50,000 scale can be realized using satellite photogrammetry without ground control points (GCPs), which requires high accuracy of the exterior orientation elements. Usually, the attitudes of the exterior orientation elements are obtained from the attitude determination system on the satellite. Based on theoretical analysis and practice, the attitude determination system exhibits not only high-frequency errors, but also low-frequency errors related to the latitude of the satellite orbit and to time. The low-frequency errors affect the location accuracy without GCPs, especially the horizontal accuracy. For the SPOT5 satellite, a latitudinal model was proposed to correct attitudes using data from approximately 20 calibration sites, and the location accuracy was improved. Low-frequency errors are also found in the Tian Hui 1 (TH-1) satellite. A method of compensating low-frequency errors is therefore proposed in the ground image processing of TH-1, which can detect and compensate the low-frequency errors automatically without using GCPs. This paper deals with the low-frequency errors in TH-1 as follows. First, an analysis of the low-frequency errors of the attitude determination system is performed. Second, compensation models are proposed for bundle adjustment. Finally, verification is carried out using TH-1 data. The test results show that the low-frequency errors of the attitude determination system can be compensated during bundle adjustment, which improves the location accuracy without GCPs and has played an important role in the consistency of global location accuracy.
Antenna pointing systematic error model derivations
NASA Technical Reports Server (NTRS)
Guiar, C. N.; Lansing, F. L.; Riggs, R.
1987-01-01
The pointing model used to represent and correct systematic errors for the Deep Space Network (DSN) antennas is presented. Analytical expressions are given in both azimuth-elevation (az-el) and hour angle-declination (ha-dec) mounts for RF axis collimation error, encoder offset, nonorthogonality of axes, axis plane tilt, and structural flexure due to gravity loading. While the residual pointing errors (rms) after correction appear to be within the ten percent of the half-power beamwidth criterion commonly set for good pointing accuracy, the DSN has embarked on an extensive pointing improvement and modeling program aiming toward an order of magnitude higher pointing precision.
Derivational Morphology and Base Morpheme Frequency
ERIC Educational Resources Information Center
Ford, M. A.; Davis, M. H.; Marslen-Wilson, W. D.
2010-01-01
Morpheme frequency effects for derived words (e.g. an influence of the frequency of the base "dark" on responses to "darkness") have been interpreted as evidence of morphemic representation. However, it has been suggested that most derived words would not show these effects if family size (a type frequency count claimed to reflect semantic…
Analysis on optical heterodyne frequency error of full-field heterodyne interferometer
NASA Astrophysics Data System (ADS)
Li, Yang; Zhang, Wenxi; Wu, Zhou; Lv, Xiaoyu; Kong, Xinxin; Guo, Xiaoli
2017-06-01
Full-field heterodyne interferometric measurement is increasingly practical because low-frequency heterodyne acousto-optic modulators can replace complex electro-mechanical scanning devices. Since standard matrix detectors such as CCD and CMOS cameras can be used in a heterodyne interferometer, the optical element surface can be acquired directly by synchronously detecting the phase of the signal received at each pixel. Instead of the traditional four-step phase-shifting calculation, Fourier spectral analysis is used for phase extraction, which brings lower sensitivity to sources of uncertainty and higher measurement accuracy. In this paper, two types of full-field heterodyne interferometer are described, and their advantages and disadvantages are specified. A heterodyne interferometer has to combine two beams of different frequencies to produce interference, which introduces a variety of optical heterodyne frequency errors; frequency mixing error and beat frequency error are two unavoidable kinds. In this paper, the effects of frequency mixing error on surface measurement are derived, and the relationship between phase extraction accuracy and the errors is calculated. The tolerances of the extinction ratio of the polarization splitting prism and the signal-to-noise ratio of stray light are given. The phase extraction error caused by beat frequency shifting in the Fourier analysis is derived and calculated. We also propose an improved phase extraction method based on spectrum correction: an amplitude-ratio spectrum correction algorithm with a Hanning window is used to correct the heterodyne signal phase extraction. Simulation results show that this method can effectively suppress the degradation of phase extraction caused by beat frequency error and reduce the measurement uncertainty of a full-field heterodyne interferometer.
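The Fourier-spectral phase extraction mentioned above amounts, in its simplest form, to reading the phase of the DFT bin at the beat frequency. A minimal per-pixel sketch follows; the sampling numbers are illustrative, and the integer-period assumption it relies on is exactly what a spectrum-correction method of the kind proposed relaxes.

```python
import numpy as np

def extract_phase(samples, fs, f_beat):
    """Phase of a heterodyne beat signal via the DFT bin nearest f_beat.
    Assumes an integer number of beat periods in the record (otherwise
    windowing / spectrum correction is needed)."""
    n = len(samples)
    k = int(round(f_beat * n / fs))       # bin index of the beat tone
    return np.angle(np.fft.rfft(samples)[k])

fs, f_beat, n = 100e3, 5e3, 1000          # 50 beat periods in the record
t = np.arange(n) / fs
true_phase = 0.7
sig = np.cos(2 * np.pi * f_beat * t + true_phase)
est = extract_phase(sig, fs, f_beat)
```

When the beat frequency drifts off-bin, the simple bin read-out biases the phase, which motivates the amplitude-ratio spectrum correction.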
Preventing medication errors in community pharmacy: frequency and seriousness of medication errors
Knudsen, P; Herborg, H; Mortensen, A R; Knudsen, M; Hellebek, A
2007-01-01
Background Medication errors are a widespread problem which can, in the worst case, cause harm to patients. Errors can be corrected if documented and evaluated as a part of quality improvement. The Danish community pharmacies are committed to recording prescription corrections, dispensing errors and dispensing near misses. This study investigated the frequency and seriousness of these errors. Methods 40 randomly selected Danish community pharmacies collected data for a defined period. The data included four types of written report of incidents, three of which already existed at the pharmacies: prescription correction, dispensing near misses and dispensing errors. Data for the fourth type of report, on adverse drug events, were collected through a web‐based reporting system piloted for the project. Results There were 976 cases of prescription corrections, 229 cases of near misses, 203 cases of dispensing errors and 198 cases of adverse drug events. The error rate was 23/10 000 prescriptions for prescription corrections, 1/10 000 for dispensing errors and 2/10 000 for near misses. The errors that reached the patients were pooled for separate analysis. Most of these errors, and the potentially most serious ones, occurred in the transcription stage of the dispensing process. Conclusion Prescribing errors were the most frequent type of error reported. Errors that reached the patients were not frequent, but most of them were potentially harmful, and the absolute number of medication errors was high, as provision of medicine is a frequent event in primary care in Denmark. Patient safety could be further improved by optimising the opportunity to learn from the incidents described. PMID:17693678
Dependence of error sensitivity of frequency on bias voltage in force-balanced micro accelerometer
NASA Astrophysics Data System (ADS)
Chen, Lili; Zhou, Wu
2013-06-01
To predict more precisely the frequency of a force-balanced micro accelerometer with different bias voltages, the effects of bias voltages on the error sensitivity of frequency are studied. The resonance frequency of the accelerometer under closed-loop control is derived according to its operating principle, and its error sensitivity is derived and analyzed under over-etched structure conditions according to the characteristics of Deep Reactive Ion Etching (DRIE). Based on the theoretical results, a micro accelerometer was fabricated and tested to study the influences of the AC bias voltage and the DC bias voltage on sensitivity, respectively. Experimental results indicate that the relative errors between test data and theoretical data are less than 7%, and the fluctuation of the error sensitivity over the range of voltage adjustment is less than 0.01 μm-1. It is concluded that the error sensitivity with the designed structure parameters, circuit and process error can be used to predict the frequency of the accelerometer without needing to consider the influence of bias voltage.
Digital frequency error detectors for OQPSK satellite modems
NASA Astrophysics Data System (ADS)
Ahmad, J.; Jeans, T. G.; Evans, B. G.
1991-09-01
Two algorithms for frequency error detection in OQPSK satellite modems are presented. The results of computer simulations in respect of acquisition and noise performance are given. These algorithms are suitable for DSP implementation and are applicable to mobile satellite systems in which significant Doppler shift is experienced.
Frequency analysis of nonlinear oscillations via the global error minimization
NASA Astrophysics Data System (ADS)
Kalami Yazdi, M.; Hosseini Tehrani, P.
2016-06-01
The capacity and effectiveness of a modified variational approach, namely global error minimization (GEM), is illustrated in this study. For this purpose, the free oscillations of a rod rocking on a cylindrical surface and the Duffing-harmonic oscillator are treated. In order to validate and exhibit the merit of the method, the obtained results are compared with both the exact frequency and the outcomes of other well-known analytical methods. The comparison reveals that the first-order approximation leads to an acceptable relative error, especially for large initial conditions. The procedure can be promisingly applied to conservative nonlinear problems.
Pyranometer frequency response measurement and general correction scheme for time response error
Shen, B.; Robinson, A.M. )
1992-10-01
A simple sinusoidal function radiation generator was designed to examine the frequency response of a Kipp and Zonen CM-5 pyranometer in the frequency range 0.014-0.073 Hz. Applying the thermal model of the pyranometer and its two time constants, which were acquired from a step response measurement, the authors obtained the theoretical frequency response of the pyranometer. Analysis of the experimental results determined an unknown constant in the relationship derived between the pyranometer input and output. This relationship was then used to correct the time response error of the pyranometer subject to an arbitrary radiation signal.
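The correction scheme described, inverting a two-time-constant thermal model in the frequency domain, can be sketched as a deconvolution. The lag model H(f) = 1/((1+j2πfτ₁)(1+j2πfτ₂)) and the time constants below are assumptions for illustration, not the CM-5's measured values.

```python
import numpy as np

def correct_response(measured, fs, tau1, tau2):
    """Deconvolve a two-time-constant response
    H(f) = 1 / ((1 + j*2*pi*f*tau1) * (1 + j*2*pi*f*tau2))
    from a sampled irradiance record (assumes a periodic record)."""
    n = len(measured)
    f = np.fft.rfftfreq(n, d=1.0 / fs)
    H = 1.0 / ((1 + 2j * np.pi * f * tau1) * (1 + 2j * np.pi * f * tau2))
    return np.fft.irfft(np.fft.rfft(measured) / H, n)
```

Dividing by H undoes the amplitude attenuation and phase lag at each frequency; in practice the record should be detrended and the division regularized at high frequencies, where H is tiny and noise is amplified.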
NASA Astrophysics Data System (ADS)
Li, Guofa; Huang, Wei; Zheng, Hao; Zhang, Baoqing
2016-02-01
The spectral ratio method (SRM) is widely used to estimate the quality factor Q via linear regression of seismic attenuation under the assumption of a constant Q. However, an estimation error is introduced when this assumption is violated. For a frequency-dependent Q described by a power-law function, we derived the analytical expression of the estimation error as a function of the power-law exponent γ and the ratio σ of the bandwidth to the central frequency. Based on the theoretical analysis, we found that the estimation errors are mainly dominated by the exponent γ and less affected by the ratio σ. This implies that the accuracy of the Q estimate can hardly be improved by adjusting the width and range of the frequency band. Hence, we propose a two-parameter regression method to estimate the frequency-dependent Q from nonlinear seismic attenuation. The proposed method was tested using direct waves acquired by a near-surface cross-hole survey, and its reliability was evaluated in comparison with the result of the SRM.
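For reference, the constant-Q spectral ratio method that this work generalizes fits a line to the log spectral ratio, ln(A₂/A₁) = −πfΔt/Q + c. A minimal sketch with synthetic amplitudes (all numbers illustrative):

```python
import numpy as np

def srm_q(freqs, amp1, amp2, dt):
    """Constant-Q spectral ratio method: fit ln(A2/A1) = -pi*f*dt/Q + c
    and recover Q from the slope."""
    slope = np.polyfit(freqs, np.log(amp2 / amp1), 1)[0]
    return -np.pi * dt / slope

# synthetic spectra with a genuinely constant Q over a 0.1 s travel-time gap
f = np.linspace(10.0, 80.0, 50)
Q_true, dt = 80.0, 0.1
a1 = np.ones_like(f)
a2 = np.exp(-np.pi * f * dt / Q_true)
```

If instead Q = Q₀fᵞ, the log ratio becomes −πf^(1−γ)Δt/Q₀, which is curved in f, so the fitted slope (and hence Q) depends on where the band sits; that is the bias the abstract characterizes.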
NASA Technical Reports Server (NTRS)
Moore, H. J.; Wu, S. C.
1973-01-01
The effect of reading error was examined for two hypothetical slope frequency distributions and two slope frequency distributions from actual lunar data, in order to ensure that these errors do not cause excessive overestimates of the algebraic standard deviations of the slope frequency distributions. The errors introduced are insignificant when the reading error is small and the slope length is large. A method for correcting the errors in slope frequency distributions is presented and applied to 11 distributions obtained from Apollo 15, 16, and 17 panoramic camera photographs and Apollo 16 metric camera photographs.
NASA Astrophysics Data System (ADS)
Kim, B. C.; Tinin, M. V.
The second-order Rytov approximation has been used to determine ionospheric corrections for the phase path up to third order. We show the transition of the derived expressions to previous results obtained within the ray approximation using the second-order approximation of perturbation theory by solving the eikonal equation. The resulting equation for the phase path is used to determine the residual ionospheric first-, second- and third-order errors of a dual-frequency navigation system, with diffraction effects taken into account. Formulas are derived for the biases and variances of these errors, and these formulas are analyzed and modeled for a turbulent ionosphere. The modeling results show that the third-order error that is determined by random irregularities can be dominant in the residual errors. In particular, the role of random irregularities is enhanced for small elevation angles. Furthermore, in the case of small angles the role of diffraction effects increases. It is pointed out that a need to pass on to diffraction formulas arises when the Fresnel radius exceeds the inner scale of turbulence.
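The dual-frequency correction whose residuals are analyzed above removes only the first-order 1/f² term. The sketch below shows the standard ionosphere-free pseudorange combination at the GPS L1/L2 frequencies; the higher-order and diffraction terms that this paper studies are exactly what remain after it. The range and TEC values are illustrative.

```python
def iono_free(p1, p2, f1=1575.42e6, f2=1227.60e6):
    """Ionosphere-free pseudorange combination
    P_IF = (f1^2*P1 - f2^2*P2) / (f1^2 - f2^2)."""
    g = f1 ** 2 / (f1 ** 2 - f2 ** 2)
    return g * p1 - (g - 1.0) * p2

# the first-order term (40.3*TEC/f^2) is removed exactly by the combination
R, tec = 2.2e7, 50e16                    # true range (m), slant TEC (el/m^2)
p1 = R + 40.3 * tec / 1575.42e6 ** 2
p2 = R + 40.3 * tec / 1227.60e6 ** 2
```

Second- and third-order terms scale as 1/f³ and 1/f⁴ and so survive this combination, which is why the residual-error bounds derived in the paper matter for precise positioning.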
NASA Astrophysics Data System (ADS)
Liu, Wei; Li, Chao; Sun, Zhao-Yang; Zhao, Yu; Wu, Shi-You; Fang, Guang-You
2016-08-01
In the terahertz (THz) band, the inherent shake of the human body may strongly impair the image quality of a beam-scanning single-frequency holography system for personnel screening. To realize accurate shake compensation in imaging processing, it is quite necessary to develop a high-precision measurement system. However, in many cases, different parts of a human body may shake to different extents, which greatly increases the difficulty of reasonably measuring body shake errors for image reconstruction. In this paper, a body shake error compensation algorithm based on the raw data is proposed. To analyze the effect of body shake on the raw data, a model of the echoed signal is rebuilt, considering both the beam scanning mode and the body shake. According to the rebuilt signal model, we derive a body shake error estimation method to compensate for the phase error. Simulations on the reconstruction of point targets with shake errors and proof-of-principle experiments on the human body in the 0.2-THz band are both performed to confirm the effectiveness of the proposed body shake compensation algorithm. Project supported by the Knowledge Innovation Program of the Chinese Academy of Sciences (Grant No. YYYJ-1123).
Systematic Error Mitigation in DORIS Derived Geocenter Motion
NASA Astrophysics Data System (ADS)
Couhert, A.; Mercier, F.; Moyard, J.; Biancale, R.
2016-12-01
The relative orbit-centering stability between the different Jason POD analysis centers (JPL, GSFC, ESOC and CNES) is usually assessed by comparing orbits in the North-South direction (Z-component of the terrestrial reference frame). Any miscentering of the orbit in this direction is of primary interest, since it significantly impacts studies of global and regional Mean Sea Level (Global MSL error = -0.16 x DZ, where DZ is the mean orbit error in Z). The main contribution to this miscentering effect on the orbits comes from the tracking measurements. Indeed, even though satellites ideally orbit around the center of mass of the total Earth system (CM or geocenter), the strength of the tie to the reference network origin depends on the tracking measurement used in the process of orbit determination: 100% for SLR-only orbits, 75% for DORIS-only orbits, and 30% for GPS-derived orbits (depending on the ambiguity fixing strategy, and relative to the reference given by the GPS orbits/clocks solution). The well-known seasonal signature in Z (~5 mm) observed between DORIS/SLR and GPS-based orbits may in part be due to the un-modeled non-tidal component of the geocenter motion, as there is as yet no consensus model for non-tidal geocenter motion. Thus, we will examine strategies to mitigate sensitivity to miscentering effects on the orbit coming from the DORIS tracking measurements; in this way a model of the motion of the CF with respect to the CM will not be needed. Estimations of the geocenter motion have already been successfully achieved using the SLR network, but the DORIS-network-derived geocenter motion has been reported to be noisier, with larger systematic errors. Yet, due to the more numerous and more uniformly distributed DORIS stations across the globe, it could have the potential to yield competitive results once the systematic errors are identified and mitigated, as will be shown in this paper. The obtained orbit parameterization will be tested on…
PULSAR TIMING ERRORS FROM ASYNCHRONOUS MULTI-FREQUENCY SAMPLING OF DISPERSION MEASURE VARIATIONS
Lam, M. T.; Cordes, J. M.; Chatterjee, S.; Dolch, T.
2015-03-10
Free electrons in the interstellar medium cause frequency-dependent delays in pulse arrival times due to both scattering and dispersion. Multi-frequency measurements are used to estimate and remove dispersion delays. In this paper, we focus on the effect of any non-simultaneity of multi-frequency observations on dispersive delay estimation and removal. Interstellar density variations combined with changes in the line of sight from pulsar and observer motions cause dispersion measure (DM) variations with an approximately power-law power spectrum, augmented in some cases by linear trends. We simulate time series, estimate the magnitude and statistical properties of timing errors that result from non-simultaneous observations, and derive prescriptions for data acquisition that are needed in order to achieve a specified timing precision. For nearby, highly stable pulsars, measurements need to be simultaneous to within about one day in order for the timing error from asynchronous DM correction to be less than about 10 ns. We discuss how timing precision improves when increasing the number of dual-frequency observations used in DM estimation for a given epoch. For a Kolmogorov wavenumber spectrum, we find about a factor of two improvement in precision timing when increasing from two to three observations but diminishing returns thereafter.
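The mechanism can be sketched with the cold-plasma delay t = K·DM/f² (f in MHz, K ≈ 4.149×10³ s MHz² pc⁻¹ cm³): if the DM drifts between the two scans of a nominally dual-frequency epoch, the inferred DM, and hence the applied correction, is biased. The drift size and band choices below are illustrative assumptions, not the paper's simulation parameters.

```python
K = 4.149e3   # dispersion constant, s MHz^2 / (pc cm^-3)

def delay(dm, f_mhz):
    """Cold-plasma dispersive delay in seconds."""
    return K * dm / f_mhz ** 2

def dm_from_toas(t1, t2, f1, f2):
    """DM inferred from the arrival-time difference at two frequencies."""
    return (t1 - t2) / (K * (1.0 / f1 ** 2 - 1.0 / f2 ** 2))

# asynchronous scans: the DM has drifted by ddm when the low band is observed
f1, f2 = 820.0, 1400.0               # MHz
dm0, ddm = 30.0, 1e-4                # pc/cm^3 (drift size is illustrative)
t1 = delay(dm0 + ddm, f1)            # later, low-frequency scan
t2 = delay(dm0, f2)                  # earlier, high-frequency scan
dm_est = dm_from_toas(t1, t2, f1, f2)
# over-correction applied at f2, in nanoseconds
err_ns = (delay(dm_est, f2) - delay(dm0, f2)) * 1e9
```

Because the low band weights the DM fit heavily, even a small inter-scan drift leaks into the corrected arrival time, which is why near-simultaneous sampling is required for ~10 ns timing.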
Jason-2 systematic error analysis in the GPS derived orbits
NASA Astrophysics Data System (ADS)
Melachroinos, S.; Lemoine, F. G.; Zelensky, N. P.; Rowlands, D. D.; Luthcke, S. B.; Chinn, D. S.
2011-12-01
Several results related to global or regional sea level changes still too often rely on the assumption that orbit errors coming from the adoption of station coordinates can be neglected in the total error budget (Cerri et al. 2010). In particular, instantaneous crust-fixed coordinates are obtained by adding the geophysical high-frequency variations to the linear ITRF model. In principle, geocenter motion should also be included in this computation, in order to reference these coordinates to the center of mass of the whole Earth. This correction is currently not applied when computing GDR orbits. Cerri et al. (2010) performed an analysis of systematic errors common to all coordinates along the North/South direction, as this type of bias, also known as a Z-shift, has a clear impact on MSL estimates due to the unequal distribution of continental surface between the northern and southern hemispheres. The goal of this paper is to specifically study the main source of errors, which comes from the current imprecision in the Z-axis realization of the frame. We focus here on the time variability of this Z-shift, which we can decompose into a drift and a periodic component due to the presumably omitted geocenter motion. A series of Jason-2 GPS-only orbits have been computed at NASA GSFC, using both IGS05 and IGS08. These orbits have been shown to agree radially at less than 1 cm RMS with our SLR/DORIS std0905 and std1007 reduced-dynamic orbits and with orbits produced by other analysis centers (Melachroinos et al. 2011). Our GPS-only Jason-2 orbit accuracy is assessed using a number of tests, including analysis of independent SLR and altimeter crossover residuals, orbit overlap differences, and direct comparison to orbits generated at GSFC using SLR and DORIS tracking and to orbits generated externally at other centers. Tests based on SLR-crossover residuals provide the best performance indicator for independent validation of the NASA/GSFC GPS-only reduced-dynamic orbits. Reduced…
Mitchell Scott, Belinda; Considine, Julie; Botti, Mari
2014-11-01
Medication safety is of increasing importance, and understanding the nature and frequency of medication errors in the Emergency Department (ED) will assist in tailoring interventions that make patient care safer. The challenge with the literature to date is the wide variability in the frequency of errors reported and the reliance on the incident-reporting practices of busy ED staff. A prospective, exploratory descriptive design using point prevalence surveys was used to establish the frequency of observed medication errors in the ED. In addition, data related to contextual factors such as ED patients, staffing and workload were also collected during the point prevalence surveys, to enable analysis of relationships between the frequency and nature of specific error types and patient and ED characteristics at the time of data collection. A total of 172 patients were included in the study, 125 of whom had a medication chart. The prevalence of medication errors in the ED studied was 41.2% for failure to apply patient ID bands, 12.2% for failure to document allergy status and 38.4% for errors of omission. The proportion of older patients in the ED did not affect the frequency of medication errors. There was a relationship between high numbers of ATS 1, 2 and 3 patients (indicating high levels of clinical urgency) and increased rates of failure to document allergy status. Medication errors were affected by ED occupancy: when cubicles in the ED were over 50% occupied, medication errors occurred more frequently. ED staffing also affected the frequency of medication errors: there was an increase in failure to apply ID bands and in errors of omission when there were unfilled nursing deficits, and lower levels of senior medical staff were associated with increased errors of omission. Medication errors related to patient identification, allergy status and medication omissions occur more frequently in the ED when the ED is busy, has sicker patients and when the staffing is…
Fehlerhaeufigkeit im Englischunterricht (Error Frequency in English Teaching)
ERIC Educational Resources Information Center
Heyder, Egon
1976-01-01
Research conducted at a German teachers' college revealed that in English instruction at a "Comprehensive" School, equal amounts of corrective measures were devoted to each of the various types of errors. It is recommended that differentiation be made between the importance of the categories of errors. (Text is in German.) (IFS/WGA)
Single trial time-frequency domain analysis of error processing in post-traumatic stress disorder.
Clemans, Zachary A; El-Baz, Ayman S; Hollifield, Michael; Sokhadze, Estate M
2012-09-13
Error processing studies in psychology and psychiatry are relatively common. Event-related potentials (ERPs) are often used as measures of error processing, two such response-locked ERPs being the error-related negativity (ERN) and the error-related positivity (Pe). The ERN and Pe occur following a committed error in reaction time tasks as low frequency (4-8 Hz) electroencephalographic (EEG) oscillations registered at the midline fronto-central sites. We created an alternative method for analyzing error processing using time-frequency analysis in the form of a wavelet transform. A study was conducted in which subjects with PTSD and healthy controls completed a forced-choice task. Single trial EEG data from errors in the task were processed using a continuous wavelet transform. Coefficients from the transform that corresponded to the theta range were averaged to isolate a theta waveform in the time-frequency domain. Measures called the time-frequency ERN and Pe were obtained from these waveforms for five different channels and then averaged to obtain a single time-frequency ERN and Pe for each error trial. A comparison of the amplitude and latency of the time-frequency ERN and Pe between the PTSD and control groups was performed. A significant group effect was found on the amplitude of both measures. These results indicate that the developed single trial time-frequency error analysis method is suitable for examining error processing in PTSD and possibly other psychiatric disorders.
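The theta-band averaging step described above can be sketched with a hand-rolled complex Morlet transform (numpy only); the sampling rate, burst timing and wavelet width below are invented for illustration and are not the study's recording parameters:

```python
import numpy as np

def morlet_power(signal, fs, freqs, n_cycles=5.0):
    """Time-frequency power via convolution with complex Morlet wavelets.

    Returns an array of shape (len(freqs), len(signal)).
    """
    powers = []
    for f in freqs:
        sigma_t = n_cycles / (2 * np.pi * f)            # wavelet temporal width
        t = np.arange(-4 * sigma_t, 4 * sigma_t, 1 / fs)
        wavelet = np.exp(2j * np.pi * f * t) * np.exp(-t**2 / (2 * sigma_t**2))
        wavelet /= np.sqrt(np.sum(np.abs(wavelet)**2))  # unit energy
        conv = np.convolve(signal, wavelet, mode="same")
        powers.append(np.abs(conv)**2)
    return np.array(powers)

# Synthetic "error trial": a 6 Hz (theta) burst on top of weak noise
fs = 250
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(0)
eeg = 0.1 * rng.standard_normal(t.size)
burst = (t > 0.8) & (t < 1.2)
eeg[burst] += np.sin(2 * np.pi * 6 * t[burst])

theta = morlet_power(eeg, fs, freqs=np.arange(4, 9))    # 4-8 Hz coefficients
theta_avg = theta.mean(axis=0)                          # average over theta band
peak_idx = np.argmax(theta_avg)
print(t[peak_idx])  # peak theta power falls inside the burst window
```

Amplitude and latency measures analogous to the time-frequency ERN/Pe would then be read off this band-averaged waveform.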
Takada, Kazumasa; Satoh, Shin-ichi
2006-02-01
We describe a method for measuring the phase error distribution of an arrayed waveguide grating (AWG) in the frequency domain when the free spectral range (FSR) of the AWG is so wide that it cannot be covered by one tunable laser source. Our method is to sweep the light frequency in the neighborhoods of two successive peaks in the AWG transmission spectrum by using two laser sources with different tuning ranges. The method was confirmed experimentally by applying it to a 160 GHz spaced AWG with a FSR of 11 THz. The variations in the derived phase error data were very small at +/-0.02 rad around the central arrayed waveguides.
A Dual Frequency Carrier Phase Error Difference Checking Algorithm for the GNSS Compass.
Liu, Shuo; Zhang, Lei; Li, Jian
2016-11-24
The performance of the Global Navigation Satellite System (GNSS) compass is related to the quality of carrier phase measurement. How to process the carrier phase error properly is important to improve the GNSS compass accuracy. In this work, we propose a dual frequency carrier phase error difference checking algorithm for the GNSS compass. The algorithm aims at eliminating large carrier phase error in dual frequency double differenced carrier phase measurement according to the error difference between two frequencies. The advantage of the proposed algorithm is that it does not need additional environment information and has a good performance on multiple large errors compared with previous research. The core of the proposed algorithm is removing the geographical distance from the dual frequency carrier phase measurement, then the carrier phase error is separated and detectable. We generate the Double Differenced Geometry-Free (DDGF) measurement according to the characteristic that the different frequency carrier phase measurements contain the same geometrical distance. Then, we propose the DDGF detection to detect the large carrier phase error difference between two frequencies. The theoretical performance of the proposed DDGF detection is analyzed. An open sky test, a manmade multipath test and an urban vehicle test were carried out to evaluate the performance of the proposed algorithm. The result shows that the proposed DDGF detection is able to detect large error in dual frequency carrier phase measurement by checking the error difference between two frequencies. After the DDGF detection, the accuracy of the baseline vector is improved in the GNSS compass.
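The geometry-free idea behind the DDGF measurement can be sketched in a few lines; integer ambiguities, noise models and the detection threshold below are simplified or invented for illustration and do not reproduce the paper's algorithm:

```python
# GPS L1/L2 wavelengths (c / f)
c = 299_792_458.0
f1, f2 = 1_575.42e6, 1_227.60e6
lam1, lam2 = c / f1, c / f2

def ddgf(dd_phi1_cycles, dd_phi2_cycles):
    """Double-Differenced Geometry-Free combination (metres).

    Both double-differenced phases contain the same geometric range, so
    converting to metres and differencing cancels it; the residual is then
    dominated by any large carrier phase error on either frequency.
    """
    return lam1 * dd_phi1_cycles - lam2 * dd_phi2_cycles

# Synthetic example: same geometry (12.34 m) on both frequencies,
# plus a gross half-cycle error injected on L1 only (ambiguities omitted)
geom = 12.34
dd1 = geom / lam1 + 0.5          # L1 DD phase carrying a large error
dd2 = geom / lam2                # clean L2 DD phase
residual = ddgf(dd1, dd2)        # geometry cancels, the error is exposed
print(abs(residual) > 0.05)      # -> True: residual flags the large error
```

The point of the combination is exactly this cancellation: geometry drops out, so a threshold test on the residual can detect a large error difference between the two frequencies.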
Boubchir, Larbi; Touati, Youcef; Daachi, Boubaker; Chérif, Arab Ali
2015-08-01
In thought-based steering of robots, error potentials (ErrP) can appear when the action resulting from the brain-machine interface (BMI) classifier/controller does not correspond to the user's thought. Using Steady State Visual Evoked Potentials (SSVEP) techniques, ErrP, which appear when a classification error occurs, are not easily recognizable by examining only the temporal or frequency characteristics of EEG signals. A supplementary classification process is therefore needed to identify them in order to stop the course of the action and back up to a recovery state. This paper presents a set of time-frequency (t-f) features for the detection and classification of EEG ErrP arising from misclassifications observed by a user exploiting a non-invasive BMI for robot control in the task space. The proposed features are able to characterize and detect ErrP activities in the t-f domain. These features are derived from the information embedded in the t-f representation of EEG signals, and include the Instantaneous Frequency (IF), t-f information complexity, SVD information, energy concentration and sub-bands' energies. The experimental results on real EEG data show that the use of the proposed t-f features for detecting and classifying EEG ErrP achieved an overall classification accuracy of up to 97% for 50 EEG segments using a 2-class SVM classifier.
NASA Astrophysics Data System (ADS)
Wang, Jia; Hou, Xi; Wan, Yongjian; Shi, Chunyan; Zhong, Xianyun
2016-09-01
Extreme optical fabrication projects such as EUV and X-ray optic systems, which are representative of today's advanced optical manufacturing technology level, have special requirements for optical surface quality. In synchrotron radiation (SR) beamlines, mirrors of high shape accuracy are always used in grazing incidence. In nanolithography systems, middle spatial frequency errors always lead to small-angle scattering or flare that reduces the contrast of the image. The slope error is defined over a given horizontal length: the increase or decrease in form error at the end point relative to the starting point is measured. The quality of reflective optical elements can be described by their deviation from the ideal shape at different spatial frequencies. Usually one distinguishes between the figure error, the low spatial frequency part ranging from the aperture length down to 1 mm, and the mid and high spatial frequency parts from 1 mm to 1 μm and from 1 μm to some 10 nm, respectively. Firstly, this paper discusses the relationship between the slope error and the middle spatial frequency error, both of which describe the optical surface error along the form profile. Then, experimental research is conducted on a high-gradient precision asphere with a pitch tool, with the aim of restraining the middle spatial frequency error.
A Study of the Frequency and Communicative Effects of Errors in Spanish
ERIC Educational Resources Information Center
Guntermann, Gail
1978-01-01
A study conducted in El Salvador was designed to: determine which kinds of errors may be most frequently committed by learners who have reached a basic level of proficiency; discover which high-frequency errors most impede comprehension; and develop a procedure for eliciting evaluational reactions to errors from native listeners. (SW)
Endodontic Procedural Errors: Frequency, Type of Error, and the Most Frequently Treated Tooth.
Yousuf, Waqas; Khan, Moiz; Mehdi, Hasan
2015-01-01
Introduction. The aim of this study is to determine the most common endodontically treated tooth and the most common error produced during treatment and to note the association of particular errors with particular teeth. Material and Methods. Periapical radiographs were taken of all the included teeth and were stored and assessed using DIGORA Optime. Teeth in each group were evaluated for presence or absence of procedural errors (i.e., overfill, underfill, ledge formation, perforations, apical transportation, and/or instrument separation) and the most frequent tooth to undergo endodontic treatment was also noted. Results. A total of 1748 root canal treated teeth were assessed, out of which 574 (32.8%) contained a procedural error. Out of these 397 (22.7%) were overfilled, 155 (8.9%) were underfilled, 16 (0.9%) had instrument separation, and 7 (0.4%) had apical transportation. The most frequently treated tooth was right permanent mandibular first molar (11.3%). The least commonly treated teeth were the permanent mandibular third molars (0.1%). Conclusion. Practitioners should show greater care to maintain accuracy of the working length throughout the procedure, as errors in length accounted for the vast majority of errors and special care should be taken when working on molars.
Bounding higher-order ionosphere errors for the dual-frequency GPS user
NASA Astrophysics Data System (ADS)
Datta-Barua, S.; Walter, T.; Blanch, J.; Enge, P.
2008-10-01
Civil signals at L2 and L5 frequencies herald a new phase of Global Positioning System (GPS) performance. Dual-frequency users typically assume a first-order approximation of the ionosphere index of refraction, combining the GPS observables to eliminate most of the ranging delay, on the order of meters, introduced into the pseudoranges. This paper estimates the higher-order group and phase errors that occur from assuming the ordinary first-order dual-frequency ionosphere model using data from the Federal Aviation Administration's Wide Area Augmentation System (WAAS) network on a solar maximum quiet day and an extremely stormy day postsolar maximum. We find that during active periods, when ionospheric storms may introduce slant range delays at L1 as high as 100 m, the higher-order group errors in the L1-L2 or L1-L5 dual-frequency combination can be tens of centimeters. The group and phase errors are no longer equal and opposite, so these errors accumulate in carrier smoothing of the dual-frequency code observable. We show the errors in the carrier-smoothed code are due to higher-order group errors and, to a lesser extent, to higher-order phase rate errors. For many applications, this residual error is sufficiently small as to be neglected. However, such errors can impact geodetic applications as well as the error budgets of GPS Augmentation Systems providing Category III precision approach.
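The first-order dual-frequency combination the abstract refers to can be illustrated numerically; this is a generic textbook sketch with hypothetical range and delay values, not the WAAS processing chain:

```python
f1, f2 = 1_575.42e6, 1_227.60e6   # GPS L1 and L2 carrier frequencies (Hz)

def iono_free(p1, p2):
    """First-order ionosphere-free pseudorange combination.

    The first-order ionospheric group delay scales as 1/f**2, so the
    combination (f1^2*P1 - f2^2*P2)/(f1^2 - f2^2) cancels it exactly,
    leaving only the higher-order terms discussed in the abstract.
    """
    return (f1**2 * p1 - f2**2 * p2) / (f1**2 - f2**2)

# Synthetic ranges: a true geometric range plus a 1/f^2 ionospheric delay
rho = 22_000_000.0               # metres (hypothetical)
i1 = 15.0                        # first-order delay at L1 (metres)
p1 = rho + i1
p2 = rho + i1 * (f1 / f2) ** 2   # the delay is larger at the lower frequency
print(iono_free(p1, p2) - rho)   # ~0: the first-order delay is removed
```

The residual centimetre-level terms studied in the paper are exactly what this first-order model leaves behind.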
To Err is Normable: The Computation of Frequency-Domain Error Bounds from Time-Domain Data
NASA Technical Reports Server (NTRS)
Hartley, Tom T.; Veillette, Robert J.; DeAbreuGarcia, J. Alexis; Chicatelli, Amy; Hartmann, Richard
1998-01-01
This paper exploits the relationships among the time-domain and frequency-domain system norms to derive information useful for modeling and control design, given only the system step response data. A discussion of system and signal norms is included. The proposed procedures involve only simple numerical operations, such as the discrete approximation of derivatives and integrals, and the calculation of matrix singular values. The resulting frequency-domain and Hankel-operator norm approximations may be used to evaluate the accuracy of a given model, and to determine model corrections to decrease the modeling errors.
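As an illustration of the Hankel-operator idea, the following numpy sketch forms discrete Markov parameters by differencing step-response data and reads off Hankel singular values; the first-order system, sizes and values are invented for the example and are not the paper's procedure:

```python
import numpy as np

# Step response of a hypothetical discrete first-order system: y[k] = 1 - a^k
a = 0.8
k = np.arange(60)
step = 1.0 - a**k

# Discrete impulse response (Markov parameters): first difference of the step
h = np.diff(step, prepend=0.0)          # h[k] = (1 - a) * a**(k-1) for k >= 1

# Hankel matrix of the strictly proper Markov parameters h[1], h[2], ...
n = 25
H = np.array([[h[i + j + 1] for j in range(n)] for i in range(n)])

sv = np.linalg.svd(H, compute_uv=False)
print(sv[0])        # largest Hankel singular value (~1/(1+a) here)
print(sv[1])        # ~0: a first-order system gives a rank-one Hankel matrix
```

Comparing such singular values computed from measured versus model step responses is one simple way to quantify modeling error in the Hankel-norm sense.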
Evaluating Error of LiDAR-Derived DEM Interpolation for Vegetation Area
NASA Astrophysics Data System (ADS)
Ismail, Z.; Khanan, M. F. Abdul; Omar, F. Z.; Rahman, M. Z. Abdul; Mohd Salleh, M. R.
2016-09-01
Light Detection and Ranging or LiDAR data is a data source for deriving digital terrain models, while a Digital Elevation Model or DEM is usable within a Geographical Information System or GIS. The aim of this study is to evaluate the accuracy of LiDAR-derived DEMs generated based on different interpolation methods and slope classes. Initially, the study area is divided into three slope classes: (a) slope class one (0° - 5°), (b) slope class two (6° - 10°) and (c) slope class three (11° - 15°). Secondly, each slope class is tested using three distinctive interpolation methods: (a) Kriging, (b) Inverse Distance Weighting (IDW) and (c) Spline. Next, accuracy assessment is done based on field survey tachymetry data. The findings reveal that Kriging provided the lowest overall Root Mean Square Error or RMSE value of 0.727 m for both 0.5 m and 1 m spatial resolutions of the oil palm area, followed by Spline with values of 0.734 m for 0.5 m spatial resolution and 0.747 m for spatial resolution of 1 m. Concurrently, IDW provided the highest RMSE value of 0.784 m for both spatial resolutions of 0.5 and 1 m. For the rubber area, Spline provided the lowest RMSE value of 0.746 m for 0.5 m spatial resolution and 0.760 m for 1 m spatial resolution. The highest RMSE value for the rubber area was given by IDW, with a value of 1.061 m for both spatial resolutions. Finally, Kriging gave an RMSE value of 0.790 m for both spatial resolutions.
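The interpolation-versus-checkpoint workflow can be sketched as follows; this is a toy numpy illustration (synthetic planar terrain, k-nearest-neighbour IDW), not the study's GIS processing:

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0, k=12):
    """Inverse Distance Weighting from the k nearest samples (1/d**power)."""
    z = np.empty(len(xy_query))
    for i, q in enumerate(xy_query):
        d = np.linalg.norm(xy_known - q, axis=1)
        nn = np.argsort(d)[:k]               # k nearest neighbours
        dn = np.maximum(d[nn], 1e-12)        # guard against exact hits
        w = 1.0 / dn**power
        z[i] = np.sum(w * z_known[nn]) / np.sum(w)
    return z

rng = np.random.default_rng(1)
pts = rng.uniform(0, 100, size=(200, 2))          # sample ground points (x, y)
elev = 0.05 * pts[:, 0] + 0.02 * pts[:, 1]        # gentle planar slope (m)

check = rng.uniform(10, 90, size=(50, 2))         # field-survey checkpoints
truth = 0.05 * check[:, 0] + 0.02 * check[:, 1]
pred = idw(pts, elev, check)

rmse = np.sqrt(np.mean((pred - truth) ** 2))
print(rmse)    # small residual for a smooth planar surface
```

Swapping the interpolator (Kriging, Spline, IDW) while holding the checkpoints fixed yields the kind of RMSE comparison reported in the abstract.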
ERIC Educational Resources Information Center
Sampson, Andrew
2012-01-01
This paper reports on a small-scale study into the effects of uncoded correction (writing the correct forms above each error) and coded annotations (writing symbols that encourage learners to self-correct) on Colombian university-level EFL learners' written work. The study finds that while both coded annotations and uncoded correction appear to…
NASA Astrophysics Data System (ADS)
Wang, Mi; Fang, Chengcheng; Yang, Bo; Cheng, Yufeng
2016-06-01
The low frequency error is a key factor affecting the uncontrolled geometric processing accuracy of high-resolution optical imagery. To guarantee the geometric quality of imagery, this paper presents an on-orbit calibration method for the low frequency error based on a geometric calibration field. Firstly, we introduce the overall flow of low frequency error on-orbit analysis and calibration, which includes optical axis angle variation detection for the star sensor, relative calibration among star sensors, multi-star sensor information fusion, and low frequency error model construction and verification. Secondly, we use the optical axis angle change detection method to analyze the law of low frequency error variation. Thirdly, we use relative calibration and information fusion among star sensors to realize datum unity and high precision attitude output. Finally, we realize low frequency error model construction and optimal estimation of the model parameters based on the DEM/DOM of the geometric calibration field. To evaluate the performance of the proposed calibration method, real data from a certain type of satellite are used. Test results demonstrate that the calibration model in this paper can well describe the law of low frequency error variation. The uncontrolled geometric positioning accuracy of the high-resolution optical image in the WGS-84 coordinate system is obviously improved after the step-wise calibration.
Lock-in amplifier error prediction and correction in frequency sweep measurements.
Sonnaillon, Maximiliano Osvaldo; Bonetto, Fabian Jose
2007-01-01
This article proposes an analytical algorithm for predicting errors in lock-in amplifiers (LIAs) working with time-varying reference frequency. Furthermore, a simple method for correcting such errors is presented. The reference frequency can be swept in order to measure the frequency response of a system within a given spectrum. The continuous variation of the reference frequency produces a measurement error that depends on three factors: the sweep speed, the LIA low-pass filters, and the frequency response of the measured system. The proposed error prediction algorithm is based on the final value theorem of the Laplace transform. The correction method uses a double-sweep measurement. A mathematical analysis is presented and validated with computational simulations and experimental measurements.
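The mixing-and-filtering structure whose low-pass dynamics cause the sweep error can be illustrated with a minimal fixed-frequency digital lock-in; this is a generic numpy sketch (invented signal, noise level and time constant), not the article's final-value-theorem prediction or double-sweep correction:

```python
import numpy as np

def lock_in(signal, fs, f_ref, tau=0.05):
    """Minimal digital lock-in amplifier at a fixed reference frequency.

    Multiplies the input by quadrature references and low-pass filters the
    products with a one-pole RC filter of time constant tau (seconds).
    """
    t = np.arange(signal.size) / fs
    ref_i = np.cos(2 * np.pi * f_ref * t)
    ref_q = np.sin(2 * np.pi * f_ref * t)
    alpha = 1.0 / (1.0 + fs * tau)          # one-pole low-pass coefficient
    xi = yi = 0.0
    for k in range(signal.size):
        xi += alpha * (signal[k] * ref_i[k] - xi)
        yi += alpha * (signal[k] * ref_q[k] - yi)
    return 2.0 * np.hypot(xi, yi)           # settled amplitude estimate

fs = 10_000
t = np.arange(0, 1.0, 1 / fs)
sig = 0.3 * np.sin(2 * np.pi * 137.0 * t + 0.4)            # weak tone
sig += 0.5 * np.random.default_rng(2).standard_normal(t.size)  # noise
amp = lock_in(sig, fs, 137.0)
print(amp)   # close to the 0.3 tone amplitude
```

When the reference frequency is swept instead of fixed, the same low-pass filters lag behind the changing input, which is precisely the error source the article analyzes.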
Gharekhani, Afshin; Kanani, Negin; Khalili, Hossein; Dashti-Khavidaki, Simin
2014-09-01
Medication errors are an ongoing problem among hospitalized patients, especially those with multiple co-morbidities and polypharmacy such as patients with renal diseases. This study evaluated the frequency, types and direct related cost of medication errors in a nephrology ward and the role played by clinical pharmacists. During this study, clinical pharmacists detected, managed, and recorded the medication errors. Prescribing errors including inappropriate drug, dose, or treatment duration were gathered. To assess transcription errors, the equivalence of nursing charts and physicians' orders was evaluated. Administration errors were assessed by observing drug preparation, storage, and administration by nurses. The changes in medication costs after implementing clinical pharmacists' interventions were compared with the calculated medication costs if the medication errors had continued up to patients' discharge time. More than 85% of patients experienced a medication error. The rate of medication errors was 3.5 errors per patient and 0.18 errors per ordered medication. More than 95% of medication errors occurred at the prescription node. The most common prescribing errors were omission (26.9%), unauthorized drugs (18.3%), and low drug dosage or frequency (17.3%). Most of the medication errors happened with cardiovascular drugs (24%) followed by vitamins and electrolytes (22.1%) and antimicrobials (18.5%). The number of medication errors was correlated with the number of ordered medications and the length of hospital stay. Clinical pharmacists' interventions decreased patients' direct medication costs by 4.3%. About 22% of medication errors led to patient harm. In conclusion, clinical pharmacists' contributions in nephrology wards were of value to prevent medication errors and to reduce medication costs.
The frequency and potential causes of dispensing errors in a hospital pharmacy.
Beso, Adnan; Franklin, Bryony Dean; Barber, Nick
2005-06-01
To determine the frequency and types of dispensing errors identified both at the final check stage and outside of a UK hospital pharmacy, to explore the reasons why they occurred, and to make recommendations for their prevention. A definition of a dispensing error and a classification system were developed. To study the frequency and types of errors, pharmacy staff recorded details of all errors identified at the final check stage during a two-week period; all errors identified outside of the department and reported during a one-year period were also recorded. During a separate six-week period, pharmacy staff making dispensing errors identified at the final check stage were interviewed to explore the causes; the findings were analysed using a model of human error. Percentage of dispensed items for which one or more dispensing errors were identified at the final check stage; percentage for which an error was reported outside of the pharmacy department; the active failures, error producing conditions and latent conditions that result in dispensing errors occurring. One or more dispensing errors were identified at the final check stage in 2.1% of 4849 dispensed items, and outside of the pharmacy department in 0.02% of 194,584 items. The majority of those identified at the final check stage involved slips in picking products, or mistakes in making assumptions about the products concerned. Factors contributing to the errors included labelling and storage of containers in the dispensary, interruptions and distractions, a culture where errors are seen as being inevitable, and reliance on others to identify and rectify errors. Dispensing errors occur in about 2% of all dispensed items. About 1 in 100 of these is missed by the final check. The impact on dispensing errors of developments such as automated dispensing systems should be evaluated.
Sampling Errors in Satellite-derived Infrared Sea Surface Temperatures
NASA Astrophysics Data System (ADS)
Liu, Y.; Minnett, P. J.
2014-12-01
Sea Surface Temperature (SST) measured from satellites has been playing a crucial role in understanding geophysical phenomena. Generating SST Climate Data Records (CDRs) is considered the application that imposes the most stringent requirements on data accuracy. For infrared SSTs, sampling uncertainties caused by cloud presence and persistence generate errors. In addition, for sensors with narrow swaths, the swath gap acts as another sampling error source. This study is concerned with quantifying and understanding such sampling errors, which are important for SST CDR generation and for a wide range of satellite SST users. In order to quantify these errors, a reference Level 4 SST field (Multi-scale Ultra-high Resolution SST) is sampled by using realistic swath and cloud masks of the Moderate Resolution Imaging Spectroradiometer (MODIS) and the Advanced Along Track Scanning Radiometer (AATSR). Global and regional SST uncertainties are studied by assessing the sampling error at different temporal and spatial resolutions (7 spatial resolutions from 4 kilometers to 5.0° at the equator and 5 temporal resolutions from daily to monthly). Global annual and seasonal mean sampling errors are large in the high latitude regions, especially the Arctic, and have geographical distributions that are most likely related to stratus cloud occurrence and persistence. The region between 30°N and 30°S has smaller errors compared to higher latitudes, except for the Tropical Instability Wave area, where persistent negative errors are found. Important differences in sampling errors are also found between the broad and narrow swath scan patterns and between day and night fields. This is the first time that realistic magnitudes of the sampling errors have been quantified. Future improvement in the accuracy of SST products will benefit from this quantification.
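The masking approach can be sketched in a few lines; the field, cloud model and numbers below are invented for illustration and are not the MODIS/AATSR masks or the MUR reference field used in the study:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical "truth": a 1-degree SST field with a zonal-mean gradient (degC)
lat = np.linspace(-60, 60, 121)
lon = np.arange(360.0)
base = 28.0 - 0.25 * np.abs(lat)                      # colder at high latitude
sst = base[:, None] + 0.3 * rng.standard_normal((lat.size, lon.size))

# Persistent cloud: clear-sky probability drops toward high latitude
p_clear = np.clip(1.0 - np.abs(lat) / 80.0, 0.1, 1.0)[:, None]
clear = rng.random(sst.shape) < p_clear

# Sampling error of the area mean: clear-sky mean minus true mean
true_mean = sst.mean()
sampled_mean = sst[clear].mean()
bias = sampled_mean - true_mean
print(bias)   # positive: clouds preferentially hide cold high-latitude water
```

Repeating this with realistic masks at many space-time resolutions gives the sampling-error maps the abstract describes.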
Vazin, Afsaneh; Zamani, Zahra; Hatam, Nahid
2014-01-01
This study was conducted with the purpose of determining the frequency of medication errors (MEs) occurring in tertiary care emergency department (ED) of a large academic hospital in Iran. The incidence of MEs was determined through the disguised direct observation method conducted by a trained observer. A total of 1,031 medication doses administered to 202 patients admitted to the tertiary care ED were observed over a course of 54 6-hour shifts. Following collection of the data and analysis of the errors with the assistance of a clinical pharmacist, frequency of errors in the different stages was reported and analyzed in SPSS-21 software. For the 202 patients and the 1,031 medication doses evaluated in the present study, 707 (68.5%) MEs were recorded in total. In other words, 3.5 errors per patient and almost 0.69 errors per medication are reported to have occurred, with the highest frequency of errors pertaining to cardiovascular (27.2%) and antimicrobial (23.6%) medications. The highest rate of errors occurred during the administration phase of the medication use process with a share of 37.6%, followed by errors of prescription and transcription with a share of 21.1% and 10% of errors, respectively. Omission (7.6%) and wrong time error (4.4%) were the most frequent administration errors. The less-experienced nurses (P=0.04), higher patient-to-nurse ratio (P=0.017), and the morning shifts (P=0.035) were positively related to administration errors. Administration errors marked the highest share of MEs occurring in the different medication use processes. Increasing the number of nurses and employing the more experienced of them in EDs can help reduce nursing errors. Addressing the shortcomings with further research should result in reduction of MEs in EDs. PMID:25525391
Error-free demodulation of pixelated carrier frequency interferograms.
Servin, M; Estrada, J C
2010-08-16
Recently, pixelated spatial carrier interferograms have been used in optical metrology and are an industry standard nowadays. The main feature of these interferometers is that each pixel over the video camera may be phase-modulated by any (however fixed) desired angle within [0, 2π] radians. The phase at each pixel is shifted without cross-talking from their immediate neighborhoods. This has opened new possibilities for experimental spatial wavefront modulation not dreamed before, because we are no longer constrained to introduce a spatial-carrier using a tilted plane. Any useful mathematical model to phase-modulate the testing wavefront in a pixel-wise basis can be used. However we are nowadays faced with the problem that these pixelated interferograms have not been correctly demodulated to obtain an error-free (exact) wavefront estimation. The purpose of this paper is to offer the general theory that allows one to demodulate, in an exact way, pixelated spatial-carrier interferograms modulated by any thinkable two-dimensional phase carrier.
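For background, a common pixelated-mask arrangement uses 2x2 super-pixels carrying shifts 0, π/2, π and 3π/2, demodulated with the conventional four-step formula. The numpy sketch below shows that conventional estimate and its residual error from intra-super-pixel phase variation; it is not the exact demodulation theory derived in the paper:

```python
import numpy as np

# Simulated wavefront phase (defocus term) and pixelated carrier mask
n = 64
y, x = np.mgrid[0:n, 0:n]
phi = 2 * np.pi * ((x - n / 2) ** 2 + (y - n / 2) ** 2) / n**2

# 2x2 super-pixel mask of phase shifts: [[0, pi/2], [3*pi/2, pi]]
mask = np.zeros((n, n))
mask[0::2, 1::2] = np.pi / 2
mask[1::2, 0::2] = 3 * np.pi / 2
mask[1::2, 1::2] = np.pi

interferogram = 0.5 + 0.5 * np.cos(phi + mask)

# Conventional demodulation: combine the four shifted samples per super-pixel
i0 = interferogram[0::2, 0::2]            # shift 0
i1 = interferogram[0::2, 1::2]            # shift pi/2
i2 = interferogram[1::2, 1::2]            # shift pi
i3 = interferogram[1::2, 0::2]            # shift 3*pi/2
est = np.arctan2(i3 - i1, i0 - i2)        # four-step phase estimate

truth = phi[0::2, 0::2]                   # reference phase per super-pixel
err = np.angle(np.exp(1j * (est - truth)))
print(np.abs(err).max())  # residual from phase variation within a super-pixel
```

The residual printed here is exactly the kind of non-zero demodulation error the paper's exact theory is designed to remove.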
Disentangling the impacts of outcome valence and outcome frequency on the post-error slowing
Wang, Lijun; Tang, Dandan; Zhao, Yuanfang; Hitchman, Glenn; Wu, Shanshan; Tan, Jinfeng; Chen, Antao
2015-01-01
Post-error slowing (PES) reflects efficient outcome monitoring, manifested as slower reaction time after errors. The cognitive control account assumes that PES depends on error information, whereas the orienting account posits that it depends on error frequency. This raises the question of how outcome valence and outcome frequency separably influence the generation of PES. To address this issue, we varied the probability of observation errors (50/50 and 20/80, correct/error) committed by the "partner" by employing an observation-execution task, and investigated the corresponding behavioral and neural effects. On each trial, participants first viewed the outcome of a flanker-run that was supposedly performed by a 'partner', and then performed a flanker-run themselves. We observed PES in both error rate conditions. However, electroencephalographic data suggested that error-related potentials (oERN and oPe) and rhythmic oscillation associated with attentional processes (alpha band) were respectively sensitive to outcome valence and outcome frequency. Importantly, oERN amplitude was positively correlated with PES. Taken together, these findings support the assumption of the cognitive control account, suggesting that outcome valence and outcome frequency are both involved in PES. Moreover, the generation of PES is indexed by the oERN, whereas the modulation of PES size could be reflected in the alpha band. PMID:25732237
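PES itself is straightforward to compute from trial data: the mean reaction time on trials following an error minus the mean on trials following a correct response. A minimal sketch with invented reaction times:

```python
import numpy as np

def post_error_slowing(rt, correct):
    """PES = mean RT after error trials minus mean RT after correct trials."""
    rt = np.asarray(rt, dtype=float)
    correct = np.asarray(correct, dtype=bool)
    after_error = rt[1:][~correct[:-1]]
    after_correct = rt[1:][correct[:-1]]
    return after_error.mean() - after_correct.mean()

# Hypothetical session: ~450 ms baseline RTs, +60 ms slowing after each error
rng = np.random.default_rng(4)
correct = rng.random(400) > 0.2                 # 20% error rate
rt = 450 + 20 * rng.standard_normal(400)
rt[1:][~correct[:-1]] += 60                     # inject post-error slowing
pes = post_error_slowing(rt, correct)
print(pes)   # recovers roughly the injected 60 ms
```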
Phase-modulation method for AWG phase-error measurement in the frequency domain.
Takada, Kazumasa; Hirose, Tomohiro
2009-12-15
We report a phase-modulation method for measuring arrayed waveguide grating (AWG) phase error in the frequency domain. By combining the method with a digital sampling technique that we have already reported, we can measure the phase error within an accuracy of +/-0.055 rad for the center 90% waveguides in the array even when no carrier frequencies are generated in the beat signal from the interferometer.
Video error concealment using block matching and frequency selective extrapolation algorithms
NASA Astrophysics Data System (ADS)
P. K., Rajani; Khaparde, Arti
2017-06-01
Error Concealment (EC) is a technique at the decoder side to hide transmission errors. It is done by analyzing the spatial or temporal information from available video frames. Recovering distorted video is important because video is used in applications such as video telephony, video conferencing, TV, DVD, internet video streaming and video games. Retransmission-based and resilience-based methods are also used for error removal, but these methods add delay and redundant data, so error concealment is the best option for error hiding. In this paper, the Block Matching error concealment algorithm is compared with the Frequency Selective Extrapolation algorithm. Both approaches take manually corrupted video frames as input. The parameters used for objective quality measurement were PSNR (Peak Signal to Noise Ratio) and SSIM (Structural Similarity Index). The original video frames along with the corrupted frames were processed with both error concealment algorithms. According to the simulation results, Frequency Selective Extrapolation shows better quality measures, with 48% higher PSNR and 94% higher SSIM than the Block Matching algorithm.
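The PSNR figure of merit used in the comparison can be sketched directly; the frames and the crude mean-fill "concealment" below are invented for illustration and are neither of the two algorithms compared in the paper:

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB between a reference and a test frame."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(test, float)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak**2 / mse)

# Toy frames: any concealment that reduces the corruption raises PSNR
rng = np.random.default_rng(5)
ref = rng.integers(0, 256, size=(64, 64)).astype(float)
corrupted = ref.copy()
corrupted[16:32, 16:32] = 0                          # lost macroblock
concealed = ref.copy()
concealed[16:32, 16:32] = ref[16:32, 16:32].mean()   # crude spatial fill
print(psnr(ref, corrupted) < psnr(ref, concealed))   # -> True
```

Real EC algorithms (block matching from a previous frame, or frequency selective extrapolation from surrounding pixels) fill the lost block far more faithfully than this mean fill, which is what the PSNR/SSIM comparison quantifies.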
On low-frequency errors of uniformly modulated filtered white-noise models for ground motions
Safak, Erdal; Boore, David M.
1988-01-01
Low-frequency errors of a commonly used non-stationary stochastic model (the uniformly modulated filtered white-noise model) for earthquake ground motions are investigated. It is shown both analytically and by numerical simulation that uniformly modulated filtered white-noise-type models systematically overestimate the spectral response for periods longer than the effective duration of the earthquake, because of the built-in low-frequency errors in the model. The errors, which are significant for low-magnitude short-duration earthquakes, can be eliminated by using filtered shot-noise-type models (i.e., white noise modulated by the envelope first and then filtered).
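The difference between the two model constructions can be illustrated numerically; the envelope, one-pole filter, band and seeds below are arbitrary choices for a toy demonstration, not the paper's analytical treatment:

```python
import numpy as np

fs, dur = 100.0, 40.0
t = np.arange(0, dur, 1 / fs)
env = np.exp(-((t - 1.0) ** 2) / (2 * 0.3**2))   # short-duration event

def high_pass(x, fc=1.0):
    """One-pole high-pass: the input minus a recursive one-pole low-pass."""
    dt = 1.0 / fs
    alpha = dt / (dt + 1.0 / (2 * np.pi * fc))
    acc, low = 0.0, np.zeros_like(x)
    for k in range(x.size):
        acc += alpha * (x[k] - acc)
        low[k] = acc
    return x - low

freqs = np.fft.rfftfreq(t.size, 1 / fs)
band = freqs < 0.3                               # "long-period" band
rng = np.random.default_rng(6)
pa = pb = 0.0
for _ in range(10):                              # average over realizations
    w = rng.standard_normal(t.size)
    a = env * high_pass(w)                       # filter, THEN modulate
    b = high_pass(env * w)                       # modulate FIRST, then filter
    pa += np.mean(np.abs(np.fft.rfft(a))[band] ** 2)
    pb += np.mean(np.abs(np.fft.rfft(b))[band] ** 2)

print(pa > pb)   # True: filtering last keeps the low-frequency band suppressed
```

Modulating after filtering spreads the filtered spectrum and leaks energy back into the low-frequency band, which is the mechanism behind the long-period overestimation the abstract describes; applying the filter last (the shot-noise ordering) keeps that band suppressed.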
Schnock, Kumiko O; Dykes, Patricia C; Albert, Jennifer; Ariosto, Deborah; Call, Rosemary; Cameron, Caitlin; Carroll, Diane L; Drucker, Adrienne G; Fang, Linda; Garcia-Palm, Christine A; Husch, Marla M; Maddox, Ray R; McDonald, Nicole; McGuire, Julie; Rafie, Sally; Robertson, Emilee; Saine, Deb; Sawyer, Melinda D; Smith, Lisa P; Stinger, Kristy Dixon; Vanderveen, Timothy W; Wade, Elizabeth; Yoon, Catherine S; Lipsitz, Stuart; Bates, David W
2017-02-01
Intravenous medication errors persist despite the use of smart pumps. This suggests the need for a standardised methodology for measuring errors and highlights the importance of identifying issues around smart pump medication administration in order to improve patient safety. We conducted a multisite study to investigate the types and frequency of intravenous medication errors associated with smart pumps in the USA. Ten hospitals of various sizes, using smart pumps from a range of vendors, participated. Data were collected using a prospective point prevalence approach to capture errors associated with medications administered via smart pumps and to evaluate their potential for harm. A total of 478 patients and 1164 medication administrations were assessed. Of the observed infusions, 699 (60%) had one or more errors associated with their administration. Identified errors, such as labelling errors and bypassing of the smart pump and the drug library, were predominantly violations of hospital policy, and violations of this kind can result in medication errors. Errors were classified according to the National Coordinating Council for Medication Error Reporting and Prevention (NCC MERP) index: 1 error of category E (0.1%), 4 of category D (0.3%) and 492 of category C (excluding deviations from hospital policy) (42%) were identified. Of these, unauthorised medication, bypassing the smart pump and wrong rate were the most frequent errors. We identified a high rate of error in the administration of intravenous medications despite the use of smart pumps; however, relatively few errors were potentially harmful. The results of this study will be useful in developing interventions to eliminate errors in the intravenous medication administration process. Published by the BMJ Publishing Group Limited.
Analysis of measured data of human body based on error correcting frequency
NASA Astrophysics Data System (ADS)
Jin, Aiyan; Peipei, Gao; Shang, Xiaomei
2014-04-01
Anthropometry measures all parts of the human body surface, and the measured data form the basis for analysis and study of the human body, for establishing and revising garment sizes, and for developing and operating online clothing stores. In this paper, several groups of measured data are collected, and the data errors are analyzed by examining error frequency and applying the analysis-of-variance method from mathematical statistics. The paper also addresses determining the accuracy of the measured data and the difficulty of measuring particular parts of the human body, investigates the causes of the data errors, and summarizes the key points for minimizing errors. By analyzing the measured data on the basis of error frequency, the paper provides reference material to support the development of the garment industry.
Error Bounds for Quadrature Methods Involving Lower Order Derivatives
ERIC Educational Resources Information Center
Engelbrecht, Johann; Fedotov, Igor; Fedotova, Tanya; Harding, Ansie
2003-01-01
Quadrature methods for approximating the definite integral of a function f(t) over an interval [a,b] are in common use. Examples of such methods are the Newton-Cotes formulas (midpoint, trapezoidal and Simpson methods etc.) and the Gauss-Legendre quadrature rules, to name two types of quadrature. Error bounds for these approximations involve…
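The rules named in this abstract, and the derivative-based error bounds it alludes to, can be checked numerically. A small sketch comparing the composite trapezoid and Simpson rules on the test integral of e^t over [0, 1], against the classical bound (b-a)h^2 max|f''|/12 for the trapezoid rule:

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoid rule with n subintervals."""
    h = (b - a) / n
    return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))

def simpson(f, a, b, n):
    """Composite Simpson rule; n must be even."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
    return s * h / 3

exact = math.e - 1.0                      # integral of e^t over [0, 1]
err_trap = abs(trapezoid(math.exp, 0, 1, 16) - exact)
err_simp = abs(simpson(math.exp, 0, 1, 16) - exact)
# Trapezoid bound: (b-a) * h^2 / 12 * max|f''| with max|f''| = e on [0, 1].
bound_trap = math.e / (12 * 16 ** 2)
print(err_trap, err_simp, bound_trap)
```

The observed trapezoid error sits below its second-derivative bound, and Simpson's fourth-order rule beats the trapezoid's second-order rule at the same n, the kind of comparison the error bounds formalize.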
Escott-Price, Valentina; Ghodsi, Mansoureh; Schmidt, Karl Michael
2014-04-01
We evaluate the effect of genotyping errors on the type-I error of a general association test based on genotypes, showing that, in the presence of errors in the case and control samples, the test statistic asymptotically follows a scaled non-central $\chi^2$ distribution. We give explicit formulae for the scaling factor and non-centrality parameter for the symmetric allele-based genotyping error model and for additive and recessive disease models. They show how genotyping errors can lead to a significantly higher false-positive rate, growing with sample size, compared with the nominal significance levels. The strength of this effect depends very strongly on the population distribution of the genotype, with a pronounced effect in the case of rare alleles, and a great robustness against error in the case of large minor allele frequency. We also show how these results can be used to correct $p$-values.
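The inflation of the type-I error rate can be illustrated by simulation. The sketch below assumes, purely for illustration, a differential allele-flip error rate between cases and controls under the null hypothesis and an allele-based 2x2 test; it is not the paper's analytical model:

```python
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(1)
maf, n, n_sim, alpha = 0.05, 1000, 300, 0.05   # rare allele, per the abstract
eps_case, eps_ctrl = 0.03, 0.0                 # assumed differential error rates

def allele_counts(n_people, eps):
    """2n alleles drawn under the null; each allele flips with probability eps."""
    alleles = rng.random(2 * n_people) < maf    # True = minor allele
    flip = rng.random(2 * n_people) < eps
    alleles = alleles ^ flip
    return alleles.sum(), 2 * n_people - alleles.sum()

rejections = 0
for _ in range(n_sim):
    ca_minor, ca_major = allele_counts(n, eps_case)
    co_minor, co_major = allele_counts(n, eps_ctrl)
    p = chi2_contingency([[ca_minor, ca_major], [co_minor, co_major]])[1]
    rejections += p < alpha

false_positive_rate = rejections / n_sim
print(false_positive_rate)   # far above the nominal 0.05
```

With a rare allele even a small error-rate asymmetry shifts the apparent case allele frequency substantially, so the false-positive rate climbs far above the nominal level, in line with the abstract's rare-allele sensitivity result.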
Correction of Frequency-Dependent Nonlinear Errors in Direct-Conversion Transceivers
2016-03-31
Blake James & Caleb Fulton, Advanced Radar Research Center, University of Oklahoma, Norman, Oklahoma, USA, 73019 (pyraminxrox@ou.edu, fulton@ou.edu). Abstract (fragment): Correction of nonlinear and frequency-dependent … frequency-dependent nonlinear distortion in modern highly digital phased arrays. The work presented here is done in the context of calibrating the
Crosslinking EEG time-frequency decomposition and fMRI in error monitoring.
Hoffmann, Sven; Labrenz, Franziska; Themann, Maria; Wascher, Edmund; Beste, Christian
2014-03-01
Recent studies implicate a common response monitoring system that is active during both erroneous and correct responses. Converging evidence from time-frequency decompositions of the response-related ERP revealed that evoked theta activity at fronto-central electrode positions differentiates correct from erroneous responses in simple tasks, but also in more complex tasks. However, up to now it is unclear how different electrophysiological parameters of error processing, especially at the level of neural oscillations, are related to, or predictive of, BOLD signal changes reflecting error processing at a functional-neuroanatomical level. The present study aims to provide crosslinks between time-domain information, time-frequency information, MRI BOLD signal and behavioral parameters in a task examining error monitoring of mistakes in a mental rotation task. The results show that BOLD signal changes reflecting error processing on a functional-neuroanatomical level are best predicted by evoked oscillations in the theta frequency band. Although the fMRI results in this study indicate an involvement of the anterior cingulate cortex, middle frontal gyrus, and the insula in error processing, the correlation of evoked oscillations and BOLD signal was restricted to a coupling of evoked theta and anterior cingulate cortex BOLD activity. The current results indicate that although there is a distributed functional-neuroanatomical network mediating error processing, only distinct parts of this network seem to modulate electrophysiological properties of error monitoring.
Frequency-domain correction of sensor dynamic error for step response.
Yang, Shuang-Long; Xu, Ke-Jun
2012-11-01
To obtain accurate results in dynamic measurements, the sensors must have good dynamic performance. In practice, sensors have non-ideal dynamic characteristics due to their small damping ratios and low natural frequencies. In this case, dynamic error correction methods can be applied to the sensor responses to eliminate the effect of these characteristics. Frequency-domain correction of sensor dynamic error is a common method. Using the existing calculation method, however, the correct frequency-domain correction function (FCF) cannot be obtained from step-response calibration data, because of the leakage error and invalid FCF values caused by the cyclic extension of the finite-length step input-output data segments. To solve these problems, data-splicing preprocessing and FCF interpolation are proposed, and the FCF calculation steps as well as the procedure for sensor dynamic error correction by the calculated FCF are presented in this paper. The proposed solution is applied to the dynamic error correction of a bar-shaped wind tunnel strain gauge balance to verify its effectiveness. The correction results show that the settling time of the balance step response is shortened to 10 ms after frequency-domain correction (less than 1/30 of the value before correction), and the overshoot falls within 5% (less than 1/10 of the value before correction). The dynamic measurement accuracy of the balance is improved significantly.
NASA Astrophysics Data System (ADS)
Khodja, Mohamed; Belouchrani, Adel; Abed-Meraim, Karim
2012-12-01
This article deals with the application of Spatial Time-Frequency Distribution (STFD) to the direction finding problem using the Multiple Signal Classification (MUSIC) algorithm. A comparative performance analysis is performed for the method under consideration with respect to that using the data covariance matrix, when the received array signals are subject to calibration errors in a non-stationary environment. A unified analytical expression of the direction-of-arrival (DOA) estimation error is derived for both methods. Numerical results show the effect of the parameters entering the derived expression on the algorithm performance. It is particularly observed that for low signal-to-noise ratio (SNR) and high signal-to-sensor perturbation ratio (SPR) the STFD method gives better performance, while for high SNR and the same SPR both methods give similar performance.
Direct measurement of the poliovirus RNA polymerase error frequency in vitro
Ward, C.D.; Stokes, M.A.M.; Flanegan, J.B. )
1988-02-01
The fidelity of RNA replication by the poliovirus RNA-dependent RNA polymerase was examined by copying homopolymeric RNA templates in vitro. The poliovirus RNA polymerase was extensively purified and used to copy poly(A), poly(C), or poly(I) templates with equimolar concentrations of noncomplementary and complementary ribonucleotides. The error frequency was expressed as the amount of noncomplementary nucleotide incorporated divided by the total amount of complementary and noncomplementary nucleotide incorporated. The polymerase error frequencies were very high and depended on the specific reaction conditions. The activity of the polymerase on poly(U) and poly(G) was too low to measure error frequencies on these templates. A fivefold increase in the error frequency was observed when the reaction conditions were changed from 3.0 mM Mg²⁺ (pH 7.0) to 7.0 mM Mg²⁺ (pH 8.0). This increase in the error frequency correlates with an eightfold increase in the elongation rate that was observed under the same conditions in a previous study.
English word frequency and recognition in bilinguals: Inter-corpus comparison and error analysis.
Shi, Lu-Feng
2015-01-01
This study is the second of a two-part investigation of lexical effects on bilinguals' performance on a clinical English word recognition test, focusing on word-frequency effects using counts provided by four corpora. Frequency of occurrence was obtained for 200 NU-6 words from the Hoosier mental lexicon (HML) and three contemporary corpora: the American National Corpus, the Hyperspace Analogue to Language (HAL), and SUBTLEX(US). Correlation analysis was performed between word frequency and error rate. Ten monolinguals and 30 bilinguals participated. Bilinguals were further grouped according to their age of English acquisition and length of schooling/working in English. Word frequency significantly affected word recognition in bilinguals who acquired English late and had limited schooling/working in English. When making errors, bilinguals tended to replace the target word with a word of higher frequency. Overall, the newer corpora outperformed the HML in predicting error rate. Frequency counts provided by contemporary corpora predict bilinguals' recognition of English monosyllabic words, and word frequency also helps explain the top replacement words for misrecognized targets. Word-frequency effects are especially prominent for foreign-born and foreign-educated bilinguals.
NASA Technical Reports Server (NTRS)
Fetterman, Timothy L.; Noor, Ahmed K.
1987-01-01
Computational procedures are presented for evaluating the sensitivity derivatives of the vibration frequencies and eigenmodes of framed structures. Both a displacement and a mixed formulation are used. The two key elements of the computational procedure are: (a) Use of dynamic reduction techniques to substantially reduce the number of degrees of freedom; and (b) Application of iterative techniques to improve the accuracy of the derivatives of the eigenmodes. The two reduction techniques considered are the static condensation and a generalized dynamic reduction technique. Error norms are introduced to assess the accuracy of the eigenvalue and eigenvector derivatives obtained by the reduction techniques. The effectiveness of the methods presented is demonstrated by three numerical examples.
Real-time drift error compensation in a self-reference frequency-scanning fiber interferometer
NASA Astrophysics Data System (ADS)
Tao, Long; Liu, Zhigang; Zhang, Weibo; Liu, Zhe; Hong, Jun
2017-01-01
In order to eliminate fiber drift errors in a frequency-scanning fiber interferometer, we propose a self-reference frequency-scanning fiber interferometer composed of two fiber Michelson interferometers sharing common optical paths of fibers. One interferometer, defined as the reference interferometer, is used to monitor the optical path length drift in real time and to establish a fixed measurement origin. The other is used as the measurement interferometer to acquire information from the target. Because the optical path differences of the reference and measurement interferometers measured by frequency-scanning interferometry include the same fiber drift errors, the errors can be eliminated by subtracting the former optical path difference from the latter. A prototype interferometer was developed in our research, and experimental results demonstrate its robustness and stability.
Online public reactions to frequency of diagnostic errors in US outpatient care
Giardina, Traber Davis; Sarkar, Urmimala; Gourley, Gato; Modi, Varsha; Meyer, Ashley N.D.; Singh, Hardeep
2016-01-01
Background Diagnostic errors pose a significant threat to patient safety but little is known about public perceptions of diagnostic errors. A study published in BMJ Quality & Safety in 2014 estimated that diagnostic errors affect at least 5% of US adults (or 12 million) per year. We sought to explore online public reactions to media reports on the reported frequency of diagnostic errors in the US adult population. Methods We searched the World Wide Web for any news article reporting findings from the study. We then gathered all the online comments made in response to the news articles to evaluate public reaction to the newly reported diagnostic error frequency (n=241). Two coders conducted content analyses of the comments and an experienced qualitative researcher resolved differences. Results Overall, there were few comments made regarding the frequency of diagnostic errors. However, in response to the media coverage, 44 commenters shared personal experiences of diagnostic errors. Additionally, commentary centered on diagnosis-related quality of care as affected by two emergent categories: (1) US health care providers (n=79; 63 commenters) and (2) US health care reform-related policies, most commonly the Affordable Care Act (ACA) and insurance/reimbursement issues (n=62; 47 commenters). Conclusion The public appears to have substantial concerns about the impact of the ACA and other reform initiatives on the diagnosis-related quality of care. However, policy discussions on diagnostic errors are largely absent from the current national conversation on improving quality and safety. Because outpatient diagnostic errors have emerged as a major safety concern, researchers and policymakers should consider evaluating the effects of policy and practice changes on diagnostic accuracy. PMID:27347474
Bit error rate performance of pi/4-DQPSK in a frequency-selective fast Rayleigh fading channel
NASA Technical Reports Server (NTRS)
Liu, Chia-Liang; Feher, Kamilo
1991-01-01
The bit error rate (BER) performance of pi/4-differential quadrature phase shift keying (DQPSK) modems in cellular mobile communication systems is derived and analyzed. The system is modeled as a frequency-selective fast Rayleigh fading channel corrupted by additive white Gaussian noise (AWGN) and co-channel interference (CCI). The probability density function of the phase difference between two consecutive symbols of M-ary differential phase shift keying (DPSK) signals is first derived. In M-ary DPSK systems, the information is completely contained in this phase difference. For pi/4-DQPSK, the BER is derived in a closed form and calculated directly. Numerical results show that for the 24 kBd (48 kb/s) pi/4-DQPSK operated at a carrier frequency of 850 MHz and C/I less than 20 dB, the BER will be dominated by CCI if the vehicular speed is below 100 mi/h. In this derivation, frequency-selective fading is modeled by two independent Rayleigh signal paths. Only one co-channel is assumed in this derivation. The results obtained are also shown to be valid for discriminator detection of M-ary DPSK signals.
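The differential encoding and detection analyzed above can be simulated directly. A flat-channel AWGN-only sketch (no Rayleigh fading or co-channel interference, unlike the paper's model), with an assumed Gray mapping and Eb/N0 chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
n_sym, ebn0_db = 20000, 12.0

# Gray-mapped phase increments for pi/4-DQPSK: 2 bits -> odd multiple of pi/4.
inc = {(0, 0): np.pi / 4, (0, 1): 3 * np.pi / 4,
       (1, 1): -3 * np.pi / 4, (1, 0): -np.pi / 4}
bits = rng.integers(0, 2, size=(n_sym, 2))
dphi = np.array([inc[tuple(b)] for b in bits])
tx = np.exp(1j * np.cumsum(dphi))            # differential encoding

# AWGN at the given Eb/N0 (2 bits per symbol, unit symbol energy).
esn0 = 2 * 10 ** (ebn0_db / 10)
noise = (rng.standard_normal(n_sym) + 1j * rng.standard_normal(n_sym))
rx = tx + noise * np.sqrt(1 / (2 * esn0))

# Differential detection: the information is in the phase difference
# between consecutive received symbols (reference phase 0 for the first).
prev = np.concatenate(([1.0 + 0j], rx[:-1]))
dhat = np.angle(rx * np.conj(prev))
targets = np.array([np.pi / 4, 3 * np.pi / 4, -3 * np.pi / 4, -np.pi / 4])
labels = [(0, 0), (0, 1), (1, 1), (1, 0)]
diff = np.angle(np.exp(1j * (dhat[:, None] - targets[None, :])))
decided = np.argmin(np.abs(diff), axis=1)
rx_bits = np.array([labels[k] for k in decided])
ber = np.mean(rx_bits != bits)
print(ber)
```

Extending the sketch toward the paper's setting would mean multiplying tx by a time-varying Rayleigh gain per path and adding an interferer at the chosen C/I before detection.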
Efficient simulation for fixed-receiver bistatic SAR with time and frequency synchronization errors
NASA Astrophysics Data System (ADS)
Yan, Feifei; Chang, Wenge; Li, Xiangyang
2015-12-01
Raw signal simulation is a useful tool for synthetic aperture radar (SAR) system design, mission planning, processing algorithm testing, and inversion algorithm design. Time and frequency synchronization is the key technique of bistatic SAR (BiSAR) system, and raw data simulation is an effective tool for verifying the time and frequency synchronization techniques. According to the two-dimensional (2-D) frequency spectrum of fixed-receiver BiSAR, a rapid raw data simulation approach with time and frequency synchronization errors is proposed in this paper. Through 2-D inverse Stolt transform in 2-D frequency domain and phase compensation in range-Doppler frequency domain, this method can significantly improve the efficiency of scene raw data simulation. Simulation results of point targets and extended scene are presented to validate the feasibility and efficiency of the proposed simulation approach.
Correction of mid-spatial-frequency errors by smoothing in spin motion for CCOS
NASA Astrophysics Data System (ADS)
Zhang, Yizhong; Wei, Chaoyang; Shao, Jianda; Xu, Xueke; Liu, Shijie; Hu, Chen; Zhang, Haichao; Gu, Haojin
2015-08-01
Smoothing is a convenient and efficient way to correct mid-spatial-frequency errors, and quantifying the smoothing effect allows efficiency improvements in finishing precision optics. A series of spin-motion experiments is performed to study how smoothing corrects mid-spatial-frequency errors: some use the same pitch tool at different spinning speeds, and others use different tools at the same spinning speed. Shu's model is introduced and improved to describe and compare the smoothing efficiency across spinning speeds and tools. The experimental results show that the mid-spatial-frequency errors on the initial surface were nearly smoothed out after the spin-motion process, and the number of smoothing passes can be estimated by the model before processing. The method was also applied to smooth an aspherical component that had an obvious mid-spatial-frequency error after magnetorheological finishing. As a result, a high-precision aspheric optical component was obtained with PV = 0.1λ and RMS = 0.01λ.
Random Numbers Demonstrate the Frequency of Type I Errors: Three Spreadsheets for Class Instruction
ERIC Educational Resources Information Center
Duffy, Sean
2010-01-01
This paper describes three spreadsheet exercises demonstrating the nature and frequency of type I errors using random number generation. The exercises are designed specifically to address issues related to testing multiple relations using correlation (Demonstration I), t tests varying in sample size (Demonstration II) and multiple comparisons…
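The spreadsheet demonstrations translate directly into a few lines of Python: repeatedly t-testing two samples drawn from the same population makes every rejection a type I error by construction, and the long-run rejection rate settles near the nominal alpha. A minimal sketch with assumed sample sizes:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_tests, alpha = 2000, 0.05

# Each "study" compares two samples from the SAME normal population,
# so any rejection is a type I error by construction.
false_pos = 0
for _ in range(n_tests):
    a = rng.standard_normal(30)
    b = rng.standard_normal(30)
    if stats.ttest_ind(a, b).pvalue < alpha:
        false_pos += 1

rate = false_pos / n_tests
print(rate)   # hovers near the nominal alpha = 0.05
```

Running many such tests also illustrates the multiple-comparisons point: with 2000 tests at alpha = 0.05, roughly a hundred "significant" results appear from pure noise.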
NASA Technical Reports Server (NTRS)
Tsaoussi, Lucia S.; Koblinsky, Chester J.
1994-01-01
In order to facilitate the use of satellite-derived sea surface topography and velocity in oceanographic models, methodology is presented for deriving the total error covariance and its geographic distribution from TOPEX/POSEIDON measurements. The model is formulated using a parametric fit to the altimeter range observations. The topography and velocity are modeled with spherical harmonic expansions whose coefficients are found through optimal adjustment to the altimeter range residuals using Bayesian statistics. All other parameters, including the orbit, geoid, surface models, and range corrections, are provided as unadjusted parameters. The maximum likelihood estimates and errors are derived from the probability density function of the altimeter range residuals conditioned with a priori information. Estimates of model errors for the unadjusted parameters are obtained from the TOPEX/POSEIDON postlaunch verification results and the error covariances for the orbit and the geoid, except for the ocean tides. The error in the ocean tides is modeled, first, as the difference between two global tide models and, second, as the correction to the present tide model, the correction derived from the TOPEX/POSEIDON data. A formal error covariance propagation scheme is used to derive the total error. Our global total error estimate for the TOPEX/POSEIDON topography relative to the geoid for one 10-day period is found to be 11 cm root mean square (RMS). When the error in the geoid is removed, thereby providing an estimate of the time-dependent error, the uncertainty in the topography is 3.5 cm RMS. This level of accuracy is consistent with direct comparisons of TOPEX/POSEIDON altimeter heights with tide gauge measurements at 28 stations. In addition, the error correlation length scales are derived globally in both east-west and north-south directions, which should prove useful for data assimilation. The largest error correlation length scales are found in the tropics. Errors in
Derived flood frequency distributions considering individual event hydrograph shapes
NASA Astrophysics Data System (ADS)
Hassini, Sonia; Guo, Yiping
2017-04-01
Derived in this paper is the frequency distribution of the peak discharge rate of a random runoff event from a small urban catchment. The derivation follows the derived probability distribution procedure and incorporates a catchment rainfall-runoff model with approximating shapes for individual runoff event hydrographs. In the past, only simple triangular runoff event hydrograph shapes were used; in this study, approximating hydrograph shapes that better represent the range of possibilities are considered. The resulting closed-form mathematical equations are converted to the flood frequency distributions commonly required in urban stormwater management studies. The analytically determined peak discharge rates of different return periods for a wide range of hypothetical catchment conditions were compared to those determined from design storm modeling. The newly derived equations generated results that are closer to those from design storm modeling and provide a better alternative for use in urban stormwater management studies.
Error analysis for semi-analytic displacement derivatives with respect to shape and sizing variables
NASA Technical Reports Server (NTRS)
Fenyes, Peter A.; Lust, Robert V.
1989-01-01
Sensitivity analysis is fundamental to the solution of structural optimization problems. Consequently, much research has focused on the efficient computation of static displacement derivatives. As originally developed, these methods relied on analytical representations for the derivatives of the structural stiffness matrix K with respect to the design variables b_i. To extend these methods for use with complex finite element formulations and to facilitate their implementation in structural optimization programs using general finite element analysis codes, the semi-analytic method was developed. In this method, the matrix ∂K/∂b_i is approximated by finite differences. Although it is well known that the accuracy of the semi-analytic method depends on the finite difference parameter, recent work has suggested that more fundamental inaccuracies exist in the method when it is used for shape optimization. Another study has argued qualitatively that these errors are related to nonuniform errors in the stiffness matrix derivatives. The accuracy of the semi-analytic method is investigated here. A general framework was developed for the error analysis, and it is then shown analytically that the errors in the method are entirely accounted for by errors in ∂K/∂b_i. Furthermore, it is demonstrated that acceptable accuracy in the derivatives can be obtained through careful selection of the finite difference parameter.
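The semi-analytic idea, finite-differencing only the stiffness matrix inside an otherwise analytic sensitivity formula, can be sketched on a toy problem. The sketch below uses the eigenvalue sensitivity dλ/db = φᵀ(∂K/∂b)φ / (φᵀMφ) for a two-element rod whose first element length is the shape variable; the structure and step sizes are illustrative, not the paper's test cases:

```python
import numpy as np

EA, L1, L2 = 10.0, 1.0, 1.0
M = np.eye(2)

def K(l1):
    """Stiffness of a fixed-free two-element rod; l1 is a shape variable."""
    return EA * np.array([[1 / l1 + 1 / L2, -1 / L2], [-1 / L2, 1 / L2]])

# Lowest eigenpair of K(L1) phi = lambda M phi (M = I here).
lam, vecs = np.linalg.eigh(K(L1))
phi = vecs[:, 0]

# Analytic dK/dL1 gives the exact eigenvalue sensitivity.
dK_exact = EA * np.array([[-1 / L1**2, 0.0], [0.0, 0.0]])
dlam_exact = phi @ dK_exact @ phi / (phi @ M @ phi)

def dlam_semi(h):
    """Semi-analytic method: only K is finite-differenced."""
    dK_fd = (K(L1 + h) - K(L1)) / h
    return phi @ dK_fd @ phi / (phi @ M @ phi)

# The truncation error of the forward difference shrinks with step size,
# illustrating the sensitivity to the finite difference parameter.
err_big = abs(dlam_semi(1e-2) - dlam_exact)
err_small = abs(dlam_semi(1e-6) - dlam_exact)
print(err_big, err_small)
```

Because K depends nonlinearly on the shape variable (through 1/l1), the finite-difference step directly controls the error in ∂K/∂b and hence in the sensitivity, which is the effect the abstract analyzes.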
A statistical comparison of EEG time- and time-frequency domain representations of error processing.
Munneke, Gert-Jan; Nap, Tanja S; Schippers, Eveline E; Cohen, Michael X
2015-08-27
Successful behavior relies on error detection and subsequent remedial adjustment of behavior. Researchers have identified two electrophysiological signatures of error processing: the time-domain error-related negativity (ERN), and the time-frequency domain increased power in the delta/theta frequency bands (~2-8 Hz). The relationship between these two signatures is not entirely clear: on the one hand they occur after the same type of event and with similar latency, but on the other hand, the time-domain ERP component contains only phase-locked activity whereas the time-frequency response additionally contains non-phase-locked dynamics. Here we examined the ERN and error-related delta/theta activity in relation to each other, focusing on within-subject analyses that utilize single-trial data. Using logistic regression, we constructed three statistical models in which the accuracy of each trial was predicted from the ERN, delta/theta power, or both. We found that both the ERN and delta/theta power worked roughly equally well as predictors of single-trial accuracy (~70% accurate prediction). Furthermore, a model including both measures provided a stronger overall prediction compared to either model alone. Based on these findings two conclusions are drawn: first, the phase-locked part of the EEG signal appears to be roughly as predictive of single-trial response accuracy as the non-phase-locked part; second, the single-trial ERP and delta/theta power contain both overlapping and independent information.
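The single-trial logistic-regression comparison can be sketched on synthetic data; the effect sizes and the plain gradient-descent fit below are illustrative assumptions, not the study's pipeline:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 2000
# Synthetic single-trial features: error trials (y = 1) show a more negative
# ERN and higher theta power, each with an assumed 1-sigma separation.
y = rng.integers(0, 2, n)
ern = -1.0 * y + rng.standard_normal(n)
theta = 1.0 * y + rng.standard_normal(n)

def fit_logistic(X, y, lr=0.1, steps=2000):
    """Plain gradient-descent logistic regression (intercept included).
    Returns fitted weights and in-sample classification accuracy."""
    Xb = np.column_stack([np.ones(len(y)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    acc = np.mean((1 / (1 + np.exp(-Xb @ w)) > 0.5) == y)
    return w, acc

_, acc_ern = fit_logistic(ern[:, None], y)
_, acc_theta = fit_logistic(theta[:, None], y)
_, acc_both = fit_logistic(np.column_stack([ern, theta]), y)
print(round(acc_ern, 2), round(acc_theta, 2), round(acc_both, 2))
```

With these assumed effect sizes, each predictor alone classifies roughly 70% of trials and combining them improves prediction, mirroring the pattern reported in the abstract.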
NASA Astrophysics Data System (ADS)
Liu, Zhengying; Ren, Aihong; Zhang, Rongzhu; Liu, Jinglun; Sun, Nianchun; Chen, Jianguo
2010-10-01
When the poling period of periodically poled (PP) waveguides is subject to manufacturing errors (MEs), these errors affect the quasi-phase-matched (QPM) frequency-doubling efficiency (FDE). The impact of MEs on the FDE, and the dependence of the ME tolerance on the poling period Λ0 and on the waveguide length along the beam-propagation direction, are analyzed theoretically. The results show that the FDE decreases rapidly as the ME increases, and that the ME tolerance of PP waveguides is inversely proportional to the waveguide length and directly proportional to the poling period Λ0. These results provide a theoretical basis for choosing periodically poled crystal (PPC) materials and for controlling MEs.
Where is the effect of frequency in word production? Insights from aphasic picture naming errors
Kittredge, Audrey K.; Dell, Gary S.; Verkuilen, Jay; Schwartz, Myrna F.
2010-01-01
Some theories of lexical access in production locate the effect of lexical frequency at the retrieval of a word’s phonological characteristics, as opposed to the prior retrieval of a holistic representation of the word from its meaning. Yet there is evidence from both normal and aphasic individuals that frequency may influence both of these retrieval processes. This inconsistency is especially relevant in light of recent attempts to determine the representation of another lexical property, age of acquisition or AoA, whose effect is similar to that of frequency. To further explore the representations of these lexical variables in the word retrieval system, we performed hierarchical, multinomial logistic regression analyses of 50 aphasic patients’ picture-naming responses. While both log frequency and AoA had a significant influence on patient accuracy and led to fewer phonologically related errors and omissions, only log frequency had an effect on semantically related errors. These results provide evidence for a lexical access process sensitive to frequency at all stages, but with AoA having a more limited effect. PMID:18704797
Experiments and error analysis of laser ranging based on frequency-sweep polarization modulation
NASA Astrophysics Data System (ADS)
Gao, Shuyuan; Ji, Rongyi; Li, Yao; Cheng, Zhi; Zhou, Weihu
2016-11-01
Frequency-sweep polarization-modulation ranging uses a polarization-modulated laser beam to determine the distance to a target: the modulation frequency is swept, the frequency values at which the transmitted and received signals are in phase are recorded, and the distance is calculated from these values. This method achieves much higher theoretical accuracy than the phase-difference method because it avoids direct phase measurement. However, the actual accuracy of the system is limited, since additional phase retardation arises in the measurement optical path when optical elements are imperfectly fabricated or installed. In this paper, the working principle of the frequency-sweep polarization-modulation ranging method is analyzed, a transmission model of the polarization state in the light path is built based on Jones matrix theory, and the additional phase retardation of the λ/4 wave plate and the PBS, together with its impact on measurement performance, is analyzed. The theoretical results show that the wave plate's azimuth error dominates the limit on ranging accuracy. Based on the system design targets, element tolerances and an error-correction method for the system are proposed; a ranging system is built and ranging experiments are performed. The experimental results show that, with the proposed tolerances, the system satisfies the accuracy requirement. The present work provides guidance for further research on system design and error allocation.
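The core ranging relation is compact: the transmitted and received modulation agree in phase whenever the round trip 2d is an integer number of modulation wavelengths, so consecutive in-phase sweep frequencies are spaced by c/(2d). A one-line sketch with hypothetical frequencies (the polarization-modulation specifics are not modeled):

```python
# Distance from the spacing of consecutive in-phase modulation frequencies:
# transmitted and received modulation are in phase when 2*d*f/c is an integer,
# so adjacent in-phase frequencies are separated by c / (2*d).
C = 299_792_458.0  # speed of light, m/s

def distance_from_in_phase(f1, f2):
    """f1, f2: two consecutive in-phase modulation frequencies (Hz)."""
    return C / (2.0 * abs(f2 - f1))

# Example: a target at 15 m spaces in-phase frequencies ~10 MHz apart.
d_true = 15.0
delta_f = C / (2.0 * d_true)
f1 = 100e6                       # hypothetical in-phase frequency in the sweep
f2 = f1 + delta_f
print(distance_from_in_phase(f1, f2))   # recovers 15 m
```

Because only frequency values are measured, the accuracy of d hinges on how precisely the in-phase condition is detected, which is where the wave-plate and PBS retardation errors discussed above enter.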
Phoneme frequency effects in jargon aphasia: a phonological investigation of nonword errors.
Robson, Jo; Pring, Tim; Marshall, Jane; Chiat, Shula
2003-04-01
This study investigates the nonwords produced by a jargon speaker, LT. Despite presenting with severe neologistic jargon, LT can produce discrete responses in picture naming tasks thus allowing the properties of his jargon to be investigated. This ability was exploited in two naming tasks. The first showed that LT's nonword errors are related to their targets despite being generally unrecognizable. This relatedness appears to be a general property of his errors suggesting that they are produced by lexical rather than nonlexical means. The second naming task used a set of stimuli controlled for their phonemic content. This allowed an investigation of target phonology at the level of individual phonemes. Nonword responses maintained the English distribution of consonants and showed a significant relationship to the target phonologies. A strong influence of phoneme frequency was identified. High frequency consonants showed a pattern of frequent but indiscriminate use. Low frequency consonants were realised less often but were largely restricted to target related contexts rarely appearing as error phonology. The findings are explained within a lexical activation network with the proposal that the resting levels of phoneme nodes are frequency sensitive. Predictions for the recovery of jargon aphasia and suggestions for future investigations are made.
Error detection and correction for a multiple frequency quaternary phase shift keyed signal
NASA Astrophysics Data System (ADS)
Hopkins, Kevin S.
1989-06-01
A multiple frequency quaternary phased shift (MFQPSK) signaling system was developed and experimentally tested in a controlled environment. In order to insure that the quality of the received signal is such that information recovery is possible, error detection/correction (EDC) must be used. Various EDC coding schemes available are reviewed and their application to the MFQPSK signal system is analyzed. Hamming, Golay, Bose-Chaudhuri-Hocquenghem (BCH), Reed-Solomon (R-S) block codes as well as convolutional codes are presented and analyzed in the context of specific MFQPSK system parameters. A computer program was developed in order to compute bit error probabilities as a function of signal to noise ratio. Results demonstrate that various EDC schemes are suitable for the MFQPSK signal structure, and that significant performance improvements are possible with the use of certain error correction codes.
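The kind of bit-error computation described can be sketched for the simplest case: uncoded QPSK over AWGN has the textbook bit error probability Pb = Q(sqrt(2 Eb/N0)), and a hard-decision Hamming(7,4) decoder fails only when two or more bits of a block err. This is a generic illustration, not the thesis's MFQPSK-specific program:

```python
import math

def qpsk_ber(ebn0_db):
    """Uncoded, Gray-coded QPSK bit error probability: Q(sqrt(2*Eb/N0))."""
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    return 0.5 * math.erfc(math.sqrt(ebn0))

def hamming74_block_error(p):
    """Hard-decision Hamming(7,4): a block fails if 2+ of its 7 bits err."""
    return sum(math.comb(7, i) * p**i * (1 - p) ** (7 - i) for i in range(2, 8))
```

Sweeping `qpsk_ber` over Eb/N0 and feeding the result to `hamming74_block_error` reproduces the familiar coded-versus-uncoded comparison curves.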
NASA Astrophysics Data System (ADS)
Xu, Lichao; Wan, Yongjian; Liu, Haitao; Wang, Jia
2016-10-01
Smoothing is a convenient and efficient way to restrain middle-spatial-frequency (MSF) errors. Experience suggests that lap diameter, rotation speed, lap pressure, and the hardness of the pitch layer are important for correcting MSF errors. Therefore, nine groups of experiments were designed with the orthogonal method to confirm the significance of these parameters. Based on Zhang's model, PV (peak-to-valley) and RMS (root mean square) error versus processing cycles are analyzed before and after smoothing. At the same time, the smoothing limit and smoothing rate with which different parameter settings correct MSF errors are analyzed. Combined with the deviation analysis, we distinguish dominant from subordinate parameters and identify the optimal combination and behavior of the various parameters, so as to guide further research and fabrication.
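The two figures of merit tracked above can be computed directly from a sampled surface-error map. This sketch removes the mean before taking RMS, a common optical-metrology convention assumed here rather than specified by the abstract:

```python
import numpy as np

def pv_rms(surface_error):
    """PV and mean-removed RMS of a sampled surface-error map (any units, e.g. nm)."""
    e = np.asarray(surface_error, dtype=float)
    pv = e.max() - e.min()                       # peak-to-valley
    rms = np.sqrt(np.mean((e - e.mean()) ** 2))  # RMS about the mean
    return pv, rms
```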
NASA Astrophysics Data System (ADS)
Wang, Ben; Zhang, Yimin D.; Qin, Si; Amin, Moeness G.
2016-05-01
In this paper, we propose a nonstationary jammer suppression method for GPS receivers when the signals are sparsely sampled. Missing data samples induce noise-like artifacts in the time-frequency (TF) distribution and ambiguity function of the received signals, which lead to reduced capability and degraded performance in jammer signature estimation and excision. In the proposed method, a data-dependent TF kernel is utilized to mitigate the artifacts, and sparse reconstruction methods are then applied to obtain instantaneous frequency (IF) estimates of the jammers. In addition, an error tolerance on the IF estimate is applied to achieve robust jammer suppression in the presence of IF estimation inaccuracy.
Error Estimates Derived from the Data for Least-Squares Spline Fitting
Jerome Blair
2007-06-25
The use of least-squares fitting by cubic splines for the purpose of noise reduction in measured data is studied. Splines with variable mesh size are considered. The error, the difference between the input signal and its estimate, is divided into two sources: the R-error, which depends only on the noise and increases with decreasing mesh size, and the F-error, which depends only on the signal and decreases with decreasing mesh size. The estimation of both errors as a function of time is demonstrated. The R-error estimation requires knowledge of the statistics of the noise and uses well-known methods. The primary contribution of the paper is a method for estimating the F-error that requires no prior knowledge of the signal except that it has four derivatives. It is calculated from the difference between two different spline fits to the data and is illustrated with Monte Carlo simulations and with an example.
Driving errors of learner teens: frequency, nature and their association with practice.
Durbin, Dennis R; Mirman, Jessica H; Curry, Allison E; Wang, Wenli; Fisher Thiel, Megan C; Schultheis, Maria; Winston, Flaura K
2014-11-01
Despite demonstrating basic vehicle operations skills sufficient to pass a state licensing test, novice teen drivers demonstrate several deficits in tactical driving skills during the first several months of independent driving. Improving our knowledge of the types of errors made by teen permit holders early in the learning process would assist in the development of novel approaches to driver training and resources for parent supervision. The purpose of the current analysis was to describe driving performance errors made by teens during the permit period, and to determine if there were differences in the frequency and type of errors made by teens: (1) in comparison to licensed, safe, and experienced adult drivers; (2) by teen and parent-supervisor characteristics; and (3) by teen-reported quantity of practice driving. Data for this analysis were combined from two studies: (1) the control group of teens in a randomized clinical trial evaluating an intervention to improve parent-supervised practice driving (n=89 parent-teen dyads) and (2) a sample of 37 adult drivers (mean age 44.2 years), recruited and screened as an experienced and competent reference standard in a validation study of an on-road driving assessment for teens (tODA). Three measures of performance: drive termination (i.e., the assessment was discontinued for safety reasons), safety-relevant critical errors, and vehicle operation errors were evaluated at the approximate mid-point (12 weeks) and end (24 weeks) of the learner phase. Differences in driver performance were compared using the Wilcoxon rank sum test for continuous variables and Pearson's Chi-square test for categorical variables. 10.4% of teens had their early assessment terminated for safety reasons and 15.4% had their late assessment terminated, compared to no adults. These teens reported substantially fewer behind the wheel practice hours compared with teens that did not have their assessments terminated: tODAearly (9.0 vs. 20.0, p<0
Watts, Raymond G; Parsons, Kerry
2013-08-01
Chemotherapy medication errors occur in all cancer treatment programs. Such errors have potentially severe consequences: either enhanced toxicity or impaired disease control. Understanding and limiting chemotherapy errors is therefore imperative. A multi-disciplinary team developed and implemented a prospective pharmacy surveillance system for chemotherapy prescribing and administration errors from 2008 to 2011 at a Children's Oncology Group-affiliated pediatric cancer treatment program. Every chemotherapy order was prospectively reviewed for errors at the time of order submission. All chemotherapy errors were graded using standard error severity codes. Error rates were calculated by number of patient encounters and chemotherapy doses dispensed. Process improvement was utilized to develop techniques to minimize errors, with a goal of zero errors reaching the patient. Over the duration of the study, more than 20,000 chemotherapy orders were reviewed. Error rates were low (6/1,000 patient encounters and 3.9/1,000 medications dispensed) at the start of the project and were reduced by 50% (to 3/1,000 patient encounters and 1.8/1,000 medications dispensed) during the initiative. Error types included chemotherapy dosing or prescribing errors (42% of errors), treatment roadmap errors (26%), supportive care errors (15%), timing errors (12%), and pharmacy dispensing errors (4%). Ninety-two percent of errors were intercepted before reaching the patient. No error caused identified patient harm. Efforts to lower rates were successful but have not succeeded in preventing all errors. Chemotherapy medication errors are possibly unavoidable, but can be minimized by thoughtful, multispecialty review of current policies and procedures. Pediatr Blood Cancer 2013;60:1320-1324. © 2013 Wiley Periodicals, Inc.
A Reduced-frequency Approach for Calculating Dynamic Derivatives
NASA Technical Reports Server (NTRS)
Murman, Scott M.
2005-01-01
Computational Fluid Dynamics (CFD) is increasingly being used to both augment and create an aerodynamic performance database for aircraft configurations. This aerodynamic database contains the response of the aircraft to varying flight conditions and control surface deflections. The current work presents a novel method for calculating dynamic stability derivatives which reduces the computational cost over traditional unsteady CFD approaches by an order of magnitude, while still being applicable to arbitrarily complex geometries over a wide range of flow regimes. The primary thesis of this work is that the response to a forced motion can often be represented with a small, predictable number of frequency components without loss of accuracy. By resolving only those frequencies of interest, the computational effort is significantly reduced so that the routine calculation of dynamic derivatives becomes practical. The current implementation uses this non-linear, frequency-domain approach and extends the application to the 3-D Euler equations. The current work uses a Cartesian, embedded-boundary method to automate the generation of dynamic stability derivatives.
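The single-frequency idea can be illustrated by recovering the in-phase and out-of-phase components of a pitching-moment response to a synthetic forced oscillation: projecting the time history onto the forcing frequency alone yields the static and damping derivatives. The coefficient names and values are illustrative, not taken from the paper:

```python
import numpy as np

omega, a0 = 2.0, 0.05            # forcing frequency (rad/s) and amplitude, assumed
t = np.linspace(0.0, 4 * np.pi / omega, 2000)
cm_alpha_true, cm_damp_true = -1.2, -8.0  # illustrative "true" derivatives
cm = a0 * (cm_alpha_true * np.sin(omega * t) + cm_damp_true * np.cos(omega * t))

# Project the response onto the single forcing frequency: the in-phase part
# gives the static derivative, the out-of-phase part the damping derivative.
A = np.column_stack([a0 * np.sin(omega * t), a0 * np.cos(omega * t)])
cm_alpha, cm_damp = np.linalg.lstsq(A, cm, rcond=None)[0]
```

In a CFD setting the `cm` history would come from a forced-motion simulation; the projection step is what lets only the frequencies of interest be resolved.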
Fractional derivatives: Probability interpretation and frequency response of rational approximations
NASA Astrophysics Data System (ADS)
Tenreiro Machado, J. A.
2009-09-01
The theory of fractional calculus (FC) is a useful mathematical tool in many applied sciences. Nevertheless, only in recent decades have researchers been motivated to adopt FC concepts. There are several reasons for this state of affairs, namely the co-existence of different definitions and interpretations, and the necessity of approximation methods for the real-time calculation of fractional derivatives (FDs). In the first part, this paper introduces a probabilistic interpretation of the fractional derivative based on the Grünwald-Letnikov definition. In the second part, the calculation of fractional derivatives through Padé fraction approximations is analyzed. It is observed that the probabilistic interpretation and the frequency response of the fraction approximations of FDs reveal a clear correlation between the two concepts.
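A minimal sketch of the Grünwald-Letnikov definition referred to above: D^α f(t) ≈ h^(−α) Σ_k w_k f(t − kh) with w_k = (−1)^k C(α, k). For 0 < α < 1 the negated weights for k ≥ 1 are positive and sum to 1, which is the basis of the probabilistic interpretation (the derivative as an expectation over past samples):

```python
import math

def gl_weights(alpha, n):
    """w_k = (-1)^k * C(alpha, k), via the recursion w_k = w_{k-1} * (k-1-alpha)/k."""
    w = [1.0]
    for k in range(1, n):
        w.append(w[-1] * (k - 1 - alpha) / k)
    return w

def gl_derivative(f, t, alpha, h=1e-3, n=4000):
    """Approximate D^alpha f(t) as h^(-alpha) * sum_k w_k * f(t - k*h)."""
    return sum(wk * f(t - k * h) for k, wk in enumerate(gl_weights(alpha, n))) / h ** alpha
```

For the causal ramp f(t) = t (t > 0), D^(1/2) f at t = 1 should approach 2/sqrt(pi) ≈ 1.128, and for α = 1 the weights collapse to an ordinary backward difference.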
NASA Astrophysics Data System (ADS)
Chen, Yue; Cunningham, Gregory; Henderson, Michael
2016-09-01
This study aims to statistically estimate the errors in local magnetic field directions that are derived from electron directional distributions measured by Los Alamos National Laboratory geosynchronous (LANL GEO) satellites. First, by comparing derived and measured magnetic field directions along the GEO orbit to those calculated from three selected empirical global magnetic field models (including a static Olson and Pfitzer 1977 quiet magnetic field model, a simple dynamic Tsyganenko 1989 model, and a sophisticated dynamic Tsyganenko 2001 storm model), it is shown that the errors in both derived and modeled directions are at least comparable. Second, using a newly developed proxy method as well as comparing results from empirical models, we are able to provide for the first time circumstantial evidence showing that derived magnetic field directions should statistically match the real magnetic directions better, with averaged errors < ˜ 2°, than those from the three empirical models with averaged errors > ˜ 5°. In addition, our results suggest that the errors in derived magnetic field directions do not depend much on magnetospheric activity, in contrast to the empirical field models. Finally, as applications of the above conclusions, we show examples of electron pitch angle distributions observed by LANL GEO and also take the derived magnetic field directions as the real ones so as to test the performance of empirical field models along the GEO orbits, with results suggesting dependence on solar cycles as well as satellite locations. This study demonstrates the validity and value of the method that infers local magnetic field directions from particle spin-resolved distributions.
ERIC Educational Resources Information Center
Jones, Gary; Tamburelli, Marco; Watson, Sarah E.; Gobet, Fernand; Pine, Julian M.
2010-01-01
Purpose: Deficits in phonological working memory and deficits in phonological processing have both been considered potential explanatory factors in specific language impairment (SLI). Manipulations of the lexicality and phonotactic frequency of nonwords enable contrasting predictions to be derived from these hypotheses. Method: Eighteen typically…
The use of neural networks in identifying error sources in satellite-derived tropical SST estimates.
Lee, Yung-Hsiang; Ho, Chung-Ru; Su, Feng-Chun; Kuo, Nan-Jung; Cheng, Yu-Hsin
2011-01-01
A neural network model of data mining is used to identify error sources in satellite-derived tropical sea surface temperature (SST) estimates from thermal infrared sensors onboard the Geostationary Operational Environmental Satellite (GOES). Using the Back Propagation Network (BPN) algorithm, it is found that air temperature, relative humidity, and wind speed variation are the major factors causing the errors of GOES SST products in the tropical Pacific. The accuracy of the SST estimates is also improved by the model: the root mean square error (RMSE) for the daily SST estimate is reduced from 0.58 K to 0.38 K with a mean absolute percentage error (MAPE) of 1.03%, and for the hourly mean SST estimate the RMSE is reduced from 0.66 K to 0.44 K with a MAPE of 1.3%.
NASA Astrophysics Data System (ADS)
Birch, Gabriel C.; Griffin, John C.
2015-07-01
Numerous methods are available to measure the spatial frequency response (SFR) of an optical system. A recent change to the ISO 12233 photography resolution standard includes a sinusoidal Siemens star test target. We take the sinusoidal Siemens star proposed by the ISO 12233 standard, measure system SFR, and perform an analysis of errors induced by incorrectly identifying the center of a test target. We show a closed-form solution for the radial profile intensity measurement given an incorrectly determined center and describe how this error reduces the measured SFR of the system. Using the closed-form solution, we propose a two-step process by which test target centers are corrected and the measured SFR is restored to the nominal, correctly centered values.
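The center-error effect described can be reproduced numerically: sampling a sinusoidal star along a circle drawn about an offset center distorts the azimuth actually seen by the pattern, which reduces the Fourier amplitude at the spoke frequency. The star model, radius, and offset below are illustrative and do not reproduce the paper's closed-form solution:

```python
import numpy as np

N = 36                       # spoke pairs in the sinusoidal Siemens star (assumed)
r, dx = 100.0, 2.0           # profile radius and center error, in pixels (assumed)
phi = np.linspace(0.0, 2 * np.pi, 20000, endpoint=False)

def radial_profile(dx):
    # Azimuth of the sampled circle points as seen from the star's true center
    theta = np.arctan2(r * np.sin(phi), r * np.cos(phi) + dx)
    return 0.5 + 0.5 * np.sin(N * theta)

def modulation(p):
    # Fourier amplitude at the spoke frequency; an ideally centered star gives 0.5
    return 2.0 * abs(np.fft.rfft(p)[N]) / p.size

contrast_loss = modulation(radial_profile(dx)) / modulation(radial_profile(0.0))
```

The ratio `contrast_loss` drops below 1 for any nonzero center error, which is the SFR reduction the paper quantifies and then corrects.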
NASA Astrophysics Data System (ADS)
Yucelen, Tansel; De La Torre, Gerardo; Johnson, Eric N.
2014-11-01
Although adaptive control theory offers mathematical tools to achieve system performance without excessive reliance on dynamical system models, its applications to safety-critical systems can be limited due to poor transient performance and robustness. In this paper, we develop an adaptive control architecture to achieve stabilisation and command following of uncertain dynamical systems with improved transient performance. Our framework consists of a new reference system and an adaptive controller. The proposed reference system captures a desired closed-loop dynamical system behaviour modified by a mismatch term representing the high-frequency content between the uncertain dynamical system and this reference system, i.e., the system error. In particular, this mismatch term allows the frequency content of the system error dynamics to be limited, which is used to drive the adaptive controller. It is shown that this key feature of our framework yields fast adaptation without incurring high-frequency oscillations in the transient performance. We further show the effects of design parameters on the system performance, analyse closeness of the uncertain dynamical system to the unmodified (ideal) reference system, discuss robustness of the proposed approach with respect to time-varying uncertainties and disturbances, and make connections to gradient minimisation and classical control theory. A numerical example is provided to demonstrate the efficacy of the proposed architecture.
On the errors in molecular dipole moments derived from accurate diffraction data.
Coppens; Volkov; Abramov; Koritsanszky
1999-09-01
The error in the molecular dipole moment as derived from accurate X-ray diffraction data is shown to be origin dependent in the general case. It is independent of the choice of origin if an electroneutrality constraint is introduced, even when additional constraints are applied to the monopole populations. If a constraint is not applied to individual moieties, as is appropriate for multicomponent crystals or crystals containing molecular ions, the geometric center of the entity considered is a suitable choice of origin for the error treatment.
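The origin dependence stated above is easy to verify numerically: under an origin shift s, the dipole moment μ = Σ q_i r_i changes by (Σ q_i)·s, so it is origin-independent exactly when the charge set is electroneutral. The charges and positions below are illustrative:

```python
import numpy as np

def dipole(q, r, origin=(0.0, 0.0, 0.0)):
    """mu = sum_i q_i * (r_i - origin) for point charges q at positions r."""
    q = np.asarray(q, dtype=float)
    r = np.asarray(r, dtype=float)
    return (q[:, None] * (r - np.asarray(origin, dtype=float))).sum(axis=0)

q_neutral = [0.4, -0.7, 0.3]                      # net charge zero
pos = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
shift = (5.0, -2.0, 1.0)                          # arbitrary origin shift
```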
Error and bias in under-5 mortality estimates derived from birth histories with small sample sizes.
Dwyer-Lindgren, Laura; Gakidou, Emmanuela; Flaxman, Abraham; Wang, Haidong
2013-07-26
Estimates of under-5 mortality at the national level for countries without high-quality vital registration systems are routinely derived from birth history data in censuses and surveys. Subnational or stratified analyses of under-5 mortality could also be valuable, but the usefulness of under-5 mortality estimates derived from birth histories from relatively small samples of women is not known. We aim to assess the magnitude and direction of error that can be expected for estimates derived from birth histories with small samples of women using various analysis methods. We perform a data-based simulation study using Demographic and Health Surveys. Surveys are treated as populations with known under-5 mortality, and samples of women are drawn from each population to mimic surveys with small sample sizes. A variety of methods for analyzing complete birth histories and one method for analyzing summary birth histories are used on these samples, and the results are compared to corresponding true under-5 mortality. We quantify the expected magnitude and direction of error by calculating the mean error, mean relative error, mean absolute error, and mean absolute relative error. All methods are prone to high levels of error at the smallest sample size with no method performing better than 73% error on average when the sample contains 10 women. There is a high degree of variation in performance between the methods at each sample size, with methods that contain considerable pooling of information generally performing better overall. Additional stratified analyses suggest that performance varies for most methods according to the true level of mortality and the time prior to survey. This is particularly true of the summary birth history method as well as complete birth history methods that contain considerable pooling of information across time. Performance of all birth history analysis methods is extremely poor when used on very small samples of women, both in terms of
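The four summary statistics used in the simulation study can be written down directly. Here `est` and `true` stand for arrays of estimated and true under-5 mortality across simulated samples; the function name and dictionary keys are illustrative:

```python
import numpy as np

def error_metrics(est, true):
    """Mean error, mean relative error, MAE, and mean absolute relative error."""
    err = np.asarray(est, dtype=float) - np.asarray(true, dtype=float)
    rel = err / np.asarray(true, dtype=float)
    return {
        "mean_error": err.mean(),                      # signed bias
        "mean_relative_error": rel.mean(),             # signed, scale-free
        "mean_absolute_error": np.abs(err).mean(),     # error magnitude
        "mean_absolute_relative_error": np.abs(rel).mean(),
    }
```

The signed metrics expose direction of error (bias), while the absolute metrics expose magnitude; both views are needed because small-sample estimates can be wildly wrong yet unbiased on average.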
Martin, D.L.
1992-01-01
Water-leaving radiances and phytoplankton pigment concentrations are calculated from Coastal Zone Color Scanner (CZCS) total radiance measurements by separating atmospheric Rayleigh and aerosol radiances from the total radiance signal measured at the satellite. Multiple scattering interactions between Rayleigh and aerosol components together with other meteorologically-moderated radiances cause systematic errors in calculated water-leaving radiances and produce errors in retrieved phytoplankton pigment concentrations. This thesis developed techniques which minimize the effects of these systematic errors in Level IIA CZCS imagery. Results of previous radiative transfer modeling by Gordon and Castano are extended to predict the pixel-specific magnitude of systematic errors caused by Rayleigh-aerosol multiple scattering interactions. CZCS orbital passes in which the ocean is viewed through a modeled, physically realistic atmosphere are simulated mathematically and radiance-retrieval errors are calculated for a range of aerosol optical depths. Pixels which exceed an error threshold in the simulated CZCS image are rejected in a corresponding actual image. Meteorological phenomena also cause artifactual errors in CZCS-derived phytoplankton pigment concentration imagery. Unless data contaminated with these effects are masked and excluded from analysis, they will be interpreted as containing valid biological information and will contribute significantly to erroneous estimates of phytoplankton temporal and spatial variability. A method is developed which minimizes these errors through a sequence of quality-control procedures including the calculation of variable cloud-threshold radiances, the computation of the extent of electronic overshoot from bright reflectors, and the imposition of a buffer zone around clouds to exclude contaminated data.
Wind Power Forecasting Error Frequency Analyses for Operational Power System Studies: Preprint
Florita, A.; Hodge, B. M.; Milligan, M.
2012-08-01
The examination of wind power forecasting errors is crucial for optimal unit commitment and economic dispatch of power systems with significant wind power penetrations. This scheduling process includes both renewable and nonrenewable generators, and the incorporation of wind power forecasts will become increasingly important as wind fleets constitute a larger portion of generation portfolios. This research considers the Western Wind and Solar Integration Study database of wind power forecasts and numerical actualizations. This database comprises more than 30,000 locations spread over the western United States, with a total wind power capacity of 960 GW. Error analyses for individual sites and for specific balancing areas are performed using the database, quantifying the fit to theoretical distributions through goodness-of-fit metrics. Insights into wind-power forecasting error distributions are established for various levels of temporal and spatial resolution, contrasts made among the frequency distribution alternatives, and recommendations put forth for harnessing the results. Empirical data are used to produce more realistic site-level forecasts than previously employed, such that higher resolution operational studies are possible. This research feeds into a larger work of renewable integration through the links wind power forecasting has with various operational issues, such as stochastic unit commitment and flexible reserve level determination.
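The distribution-fitting step can be sketched by scoring candidate frequency distributions for forecast errors by log-likelihood. The Laplace-versus-normal comparison below is a common choice for heavy-tailed wind forecast errors, used here as a simple goodness-of-fit proxy rather than the study's own metrics:

```python
import numpy as np

rng = np.random.default_rng(1)
errors = rng.laplace(0.0, 0.05, 10000)   # synthetic heavy-tailed forecast errors

def loglik_normal(x):
    """Gaussian log-likelihood at the maximum-likelihood mean and std."""
    mu, sd = x.mean(), x.std()
    return np.sum(-0.5 * np.log(2 * np.pi * sd**2) - (x - mu) ** 2 / (2 * sd**2))

def loglik_laplace(x):
    """Laplace log-likelihood at the ML location (median) and scale."""
    mu = np.median(x)
    b = np.abs(x - mu).mean()
    return np.sum(-np.log(2 * b) - np.abs(x - mu) / b)

best = "laplace" if loglik_laplace(errors) > loglik_normal(errors) else "normal"
```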
Error Probability of MRC in Frequency Selective Nakagami Fading in the Presence of CCI and ACI
NASA Astrophysics Data System (ADS)
Rahman, Mohammad Azizur; Sum, Chin-Sean; Funada, Ryuhei; Sasaki, Shigenobu; Baykas, Tuncer; Wang, Junyi; Harada, Hiroshi; Kato, Shuzo
An exact expression for the error rate is developed for maximal ratio combining (MRC) in an independent but not necessarily identically distributed frequency-selective Nakagami fading channel, taking into account inter-symbol, co-channel, and adjacent-channel interference (ISI, CCI, and ACI, respectively). The characteristic function (CF) method is adopted. While accurate analyses of MRC performance in frequency-selective channels accounting for ISI (and CCI) are scarce, such analysis for ACI has not been addressed at all. The general analysis presented in this paper solves a problem of past and present interest that has so far been studied either approximately or in simulations. The exact method also yields an approximate error rate expression based on a Gaussian approximation (GA) of the interferences. It is shown that, especially when the channel is lightly faded, has fewer multipath components, and has a decaying delay profile, the GA may be substantially inaccurate at high signal-to-noise ratio. However, the exact results also reveal an important finding: there is a range of parameters where the simpler GA is reasonably accurate, so the more involved exact expression is not always needed.
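As a point of reference for the exact-versus-GA comparison, the standard MGF-based average BER of BPSK with L-branch MRC in independent Nakagami-m fading (interference-free, so only a special case of the paper's setup) can be evaluated by a one-dimensional integral:

```python
import math

def mrc_ber_nakagami(branch_snrs, m=1.0, steps=2000):
    """P_b = (1/pi) * int_0^{pi/2} prod_l [m sin^2(t) / (m sin^2(t) + g_l)]^m dt,
    evaluated by the midpoint rule; branch_snrs are per-branch average SNRs g_l."""
    width = (math.pi / 2) / steps
    total = 0.0
    for i in range(steps):
        s2 = math.sin((i + 0.5) * width) ** 2
        prod = 1.0
        for g in branch_snrs:
            prod *= (m * s2 / (m * s2 + g)) ** m
        total += prod
    return total * width / math.pi
```

For m = 1 (Rayleigh) and a single branch this reduces to the textbook closed form 0.5 * (1 - sqrt(g / (1 + g))), which makes a convenient sanity check.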
Ionospheric error contribution to GNSS single-frequency navigation at the 2014 solar maximum
NASA Astrophysics Data System (ADS)
Orus Perez, Raul
2017-04-01
For single-frequency users of the global satellite navigation system (GNSS), one of the main error contributors is the ionospheric delay, which impacts the received signals. As is well-known, GPS and Galileo transmit global models to correct the ionospheric delay, while the international GNSS service (IGS) computes precise post-process global ionospheric maps (GIM) that are considered reference ionospheres. Moreover, accurate ionospheric maps have been recently introduced, which allow for the fast convergence of the real-time precise point position (PPP) globally. Therefore, testing of the ionospheric models is a key issue for code-based single-frequency users, which constitute the main user segment. To this end, the testing proposed in this paper is straightforward and uses the PPP modeling applied to single- and dual-frequency code observations worldwide for 2014. The usage of PPP modeling allows us to quantify—for dual-frequency users—the degradation of the navigation solutions caused by noise and multipath with respect to the different ionospheric modeling solutions, and allows us, in turn, to obtain an independent assessment of the ionospheric models. Compared to the dual-frequency solutions, the GPS and Galileo ionospheric models present worse global performance, with horizontal root mean square (RMS) differences of 1.04 and 0.49 m and vertical RMS differences of 0.83 and 0.40 m, respectively. While very precise global ionospheric models can improve the dual-frequency solution globally, resulting in a horizontal RMS difference of 0.60 m and a vertical RMS difference of 0.74 m, they exhibit a strong dependence on the geographical location and ionospheric activity.
NASA Astrophysics Data System (ADS)
Nakamura, Satoshi; Goto, Hayato; Kujiraoka, Mamiko; Ichimura, Kouichi
2016-12-01
We propose a scheme for frequency-domain quantum computation (FDQC) in which the errors due to crosstalk are suppressed using extra physical systems coupled to a cavity. FDQC is a promising method to realize large-scale quantum computation, but crosstalk is a major problem. When physical systems employed as qubits satisfy specific resonance conditions, gate errors due to crosstalk increase. In our scheme, the errors are suppressed by controlling the resonance conditions using extra physical systems.
NASA Technical Reports Server (NTRS)
Chang, Alfred T. C.; Chiu, Long S.; Wilheit, Thomas T.
1993-01-01
Global averages and random errors associated with the monthly oceanic rain rates derived from the Special Sensor Microwave/Imager (SSM/I) data using the technique developed by Wilheit et al. (1991) are computed. Accounting for the beam-filling bias, a global annual average rain rate of 1.26 m is computed. The error estimation scheme is based on the existence of independent (morning and afternoon) estimates of the monthly mean. Calculations show overall random errors of about 50-60 percent for each 5 deg x 5 deg box. The results are insensitive to different sampling strategy (odd and even days of the month). Comparison of the SSM/I estimates with raingage data collected at the Pacific atoll stations showed a low bias of about 8 percent, a correlation of 0.7, and an rms difference of 55 percent.
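The error-estimation idea described, two independent estimates of the same monthly mean, can be sketched directly: if morning and afternoon estimates carry independent random errors of equal size, Var(am − pm) = 2·Var(single), so the per-estimate error is read off their difference. The numbers below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
true_rate = 5.0                               # monthly mean rain rate, illustrative
am = true_rate + rng.normal(0.0, 1.0, 5000)   # independent "morning" estimates
pm = true_rate + rng.normal(0.0, 1.0, 5000)   # independent "afternoon" estimates

# Var(am - pm) = 2 * Var(single), so divide the difference's std by sqrt(2)
sigma_single = np.std(am - pm) / np.sqrt(2.0)
monthly_mean = 0.5 * (am + pm)                # combined estimate, lower variance
```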
NASA Astrophysics Data System (ADS)
Kilifarska, N. A.
Several models describe the spatial distribution of the greatest frequency yielding reflection from the F2 ionospheric layer (foF2). However, how the models' errors are distributed over the globe, and how they depend on season, solar activity, etc., has been unknown until now. The aim of the present paper is therefore to compare the accuracy of the CCIR and URSI models and a newly created theoretical model in describing the latitudinal and longitudinal variation of the mid-latitude maximum electron density. A comparison between the above-mentioned models and all vertical-incidence (VI) data available from the Boulder data bank (between 35 deg and 70 deg) has been made. Data for three whole years with different solar activity - 1976 (F_10.7 = 73.6), 1981 (F_10.7 = 20.6), 1983 (F_10.7 = 119.6) - have been compared. The final results show that: 1. the areas with the greatest and smallest errors depend on UT, season, and solar activity; 2. the error distributions of the CCIR and URSI models are very similar to each other and do not coincide with that of the theoretical model. The latter result indicates that the theoretical model, described briefly below, may be a real alternative to the empirical CCIR and URSI models. The different spatial distributions of the models' errors give users a chance to choose the most appropriate model, depending on their needs. Taking into account that the theoretical model has equal accuracy in regions with many ionosonde stations and in those without any, this result shows that our model can be used to improve the global mapping of the mid-latitude ionosphere. Moreover, if real values of the input aeronomical parameters (neutral composition, temperatures, and winds) are used, it may be expected that this theoretical model can be applied for real-time or near-real-time mapping of the main ionospheric parameters (foF2 and hmF2).
ERIC Educational Resources Information Center
Ramsey, Robert J.; Frank, James
2007-01-01
Drawing on a sample of 798 Ohio criminal justice professionals (police, prosecutors, defense attorneys, judges), the authors examine respondents' perceptions regarding the frequency of system errors (i.e., professional error and misconduct suggested by previous research to be associated with wrongful conviction), and wrongful felony conviction.…
Estimates of ocean forecast error covariance derived from Hessian Singular Vectors
NASA Astrophysics Data System (ADS)
Smith, Kevin D.; Moore, Andrew M.; Arango, Hernan G.
2015-05-01
Experience in numerical weather prediction suggests that singular value decomposition (SVD) of a forecast can yield useful a priori information about the growth of forecast errors. It has been shown formally that SVD using the inverse of the expected analysis error covariance matrix to define the norm at initial time yields the Empirical Orthogonal Functions (EOFs) of the forecast error covariance matrix at the final time. Because of their connection to the 2nd derivative of the cost function in 4-dimensional variational (4D-Var) data assimilation, the initial time singular vectors defined in this way are often referred to as the Hessian Singular Vectors (HSVs). In the present study, estimates of ocean forecast errors and forecast error covariance were computed using SVD applied to a baroclinically unstable temperature front in a re-entrant channel using the Regional Ocean Modeling System (ROMS). An identical twin approach was used in which a truth run of the model was sampled to generate synthetic hydrographic observations that were then assimilated into the same model started from an incorrect initial condition using 4D-Var. The 4D-Var system was run sequentially, and forecasts were initialized from each ocean analysis. SVD was performed on the resulting forecasts to compute the HSVs and corresponding EOFs of the expected forecast error covariance matrix. In this study, a reduced rank approximation of the inverse expected analysis error covariance matrix was used to compute the HSVs and EOFs based on the Lanczos vectors computed during the 4D-Var minimization of the cost function. This has the advantage that the entire spectrum of HSVs and EOFs in the reduced space can be computed. The associated singular value spectrum is found to yield consistent and reliable estimates of forecast error variance in the space spanned by the EOFs. In addition, at long forecast lead times the resulting HSVs and companion EOFs are able to capture many features of the actual
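The stated link between singular vectors and the EOFs of the forecast error covariance can be illustrated generically; this sketch uses a random perturbation ensemble, not the ROMS/4D-Var Hessian machinery, and `n_state`/`n_ens` are arbitrary choices:

```python
import numpy as np

# Left singular vectors of a forecast-perturbation matrix are the EOFs of
# the sample forecast error covariance, and squared singular values give
# the error variance captured by each mode.
rng = np.random.default_rng(3)
n_state, n_ens = 50, 20
perturbations = rng.normal(size=(n_state, n_ens))  # forecast minus truth

u, s, vt = np.linalg.svd(perturbations / np.sqrt(n_ens - 1),
                         full_matrices=False)
cov = perturbations @ perturbations.T / (n_ens - 1)

# Variance in the leading EOF equals the leading eigenvalue of cov.
lead_var = s[0] ** 2
eigvals = np.linalg.eigvalsh(cov)  # ascending order
```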
Singh, Hardeep; Meyer, Ashley N D; Thomas, Eric J
2014-01-01
Background The frequency of outpatient diagnostic errors is challenging to determine due to varying error definitions and the need to review data across multiple providers and care settings over time. We estimated the frequency of diagnostic errors in the US adult population by synthesising data from three previous studies of clinic-based populations that used conceptually similar definitions of diagnostic error. Methods Data sources included two previous studies that used electronic triggers, or algorithms, to detect unusual patterns of return visits after an initial primary care visit or lack of follow-up of abnormal clinical findings related to colorectal cancer, both suggestive of diagnostic errors. A third study examined consecutive cases of lung cancer. In all three studies, diagnostic errors were confirmed through chart review and defined as missed opportunities to make a timely or correct diagnosis based on available evidence. We extrapolated the frequency of diagnostic error obtained from our studies to the US adult population, using the primary care study to estimate rates of diagnostic error for acute conditions (and exacerbations of existing conditions) and the two cancer studies to conservatively estimate rates of missed diagnosis of colorectal and lung cancer (as proxies for other serious chronic conditions). Results Combining estimates from the three studies yielded a rate of outpatient diagnostic errors of 5.08%, or approximately 12 million US adults every year. Based upon previous work, we estimate that about half of these errors could potentially be harmful. Conclusions Our population-based estimate suggests that diagnostic errors affect at least 1 in 20 US adults. This foundational evidence should encourage policymakers, healthcare organisations and researchers to start measuring and reducing diagnostic errors. PMID:24742777
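As a back-of-the-envelope check of the extrapolation step: the 5.08% rate is from the paper, while the adult population figure (roughly 237 million) is an assumption chosen to be consistent with the reported "approximately 12 million":

```python
# Reported outpatient diagnostic error rate applied to an assumed
# US adult population.
error_rate = 0.0508
us_adults = 237_000_000

affected = error_rate * us_adults
print(f"{affected / 1e6:.1f} million adults per year")
```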
On the uncertainty of stream networks derived from elevation data: the error propagation approach
NASA Astrophysics Data System (ADS)
Hengl, T.; Heuvelink, G. B. M.; van Loon, E. E.
2010-01-01
DEM error propagation methodology is extended to the derivation of vector-based objects (stream networks) using geostatistical simulations. First, point-sampled elevations are used to fit a variogram model. Next, 100 DEM realizations are generated using conditional sequential Gaussian simulation; the stream network map is extracted for each of these realizations, and the collection of stream networks is analyzed to quantify the error propagation. At each grid cell, the probability of the occurrence of a stream and the propagated error are estimated. The method is illustrated using two small data sets: Baranja hill (30 m grid cell size; 16 512 pixels; 6367 sampled elevations) and Zlatibor (30 m grid cell size; 15 000 pixels; 2051 sampled elevations). All computations are run in R, the open-source software for statistical computing: the geoR package is used to fit the variogram, the gstat package is used to run sequential Gaussian simulation, and streams are extracted using the open-source GIS SAGA via the RSAGA library. The resulting stream error map (information entropy of a Bernoulli trial) clearly depicts areas where the extracted stream network is least precise - usually areas of low local relief that are slightly concave. In both cases, significant parts of the study area (17.3% for Baranja Hill; 6.2% for Zlatibor) show a high error (H>0.5) of locating streams. By correlating the propagated uncertainty of the derived stream network with various land-surface parameters, sampling of height measurements can be optimized so that delineated streams satisfy a required accuracy level. A remaining issue to be tackled is the computational burden of geostatistical simulations: this framework is at the moment limited to small-to-moderate data sets with several hundreds of points. Scripts and data sets used in this article are available on-line via the http://www.geomorphometry.org/ website and can be easily adopted/adjusted to any similar
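The Bernoulli-entropy error measure used for the stream maps is easy to reproduce; a minimal sketch with illustrative per-cell stream counts (not the Baranja hill or Zlatibor data):

```python
import numpy as np

# `hits` counts, per grid cell, how many of the 100 simulated DEM
# realizations produced a stream there.
n_real = 100
hits = np.array([0, 5, 30, 50, 70, 95, 100])
p = hits / n_real

# Information entropy of a Bernoulli trial, in bits; defined as 0
# where p is 0 or 1.
with np.errstate(divide="ignore", invalid="ignore"):
    h = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))
h = np.nan_to_num(h)

high_error = h > 0.5  # cells flagged as imprecise in the error maps
print(np.round(h, 2), high_error.sum())
```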
NASA Technical Reports Server (NTRS)
Kaufmann, D. C.
1976-01-01
The fine frequency setting of a cesium beam frequency standard is accomplished by adjusting the C field control with the appropriate Zeeman frequency applied to the harmonic generator. A novice operator in the field, even when using the correct Zeeman frequency input, may mistakenly set the C field to any one of seven major Beam I peaks (fingers) represented by the Ramsey curve. This can result in frequency offset errors of as much as 2.5 parts in 10^10. The effects of maladjustment are demonstrated, and suggestions on how to avoid the subtle traps associated with C field adjustments are discussed.
Low-Frequency Error Extraction and Compensation for Attitude Measurements from STECE Star Tracker.
Lai, Yuwang; Gu, Defeng; Liu, Junhong; Li, Wenping; Yi, Dongyun
2016-10-12
The low-frequency errors (LFE) of star trackers are the most penalizing errors for high-accuracy satellite attitude determination. Two test star trackers have been mounted on the Space Technology Experiment and Climate Exploration (STECE) satellite, a small satellite mission developed by China. To extract and compensate the LFE of the attitude measurements for the two test star trackers, a new approach combining Fourier analysis with the Vondrak filter method (FAVF) is proposed in this paper. Firstly, the LFE of the two test star trackers' attitude measurements are analyzed and extracted by the FAVF method. Remarkable orbital reproducibility features are found in both test star trackers' attitude measurements. Then, by using the reproducibility feature of the LFE, the two star trackers' LFE patterns are estimated effectively. Finally, based on the actual LFE pattern results, this paper presents a new LFE compensation strategy. The validity and effectiveness of the proposed LFE compensation algorithm are demonstrated by the significant improvement in the consistency between the two test star trackers. The root mean square (RMS) of the relative Euler angle residuals is reduced from [27.95'', 25.14'', 82.43''], 3σ, to [16.12'', 15.89'', 53.27''], 3σ.
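The idea of exploiting orbit-reproducible low-frequency content can be illustrated with a plain FFT low-pass on a synthetic residual; this is only a sketch of the extraction step, not the paper's full Fourier-plus-Vondrak (FAVF) method, and the period, amplitudes, and cutoff are arbitrary choices:

```python
import numpy as np

n = 1024
t = np.arange(n)
orbit_period = 256.0  # samples per orbit (illustrative)

# Synthetic attitude residual: an orbit-reproducible LFE plus white noise.
lfe_true = 20.0 * np.sin(2 * np.pi * t / orbit_period)
noise = np.random.default_rng(1).normal(0.0, 5.0, n)
residual = lfe_true + noise

# Keep only the slowest Fourier bins to isolate the LFE component.
spec = np.fft.rfft(residual)
cutoff = 8
spec[cutoff:] = 0.0
lfe_est = np.fft.irfft(spec, n)

recovery_rms = np.sqrt(np.mean((lfe_est - lfe_true) ** 2))
compensated_rms = np.sqrt(np.mean((residual - lfe_est) ** 2))
raw_rms = np.sqrt(np.mean(residual ** 2))
```

Subtracting `lfe_est` from the residual plays the role of the compensation step: the remaining scatter is close to the white-noise floor.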
Modeling work zone crash frequency by quantifying measurement errors in work zone length.
Yang, Hong; Ozbay, Kaan; Ozturk, Ozgur; Yildirimoglu, Mehmet
2013-06-01
Work zones are temporary traffic control zones that can potentially cause safety problems. Maintaining safety while implementing necessary changes on roadways is an important challenge that traffic engineers and researchers have to confront. In this study, the risk factors in work zone safety evaluation were identified through the estimation of a crash frequency (CF) model. Measurement errors in the explanatory variables of a CF model can lead to unreliable estimates of certain parameters. Among these, work zone length raises a major concern in this analysis because it may change as the construction schedule progresses, generally without being properly documented. This paper proposes an improved modeling and estimation approach that involves the use of a measurement error (ME) model integrated with the traditional negative binomial (NB) model. The proposed approach was compared with the traditional NB approach. Both models were estimated using a large dataset that consists of 60 work zones in New Jersey. Results showed that the proposed approach outperformed the traditional approach in terms of goodness-of-fit statistics. Moreover, it is shown that the use of the traditional NB approach in this context can lead to overestimation of the effect of work zone length on crash occurrence.
Schulz, Christian M; Burden, Amanda; Posner, Karen L; Mincer, Shawn L; Steadman, Randolph; Wagner, Klaus J; Domino, Karen B
2017-08-01
Situational awareness errors may play an important role in the genesis of patient harm. The authors examined closed anesthesia malpractice claims for death or brain damage to determine the frequency and type of situational awareness errors. Surgical and procedural anesthesia death and brain damage claims in the Anesthesia Closed Claims Project database were analyzed. Situational awareness error was defined as failure to perceive relevant clinical information, failure to comprehend the meaning of available information, or failure to project, anticipate, or plan. Patient and case characteristics, primary damaging events, and anesthesia payments in claims with situational awareness errors were compared to other death and brain damage claims from 2002 to 2013. Anesthesiologist situational awareness errors contributed to death or brain damage in 198 of 266 claims (74%). Respiratory system damaging events were more common in claims with situational awareness errors (56%) than in other claims (21%, P < 0.001). The most common specific respiratory events in error claims were inadequate oxygenation or ventilation (24%), difficult intubation (11%), and aspiration (10%). Payments were made in 85% of situational awareness error claims compared to 46% of other claims (P = 0.001), with no significant difference in payment size. Among the 198 claims with anesthesia situational awareness error, perception errors were most common (42%), whereas comprehension errors (29%) and projection errors (29%) were relatively less common. Situational awareness error definitions were operationalized for reliable application to real-world anesthesia cases. Situational awareness errors may have contributed to catastrophic outcomes in three quarters of recent anesthesia malpractice claims. Situational awareness errors resulting in death or brain damage remain prevalent causes of malpractice claims in the 21st century.
Lexical Frequency and Third-Graders' Stress Accuracy in Derived English Word Production
ERIC Educational Resources Information Center
Jarmulowicz, Linda; Taran, Valentina L.; Hay, Sarah E.
2008-01-01
This study examined the effects of lexical frequency on children's production of accurate primary stress in words derived with nonneutral English suffixes. Forty-four third-grade children participated in an elicited derived word task in which they produced high-frequency, low-frequency, and nonsense-derived words with stress-changing suffixes…
Fukui, Y; Matsubara, M; Akane, A; Hama, K; Matsubara, K; Takahashi, S
1985-01-01
The cause of discrepancies in results from different methods of carboxyhemoglobin (HbCO) analysis on blood from bodies of burn victims was investigated. Blood samples with 0, 50, and 100% carbon monoxide (CO) saturation were heated at various temperatures for various times and then analyzed. Carboxyhemoglobin content was determined by the fourth-derivative spectrophotometric method and compared with results from the usual two-wavelength method. For total hemoglobin measurement, the fourth-derivative technique and the cyanmethemoglobin method were used. Turbidity in blood samples, which occurred when samples were heated above 50 degrees C, affected the analysis. At about 70 degrees C, coagulation and hemoglobin degeneration occurred, increasing the errors in the determined values. The fourth-derivative technique, however, proved to be independent of the turbidity and would be useful for the analysis of blood without hemoglobin degeneration.
NASA Astrophysics Data System (ADS)
Higuchi, Masato; Vu, Thanh-Tung; Aketagawa, Masato
2016-11-01
The conventional method of measuring radial, axial, and angular spindle motion is complicated and requires a large space; a smaller instrument is preferable for accurate and practical measurement. A method of measuring spindle error motion using sinusoidal phase modulation and a concentric circle grating was described in the past. In that method, a concentric circle grating with a fine pitch is attached to the spindle. Three optical sensors are fixed under the grating and observe appropriate positions on it. Each optical sensor consists of a sinusoidally frequency-modulated semiconductor laser as the light source and two interferometers. One interferometer measures the axial spindle motion by detecting the interference fringe between the beam reflected from a fixed mirror and the 0th-order diffracted beam. The other interferometer measures the radial spindle motion by detecting the interference fringe between the ±2nd-order diffracted beams. With these optical sensors, three axial and three radial displacements of the grating can be measured, from which the axial, radial, and angular spindle motions are calculated concurrently. In a previous experiment, concurrent measurement of one axial and one radial spindle displacement at 4 rpm was described. In this paper, sinusoidal frequency modulation realized by modulating the injection current is used instead of sinusoidal phase modulation, which simplifies the instrument. Furthermore, concurrent measurement of the 5-axis (one axial, two radial, and two angular displacements) spindle motion at 4000 rpm is described.
Frequency domain analysis of errors in cross-correlations of ambient seismic noise
NASA Astrophysics Data System (ADS)
Liu, Xin; Ben-Zion, Yehuda; Zigone, Dimitri
2016-12-01
We analyse random errors (variances) in cross-correlations of ambient seismic noise in the frequency domain, which differs from previous time-domain methods. Extending previous theoretical results on the ensemble-averaged cross-spectrum, we estimate the confidence interval of the stacked cross-spectrum of a finite amount of data at each frequency, using non-overlapping windows of fixed length. The extended theory also connects amplitude and phase variances with the variance of each complex spectrum value. Analysis of synthetic stationary ambient noise is used to estimate the confidence interval of the stacked cross-spectrum obtained with different lengths of noise data corresponding to different numbers of evenly spaced windows of the same duration. This method allows estimating the signal-to-noise ratio (SNR) of noise cross-correlation in the frequency domain, without specifying the filter bandwidth or the signal/noise windows that are needed for time-domain SNR estimations. Based on synthetic ambient noise data, we also compare the probability distributions, causal-part amplitude, and SNR of the stacked cross-spectrum function using one-bit normalization or pre-whitening with those obtained without these pre-processing steps. Natural continuous noise records contain both ambient noise and small earthquakes that are inseparable from the noise with the existing pre-processing steps. Using probability distributions of random cross-spectrum values based on the theoretical results provides an effective way to exclude such small earthquakes, and additional data segments (outliers) contaminated by signals of different statistics (e.g. rain, cultural noise), from continuous noise waveforms. This technique is applied to constrain values and uncertainties of the amplitude and phase velocity of the stacked noise cross-spectrum at different frequencies, using data from southern California at both regional scale (~35 km) and a dense linear array (~20 m) across the plate-boundary faults. A block bootstrap resampling method
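The windowed stacking described above can be sketched as follows; the records are synthetic white noise with a shared coherent component, and the window count and length are arbitrary choices:

```python
import numpy as np

# Two synchronized records sharing a common component, split into
# non-overlapping windows; stack the per-window cross-spectra and
# estimate the per-frequency variance of the stacked estimate.
rng = np.random.default_rng(2)
win, n_win = 256, 40
common = rng.normal(0, 1, win * n_win)      # coherent "signal"
x = common + rng.normal(0, 1, win * n_win)  # station 1
y = common + rng.normal(0, 1, win * n_win)  # station 2

cross = []
for k in range(n_win):
    seg = slice(k * win, (k + 1) * win)
    X = np.fft.rfft(x[seg])
    Y = np.fft.rfft(y[seg])
    cross.append(X * np.conj(Y))
cross = np.array(cross)

stacked = cross.mean(axis=0)               # ensemble-averaged cross-spectrum
var_real = cross.real.var(axis=0) / n_win  # variance of the stacked estimate
```

The per-frequency variance is what a confidence interval (and hence a frequency-domain SNR) would be built from.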
On the uncertainty of stream networks derived from elevation data: the error propagation approach
NASA Astrophysics Data System (ADS)
Hengl, T.; Heuvelink, G. B. M.; van Loon, E. E.
2010-07-01
DEM error propagation methodology is extended to the derivation of vector-based objects (stream networks) using geostatistical simulations. First, point-sampled elevations are used to fit a variogram model. Next, 100 DEM realizations are generated using conditional sequential Gaussian simulation; the stream network map is extracted for each of these realizations, and the collection of stream networks is analyzed to quantify the error propagation. At each grid cell, the probability of the occurrence of a stream and the propagated error are estimated. The method is illustrated using two small data sets: Baranja hill (30 m grid cell size; 16 512 pixels; 6367 sampled elevations) and Zlatibor (30 m grid cell size; 15 000 pixels; 2051 sampled elevations). All computations are run in R, the open-source software for statistical computing: the geoR package is used to fit the variogram, the gstat package is used to run sequential Gaussian simulation, and streams are extracted using the open-source GIS SAGA via the RSAGA library. The resulting stream error map (information entropy of a Bernoulli trial) clearly depicts areas where the extracted stream network is least precise - usually areas of low local relief and slightly convex (0-10 difference from the mean value). In both cases, significant parts of the study area (17.3% for Baranja Hill; 6.2% for Zlatibor) show a high error (H>0.5) of locating streams. By correlating the propagated uncertainty of the derived stream network with various land-surface parameters, sampling of height measurements can be optimized so that delineated streams satisfy the required accuracy level. Such an error propagation tool should become standard functionality in any modern GIS. A remaining issue to be tackled is the computational burden of geostatistical simulations: this framework is at the moment limited to small data sets with several hundreds of points. Scripts and data sets used in this article are available on-line via the
Assessment of errors in Precipitable Water data derived from Global Navigation Satellite System observations
NASA Astrophysics Data System (ADS)
Hordyniec, Pawel; Bosy, Jaroslaw; Rohm, Witold
2015-07-01
Among the new remote sensing techniques, one of the most promising is GNSS meteorology, which provides continuous remote monitoring of tropospheric water vapor in all weather conditions with high temporal and spatial resolution. The Continuously Operating Reference Station (CORS) network together with the available meteorological instrumentation and models (we based our analysis on the ASG-EUPOS network in Poland) was scrutinized as a troposphere water vapor retrieval system. This paper shows a rigorous mathematical derivation of Precipitable Water (PW) errors based on the uncertainty propagation method, using all available data-source quality measures (meteorological sensor and model precisions, ZTD estimation error, interpolation discrepancies, and ZWD-to-PW conversion inaccuracies). We analyze both random and systematic errors introduced by indirect measurements and interpolation procedures, and hence estimate the PW system integrity capabilities. The results show that the systematic PW errors can stay under the half-millimeter level as long as pressure and temperature are measured at the observation site. Otherwise, i.e. without direct observations, numerical weather model fields (in this study we used the Coupled Ocean/Atmosphere Mesoscale Prediction System) serve as the most accurate source of data. The investigated empirical pressure and temperature models, such as GPT2, GPT, UNB3m, and Berg, introduced into the WV retrieval system combined bias and random errors exceeding the PW standard level of accuracy (3 mm according to the E-GVAP report). We also found that the pressure interpolation procedure introduces an over 0.5 hPa bias and a 1 hPa standard deviation into the system (important in Zenith Total Delay reduction) and hence has a negative impact on the WV estimation quality.
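The ZWD-to-PW conversion and its error propagation can be sketched to first order. The constants below are the commonly used Bevis et al. values, and the input uncertainties are illustrative assumptions, not the paper's budget:

```python
import numpy as np

rho_w = 1000.0      # water density, kg m^-3
R_v = 461.5         # water vapour gas constant, J kg^-1 K^-1
k2p = 0.221         # k2', K Pa^-1
k3 = 3.739e3        # K^2 Pa^-1

zwd = 0.150         # zenith wet delay, m (illustrative)
sigma_zwd = 0.005   # m, e.g. ZTD estimation + ZHD reduction error
tm = 270.0          # weighted mean temperature, K
sigma_tm = 3.0      # K

# Dimensionless conversion factor, typically ~0.15, and PW in mm.
pi_factor = 1e6 / (rho_w * R_v * (k3 / tm + k2p))
pw = pi_factor * zwd * 1000.0

# First-order (Gaussian) error propagation through both inputs.
dpw_dzwd = pi_factor * 1000.0
dpw_dtm = zwd * 1000.0 * pi_factor**2 * rho_w * R_v * k3 / tm**2 / 1e6
sigma_pw = np.hypot(dpw_dzwd * sigma_zwd, dpw_dtm * sigma_tm)
```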
Frequency, Types, and Potential Clinical Significance of Medication-Dispensing Errors
Bohand, Xavier; Simon, Laurent; Perrier, Eric; Mullot, Hélène; Lefeuvre, Leslie; Plotton, Christian
2009-01-01
INTRODUCTION AND OBJECTIVES: Many dispensing errors occur in the hospital, and these can endanger patients. The purpose of this study was to assess the rate of dispensing errors by a unit dose drug dispensing system, to categorize the most frequent types of errors, and to evaluate their potential clinical significance. METHODS: A prospective study using a direct observation method to detect medication-dispensing errors was used. From March 2007 to April 2007, “errors detected by pharmacists” and “errors detected by nurses” were recorded under six categories: unauthorized drug, incorrect form of drug, improper dose, omission, incorrect time, and deteriorated drug errors. The potential clinical significance of the “errors detected by nurses” was evaluated. RESULTS: Among the 734 filled medication cassettes, 179 errors were detected corresponding to a total of 7249 correctly fulfilled and omitted unit doses. An overall error rate of 2.5% was found. Errors detected by pharmacists and nurses represented 155 (86.6%) and 24 (13.4%) of the 179 errors, respectively. The most frequent types of errors were improper dose (n = 57, 31.8%) and omission (n = 54, 30.2%). Nearly 45% of the 24 errors detected by nurses had the potential to cause a significant (n = 7, 29.2%) or serious (n = 4, 16.6%) adverse drug event. CONCLUSIONS: Even if none of the errors reached the patients in this study, a 2.5% error rate indicates the need for improving the unit dose drug-dispensing system. Furthermore, it is almost certain that this study failed to detect some medication errors, further arguing for strategies to prevent their recurrence. PMID:19142545
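The reported overall error rate follows directly from the counts in the abstract:

```python
# 179 errors detected against 7249 correctly filled and omitted unit doses.
errors = 179
unit_doses = 7249
rate = errors / unit_doses
print(f"{rate:.1%}")  # about 2.5%
```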
Kramer, Emily B; Farabaugh, Philip J
2007-01-01
Estimates of missense error rates (misreading) during protein synthesis vary from 10⁻³ to 10⁻⁴ per codon. The experiments reporting these rates have measured several distinct errors using several methods and reporter systems. Variation in reported rates may reflect real differences in rates among the errors tested or in sensitivity of the reporter systems. To develop a more accurate understanding of the range of error rates, we developed a system to quantify the frequency of every possible misreading error at a defined codon in Escherichia coli. This system uses an essential lysine in the active site of firefly luciferase. Mutations in Lys529 result in up to a 1600-fold reduction in activity, but the phenotype varies with amino acid. We hypothesized that residual activity of some of the mutant genes might result from misreading of the mutant codons by tRNA(Lys)UUU, the cognate tRNA for the lysine codons AAA and AAG. Our data validate this hypothesis and reveal details about relative missense error rates of near-cognate codons. The error rates in E. coli do, in fact, vary widely. One source of variation is the effect of competition by cognate tRNAs for the mutant codons; higher error frequencies result from lower competition from low-abundance tRNAs. We also used the system to study the effect of ribosomal protein mutations known to affect error rates and the effect of error-inducing antibiotics, finding that they affect misreading on only a subset of near-cognate codons and that their effect may be less general than previously thought.
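To put per-codon rates of 10⁻³ to 10⁻⁴ in perspective, one can compute the chance that a protein of typical length contains at least one missense error, treating codons as independent trials (an illustrative simplification, not a claim from the paper):

```python
def p_at_least_one_error(per_codon_rate: float, n_codons: int) -> float:
    """Probability that a protein of n_codons contains >= 1 missense error,
    treating each codon as an independent Bernoulli trial."""
    return 1.0 - (1.0 - per_codon_rate) ** n_codons

# A 300-codon protein at the two ends of the reported range:
print(p_at_least_one_error(1e-3, 300))  # ~0.26
print(p_at_least_one_error(1e-4, 300))  # ~0.03
```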
Melnychuk, O.; Grassellino, A.; Romanenko, A.
2014-12-19
In this paper, we discuss error analysis for intrinsic quality factor (Q₀) and accelerating gradient (Eacc) measurements in superconducting radio frequency (SRF) resonators. The analysis is applicable for cavity performance tests that are routinely performed at SRF facilities worldwide. We review the sources of uncertainties along with the assumptions on their correlations and present uncertainty calculations with a more complete procedure for treatment of correlations than in previous publications [T. Powers, in Proceedings of the 12th Workshop on RF Superconductivity, SuP02 (Elsevier, 2005), pp. 24–27]. Applying this approach to cavity data collected at the Vertical Test Stand facility at Fermilab, we estimated total uncertainty for both Q₀ and Eacc to be at the level of approximately 4% for input coupler coupling parameter β₁ in the [0.5, 2.5] range. Above 2.5 (below 0.5), Q₀ uncertainty increases (decreases) with β₁, whereas Eacc uncertainty, in contrast with results in Powers [in Proceedings of the 12th Workshop on RF Superconductivity, SuP02 (Elsevier, 2005), pp. 24–27], is independent of β₁. Overall, our estimated Q₀ uncertainty is approximately half as large as that in Powers [in Proceedings of the 12th Workshop on RF Superconductivity, SuP02 (Elsevier, 2005), pp. 24–27].
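For fully uncorrelated error sources, relative uncertainties combine in quadrature; the paper's contribution is a fuller treatment of correlations, but the uncorrelated case gives a feel for how a ~4% total can arise from several percent-level terms. A minimal sketch (the component values below are hypothetical, not taken from the paper):

```python
import math

def quadrature(rel_uncertainties):
    """Combine independent relative uncertainties in quadrature."""
    return math.sqrt(sum(u * u for u in rel_uncertainties))

# Hypothetical percent-level contributions (e.g. RF power meters,
# cable calibration, frequency measurement):
components = [0.025, 0.02, 0.015, 0.01]
print(f"total relative uncertainty: {quadrature(components):.1%}")  # ~3.7%
```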
Nie, Xuqing; Li, Shengyi; Shi, Feng; Hu, Hao
2014-02-20
The smoothing effect of the rigid lap plays an important role in controlling midspatial frequency errors (MSFRs). At present, the pressure distribution between the polishing pad and processed surface is mainly calculated by Mehta's bridging model. However, this classic model does not work for the irregular MSFR. In this paper, a generalized numerical model based on the finite element method (FEM) is proposed to solve this problem. First, the smoothing polishing (SP) process is transformed into a 3D elastic structural FEM model, and the governing matrix equation is obtained. By virtue of the boundary conditions applied to the governing matrix equation, the nodal displacement vector and nodal force vector of the pad can be obtained, from which the pressure distribution can be extracted. In the partial-contact condition, an iterative method is needed. The algorithmic routine is shown, and the applicability of the generalized numerical model is discussed. Detailed simulations are given for the lap in contact with irregular surfaces of different morphologies. A well-designed SP experiment is conducted in our lab to verify the model. The small difference between the experimental data and the simulated results shows that the model is practicable. The generalized numerical model is applied to a Φ500 mm parabolic surface. The calculated result and measured data after the SP process have been compared, which indicates that the model established in this paper is an effective method to predict the SP process.
Gorbach, Christy; Blanton, Linda; Lukawski, Beverly A; Varkey, Alex C; Pitman, E Paige; Garey, Kevin W
2015-09-01
The frequency of and risk factors for medication errors by pharmacists during order verification in a tertiary care medical center were reviewed. This retrospective, secondary database study was conducted at a large tertiary care medical center in Houston, Texas. Inpatient and outpatient medication orders and medication errors recorded between July 1, 2011, and June 30, 2012, were reviewed. Independent variables assessed as risk factors for medication errors included workload (mean number of orders verified per pharmacist per shift), work environment (type of day, type of shift, and mean number of pharmacists per shift), and nonmodifiable characteristics of the pharmacist (type of pharmacy degree obtained, age, number of years practicing, and number of years at the institution). A total of 1,887,751 medication orders, 92 medication error events, and 50 pharmacists were included in the study. The overall error rate was 4.87 errors per 100,000 verified orders. An increasing medication error rate was associated with an increased number of orders verified per pharmacist (p = 0.007), the type of shift (p = 0.021), the type of day (p = 0.002), and the mean number of pharmacists per shift (p = 0.001). Pharmacist demographic variables were not associated with risk of error. The number of orders per shift was identified as a significant independent risk factor for medication errors (p = 0.019). An increase in the number of orders verified per shift was associated with an increased rate of pharmacist errors during order verification in a tertiary care medical center. Copyright © 2015 by the American Society of Health-System Pharmacists, Inc. All rights reserved.
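The headline rate in this abstract is a direct ratio of the two reported counts, expressed per 100,000 verified orders:

```python
orders = 1_887_751   # medication orders verified over the study year
errors = 92          # medication error events recorded

rate_per_100k = errors / orders * 100_000
print(f"{rate_per_100k:.2f} errors per 100,000 verified orders")  # 4.87
```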
NASA Astrophysics Data System (ADS)
Branciard, Cyril
2014-02-01
The quantification of the "measurement uncertainty" aspect of Heisenberg's uncertainty principle—that is, the study of trade-offs between accuracy and disturbance, or between accuracies in an approximate joint measurement on two incompatible observables—has regained a lot of interest recently. Several approaches have been proposed and debated. In this paper we consider Ozawa's definitions for inaccuracies (as root-mean-square errors) in approximate joint measurements, and study how these are constrained in different cases, whether one specifies certain properties of the approximations—namely their standard deviations and/or their bias—or not. Extending our previous work [C. Branciard, Proc. Natl. Acad. Sci. USA 110, 6742 (2013), 10.1073/pnas.1219331110], we derive error-trade-off relations, which we prove to be tight for pure states. We show explicitly how all previously known relations for Ozawa's inaccuracies follow from ours. While our relations are in general not tight for mixed states, we show how these can be strengthened and how tight relations can still be obtained in that case.
Error estimation for ORION baseline vector determination
NASA Technical Reports Server (NTRS)
Wu, S. C.
1980-01-01
Effects of error sources on Operational Radio Interferometry Observing Network (ORION) baseline vector determination are studied. Partial derivatives of delay observations with respect to each error source are formulated. Covariance analysis is performed to estimate the contribution of each error source to baseline vector error. System design parameters such as antenna sizes, system temperatures and provision for dual frequency operation are discussed.
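For independent error sources, the covariance analysis described in this abstract reduces to summing sensitivity-weighted variances: var(baseline) = Σᵢ (∂τ/∂pᵢ)² varᵢ. A minimal illustration (the partials and source variances below are hypothetical, not ORION values):

```python
# Linear covariance propagation for independent error sources:
#   var_total = sum_i (d tau / d p_i)^2 * var_i
partials = [1.0, 0.3, 0.05]           # hypothetical sensitivities (cm per unit error)
variances = [0.5**2, 1.0**2, 4.0**2]  # hypothetical source variances

var_total = sum(j * j * v for j, v in zip(partials, variances))
print(f"contributed baseline error (1-sigma): {var_total ** 0.5:.2f} cm")  # 0.62 cm
```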
Deriving Animal Behaviour from High-Frequency GPS: Tracking Cows in Open and Forested Habitat
de Weerd, Nelleke; van Langevelde, Frank; van Oeveren, Herman; Nolet, Bart A.; Kölzsch, Andrea; Prins, Herbert H. T.; de Boer, W. Fred
2015-01-01
The increasing spatiotemporal accuracy of Global Navigation Satellite Systems (GNSS) tracking systems opens the possibility to infer animal behaviour from tracking data. We studied the relationship between high-frequency GNSS data and behaviour, aimed at developing an easily interpretable classification method to infer behaviour from location data. Behavioural observations were carried out during tracking of cows (Bos Taurus) fitted with high-frequency GPS (Global Positioning System) receivers. Data were obtained in an open field and forested area, and movement metrics were calculated for 1 min, 12 s and 2 s intervals. We observed four behaviour types (Foraging, Lying, Standing and Walking). We subsequently used Classification and Regression Trees to classify the simultaneously obtained GPS data as these behaviour types, based on distances and turning angles between fixes. GPS data with a 1 min interval from the open field was classified correctly for more than 70% of the samples. Data from the 12 s and 2 s interval could not be classified successfully, emphasizing that the interval should be long enough for the behaviour to be defined by its characteristic movement metrics. Data obtained in the forested area were classified with a lower accuracy (57%) than the data from the open field, due to a larger positional error of GPS locations and differences in behavioural performance influenced by the habitat type. This demonstrates the importance of understanding the relationship between behaviour and movement metrics, derived from GNSS fixes at different frequencies and in different habitats, in order to successfully infer behaviour. When spatially accurate location data can be obtained, behaviour can be inferred from high-frequency GNSS fixes by calculating simple movement metrics and using easily interpretable decision trees. This allows for the combined study of animal behaviour and habitat use based on location data, and might make it possible to detect deviations
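The movement metrics used for classification here, distances and turning angles between consecutive fixes, are straightforward to compute from positions. A sketch on planar coordinates (the study itself worked from GPS fixes and fitted Classification and Regression Trees, which this does not reproduce):

```python
import math

def movement_metrics(fixes):
    """Step distances and turning angles (degrees) from a list of (x, y) fixes."""
    steps, turns = [], []
    for i in range(1, len(fixes)):
        (x0, y0), (x1, y1) = fixes[i - 1], fixes[i]
        steps.append(math.hypot(x1 - x0, y1 - y0))
        if i >= 2:
            h_prev = math.atan2(y0 - fixes[i - 2][1], x0 - fixes[i - 2][0])
            h_curr = math.atan2(y1 - y0, x1 - x0)
            # Turning angle wrapped to (-180, 180] degrees.
            turn = math.degrees(h_curr - h_prev)
            turns.append((turn + 180) % 360 - 180)
    return steps, turns

steps, turns = movement_metrics([(0, 0), (1, 0), (2, 0), (2, 1)])
print(steps)                            # three unit steps
print([round(t, 1) for t in turns])     # [0.0, 90.0]
```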
NASA Astrophysics Data System (ADS)
Maloney, Chris; Lormeau, Jean Pierre; Dumas, Paul
2016-07-01
Many astronomical sensing applications operate in low-light conditions; for these applications every photon counts. Controlling mid-spatial frequencies and surface roughness on astronomical optics is critical for mitigating scattering effects such as flare and energy loss. By improving these two frequency regimes, higher contrast images can be collected with improved efficiency. Classically, Magnetorheological Finishing (MRF) has offered an optical fabrication technique to correct low-order errors as well as quilting/print-through errors left over in light-weighted optics from conventional polishing techniques. MRF is a deterministic, sub-aperture polishing process that has been used to improve figure on an ever-expanding assortment of optical geometries, such as planos, spheres, on- and off-axis aspheres, primary mirrors and freeform optics. Precision optics are routinely manufactured by this technology with sizes ranging from 5-2,000 mm in diameter. MRF can be used for form corrections, turning a sphere into an asphere or freeform, but more commonly for figure corrections, achieving figure errors as low as 1 nm RMS with careful metrology setups. Recent advancements in MRF technology have improved the polishing performance expected for astronomical optics in low, mid and high spatial frequency regimes. Deterministic figure correction with MRF is compatible with most materials, including some recent examples on Silicon Carbide and RSA905 Aluminum. MRF also has the ability to produce 'perfectly bad' compensating surfaces, which may be used to compensate for measured or modeled optical deformation from sources such as gravity or mounting. In addition, recent advances in MRF technology allow for corrections of mid-spatial wavelengths as small as 1 mm simultaneously with form error correction. Efficient mid-spatial frequency corrections make use of optimized process conditions, including raster polishing in combination with a small tool size. Furthermore, a novel MRF
MacKay, Mark; Anderson, Collin; Boehme, Sabrina; Cash, Jared; Zobell, Jeffery
2016-04-01
The Institute for Safe Medication Practices has stated that parenteral nutrition (PN) is considered a high-risk medication and has the potential of causing harm. Three organizations--the American Society for Parenteral and Enteral Nutrition (A.S.P.E.N.), the American Society of Health-System Pharmacists, and the National Advisory Group--have published guidelines for ordering, transcribing, compounding and administering PN. These national organizations have published data on compliance with the guidelines and the risk of errors. The purpose of this article is to compare total compliance with the ordering, transcription, compounding, and administration guidelines, as well as the error rate, at a large pediatric institution. A computerized prescriber order entry (CPOE) program was developed that incorporates dosing with soft- and hard-stop recommendations while simultaneously eliminating the need for paper transcription. A CPOE team prioritized and identified issues, then developed solutions and integrated innovative CPOE and automated compounding device (ACD) technologies and practice changes to minimize opportunities for medication errors in PN prescription, transcription, preparation, and administration. Thirty developmental processes were identified and integrated in the CPOE program, resulting in practices that were compliant with the A.S.P.E.N. safety consensus recommendations. Data from 7 years of development and implementation were analyzed and compared with published literature on error rates, harm rates, and cost reductions to determine if our process showed lower error rates compared with national outcomes. The CPOE program developed was in total compliance with the A.S.P.E.N. guidelines for PN. The frequency of PN medication errors at our hospital over the 7 years was 230 errors/84,503 PN prescriptions, or 0.27%, compared with national data in which 74 of 4730 prescriptions (1.6%) over 1.5 years were associated with a medication error. Errors were categorized by steps in the PN process
Rieche, Marie; Komenský, Tomás; Husar, Peter
2011-01-01
Radio Frequency Identification (RFID) systems in healthcare facilitate the possibility of contact-free identification and tracking of patients, medical equipment and medication. Thereby, patient safety will be improved and costs as well as medication errors will be reduced considerably. However, the application of RFID and other wireless communication systems has the potential to cause harmful electromagnetic disturbances on sensitive medical devices. This risk mainly depends on the transmission power and the method of data communication. In this contribution we point out the reasons for such incidents and give proposals to overcome these problems. To this end, a novel modulation and transmission technique called Gaussian Derivative Frequency Modulation (GDFM) is developed. Moreover, we carry out measurements to show the interference properties of different modulation schemes in comparison to our GDFM.
NASA Astrophysics Data System (ADS)
Duan, Beiping; Zheng, Zhoushun; Cao, Wen
2016-08-01
In this paper, we revisit two spectral approximations, truncated approximation and interpolation, for the Caputo fractional derivative. The two approaches were studied for approximating the Riemann-Liouville (R-L) fractional derivative by Chen et al. and Zayernouri et al., respectively, in their most recent work. For the truncated approximation, the reconsideration partly arises from the difference between the fractional derivative in the R-L sense and in the Caputo sense: the Caputo fractional derivative requires higher regularity of the unknown than the R-L version. Another reason for the reconsideration is that we distinguish the differential order of the unknown from the index of the Jacobi polynomials, which was not done in the previous work. We also provide a way to choose the index when facing multi-order problems. By using a generalized Hardy inequality, the gap between the weighted Sobolev space involving the Caputo fractional derivative and the classical weighted space is bridged; the optimal projection error is then derived in the non-uniformly Jacobi-weighted Sobolev space, and the maximum absolute error is presented as well. For the interpolation, an analysis of the interpolation error was not given in the earlier work. In this paper we establish the interpolation error in the non-uniformly Jacobi-weighted Sobolev space by constructing a fractional inverse inequality. Combined with a collocation method, the approximation technique is applied to solve fractional initial-value problems (FIVPs). Numerical examples are provided to illustrate the effectiveness of this algorithm.
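For context on what such schemes approximate: on monomials the Caputo derivative has the closed form D^α t^k = Γ(k+1)/Γ(k+1−α) · t^(k−α) for k ≥ ⌈α⌉, a standard identity (not specific to this paper) that is handy for sanity-checking any numerical fractional-derivative code:

```python
import math

def caputo_monomial_coeff(k: int, alpha: float) -> float:
    """Coefficient in D^alpha t^k = Gamma(k+1)/Gamma(k+1-alpha) * t^(k-alpha)."""
    return math.gamma(k + 1) / math.gamma(k + 1 - alpha)

# Classic check: the half derivative of t is 2*sqrt(t/pi),
# i.e. the coefficient equals 2/sqrt(pi) ~ 1.1284.
print(caputo_monomial_coeff(1, 0.5), 2 / math.sqrt(math.pi))
```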
Exploring the Derivative Suffix Frequency Effect in Spanish Speaking Children
ERIC Educational Resources Information Center
Lázaro, Miguel; Acha, Joana; de la Rosa, Saray; García, Seila; Sainz, Javier
2017-01-01
This study was designed to examine the developmental course of the suffix frequency effect and its role in the development of automatic morpho-lexical access. In Spanish, a highly transparent language from an orthographic point of view, this effect has been shown to be facilitative in adults, but the evidence with children is still inconclusive. A…
Cui, Cunxing; Feng, Qibo; Zhang, Bin; Zhao, Yuqiong
2016-03-21
A novel method for simultaneously measuring six degree-of-freedom (6DOF) geometric motion errors is proposed in this paper, and the corresponding measurement instrument is developed. Simultaneous measurement of 6DOF geometric motion errors using a polarization maintaining fiber-coupled dual-frequency laser is accomplished for the first time to the best of the authors' knowledge. Dual-frequency laser beams that are orthogonally linear polarized were adopted as the measuring datum. Positioning error measurement was achieved by heterodyne interferometry, and other 5DOF geometric motion errors were obtained by fiber collimation measurement. A series of experiments was performed to verify the effectiveness of the developed instrument. The experimental results showed that the stability and accuracy of the positioning error measurement are 31.1 nm and 0.5 μm, respectively. For the straightness error measurements, the stability and resolution are 60 and 40 nm, respectively, and the maximum deviation of repeatability is ± 0.15 μm in the x direction and ± 0.1 μm in the y direction. For pitch and yaw measurements, the stabilities are 0.03″ and 0.04″, the maximum deviations of repeatability are ± 0.18″ and ± 0.24″, and the accuracies are 0.4″ and 0.35″, respectively. The stability and resolution of roll measurement are 0.29″ and 0.2″, respectively, and the accuracy is 0.6″.
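As background on the heterodyne positioning measurement mentioned here: in a common double-pass displacement interferometer, displacement follows from accumulated phase as d = λ·Δφ/(4π), so one full fringe corresponds to λ/2. A sketch under that assumption (the He-Ne wavelength is illustrative; the instrument in the abstract uses a polarization maintaining fiber-coupled dual-frequency laser):

```python
import math

WAVELENGTH_NM = 632.8  # illustrative He-Ne wavelength

def displacement_nm(delta_phase_rad: float) -> float:
    """Displacement from accumulated heterodyne phase, assuming a
    double-pass interferometer (optical path change = 2 * displacement)."""
    return WAVELENGTH_NM * delta_phase_rad / (4 * math.pi)

# One full fringe (2*pi of phase) corresponds to lambda/2 of motion:
print(displacement_nm(2 * math.pi))  # ~316.4 nm
```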
NASA Technical Reports Server (NTRS)
Rao, P. Anil; Velden, Christopher S.; Braun, Scott A.; Einaudi, Franco (Technical Monitor)
2001-01-01
Errors in the height assignment of some satellite-derived winds exist because the satellites sense radiation emitted from a finite layer of the atmosphere rather than a specific level. Potential problems in data assimilation may arise because the motion of a measured layer is often represented by a single-level value. In this research, cloud and water vapor motion winds that are derived from the Geostationary Operational Environmental Satellites (GOES winds) are compared to collocated rawinsonde observations (RAOBs). An important aspect of this work is that in addition to comparisons at each assigned height, the GOES winds are compared to the entire profile of the collocated RAOB data to determine the vertical error characteristics of the GOES winds. The impact of these results on numerical weather prediction is then investigated. The comparisons at individual vector height assignments indicate that the error of the GOES winds range from approx. 3 to 10 m/s and generally increase with height. However, if taken as a percentage of the total wind speed, accuracy is better at upper levels. As expected, comparisons with the entire profile of the collocated RAOBs indicate that clear-air water vapor winds represent deeper layers than do either infrared or water vapor cloud-tracked winds. This is because in cloud-free regions the signal from water vapor features may result from emittance over a thicker layer. To further investigate characteristics of the clear-air water vapor winds, they are stratified into two categories that are dependent on the depth of the layer represented by the vector. It is found that if the vertical gradient of moisture is smooth and uniform from near the height assignment upwards, the clear-air water vapor wind tends to represent a relatively deep layer. The information from the comparisons is then used in numerical model simulations of two separate events to determine the forecast impacts. Four simulations are performed for each case: 1) A
The distribution of particulate matter (PM) concentrations has an impact on human health effects and the setting of PM regulations. Since PM is commonly sampled on less than daily schedules, the magnitude of sampling errors needs to be determined. Daily PM data from Spokane, W...
NASA Technical Reports Server (NTRS)
Zemba, Michael; Nessel, James; Houts, Jacquelynne; Luini, Lorenzo; Riva, Carlo
2016-01-01
The rain rate data and statistics of a location are often used in conjunction with models to predict rain attenuation. However, the true attenuation is a function not only of rain rate, but also of the drop size distribution (DSD). Generally, models utilize an average drop size distribution (Laws and Parsons or Marshall and Palmer. However, individual rain events may deviate from these models significantly if their DSD is not well approximated by the average. Therefore, characterizing the relationship between the DSD and attenuation is valuable in improving modeled predictions of rain attenuation statistics. The DSD may also be used to derive the instantaneous frequency scaling factor and thus validate frequency scaling models. Since June of 2014, NASA Glenn Research Center (GRC) and the Politecnico di Milano (POLIMI) have jointly conducted a propagation study in Milan, Italy utilizing the 20 and 40 GHz beacon signals of the Alphasat TDP#5 Aldo Paraboni payload. The Ka- and Q-band beacon receivers provide a direct measurement of the signal attenuation while concurrent weather instrumentation provides measurements of the atmospheric conditions at the receiver. Among these instruments is a Thies Clima Laser Precipitation Monitor (optical disdrometer) which yields droplet size distributions (DSD); this DSD information can be used to derive a scaling factor that scales the measured 20 GHz data to expected 40 GHz attenuation. Given the capability to both predict and directly observe 40 GHz attenuation, this site is uniquely situated to assess and characterize such predictions. Previous work using this data has examined the relationship between the measured drop-size distribution and the measured attenuation of the link]. The focus of this paper now turns to a deeper analysis of the scaling factor, including the prediction error as a function of attenuation level, correlation between the scaling factor and the rain rate, and the temporal variability of the drop size
Report: Low Frequency Predictive Skill Despite Structural Instability and Model Error
2013-09-30
Structural Instability and Model Error. Andrew J. Majda, New York University, Courant Institute of Mathematical Sciences, 251 Mercer Street, New York, NY. ... Majda and his DRI postdoc Sapsis have achieved a potential major breakthrough with a new class of methods for UQ. Turbulent dynamical systems are ... uncertain initial data. These key physical quantities are often characterized by the degrees of freedom which carry the largest energy or variance and
Report: Low Frequency Predictive Skill Despite Structural Instability and Model Error
2012-09-30
Instability and Model Error. Principal Investigator: Andrew J. Majda. Institution: New York University, Courant Institute of Mathematical Sciences. ... for the Special Volume of Communications on Pure and Applied Mathematics for the 75th Anniversary of the Courant Institute, April 12, 2012, doi: 10.1002
NASA Technical Reports Server (NTRS)
Platnick, Steven; Wind, Galina; Xiong, Xiaoxiong
2011-01-01
MODIS retrievals of cloud optical thickness and effective particle radius employ a well-known VNIR/SWIR solar reflectance technique. For this type of algorithm, we evaluate the uncertainty in simultaneous retrievals of these two parameters attributable to pixel-level (scene-dependent) radiometric error estimates as well as other tractable error sources.
Dry powder inhalers: which factors determine the frequency of handling errors?
Wieshammer, Siegfried; Dreyhaupt, Jens
2008-01-01
Dry powder inhalers are often used ineffectively, resulting in a poor level of disease control. To determine how often essential mistakes are made in the use of Aerolizer, Discus, HandiHaler and Turbuhaler and to study the effects of age, severity of airflow obstruction and previous training in inhalational technique by medical personnel on the error rate. Two hundred and twenty-four newly referred outpatients (age 55.1 +/- 20 years) were asked how they had been acquainted with the inhaler and to demonstrate their inhalational technique. The inhaler-specific error rates were as follows: Aerolizer 9.1%, Discus 26.7%, HandiHaler 53.1% and Turbuhaler 34.9%. Compared to Aerolizer, the odds ratio of an ineffective inhalation was higher for HandiHaler (9.82, p < 0.01) and Turbuhaler (4.84, p < 0.05). The error rate increased with age and with the severity of airway obstruction (p < 0.01). When training had been given as opposed to no training, the odds ratio of ineffective inhalation was 0.22 (p < 0.01). If Turbuhaler is used, the estimated risks range from 9.8% in an 18-year-old patient with normal lung function and previous training to 83.2% in an 80-year-old patient with moderate or severe obstruction who had not received any training. Dry powder inhalers are useful in the management of younger patients with normal lung function or mild airway obstruction. In older patients with advanced chronic obstructive pulmonary disease, the risk of ineffective inhalation remains high despite training in inhalational technique. A metered-dose inhaler with a spacer might be a valuable treatment alternative in a substantial proportion of these patients. (c) 2007 S. Karger AG, Basel.
Frequency Domain Errors in Variables Approach for Two Channel SIMO System Identification
2009-06-24
Signal et Image, ENSEIRB/UMR CNRS 5218 IMS Dpt. LAPS, Université Bordeaux 1, France; william.bobillet@etu.u-bordeaux1.fr; Dipartimento di Fisica e... without loss of generality. [Figure 1: two-channel SIMO model, in which a common source s(k) drives channels h1(k) and h2(k), whose outputs y1(k) and y2(k) are observed after additive noises b1(k) and b2(k) with variances σ1² and σ2² as x1(k) and x2(k).] ...developed in the fields of statistics and identification, assume that the available data are disturbed by additive error terms. Given a generic process
NASA Astrophysics Data System (ADS)
Ferreira, F.; Gendron, E.; Rousset, G.; Gratadour, D.
2016-07-01
The future European Extremely Large Telescope (E-ELT) adaptive optics (AO) systems will aim at wide field correction and large sky coverage. Their performance will be improved by using post processing techniques, such as point spread function (PSF) deconvolution. The PSF estimation involves characterization of the different error sources in the AO system. Such error contributors are difficult to estimate: simulation tools are a good way to do that. We have developed in COMPASS (COMputing Platform for Adaptive opticS Systems), an end-to-end simulation tool using GPU (Graphics Processing Unit) acceleration, an estimation tool that provides a comprehensive error budget by the outputs of a single simulation run.
Cumberland, Phillippa M.; Bao, Yanchun; Hysi, Pirro G.; Foster, Paul J.; Hammond, Christopher J.; Rahi, Jugnoo S.
2015-01-01
Purpose: To report the methodology and findings of a large-scale investigation of the burden and distribution of refractive error, from a contemporary and ethnically diverse study of health and disease in adults in the UK. Methods: UK Biobank, a unique contemporary resource for the study of health and disease, recruited more than half a million people aged 40–69 years. A subsample of 107,452 subjects undertook an enhanced ophthalmic examination which provided autorefraction data (a measure of refractive error). Refractive error status was categorised using the mean spherical equivalent refraction measure. Information on socio-demographic factors (age, gender, ethnicity, educational qualifications and accommodation tenure) was reported at the time of recruitment by questionnaire and face-to-face interview. Results: Fifty-four percent of participants aged 40–69 years had refractive error. Specifically, 27% had myopia (4% high myopia), which was more common amongst younger people and those of higher socio-economic status, higher educational attainment, or White or Chinese ethnicity. The frequency of hypermetropia increased with age (7% at 40–44 years, increasing to 46% at 65–69 years), was higher in women, and its severity was associated with ethnicity (moderate or high hypermetropia at least 30% less likely in non-White ethnic groups compared to White). Conclusions: Refractive error is a significant public health issue for the UK, and this study provides contemporary data on adults for planning services, health economic modelling and monitoring of secular trends. Further investigation of risk factors is necessary to inform strategies for prevention. There is scope to do this through the planned longitudinal extension of the UK Biobank study. PMID:26430771
NASA Astrophysics Data System (ADS)
El-Dardiry, H. A.; Habib, E. H.
2014-12-01
Radar-based technologies have made spatially and temporally distributed quantitative precipitation estimates (QPE) available in an operational environment, in contrast to rain gauges. The floods identified through flash flood monitoring and prediction systems are subject to at least three sources of uncertainty: (a) rainfall estimation errors, (b) streamflow prediction errors due to model structural issues, and (c) errors in defining a flood event. The current study focuses on the first source of uncertainty and its effect on deriving important climatological characteristics of extreme rainfall statistics. Examples of such characteristics are rainfall amounts with certain Average Recurrence Intervals (ARI) or Annual Exceedance Probabilities (AEP), which are highly valuable for hydrologic and civil engineering design purposes. Gauge-based precipitation frequency estimates (PFE) have been maturely developed and widely used over the last several decades. More recently, there has been growing interest in the research community in exploring the use of radar-based rainfall products for developing PFEs and understanding the associated uncertainties. This study uses radar-based multi-sensor precipitation estimates (MPE) for 11 years to derive PFEs corresponding to various return periods over a spatial domain that covers the state of Louisiana in the southern USA. The PFE estimation approach used in this study is based on fitting a generalized extreme value (GEV) distribution to hydrologic extreme rainfall data based on annual maximum series (AMS). Among the estimation problems that may arise from fitting GEV distributions at each radar pixel are the large variance and serious bias of the quantile estimators. Hence, a regional frequency analysis (RFA) approach is applied. The RFA involves the use of data from different pixels surrounding each pixel within a defined homogeneous region. In this study, region of influence approach along with the
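The return-level computation at the heart of such a PFE analysis can be sketched with the standard GEV quantile formula; this is a minimal illustration with hypothetical parameters, not the paper's regional fitting procedure:

```python
import math

def gev_return_level(mu, sigma, xi, T):
    """Return level for average recurrence interval T (years) from a GEV
    distribution with location mu, scale sigma and shape xi, via the
    standard quantile formula x_T = mu + (sigma/xi) * (y**(-xi) - 1),
    where y = -ln(1 - 1/T); the xi -> 0 (Gumbel) limit is handled separately."""
    y = -math.log(1.0 - 1.0 / T)
    if abs(xi) < 1e-9:                     # Gumbel limit
        return mu - sigma * math.log(y)
    return mu + (sigma / xi) * (y ** (-xi) - 1.0)

# Hypothetical GEV parameters for 24-h annual maxima (illustrative only):
for T in (2, 10, 50, 100):
    print(T, "yr:", round(gev_return_level(60.0, 15.0, 0.1, T), 1), "mm")
```

The region-of-influence step would pool AMS data from surrounding pixels before fitting, which reduces the variance of the fitted quantiles at the cost of some regional smoothing.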
Lower Bounds on the Frequency Estimation Error in Magnetically Coupled MEMS Resonant Sensors.
Paden, Brad E
2016-02-01
MEMS inductor-capacitor (LC) resonant pressure sensors have revolutionized the treatment of abdominal aortic aneurysms. In contrast to electrostatically driven MEMS resonators, these magnetically coupled devices are wireless so that they can be permanently implanted in the body and can communicate to an external coil via pressure-induced frequency modulation. Motivated by the importance of these sensors in this and other applications, this paper develops relationships among sensor design variables, system noise levels, and overall system performance. Specifically, new models are developed that express the Cramér-Rao lower bound for the variance of resonator frequency estimates in terms of system variables through a system of coupled algebraic equations, which can be used in design and optimization. Further, models are developed for a novel mechanical resonator in addition to the LC-type resonators.
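For context, the classic Rife-Boorstyn Cramér-Rao lower bound for estimating the frequency of a single sinusoid in white Gaussian noise shows how such bounds tie estimation error to SNR and record length. The paper's model couples the bound to resonator design variables through algebraic equations, so the following is only a generic sketch of the underlying bound:

```python
import math

def crlb_freq_std(snr_db, n_samples, dt):
    """Rife-Boorstyn CRLB (as a standard deviation, in Hz) for the frequency
    of a single sinusoid in white Gaussian noise:
        var(f_hat) >= 12 / ((2*pi)**2 * eta * N * (N**2 - 1) * dt**2),
    where eta = A**2 / (2*sigma**2) is the SNR and dt the sample period."""
    eta = 10.0 ** (snr_db / 10.0)
    var = 12.0 / ((2.0 * math.pi) ** 2 * eta
                  * n_samples * (n_samples ** 2 - 1) * dt ** 2)
    return math.sqrt(var)

# The bound's std dev falls roughly as N**-1.5 with record length:
print(crlb_freq_std(10.0, 100, 1e-6), crlb_freq_std(10.0, 1000, 1e-6))
```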
Error correction coding for frequency-hopping multiple-access spread spectrum communication systems
NASA Technical Reports Server (NTRS)
Healy, T. J.
1982-01-01
A communication system which would effect channel coding for frequency-hopped multiple-access is described. It is shown that in theory coding can increase the spectrum utilization efficiency of a system with mutual interference to 100 percent. Various coding strategies are discussed and some initial comparisons are given. Some of the problems associated with implementing the type of system described here are discussed.
An analysis of perceptual errors in reading mammograms using quasi-local spatial frequency spectra.
Mello-Thoms, C; Dunn, S M; Nodine, C F; Kundel, H L
2001-09-01
In this pilot study the authors examined areas on a mammogram that attracted the visual attention of experienced mammographers and mammography fellows, as well as areas that were reported to contain a malignant lesion, and, based on their spatial frequency spectrum, they characterized these areas by the type of decision outcome that they yielded: true-positives (TP), false-positives (FP), true-negatives (TN), and false-negatives (FN). Five 2-view (craniocaudal and medial-lateral oblique) mammogram cases were examined by 8 experienced observers, and the eye position of the observers was tracked. The observers were asked to report the location and nature of any malignant lesions present in the case. The authors analyzed each area in which either the observer made a decision or in which the observer had prolonged (>1,000 ms) visual dwell using wavelet packets, and characterized these areas in terms of the energy contents of each spatial frequency band. It was shown that each decision outcome is characterized by a specific profile in the spatial frequency domain, and that these profiles are significantly different from one another. As a consequence of these differences, the profiles can be used to determine which type of decision a given observer will make when examining the area. Computer-assisted perception correctly predicted up to 64% of the TPs made by the observers, 77% of the FPs, and 70% of the TNs.
Diaz, Manuel I.; Aquino, Wilkins; Bonnet, Marc
2015-01-01
This paper presents a methodology for the inverse identification of linearly viscoelastic material parameters in the context of steady-state dynamics using interior data. The inverse problem of viscoelasticity imaging is solved by minimizing a modified error in constitutive equation (MECE) functional, subject to the conservation of linear momentum. The treatment is applicable to configurations where boundary conditions may be partially or completely underspecified. The MECE functional measures the discrepancy in the constitutive equations that connect kinematically admissible strains and dynamically admissible stresses, and also incorporates the measurement data in a quadratic penalty term. Regularization of the problem is achieved through a penalty parameter in combination with the discrepancy principle due to Morozov. Numerical results demonstrate the robust performance of the method in situations where the available measurement data is incomplete and corrupted by noise of varying levels. PMID:26388656
NASA Astrophysics Data System (ADS)
Anderson, K.; Dungan, J. L.
2008-12-01
vegetation. The grey panel data showed a wavelength- dependent pattern, similar to the NEdL laboratory trend, but subsequent error propagation of laboratory- derived NEdL through to a reflectance factor showed that the laboratory characterisation was unable to account for all of the uncertainty measured in the field. Therefore the estimate of u gained from field data more closely represents the reproducibility of measurements where atmospheric, solar zenith and instrument-related uncertainties are combined. Results on vegetation u showed a stronger wavelength dependency with higher standard uncertainties beyond the vegetation red-edge than in visible wavelengths (maximum = 0.015 at 800 nm, and 0.004 at 550nm). The results demonstrate that standard uncertainties of field reflectance data have a spectral dependence and exceed laboratory-derived estimates of instrument "noise". Uncertainty of this type must be taken into account when statistically testing for differences in field spectra. Improved reporting of standard uncertainties from field experiments will foster progress in remote sensing science.
NASA Astrophysics Data System (ADS)
Zhu, Jianqu; Jin, Weidong; Guo, Feng
2017-04-01
The stochastic resonance (SR) behavior for a linear oscillator with two kinds of fractional derivatives and random frequency is investigated. Based on linear system theory, and applying with the definition of the Gamma function and fractional derivatives, we derive the expression for the output amplitude gain (OAG). A stochastic multiresonance is found on the OAG curve versus the first kind of fractional derivative exponent. The SR occurs on the OAG as a function of the second kind of fractional exponent, as a function of the viscous damping and the friction coefficients, and as a function of the system's frequency. The bona fide SR also takes place on the OAG curve versus the driving frequency.
Dong, Zhichao; Cheng, Haobo; Tam, Hon-Yuen
2014-01-20
As a further investigation of the applications of fixed abrasive diamond pellets (FADPs), this work demonstrates their potential for diminishing mid-spatial frequency errors (MSFEs, i.e., periodic small structures) on optical surfaces. Benefiting from its high surface rigidity, the FADP tool has a natural smoothing effect on periodic small errors. Compared with the previous design, the proposed new tool conforms better to aspherical surfaces because the pellets are mutually separated and bonded to a steel plate with an elastic backing of silica rubber adhesive. Moreover, a unicursal Peano-like path is presented for improving MSFEs, which enhances the multidirectionality and uniformity of the tool's motion. Experiments were conducted to validate the effectiveness of FADPs for diminishing MSFEs. In the lapping of a Φ=420 mm Zerodur paraboloid workpiece, the grinding ripples were quickly diminished (210 min), as confirmed by visual inspection, profile metrology, and power spectral density (PSD) analysis; RMS was reduced from 4.35 to 0.55 μm. In the smoothing of a Φ=101 mm fused silica workpiece, MSFEs were clearly improved, as seen in the surface form maps, interferometric fringe patterns, and PSD analysis. The mid-spatial frequency RMS was diminished from 0.017λ to 0.014λ (λ=632.8 nm).
NASA Astrophysics Data System (ADS)
Ray, J.; Collilieux, X.; Rebischung, P.; van Dam, T. M.; Altamimi, Z.
2011-12-01
After applying corrections for surface load displacements to a set of station position time series determined using the Global Positioning System (GPS), we are able to infer precise error floors for the determinations of weekly dN, dE, and dU components. The load corrections are a combination of NCEP atmosphere, ECCO non-tidal ocean, and LDAS surface water models, after detrending and averaging to the middle of each GPS week. These load corrections have been applied to the most current station time series from the International GNSS Service (IGS) for a global set of 706 stations, each having more than 100 weekly observations. The stacking of the weekly IGS frame solutions has taken utmost care to minimize aliasing of local load signals into the frame parameters to ensure the most reliable time series of individual station motions. For the first time, dN and dE horizontal components have been considered together with the height (dU) variations. By examining the distributions of annual amplitudes versus WRMS scatters for all 706 stations and all three local components, we find an empirical error floor of about 0.65, 0.7, and 2.2 mm for weekly dN, dE, and dU. Only the very best performing GPS stations approach these floors. Most stations have larger scatters due to other non-load errors. These global error floors have been verified by studying differences for a subset of 119 station pairs located within 25 km of each other. Of these, 19 pairs share a common antenna, which permits an estimate of the fundamental electronic noise in the GPS estimates: 0.4, 0.4, and 1.3 mm for dN, dE, and dU. The remaining 100 close pairs that do not share an antenna include this noise component as well as errors due to multipath, equipment differences, data modeling, etc, but not due to loading or direct orbit effects since those are removed by the differencing. The WRMS dN, dE, and dU differences for these close pairs imply station error floors of 0.8, 0.9, and 2.1 mm, respectively
Fischer, Troy M; Gilmour, Arthur R; van der Werf, Julius H J
2004-01-01
Approximate standard errors (ASE) of variance components for random regression coefficients are calculated from the average information matrix obtained in a residual maximum likelihood procedure. Linear combinations of those coefficients define variance components for the additive genetic variance at given points of the trajectory. Therefore, ASE of these components and heritabilities derived from them can be calculated. In our example, the ASE were larger near the ends of the trajectory.
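The linear-combination step can be sketched directly: if C is the (approximate) covariance matrix of the estimated variance components, the ASE of any combination a'theta is sqrt(a' C a). A minimal illustration with hypothetical numbers, not values from the paper:

```python
import math

def ase_linear_combination(a, cov):
    """Approximate standard error of a linear combination a'theta of
    estimated variance components, given their covariance matrix cov
    (e.g. the inverse of the average information matrix from REML):
    SE = sqrt(a' C a)."""
    n = len(a)
    quad = sum(a[i] * cov[i][j] * a[j] for i in range(n) for j in range(n))
    return math.sqrt(quad)

# Hypothetical 2x2 example: the genetic variance at one trajectory point is
# the combination 1*coef0 + 2*coef1 of two random-regression coefficients.
se = ase_linear_combination([1.0, 2.0], [[4.0, 1.0], [1.0, 9.0]])
print(se)
```

Because the covariance matrix typically grows toward the ends of the trajectory, so do the ASEs, consistent with the abstract's observation.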
NASA Technical Reports Server (NTRS)
Mitchell, J. R.
1972-01-01
The frequency response method of analyzing control system performance is discussed, and the difficulty of obtaining the sampled frequency response of the continuous system is considered. An upper bound magnitude error equation is obtained which yields reasonable estimates of the actual error. Finalization of the compensator improvement program is also reported, and the program was used to design compensators for Saturn 5/S1-C dry workshop and Saturn 5/S1-C Skylab.
A semiempirical error estimation technique for PWV derived from atmospheric radiosonde data
NASA Astrophysics Data System (ADS)
Castro-Almazán, Julio A.; Pérez-Jordán, Gabriel; Muñoz-Tuñón, Casiana
2016-09-01
A semiempirical method for estimating the error and the optimum number of sampled levels in precipitable water vapour (PWV) determinations from atmospheric radiosoundings is proposed. Two terms have been considered: the uncertainties in the measurements and the sampling error. Also, the uncertainty has been separated into variance and covariance components. The sampling and covariance components have been modelled from an empirical dataset of 205 high-vertical-resolution radiosounding profiles, equipped with Vaisala RS80 and RS92 sondes at four different locations: Güímar (GUI) in Tenerife, at sea level, and the astronomical observatory at Roque de los Muchachos (ORM, 2300 m a.s.l.) on La Palma (both on the Canary Islands, Spain), Lindenberg (LIN) in continental Germany, and Ny-Ålesund (NYA) in the Svalbard Islands, within the Arctic Circle. The balloons at the ORM were launched during intensive and unique site-testing runs carried out in 1990 and 1995, while the data for the other sites were obtained from radiosounding stations operating for a period of 1 year (2013-2014). The PWV values ranged between ˜ 0.9 and ˜ 41 mm. The method sub-samples the profile for error minimization; the result is the minimum error and the optimum number of levels. The results obtained in the four sites studied showed that the ORM is the driest of the four locations and the one with the fastest vertical decay of PWV. The exponential autocorrelation pressure lags ranged from 175 hPa (ORM) to 500 hPa (LIN). The results show a coherent behaviour with no biases as a function of the profile. The final error is roughly proportional to PWV, whereas the optimum number of levels (N0) is the reverse. The value of N0 is less than 400 for 77 % of the profiles and the absolute errors are always < 0.6 mm. The median relative error is 2.0 ± 0.7 % and the 90th percentile P90 = 4.6 %. Therefore, whereas a radiosounding samples at least N0 uniform vertical levels, depending on the water
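The PWV integral underlying such determinations (before any error modelling) is a straightforward numerical integration of specific humidity over pressure, PWV = (1/(rho_w * g)) * integral of q dp. A minimal sketch, with the layer discretization and profile values as illustrative assumptions:

```python
G = 9.80665        # m s^-2, standard gravity
RHO_W = 1000.0     # kg m^-3, density of liquid water

def pwv_mm(pressures_pa, q_kgkg):
    """Precipitable water vapour in mm from sampled levels ordered
    surface -> top (decreasing pressure), q in kg/kg, p in Pa,
    using the trapezoidal rule over each layer."""
    total = 0.0
    for (p1, q1), (p2, q2) in zip(zip(pressures_pa, q_kgkg),
                                  zip(pressures_pa[1:], q_kgkg[1:])):
        total += 0.5 * (q1 + q2) * (p1 - p2)   # trapezoid over the layer
    return total / (RHO_W * G) * 1000.0        # metres -> mm

# Illustrative 4-level profile (hypothetical values, not from the paper):
print(round(pwv_mm([100000, 85000, 70000, 50000],
                   [0.012, 0.008, 0.004, 0.001]), 2))
```

The paper's method then asks how the error of this integral behaves as the number of sampled levels is reduced, trading measurement uncertainty against sampling error.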
Motyer, R E; Liddy, S; Torreggiani, W C; Buckley, O
2016-11-01
Voice recognition (VR) dictation of radiology reports has become the mainstay of reporting in many institutions worldwide. Despite its benefits, such software is not without limitations, and transcription errors have been widely reported. Aim: to evaluate the frequency and nature of non-clinical transcription errors using VR dictation software. Retrospective audit of 378 finalised radiology reports. Errors were counted and categorised by significance, error type and sub-type. Data regarding imaging modality, report length and dictation time were collected. 67 (17.72%) reports contained ≥1 error, with 7 (1.85%) containing 'significant' and 9 (2.38%) containing 'very significant' errors. A total of 90 errors were identified from the 378 reports analysed, with 74 (82.22%) classified as 'insignificant', 7 (7.78%) as 'significant', and 9 (10%) as 'very significant'. 68 (75.56%) errors were 'spelling and grammar', 20 (22.22%) 'missense' and 2 (2.22%) 'nonsense'. 'Punctuation' was the most common error sub-type, accounting for 27 errors (30%). Complex imaging modalities had higher error rates per report and per sentence: computed tomography contained 0.040 errors per sentence compared to plain film with 0.030. Longer reports had higher error rates, with reports of >25 sentences containing an average of 1.23 errors per report compared to 0.09 for reports of 0-5 sentences. These findings highlight the limitations of VR dictation software. While most errors were deemed insignificant, there were occurrences of error with the potential to alter report interpretation and patient management. Longer reports and reports on more complex imaging had higher error rates, and this should be taken into account by the reporting radiologist.
Estimates of Mode-S EHS aircraft-derived wind observation errors using triple collocation
NASA Astrophysics Data System (ADS)
de Haan, Siebren
2016-08-01
Information on the accuracy of meteorological observations is essential to assess the applicability of the measurements. In general, accuracy information is difficult to obtain in operational situations, since the truth is unknown. One method to determine this accuracy is comparison with the model equivalent of the observation. The advantage of this method is that all measured parameters can be evaluated, from 2 m temperature observations to satellite radiances. The drawback is that these comparisons also contain the (unknown) model error. By applying the so-called triple collocation method to two independent observations at the same location in space and time, combined with model output, and assuming uncorrelated errors, the three error variances can be estimated. This method is applied in this study to estimate wind observation errors from aircraft, obtained utilizing information from air traffic control surveillance radar with Selective Mode Enhanced Surveillance capabilities (Mode-S EHS). Radial wind measurements from Doppler weather radar and wind vector measurements from sodar, together with equivalents from a non-hydrostatic numerical weather prediction model, are used to assess the accuracy of the Mode-S EHS wind observations. The Mode-S EHS wind (zonal and meridional) observation error is estimated to be less than 1.4 ± 0.1 m s-1 near the surface and around 1.1 ± 0.3 m s-1 at 500 hPa.
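The covariance-based form of triple collocation referred to above can be sketched as follows; the variable names and synthetic noise levels are illustrative, not values from the study:

```python
import random

def triple_collocation(x, y, z):
    """Classic covariance-based triple collocation: given three collocated
    series of the same quantity with mutually independent zero-mean errors,
    estimate each system's error variance as
        var_ex = Cxx - Cxy * Cxz / Cyz   (and cyclic permutations)."""
    def cov(a, b):
        ma = sum(a) / len(a); mb = sum(b) / len(b)
        return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / (len(a) - 1)
    return (cov(x, x) - cov(x, y) * cov(x, z) / cov(y, z),
            cov(y, y) - cov(x, y) * cov(y, z) / cov(x, z),
            cov(z, z) - cov(x, z) * cov(y, z) / cov(x, y))

# Synthetic check: a common truth plus independent noise of known std dev.
random.seed(1)
truth = [random.gauss(0.0, 2.0) for _ in range(100000)]
x = [t + random.gauss(0.0, 0.5) for t in truth]   # e.g. aircraft-derived wind
y = [t + random.gauss(0.0, 0.8) for t in truth]   # e.g. radar radial wind
z = [t + random.gauss(0.0, 1.2) for t in truth]   # e.g. NWP model equivalent
ex, ey, ez = triple_collocation(x, y, z)          # approx 0.25, 0.64, 1.44
```

The estimator needs no knowledge of the truth; only the assumption that the three error terms are uncorrelated with each other and with the truth.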
Comparison of High-Frequency Solar Irradiance: Ground Measured vs. Satellite-Derived
Lave, Matthew; Weekley, Andrew
2016-11-21
High-frequency solar variability is important to grid integration studies, but ground measurements are scarce. The high resolution irradiance algorithm (HRIA) can produce 4-second resolution global horizontal irradiance (GHI) samples at locations across North America. However, the HRIA has not been extensively validated. In this work, we evaluate the HRIA against a database of 10 high-frequency ground-based measurements of irradiance. The evaluation focuses on variability-based metrics. This results in a greater understanding of the errors in the HRIA as well as suggestions for its improvement.
Hope, Sarah A; Meredith, Ian T; Cameron, James D
2004-08-01
Transfer function techniques are increasingly used for non-invasive estimation of central aortic waveform characteristics. Non-invasive radial waveforms must be calibrated for this purpose. Most validation studies have used invasive pressures for calibration, with little data on the impact of non-invasive calibration on transfer-function-derived aortic waveform characteristics. In the present study, simultaneous invasive central aortic (Millar Mikro-tip catheter transducer) and non-invasive radial (Millar Mikro-tip tonometer) pressure waveforms and non-invasive brachial pressures (Dinamap) were measured in 42 subjects. In this cohort, radial waveforms were calibrated to both invasive and non-invasive mean and diastolic pressures. From each of these, central waveforms were reconstructed using a generalized transfer function obtained by us from a previous cohort [Hope, Tay, Meredith and Cameron (2002) Am. J. Physiol. Heart Circ. Physiol. 283, H1150-H1156]. Waveforms were analysed for parameters of potential clinical interest. For calibrated radial and reconstructed central waveforms, different methods of calibration were associated with differences in pressure (P<0.001), but not time parameters or augmentation index. Whereas invasive calibration resulted in little error in transfer function estimation of central systolic pressure (difference -1+/-8 mmHg; P=not significant), non-invasive calibration resulted in significant underestimation (7+/-12 mmHg; P<0.001). Errors in estimated aortic parameters differed with non-invasively calibrated untransformed radial and transfer-function-derived aortic waveforms (all P<0.01), with smaller absolute errors with untransformed radial waveforms for most pressure parameters [systolic pressure, 5+/-16 and 7+/-12 mmHg; pulse pressure, 0+/-16 and 4+/-12 mmHg (radial and derived aortic respectively)]. When only non-invasive pressures are accessible, analysis of untransformed radial waveforms apparently produces smaller errors in the
Shallow Water Sediment Properties Derived from High-Frequency Shear and Interface Waves
1992-04-10
HIGH-FREQUENCY SHEAR AND INTERFACE WAVES; ONR contract N00014-88-C-1238. Authors: John Ewing, Jerry A. Carter, George H. Sutton and Noel Barstow. ... B4, pages 4739-4762, April 10, 1992. "Shallow Water Sediment Properties Derived From High-Frequency Shear and Interface Waves", John Ewing, Woods Hole... calculating thickness. The amplitude falloff with range establishes a Q estimate of 40 in... velocity gradients and penetration depths [Nettleton, 1940]
General-form 3-3-3 interpolation kernel and its simplified frequency-response derivation
NASA Astrophysics Data System (ADS)
Deng, Tian-Bo
2016-11-01
An interpolation kernel is required in a wide variety of signal processing applications such as image interpolation and timing adjustment in digital communications. This article presents a general-form interpolation kernel called the 3-3-3 interpolation kernel and derives its frequency response in closed form using a simple derivation method. The kernel is formed from third-degree piecewise polynomials and is an even-symmetric function, so it suffices to consider only its right-hand side when deriving the frequency response. Since the right-hand side contains three piecewise polynomials of the third degree, i.e. the degrees of the three pieces are (3,3,3), we call it the 3-3-3 interpolation kernel. Once the general-form frequency-response formula is derived, the design of various 3-3-3 interpolation kernels subject to a set of design constraints, targeted at different interpolation applications, can be formulated systematically. The closed-form frequency-response expression is thus preliminary to the optimal design of such kernels, and we use an example to show the optimal design of a 3-3-3 interpolation kernel based on it.
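The 3-3-3 kernel's piecewise coefficients are not reproduced in this abstract, so as an illustration we can evaluate an interpolation kernel's frequency response H(f) = integral of h(t)*exp(-j*2*pi*f*t) dt numerically, here for Keys' two-piece cubic as a stand-in; the same procedure applies to any even piecewise-polynomial kernel:

```python
import cmath

def keys_cubic(t, a=-0.5):
    """Keys' two-piece cubic convolution kernel (a stand-in here, since the
    3-3-3 kernel's piecewise coefficients are not given in the abstract)."""
    t = abs(t)
    if t <= 1.0:
        return (a + 2) * t**3 - (a + 3) * t**2 + 1
    if t <= 2.0:
        return a * t**3 - 5*a * t**2 + 8*a * t - 4*a
    return 0.0

def freq_response(kernel, f, support=2.0, dt=1e-3):
    """Numerical H(f) via a midpoint Riemann sum; because the kernel is
    even-symmetric, H(f) is real, so the real part is returned."""
    n = int(2 * support / dt)
    acc = sum(kernel(-support + (k + 0.5) * dt)
              * cmath.exp(-2j * cmath.pi * f * (-support + (k + 0.5) * dt))
              for k in range(n))
    return (acc * dt).real

# DC gain is 1 (the kernel integrates to unity) and the response is lowpass:
print(freq_response(keys_cubic, 0.0), freq_response(keys_cubic, 0.5))
```

A closed-form expression like the one derived in the article replaces this numerical integration, which is what makes constrained optimal design tractable.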
NASA Technical Reports Server (NTRS)
Haftka, Raphael T.; Cohen, Gerald A.; Mroz, Zenon
1990-01-01
A uniform variational approach to sensitivity analysis of vibration frequencies and bifurcation loads of nonlinear structures is developed. Two methods of calculating the sensitivities of bifurcation buckling loads and vibration frequencies of nonlinear structures, with respect to stiffness and initial strain parameters, are presented. A direct method requires calculation of derivatives of the prebuckling state with respect to these parameters. An adjoint method bypasses the need for these derivatives by using instead the strain field associated with the second-order postbuckling state. An operator notation is used and the derivation is based on the principle of virtual work. The derivative computations are easily implemented in structural analysis programs. This is demonstrated by examples using a general purpose, finite element program and a shell-of-revolution program.
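In the linear special case, the direct method reduces to the classic eigenvalue-sensitivity formula d(lambda)/dp = phi' (dK/dp - lambda dM/dp) phi for a mass-normalized mode phi. A minimal 2-DOF sketch with an identity mass matrix, checked against finite differences; this is the standard linear formula, not the paper's nonlinear variational treatment:

```python
import math

def lowest_eig_2x2(a, b, c):
    """Smallest eigenvalue and unit eigenvector of K = [[a, b], [b, c]]
    with identity mass matrix (so unit eigenvectors are mass-normalized)."""
    lam = 0.5 * ((a + c) - math.sqrt((a - c) ** 2 + 4 * b * b))
    v = (1.0, (lam - a) / b) if b else ((1.0, 0.0) if a < c else (0.0, 1.0))
    norm = math.hypot(*v)
    return lam, (v[0] / norm, v[1] / norm)

a, b, c = 4.0, 1.0, 3.0
lam, phi = lowest_eig_2x2(a, b, c)
# Sensitivity of lambda to the stiffness entry a: dK/da = [[1,0],[0,0]],
# so d(lambda)/da = phi' (dK/da) phi = phi[0]**2.
d_analytic = phi[0] ** 2
h = 1e-6
d_fd = (lowest_eig_2x2(a + h, b, c)[0] - lowest_eig_2x2(a - h, b, c)[0]) / (2 * h)
print(d_analytic, d_fd)
```

The adjoint method in the paper avoids differentiating the prebuckling state by using the second-order postbuckling strain field instead; the check above only illustrates the direct route.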
NASA Astrophysics Data System (ADS)
Rao, Kota S.; Al Jassar, Hala K.
2010-09-01
The aim of this paper is to analyze the errors in Digital Elevation Models (DEMs) derived through repeat-pass SAR interferometry (InSAR). Out of 29 ASAR images available to us, 8 are selected for this study; they form a unique data set of 7 InSAR pairs with a single master image. The perpendicular component of the baseline (B⊥) varies between 200 and 400 m, suitable for generating good quality DEMs. The temporal baseline (T) varies from 35 days to 525 days, allowing the effect of temporal decorrelation to be examined. It is expected that all the DEMs would be spatially similar to each other within the noise limits. However, they differ considerably from one another. The 7 DEMs are compared with the SRTM DEM for the estimation of errors. The spatial and temporal distribution of errors in the DEMs is analyzed by considering several case studies. Spatial and temporal variability of precipitable water vapour (PWV) is analysed. PWV corrections to the DEMs are implemented and found to have no significant effect; the reasons are explained. Temporal decorrelation of phases and soil moisture variations seem to influence the accuracy of the derived DEMs. It is suggested that installing a number of corner reflectors (CRs) and using the Permanent Scatterer approach may improve the accuracy of the results in desert test sites.
Scaringe, William A.; Li, Kai; Gu, Dongqing; Gonzalez, Kelly D.; Chen, Zhenbin; Hill, Kathleen A.; Sommer, Steve S.
2008-01-01
Somatic microindels (microdeletions with microinsertions) have been studied in normal mouse tissues using the Big Blue lacI transgenic mutation detection system. Here we analyze microindels in human cancers using an endogenous and transcribed gene, the TP53 gene. Microindel frequency, the enhancement of 1–2 microindels and other features are generally similar to that observed in the non-transcribed lacI gene in normal mouse tissues. The current larger sample of somatic microindels reveals recurroids: mutations in which deletions are identical and the co-localized insertion is similar. The data reveal that the inserted sequences derive from nearby but not adjacent sequences in contrast to the slippage that characterizes the great majority of pure microinsertions. The microindel inserted sequences derive from a template on the sense or antisense strand with similar frequency. The estimated error rate of the insertion process of 13% per bp is by far the largest reported in vivo, with the possible exception of somatic hypermutation in the immunoglobulin gene. The data constrain possible mechanisms of microindels and raise the question of whether microindels are ‘scars’ from the bypass of large DNA adducts by a translesional polymerase, e.g. the ‘Tarzan model’ presented herein. PMID:18632684
NASA Technical Reports Server (NTRS)
Mace, Gerald G.; Ackerman, Thomas P.
1996-01-01
A topic of current practical interest is the accurate characterization of the synoptic-scale atmospheric state from wind profiler and radiosonde network observations. We have examined several related and commonly applied objective analysis techniques for performing this characterization and considered their associated level of uncertainty both from a theoretical and a practical standpoint. A case study is presented where two wind profiler triangles with nearly identical centroids and no common vertices produced strikingly different results during a 43-h period. We conclude that the uncertainty in objectively analyzed quantities can easily be as large as the expected synoptic-scale signal. In order to quantify the statistical precision of the algorithms, we conducted a realistic observing system simulation experiment using output from a mesoscale model. A simple parameterization for estimating the uncertainty in horizontal gradient quantities in terms of known errors in the objectively analyzed wind components and temperature is developed from these results.
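One of the standard objective-analysis techniques the study examines is estimating horizontal divergence and relative vorticity from a triangle of wind stations by fitting a linear wind field. The station locations and prescribed gradients below are synthetic, chosen so the recovered quantities can be checked against known values.

```python
import numpy as np

# Three wind stations forming a triangle (coordinates in metres):
stations = np.array([[0.0, 0.0], [100e3, 10e3], [40e3, 90e3]])

# Prescribe a linear wind field with known gradients (units: 1/s):
DUDX, DUDY, DVDX, DVDY = 2e-5, 1e-5, 4e-5, -1e-5
u = 5.0 + DUDX * stations[:, 0] + DUDY * stations[:, 1]
v = -3.0 + DVDX * stations[:, 0] + DVDY * stations[:, 1]

def triangle_kinematics(xy, u, v):
    """Divergence and vorticity from a least-squares linear wind fit."""
    A = np.column_stack([np.ones(len(xy)), xy[:, 0], xy[:, 1]])
    cu = np.linalg.lstsq(A, u, rcond=None)[0]   # [u0, du/dx, du/dy]
    cv = np.linalg.lstsq(A, v, rcond=None)[0]   # [v0, dv/dx, dv/dy]
    return cu[1] + cv[2], cv[1] - cu[2]         # divergence, vorticity

div, vort = triangle_kinematics(stations, u, v)
print(f"divergence: {div:.2e} 1/s, vorticity: {vort:.2e} 1/s")
```

With only three stations the linear fit is exact, so the prescribed divergence (1e-5 1/s) and vorticity (3e-5 1/s) are recovered; with real, noisy observations the same fit inherits the wind-component errors the abstract's parameterization quantifies.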
Rainfall intensity-duration-frequency relationships derived from large partial duration series
NASA Astrophysics Data System (ADS)
Ben-Zvi, Arie
2009-03-01
A procedure is proposed for basing intensity-duration-frequency (IDF) curves on partial duration series (PDS) that are substantially larger than those commonly used for this purpose. The PDS are derived from event maxima series (EMS), composed of the maximum average intensities, over a given duration, determined for all rainfall events recorded at a station. The generalized Pareto (GP) distribution is fitted to many PDS nested within the EMS, and the goodness of fit is determined by the Anderson-Darling test. The best-fitted distribution is selected for predicting intensities associated with the given duration and with a number of recurrence intervals. This procedure was repeated for eleven rainfall durations, from 5 to 240 min, at four stations of the Israel Meteorological Service. For comparison, the GP and generalized extreme value (GEV) distributions were fitted to annual maxima series (AMS), and the Gumbel and lognormal distributions were fitted to the PDS and to the AMS at these stations. In almost all cases, the GP distribution fits well to ranges of PDS within an EMS, while in a few cases the best fit is only fair. Another result is that the GP distribution fits neither the AMS nor the EMS. The GEV distribution fits well to most AMS, and fairly to the others. The Gumbel and lognormal distributions fit well to most of the AMS and to very few PDS. In most cases where different distributions fit well, the values they predict do not differ much from one another. This indicates the importance of a good fit of the distribution and of the power of the AD test used for determining it. In most cases the best fit of the GP distribution is to a PDS substantially larger than its corresponding AMS. In most cases, the standard error of the 100-year intensity estimated through the best-fitted GP to PDS is smaller than that estimated through the GEV fitted to the corresponding AMS. All these make the proposed procedure advantageous.
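The core of the procedure — fit a GP distribution to PDS exceedances, then read off T-year intensities — can be sketched as below. The threshold, event rate, and synthetic data are assumptions for illustration (not the study's stations), and the Anderson-Darling selection step is omitted.

```python
import numpy as np
from scipy import stats

# Synthetic PDS: excesses of rainfall intensity over an assumed threshold.
threshold = 10.0                 # mm/h, assumed PDS threshold
events_per_year = 5.0            # assumed mean number of PDS events/year
excesses = stats.genpareto.rvs(c=0.1, scale=8.0, size=400, random_state=42)

# Fit the GP to the excesses (location fixed at 0, as for exceedances):
c_hat, _, scale_hat = stats.genpareto.fit(excesses, floc=0.0)

def return_level(T_years: float) -> float:
    """T-year intensity predicted from the GP fitted to the PDS."""
    m = events_per_year * T_years            # expected events in T years
    return threshold + stats.genpareto.ppf(1.0 - 1.0 / m,
                                           c_hat, loc=0.0, scale=scale_hat)

for T in (2, 10, 100):
    print(f"{T:3d}-year intensity: {return_level(T):6.1f} mm/h")
```

Because a PDS of 400 events carries far more information than the ~80 annual maxima from the same record length, the fitted quantiles have smaller standard errors — the abstract's comparison against GEV-on-AMS.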
A Benchmark Study on Error Assessment and Quality Control of CCS Reads Derived from the PacBio RS.
Jiao, Xiaoli; Zheng, Xin; Ma, Liang; Kutty, Geetha; Gogineni, Emile; Sun, Qiang; Sherman, Brad T; Hu, Xiaojun; Jones, Kristine; Raley, Castle; Tran, Bao; Munroe, David J; Stephens, Robert; Liang, Dun; Imamichi, Tomozumi; Kovacs, Joseph A; Lempicki, Richard A; Huang, Da Wei
2013-07-31
PacBio RS, a newly emerging third-generation DNA sequencing platform, is based on a real-time, single-molecule, nano-nitch sequencing technology that can generate very long reads (up to 20 kb), in contrast to the shorter reads produced by the first- and second-generation sequencing technologies. As a new platform, it is important to assess the sequencing error rate, as well as the quality control (QC) parameters associated with the PacBio sequence data. In this study, a mixture of 10 previously known, closely related DNA amplicons was sequenced using the PacBio RS sequencing platform. After aligning Circular Consensus Sequence (CCS) reads derived from the above sequencing experiment to the known reference sequences, we found that the median error rate was 2.5% without read QC, improving to 1.3% with an SVM-based multi-parameter QC method. In addition, a de novo assembly was used as a downstream application to evaluate the effects of different QC approaches. This benchmark study indicates that even though CCS reads are error-corrected, it is still necessary to perform appropriate QC on CCS reads in order to produce successful downstream bioinformatics analytical results.
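The headline statistic here — a median per-read error rate from reference alignments — is simple to compute once each read's edit operations are counted. The alignment counts below are made-up illustrative numbers, not the study's data.

```python
import statistics

# Each tuple: (mismatches + inserted + deleted bases, aligned read length).
# Illustrative values only, not the paper's alignments.
alignments = [(25, 1000), (13, 950), (30, 1200), (10, 800), (55, 1100)]

def error_rate(errors: int, aligned_len: int) -> float:
    """Per-read error rate as edit operations per aligned base."""
    return errors / aligned_len

rates = [error_rate(e, n) for e, n in alignments]
median_rate = statistics.median(rates)
print(f"median per-read error rate: {median_rate:.2%}")
```

A QC filter in this framework simply drops reads whose predictors (read length, number of passes, quality score) suggest a high rate before the median is taken.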
Mandal, Diptasri M; Sorant, Alexa J M; Atwood, Larry D; Wilson, Alexander F; Bailey-Wilson, Joan E
2006-04-20
Studies of model-based linkage analysis show that trait or marker model misspecification leads to decreasing power or increasing Type I error rate. An increase in Type I error rate is seen when marker related parameters (e.g., allele frequencies) are misspecified and ascertainment is through the trait, but lod-score methods are expected to be robust when ascertainment is random (as is often the case in linkage studies of quantitative traits). In previous studies, the power of lod-score linkage analysis using the "correct" generating model for the trait was found to increase when the marker allele frequencies were misspecified and parental data were missing. An investigation of Type I error rates, conducted in the absence of parental genotype data and with misspecification of marker allele frequencies, showed that an inflation in Type I error rate was the cause of at least part of this apparent increased power. To investigate whether the observed inflation in Type I error rate in model-based LOD score linkage was due to sampling variation, the trait model was estimated from each sample using REGCHUNT, an automated segregation analysis program used to fit models by maximum likelihood using many different sets of initial parameter estimates. The Type I error rates observed using the trait models generated by REGCHUNT were usually closer to the nominal levels than those obtained when assuming the generating trait model. This suggests that the observed inflation of Type I error upon misspecification of marker allele frequencies is at least partially due to sampling variation. Thus, with missing parental genotype data, lod-score linkage is not as robust to misspecification of marker allele frequencies as has been commonly thought.
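Empirical Type I error rates like those discussed above are estimated by simulating data under the null hypothesis and counting rejections. A minimal sketch with an ordinary z-test (not the paper's LOD-score analysis — the mechanics of the estimate are the same):

```python
import math
import random

# Monte Carlo estimate of a nominal 5% test's actual Type I error rate.
random.seed(1)
Z_CRIT = 1.96                      # two-sided 5% critical value
N_REPS, N_OBS = 2000, 50

rejections = 0
for _ in range(N_REPS):
    # Draw a sample under the null (mean 0, known sd 1) and test it:
    sample = [random.gauss(0.0, 1.0) for _ in range(N_OBS)]
    mean = sum(sample) / N_OBS
    z = mean / (1.0 / math.sqrt(N_OBS))
    if abs(z) > Z_CRIT:
        rejections += 1

rate = rejections / N_REPS
se = math.sqrt(rate * (1.0 - rate) / N_REPS)   # binomial standard error
print(f"empirical Type I error: {rate:.3f} +/- {se:.3f}")
```

An empirical rate several standard errors above the nominal 0.05 is the "inflation" the study attributes partly to sampling variation in the estimated trait model.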
Derivation of low flow frequency distributions under human activities and its implications
NASA Astrophysics Data System (ADS)
Gao, Shida; Liu, Pan; Pan, Zhengke; Ming, Bo; Guo, Shenglian; Xiong, Lihua
2017-06-01
Low flow, the minimum streamflow in dry seasons, is crucial to water supply, agricultural irrigation and navigation. Human activities, such as groundwater pumping, influence low flow severely. In order to derive low flow frequency distribution functions under human activities, this study incorporates groundwater pumping and return flow as variables in the recession process. The steps are as follows: (1) the original low flow without human activities is assumed to follow a Pearson type III distribution; (2) the probability distribution of climatic dry spell periods is derived based on a base flow recession model; (3) the base flow recession model is updated to account for human activities; and (4) the low flow distribution under human activities is obtained from the derived probability distribution of dry spell periods and the updated base flow recession model. Linear and nonlinear reservoir models are used, respectively, to describe the base flow recession. The Wudinghe basin is chosen for the case study, with daily streamflow observations during 1958-2000. Results show that human activities change the location parameter of the low flow frequency curve under the linear reservoir model, while they alter the form of the frequency distribution function under the nonlinear one. This indicates that merely altering the parameters of the low flow frequency distribution is not always adequate for tackling the changing environment.
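The linear-reservoir case can be sketched in closed form. With outflow Q = S/k and a constant net withdrawal G (pumping minus return flow), dS/dt = -Q - G integrates to Q(t) = (Q0 + G)exp(-t/k) - G. The parameter values below are illustrative assumptions, not calibrated values from the Wudinghe basin study.

```python
import math

# Linear-reservoir base flow recession under constant net withdrawal G:
#     Q(t) = (Q0 + G) * exp(-t/k) - G
K_DAYS = 45.0        # assumed reservoir recession constant (days)
Q0 = 20.0            # assumed flow at the start of the dry spell (m^3/s)

def recession_flow(t_days: float, g_net: float) -> float:
    """Base flow after t days of dry spell with net withdrawal g_net."""
    return (Q0 + g_net) * math.exp(-t_days / K_DAYS) - g_net

natural = recession_flow(60.0, 0.0)     # no human activities
pumped = recession_flow(60.0, 2.0)      # constant net pumping of 2 m^3/s
print(f"natural low flow: {natural:.2f} m^3/s, with pumping: {pumped:.2f} m^3/s")
```

For a fixed dry-spell length t, pumping lowers the low flow by G(1 - exp(-t/k)) regardless of the starting flow Q0, which is one way to see why the linear model shifts only the location parameter of the low flow distribution.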
NASA Astrophysics Data System (ADS)
Alstad, K. P.; Venterea, R. T.; Tan, S. M.; Saad, N.
2015-12-01
Understanding chamber-based soil flux model fitting and measurement error is key to scaling soil GHG emissions and resolving the primary uncertainties in climate and management feedbacks at regional scales. One key challenge is the selection of the correct empirical model applied to soil flux rate analysis in chamber-based experiments. Another challenge is the characterization of error in the chamber measurement. Traditionally, most chamber-based N2O and CH4 measurements and model derivations have used discrete sampling for GC analysis and have been conducted using extended chamber deployment periods (DP), which are expected to result in substantial alteration of the pre-deployment flux. The development of high-precision, high-frequency CRDS analyzers has advanced the science of soil flux analysis by facilitating much shorter DPs and, in theory, less chamber-induced suppression of the soil-atmosphere diffusion gradient. As well, a new software tool developed by Picarro (the "Soil Flux Processor" or "SFP") links the power of Cavity Ring-Down Spectroscopy (CRDS) technology with an easy-to-use interface that features flexible sample-ID and run schemes and provides real-time monitoring of chamber accumulations and environmental conditions. The SFP also includes a sophisticated flux analysis interface which offers user-defined model selection, including three predominant fit algorithms as defaults, and an open-code interface for user-composed algorithms. The SFP is designed to couple with the Picarro G2508 system, an analyzer which simplifies soil flux studies by simultaneously measuring the primary GHG species: N2O, CH4, CO2 and H2O. In this study, Picarro partners with the USDA-ARS Soil & Water Management Research Unit (R. Venterea, St. Paul) to examine the degree to which the high-precision, high-frequency Picarro analyzer allows for much shorter DPs in chamber-based flux analysis and, in theory, less chamber-induced suppression of the soil-atmosphere diffusion gradient.
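Why model choice interacts with deployment period can be seen in a toy chamber-accumulation fit. The chamber geometry and synthetic data below are illustrative assumptions, not SFP defaults or the study's measurements.

```python
import numpy as np

# A chamber concentration that saturates as the concentration gradient
# is suppressed, mimicking a long deployment:
V_OVER_A = 0.15              # assumed chamber volume / footprint area (m)
t = np.linspace(0, 120, 13)  # sampling times (s)
c_inf, c0, k = 450.0, 400.0, 0.004          # ppb, ppb, 1/s (synthetic)
conc = c_inf - (c_inf - c0) * np.exp(-k * t)

# A straight-line fit underestimates the pre-deployment (t = 0) slope
# as soon as curvature appears:
lin_slope = np.polyfit(t, conc, 1)[0]       # ppb/s, linear model
true_slope = k * (c_inf - c0)               # ppb/s, exact t = 0 slope
flux_lin = lin_slope * V_OVER_A
flux_true = true_slope * V_OVER_A
print(f"linear-fit flux: {flux_lin:.4f}, initial-slope flux: {flux_true:.4f} ppb*m/s")
```

Shorter deployments keep the record in the near-linear regime, shrinking the gap between the two estimates — the motivation for pairing high-frequency CRDS data with short DPs.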
GOME Total Ozone and Calibration Error Derived Using Version 8 TOMS Algorithm
NASA Technical Reports Server (NTRS)
Gleason, J.; Wellemeyer, C.; Qin, W.; Ahn, C.; Gopalan, A.; Bhartia, P.
2003-01-01
The Global Ozone Monitoring Experiment (GOME) is a hyper-spectral satellite instrument measuring the ultraviolet backscatter at relatively high spectral resolution. GOME radiances have been slit averaged to emulate measurements of the Total Ozone Mapping Spectrometer (TOMS) made at discrete wavelengths and processed using the new TOMS Version 8 Ozone Algorithm. Compared to Differential Optical Absorption Spectroscopy (DOAS) techniques based on local structure in the Huggins Bands, the TOMS uses differential absorption between a pair of wavelengths including the local structure as well as the background continuum. This makes the TOMS Algorithm more sensitive to ozone, but it also makes the algorithm more sensitive to instrument calibration errors. While calibration adjustments are not needed for fitting techniques like the DOAS employed in GOME algorithms, some adjustment is necessary when applying the TOMS Algorithm to GOME. Using spectral discrimination at near-ultraviolet wavelength channels unabsorbed by ozone, the GOME wavelength-dependent calibration drift is estimated and then checked using pair justification. In addition, the day one calibration offset is estimated based on the residuals of the Version 8 TOMS Algorithm. The estimated drift in the 2b detector of GOME is small through the first four years and then increases rapidly to +5% in normalized radiance at 331 nm relative to 385 nm by mid 2000. The 1b detector appears to be quite well behaved throughout this time period.
NASA Technical Reports Server (NTRS)
Hasler, A. F.; Rodgers, E. B.
1977-01-01
An advanced Man-Interactive image and data processing system (AOIPS) was developed to extract basic meteorological parameters from satellite data and to perform further analyses. The errors in the satellite derived cloud wind fields for tropical cyclones are investigated. The propagation of these errors through the AOIPS system and their effects on the analysis of horizontal divergence and relative vorticity are evaluated.
Nugent, Allison C; Luber, Bruce; Carver, Frederick W; Robinson, Stephen E; Coppola, Richard; Zarate, Carlos A
2017-02-01
Recently, independent components analysis (ICA) of resting state magnetoencephalography (MEG) recordings has revealed resting state networks (RSNs) that exhibit fluctuations of band-limited power envelopes. Most of the work in this area has concentrated on networks derived from the power envelope of beta bandpass-filtered data. Although research has demonstrated that most networks show maximal correlation in the beta band, little is known about how spatial patterns of correlations may differ across frequencies. This study analyzed MEG data from 18 healthy subjects to determine if the spatial patterns of RSNs differed between delta, theta, alpha, beta, gamma, and high gamma frequency bands. To validate our method, we focused on the sensorimotor network, which is well-characterized and robust in both MEG and functional magnetic resonance imaging (fMRI) resting state data. Synthetic aperture magnetometry (SAM) was used to project signals into anatomical source space separately in each band before a group temporal ICA was performed over all subjects and bands. This method preserved the inherent correlation structure of the data and reflected connectivity derived from single-band ICA, but also allowed identification of spatial spectral modes that are consistent across subjects. The implications of these results on our understanding of sensorimotor function are discussed, as are the potential applications of this technique. Hum Brain Mapp 38:779-791, 2017. © 2016 Wiley Periodicals, Inc.
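The building block of these networks — the band-limited power envelope — can be sketched on synthetic data: two "sensors" share a slow amplitude modulation but carry beta-band oscillations at different phases, so the raw signals are nearly uncorrelated while their envelopes correlate strongly. All signal parameters below are illustrative; this is not the paper's SAM/ICA pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 250.0                                   # sample rate (Hz)
t = np.arange(0, 60, 1 / fs)
envelope = 1.0 + 0.5 * np.sin(2 * np.pi * 0.1 * t)   # shared slow modulation
x = envelope * np.sin(2 * np.pi * 20 * t)            # beta-band carrier
y = envelope * np.cos(2 * np.pi * 20 * t)            # same band, 90 deg shift

def band_envelope(sig, lo=13.0, hi=30.0):
    """Beta bandpass followed by the Hilbert amplitude envelope."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return np.abs(hilbert(filtfilt(b, a, sig)))

raw_r = np.corrcoef(x, y)[0, 1]
env_r = np.corrcoef(band_envelope(x), band_envelope(y))[0, 1]
print(f"raw correlation: {raw_r:.2f}, envelope correlation: {env_r:.2f}")
```

Envelope coupling of this kind is what survives when the oscillations themselves are not phase-locked, which is why RSN analyses correlate power envelopes rather than raw band-passed signals.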
NASA Technical Reports Server (NTRS)
Whiteman, David N.; Vermeesch, Kevin C.; Oman, Luke D.; Weatherhead, Elizabeth C.
2011-01-01
Recent published work assessed the amount of time to detect trends in atmospheric water vapor over the coming century. We address the same question and conclude that under the most optimistic scenarios and assuming perfect data (i.e., observations with no measurement uncertainty) the time to detect trends will be at least 12 years at approximately 200 hPa in the upper troposphere. Our times to detect trends are therefore shorter than those recently reported and this difference is affected by data sources used, method of processing the data, geographic location and pressure level in the atmosphere where the analyses were performed. We then consider the question of how instrumental uncertainty plays into the assessment of time to detect trends. We conclude that due to the high natural variability in atmospheric water vapor, the amount of time to detect trends in the upper troposphere is relatively insensitive to instrumental random uncertainty and that it is much more important to increase the frequency of measurement than to decrease the random error in the measurement. This is put in the context of international networks such as the Global Climate Observing System (GCOS) Reference Upper-Air Network (GRUAN) and the Network for the Detection of Atmospheric Composition Change (NDACC) that are tasked with developing time series of climate quality water vapor data.
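Trend-detection times of this kind are commonly computed with the Weatherhead et al. (1998) expression for the number of years needed to detect a linear trend with 90% probability in a monthly series. The input numbers below are assumed for illustration, not the values from this study.

```python
import math

# Weatherhead et al. (1998):
#     n* = [ (3.3 * sigma_N / |omega|) * sqrt((1 + phi) / (1 - phi)) ]^(2/3)
# omega: trend per year; sigma_N: residual (natural variability) standard
# deviation; phi: lag-1 autocorrelation of the residuals.
def years_to_detect(trend_per_year: float, sigma_n: float, phi: float) -> float:
    return ((3.3 * sigma_n / abs(trend_per_year))
            * math.sqrt((1.0 + phi) / (1.0 - phi))) ** (2.0 / 3.0)

# A 1 %/yr trend against 5% natural variability, modest autocorrelation:
print(f"{years_to_detect(1.0, 5.0, 0.3):.1f} years")
```

Because sigma_N enters directly while instrument noise only adds to it in quadrature, large natural variability dominates the detection time — the abstract's point that measurement frequency matters more than random measurement error.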
NASA Astrophysics Data System (ADS)
Gherm, Vadim E.; Zernov, Nikolay N.; Strangeways, Hal J.
2011-06-01
It can be important to determine the correlation of different-frequency L-band signals that have followed transionospheric paths. In the future, both GPS and the new Galileo satellite system will broadcast three frequencies, enabling more advanced three-frequency correction schemes, so knowledge of the correlations of different frequency pairs under scintillation conditions is desirable. Even at present, it would be helpful to know how dual-frequency Global Navigation Satellite System positioning can be affected by lack of correlation between the L1 and L2 signals. To treat this problem of signal correlation for the case of strong scintillation, a previously constructed simulator program, based on the hybrid method, has been further modified to simulate the fields for both frequencies on the ground, taking account of their cross correlation. The errors in the two-frequency range-finding method caused by scintillation have then been estimated for particular ionospheric conditions and for a realistic, fully three-dimensional model of the ionospheric turbulence. The results, presented for five frequency pairs (L1/L2, L1/L3, L1/L5, L2/L3, and L2/L5), show the dependence of diffractional errors on the scintillation index S4; the errors diverge from a linear relationship as scintillation effects strengthen and may reach ten centimeters or more. The correlation of the phases at spaced frequencies has also been studied; the correlation coefficients for different pairs of frequencies depend on the procedure of phase retrieval and reduce slowly as both the variance of the electron density fluctuations and cycle slips increase.
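Why dual-frequency ranging depends on L1/L2 correlation can be seen from the ionosphere-free combination: the first-order group delay scales as 40.3·TEC/f², so combining two pseudoranges cancels it exactly when both signals see the same TEC. The frequencies are the GPS L1/L2 values; the range and TEC below are made-up illustrative numbers.

```python
# GPS carrier frequencies (Hz):
F1 = 1575.42e6      # L1
F2 = 1227.60e6      # L2

def iono_free(p1: float, p2: float) -> float:
    """Ionosphere-free pseudorange combination of two frequencies."""
    return (F1**2 * p1 - F2**2 * p2) / (F1**2 - F2**2)

true_range = 22_000_000.0            # metres, illustrative
tec = 50.0e16                        # electrons/m^2, illustrative
p1 = true_range + 40.3 * tec / F1**2   # first-order ionospheric delay on L1
p2 = true_range + 40.3 * tec / F2**2   # first-order ionospheric delay on L2

residual = iono_free(p1, p2) - true_range
print(f"residual after combination: {residual:.6f} m")
```

Under strong scintillation the effective delays on the two frequencies are no longer tied by the same smooth 1/f² law, so the combination leaves the centimetre-level diffractional residuals the abstract quantifies.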
NASA Technical Reports Server (NTRS)
Long, S. A. T.
1974-01-01
Formulas are derived for the root-mean-square (rms) displacement, slope, and curvature errors in an azimuth-elevation image trace of an elongated object in space, as functions of the number and spacing of the input data points and the rms elevation error in the individual input data points from a single observation station. Also, formulas are derived for the total rms displacement, slope, and curvature error vectors in the triangulation solution of an elongated object in space due to the rms displacement, slope, and curvature errors, respectively, in the azimuth-elevation image traces from different observation stations. The total rms displacement, slope, and curvature error vectors provide useful measure numbers for determining the relative merits of two or more different triangulation procedures applicable to elongated objects in space.
Written Type and Token Frequency Measures of Fifty Spanish Derivational Morphemes.
Lázaro, Miguel; Acha, Joana; Illera, Víctor; Sainz, Javier S
2016-11-08
Several databases of written language exist in Spanish that manage important information on the lexical and sublexical characteristics of words. However, there is no database with information on the productivity and frequency of use of derivational suffixes: sublexical units with an essential role in the formation of orthographic representations and lexical access. This work examines these two measures, known as type and token frequencies, for a series of 50 derivational suffixes and their corresponding orthographic endings. Derivational suffixes are differentiated from orthographic endings by eliminating pseudoaffixed words from the list of orthographic endings (cerveza [beer] is a simple word despite its ending in -eza). We provide separate data for child and adult populations, using two databases commonly accessed by psycholinguists conducting research in Spanish. We describe the filtering process used to obtain descriptive data that will provide information for future research on token and type frequencies of morphemes. This database is an important development for researchers focusing on the role of morphology in lexical acquisition and access.
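The two measures the database reports can be sketched directly. The tiny lexicon below is made up for illustration (it is not an entry from the databases used in the study), but it reuses the abstract's cerveza example of a pseudo-suffixed word that must be filtered out.

```python
# word -> corpus token count (made-up Spanish-like mini-lexicon):
lexicon = {
    "belleza": 120, "tristeza": 80, "pereza": 40,
    "cerveza": 300,   # ends in -eza but is monomorphemic (pseudo-suffixed)
    "librero": 60, "panadero": 45,
}
pseudo_suffixed = {"cerveza"}   # filtered out, as in the study

def suffix_frequencies(suffix: str):
    """(type frequency, token frequency) of a derivational suffix."""
    words = [w for w in lexicon
             if w.endswith(suffix) and w not in pseudo_suffixed]
    type_freq = len(words)                       # distinct derived words
    token_freq = sum(lexicon[w] for w in words)  # summed corpus counts
    return type_freq, token_freq

print(suffix_frequencies("eza"))
```

Without the pseudo-suffix filter, cerveza would inflate the -eza token count by 300 while adding a spurious type — exactly the distinction between orthographic endings and derivational suffixes drawn above.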
2014-09-01
...hour; for our initial computation, we used a solar EUV irradiance uncertainty of 7% at a forecast time of 7 days, so that the forecast error at the... We developed approximate expressions for how solar irradiance forecast errors propagate to atmospheric density forecasts and to in-track... trajectories of most objects in low-Earth orbit, and solar variability is the largest source of error in upper atmospheric density forecasts. There is...
NASA Technical Reports Server (NTRS)
Zemba, Michael; Nessel, James; Tarasenko, Nicholas; Lane, Steven
2017-01-01
Since October 2015, NASA Glenn Research Center (GRC) and the Air Force Research Laboratory (AFRL) have collaboratively operated an RF terrestrial link in Albuquerque, New Mexico to characterize atmospheric propagation phenomena at 72 and 84 GHz. The W/V-band Terrestrial Link Experiment (WTLE) consists of coherent transmitters at each frequency on the crest of the Sandia Mountains and a corresponding pair of receivers in south Albuquerque. The beacon receivers provide a direct measurement of the link attenuation, while concurrent weather instrumentation provides a measurement of the atmospheric conditions. Among the available weather instruments is an optical disdrometer, which yields an optical measurement of rain rate as well as droplet size and velocity distributions (DSD, DVD). In particular, the DSD can be used to derive an instantaneous scaling factor (ISF) by which the measured data at one frequency can be scaled to another, for example, scaling the 72 GHz time series to an expected 84 GHz time series. Given the availability of both the DSD prediction and the directly observed 84 GHz attenuation, WTLE is thus uniquely able to assess DSD-derived instantaneous frequency scaling in the W/V-bands. Previous work along these lines has investigated the DSD-derived ISF at Ka- and Q-band (20 GHz to 40 GHz) using a satellite beacon receiver experiment in Milan, Italy [1-3]. This work will expand the investigation to terrestrial links in the W/V-bands, where the frequency scaling factor is lower and where the link is also much more sensitive to attenuation by rain, clouds, and other atmospheric effects.
NASA Astrophysics Data System (ADS)
Zhang, Feifei; Dou, Xiankang; Sun, Dongsong; Shu, Zhifeng; Xia, Haiyun; Gao, Yuanyuan; Hu, Dongdong; Shangguan, Mingjia
2014-12-01
Direct detection Doppler wind lidar (DWL) has demonstrated its capability for atmospheric wind detection from the troposphere to the stratosphere with high temporal and spatial resolution. We design and describe a fiber-based optical receiver for direct detection DWL. The locking error of the relative laser frequency is then analyzed, and the dependent variables turn out to be the relative error of the calibrated constant and the slope of the transmission function. For high-accuracy measurement of the calibrated constant in a fiber-based system, an integrating sphere is employed for its uniform scattering. Moreover, temporally widening the pulse laser allows more samples to be acquired by an analog-to-digital card of the same sampling rate. The result shows a relative error of 0.7% for the calibrated constant. For the slope, a new improved locking filter for a Fabry-Perot interferometer was considered and designed with a larger slope. With these two strategies, the locking error of the relative laser frequency is calculated to be about 3 MHz, which is equivalent to a radial velocity of about 0.53 m/s and demonstrates the effective improvement of frequency locking for a robust DWL.
NASA Astrophysics Data System (ADS)
Park, Won-Kwang
2017-04-01
This paper concerns a fast, one-step iterative technique for imaging extended perfectly conducting cracks with a Dirichlet boundary condition. In order to reconstruct the shape of cracks from scattered field data measured at the boundary, we introduce a topological derivative-based electromagnetic imaging functional operated at several nonzero frequencies. The structure of the imaging functionals is carefully analyzed by establishing relationships with infinite series of Bessel functions for configurations of both symmetric and non-symmetric incident field directions. The identified structure explains why the application of incident fields with symmetric directions operated at multiple frequencies guarantees a successful reconstruction. Various numerical simulations with noise-corrupted data are conducted to assess the performance, effectiveness, robustness, and limitations of the proposed technique.
System for adjusting frequency of electrical output pulses derived from an oscillator
Bartholomew, David B.
2006-11-14
A system for setting and adjusting a frequency of electrical output pulses derived from an oscillator in a network is disclosed. The system comprises an accumulator module configured to receive pulses from an oscillator and to output an accumulated value. An adjustor module is configured to store an adjustor value used to correct local oscillator drift. A digital adder adds values from the accumulator module to values stored in the adjustor module and outputs their sums to the accumulator module, where they are stored. The digital adder also outputs an electrical pulse to a logic module. The logic module is in electrical communication with the adjustor module and the network. The logic module may change the value stored in the adjustor module to compensate for local oscillator drift or change the frequency of output pulses. The logic module may also keep time and calculate drift.
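The accumulator-adder mechanism described above behaves like a numerically controlled oscillator whose output rate is trimmed by the adjustor value. A minimal sketch of that behavior, with an assumed 16-bit accumulator and illustrative increments (values are not from the patent):

```python
def nco_pulses(n_clocks, base_increment, adjustor, width=16):
    """Count output pulses from an accumulator-based divider.

    On each oscillator clock, the adder sums the accumulator with
    (base_increment + adjustor); overflow of the fixed-width
    accumulator emits one output pulse. Raising the adjustor value
    raises the output-pulse frequency, which is how local oscillator
    drift can be compensated.
    """
    modulus = 1 << width
    acc = 0
    pulses = 0
    for _ in range(n_clocks):
        acc += base_increment + adjustor
        if acc >= modulus:       # accumulator overflow -> one pulse
            acc -= modulus
            pulses += 1
    return pulses
```

The output frequency scales as (base_increment + adjustor) / 2**width times the oscillator clock, so small adjustor changes give fine-grained frequency correction without touching the oscillator itself.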
Tahara, Tatsuki; Shimozato, Yuki; Awatsuji, Yasuhiro; Nishio, Kenzo; Ura, Shogo; Matoba, Osamu; Kubota, Toshihiro
2012-01-15
We propose a single-shot digital holography technique in which the complex amplitude distribution is obtained by spatial-carrier phase-shifting (SCPS) interferometry together with correction of the inherent phase-shift error that occurs in this interferometry. The 0th-order diffraction wave and the conjugate image are removed by phase-shifting interferometry and a Fourier transform technique, respectively. The inherent error is corrected in the spatial frequency domain. The proposed technique does not require an iteration process to remove the unwanted images and has an advantage in field of view in comparison to the conventional SCPS technique.
NASA Astrophysics Data System (ADS)
Thyer, Mark; Li, Jing; Lambert, Martin; Kuczera, George; Metcalfe, Andrew
2015-04-01
Flood extremes are driven by highly variable and complex climatic and hydrological processes. Derived flood frequency methods are often used to predict the flood frequency distribution (FFD) because they can provide predictions in ungauged catchments and evaluate the impact of land-use or climate change. This study presents recent work on the development of a new derived flood frequency method called the hybrid causative events (HCE) approach. The advantage of the HCE approach is that it combines the accuracy of the continuous simulation approach with the computational efficiency of event-based approaches. Derived flood frequency methods can be divided into two classes. Event-based approaches provide fast estimation, but can also lead to prediction bias due to limitations of the inherent assumptions required for obtaining input information (rainfall and catchment wetness) for events that cause large floods. Continuous simulation produces more accurate predictions, however, at the cost of massive computational time. The HCE method uses a short continuous simulation to provide inputs for a rainfall-runoff model running in an event-based fashion. A proof-of-concept pilot study showed that the HCE produces estimates of the flood frequency distribution with similar accuracy to continuous simulation, but with dramatically reduced computation time. Recent work incorporated seasonality into the HCE approach and evaluated it with a more realistic set of eight sites from a wide range of climate zones, typical of Australia, using a virtual catchment approach. The seasonal hybrid-CE provided accurate predictions of the FFD for all sites. Comparison with the existing non-seasonal hybrid-CE showed that for some sites the non-seasonal hybrid-CE significantly over-predicted the FFD. Analysis of the underlying causes of whether a site had a high, low or no need to use seasonality found that they were based on a combination of reasons that were difficult to predict a priori. Hence it is recommended
NASA Astrophysics Data System (ADS)
Pankratov, Oleg; Kuvshinov, Alexei
2010-04-01
Electromagnetic (EM) studies of the Earth have advanced significantly over the past few years. This progress was driven, in particular, by new developments in methods of 3-D inversion of EM data. Due to the large scale of 3-D EM inverse problems, iterative gradient-type methods have mostly been employed. In these methods one has to calculate, multiple times, the gradient of the penalty function (a sum of misfit and regularization terms) with respect to the model parameters. However, even with modern computational capabilities the straightforward calculation of the misfit gradients based on numerical differentiation is extremely time consuming. A much more efficient and elegant way to calculate the gradient of the misfit is provided by the so-called `adjoint' approach, which is now widely used in many 3-D numerical schemes for inverting EM data of different types and origin. It allows the calculation of the misfit gradient for the price of only a few additional forward calculations. In spite of its popularity, we did not find in the literature any general description of the approach that would allow researchers to apply this methodology in a straightforward manner to their scenario of interest. In this paper, we present a formalism for the efficient calculation of the derivatives of EM frequency-domain responses and the derivatives of the misfit with respect to variations of 3-D isotropic/anisotropic conductivity. The approach is rather general; it works with single-site responses, multi-site responses and responses that include spatial derivatives of the EM field. The formalism also allows for various types of parametrization of the 3-D conductivity distribution. Using this methodology one can readily obtain appropriate formulae for specific sounding methods. To illustrate the concept we provide such formulae for a number of EM techniques: geomagnetic depth sounding (GDS), conventional and generalized magnetotellurics, the magnetovariational method, horizontal
Frequency and origins of hemoglobin S mutation in African-derived Brazilian populations.
De Mello Auricchio, Maria Teresa Balester; Vicente, João Pedro; Meyer, Diogo; Mingroni-Netto, Regina Célia
2007-12-01
Africans arrived in Brazil as slaves in great numbers, mainly after 1550. Before the abolition of slavery in Brazil in 1888, many communities, called quilombos, were formed by runaway or abandoned African slaves. These communities are presently referred to as remnants of quilombos, and many are still partially genetically isolated. These remnants can be regarded as relicts of the original African genetic contribution to the Brazilian population. In this study we assessed frequencies and probable geographic origins of hemoglobin S (HBB*S) mutations in remnants of quilombo populations in the Ribeira River valley, São Paulo, Brazil, to reconstruct the history of African-derived populations in the region. We screened for HBB*S mutations in 11 quilombo populations (1,058 samples) and found HBB*S carrier frequencies that ranged from 0% to 14%. We analyzed beta-globin gene cluster haplotypes linked to the HBB*S mutation in 86 chromosomes and found the four known African haplotypes: 70 (81.4%) Bantu (Central African Republic), 7 (8.1%) Benin, 7 (8.1%) Senegal, and 2 (2.3%) Cameroon haplotypes. One sickle cell homozygote was Bantu/Bantu and two homozygotes had Bantu/Benin combinations. The high frequency of the sickle cell trait and the diversity of HBB*S-linked haplotypes indicate that Brazilian remnants of quilombos are interesting repositories of the genetic diversity present in the ancestral African populations.
NASA Astrophysics Data System (ADS)
Liu, Wei; Sneeuw, Nico; Jiang, Weiping
2017-04-01
The GRACE mission has contributed greatly to temporal gravity field monitoring over the past few years. However, ocean tides cause notable alias errors for single-pair spaceborne gravimetry missions like GRACE in two ways. First, undersampling by the satellite orbit aliases high-frequency tidal signals into the gravity signal. Second, the ocean tide models used for de-aliasing in the gravity field retrieval carry errors, which directly alias into the recovered gravity field. The GRACE satellites are in a non-repeat orbit, which precludes alias-error spectral estimation based on the repeat period. Moreover, the gravity field recovery is conducted at non-strictly monthly intervals and has occasional gaps, which results in an unevenly sampled time series. In view of these two aspects, we investigate a data-driven method to mitigate the ocean tide alias error in a post-processing mode.
García-González, Miguel A; Fernández-Chimeno, Mireya; Ramos-Castro, Juan
2009-02-01
An analysis of the errors due to the finite resolution of RR time series in the estimation of the approximate entropy (ApEn) is described. The quantification errors in the discrete RR time series produce considerable errors in the ApEn estimation (bias and variance) when the signal variability or the sampling frequency is low. Similar errors can be found in indices related to the quantification of recurrence plots. An easy way to calculate a figure of merit [the signal to resolution of the neighborhood ratio (SRN)] is proposed in order to predict when the bias in the indices could be high. When SRN is close to an integer value n, the bias is higher than when near n - 1/2 or n + 1/2. Moreover, if SRN is close to an integer value, the lower this value, the greater the bias is.
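The quantification effect described above can be imitated numerically by rounding a synthetic RR series to a finite resolution and comparing ApEn before and after. This is a simplified illustration of the mechanism, not the authors' analysis; ApEn here follows the standard Pincus definition and the series parameters are invented:

```python
import math
import random

def apen(x, m=2, r=None):
    """Approximate entropy of series x (standard Pincus definition).
    Default tolerance r is 0.2 times the series standard deviation."""
    if r is None:
        mean = sum(x) / len(x)
        r = 0.2 * math.sqrt(sum((v - mean) ** 2 for v in x) / len(x))
    def phi(m):
        n = len(x) - m + 1
        templates = [x[i:i + m] for i in range(n)]
        total = 0.0
        for t in templates:
            matches = sum(
                1 for u in templates
                if max(abs(a - b) for a, b in zip(t, u)) <= r)
            total += math.log(matches / n)  # self-match keeps this finite
        return total / n
    return phi(m) - phi(m + 1)

random.seed(1)
rr = [800.0 + random.gauss(0, 20) for _ in range(200)]  # synthetic RR, ms
res = 4.0                                   # e.g. a 250 Hz sampling clock
rr_q = [res * round(v / res) for v in rr]   # quantized (finite-resolution) series
# The ApEn of the quantized series generally deviates from that of the
# original, and the discrepancy grows as the resolution coarsens
# relative to the signal variability.
```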
2011-01-01
Introduction Continuous cardiac output monitoring is used for early detection of hemodynamic instability and guidance of therapy in critically ill patients. Recently, the accuracy of pulse contour-derived cardiac output (PCCO) has been questioned in different clinical situations. In this study, we examined agreement between PCCO and transcardiopulmonary thermodilution cardiac output (COTCP) in critically ill patients, with special emphasis on norepinephrine (NE) administration and the time interval between calibrations. Methods This prospective, observational study was performed with a sample of 73 patients (mean age, 63 ± 13 years) requiring invasive hemodynamic monitoring on a non-cardiac surgery intensive care unit. PCCO was recorded immediately before calibration by COTCP. Bland-Altman analysis was performed on data subsets comparing agreement between PCCO and COTCP according to NE dosage and the time interval between calibrations up to 24 hours. Further, central artery stiffness was calculated on the basis of the pulse pressure to stroke volume relationship. Results A total of 330 data pairs were analyzed. For all data pairs, the mean COTCP (±SD) was 8.2 ± 2.0 L/min. PCCO had a mean bias of 0.16 L/min with limits of agreement of -2.81 to 3.15 L/min (percentage error, 38%) when compared to COTCP. Whereas the bias between PCCO and COTCP was not significantly different between NE dosage categories or categories of time elapsed between calibrations, interchangeability (percentage error <30%) between methods was present only in the high NE dosage subgroup (≥0.1 μg/kg/min), as the percentage errors were 40%, 47% and 28% in the no NE, NE < 0.1 and NE ≥ 0.1 μg/kg/min subgroups, respectively. PCCO was not interchangeable with COTCP in subgroups of different calibration intervals. The high NE dosage group showed significantly increased central artery stiffness. Conclusions This study shows that NE dosage, but not the time interval between calibrations, has an
2014-01-01
Background In order to characterize the intracranial pressure-volume reserve capacity, the correlation coefficient (R) between the ICP wave amplitude (A) and the mean ICP level (P), the RAP index, has been used to improve the diagnostic value of ICP monitoring. Baseline pressure errors (BPEs), caused by spontaneous shifts or drifts in baseline pressure, cause erroneous readings of mean ICP. Consequently, BPEs could also affect ICP indices such as the RAP, in which the mean ICP is incorporated. Methods A prospective, observational study was carried out on patients with aneurysmal subarachnoid hemorrhage (aSAH) undergoing ICP monitoring as part of their surveillance. Via the same burr hole in the skull, two separate ICP sensors were placed close to each other. For each consecutive 6-sec time window, the dynamic mean ICP wave amplitude (MWA; a measure of the amplitude of the single pressure waves) and the static mean ICP were computed. The RAP index was computed as the Pearson correlation coefficient between the MWA and the mean ICP for 40 6-sec time windows, i.e. every subsequent 4-min period (method 1). We compared this approach with a method of calculating RAP using a 4-min moving window updated every 6 seconds (method 2). Results The study included 16 aSAH patients. We compared 43,653 4-min RAP observations of signals 1 and 2 (method 1), and 1,727,000 6-sec RAP observations (method 2). The two methods of calculating RAP produced similar results. Differences in RAP ≥0.4 in at least 7% of observations were seen in 5/16 (31%) patients. Moreover, the combination of a RAP of ≥0.6 in one signal and <0.6 in the other was seen in ≥13% of RAP observations in 4/16 (25%) patients, and in ≥8% in another 4/16 (25%) patients. The frequency of differences in RAP >0.2 was significantly associated with the frequency of BPEs (5 mmHg ≤ BPE <10 mmHg). Conclusions Simultaneous monitoring from two separate, close-by ICP sensors reveals significant differences in RAP that
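The RAP computation of method 1 reduces to a Pearson correlation over 40 (MWA, mean ICP) pairs per 4-minute period. A minimal sketch with synthetic values (not patient data; the linear MWA-ICP relation below is invented for illustration):

```python
import math
import random

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sxx = sum((a - mx) ** 2 for a in xs)
    syy = sum((b - my) ** 2 for b in ys)
    return sxy / math.sqrt(sxx * syy)

def rap(mwa, mean_icp, windows=40):
    """RAP for one 4-min period: correlation over 40 6-sec windows."""
    return pearson(mwa[:windows], mean_icp[:windows])

random.seed(0)
# Synthetic 6-sec window values: mean ICP drifting upward (mmHg) and a
# wave amplitude that tracks it, mimicking exhausted reserve capacity.
icp = [10 + 0.1 * i + random.gauss(0, 0.5) for i in range(40)]
mwa = [2 + 0.15 * (p - 10) + random.gauss(0, 0.2) for p in icp]
```

A RAP near +1 indicates that amplitude rises with mean ICP (poor reserve capacity), which is why an erroneous mean ICP reading from a BPE distorts the index.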
Validation of slant delays derived from single and dual frequency GPS data
NASA Astrophysics Data System (ADS)
Deng, Z.; Dick, G.; Zus, F.; Ge, M.; Bender, M.; Wickert, J.
2010-05-01
Improved knowledge of the humidity distribution is very important for a variety of atmospheric research applications. During the last years the potential of GPS-derived tropospheric products with high temporal resolution, e.g. zenith total delays (ZTD) and slant total delays (STD), has been demonstrated. The spatial resolution depends on the network density, which needs to be improved for meteorological applications such as high-resolution numerical forecast models. Another application is water vapor tomography, which can be used to resolve the spatial structure and temporal variations of tropospheric water vapor. The GPS-derived STDs are used here as input data. To reconstruct reliable vertical profiles, a large number of STD observations covering the complete region from a wide range of angles is required. For economic reasons, network densification with single frequency (SF) receivers is recommended. The Satellite-specific Epoch-differenced Ionospheric Delay model (SEID) has been developed at the Helmholtz Centre Potsdam, GFZ German Research Centre for Geosciences to estimate ionospheric corrections for SF receivers embedded in networks of dual-frequency (DF) receivers. With these corrections the SF GPS data can be processed in the same way as the DF data. It has been shown that the SEID model is sufficient for estimating tropospheric products as well as station coordinates from SF data. The easy implementation and the accuracy of the SEID model may speed up the densification of existing networks with SF receivers. After introducing the SEID model, validation results of SF- and DF-derived tropospheric products will be presented. Currently the very sparse character of independent observations makes it difficult to assess the anticipated high quality of DF and SF STD data processed for a large network of continuously operating receivers. Therefore monitoring of GPS-derived STD data against weather analyses is an alternative. To compare STDs with their
NASA Astrophysics Data System (ADS)
Lu, Xiao-Jing; Chen, Xi; Ruschhaupt, A.; Alonso, D.; Guérin, S.; Muga, J. G.
2013-09-01
We design, by invariant-based inverse engineering, driving fields that invert the population of a two-level atom in a given time, robustly with respect to dephasing noise and/or systematic frequency shifts. Without imposing constraints, optimal protocols are insensitive to the perturbations but need an infinite energy. For a constrained value of the Rabi frequency, a flat π pulse is the least sensitive protocol to phase noise but not to systematic frequency shifts, for which we describe and optimize a family of protocols.
Large-scale derived flood frequency analysis based on continuous simulation
NASA Astrophysics Data System (ADS)
Dung Nguyen, Viet; Hundecha, Yeshewatesfa; Guse, Björn; Vorogushyn, Sergiy; Merz, Bruno
2016-04-01
There is an increasing need for spatially consistent flood risk assessments at the regional scale (several 100,000 km2), in particular in the insurance industry and for national risk reduction strategies. However, most large-scale flood risk assessments are composed of smaller-scale assessments and show spatial inconsistencies. To overcome this deficit, a large-scale flood model composed of a weather generator and catchment models was developed, reflecting the spatially inherent heterogeneity. The weather generator is a multisite and multivariate stochastic model capable of generating synthetic meteorological fields (precipitation, temperature, etc.) at daily resolution for the regional scale. These fields respect the observed autocorrelation, spatial correlation and covariance between the variables. They are used as input to the catchment models. A long-term simulation of this combined system enables the derivation of very long discharge series at many catchment locations, serving as a basis for spatially consistent flood risk estimates at the regional scale. This combined model was set up and validated for major river catchments in Germany. The weather generator was trained on 53 years of observation data at 528 stations covering not only all of Germany but also parts of France, Switzerland, the Czech Republic and Austria, with an aggregated spatial scale of 443,931 km2. 10,000 years of daily meteorological fields for the study area were generated. Likewise, rainfall-runoff simulations with SWIM were performed for the entire Elbe, Rhine, Weser, Donau and Ems catchments. The validation results illustrate a good performance of the combined system, as the simulated flood magnitudes and frequencies agree well with the observed flood data. Based on continuous simulation, this model chain is then used to estimate flood quantiles for the whole of Germany including upstream headwater catchments in neighbouring countries. This continuous large-scale approach overcomes the several
The Huygens Doppler Wind Experiment - Titan Winds Derived from Probe Radio Frequency Measurements
NASA Astrophysics Data System (ADS)
Bird, M. K.; Dutta-Roy, R.; Heyl, M.; Allison, M.; Asmar, S. W.; Folkner, W. M.; Preston, R. A.; Atkinson, D. H.; Edenhofer, P.; Plettemeier, D.; Wohlmuth, R.; Iess, L.; Tyler, G. L.
2002-07-01
A Doppler Wind Experiment (DWE) will be performed during the Titan atmospheric descent of the ESA Huygens Probe. The direction and strength of Titan's zonal winds will be determined with an accuracy better than 1 m s-1 from the start of mission at an altitude of ˜160 km down to the surface. The Probe's wind-induced horizontal motion will be derived from the residual Doppler shift of its S-band radio link to the Cassini Orbiter, corrected for all known orbit and propagation effects. It is also planned to record the frequency of the Probe signal using large ground-based antennas, thereby providing an additional component of the horizontal drift. In addition to the winds, DWE will obtain valuable information on the rotation, parachute swing and atmospheric buffeting of the Huygens Probe, as well as its position and attitude after Titan touchdown. The DWE measurement strategy relies on experimenter-supplied Ultra-Stable Oscillators to generate the transmitted signal from the Probe and to extract the frequency of the received signal on the Orbiter. Results of the first in-flight checkout, as well as the DWE Doppler calibrations conducted with simulated Huygens signals uplinked from ground (Probe Relay Tests), are described. Ongoing efforts to measure and model Titan's winds using various Earth-based techniques are briefly reviewed.
NASA Astrophysics Data System (ADS)
Kuhn, Michael; Hirt, Christian
2016-09-01
In gravity forward modelling, the concept of Rock-Equivalent Topography (RET) is often used to simplify the computation of gravity implied by rock, water, ice and other topographic masses. In the RET concept, topographic masses are compressed (approximated) into equivalent rock, allowing the use of a single constant mass-density value. Many studies acknowledge the approximate character of the RET, but few have yet attempted to quantify and analyse the approximation errors in detail for various gravity field functionals and heights of computation points. Here, we provide an in-depth examination of approximation errors associated with the RET compression for the topographic gravitational potential and its first- and second-order derivatives. Using the Earth2014 layered topography suite we apply Newtonian integration in the spatial domain in two variants: (a) rigorous forward modelling of all mass bodies, and (b) approximative modelling using RET. The differences between both variants, which reflect the RET approximation error, are formed and studied for an ensemble of 10 different gravity field functionals at three levels of altitude (on and 3 km above the Earth's surface and at 250 km satellite height). The approximation errors are found to be largest at the Earth's surface over RET compression areas (oceans, ice shields) and to increase for the first- and second-order derivatives. Relative errors, computed here as the ratio of the range of differences between both variants to the range in signal, are at the level of 0.06-0.08 % for the potential, ˜ 3-7 % for the first-order derivatives at the Earth's surface (˜ 0.1 % at satellite altitude). For the second-order derivatives, relative errors are below 1 % at satellite altitude, at the 10-20 % level at 3 km and reach maximum values as large as ˜ 20 to 110 % near the surface. As such, the RET approximation errors may be acceptable for functionals computed far away from the Earth's surface or studies focussing on
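The relative-error measure used in the comparison above (range of the RET-minus-rigorous differences divided by the range of the signal) is straightforward to reproduce. A minimal sketch with illustrative numbers, not Earth2014 results:

```python
def range_ratio_error(rigorous, approx):
    """Relative error in percent: the range of the differences between
    the approximate and rigorous functionals, divided by the range of
    the rigorous signal (the measure used in the RET comparison)."""
    diffs = [a - r for a, r in zip(approx, rigorous)]
    return 100.0 * (max(diffs) - min(diffs)) / (max(rigorous) - min(rigorous))

# Illustrative functional values at a few computation points:
rigorous = [10.0, 12.5, 11.0, 9.5, 13.0]
approx   = [10.01, 12.52, 11.0, 9.5, 13.0]
```

Unlike a pointwise percentage error, this range-ratio measure stays well-defined even where the signal passes through zero, which matters for derivatives of the potential that change sign.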
Multi-frequency acoustic derivation of particle size using 'off-the-shelf' ADCPs.
NASA Astrophysics Data System (ADS)
Haught, D. R.; Wright, S. A.; Venditti, J. G.; Church, M. A.
2015-12-01
Suspended sediment particle size in rivers is of great interest due to its influence on riverine and coastal morphology, socio-economic viability, and ecological health and restoration. Prediction of suspended sediment transport from hydraulics remains a stubbornly difficult problem, particularly for the washload component, which is controlled by sediment supply from the drainage basin. This has led to a number of methods for continuously monitoring suspended sediment concentration and mean particle size, the most popular currently being hydroacoustic methods. Here, we explore the possibility of using theoretical inversion of the sonar equation to derive an estimate of mean particle size and the standard deviation of the grain size distribution (GSD) using three 'off-the-shelf' acoustic Doppler current profilers (ADCPs) with frequencies of 300, 600 and 1200 kHz. The instruments were deployed in the sand-bedded reach of the Fraser River, British Columbia. We use bottle samples collected in the acoustic beams to test acoustic signal inversion methods. Concentrations range from 15-300 mg/L and the suspended load at the site is ~25% sand, ~75% silt/clay. Measured mean particle radii from samples ranged from 10-40 microns with relative standard deviations ranging from 0.75 to 2.5. Initial results indicate that the acoustically derived mean particle radius compares well with the measured particle radius, using a theoretical inversion method adapted to the Fraser River sediment.
Frequency-dependent tsunami-amplification factor derived from tsunami numerical simulations
NASA Astrophysics Data System (ADS)
Tsushima, H.
2016-12-01
I develop a frequency-dependent tsunami-amplification factor for real-time correction of tsunami site response for tsunami early warning. A tsunami waveform at an observing point can be modeled by the convolution of source, path and site effects in the time domain. When we compare tsunami waveforms at observing points outside and inside a bay, the source and path effects can be regarded as equal; thus, the spectral ratio of the two waveforms gives a frequency-dependent tsunami-amplification factor. If such an amplification factor is prepared in advance of an earthquake, its real-time convolution with an offshore tsunami waveform provides a tsunami prediction at the coastal site. In this study, numerical tsunami simulations for many earthquakes were performed to synthesize the tsunami waveforms used in the spectral-ratio analysis. I then averaged the resulting spectral ratios to obtain the frequency-dependent tsunami-amplification factor. Source models of magnitude 7.5-8.7 interplate earthquakes were assumed at 26 locations along the Japan-Kuril trenches, and the resultant tsunamis were calculated numerically to synthesize 4-hour tsunami waveforms at observing points along the Japanese coast. Two tsunami simulations were performed for each source: one based on nonlinear long wave theory, and the other based on linear long wave theory. I focus on the tsunami-amplification factor at Miyako Bay, northeastern Japan. The resultant tsunami-height spectral ratios between the center of Miyako Bay and the outside show two peaks at wave periods of 20 and 40 min. The peak amplitudes derived from the nonlinear tsunami simulations are smaller than those from the linear simulations, which may be caused by energy attenuation due to bottom friction. On the other hand, in the spectral ratio between the closed-off section of the bay and the outside, the peak at 20-min period cannot be seen. This indicates that the frequency-dependent amplification factor may depend on location even within the same bay. These
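The spectral-ratio step can be sketched with a plain DFT: the ratio of the amplitude spectra of the inside-bay and outside-bay waveforms gives the amplification at each frequency. The synthetic waveforms below are invented for illustration (a naive O(N^2) transform, not the study's simulation output):

```python
import cmath
import math

def amplitude_spectrum(x):
    """Amplitude of the DFT of a real series (naive O(N^2) transform)."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2 + 1)]

def spectral_ratio(inside, outside):
    """Frequency-dependent amplification: inside-bay over outside-bay,
    guarded against division by near-zero spectral bins."""
    si, so = amplitude_spectrum(inside), amplitude_spectrum(outside)
    return [a / b if b > 1e-9 else 0.0 for a, b in zip(si, so)]

# Synthetic check: the inside-bay wave equals the outside wave but
# amplified 3x at one harmonic, mimicking a resonant bay period.
n = 128
outside = [math.sin(2 * math.pi * 4 * t / n)
           + math.sin(2 * math.pi * 9 * t / n) for t in range(n)]
inside = [3 * math.sin(2 * math.pi * 4 * t / n)
          + math.sin(2 * math.pi * 9 * t / n) for t in range(n)]
ratio = spectral_ratio(inside, outside)
# ratio recovers ~3 at harmonic k=4 and ~1 at harmonic k=9
```

In practice the ratio would be averaged over many simulated events, as the abstract describes, to suppress event-specific source and path residuals.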
Xu, Renfeng; Bradley, Arthur; Thibos, Larry N.
2013-01-01
Purpose We tested the hypothesis that pupil apodization is the basis for the central pupil bias of spherical refractions in eyes with spherical aberration. Methods We employed Fourier computational optics in which we varied spherical aberration levels, pupil size, and pupil apodization (Stiles-Crawford effect) within the pupil function, from which point spread functions and optical transfer functions were computed. Through-focus analysis determined the refractive correction that optimized retinal image quality. Results For a large pupil (7 mm), as spherical aberration levels increase, refractions that optimize the visual Strehl ratio mirror refractions that maximize high spatial frequency modulation in the image, and both focus a near-paraxial region of the pupil. These refractions are not affected by Stiles-Crawford apodization. Refractions that optimize low spatial frequency modulation come close to minimizing wavefront RMS, and vary with the level of spherical aberration and the Stiles-Crawford effect. In the presence of significant levels of spherical aberration (e.g. C40 = 0.4 µm, 7 mm pupil), low spatial frequency refractions can induce a −0.7 D myopic shift compared to the high spatial frequency refraction, and refractions that maximize image contrast of a 3 cycle per degree square-wave grating can cause a −0.75 D myopic drift relative to refractions that maximize image sharpness. Discussion Because of the small depth of focus associated with high spatial frequency stimuli, the large change in dioptric power across the pupil caused by spherical aberration limits the effective aperture contributing to the image of high spatial frequencies. Thus, when imaging high spatial frequencies, spherical aberration effectively induces an annular aperture defining the portion of the pupil contributing to a well-focused image. As spherical focus is manipulated during the refraction procedure, the dimensions of the annular aperture change. Image quality is maximized when the inner radius of the induced
Abtahi, Shirin; Abtahi, Farhad; Ellegård, Lars; Johannsson, Gudmundur; Bosaeus, Ingvar
2015-01-01
For several decades electrical bioimpedance (EBI) has been used to assess body fluid distribution and body composition. Despite the development of several different approaches for assessing total body water (TBW), it remains uncertain whether bioimpedance spectroscopy (BIS) approaches are more accurate than single-frequency regression equations. The main objective of this study was to answer this question by calculating the expected accuracy of a single measurement for different EBI methods. The results showed that all methods produced similarly high correlation and concordance coefficients, indicating good accuracy as methods. The limits of agreement produced from the Bland-Altman analysis likewise indicated that the performance of the single-frequency Sun prediction equations at the population level was close to that of both BIS methods; however, when comparing the Mean Absolute Percentage Error between the single-frequency prediction equations and the BIS methods, a significant difference was obtained, indicating slightly better accuracy for the BIS methods. Despite the higher accuracy of the BIS methods over the 50 kHz prediction equations at both the population and the individual level, the magnitude of the improvement was small. This slight improvement in the accuracy of BIS methods is suggested to be insufficient to warrant their clinical use where the most accurate predictions of TBW are required, for example, when assessing fluid overload in dialysis. To reach expected errors below 4-5%, novel and individualized approaches must be developed to improve the accuracy of bioimpedance-based methods for the advent of innovative personalized health monitoring applications. PMID:26137489
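The two accuracy metrics compared in this study can be stated compactly. A minimal sketch (function names are ours, not from the paper):

```python
import numpy as np

def mape(ref, est):
    # Mean Absolute Percentage Error of estimates against a reference method
    ref = np.asarray(ref, float)
    est = np.asarray(est, float)
    return 100.0 * np.mean(np.abs((est - ref) / ref))

def bland_altman_limits(ref, est):
    # Mean bias and 95% limits of agreement (bias ± 1.96 SD of differences)
    d = np.asarray(est, float) - np.asarray(ref, float)
    bias = d.mean()
    sd = d.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

A method can show narrow Bland-Altman limits at the population level yet still differ measurably in MAPE, which is the distinction the study draws between the 50 kHz equations and BIS.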
NASA Astrophysics Data System (ADS)
Pietrzyk, Mariusz W.; Donovan, Tim; Brennan, Patrick C.; Dix, Alan; Manning, David J.
2011-03-01
Aim: To optimize automated classification of radiological errors during lung nodule detection from chest radiographs (CxR) using a support vector machine (SVM) run on spatial-frequency features extracted from the local background of selected regions. Background: The majority of unreported pulmonary nodules are visually detected but not recognized, as shown by prolonged dwell times at false-negative regions. Similarly, overestimated nodule locations capture substantial amounts of foveal attention. Spatial-frequency properties of selected local backgrounds are correlated with human observer responses, either in the accuracy of indicating abnormality position or in the precision of visual sampling of the medical images. Methods: Seven radiologists participated in eye-tracking experiments conducted under conditions of pulmonary nodule detection from a set of 20 postero-anterior CxR. The most-dwelled-upon locations were identified and subjected to spatial-frequency (SF) analysis. The image-based features of selected ROIs were extracted with the undecimated Wavelet Packet Transform. An analysis of variance was run to select SF features, and an SVM scheme was implemented to classify False-Negative and False-Positive regions from all ROIs. Results: A relatively high overall accuracy was obtained for each individually developed Wavelet-SVM algorithm, with over 90% average correct ratio for error recognition from all prolonged dwell locations. Conclusion: These preliminary results show that combined eye-tracking and image-based features can be used for automated detection of radiological errors with an SVM. The work is still in progress and not all analytical procedures have been completed, which might affect the specificity of the algorithm.
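The spatial-frequency feature idea can be illustrated without the wavelet machinery. A hedged numpy sketch that bins the 2-D spectral power of an ROI into radial frequency bands (the study uses undecimated wavelet packet features, not this simplification):

```python
import numpy as np

def spatial_frequency_features(roi, n_bands=4):
    # Fraction of 2-D spectral power falling in each of n_bands radial
    # spatial-frequency bands of a square ROI (DC component removed).
    roi = np.asarray(roi, float)
    n = roi.shape[0]
    power = np.abs(np.fft.fftshift(np.fft.fft2(roi - roi.mean())))**2
    y, x = np.indices(power.shape)
    r = np.hypot(x - n // 2, y - n // 2)
    edges = np.linspace(0.0, n / 2.0, n_bands + 1)
    feats = np.array([power[(r >= lo) & (r < hi)].sum()
                      for lo, hi in zip(edges[:-1], edges[1:])])
    return feats / feats.sum()
```

Feature vectors of this kind, computed at fixated locations, are what a classifier (here an SVM) would consume to separate false-negative from false-positive regions.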
NASA Astrophysics Data System (ADS)
Dobeš, Josef; Grábner, Martin; Puričer, Pavel; Vejražka, František; Míchal, Jan; Popp, Jakub
2017-05-01
Nowadays, relatively precise pHEMT models are available for computer-aided design, and they are frequently compared to each other. However, such comparisons are mostly based on absolute errors of the drain-current equations and their derivatives. In this paper, a novel method is suggested based on relative root-mean-square errors of both the drain current and its derivatives up to the third order. Moreover, the relative errors are subsequently relativized to the best model in each category to further clarify the obtained accuracies of both the drain current and its derivatives. Furthermore, one of our older models and two newly suggested ones are also included in a comparison with the traditionally precise Ahmed, TOM-2 and Materka models. The assessment is performed using measured characteristics of a pHEMT operating up to 110 GHz. Finally, the usability of the proposed models including the higher-order derivatives is illustrated by s-parameter analysis and measurement at several operating points, as well as by computation and measurement of the IP3 points of a low-noise amplifier of a multi-constellation satellite navigation receiver with an ATF-54143 pHEMT.
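The proposed error measure can be written out directly. A sketch of a relative RMS error and the relativization to the best model in a category (names and normalization are illustrative, not necessarily the paper's exact definitions):

```python
import numpy as np

def relative_rms_error(measured, modeled):
    # Relative RMS error (percent), normalized by the RMS of the measured data;
    # applicable to the drain current or to any of its derivatives.
    m = np.asarray(measured, float)
    s = np.asarray(modeled, float)
    return 100.0 * np.sqrt(np.mean((s - m)**2) / np.mean(m**2))

def relativize_to_best(errors):
    # Express each model's error as a multiple of the best (smallest) error,
    # so the best model in the category scores exactly 1.
    e = np.asarray(errors, float)
    return e / e.min()
```

Relativizing to the best model makes accuracies comparable across categories (current, first, second, third derivative) whose absolute error scales differ by orders of magnitude.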
Bore, Thierry; Wagner, Norman; Lesoille, Sylvie Delepine; Taillade, Frederic; Six, Gonzague; Daout, Franck; Placko, Dominique
2016-04-18
Broadband electromagnetic frequency- or time-domain sensor techniques have high potential for quantitative water content monitoring in porous media. Prior to in situ application, knowledge of how the relationship between the broadband electromagnetic properties of the porous material (clay rock) and the water content affects the frequency- or time-domain sensor response is required. For this purpose, dielectric properties of intact clay rock samples, experimentally determined in the frequency range from 1 MHz to 10 GHz, were used as input data in 3-D numerical frequency-domain finite element field calculations to model the one-port broadband frequency- or time-domain transfer function for a three-rod sensor embedded in the clay rock. The sensor response, in terms of the reflection factor, was analyzed in the time domain with classical travel-time analysis in combination with an empirical model (the Topp equation), as well as the theoretical Lichtenecker and Rother model (LRM), to estimate the volumetric water content. The mixture equation, considering the appropriate porosity of the investigated material, provides a practical and efficient approach for water content estimation based on classical travel-time analysis with the onset method. The inflection method is not recommended for water content estimation in electrically dispersive and absorptive material. Moreover, the results clearly indicate that effects due to coupling of the sensor to the material cannot be neglected. Coupling problems caused by an air gap lead to dramatic effects on water content estimation, even for submillimeter gaps. Thus, the quantitative determination of the in situ water content requires careful sensor installation in order to achieve perfect probe-to-clay-rock coupling.
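Classical travel-time analysis with the Topp equation, as referenced above, can be sketched as follows. The apparent permittivity follows from the two-way travel time along the probe, and the published Topp calibration then yields volumetric water content (probe length and travel time below are illustrative):

```python
def apparent_permittivity(travel_time_s, probe_length_m):
    # Apparent permittivity Ka from the two-way travel time along the probe:
    # t = 2 * L * sqrt(Ka) / c, inverted for Ka.
    c = 299_792_458.0
    return (c * travel_time_s / (2.0 * probe_length_m))**2

def topp_water_content(ka):
    # Topp et al. (1980) empirical calibration: volumetric water content
    # as a cubic polynomial in the apparent permittivity Ka.
    return -5.3e-2 + 2.92e-2 * ka - 5.5e-4 * ka**2 + 4.3e-6 * ka**3
```

For a 0.15 m probe and Ka ≈ 20, this gives a water content around 0.35 m³/m³; the abstract's point is that such estimates remain reliable only with onset-based travel times and a gap-free probe-to-material contact.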
1977-01-10
This report is the third in a series of three that evaluate a technique (frequency-domain Prony) for obtaining the poles of a transfer function. The main objective was to assess the feasibility of classifying or identifying ship-like targets by using pole sets derived from frequency-domain data. A predictor-correlator procedure for using spectral data and library pole sets for this purpose was developed. Also studied was an iterative method for
NASA Technical Reports Server (NTRS)
Greenhall, Charles A.
1996-01-01
The phase of a frequency standard that uses periodic interrogation and control of a local oscillator (LO) is degraded by a long-term random-walk component induced by downconversion of LO noise into the loop passband. The Dick formula for the noise level of this degradation can be derived from explicit solutions of two LO control-loop models. A summary of the derivations is given here.
NASA Astrophysics Data System (ADS)
Emile-Geay, J.; Cobb, K. M.; Mann, M. E.; Rutherford, S.
2008-12-01
Because tropical Pacific sea-surface temperatures can organize climate variability at near-global scales, and since there is wide disagreement over their projected course under greenhouse forcing, it is of considerable interest to understand their evolution over the past millennium. We use the most recent high-resolution proxy data from ENSO-sensitive regions, together with the RegEM climate field reconstruction technique [Schneider, 2001; Rutherford et al., 2003; Mann et al., 2007], to extend the history of the NINO3 index at decadal scales back to 1000 A.D. We present a new algorithm implementing an objective regularization technique that preserves low-frequency variance in RegEM (ITTLS). Synthetic SST and pseudoproxy tests using a realistic ENSO model are used to test the accuracy of estimated low-frequency tropical climate variability with this method. The reconstruction shows important decadal and centennial variability throughout the millennium, in the context of which the twentieth century does not appear anomalous. We analyze the sensitivity of the reconstruction to the inclusion of various key proxy timeseries, target SST datasets, and subjective procedural choices, with a particular focus on representing uncertainties. By some measures, the reconstruction is found skillful back to 1500 A.D., but increasing uncertainties in earlier times may limit our ability to test proposed mechanisms of mediaeval climate variability.
Kaminer, M; Pratt, H
1987-02-01
Three-channel Lissajous' trajectories (3-CLT) of the auditory brain-stem evoked potentials were recorded from 14 adult subjects in response to different frequency bands as well as to unmasked clicks. The frequency bands (8 kHz and above, 4-8 kHz, 2-4 kHz, 1-2 kHz and 1 kHz and under) were obtained by subtraction of waveforms evoked by clicks with high-pass masking at these frequencies (derived responses). The 3-CLTs were analysed and described in terms of their geometrical measures. All 3-CLTs included 5 planar segments whose latencies progressively increased with decreasing stimulus frequency, and whose durations and orientations did not change across frequencies. Apex trajectory amplitudes as well as planar segment sizes decreased between unmasked clicks and specific frequency bands, and with decreasing frequency. The changes noted for apex latency and trajectory amplitude were paralleled with corresponding changes in amplitude and latency of single-channel records. The changes in 3-CLT measures with changes in stimulus frequency reflect the contribution of different parts of the cochlea. The unchanged measures may be attributed to the unchanged anatomy of the generators under the different stimulus conditions. The results of this study do not support broad stimulus bandwidth as the factor responsible for the planarity of 3-CLT segments. In addition, these results indicate that different cochlear processes are responsible for the latency changes observed across stimulus intensities and for those associated with stimulus frequency.
Wade T. Tinkham; Alistair M. S. Smith; Chad Hoffman; Andrew T. Hudak; Michael J. Falkowski; Mark E. Swanson; Paul E. Gessler
2012-01-01
Light detection and ranging, or LiDAR, effectively produces products spatially characterizing both terrain and vegetation structure; however, development and use of those products has outpaced our understanding of the errors within them. LiDAR's ability to capture three-dimensional structure has led to interest in conducting or augmenting forest inventories with...
NASA Technical Reports Server (NTRS)
Bell, Thomas L.; Abdullah, A.; Martin, Russell L.; North, Gerald R.
1990-01-01
Estimates of monthly average rainfall based on satellite observations from a low earth orbit will differ from the true monthly average because the satellite observes a given area only intermittently. This sampling error inherent in satellite monitoring of rainfall would occur even if the satellite instruments could measure rainfall perfectly. The size of this error is estimated for a satellite system being studied at NASA, the Tropical Rainfall Measuring Mission (TRMM). First, the statistical description of rainfall on scales from 1 to 1000 km is examined in detail, based on rainfall data from the Global Atmospheric Research Project Atlantic Tropical Experiment (GATE). A TRMM-like satellite is flown over a two-dimensional time-evolving simulation of rainfall using a stochastic model with statistics tuned to agree with GATE statistics. The distribution of sampling errors found from many months of simulated observations is found to be nearly normal, even though the distribution of area-averaged rainfall is far from normal. For a range of orbits likely to be employed in TRMM, sampling error is found to be less than 10 percent of the mean for rainfall averaged over a 500 x 500 sq km area.
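The sampling-error experiment described above can be miniaturized. A hedged Monte Carlo sketch using an AR(1) surrogate rain process (illustrative statistics only, not the GATE-tuned stochastic model of the study): the "satellite" sees the field only every few hours, and the error is the difference between its sampled monthly mean and the full-resolution mean.

```python
import numpy as np

def sampling_error_pct(revisit_h=12, month_h=720, n_trials=200, seed=0):
    # Monte Carlo estimate of the intermittent-sampling error, expressed
    # as RMS error in percent of the mean rain rate.
    rng = np.random.default_rng(seed)
    phi = 0.95                                # hourly autocorrelation (assumed)
    errs, truths = [], []
    for _ in range(n_trials):
        noise = rng.standard_normal(month_h)
        x = np.empty(month_h)
        x[0] = noise[0] / np.sqrt(1 - phi**2)  # stationary start
        for t in range(1, month_h):
            x[t] = phi * x[t - 1] + noise[t]
        rain = np.maximum(x, 0.0)              # clipping mimics intermittency
        truths.append(rain.mean())             # "true" monthly mean
        errs.append(rain[::revisit_h].mean() - rain.mean())
    rms = float(np.sqrt(np.mean(np.square(errs))))
    return 100.0 * rms / float(np.mean(truths))
```

Doubling the overpass frequency reduces the sampling error, mirroring the study's finding that orbit choice controls whether the error stays under 10 percent of the mean.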
NASA Astrophysics Data System (ADS)
Emile-Geay, J.; Cobb, K.; Mann, M. E.; Rutherford, S. D.; Wittenberg, A. T.
2009-12-01
Since surface conditions over the tropical Pacific can organize climate variability at near-global scales, and since there is wide disagreement over their projected course under greenhouse forcing, it is of considerable interest to reconstruct their low-frequency evolution over the past millennium. To this end, we make use of the hybrid RegEM climate reconstruction technique (Mann et al. 2008; Schneider 2001), which aims to reconstruct decadal and longer-scale variations of sea-surface temperature (SST) from an array of climate proxies. We first assemble a database of published and new, high-resolution proxy data from ENSO-sensitive regions, screened for significant correlation with a common ENSO metric (NINO3 index). Proxy observations come primarily from coral, speleothem, marine and lake sediment, and ice core sources, as well as long tree-ring chronologies. The hybrid RegEM methodology is then validated within a pseudoproxy context using two coupled general circulation model simulations of the past millennium’s climate: one using the NCAR CSM1.4 model, the other the GFDL CM2.1 (Ammann et al. 2007; Wittenberg 2009). Validation results are found to be sensitive to the ratio of interannual to lower-frequency variability, with poor reconstruction skill for CM2.1 but good skill for CSM1.4. The latter features prominent changes in NINO3 at decadal-to-centennial timescales, which the network and method detect relatively easily. In contrast, the unforced CM2.1 NINO3 is dominated by interannual variations, and its long-term oscillations are more difficult to reconstruct. These two limiting cases bracket the observed NINO3 behavior over the historical period. We then apply the method to the proxy observations and extend the decadal-scale history of tropical Pacific SSTs over the past millennium, analyzing the sensitivity of such reconstruction to the inclusion of various key proxy timeseries and details of the statistical analysis, emphasizing metrics of uncertainty
Medical errors recovered by critical care nurses.
Dykes, Patricia C; Rothschild, Jeffrey M; Hurley, Ann C
2010-05-01
The frequency and types of medical errors are well documented, but less is known about potential errors that were intercepted by nurses. We studied the type, frequency, and potential harm of recovered medical errors reported by critical care registered nurses (CCRNs) during the previous year. Nurses are known to protect patients from harm. Several studies on medical errors found that there would have been more medical errors reaching the patient had not potential errors been caught earlier by nurses. The Recovered Medical Error Inventory, a 25-item empirically derived and internally consistent (alpha = .90) list of medical errors, was posted on the Internet. Participants were recruited via e-mail and healthcare-related listservs using a nonprobability snowball sampling technique. Investigators e-mailed contacts working in hospitals or who managed healthcare-related listservs and asked the contacts to pass the link on to others with contacts in acute care settings. During 1 year, 345 CCRNs reported that they recovered 18,578 medical errors, of which they rated 4,183 as potentially lethal. Surveillance, clinical judgment, and interventions by CCRNs to identify, interrupt, and correct medical errors protected seriously ill patients from harm.
Banerjee, Biswanath; Walsh, Timothy F.; Aquino, Wilkins; Bonnet, Marc
2012-01-01
This paper presents the formulation and implementation of an Error in Constitutive Equations (ECE) method suitable for large-scale inverse identification of linear elastic material properties in the context of steady-state elastodynamics. In ECE-based methods, the inverse problem is postulated as an optimization problem in which the cost functional measures the discrepancy in the constitutive equations that connect kinematically admissible strains and dynamically admissible stresses. Furthermore, in a more recent modality of this methodology introduced by Feissel and Allix (2007), referred to as the Modified ECE (MECE), the measured data is incorporated into the formulation as a quadratic penalty term. We show that a simple and efficient continuation scheme for the penalty term, suggested by the theory of quadratic penalty methods, can significantly accelerate the convergence of the MECE algorithm. Furthermore, a (block) successive over-relaxation (SOR) technique is introduced, enabling the use of existing parallel finite element codes with minimal modification to solve the coupled system of equations that arises from the optimality conditions in MECE methods. Our numerical results demonstrate that the proposed methodology can successfully reconstruct the spatial distribution of elastic material parameters from partial and noisy measurements in as few as ten iterations in a 2D example and fifty in a 3D example. We show (through numerical experiments) that the proposed continuation scheme can improve the rate of convergence of MECE methods by at least an order of magnitude versus the alternative of using a fixed penalty parameter. Furthermore, the proposed block SOR strategy coupled with existing parallel solvers produces a computationally efficient MECE method that can be used for large scale materials identification problems, as demonstrated on a 3D example involving about 400,000 unknown moduli. Finally, our numerical results suggest that the proposed MECE
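The quadratic-penalty continuation idea can be demonstrated on a toy problem. This sketch is not the MECE formulation itself; it only shows the mechanism the paper exploits, namely that geometrically increasing the penalty weight drives a regularized least-squares iterate toward the constrained (here, minimum-norm) solution:

```python
import numpy as np

def penalty_continuation(A, b, beta0=1.0, growth=10.0, iters=8):
    # Minimize ||x||^2 + beta * ||A x - b||^2 while increasing beta
    # geometrically; the iterates approach the minimum-norm solution of
    # A x = b, mimicking a quadratic-penalty continuation scheme.
    n = A.shape[1]
    beta = beta0
    x = np.zeros(n)
    for _ in range(iters):
        x = np.linalg.solve(np.eye(n) + beta * (A.T @ A), beta * (A.T @ b))
        beta *= growth
    return x
```

Starting from a small beta and growing it keeps each subproblem well conditioned, which is the rationale behind accelerating MECE convergence with a continuation schedule rather than a fixed penalty parameter.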
NASA Technical Reports Server (NTRS)
Greenhall, C. A.
1996-01-01
The phase of a frequency standard that uses periodic interrogation and control of a local oscillator (LO) is degraded by a long-term random-walk component induced by downconversion of LO noise into the loop passband. The Dick formula for the noise level of this degradation is derived from an explicit solution of an LO control-loop model.
NASA Astrophysics Data System (ADS)
Scherrer, Philip H.
2017-08-01
This poster provides an update of the status of the efforts to understand and correct the leakage of the SDO orbit velocity into most HMI data products. The following is extracted from the abstract for the similar topic presented at the 2016 SPD meeting: “The Helioseismic and Magnetic Imager (HMI) instrument on the Solar Dynamics Observatory (SDO) measures sets of filtergrams which are converted into velocity and magnetic field maps. In addition to solar photospheric motions the velocity measurements include a direct component from the line-of-sight component of the SDO orbit. Since the magnetic field is computed as the difference between the velocity measured in left and right circular polarization, the orbit velocity is canceled only if the velocity is properly calibrated. When the orbit velocity is subtracted, the remaining "solar" velocity shows a residual signal equal to about 2% of the approximately ±3000 m/s orbit velocity, in a nearly linear relationship. This implies an error in our knowledge of some of the details of as-built filter components. This systematic error is the source of 12- and 24-hour variations in most HMI data products. While the instrument as presently calibrated (Couvidat et al. 2012 and 2016) meets all of the “Level-1” mission requirements, it fails to meet the stated goal of 10 m/s accuracy for velocity data products. For the velocity measurements this has not been a significant problem, since the prime HMI goals of obtaining data for helioseismology are not affected by this systematic error. However, the orbit signal leaking into the magnetograms and vector magnetograms degrades the ability to accomplish some of the mission science goals at the expected levels of accuracy. This poster presents the current state of understanding of the source of this systematic error and prospects for near-term improvement in the accuracy of the filter profile model.”
Lievens, Hans; Vernieuwe, Hilde; Álvarez-Mozos, Jesús; De Baets, Bernard; Verhoest, Niko E.C.
2009-01-01
In the past decades, many studies on soil moisture retrieval from SAR demonstrated a poor correlation between the top-layer soil moisture content and observed backscatter coefficients, which has mainly been attributed to difficulties involved in the parameterization of surface roughness. The present paper describes a theoretical study, performed on synthetic surface profiles, which investigates how errors in roughness parameters are introduced by standard measurement techniques, and how they propagate through the commonly used Integral Equation Model (IEM) into a corresponding soil moisture retrieval error for some of the currently most used SAR configurations. Key aspects influencing the error in the roughness parameterization, and consequently in soil moisture retrieval, are: the length of the surface profile, the number of profile measurements, the horizontal and vertical accuracy of profile measurements, and the removal of trends along profiles. Moreover, it is found that soil moisture retrieval with a C-band configuration is generally less sensitive to inaccuracies in the roughness parameterization than retrieval with an L-band configuration. PMID:22399956
ERIC Educational Resources Information Center
Kearsley, Greg P.
This paper discusses and provides some preliminary data on errors in APL programming. Data were obtained by analyzing listings of 148 complete and partial APL sessions collected from student terminal rooms at the University of Alberta. Frequencies of errors for the various error messages are tabulated. The data, however, are limited because they…
NASA Astrophysics Data System (ADS)
Krueger, Tobias; Inman, Alex; Paling, Nick
2014-05-01
Catchment management, as driven by legislation such as the EU WFD or grassroots initiatives, requires the apportionment of in-stream pollution to point and diffuse sources so that mitigation measures can be targeted and costs and benefits shared. Source apportionment is typically done via modelling. Given model imperfections and input data errors, it has become state-of-the-art to employ an uncertainty framework. However, what is not easily incorporated in such a framework, and currently much discussed in hydrology, are epistemic uncertainties, i.e. those uncertainties that relate to lack of knowledge about processes and data. For example, what if an otherwise negligible source suddenly matters because of an accidental pollution incident? In this paper we present such a case of epistemic error, an industrial spill ignored in a water quality model, demonstrate the bias of the resulting model simulations, and show how the error was discovered somewhat incidentally by auxiliary high-frequency data and finally corrected through the collective intelligence of a stakeholder network. We suggest that accidental pollution incidents like this are a widespread, though largely ignored, problem. Hence our discussion will reflect on the practice of catchment monitoring, modelling and management in general. The case itself occurred as part of ongoing modelling support in the Tamar catchment, one of the priority catchments of the UK government's new approach to managing water resources in a more decentralised and collaborative way. An Extended Export Coefficient Model (ECM+) had been developed with stakeholders to simulate transfers of nutrients (N & P), sediment and Faecal Coliforms from land to water and down the river network as a function of sewage treatment options, land use, livestock densities and farm management practices. In the process of updating the model for the hydrological years 2008-2012 an over-prediction of the annual average P concentration by the model was found at
NASA Technical Reports Server (NTRS)
Huang, Dong; Yang, Wenze; Tan, Bin; Rautiainen, Miina; Zhang, Ping; Hu, Jiannan; Shabanov, Nikolay V.; Linder, Sune; Knyazikhin, Yuri; Myneni, Ranga B.
2006-01-01
The validation of moderate-resolution satellite leaf area index (LAI) products such as those operationally generated from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor data requires reference LAI maps developed from field LAI measurements and fine-resolution satellite data. Errors in field measurements and satellite data determine the accuracy of the reference LAI maps. This paper describes a method by which reference maps of known accuracy can be generated with knowledge of errors in fine-resolution satellite data. The method is demonstrated with data from an international field campaign in a boreal coniferous forest in northern Sweden, and Enhanced Thematic Mapper Plus images. The reference LAI map thus generated is used to assess modifications to the MODIS LAI/fPAR algorithm recently implemented to derive the next generation of the MODIS LAI/fPAR product for this important biome type.
Matthews, Danielle E; Theakston, Anna L
2006-11-12
How do English-speaking children inflect nouns for plurality and verbs for the past tense? We assess theoretical answers to this question by considering errors of omission, which occur when children produce a stem in place of its inflected counterpart (e.g., saying "dress" to refer to 5 dresses). A total of 307 children (aged 3;11-9;9) participated in 3 inflection studies. In Study 1, we show that errors of omission occur until the age of 7 and are more likely with both sibilant regular nouns (e.g., dress) and irregular nouns (e.g., man) than regular nouns (e.g., dog). Sibilant nouns are more likely to be inflected if they are high frequency. In Studies 2 and 3, we show that similar effects apply to the inflection of verbs and that there is an advantage for "regular-like" irregulars whose inflected form, but not stem form, ends in d/t. The results imply that (a) stems and inflected forms compete for production and (b) children generalize both product-oriented and source-oriented schemas when learning about inflectional morphology.
Baron, Paul; Deckers, Roel; de Greef, Martijn; Merckel, Laura G; Bakker, Chris J G; Bouwman, Job G; Bleys, Ronald L A W; van den Bosch, Maurice A A J; Bartels, Lambertus W
2014-12-01
In this study, we aim to demonstrate the sensitivity of proton resonance frequency shift (PRFS)-based thermometry to heat-induced magnetic susceptibility changes and to present and evaluate a model-based correction procedure. To demonstrate the expected temperature effect, field disturbances during high intensity focused ultrasound sonications were monitored in breast fat samples with a three-dimensional (3D) gradient echo sequence. To evaluate the correction procedure, the interface of tissue-mimicking ethylene glycol gel and fat was sonicated. During sonication, the temperature was monitored with a 2D dual flip angle multi-echo gradient echo sequence, allowing for PRFS-based relative and referenced temperature measurements in the gel and T1-based temperature measurements in fat. The PRFS-based measurement in the gel was corrected by minimizing the discrepancy between the observed 2D temperature profile and the profile predicted by a 3D thermal model. The HIFU sonications of breast fat resulted in a magnetic field disturbance which completely disappeared after cooling. For the correction method, the 5th to 95th percentile interval of the PRFS-thermometry error in the gel decreased from 3.8°C before correction to 2.0-2.3°C after correction. This study has shown the effects of magnetic susceptibility changes induced by heating of breast fatty tissue samples. The resultant errors can be reduced by the use of a model-based correction procedure. © 2013 Wiley Periodicals, Inc.
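The PRFS temperature calculation underlying these measurements can be sketched. Assuming the standard linear PRFS model Δφ = 2π·γ·α·B0·TE·ΔT with thermal coefficient α ≈ −0.01 ppm/°C (field strength and echo time below are illustrative; susceptibility-induced field changes, the paper's subject, add an uncorrected phase term to Δφ):

```python
import math

def prfs_delta_t(delta_phi_rad, b0_tesla, te_s, alpha_ppm_per_c=-0.01):
    # Invert delta_phi = 2*pi*gamma*alpha*B0*TE*delta_T for delta_T;
    # gamma is the proton gyromagnetic ratio over 2*pi (Hz/T).
    gamma_hz_per_t = 42.577478518e6
    return delta_phi_rad / (2.0 * math.pi * gamma_hz_per_t
                            * alpha_ppm_per_c * 1e-6 * b0_tesla * te_s)
```

At 1.5 T and TE = 20 ms, a 10°C rise corresponds to roughly −0.8 rad of phase, so even small susceptibility-driven field disturbances translate into degree-level temperature errors, motivating the model-based correction.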
An analysis of the effects of secondary reflections on dual-frequency reflectometers
NASA Technical Reports Server (NTRS)
Hearn, C. P.; Cockrell, C. R.; Harrah, S. D.
1990-01-01
The error-producing mechanism involving secondary reflections in a dual-frequency, distance measuring reflectometer is examined analytically. Equations defining the phase, and hence distance, error are derived. The error-reducing potential of frequency-sweeping is demonstrated. It is shown that a single spurious return can be completely nullified by optimizing the sweep width.
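A simple numerical sketch of the mechanism described above: a spurious return of relative amplitude a perturbs the measured phase by arg(1 + a·e^{jΔ}), and sweeping the frequency so that the spurious phase offset Δ rotates through a full cycle averages that error to zero. This is an illustrative model with assumed function names, not the paper's derivation.

```python
import numpy as np

def phase_error(a, delta):
    """Phase error (rad) caused by a spurious return of relative
    amplitude a arriving with phase offset delta from the primary return."""
    return np.angle(1 + a * np.exp(1j * delta))

def swept_phase_error(a, delta0, sweep_cycles, n=1000):
    """Average phase error over a linear frequency sweep that rotates
    the spurious phase through `sweep_cycles` full turns."""
    d = delta0 + 2 * np.pi * sweep_cycles * np.linspace(0, 1, n, endpoint=False)
    return np.mean(phase_error(a, d))
```

With a sweep width chosen so the spurious phase completes exactly one turn (sweep_cycles = 1), the averaged error vanishes, mirroring the nulling result stated in the abstract; a negligible sweep leaves the full error in place.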
NASA Technical Reports Server (NTRS)
Mulrooney, Dr. Mark K.; Matney, Dr. Mark J.
2007-01-01
Orbital object data acquired via optical telescopes can play a crucial role in accurately defining the space environment. Radar systems probe the characteristics of small debris by measuring the reflected electromagnetic energy from an object of the same order of size as the wavelength of the radiation. This signal is affected by electrical conductivity of the bulk of the debris object, as well as its shape and orientation. Optical measurements use reflected solar radiation with wavelengths much smaller than the size of the objects. Just as with radar, the shape and orientation of an object are important, but we only need to consider the surface electrical properties of the debris material (i.e., the surface albedo), not the bulk electromagnetic properties. As a result, these two methods are complementary in that they measure somewhat independent physical properties to estimate the same thing, debris size. Short arc optical observations such as are typical of NASA's Liquid Mirror Telescope (LMT) give enough information to estimate an Assumed Circular Orbit (ACO) and an associated range. This information, combined with the apparent magnitude, can be used to estimate an "absolute" brightness (scaled to a fixed range and phase angle). This absolute magnitude is what is used to estimate debris size. However, the shape and surface albedo effects make the size estimates subject to systematic and random errors, such that it is impossible to ascertain the size of an individual object with any certainty. However, as has been shown with radar debris measurements, that does not preclude the ability to estimate the size distribution of a number of objects statistically. After systematic errors have been eliminated (range errors, phase function assumptions, photometry) there remains a random geometric albedo distribution that relates object size to absolute magnitude. Measurements by the LMT of a subset of tracked debris objects with sizes estimated from their radar cross
Sasagane, Kotoku
2008-09-17
The first half of the lecture presents the essence of the quasienergy derivative (QED) method and calculations of frequency-dependent hyperpolarizabilities based on it. Our recent developments and some further possibilities concerning the QED method are then explained. Finally, whether the QED method can be extended to a numerical approach is investigated.
TOA/FOA geolocation error analysis.
Mason, John Jeffrey
2008-08-01
This paper describes how confidence intervals can be calculated for radiofrequency emitter position estimates based on time-of-arrival and frequency-of-arrival measurements taken at several satellites. These confidence intervals take the form of 50th and 95th percentile circles and ellipses to convey horizontal error and linear intervals to give vertical error. We consider both cases where an assumed altitude is and is not used. Analysis of velocity errors is also considered. We derive confidence intervals for horizontal velocity magnitude and direction including the case where the emitter velocity is assumed to be purely horizontal, i.e., parallel to the ellipsoid. Additionally, we derive an algorithm that we use to combine multiple position fixes to reduce location error. The algorithm uses all available data, after more than one location estimate for an emitter has been made, in a mathematically optimal way.
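The percentile circles and ellipses described above follow from the chi-square distribution of a Gaussian horizontal position error with 2 degrees of freedom, whose quantile has the closed form -2·ln(1 - p). The sketch below assumes an already-computed 2x2 position covariance and is a generic illustration, not the paper's TOA/FOA estimator.

```python
import numpy as np

def error_ellipse(cov, p=0.95):
    """Semi-axes (major, minor) and orientation (rad) of the horizontal
    error ellipse containing the true position with probability p,
    given a 2x2 Gaussian position covariance."""
    k = -2.0 * np.log(1.0 - p)              # chi-square quantile, 2 dof
    vals, vecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
    major, minor = np.sqrt(k * vals[::-1])
    angle = np.arctan2(vecs[1, 1], vecs[0, 1])  # direction of major axis
    return major, minor, angle
```

Setting p = 0.5 gives the 50th percentile ellipse; when the two eigenvalues are equal the ellipse degenerates to the familiar containment circle.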
NASA Technical Reports Server (NTRS)
Poulain, Pierre-Marie; Luther, Douglas S.; Patzert, William C.
1992-01-01
Two techniques were developed for estimating statistics of inertial oscillations from satellite-tracked drifters that overcome the difficulties inherent in estimating such statistics from data dependent upon space coordinates that are a function of time. Application of these techniques to tropical surface drifter data collected during the NORPAX, EPOCS, and TOGA programs reveals a latitude-dependent, statistically significant 'blue shift' of inertial wave frequency. The latitudinal dependence of the blue shift is similar to predictions based on 'global' internal-wave spectral models, with a superposition of frequency shifting due to modification of the effective local inertial frequency by the presence of strongly sheared zonal mean currents within 12 deg of the equator.
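For reference, the local inertial frequency against which the reported blue shift is measured is f = 2Ω·sin(latitude). A small helper using this standard formula (constants and function names are ours, not from the study):

```python
import math

OMEGA_EARTH = 7.2921e-5  # Earth's rotation rate, rad/s

def inertial_frequency(lat_deg):
    """Local inertial (Coriolis) frequency f = 2*Omega*sin(lat), rad/s."""
    return 2 * OMEGA_EARTH * math.sin(math.radians(lat_deg))

def inertial_period_hours(lat_deg):
    """Inertial period 2*pi/|f| in hours."""
    return 2 * math.pi / abs(inertial_frequency(lat_deg)) / 3600.0
```

A "blue shift" means observed near-inertial energy peaks at frequencies slightly above this local f; near the equator f itself becomes small and sheared mean currents modify the effective value.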
Ammerman, Brooke A; Jacobucci, Ross; Kleiman, Evan M; Muehlenkamp, Jennifer J; McCloskey, Michael S
2017-02-01
Research suggesting nonsuicidal self-injury (NSSI) may belong in a distinct diagnostic category has led to the inclusion of NSSI disorder in the DSM-5 section for future study. There has been limited research, however, examining the validity of Criterion A (the frequency criterion). The current study aimed to examine the validity of the frequency criterion of NSSI disorder through an exploratory data mining method, structural equation modeling trees, to determine an NSSI frequency that optimally discriminates pathological NSSI from normative behavior among undergraduate students (n = 3,559), 428 of whom engaged in NSSI in the previous year. The model included psychopathology symptomology found to be comorbid with NSSI and cognitive-affective deficits commonly associated with NSSI. Results demonstrated a first split between individuals with 0 and 1 act of NSSI in the past year, as was expected. Among individuals with 1 or more previous acts, the optimal split was between those with 5 and 6 NSSI acts in the past year. Results from the current study suggest that individuals with 6 acts of NSSI in the past year, compared with those with 5 acts or fewer, may represent a more severe group of self-injurers. These individuals reported higher levels of related psychopathology symptomology and cognitive-affective deficits, in addition to decreased quality of life. Findings have potential implications for the proposed frequency criteria of NSSI disorder and how pathological NSSI is characterized. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
NASA Astrophysics Data System (ADS)
Sugiartono Prabowo, Billy; Verdhora Ry, Rexha; Nugraha, Andri Dian; Siska, Katrine
2017-04-01
Hydrocarbon Microtremor Analysis is a low-frequency passive seismic method that provides a quick-look estimate of new hydrocarbon reservoir prospect areas. The method is based on empirical studies that observed an increase in spectral anomalies between 2 and 4 Hz above a reservoir. We determined five attributes in the low-frequency band of microtremors: the integral of the power spectral density of the vertical component (PSD-IZ), the power spectral density (PSD) at 3 Hz, frequency shifting, the maximum spectral ratio of the vertical and horizontal components (V/H), and the integral of the V/H spectral ratio. We deployed 105 measurement points across our suspect area. We used time series data recorded from three-component particle velocity sensors, with a duration of 80 minutes and a sampling frequency of 100 Hz. Noise identification was performed on each station's data set with reference to its measurement location, since local cultural noise varied across the suspect area. We computed the attributes for each station's data and then interpolated them spatially into maps using a standard kriging algorithm. Finally, the attribute analyses and spatial attribute maps were combined to identify and estimate good prospects for the hydrocarbon reservoir.
NASA Technical Reports Server (NTRS)
Zemba, Michael; Nessel, James; Tarasenko, Nicholas; Lane, Steven
2017-01-01
Since October 2015, NASA Glenn Research Center (GRC) and the Air Force Research Laboratory (AFRL) have collaboratively operated an RF terrestrial link in Albuquerque, New Mexico to characterize atmospheric propagation phenomena at 72 and 84 GHz. The W/V-band Terrestrial Link Experiment (WTLE) consists of coherent transmitters at each frequency on the crest of the Sandia Mountains and a corresponding pair of receivers in south Albuquerque. The beacon receivers provide a direct measurement of the link attenuation, while concurrent weather instrumentation provides a measurement of the atmospheric conditions. Among the available weather instruments is an optical disdrometer, which yields an optical measurement of rain rate as well as droplet size and velocity distributions (DSD, DVD). In particular, the DSD can be used to derive an instantaneous scaling factor (ISF) by which the measured data at one frequency can be scaled to another, for example, scaling the 72 GHz data to an expected 84 GHz time series. Given the availability of both the DSD prediction and the directly observed 84 GHz attenuation, WTLE is thus uniquely able to assess DSD-derived instantaneous frequency scaling in the V/W-bands. Previous work along these lines has investigated the DSD-derived ISF at Ka- and Q-band (20 GHz to 40 GHz) using a satellite beacon receiver experiment in Milan, Italy. This work will expand the investigation to terrestrial links in the V/W-bands, where the frequency scaling factor is lower and where the link is also much more sensitive to attenuation by rain, clouds, and other atmospheric effects.
NASA Astrophysics Data System (ADS)
Torregrosa, A.; Flint, L. E.; Flint, A. L.; Peters, J.; Combs, C.
2014-12-01
Coastal fog modifies the hydrodynamic and thermodynamic properties of California watersheds, with the greatest impact on ecosystem functioning during arid summer months. Lowered maximum temperatures resulting from inland penetration of marine fog are probably adequate to capture fog effects on thermal land-surface characteristics; however, the hydrologic impact of lowered evapotranspiration rates due to shade, fog drip, increased relative humidity, and other factors associated with fog events is more difficult to gauge. Fog products, such as those derived from National Weather Service Geostationary Operational Environmental Satellite (GOES) imagery, provide high-frequency (up to 15 min) views of fog and low cloud cover and can potentially improve water balance models. Even slight improvements in water balance calculations can benefit urban water managers and agricultural irrigation. The high frequency of GOES output provides the opportunity to explore options for integrating fog frequency data into water balance models. This pilot project compares GOES-derived fog frequency intervals (6, 12, and 24 hour) to determine which is most useful for water balance models and to develop model-relevant relationships between climatic and water balance variables. Seasonal diurnal thermal differences, plant ecophysiological processes, and phenology suggest that a day/night differentiation on a monthly basis may be adequate. To explore this hypothesis, we examined discharge data from stream gages and outputs from the USGS Basin Characterization Model for runoff, recharge, potential evapotranspiration, and actual evapotranspiration for the Russian River Watershed under low, medium, and high fog event conditions derived from hourly GOES imagery (1999-2009). We also differentiated fog events into daytime and nighttime versus a 24-hour compilation on a daily, monthly, and seasonal basis. Our data suggest that a daily time-step is required to adequately incorporate the hydrologic effect of
1990-09-01
α1, α2: material parameters relating affinity to the augmenting thermodynamic field; α3: coupling between two ATFs. Over the desired frequency range, coupled material constitutive relations are developed using the concept of augmenting thermodynamic fields (ATFs) with non-integer time derivatives. For an elastic material, the relation of stress to strain is defined by Hooke's law σ = Eε, where E is called the modulus of elasticity.
Mapping the montane cloud forest of Taiwan using 12 year MODIS-derived ground fog frequency data
Schulz, Hans Martin; Li, Ching-Feng; Thies, Boris; Chang, Shih-Chieh; Bendix, Jörg
2017-01-01
Up until now montane cloud forest (MCF) in Taiwan has only been mapped for selected areas of vegetation plots. This paper presents the first comprehensive map of MCF distribution for the entire island. For its creation, a Random Forest model was trained with vegetation plots from the National Vegetation Database of Taiwan that were classified as “MCF” or “non-MCF”. This model predicted the distribution of MCF from a raster data set of parameters derived from a digital elevation model (DEM), Landsat channels and texture measures derived from them as well as ground fog frequency data derived from the Moderate Resolution Imaging Spectroradiometer. While the DEM parameters and Landsat data predicted much of the cloud forest’s location, local deviations in the altitudinal distribution of MCF linked to the monsoonal influence as well as the Massenerhebung effect (causing MCF in atypically low altitudes) were only captured once fog frequency data was included. Therefore, our study suggests that ground fog data are most useful for accurately mapping MCF. PMID:28245279
Verginadis, Ioannis I; Simos, Yannis V; Velalopoulou, Anastasia P; Vadalouca, Athina N; Kalfakakou, Vicky P; Karkabounas, Spyridon Ch; Evangelou, Angelos M
2012-12-01
Exposure to various types of electromagnetic fields (EMFs) affects pain specificity (nociception) and pain inhibition (analgesia). A previous study of ours showed that exposure to the resonant spectra derived from the NMR of biologically active substances may induce in live targets the same effects as the substances themselves. The purpose of this study is to investigate the potential analgesic effect of the resonant EMFs derived from the NMR spectrum of morphine. Twenty-five Wistar rats were divided into five groups: control group; intraperitoneal administration of morphine 10 mg/kg body wt; exposure of rats to the resonant EMFs of morphine; exposure of rats to randomly selected non-resonant EMFs; and intraperitoneal administration of naloxone with simultaneous exposure of rats to the resonant EMFs of morphine. Tail flick and hot plate tests were performed for estimation of the latency time. Results showed that exposure of rats to the resonant EMFs of the morphine NMR spectrum induced a significant increase in latency time at the time points measured (p < 0.05), while exposure to the non-resonant random EMFs exerted no effects. Additionally, naloxone administration inhibited the analgesic effects of the NMR spectrum of morphine. Our results indicate that exposure of rats to the resonant EMFs derived from the NMR spectrum of morphine may exert on animals analgesic effects similar to those of morphine itself.
A genome signature derived from the interplay of word frequencies and symbol correlations
NASA Astrophysics Data System (ADS)
Möller, Simon; Hameister, Heike; Hütt, Marc-Thorsten
2014-11-01
Genome signatures are statistical properties of DNA sequences that provide information on the underlying species. It is not understood how such species-discriminating statistical properties arise from processes of genome evolution and from functional properties of the DNA. Investigating the interplay of different genome signatures can contribute to this understanding. Here we analyze the statistical dependences of two such genome signatures: word frequencies and symbol correlations at short and intermediate distances. We formulate a statistical model of word frequencies in DNA sequences based on the observed symbol correlations and show that deviations of word counts from this correlation-based null model serve as a new genome signature. This signature (i) performs better in sorting DNA sequence segments according to their species origin and (ii) reveals unexpected species differences in the composition of microsatellites, an important class of repetitive DNA. While the first observation is a typical task in metagenomics projects and therefore an important benchmark for a genome signature, the latter suggests strong species differences in the biological mechanisms of genome evolution. On a more general level, our results highlight that the choice of null model (here: word abundances computed via symbol correlations rather than shorter word counts) substantially affects the interpretation of such statistical signals.
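One way to sketch the correlation-based null model described above is a first-order Markov chain built from a sequence's own dinucleotide statistics: expected k-mer frequencies follow from the transition probabilities, and deviations of observed frequencies from them form the signature. This is a simplified illustration of the idea, with our own function names, not the authors' exact model:

```python
from collections import Counter
from itertools import product

def markov_null_deviations(seq, k=3):
    """Deviation of observed k-mer frequencies from a first-order Markov
    null model estimated from the sequence's dinucleotide counts."""
    n = len(seq)
    mono = Counter(seq)
    di = Counter(seq[i:i + 2] for i in range(n - 1))

    def p_trans(a, b):  # P(next symbol = b | current symbol = a)
        return di[a + b] / max(sum(di[a + c] for c in "ACGT"), 1)

    obs = Counter(seq[i:i + k] for i in range(n - k + 1))
    total = n - k + 1
    dev = {}
    for w in map("".join, product("ACGT", repeat=k)):
        expected = mono[w[0]] / n          # initial-symbol probability
        for a, b in zip(w, w[1:]):
            expected *= p_trans(a, b)      # chain of transition probabilities
        dev[w] = obs[w] / total - expected
    return dev
```

Because both the observed frequencies and the Markov-model probabilities each sum to one, the deviations sum to approximately zero; words with large positive deviations are over-represented relative to what the symbol correlations alone predict.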
NASA Technical Reports Server (NTRS)
Donegan, James J.; Robinson, Samuel W., Jr.; Gates, Ordway B., Jr.
1955-01-01
A method is presented for determining the lateral-stability derivatives, transfer-function coefficients, and the modes for lateral motion from frequency-response data for a rigid aircraft. The method is based on the application of the vector technique to the equations of lateral motion, so that the three equations of lateral motion can be separated into six equations. The method of least squares is then applied to the data for each of these equations to yield the coefficients of the equations of lateral motion from which the lateral-stability derivatives and lateral transfer-function coefficients are computed. Two numerical examples are given to demonstrate the use of the method.
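The separation-plus-least-squares idea above can be illustrated on a generic transfer function: once the defining equation is multiplied through by the denominator, the unknown coefficients enter linearly and can be recovered from frequency-response samples by ordinary least squares. The sketch below uses an assumed second-order form, not the aircraft equations of lateral motion themselves:

```python
import numpy as np

def fit_second_order(omega, H):
    """Recover real coefficients (a0, a1, a2) of
    H(jw) = 1 / (a0 + a1*(jw) + a2*(jw)^2)
    from frequency-response samples by linear least squares."""
    jw = 1j * omega
    # Multiplying through: a0*H + a1*(jw)H + a2*(jw)^2 H = 1, linear in a's.
    A = np.column_stack([H, jw * H, jw ** 2 * H])
    b = np.ones_like(H)
    # Stack real and imaginary parts so the unknowns stay real.
    A_ri = np.vstack([A.real, A.imag])
    b_ri = np.concatenate([b.real, b.imag])
    coeffs, *_ = np.linalg.lstsq(A_ri, b_ri, rcond=None)
    return coeffs  # a0, a1, a2
```

With noise-free data the fit is exact; with measured frequency-response data, as in the report, the least-squares solution gives the best-fitting coefficients in the same way.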
Derivation of the magnetic field in a coronal mass ejection core via multi-frequency radio imaging
Tun, Samuel D.; Vourlidas, A.
2013-04-01
The magnetic field within the core of a coronal mass ejection (CME) on 2010 August 14 is derived from analysis of multi-wavelength radio imaging data. This CME's core was found to be the source of a moving type IV radio burst, whose emission is here determined to arise from the gyrosynchrotron process. The CME core's true trajectory, electron density, and line-of-sight depth are derived from stereoscopic observations, constraining these parameters in the radio emission models. We find that the CME carries a substantial amount of mildly relativistic electrons (E < 100 keV) in a strong magnetic field (B < 15 G), and that the spectra at lower heights are preferentially suppressed at lower frequencies through absorption from thermal electrons. We discuss the results in light of previous moving type IV burst studies, and outline a plan for the eventual use of radio methods for CME magnetic field diagnostics.
NASA Astrophysics Data System (ADS)
Usuki, Tsuneo
2013-09-01
The moduli of conventional elastic structural materials are extended to those of viscoelastic materials through a modification whereby the dynamic moduli converge to the static moduli of elasticity as the fractional order approaches zero. By plotting phase velocity curves and group velocity curves of plane waves and the Rayleigh surface wave for a viscoelastic material (polyvinyl chloride foam), the influence of the fractional order of viscoelasticity is examined. The phase and group velocity curves in the high-frequency range were derived for longitudinal, transverse, and Rayleigh waves inherent to the viscoelastic material. In addition, the equation for the phase velocity was also derived mathematically on the complex plane and graphically illustrated. We found that, at the moment the fractional order of the time derivative reaches the integer value 1, the curve on the complex plane becomes completely different, exhibiting snap-through behavior. We examined the mechanism of the snap-through mathematically. Numerical calculation examples were solved, and good agreement was confirmed between the numerical calculations and the analytical expression mentioned above. From the results of the numerical examples, regularities were derived for the absolute values of the complex phase and group velocities on the complex plane.
Correlated errors in Earth pointing missions
NASA Technical Reports Server (NTRS)
Bilanow, Steve; Patt, Frederick S.
2005-01-01
Two different Earth-pointing missions dealing with attitude control and dynamics changes illustrate concerns with correlated error sources and coupled effects that can occur. On the OrbView-2 (OV-2) spacecraft, the assumption of a nearly-inertially-fixed momentum axis was called into question when a residual dipole bias apparently changed magnitude. The possibility that alignment adjustments and/or sensor calibration errors may compensate for actual motions of the spacecraft is discussed, and uncertainties in the dynamics are considered. Particular consideration is given to basic orbit frequency and twice orbit frequency effects and their high correlation over the short science observation data span. On the Tropical Rainfall Measuring Mission (TRMM) spacecraft, the switch to a contingency Kalman filter control mode created changes in the pointing error patterns. Results from independent checks on the TRMM attitude using science instrument data are reported, and bias shifts and error correlations are discussed. Various orbit frequency effects are common with the flight geometry for Earth pointing instruments. In both dual-spin momentum stabilized spacecraft (like OV-2) and three axis stabilized spacecraft with gyros (like TRMM under Kalman filter control), changes in the initial attitude state propagate into orbit frequency variations in attitude and some sensor measurements. At the same time, orbit frequency measurement effects can arise from dynamics assumptions, environment variations, attitude sensor calibrations, or ephemeris errors. Also, constant environment torques for dual spin spacecraft have similar effects to gyro biases on three axis stabilized spacecraft, effectively shifting the one-revolution-per-orbit (1-RPO) body rotation axis. Highly correlated effects can create a risk for estimation errors particularly when a mission switches an operating mode or changes its normal flight environment. Some error effects will not be obvious from attitude sensor
NASA Astrophysics Data System (ADS)
El-Dardiry, Hisham Abd El-Kareem
Radar-based Quantitative Precipitation Estimates (QPE) are NEXRAD products available at high temporal and spatial resolution compared with gauges. Radar-based QPEs have been widely used in many hydrological and meteorological applications; however, few studies have focused on using radar QPE products to derive Precipitation Frequency Estimates (PFE). Accurate and regionally specific information on PFE is critically needed for various water resources engineering planning and design purposes. This study focused first on examining the data quality of two main radar products, the near-real-time Stage IV QPE product and the post-real-time RFC/MPE product. Assessment of the Stage IV product showed some alarming data artifacts that contaminate the identification of rainfall maxima. Based on the inter-comparison analysis of the two products, Stage IV and RFC/MPE, the latter was selected for the frequency analysis carried out throughout the study. The precipitation frequency analysis approach used in this study is based on fitting a Generalized Extreme Value (GEV) distribution as a statistical model for extreme rainfall data, using Annual Maximum Series (AMS) extracted from 11 years (2002-2012) over a domain covering Louisiana. The parameters of the GEV model are estimated using the method of L-moments. Two different approaches are suggested for estimating the precipitation frequencies: a pixel-based approach, in which PFEs are estimated at each individual pixel, and a region-based approach, in which a synthetic sample is generated at each pixel by using observations from surrounding pixels. The region-based technique outperforms the pixel-based estimation when compared with results obtained by NOAA Atlas 14; however, the availability of only a short record of observations and the underestimation of radar QPE for some extremes cause considerable reduction in precipitation frequencies in pixel-based and region
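The GEV-with-L-moments fitting step described above can be sketched as follows, using Hosking's well-known closed-form approximation for the shape parameter from the sample L-skewness. This is a generic illustration of the estimator, not the study's radar-QPE pipeline, and it uses Hosking's sign convention for the shape parameter:

```python
import math

def gev_lmoments(sample):
    """Fit a GEV distribution to an annual-maximum series via L-moments.

    Returns (location mu, scale sigma, shape k), Hosking's convention.
    """
    x = sorted(sample)
    n = len(x)
    # Probability-weighted moments b0, b1, b2 (0-based index i = rank - 1).
    b0 = sum(x) / n
    b1 = sum(i * x[i] for i in range(n)) / (n * (n - 1))
    b2 = sum(i * (i - 1) * x[i] for i in range(n)) / (n * (n - 1) * (n - 2))
    l1, l2, l3 = b0, 2 * b1 - b0, 6 * b2 - 6 * b1 + b0
    t3 = l3 / l2                                   # sample L-skewness
    c = 2.0 / (3.0 + t3) - math.log(2) / math.log(3)
    k = 7.8590 * c + 2.9554 * c * c                # Hosking's approximation
    g = math.gamma(1 + k)
    sigma = l2 * k / ((1 - 2 ** (-k)) * g)
    mu = l1 - sigma * (1 - g) / k
    return mu, sigma, k

def gev_return_level(mu, sigma, k, T):
    """T-year return level (quantile with annual exceedance prob 1/T)."""
    y = -math.log(1 - 1.0 / T)
    return mu + sigma * (1 - y ** k) / k
```

Applied per pixel to an AMS, this yields the PFE at any return period; the region-based variant simply pools the AMS values of surrounding pixels into the sample before fitting.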
Kumar Sahu, Rabindra; Panda, Sidhartha; Biswal, Ashutosh; Chandra Sekhar, G T
2016-03-01
In this paper, a novel Tilt Integral Derivative controller with Filter (TIDF) is proposed for Load Frequency Control (LFC) of multi-area power systems. Initially, a two-area power system is considered and the parameters of the TIDF controller are optimized using a Differential Evolution (DE) algorithm employing an Integral of Time multiplied Absolute Error (ITAE) criterion. The superiority of the proposed approach is demonstrated by comparing the results with some recently published heuristic approaches such as Firefly Algorithm (FA), Genetic Algorithm (GA), and Particle Swarm Optimization (PSO) optimized PID controllers for the same interconnected power system. Investigations reveal that the proposed TIDF controllers provide better dynamic response than PID controllers in terms of minimum undershoots and settling times of frequency as well as tie-line power deviations following a disturbance. The proposed approach is also extended to two widely used three-area test systems considering nonlinearities such as Generation Rate Constraint (GRC) and Governor Dead Band (GDB). To improve the performance of the system, a Thyristor Controlled Series Compensator (TCSC) is also considered, and the performance of the TIDF controller in the presence of TCSC is investigated. It is observed that system performance improves with the inclusion of TCSC. Finally, sensitivity analysis is carried out to test the robustness of the proposed controller by varying the system parameters, operating condition, and load pattern. It is observed that the proposed controllers are robust and perform satisfactorily with variations in operating condition, system parameters, and load pattern. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
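The ITAE criterion minimized by the DE algorithm above is simply the integral of time multiplied by the absolute error, ∫ t·|e(t)| dt, evaluated over the simulation horizon. A minimal sketch of computing it from a sampled error signal (trapezoidal rule; the function name is ours):

```python
def itae(t, e):
    """Integral of Time multiplied Absolute Error, trapezoidal rule.

    t: sample times (ascending), e: error samples at those times.
    """
    total = 0.0
    for i in range(1, len(t)):
        f0 = t[i - 1] * abs(e[i - 1])
        f1 = t[i] * abs(e[i])
        total += 0.5 * (f0 + f1) * (t[i] - t[i - 1])
    return total
```

In an LFC study the error samples would be the frequency and tie-line power deviations from a simulation run; the optimizer then searches the controller parameters that minimize this scalar.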
Keightley, Peter D.; Campos, José L.; Booker, Tom R.; Charlesworth, Brian
2016-01-01
Many approaches for inferring adaptive molecular evolution analyze the unfolded site frequency spectrum (SFS), a vector of counts of sites with different numbers of copies of derived alleles in a sample of alleles from a population. Accurate inference of the high-copy-number elements of the SFS is difficult, however, because of misassignment of alleles as derived vs. ancestral. This is a known problem with parsimony using outgroup species. Here we show that the problem is particularly serious if there is variation in the substitution rate among sites brought about by variation in selective constraint levels. We present a new method for inferring the SFS using one or two outgroups that attempts to overcome the problem of misassignment. We show that two outgroups are required for accurate estimation of the SFS if there is substantial variation in selective constraints, which is expected to be the case for nonsynonymous sites in protein-coding genes. We apply the method to estimate unfolded SFSs for synonymous and nonsynonymous sites in a population of Drosophila melanogaster from phase 2 of the Drosophila Population Genomics Project. We use the unfolded spectra to estimate the frequency and strength of advantageous and deleterious mutations and estimate that ∼50% of amino acid substitutions are positively selected but that <0.5% of new amino acid mutations are beneficial, with a scaled selection strength of Nes ≈ 12. PMID:27098912
Shallow water sediment properties derived from high-frequency shear and interface waves
NASA Astrophysics Data System (ADS)
Ewing, John; Carter, Jerry A.; Sutton, George H.; Barstow, Noel
1992-04-01
Low-frequency sound propagation in shallow water environments is not restricted to the water column but also involves the subbottom. Thus, as well as being important for geophysical description of the seabed, subbottom velocity/attenuation structure is essential input for predictive propagation models. To estimate this structure, bottom-mounted sources and receivers were used to make measurements of shear and compressional wave propagation in shallow water sediments of the continental shelf, usually where boreholes and high-resolution reflection profiles give substantial supporting geologic information about the subsurface. This colocation provides an opportunity to compare seismically determined estimates of physical properties of the seabed with the "ground truth" properties. Measurements were made in 1986 with source/detector offsets up to 200 m producing shear wave velocity versus depth profiles of the upper 30-50 m of the seabed (and P wave profiles to lesser depths). Measurements in 1988 were made with smaller source devices designed to emphasize higher frequencies and recorded by an array of 30 sensors spaced at 1-m intervals to improve spatial sampling and resolution of shallow structure. These investigations with shear waves have shown that significant lateral and vertical variations in the physical properties of the shallow seabed are common and are principally created by erosional and depositional processes associated with glacial cycles and sea level oscillations during the Quaternary. When the seabed structure is relatively uniform over the length of the profiles, the shear wave fields are well ordered, and the matching of the data with full waveform synthetics has been successful, producing velocity/attenuation models consistent with the subsurface lithology indicated by coring results. Both body waves and interface waves have been modeled for velocity/attenuation as a function of depth with the aid of synthetic seismograms and other analytical
Kachman, S D; Van Vleck, L D
2007-10-01
The multiple-trait derivative-free REML set of programs was written to handle partially missing data for multiple-trait analyses as well as single-trait models. Standard errors of genetic parameters were reported for univariate models and for multiple-trait analyses only when all traits were measured on animals with records. In addition to estimating (co)variance components for multiple-trait models with partially missing data, this paper shows how the multiple-trait derivative-free REML set of programs can also estimate SE by augmenting the data file when not all animals have all traits measured. Although the standard practice has been to eliminate records with partially missing data, that practice uses only a subset of the available data. In some situations, the elimination of partial records can result in elimination of all the records, such as one trait measured in one environment and a second trait measured in a different environment. An alternative approach requiring minor modifications of the original data and model was developed that provides estimates of the SE using an augmented data set that gives the same residual log likelihood as the original data for multiple-trait analyses when not all traits are measured. Because the same residual vector is used for the original data and the augmented data, the resulting REML estimators along with their sampling properties are identical for the original and augmented data, so that SE for estimates of genetic parameters can be calculated.
Upper Ocean Salinity Stratification in the Tropics As Derived from the Buoyancy Frequency N2
NASA Astrophysics Data System (ADS)
Maes, C.; O'Kane, T.
2014-12-01
The idea that salinity contributes to ocean dynamics is simply common sense in physical oceanography. Along with temperature, salinity determines the ocean mass field and hence, through geostrophy, influences ocean dynamics and currents. In the Tropics, however, salinity effects have generally been neglected. Nevertheless, observational studies of the western Pacific Ocean have suggested since the mid-1980s that the barrier layer resulting from the ocean salinity stratification within the mixed layer could significantly influence ocean-atmosphere interactions. The present study aims to isolate the specific role of the salinity stratification in the layers above the main pycnocline by taking into account the respective thermal and saline dependencies in the Brunt-Vaisala frequency, N2. Results show that the haline stabilizing effect may contribute 40-50% of N2 compared with the thermal stratification and, in some specific regions, exceeds it for a few months of the seasonal cycle. By contrast, the centers of action of the subtropical gyres are characterized by the permanent absence of such an effect. The relationships between the stabilizing effect and sea surface fields such as SSS and SST are shown to be well defined and quasilinear in the Tropics, providing some indication that, in the future, analyses that consider both satellite salinity measurements at the surface and vertical profiles at depth will result in a better determination of the role of the salinity stratification in climate prediction systems.
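The thermal/haline split of N2 described above can be sketched with a linearized equation of state; the expansion and contraction coefficients below are assumed typical upper-ocean values, not those used in the study:

```python
# Sketch: split the Brunt-Vaisala frequency N^2 into thermal and haline
# contributions using a linearized equation of state,
#   N^2 = g * (alpha * dT/dz - beta * dS/dz),
# with assumed (illustrative) expansion/contraction coefficients.
G = 9.81          # gravity, m/s^2
ALPHA = 2.5e-4    # thermal expansion coefficient, 1/K (assumed typical value)
BETA = 7.6e-4     # haline contraction coefficient, 1/(g/kg) (assumed)

def n2_components(dT_dz, dS_dz):
    """Return (thermal, haline, total) N^2 contributions in s^-2.

    dT_dz, dS_dz: vertical gradients (z positive upward), in K/m and (g/kg)/m.
    """
    thermal = G * ALPHA * dT_dz
    haline = -G * BETA * dS_dz
    return thermal, haline, thermal + haline

# Example: warm fresh water above cold salty water -> both terms stabilize.
th, ha, tot = n2_components(dT_dz=0.05, dS_dz=-0.01)
print(th, ha, tot)
```

With these assumed gradients the haline term supplies a substantial fraction of the total stratification, illustrating the kind of decomposition the abstract discusses.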
NASA Technical Reports Server (NTRS)
Palumbo, Dan
2008-01-01
The lifetimes of coherent structures are derived from data correlated over a three-sensor array sampling streamwise sidewall pressure at high Reynolds number (>10^8). The data were acquired at subsonic, transonic and supersonic speeds aboard a Tupolev Tu-144. The lifetimes are computed from a variant of the correlation length termed the lifelength. Characteristic lifelengths are estimated by fitting a Gaussian distribution to the sensors' cross spectra and are shown to compare favorably with Efimtsov's prediction of correlation space scales. Lifelength distributions are computed in the time/frequency domain using an interval correlation technique on the continuous wavelet transform of the original time data. The median values of the lifelength distributions are found to be very close to the frequency-averaged result. The interval correlation technique is shown to allow the retrieval and inspection of the original time data of each event in the lifelength distributions, thus providing a means to locate and study the nature of the coherent structure in the turbulent boundary layer. The lifelength data are converted to lifetimes using the convection velocity. The lifetimes of events in the time/frequency domain are displayed in Lifetime Maps. The primary purpose of the paper is to validate these new analysis techniques so that they can be used with confidence to further characterize the behavior of coherent structures in the turbulent boundary layer.
NASA Astrophysics Data System (ADS)
Marra, Francesco; Morin, Efrat
2015-12-01
Intensity-Duration-Frequency (IDF) curves are widely used in flood risk management because they provide an easy link between the characteristics of a rainfall event and the probability of its occurrence. Weather radars provide distributed rainfall estimates with high spatial and temporal resolutions and overcome the limited representativeness of point-based rainfall for regions characterized by large gradients in rainfall climatology. This work explores the use of radar quantitative precipitation estimation (QPE) for the identification of IDF curves over a region with steep climatic transitions (Israel) using a unique radar data record (23 yr) and combined physical and empirical adjustment of the radar data. IDF relationships were derived by fitting a generalized extreme value distribution to the annual maximum series for durations of 20 min, 1 h and 4 h. Arid, semi-arid and Mediterranean climates were explored using 14 study cases. IDF curves derived from the study rain gauges were compared to those derived from radar and from nearby rain gauges characterized by similar climatology, taking into account the uncertainty linked with the fitting technique. Radar annual maxima and IDF curves were generally overestimated but in 70% of the cases (60% for a 100 yr return period), they lay within the rain gauge IDF confidence intervals. Overestimation tended to increase with return period, and this effect was enhanced in arid climates. This was mainly associated with radar estimation uncertainty, even if other effects, such as rain gauge temporal resolution, cannot be neglected. Climatological classification remained meaningful for the analysis of rainfall extremes and radar was able to discern climatology from rainfall frequency analysis.
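The fitting step described above (a GEV distribution fitted to an annual maximum series, then read off at the desired return periods) can be sketched with SciPy; the annual maxima below are synthetic placeholders, not the study's radar record:

```python
# Sketch: fit a GEV distribution to annual-maximum rainfall intensities and
# evaluate return-period quantiles, as done when deriving IDF curves.
from scipy.stats import genextreme

# 23 synthetic "years" of 1-h annual maxima (mm/h) drawn from an assumed GEV.
annual_max = genextreme.rvs(c=-0.1, loc=30.0, scale=8.0, size=23,
                            random_state=0)

c, loc, scale = genextreme.fit(annual_max)
for T in (2, 10, 100):  # return periods in years
    # T-year return level = quantile exceeded with probability 1/T per year.
    intensity = genextreme.isf(1.0 / T, c, loc, scale)
    print(f"{T:3d}-yr 1-h intensity: {intensity:.1f} mm/h")
```

Because `isf` is monotone decreasing in its probability argument, the fitted intensities necessarily grow with return period, matching the shape of an IDF curve for a fixed duration.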
High frequency solar influence revealed in sclerosponge-derived Caribbean SST record
NASA Astrophysics Data System (ADS)
Estrella, J.; Winter, A.; Sherman, C.; Mangini, A.
2012-12-01
We present a high-resolution (annual) record of the Caribbean mixed layer temperature at different depths derived from oxygen isotopic ratios obtained from the sclerosponge Ceratoporella nicholsoni. Sclerosponges precipitate their calcium carbonate skeleton in equilibrium with their surrounding environment and are capable of living at depths down to 200 m. The sponges for this project were collected off the coasts of Puerto Rico and the US Virgin Islands in the northeastern Caribbean Sea. The records obtained extend from the early 1500s to the present and suggest that the northeastern Caribbean was 1-2 °C cooler during the Little Ice Age than at present and that sea surface temperature (SST) has been rising at an average linear rate of 0.009 °C yr-1 since the mid-1800s, three times faster than the World Ocean. Wavelet time series analysis of our records suggests that Caribbean SST variability is regulated by the sunspot cycle, especially when total solar irradiance is high, during which times the SSTs and the sunspot cycle are highly coupled. Our findings suggest an SST response to solar influence of 0.40 °C (W/m2)-1, almost twice that of the World Ocean. Deceleration of the Caribbean Current is proposed as a possible reason for this disparity. Further work is currently being done on other sponges and other calcium carbonate proxies to examine the extent of this forcing in other climate phenomena.
NASA Astrophysics Data System (ADS)
Eaton, Frank D.; Nastrom, Gregory D.; Hansen, Anthony R.
1999-02-01
Slant path calculations are shown of the transverse coherence length (r0), the isoplanatic angle (θ0), and the Rytov variance (σ²R), using a 6-yr data set of the refractive index structure parameter (Cn²) from 49.25-MHz radar observations at White Sands Missile Range, New Mexico. The calculations are for a spherical wave condition; a wavelength (λ) of electromagnetic radiation of 1 μm; four different elevation angles (3, 10, 30, and 60 deg); two path lengths (50 and 150 km); and a platform, such as an aircraft, at 12.5 km MSL (mean sea level). Over 281,000 radar-derived Cn² profiles sampled at 3-min intervals with 150-m height resolution are used for the calculations. The approach, an `onion skin' model, assumes horizontal stationarity over each entire propagation path and is consistent with Taylor's hypothesis. The results show that refractivity turbulence effects are greatly reduced for the three propagation parameters (r0, θ0, and σ²R) as the elevation angle increases from 3 to 60 deg. A pronounced seasonal effect is seen in the same parameters, which is consistent with climatological variables and gravity wave activity. The interaction of the enhanced turbulence in the vicinity of the tropopause with the range weighting functions of each propagation parameter is evaluated. Results of a two-region model relating r0, θ0, and σ²R to wind speed at 5.6 km MSL are shown. This statistical model can be understood in terms of upward-propagating gravity waves launched by strong winds over complex terrain.
Zhou, Qifa; Cannata, Jonathan M; Meyer, Richard J; van Tol, David J; Tadigadapa, Srinivas; Hughes, W Jack; Shung, K Kirk; Trolier-McKinstry, Susan
2005-03-01
Miniaturized tonpilz transducers are potentially useful for ultrasonic imaging in the 10 to 100 MHz frequency range due to their higher efficiency and output capabilities. In this work, 4- to 10-μm-thick piezoelectric thin films were used as the active element in the construction of miniaturized tonpilz structures. The tonpilz stack consisted of silver/lead zirconate titanate (PZT)/lanthanum nickelate (LaNiO3)/silicon-on-insulator (SOI) substrates. First, conductive LaNiO3 thin films, approximately 300 nm in thickness, were grown on SOI substrates by a metalorganic decomposition (MOD) method. The room-temperature resistivity of the LaNiO3 was 6.5 × 10-6 Ω·m. Randomly oriented PZT (52/48) films up to 7 μm thick were then deposited using a sol-gel process on the LaNiO3-coated SOI substrates. The PZT films with LaNiO3 bottom electrodes showed good dielectric and ferroelectric properties. The relative dielectric permittivity (at 1 kHz) was about 1030. The remanent polarization of the PZT films was larger than 26 μC/cm2. The effective transverse piezoelectric e31,f coefficient of the PZT thick films was about -6.5 C/m2 when poled at -75 kV/cm for 15 minutes at room temperature. Enhanced piezoelectric properties were obtained on poling the PZT films at higher temperatures. A silver layer about 40 μm thick, prepared from silver powder dispersed in epoxy, was deposited onto the PZT film to form the tail mass of the tonpilz structure. The top layers of this wafer were subsequently diced with a saw, and the structure was bonded to a second wafer. The original silicon carrier wafer was polished and etched using a xenon difluoride (XeF2) etching system. The resulting structures showed good piezoelectric activity. This process flow should enable integration of the piezoelectric elements with drive/receive electronics.
Quantum error correction for continuously detected errors
NASA Astrophysics Data System (ADS)
Ahn, Charlene; Wiseman, H. M.; Milburn, G. J.
2003-05-01
We show that quantum feedback control can be used as a quantum-error-correction process for errors induced by a weak continuous measurement. In particular, when the error model is restricted to one, perfectly measured, error channel per physical qubit, quantum feedback can act to perfectly protect a stabilizer codespace. Using the stabilizer formalism we derive an explicit scheme, involving feedback and an additional constant Hamiltonian, to protect an (n-1)-qubit logical state encoded in n physical qubits. This works for both Poisson (jump) and white-noise (diffusion) measurement processes. Universal quantum computation is also possible in this scheme. As an example, we show that detected-spontaneous emission error correction with a driving Hamiltonian can greatly reduce the amount of redundancy required to protect a state from that which has been previously postulated [e.g., Alber et al., Phys. Rev. Lett. 86, 4402 (2001)].
Jones, Timothy D; Chappell, Nick A; Tych, Wlodek
2014-11-18
The first dynamic model of dissolved organic carbon (DOC) export in streams derived directly from high frequency (subhourly) observations sampled at a regular interval through contiguous storms is presented. The optimal model, identified using the recently developed RIVC algorithm, captured the rapid dynamics of DOC load from 15 min monitored rainfall with high simulation efficiencies and constrained uncertainty with a second-order (two-pathway) structure. Most of the DOC export in the four headwater basins studied was associated with the faster hydrometric pathway (also modeled in parallel), and was soon exhausted in the slower pathway. A delay in the DOC mobilization became apparent as the ambient temperatures increased. These features of the component pathways were quantified in the dynamic response characteristics (DRCs) identified by RIVC. The model and associated DRCs are intended as a foundation for a better understanding of storm-related DOC dynamics and predictability, given the increasing availability of subhourly DOC concentration data.
Schurr, T G; Ballinger, S W; Gan, Y Y; Hodge, J A; Merriwether, D A; Lawrence, D N; Knowler, W C; Weiss, K M; Wallace, D C
1990-01-01
The mitochondrial DNA (mtDNA) sequence variation of the South American Ticuna, the Central American Maya, and the North American Pima was analyzed by restriction-endonuclease digestion and oligonucleotide hybridization. The analysis revealed that Amerindian populations have high frequencies of mtDNAs containing the rare Asian RFLP HincII morph 6, a rare HaeIII site gain, and a unique AluI site gain. In addition, the Asian-specific deletion between the cytochrome c oxidase subunit II (COII) and tRNA(Lys) genes was also prevalent in both the Pima and the Maya. These data suggest that Amerindian mtDNAs derived from at least four primary maternal lineages, that new tribal-specific variants accumulated as these mtDNAs became distributed throughout the Americas, and that some genetic variation may have been lost when the progenitors of the Ticuna separated from the North and Central American populations. PMID:1968708
Madani, Nima; Kimball, John S.; Nazeri, Mona; Kumar, Lalit; Affleck, David L. R.
2016-01-01
Species distribution modeling has been widely used in studying habitat relationships and for conservation purposes. However, neglecting ecological knowledge about species, e.g. their seasonal movements, and ignoring the proper environmental factors that can explain key elements for species survival (shelter, food and water) increase model uncertainty. This study exemplifies how these ecological gaps in species distribution modeling can be addressed by modeling the distribution of the emu (Dromaius novaehollandiae) in Australia. Emus cover a large area during the austral winter. However, their habitat shrinks during the summer months. We show evidence of emu summer habitat shrinkage due to higher fire frequency, and low water and food availability in northern regions. Our findings indicate that emus prefer areas with higher vegetation productivity and low fire recurrence, while their distribution is linked to an optimal intermediate (~0.12 m3 m-3) soil moisture range. We propose that the application of three geospatial data products derived from satellite remote sensing, namely fire frequency, ecosystem productivity, and soil water content, provides an effective representation of emu general habitat requirements, and substantially improves species distribution modeling and representation of the species’ ecological habitat niche across Australia. PMID:26799732
Razavi, Shahnaz; Salimi, Marzieh; Shahbazi-Gahrouei, Daryoush; Karbasi, Saeed; Kermani, Saeed
2014-01-01
Background: Extremely low-frequency electromagnetic fields (ELF-EMF) can affect biological systems and alter some cell functions, such as the proliferation rate. We therefore aimed to evaluate the effect of ELF-EMF on the growth of human adipose-derived stem cells (hADSCs). Materials and Methods: The ELF-EMF was generated by a system comprising an autotransformer, multimeter, solenoid coils, and a teslameter with its probe. We assessed the effect of ELF-EMF at intensities of 0.5 and 1 mT and a power-line frequency of 50 Hz on the survival of hADSCs, applied for 20 and 40 min/day for 7 days, by MTT assay. One-way analysis of variance was used to assess significant differences between groups. Results: ELF-EMF had its maximum effect on the proliferation of hADSCs at an intensity of 1 mT for 20 min/day. Survival and proliferation effect (PE) in all exposure groups were significantly higher than in the sham groups (P < 0.05), except in the 1 mT, 40 min/day group. Conclusion: Our results show that 0.5-1 mT ELF-EMF can enhance the survival and PE of hADSCs, depending on the duration of exposure. PMID:24592372
Nightingale, Kathryn R.; Rouze, Ned C.; Rosenzweig, Stephen J.; Wang, Michael H.; Abdelmalek, Manal F.; Guy, Cynthia D.; Palmeri, Mark L.
2015-01-01
Commercially-available shear wave imaging systems measure group shear wave speed (SWS) and often report stiffness parameters applying purely elastic material models. Soft tissues, however, are viscoelastic, and higher-order material models are necessary to characterize the dispersion associated with broadband shearwaves. In this paper, we describe a robust, model-based algorithm and use a linear dispersion model to perform shearwave dispersion analysis in traditionally “difficult-to-image” subjects. In a cohort of 135 Non-Alcoholic Fatty Liver Disease patients, we compare the performance of group SWS with dispersion analysis-derived phase velocity c(200 Hz) and dispersion slope dc/df parameters to stage hepatic fibrosis and steatosis. AUROC analysis demonstrates correlation between all parameters (group SWS, c(200 Hz), and, to a lesser extent dc/df) and fibrosis stage, while no correlation was observed between steatosis stage and any of the material parameters. Interestingly, optimal AUROC threshold SWS values separating advanced liver fibrosis (≥F3) from mild-to-moderate fibrosis (≤F2) were shown to be frequency dependent, and to increase from 1.8 to 3.3 m/s over the 0–400 Hz shearwave frequency range. PMID:25585400
Gervassi, Ana; Lejarcegui, Nicholas; Dross, Sandra; Jacobson, Amanda; Itaya, Grace; Kidzeru, Elvis; Gantt, Soren; Jaspan, Heather; Horton, Helen
2014-01-01
Over 4 million infants die each year from infections, many of which are vaccine-preventable. Young infants respond relatively poorly to many infections and vaccines, but the basis of reduced immunity in infants is ill defined. We sought to investigate whether myeloid-derived suppressor cells (MDSC) represent one potential impediment to protective immunity in early life, which may help inform strategies for effective vaccination prior to pathogen exposure. We enrolled healthy neonates and children in the first 2 years of life along with healthy adult controls to examine the frequency and function of MDSC, a cell population able to potently suppress T cell responses. We found that MDSC, which are rarely seen in healthy adults, are present in high numbers in neonates and their frequency rapidly decreases during the first months of life. We determined that these neonatal MDSC are of granulocytic origin (G-MDSC), and suppress both CD4+ and CD8+ T cell proliferative responses in a contact-dependent manner and gamma interferon production. Understanding the role G-MDSC play in infant immunity could improve vaccine responsiveness in newborns and reduce mortality due to early-life infections. PMID:25248150
NASA Technical Reports Server (NTRS)
Kato, Seiji; Sun-Mack, Sunny; Miller, Walter F.; Rose, Fred G.; Chen, Yan; Minnis, Patrick; Wielicki, Bruce A.
2009-01-01
A cloud frequency of occurrence matrix is generated using merged cloud vertical profiles derived from the Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) and Cloud Profiling Radar (CPR). The matrix contains vertical profiles of cloud occurrence frequency as a function of the uppermost cloud top. It is shown that the cloud fraction and uppermost cloud top vertical profiles can be related by a set of equations when the correlation distance of cloud occurrence, which is interpreted as an effective cloud thickness, is introduced. The underlying assumption in establishing the above relation is that cloud overlap approaches random overlap with increasing distance separating cloud layers and that the probability of deviating from random overlap decreases exponentially with distance. One month of CALIPSO and CloudSat data support these assumptions. However, the correlation distance sometimes becomes large, which might be an indication of precipitation. The cloud correlation distance is equivalent to the decorrelation distance introduced by Hogan and Illingworth [2000] when the cloud fractions of both layers in a two-cloud-layer system are the same.
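The assumed exponential decay toward random overlap can be sketched as the maximum-random blending of Hogan and Illingworth [2000]; the layer covers and decorrelation length below are illustrative values:

```python
import math

def combined_cover(c1, c2, dz, decorr_length):
    """Total cloud cover of two layers under exponentially decaying overlap.

    Follows the blending of Hogan and Illingworth (2000): the weight
    alpha = exp(-dz/L) mixes maximum overlap with random overlap.
    """
    alpha = math.exp(-dz / decorr_length)
    c_max = max(c1, c2)                # maximum overlap
    c_rand = c1 + c2 - c1 * c2         # random overlap
    return alpha * c_max + (1.0 - alpha) * c_rand

# Nearby layers overlap almost maximally; widely separated layers randomly.
print(combined_cover(0.4, 0.3, dz=0.1, decorr_length=2.0))
print(combined_cover(0.4, 0.3, dz=50.0, decorr_length=2.0))
```

As the separation `dz` grows relative to the decorrelation length, the combined cover rises from max(c1, c2) toward the random-overlap value, which is the behavior the abstract's correlation-distance formulation encodes.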
Lanfer, A; Hebestreit, A; Ahrens, W; Krogh, V; Sieri, S; Lissner, L; Eiben, G; Siani, A; Huybrechts, I; Loit, H-M; Papoutsou, S; Kovács, E; Pala, V
2011-04-01
To investigate the reproducibility of food consumption frequencies derived from the food frequency section of the Children's Eating Habits Questionnaire (CEHQ-FFQ) that was developed and used in the IDEFICS (Identification and prevention of dietary- and lifestyle-induced health effects in children and infants) project to assess food habits in 2- to 9-year-old European children. From a subsample of 258 children who participated in the IDEFICS baseline examination, parental questionnaires of the CEHQ were collected twice to assess reproducibility of questionnaire results from 0 to 354 days after the first examination. Weighted Cohen's kappa coefficients (κ) and Spearman's correlation coefficients (r) were calculated to assess agreement between the first and second questionnaires for each food item of the CEHQ-FFQ. Stratification was performed for sex, age group, geographical region and length of period between the first and second administrations. Fisher's Z transformation was applied to test correlation coefficients for significant differences between strata. For all food items analysed, weighted Cohen's kappa coefficients (κ) and Spearman's correlation coefficients (r) were significant and positive (P<0.001). Reproducibility was lowest for diet soft drinks (κ=0.23, r=0.32) and highest for sweetened milk (κ=0.68, r=0.76). Correlation coefficients were comparable to those of previous studies on FFQ reproducibility in children and adults. Stratification did not reveal systematic differences in reproducibility by sex and age group. Spearman's correlation coefficients differed significantly between northern and southern European countries for 10 food items. In nine of them, the lower respective coefficient was still high enough to conclude acceptable reproducibility. As expected, longer time (>128 days) between the first and second administrations resulted in a generally lower, yet still acceptable, reproducibility. Results indicate that the CEHQ-FFQ gives
... and lens of your eye helps you focus. Refractive errors are vision problems that happen when the shape ... cornea, or aging of the lens. Four common refractive errors are Myopia, or nearsightedness - clear vision close up ...
Estimating Filtering Errors Using the Peano Kernel Theorem
Jerome Blair
2009-02-20
The Peano Kernel Theorem is introduced and a frequency domain derivation is given. It is demonstrated that the application of this theorem yields simple and accurate formulas for estimating the error introduced into a signal by filtering it to reduce noise.
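For context, a hedged statement of the theorem as it is commonly given (the symbols here are generic, not the paper's notation):

```latex
% Peano kernel theorem (standard form): if a linear functional E annihilates
% all polynomials of degree <= n, then for f in C^{n+1}[a,b],
\[
  E(f) \;=\; \int_a^b K(t)\, f^{(n+1)}(t)\, \mathrm{d}t,
  \qquad
  K(t) \;=\; \frac{1}{n!}\, E_x\!\left[(x - t)_+^{\,n}\right],
\]
% where (x-t)_+^n is the truncated power function and E_x acts on the
% x-dependence. The bound
\[
  |E(f)| \;\le\; \bigl\| f^{(n+1)} \bigr\|_\infty \int_a^b |K(t)|\, \mathrm{d}t
\]
% is the kind of simple, computable estimate the abstract refers to for the
% error introduced by filtering.
```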
CIVIL AVIATION, *ALTIMETERS, FLIGHT INSTRUMENTS, RELIABILITY, ERRORS , PERFORMANCE(ENGINEERING), BAROMETERS, BAROMETRIC PRESSURE, ATMOSPHERIC TEMPERATURE, ALTITUDE, CORRECTIONS, AVIATION SAFETY, USSR.
NASA Astrophysics Data System (ADS)
Morioka, T.; Kawanishi, S.; Saruwatari, M.
1994-05-01
Error-free, tunable optical frequency conversion of a transform-limited 4.0 ps optical pulse signal is demonstrated at 6.3 Gbit/s using four-wave mixing in a polarization-maintaining optical fibre. The process generates 4.0-4.6 ps pulses over a 25 nm range with time-bandwidth products of 0.31-0.43 and conversion power penalties of less than 1.5 dB.
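The frequency-shifting relation behind such conversion is energy conservation in degenerate four-wave mixing, which places the converted (idler) frequency at f_c = 2 f_pump - f_signal; the wavelengths in this sketch are illustrative assumptions, not the experiment's values:

```python
# Sketch: converted wavelength from degenerate four-wave mixing,
# f_converted = 2*f_pump - f_signal (energy conservation).
C = 299792458.0  # speed of light, m/s

def converted_wavelength_nm(pump_nm, signal_nm):
    f_c = 2.0 * (C / (pump_nm * 1e-9)) - C / (signal_nm * 1e-9)
    return C / f_c * 1e9

# Signal 10 nm above an assumed 1550 nm pump lands roughly 10 nm below it.
print(round(converted_wavelength_nm(1550.0, 1560.0), 2))
```

Tuning the signal (or pump) wavelength therefore tunes the converted output, which is how a conversion range such as the 25 nm reported above is swept.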
Error Patterns in Problem Solving.
ERIC Educational Resources Information Center
Babbitt, Beatrice C.
Although many common problem-solving errors within the realm of school mathematics have been previously identified, a compilation of such errors is not readily available within learning disabilities textbooks, mathematics education texts, or teacher's manuals for school mathematics texts. Using data on error frequencies drawn from both the Fourth…
Kobayashi, Jyumpei; Tanabiki, Misaki; Doi, Shohei; Kondo, Akihiko; Ohshiro, Takashi; Suzuki, Hirokazu
2015-11-01
The plasmid pGKE75-catA138T, which comprises pUC18 and the catA138T gene encoding thermostable chloramphenicol acetyltransferase with an A138T amino acid replacement (CATA138T), serves as an Escherichia coli-Geobacillus kaustophilus shuttle plasmid that confers moderate chloramphenicol resistance on G. kaustophilus HTA426. The present study examined the thermoadaptation-directed mutagenesis of pGKE75-catA138T in an error-prone thermophile, generating the mutant plasmid pGKE75(αβ)-catA138T responsible for substantial chloramphenicol resistance at 65°C. pGKE75(αβ)-catA138T contained no mutation in the catA138T gene but had two mutations in the pUC replicon, even though the replicon has no apparent role in G. kaustophilus. Biochemical characterization suggested that the efficient chloramphenicol resistance conferred by pGKE75(αβ)-catA138T is attributable to increases in intracellular CATA138T and acetyl-coenzyme A following a decrease in incomplete forms of pGKE75(αβ)-catA138T. The decrease in incomplete plasmids may be due to optimization of plasmid replication by RNA species transcribed from the mutant pUC replicon, which were actually produced in G. kaustophilus. It is noteworthy that G. kaustophilus was transformed with pGKE75(αβ)-catA138T using chloramphenicol selection at 60°C. In addition, a pUC18 derivative with the two mutations propagated in E. coli at a high copy number, independent of the culture temperature, and with high plasmid stability. Since these properties have not been observed in known plasmids, the outcomes extend the genetic toolboxes for G. kaustophilus and E. coli. Copyright © 2015, American Society for Microbiology. All Rights Reserved.
ERIC Educational Resources Information Center
Matthews, Danielle E.; Theakston, Anna L.
2006-01-01
How do English-speaking children inflect nouns for plurality and verbs for the past tense? We assess theoretical answers to this question by considering errors of omission, which occur when children produce a stem in place of its inflected counterpart (e.g., saying "dress" to refer to 5 dresses). A total of 307 children (aged 3;11-9;9)…
NASA Technical Reports Server (NTRS)
Noble, Viveca K.
1993-01-01
There are various elements, such as radio frequency interference (RFI), which may induce errors in data being transmitted via a satellite communication link. When a transmission is affected by interference or other error-causing elements, the transmitted data becomes indecipherable. It becomes necessary to implement techniques to recover from these disturbances. The objective of this research is to develop software which simulates error control circuits and evaluate the performance of these modules in various bit error rate environments. The results of the evaluation provide the engineer with information which helps determine the optimal error control scheme. The Consultative Committee for Space Data Systems (CCSDS) recommends the use of Reed-Solomon (RS) and convolutional encoders and Viterbi and RS decoders for error correction. The use of forward error correction techniques greatly reduces the received signal-to-noise ratio needed for a certain desired bit error rate. The use of concatenated coding, e.g. inner convolutional code and outer RS code, provides even greater coding gain. The 16-bit cyclic redundancy check (CRC) code is recommended by CCSDS for error detection.
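The 16-bit CRC that CCSDS recommends for error detection is, to the best of our knowledge, the CCITT polynomial x^16 + x^12 + x^5 + 1 (0x1021) with an all-ones preset; a minimal bitwise sketch:

```python
# Sketch: CRC-16/CCITT with 0xFFFF preset (the variant commonly associated
# with the CCSDS frame error control field).
def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:                       # MSB set: shift and divide
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

# Standard check value for this CRC variant over b"123456789" is 0x29B1.
print(hex(crc16_ccitt(b"123456789")))
```

A receiver recomputes the CRC over the received frame and compares it with the transmitted value; a mismatch flags residual errors that the inner error-correcting codes failed to remove.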
NASA Astrophysics Data System (ADS)
Xie, Yi; Zhang, Shuang-Nan; Liao, Jin-Yuan
2015-07-01
We model the evolution of the spin frequency's second derivative ν̈ and the braking index n of radio pulsars with simulations within the phenomenological model of their surface magnetic field evolution, which contains a long-term power-law decay modulated by short-term oscillations. For the pulsar PSR B0329+54, a model with three oscillation components can reproduce its ν̈ variation. We show that the “averaged” n is different from the instantaneous n, and its oscillation magnitude decreases abruptly as the time span increases, due to the “averaging” effect. The simulated timing residuals agree with the main features of the reported data. Our model predicts that the averaged ν̈ of PSR B0329+54 will start to decrease rapidly with newer data beyond those used in Hobbs et al. We further perform Monte Carlo simulations for the distribution of the reported data in |ν̈| and |n| versus characteristic age τC diagrams. It is found that the magnetic field oscillation model with decay index α = 0 can reproduce the distributions quite well. Compared with magnetic field decay due to the ambipolar diffusion (α = 0.5) and the Hall cascade (α = 1.0), the model with no long-term decay (α = 0) is clearly preferred for old pulsars by the p-values of the two-dimensional Kolmogorov-Smirnov test. Supported by the National Natural Science Foundation of China.
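The braking index referred to above is defined from the spin frequency and its first two time derivatives; a minimal numerical sketch with illustrative values:

```python
# Sketch: braking index n = nu * nuddot / nudot**2. For pure magnetic-dipole
# spin-down (nudot proportional to -nu**3) this evaluates to n = 3 exactly.
def braking_index(nu, nudot, nuddot):
    """nu in Hz, nudot in Hz/s, nuddot in Hz/s^2 (values here illustrative)."""
    return nu * nuddot / nudot**2

# Dipole-consistent example: choose nuddot = 3 * nudot**2 / nu.
nu, nudot = 10.0, -1.0e-12
nuddot = 3.0 * nudot**2 / nu
print(braking_index(nu, nudot, nuddot))
```

Measured braking indices that wander far from 3, as in the abstract's "averaged" n, are then diagnostic of extra torque evolution such as magnetic field decay or oscillation.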
Torsional Vibration of Machines with Gear Errors
NASA Astrophysics Data System (ADS)
Lees, A. W.; Friswell, M. I.; Litak, G.
2011-07-01
Vibration and noise induced by errors and faults in gear meshes are key concerns for the performance of many rotating machines and the prediction of developing faults. Of particular concern are displacement errors in the gear mesh and for rigid gears these may be modelled to give a linear set of differential equations with forced excitation. Other faults, such as backlash or friction, may also arise and give non-linear models with rich dynamics. This paper considers the particular case of gear errors modelled as a Fourier series based on the tooth meshing frequency, leading immediately to non-linear equations of motion, even without the presence of other non-linear phenomena. By considering the perturbed response this system may be modelled as a parametrically excited system. This paper motivates the analysis, derives the equations of motion for the case of a single gear mesh, and provides example response simulations of a boiler feed pump including phase portraits and power spectra.
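As a sketch of the modelling step described above (notation assumed here, not taken from the paper): a displacement error in the mesh can be expanded as a Fourier series in the tooth-meshing frequency ω_m, and after perturbation this enters the equation of motion as a time-periodic coefficient, i.e. a parametrically excited (Mathieu-Hill type) system:

```latex
e(t) = \sum_{k=1}^{K} a_k \cos\left(k\,\omega_m t + \phi_k\right),
\qquad
\ddot{q} + 2\zeta\omega_0\,\dot{q} + \omega_0^{2}\bigl[1 + \varepsilon\, p(t)\bigr]\, q = f(t),
```

where p(t) is periodic with period 2π/ω_m; the non-linearity arises from the product of the periodic mesh term with the response, even without backlash or friction.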
Error detection in anatomic pathology.
Zarbo, Richard J; Meier, Frederick A; Raab, Stephen S
2005-10-01
To define the magnitude of error occurring in anatomic pathology, to propose a scheme to classify such errors so their influence on clinical outcomes can be evaluated, and to identify quality assurance procedures able to reduce the frequency of errors. (a) Peer-reviewed literature search via PubMed for studies from single institutions and multi-institutional College of American Pathologists Q-Probes studies of anatomic pathology error detection and prevention practices; (b) structured evaluation of defects in surgical pathology reports uncovered in the Department of Pathology and Laboratory Medicine of the Henry Ford Health System in 2001-2003, using a newly validated error taxonomy scheme; and (c) comparative review of anatomic pathology quality assurance procedures proposed to reduce error. Marked differences in both definitions of error and pathology practice make comparison of error detection and prevention procedures among publications from individual institutions impossible. Q-Probes studies further suggest that observer redundancy reduces diagnostic variation and interpretive error, which ranges from 1.2 to 50 errors per 1000 cases; however, it is unclear which forms of such redundancy are the most efficient in uncovering diagnostic error. The proposed error taxonomy tested has shown a very good interobserver agreement of 91.4% (kappa = 0.8780; 95% confidence limit, 0.8416-0.9144), when applied to amended reports, and suggests a distribution of errors among identification, specimen, interpretation, and reporting variables. Presently, there are no standardized tools for defining error in anatomic pathology, so it cannot be reliably measured nor can its clinical impact be assessed. The authors propose a standardized error classification that would permit measurement of error frequencies, clinical impact of errors, and the effect of error reduction and prevention efforts. In particular, the value of double-reading, case conferences, and consultations (the
From Monroe to Moreau: an analysis of face naming errors.
Brédart, S; Valentine, T
1992-12-01
Functional models of face recognition and speech production have developed separately. However, naming a familiar face is, of course, an act of speech production. In this paper we propose a revision of Bruce and Young's (1986) model of face processing, which incorporates two features of Levelt's (1989) model of speech production. In particular, the proposed model includes two stages of lexical access for names and monitoring of face naming based on a "perceptual loop". Two predictions were derived from the perceptual loop hypothesis of speech monitoring: (1) naming errors in which a (correct) rare surname is erroneously replaced by a common surname should occur more frequently than the reverse substitution (the error asymmetry effect); (2) naming errors in which a common surname is articulated are more likely to be repaired than errors which result in articulation of a rare surname (the error-repairing effect). Both predictions were supported by an analysis of face naming errors in a laboratory face naming task. In a further experiment we considered the possibility that the effects of surname frequency observed in face naming errors could be explained by the frequency sensitivity of lexical access in speech production. However, no effect of the frequency of the surname of the faces used in the previous experiment was found on face naming latencies. Therefore, it is concluded that the perceptual loop hypothesis provides the more parsimonious account of the entire pattern of the results.
Financial errors in dementia: testing a neuroeconomic conceptual framework.
Chiong, Winston; Hsu, Ming; Wudka, Danny; Miller, Bruce L; Rosen, Howard J
2014-08-01
Financial errors by patients with dementia can have devastating personal and family consequences. We developed and evaluated a neuroeconomic conceptual framework for understanding financial errors across different dementia syndromes, using a systematic, retrospective, blinded chart review of demographically-balanced cohorts of patients with Alzheimer's disease (AD, n=100) and behavioral variant frontotemporal dementia (bvFTD, n=50). Reviewers recorded specific reports of financial errors according to a conceptual framework identifying patient cognitive and affective characteristics, and contextual influences, conferring susceptibility to each error. Specific financial errors were reported for 49% of AD and 70% of bvFTD patients (p = 0.012). AD patients were more likely than bvFTD patients to make amnestic errors (p < 0.001), while bvFTD patients were more likely to spend excessively (p = 0.004) and to exhibit other behaviors consistent with diminished sensitivity to losses and other negative outcomes (p < 0.001). Exploratory factor analysis identified a social/affective vulnerability factor associated with errors in bvFTD, and a cognitive vulnerability factor associated with errors in AD. Our findings highlight the frequency and functional importance of financial errors as symptoms of AD and bvFTD. A conceptual model derived from neuroeconomic literature identifies factors that influence vulnerability to different types of financial error in different dementia syndromes, with implications for early diagnosis and subsequent risk prevention.
NASA Astrophysics Data System (ADS)
Scherer, Philipp O. J.
Input data, as well as the results of elementary operations, have to be represented by machine numbers, the subset of real numbers which is used by the arithmetic unit of today's computers. Generally this introduces rounding errors. This kind of numerical error can be avoided in principle by using arbitrary-precision arithmetic or symbolic algebra programs, but this is impractical in many cases due to the increase in computing time and memory requirements. Results from more complex operations like square roots or trigonometric functions can have even larger errors, since series expansions have to be truncated and iterations accumulate the errors of the individual steps. In addition, the precision of input data from an experiment is limited. In this chapter we study the influence of numerical errors on the uncertainties of the calculated results and the stability of simple algorithms.
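A minimal Python illustration of the rounding behaviour described above, assuming IEEE 754 double precision (the machine-number format of essentially all current hardware):

```python
import math
import sys

# machine epsilon: the gap between 1.0 and the next representable double
eps = sys.float_info.epsilon          # 2**-52 for IEEE 754 doubles
assert 1.0 + eps != 1.0
assert 1.0 + eps / 2 == 1.0           # rounds back to 1.0
assert 0.1 + 0.2 != 0.3               # decimal fractions are not machine numbers

# catastrophic cancellation: (1 - cos x)/x**2 -> 0.5 as x -> 0, but for
# small x, cos(x) rounds to exactly 1.0 and the naive result collapses to 0
x = 1e-8
naive = (1.0 - math.cos(x)) / x**2
stable = 2.0 * math.sin(x / 2.0) ** 2 / x**2   # algebraically identical
assert naive == 0.0
assert abs(stable - 0.5) < 1e-6
```

The last pair shows why algorithm stability matters: the two formulas are mathematically identical, yet only the rearranged one survives rounding.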
Language comprehension errors: A further investigation
NASA Astrophysics Data System (ADS)
Clarkson, Philip C.
1991-06-01
Comprehension errors made when attempting mathematical word problems have been noted as one of the high frequency categories in error analysis. This error category has been assumed to be language based. The study reported here provides some support for the linkage of comprehension errors to measures of language competency. Further, there is evidence that the frequency of such errors is related to competency in both the mother tongue and the language of instruction for bilingual students.
Ansell, Juliet; Butts, Christine A; Paturi, Gunaranjan; Eady, Sarah L; Wallace, Alison J; Hedderley, Duncan; Gearry, Richard B
2015-05-01
The worldwide growth in the incidence of gastrointestinal disorders has created an immediate need to identify safe and effective interventions. In this randomized, double-blind, placebo-controlled study, we examined the effects of Actazin and Gold, kiwifruit-derived nutritional ingredients, on stool frequency, stool form, and gastrointestinal comfort in healthy and functionally constipated (Rome III criteria for C3 functional constipation) individuals. Using a crossover design, all participants consumed all 4 dietary interventions (Placebo, Actazin low dose [Actazin-L] [600 mg/day], Actazin high dose [Actazin-H] [2400 mg/day], and Gold [2400 mg/day]). Each intervention was taken for 28 days followed by a 14-day washout period between interventions. Participants recorded their daily bowel movements and well-being parameters in daily questionnaires. In the healthy cohort (n = 19), the Actazin-H (P = .014) and Gold (P = .009) interventions significantly increased the mean daily bowel movements compared with the washout. No significant differences were observed in stool form as determined by use of the Bristol stool scale. In a subgroup analysis of responders in the healthy cohort, Actazin-L (P = .005), Actazin-H (P < .001), and Gold (P = .001) consumption significantly increased the number of daily bowel movements by greater than 1 bowel movement per week. In the functionally constipated cohort (n = 9), there were no significant differences between interventions for bowel movements and the Bristol stool scale values or in the subsequent subgroup analysis of responders. This study demonstrated that Actazin and Gold produced clinically meaningful increases in bowel movements in healthy individuals.
NASA Astrophysics Data System (ADS)
Shi, Y. C.; Parker, D. L.; Dillon, C. R.
2016-08-01
This study evaluates the sensitivity of two magnetic resonance-guided focused ultrasound (MRgFUS) thermal property estimation methods to errors in required inputs and to different data inclusion criteria. Using ex vivo pork muscle MRgFUS data, sensitivities to required inputs are determined by introducing errors to ultrasound beam locations (r_error = -2 to 2 mm) and time vectors (t_error = -2.2 to 2.2 s). In addition, the sensitivity to user-defined data inclusion criteria is evaluated by choosing different spatial (r_fit = 1-10 mm) and temporal (t_fit = 8.8-61.6 s) regions for fitting. Beam location errors resulted in up to 50% change in property estimates, with local minima occurring at r_error = 0 and estimate errors less than 10% when r_error < 0.5 mm. Errors in the time vector led to property estimate errors up to 40% and without a local minimum, indicating the need to trigger ultrasound sonications with the MR image acquisition. Regarding the selection of data inclusion criteria, property estimates reached stable values (less than 5% change) when r_fit > 2.5 × FWHM, and were most accurate, with the least variability, for longer t_fit. Guidelines provided by this study highlight the importance of identifying required inputs and choosing appropriate data inclusion criteria for robust and accurate thermal property estimation. Applying these guidelines will prevent the introduction of biases and avoidable errors when utilizing these property estimation techniques for MRgFUS thermal modeling applications.
Costa, M S; Ardais, A P; Fioreze, G T; Mioranzza, S; Botton, P H S; Souza, D O; Rocha, J B T; Porciúncula, L O
2012-10-11
The participation of brain-derived neurotrophic factor (BDNF) in the benefits of physical exercise on cognitive functions has been widely investigated. Different from voluntary exercise, the effects of treadmill running on memory and BDNF are still controversial. Importantly, the impact of the frequency of physical exercise on memory remains unknown. In this study, young adult and middle-aged rats were submitted to 8 weeks of treadmill running at moderate intensity and divided into 4 groups by frequency: 0, 1, 3 and 7 days/week. Aversive and recognition memory were assessed, as well as the immunocontent of proBDNF, BDNF and tyrosine kinase receptor type B (TrkB) in the hippocampus. The frequencies did not modify memory in young adult animals. The frequency of 1 day/week increased proBDNF and BDNF. All frequencies decreased TrkB immunocontent. Middle-aged animals presented memory impairment along with increased BDNF and downregulation of the TrkB receptor. The frequency of 1 day/week reversed age-related recognition memory impairment, but worsened performance in the inhibitory avoidance task. The other frequencies rescued aversive memory, but not recognition memory. None of the frequencies altered the age-related increase in BDNF. Seven days/week decreased proBDNF, and there was a trend toward an increase in TrkB with the frequency of 1 day/week. These results support that the frequency and intensity of exercise have a profound impact on cognitive functions, mainly in the elderly. Thus, studies of the effects of physical exercise on behavior and brain functions should take into account the frequency and intensity.
Diagnostic errors in pediatric radiology.
Taylor, George A; Voss, Stephan D; Melvin, Patrice R; Graham, Dionne A
2011-03-01
Little information is known about the frequency, types and causes of diagnostic errors in imaging children. Our goals were to describe the patterns and potential etiologies of diagnostic error in our subspecialty. We reviewed 265 cases with clinically significant diagnostic errors identified during a 10-year period. Errors were defined as a diagnosis that was delayed, wrong or missed; they were classified as perceptual, cognitive, system-related or unavoidable; and they were evaluated by imaging modality and level of training of the physician involved. We identified 484 specific errors in the 265 cases reviewed (mean: 1.8 errors/case). Most discrepancies involved staff (45.5%). Two hundred fifty-eight individual cognitive errors were identified in 151 cases (mean = 1.7 errors/case). Of these, 83 cases (55%) had additional perceptual or system-related errors. One hundred sixty-five perceptual errors were identified in 165 cases. Of these, 68 cases (41%) also had cognitive or system-related errors. Fifty-four system-related errors were identified in 46 cases (mean = 1.2 errors/case), of which all were multi-factorial. Seven cases were unavoidable. Our study defines a taxonomy of diagnostic errors in a large academic pediatric radiology practice and suggests that most are multi-factorial in etiology. Further study is needed to define effective strategies for improvement.
Thermodynamics of Error Correction
NASA Astrophysics Data System (ADS)
Sartori, Pablo; Pigolotti, Simone
2015-10-01
Information processing at the molecular scale is limited by thermal fluctuations. This can cause undesired consequences in copying information since thermal noise can lead to errors that can compromise the functionality of the copy. For example, a high error rate during DNA duplication can lead to cell death. Given the importance of accurate copying at the molecular scale, it is fundamental to understand its thermodynamic features. In this paper, we derive a universal expression for the copy error as a function of entropy production and work dissipated by the system during wrong incorporations. Its derivation is based on the second law of thermodynamics; hence, its validity is independent of the details of the molecular machinery, be it any polymerase or artificial copying device. Using this expression, we find that information can be copied in three different regimes. In two of them, work is dissipated to either increase or decrease the error. In the third regime, the protocol extracts work while correcting errors, reminiscent of a Maxwell demon. As a case study, we apply our framework to study a copy protocol assisted by kinetic proofreading, and show that it can operate in any of these three regimes. We finally show that, for any effective proofreading scheme, error reduction is limited by the chemical driving of the proofreading reaction.
USDA-ARS?s Scientific Manuscript database
The MTDFREML (Boldman et al., 1995) set of programs was written to handle partially missing data in an expedient manner. When estimating (co)variance components and genetic parameters for multiple trait models, the programs have not been able to estimate standard errors of those estimates for multi...
Marycz, Krzysztof; Lewandowski, Daniel; Tomaszewski, Krzysztof A; Henry, Brandon M; Golec, Edward B; Marędziak, Monika
2016-01-01
The aim of this study was to evaluate whether low-frequency, low-magnitude vibrations (LFLM) could enhance the chondrogenic differentiation potential of human adipose-derived mesenchymal stem cells (hASCs) with simultaneous inhibition of their adipogenic properties for biomedical purposes. We developed a prototype device that induces low-magnitude (0.3 g) low-frequency vibrations at the following frequencies: 25, 35 and 45 Hz. We then used hASCs to investigate their cellular response to the mechanical signals, and also evaluated changes in hASC morphology and proliferative activity in response to each frequency. Induction of chondrogenesis in hASCs under the influence of a 35 Hz signal led to the most effective and stable cartilaginous tissue formation, through the highest secretion of Bone Morphogenetic Protein 2 (BMP-2) and Collagen type II, with a low concentration of Collagen type I. These results correlated well with the corresponding gene expression levels. Simultaneously, we observed significant up-regulation of α3, α4, β1 and β3 integrins, as well as Sox-9, in chondroblast progenitor cells treated with 35 Hz vibrations. Interestingly, we noticed that application of the 35 Hz frequency significantly inhibited adipogenesis of hASCs. The obtained results suggest that application of LFLM vibrations together with stem cell therapy might be a promising tool in cartilage regeneration.
Correcting numerical integration errors caused by small aliasing errors
Smallwood, D.O.
1997-11-01
Small sampling errors can have a large effect on numerically integrated waveforms. An example is the integration of acceleration to compute velocity and displacement waveforms. These large integration errors complicate checking the suitability of the acceleration waveform for reproduction on shakers. For waveforms typically used for shaker reproduction, the errors become significant when the frequency content of the waveform spans a large frequency range. It is shown that these errors are essentially independent of the numerical integration method used, and are caused by small aliasing errors from the frequency components near the Nyquist frequency. A method to repair the integrated waveforms is presented. The method involves using a model of the acceleration error, and fitting this model to the acceleration, velocity, and displacement waveforms to force the waveforms to fit the assumed initial and final values. The correction is then subtracted from the acceleration before integration. The method is effective where the errors are isolated to a small section of the time history. It is shown that the common method to repair these errors using a high pass filter is sometimes ineffective for this class of problem.
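A simplified Python sketch of the repair idea, assuming (as a stand-in for the paper's acceleration-error model) that the drift can be removed by forcing the integrated waveform to match assumed zero initial and final values:

```python
import math

def trapezoid_integrate(y, dt):
    """Cumulative trapezoidal integration starting from zero."""
    out = [0.0]
    for i in range(1, len(y)):
        out.append(out[-1] + 0.5 * (y[i - 1] + y[i]) * dt)
    return out

# acceleration: full sine cycles plus a small, localized sampling error,
# so the true velocity should return to (near) zero at the end
dt, n = 1e-3, 2000
acc = [math.sin(2 * math.pi * 5 * i * dt) for i in range(n)]
acc[100] += 0.5                      # small error isolated in time

vel = trapezoid_integrate(acc, dt)   # the error integrates into a drift

# repair: subtract the linear trend implied by the assumed zero
# initial and final velocities
t_end = (n - 1) * dt
vel_fixed = [v - vel[-1] * (i * dt) / t_end for i, v in enumerate(vel)]
assert abs(vel[-1]) > 1e-5           # uncorrected drift is visible
assert abs(vel_fixed[-1]) < 1e-12    # corrected endpoint is restored
```

This is only the endpoint-matching step; the paper's method additionally fits an error model to acceleration, velocity and displacement jointly and subtracts the correction from the acceleration before integrating.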
NASA Astrophysics Data System (ADS)
Kunimura, Shinsuke; Ohmori, Hitoshi
We present a rapid process for producing flat and smooth surfaces. In this technical note, a fabrication result for a carbon mirror is shown. Electrolytic in-process dressing (ELID) grinding with a metal-bonded abrasive wheel, then a metal-resin-bonded abrasive wheel, followed by a conductive-rubber-bonded abrasive wheel, and finally magnetorheological finishing (MRF) were performed as the first, second, third, and final steps, respectively, in this process. Flatness over the whole surface was improved by performing the first and second steps. After the third step, peak-to-valley (PV) and root-mean-square (rms) values in an area of 0.72 × 0.54 mm² on the surface were improved. These values were further improved after the final step, and a PV value of 10 nm and an rms value of 1 nm were obtained. Form errors and small surface irregularities such as surface waviness and micro-roughness were efficiently reduced by performing ELID grinding using the above three kinds of abrasive wheels, because of the high removal rate of ELID grinding, and residual small irregularities were reduced by short-time MRF. This process makes it possible to produce flat and smooth surfaces in several hours.
Papadaniil, Chrysa D; Kosmidou, Vasiliki E; Tsolaki, Anthoula; Tsolaki, Magda; Kompatsiaris, Ioannis Yiannis; Hadjileontiadis, Leontios J
2015-01-01
Recent evidence suggests that cross-frequency coupling (CFC) plays an essential role in multi-scale communication across the brain. The amplitude of the high frequency oscillations, responsible for local activity, is modulated by the phase of the lower frequency activity, in a task and region-relevant way. In this paper, we examine this phase-amplitude coupling in a two-tone oddball paradigm for the low frequency bands (delta, theta, alpha, and beta) and determine the most prominent CFCs. Data consisted of cortical time series, extracted by applying three-dimensional vector field tomography (3D-VFT) to high density (256 channels) electroencephalography (HD-EEG), and CFC analysis was based on the phase-amplitude coupling metric, namely PAC. Our findings suggest CFC spanning across all brain regions and low frequencies. Stronger coupling was observed in the delta band, that is closely linked to sensory processing. However, theta coupling was reinforced in the target tone response, revealing a task-dependent CFC and its role in brain networks communication.
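A toy Python sketch of the phase-amplitude coupling idea. The mean-vector-length modulation index used here (Canolty-style) is an assumption for illustration; the paper's PAC metric may differ in detail. The high-frequency amplitude envelope is paired with the low-frequency phase, and the magnitude of their circular mean measures coupling strength:

```python
import cmath
import math

def modulation_index(phase, amp):
    """Mean vector length of amplitude-weighted phase (Canolty-style PAC)."""
    z = sum(a * cmath.exp(1j * p) for p, a in zip(phase, amp)) / len(phase)
    return abs(z)

fs, dur = 500.0, 4.0
t = [i / fs for i in range(int(fs * dur))]
theta_phase = [2 * math.pi * 6.0 * ti for ti in t]    # 6 Hz "theta" phase

# coupled case: high-frequency amplitude peaks at a fixed theta phase
amp_coupled = [1.0 + 0.8 * math.cos(p) for p in theta_phase]
# uncoupled case: constant high-frequency amplitude
amp_flat = [1.0 for _ in theta_phase]

mi_coupled = modulation_index(theta_phase, amp_coupled)
mi_flat = modulation_index(theta_phase, amp_flat)
assert mi_coupled > 10 * mi_flat   # coupling is clearly detected
```

In practice the phase and amplitude series would come from band-pass filtering and a Hilbert transform of the cortical time series, omitted here for brevity.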
Automatic Locking of Laser Frequency to an Absorption Peak
NASA Technical Reports Server (NTRS)
Koch, Grady J.
2006-01-01
An electronic system adjusts the frequency of a tunable laser, eventually locking the frequency to a peak in the optical absorption spectrum of a gas (or of a Fabry-Perot cavity that has an absorption peak like that of a gas). This system was developed to enable precise locking of the frequency of a laser used in differential absorption LIDAR measurements of trace atmospheric gases. This system also has great commercial potential as a prototype of means for precise control of frequencies of lasers in future dense wavelength-division-multiplexing optical communications systems. The operation of this system is completely automatic: Unlike in the operation of some prior laser-frequency-locking systems, there is ordinarily no need for a human operator to adjust the frequency manually to an initial value close enough to the peak to enable automatic locking to take over. Instead, this system also automatically performs the initial adjustment. The system (see Figure 1) is based on a concept of (1) initially modulating the laser frequency to sweep it through a spectral range that includes the desired absorption peak, (2) determining the derivative of the absorption peak with respect to the laser frequency for use as an error signal, (3) identifying the desired frequency [at the very top (which is also the middle) of the peak] as the frequency where the derivative goes to zero, and (4) thereafter keeping the frequency within a locking range and adjusting the frequency as needed to keep the derivative (the error signal) as close as possible to zero. More specifically, the system utilizes the fact that in addition to a zero crossing at the top of the absorption peak, the error signal also closely approximates a straight line in the vicinity of the zero crossing (see Figure 2). This vicinity is the locking range because the linearity of the error signal in this range makes it useful as a source of feedback for a proportional + integral + derivative control scheme that
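A toy simulation of the locking concept described above, with a Lorentzian absorption line and a feedback gain chosen purely for illustration (neither is from the actual system): the derivative of the absorption with respect to frequency serves as the error signal, crossing zero at the line center, and proportional feedback drives it toward that zero.

```python
def absorption(f, f0=100.0, width=2.0):
    """Lorentzian absorption peak centered at f0 (arbitrary units)."""
    return 1.0 / (1.0 + ((f - f0) / width) ** 2)

def error_signal(f, df=0.01):
    """Finite-difference derivative of absorption: zero at the peak."""
    return (absorption(f + df) - absorption(f - df)) / (2 * df)

# start inside the locking range and apply proportional feedback;
# the derivative is positive below the peak and negative above it,
# so adding gain * error always pushes f toward the zero crossing
f, gain = 98.5, 1.0
for _ in range(200):
    f += gain * error_signal(f)
assert abs(f - 100.0) < 1e-6   # locked to the line center
```

The real system adds integral and derivative terms (full PID) and the initial frequency sweep that places f inside the locking range automatically; only the proportional core is sketched here.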
NASA Astrophysics Data System (ADS)
Wentz, Frank J.; Meissner, Thomas
2016-05-01
The Liebe and Rosenkranz atmospheric absorption models for dry air and water vapor below 100 GHz are refined based on an analysis of antenna temperature (TA) measurements taken by the Global Precipitation Measurement Microwave Imager (GMI) in the frequency range 10.7 to 89.0 GHz. The GMI TA measurements are compared to the TA predicted by a radiative transfer model (RTM), which incorporates both the atmospheric absorption model and a model for the emission and reflection from a rough-ocean surface. The inputs for the RTM are the geophysical retrievals of wind speed, columnar water vapor, and columnar cloud liquid water obtained from the satellite radiometer WindSat. The Liebe and Rosenkranz absorption models are adjusted to achieve consistency with the RTM. The vapor continuum is decreased by 3% to 10%, depending on the vapor amount. To accomplish this, the foreign-broadening part is increased by 10%, and the self-broadening part is decreased by about 40% at the higher frequencies. In addition, the strength of the water vapor line is increased by 1%, and the shape of the line at low frequencies is modified. The dry air absorption is increased, with the increase reaching a maximum of 20% at 89 GHz, the highest frequency considered here. The nonresonant oxygen absorption is increased by about 6%. In addition to the RTM comparisons, our results are supported by a comparison between columnar water vapor retrievals from 12 satellite microwave radiometers and GPS-retrieved water vapor values.
Unaccounted source of systematic errors in measurements of the Newtonian gravitational constant G
NASA Astrophysics Data System (ADS)
DeSalvo, Riccardo
2015-06-01
Many precision measurements of G have produced a spread of results incompatible with measurement errors. Clearly an unknown source of systematic errors is at work. It is proposed here that most of the discrepancies derive from subtle deviations from Hooke's law, caused by avalanches of entangled dislocations. The idea is supported by deviations from linearity reported by experimenters measuring G, similarly to what is observed, on a larger scale, in low-frequency spring oscillators. Some mitigating experimental apparatus modifications are suggested.
Error analysis of the chirp-z transform when implemented using waveform synthesizers and FFTs
Bielek, T.P.
1990-11-01
This report analyzes the effects of finite-precision arithmetic on discrete Fourier transforms (DFTs) calculated using the chirp-z transform algorithm. An introduction to the chirp-z transform is given together with a description of how the chirp-z transform is implemented in hardware. Equations for the effects of chirp rate errors, starting frequency errors, and starting phase errors on the frequency spectrum of the chirp-z transform are derived. Finally, the maximum possible errors in the chirp rate, the starting frequencies, and starting phases are calculated and used to compute the worst case effects on the amplitude and phase spectrums of the chirp-z transform. 1 ref., 6 figs.
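A direct O(NM) Python evaluation of the chirp-z transform makes the error-free reference easy to check: with A = 1 and W = e^(-j2π/N) the CZT contour is the unit circle and the result reduces to the ordinary DFT. (The FFT-based fast implementation analyzed in the report, where the finite-precision chirp-rate and starting-frequency errors arise, is omitted for brevity.)

```python
import cmath

def czt(x, m, w, a):
    """Chirp-z transform: X_k = sum_n x[n] * a**(-n) * w**(n*k), k = 0..m-1."""
    return [sum(xn * a ** (-n) * w ** (n * k) for n, xn in enumerate(x))
            for k in range(m)]

x = [1.0, 2.0, 0.5, -1.0]
n = len(x)

# unit-circle contour: the CZT equals the plain DFT
dft_via_czt = czt(x, n, cmath.exp(-2j * cmath.pi / n), 1.0)
dft_direct = [sum(x[i] * cmath.exp(-2j * cmath.pi * i * k / n)
                  for i in range(n)) for k in range(n)]
assert all(abs(p - q) < 1e-9 for p, q in zip(dft_via_czt, dft_direct))
```

Perturbing `w` or `a` slightly in this reference mimics the chirp-rate and starting-frequency errors whose spectral effects the report derives.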
Klonoff, David C; Lias, Courtney; Vigersky, Robert; Clarke, William; Parkes, Joan Lee; Sacks, David B; Kirkman, M Sue; Kovatchev, Boris
2014-07-01
Currently used error grids for assessing clinical accuracy of blood glucose monitors are based on out-of-date medical practices. Error grids have not been widely embraced by regulatory agencies for clearance of monitors, but this type of tool could be useful for surveillance of the performance of cleared products. Diabetes Technology Society together with representatives from the Food and Drug Administration, the American Diabetes Association, the Endocrine Society, and the Association for the Advancement of Medical Instrumentation, and representatives of academia, industry, and government, have developed a new error grid, called the surveillance error grid (SEG) as a tool to assess the degree of clinical risk from inaccurate blood glucose (BG) monitors. A total of 206 diabetes clinicians were surveyed about the clinical risk of errors of measured BG levels by a monitor. The impact of such errors on 4 patient scenarios was surveyed. Each monitor/reference data pair was scored and color-coded on a graph per its average risk rating. Using modeled data representative of the accuracy of contemporary meters, the relationships between clinical risk and monitor error were calculated for the Clarke error grid (CEG), Parkes error grid (PEG), and SEG. SEG action boundaries were consistent across scenarios, regardless of whether the patient was type 1 or type 2 or using insulin or not. No significant differences were noted between responses of adult/pediatric or 4 types of clinicians. Although small specific differences in risk boundaries between US and non-US clinicians were noted, the panel felt they did not justify separate grids for these 2 types of clinicians. The data points of the SEG were classified in 15 zones according to their assigned level of risk, which allowed for comparisons with the classic CEG and PEG. Modeled glucose monitor data with realistic self-monitoring of blood glucose errors derived from meter testing experiments plotted on the SEG when compared to
Retransmission error control with memory
NASA Technical Reports Server (NTRS)
Sindhu, P. S.
1977-01-01
In this paper, an error control technique that is a basic improvement over automatic-repeat-request (ARQ) is presented. Erroneously received blocks in an ARQ system are used for error control. The technique is termed ARQ-with-memory (MRQ). The general MRQ system is described, and simple upper and lower bounds are derived on the throughput achievable by MRQ. The performance of MRQ with respect to throughput, message delay, and probability of error is compared to that of ARQ by simulating both systems using error data from a VHF satellite channel operated in the ALOHA packet broadcasting mode.
Maidhof, Clemens
2013-01-01
To err is human, and hence even professional musicians make errors occasionally during their performances. This paper summarizes recent work investigating error monitoring in musicians, i.e., the processes and their neural correlates associated with the monitoring of ongoing actions and the detection of deviations from intended sounds. Electroencephalography (EEG) studies reported an early component of the event-related potential (ERP) occurring before the onsets of pitch errors. This component, which can be altered in musicians with focal dystonia, likely reflects processes of error detection and/or error compensation, i.e., attempts to cancel the undesired sensory consequence (a wrong tone) a musician is about to perceive. Thus, auditory feedback seems not to be a prerequisite for error detection, consistent with previous behavioral results. In contrast, when auditory feedback is externally manipulated and thus unexpected, motor performance can be severely distorted, although not all feedback alterations result in performance impairments. Recent studies investigating the neural correlates of feedback processing showed that unexpected feedback elicits an ERP component after note onsets, which shows larger amplitudes during music performance than during mere perception of the same musical sequences. Hence, these results stress the role of motor actions for the processing of auditory information. Furthermore, recent methodological advances like the combination of 3D motion capture techniques with EEG will be discussed. Such combinations of different measures can potentially help to disentangle the roles of different feedback types such as proprioceptive and auditory feedback, and in general to arrive at a better understanding of the complex interactions between the motor and auditory domain during error monitoring. Finally, outstanding questions and future directions in this context will be discussed. PMID:23898255
Prè, D; Ceccarelli, G; Gastaldi, G; Asti, A; Saino, E; Visai, L; Benazzo, F; Cusella De Angelis, M G; Magenes, G
2011-08-01
Several studies have demonstrated that tissue culture conditions influence the differentiation of human adipose-derived stem cells (hASCs). Recently, studies performed on SAOS-2 and bone marrow stromal cells (BMSCs) have shown the effectiveness of high frequency vibration treatment on cell differentiation to osteoblasts. The aim of this study was to evaluate the effects of low amplitude, high frequency vibrations on the differentiation of hASCs toward bone tissue. In view of this goal, hASCs were cultured in proliferative or osteogenic media and stimulated daily at 30 Hz for 45 min for 28 days. The state of calcification of the extracellular matrix was determined using the alizarin assay, while the expression of extracellular matrix and associated mRNA was determined by ELISA assays and quantitative RT-PCR (qRT-PCR). The results showed the osteogenic effect of high frequency vibration treatment in the early stages of hASC differentiation (after 14 and 21 days). On the contrary, no additional significant differences were observed after 28 days of cell culture. Transmission Electron Microscopy (TEM) images of 21-day samples showed evidence of structured collagen fibers in the treated samples. Altogether, these results demonstrate the effectiveness of high frequency vibration treatment on hASC differentiation toward osteoblasts.
NASA Technical Reports Server (NTRS)
Couvillon, L. A., Jr. (Inventor)
1968-01-01
A digital communicating system for automatically synchronizing signals for data detection is described. The system consists of biphase modulating a subcarrier frequency by the binary data and transmitting a carrier phase modulated by this signal to a receiver, where coherent phase detection is employed to recover the subcarrier. Data detection is achieved by providing, in the receiver, a demodulated reference which is in synchronism with the unmodulated subcarrier in the transmitting system. The output of the detector is passed through a matched filter where the signal is integrated over a bit period. As a result, random noise components are averaged out, so that the probability of detecting the correct data transmitted is maximized.
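The subcarrier detection chain described above can be sketched numerically. Below is a minimal, noise-free model assuming a square-wave subcarrier and perfect synchronization; the patent specifies neither the waveform nor the rates, so the parameters are purely illustrative:

```python
import math

def modulate(bits, samples_per_bit=64, cycles_per_bit=4):
    """Biphase-modulate a square-wave subcarrier: each data bit sets its polarity."""
    tx = []
    for b in bits:
        for n in range(samples_per_bit):
            phase = 2 * math.pi * cycles_per_bit * n / samples_per_bit
            sub = 1.0 if math.sin(phase) >= 0 else -1.0  # square subcarrier sample
            tx.append(sub if b else -sub)
    return tx

def detect(rx, samples_per_bit=64, cycles_per_bit=4):
    """Coherent detection: multiply by a synchronized reference subcarrier,
    then integrate over one bit period (integrate-and-dump matched filter)."""
    bits = []
    for i in range(0, len(rx), samples_per_bit):
        acc = 0.0
        for n in range(samples_per_bit):
            phase = 2 * math.pi * cycles_per_bit * n / samples_per_bit
            ref = 1.0 if math.sin(phase) >= 0 else -1.0
            acc += rx[i + n] * ref  # integration averages out zero-mean noise
        bits.append(1 if acc > 0 else 0)
    return bits

data = [1, 0, 1, 1, 0, 0, 1]
recovered = detect(modulate(data))
```

With zero-mean noise added to `rx`, the integrate-and-dump sum still decides correctly as long as the accumulated noise stays below the signal's contribution, which is why integrating over the full bit period maximizes the probability of correct detection.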
Batic, D.; Kelkar, N. G.; Nowakowski, M.
2011-05-15
It is shown here that the extraction of quasinormal modes within the first Born approximation of the scattering amplitude is mathematically not well-founded. Indeed, the constraints on the existence of the scattering amplitude integral lead to inequalities for the imaginary parts of the quasinormal mode frequencies. For instance, in the Schwarzschild case, 0 ≤ ω_I < κ (where κ is the surface gravity at the horizon) invalidates the poles deduced from the first Born approximation method, namely, ω_n = inκ.
Automatic oscillator frequency control system
NASA Technical Reports Server (NTRS)
Smith, S. F. (Inventor)
1985-01-01
A frequency control system makes an initial correction of the frequency of its own timing circuit after comparison against a frequency of known accuracy and then sequentially checks and corrects the frequencies of several voltage controlled local oscillator circuits. The timing circuit initiates the machine cycles of a central processing unit which applies a frequency index to an input register in a modulo-sum frequency divider stage and enables a multiplexer to clock an accumulator register in the divider stage with a cyclical signal derived from the oscillator circuit being checked. Upon expiration of the interval, the processing unit compares the remainder held as the contents of the accumulator against a stored zero error constant and applies an appropriate correction word to a correction stage to shift the frequency of the oscillator being checked. A signal from the accumulator register may be used to drive a phase plane ROM and, with periodic shifts in the applied frequency index, to provide frequency shift keying of the resultant output signal. Interposition of a phase adder between the accumulator register and phase plane ROM permits phase shift keying of the output signal by periodic variation in the value of a phase index applied to one input of the phase adder.
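The modulo-sum divider stage described above can be illustrated with a toy phase accumulator. The register width, frequency index, and interval length below are made-up values for illustration; the patent abstract gives no concrete numbers:

```python
ACC_BITS = 24
MOD = 1 << ACC_BITS

def accumulate(freq_index, n_clocks, acc=0):
    """Clock the modulo-sum divider: each cycle of the oscillator being
    checked adds the frequency index to the accumulator, modulo 2**24."""
    for _ in range(n_clocks):
        acc = (acc + freq_index) % MOD
    return acc

# If the oscillator delivers exactly the expected number of clocks during the
# measurement interval, the accumulator returns to the stored zero-error
# constant (here 0); any nonzero remainder is proportional to the frequency
# error and drives the correction word.
expected_clocks = MOD // 4096            # interval chosen so index * clocks wraps evenly
remainder_ok = accumulate(4096, expected_clocks)
remainder_off = accumulate(4096, expected_clocks + 3)  # oscillator 3 cycles fast
```

Frequency shift keying falls out of the same structure: periodically switching `freq_index` changes the accumulation rate, and hence the frequency synthesized from the accumulator contents via the phase plane ROM.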
Standard Errors of the Kernel Equating Methods under the Common-Item Design.
ERIC Educational Resources Information Center
Liou, Michelle; Cheng, Philip E.; Johnson, Eugene G.
1997-01-01
Derived simplified equations to compute the standard error of the frequency estimation method for equating score distributions that are continuized using a uniform or Gaussian kernel function. Results from two empirical studies indicate that these equations work reasonably well for moderate size samples. (SLD)
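The kernel continuization step underlying these standard-error formulas can be sketched as follows. The discrete score distribution and the bandwidth `h` below are invented for illustration (the kernel-equating literature selects `h` by a penalty criterion):

```python
import math

def continuized_cdf(x, scores, probs, h=0.6):
    """Gaussian-kernel continuization of a discrete score distribution:
    F_h(x) = sum_j p_j * Phi((x - x_j) / h), with Phi the standard normal CDF."""
    phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return sum(p * phi((x - xj) / h) for xj, p in zip(scores, probs))

scores = [0, 1, 2, 3, 4]
probs = [0.1, 0.2, 0.4, 0.2, 0.1]
lo = continuized_cdf(-50.0, scores, probs)   # ~0 far below the score range
hi = continuized_cdf(50.0, scores, probs)    # ~1 far above the score range
mid = continuized_cdf(2.0, scores, probs)    # 0.5 at the center, by symmetry
```

The continuized CDF is smooth and strictly increasing, which is what allows the equating function (and hence its standard error) to be obtained by function composition and the delta method.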
Standard Errors of the Kernel Equating Methods under the Common-Item Design.
ERIC Educational Resources Information Center
Liou, Michelle; And Others
This research derives simplified formulas for computing the standard error of the frequency estimation method for equating score distributions that are continuized using a uniform or Gaussian kernel function (P. W. Holland, B. F. King, and D. T. Thayer, 1989; Holland and Thayer, 1987). The simplified formulas are applicable to equating both the…
Nguyen, C.; Garbet, X.; Smolyakov, A. I.
2008-11-15
In the present paper, we compare two modes with frequencies belonging to the acoustic frequency range: the geodesic acoustic mode (GAM) and the beta-induced Alfvén eigenmode (BAE). For this, a variational gyrokinetic energy principle coupled to a Fourier sideband expansion is developed. High order finite Larmor radius and finite orbit width effects are kept. Their impact on the mode structures and on the Alfvén spectrum is calculated and discussed. We show that in a local analysis, the degeneracy of the electrostatic GAM and the BAE dispersion relations is verified to a high order, based in particular on a local poloidal symmetry of the two modes. When a more global point of view is taken, and the full radial structures of the modes are computed, differences appear. The BAE structure is shown to have an enforced localization, and to possibly connect to a large magnetohydrodynamic structure. In contrast, the GAM is seen to have a wavelike, nonlocalized structure, as long as standard slowly varying monotonic profiles are considered.
Feshin, V.P.; Nikitin, P.A.; Voronkov, M.G.
1985-09-01
The method of nuclear quadrupole resonance (NQR) provides unique information on the spatial distribution of the electron density of the test atom. This paper attempts to confirm, by another method, the theoretically established relationship between Cl-35 NQR frequencies and the features of the distribution of the electron density of the Cl atom in its compounds. The authors carried out quantum-mechanical calculations in the CNDO/2 approximation in an all-valence sp basis for a number of organic, organometallic, and inorganic chlorinated derivatives. A comparison of the linear plots corresponding to the correlation equations and the empirical expressions for the molecules of the XCl series is shown.
NASA Astrophysics Data System (ADS)
Conway, J. E.; Sault, R. J.
Introduction; Image Fidelity; Multi-Frequency Synthesis; Spectral Effects; The Spectral Expansion; Spectral Dirty Beams; First Order Spectral Errors; Second Order Spectral Errors; The MFS Deconvolution Problem; Nature of The Problem; Map and Stack; Direct Assault; Data Weighting Methods; Double Deconvolution; The Sault Algorithm; Multi-Frequency Self-Calibration; Practical MFS; Conclusions
Phase Errors and the Capture Effect
Blair, J., and Machorro, E.
2011-11-01
This slide show presents an analysis of spectrograms and the phase error of filtered noise in a signal. When the filtered noise is smaller than the signal amplitude, the phase error can never exceed 90°, so the average phase error over many cycles is zero; this is called the capture effect because the largest signal captures the phase and frequency determination.
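The bound on the phase error can be checked directly. Modeling the filtered noise as a single interfering phasor of amplitude `a` added to a signal of amplitude `A > a` (an idealization; the slides treat general filtered noise), the phase error is `atan2(a·sin φ, A + a·cos φ)`, which is bounded by `asin(a/A) < 90°` and averages to zero over the interference phase:

```python
import math

A, a = 1.0, 0.4          # signal amplitude and (smaller) interference amplitude

def phase_error(phi):
    """Phase of signal-plus-interference, relative to the signal alone (degrees)."""
    return math.degrees(math.atan2(a * math.sin(phi), A + a * math.cos(phi)))

# Sweep the interference phase over a full cycle in 0.1-degree steps.
phis = [2 * math.pi * k / 3600 for k in range(3600)]
errors = [phase_error(p) for p in phis]
worst = max(abs(e) for e in errors)     # bounded by asin(a/A), well under 90 deg
mean = sum(errors) / len(errors)        # cancels to ~0: the capture effect
```

Because the error is an odd function of the interference phase, positive and negative excursions cancel over many cycles, so the larger signal's frequency dominates any long-term phase or frequency estimate.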
Lu, V B; Colmers, W F; Smith, P A
2009-07-21
Chronic constriction injury (CCI) of rat sciatic nerve produces a specific pattern of electrophysiological changes in the superficial dorsal horn that lead to central sensitization that is associated with neuropathic pain. These changes can be recapitulated in spinal cord organotypic cultures by long term (5-6 days) exposure to brain-derived neurotrophic factor (BDNF) (200 ng/ml). Certain lines of evidence suggest that both CCI and BDNF increase excitatory synaptic drive to putative excitatory neurons while reducing that to putative inhibitory interneurons. Because BDNF slows the rate of discharge of synaptically-driven action potentials in inhibitory neurons, it should also decrease the frequency of spontaneous inhibitory postsynaptic currents (sIPSCs) throughout the superficial dorsal horn. To test this possibility, we characterized superficial dorsal horn neurons in organotypic cultures according to five electrophysiological phenotypes that included tonic, delay and irregular firing neurons. Five to 6 days of treatment with 200 ng/ml BDNF decreased sIPSC frequency in tonic and irregular neurons as might be expected if BDNF selectively decreases excitatory synaptic drive to inhibitory interneurons. The frequency of sIPSCs in delay neurons was however increased. Further analysis of the action of BDNF on tetrodotoxin-resistant miniature inhibitory postsynaptic currents (mIPSC) showed that the frequency was increased in delay neurons, unchanged in tonic neurons and decreased in irregular neurons. BDNF may thus reduce action potential frequency in those inhibitory interneurons that project to tonic and irregular neurons but not in those that project to delay neurons.
Radar error statistics for the space shuttle
NASA Technical Reports Server (NTRS)
Lear, W. M.
1979-01-01
Radar error statistics of C-band and S-band radars, recommended for use with the ground-tracking programs that process space shuttle tracking data, are presented. The statistics are divided into two parts: bias error statistics, using the subscript B, and high frequency error statistics, using the subscript q. Bias errors may be slowly varying or constant. High frequency random errors (noise) are rapidly varying and may or may not be correlated from sample to sample. Bias errors were mainly due to hardware defects and to errors in correcting for atmospheric refraction effects. High frequency noise was mainly due to hardware and to atmospheric scintillation. Three types of atmospheric scintillation were identified: horizontal, vertical, and line of sight. This was the first time that horizontal and line-of-sight scintillations were identified.
Error studies for SNS Linac. Part 1: Transverse errors
Crandall, K.R.
1998-12-31
The SNS linac consists of a radio-frequency quadrupole (RFQ), a drift-tube linac (DTL), a coupled-cavity drift-tube linac (CCDTL), and a coupled-cavity linac (CCL). The RFQ and DTL are operated at 402.5 MHz; the CCDTL and CCL are operated at 805 MHz. Between the RFQ and DTL is a medium-energy beam-transport system (MEBT). This error study is concerned with the DTL, CCDTL, and CCL, and each will be analyzed separately. In fact, the CCL is divided into two sections, and each of these will be analyzed separately. The types of errors considered here are those that affect the transverse characteristics of the beam. The errors that cause the beam center to be displaced from the linac axis are quad displacements and quad tilts. The errors that cause mismatches are quad gradient errors and quad rotations (roll).
Ahlgren, Nathan A; Ren, Jie; Lu, Yang Young; Fuhrman, Jed A; Sun, Fengzhu
2017-01-09
Viruses and their host genomes often share similar oligonucleotide frequency (ONF) patterns, which can be used to predict the host of a given virus by finding the host with the greatest ONF similarity. We comprehensively compared 11 ONF metrics using several k-mer lengths for predicting host taxonomy from among ∼32 000 prokaryotic genomes for 1427 virus isolate genomes whose true hosts are known. The background-subtracting measure [Formula: see text] at k = 6 gave the highest host prediction accuracy (33%, genus level) with reasonable computational times. Requiring a maximum dissimilarity score for making predictions (thresholding) and taking the consensus of the 30 most similar hosts further improved accuracy. Using a previous dataset of 820 bacteriophage and 2699 bacterial genomes, [Formula: see text] host prediction accuracies with thresholding and consensus methods (genus-level: 64%) exceeded previous Euclidian distance ONF (32%) or homology-based (22-62%) methods. When applied to metagenomically-assembled marine SUP05 viruses and the human gut virus crAssphage, [Formula: see text]-based predictions overlapped (i.e. some same, some different) with the previously inferred hosts of these viruses. The extent of overlap improved when only using host genomes or metagenomic contigs from the same habitat or samples as the query viruses. The [Formula: see text] ONF method will greatly improve the characterization of novel, metagenomic viruses. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
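A minimal sketch of ONF-based host comparison, using the simpler Euclidean-distance baseline mentioned in the abstract rather than the background-subtracting d2* measure (which additionally corrects each k-mer count against a Markov-model expectation). The sequences and k = 2 are toy choices; the study used k up to 6 on whole genomes:

```python
from itertools import product

def onf_vector(seq, k=2):
    """Normalized k-mer (oligonucleotide) frequency vector of a DNA sequence."""
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    counts = {km: 0 for km in kmers}
    for i in range(len(seq) - k + 1):
        counts[seq[i:i + k]] += 1
    total = max(1, len(seq) - k + 1)
    return [counts[km] / total for km in kmers]

def euclidean(u, v):
    return sum((x - y) ** 2 for x, y in zip(u, v)) ** 0.5

virus = "ATGCGATATGCGATATGCGA"
host_a = "ATGCGATATGCGATATGCGATATGCG"   # similar composition -> small distance
host_b = "CCCCCCGGGGGGCCCCCCGGGGGG"     # different composition -> large distance
d_a = euclidean(onf_vector(virus), onf_vector(host_a))
d_b = euclidean(onf_vector(virus), onf_vector(host_b))
```

The predicted host is the candidate genome with the smallest dissimilarity; the paper's thresholding step simply refuses to predict when even the best score exceeds a maximum-dissimilarity cutoff.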
Interpolation Errors in Spectrum Analyzers
NASA Technical Reports Server (NTRS)
Martin, J. L.
1996-01-01
To obtain the proper measurement amplitude with a spectrum analyzer, the correct frequency-dependent transducer factor must be added to the voltage measured by the transducer. This report examines how entering transducer factors into a spectrum analyzer can cause significant errors in field amplitude due to the misunderstanding of the analyzer's interpolation methods. It also discusses how to reduce these errors to obtain a more accurate field amplitude reading.
Prescription errors in cancer chemotherapy: Omissions supersede potentially harmful errors
Mathaiyan, Jayanthi; Jain, Tanvi; Dubashi, Biswajit; Reddy, K Satyanarayana; Batmanabane, Gitanjali
2015-01-01
Objective: To estimate the frequency and type of prescription errors in patients receiving cancer chemotherapy. Settings and Design: We conducted a cross-sectional study at the day care unit of the Regional Cancer Centre (RCC) of a tertiary care hospital in South India. Materials and Methods: All prescriptions written during July to September 2013 for patients attending the out-patient department of the RCC to be treated at the day care center were included in this study. The prescriptions were analyzed for omission of standard information, usage of brand names, abbreviations, and legibility. The errors were further classified as potentially harmful or not harmful based on the likelihood of resulting in harm to the patient. Descriptive analysis was performed to estimate the frequency of prescription errors, expressed as total number of errors and percentage. Results: A total of 4253 prescribing errors were found in 1500 prescriptions (283.5%, i.e., an average of about 2.8 errors per prescription), of which 47.1% were due to omissions such as name, age, and diagnosis and 22.5% were due to usage of brand names. Abbreviations of pre-medications and anticancer drugs accounted for 29.2% of the errors. Potentially harmful errors that were likely to result in serious consequences to the patient were estimated at 11.7%. Conclusions: Most of the errors intercepted in our study are due to a high patient load and inattention of the prescribers to omissions in prescription. Redesigning prescription forms and sensitizing prescribers to the importance of writing prescriptions without errors may help in reducing errors to a large extent. PMID:25969654
Pico de Coaña, Yago; Poschke, Isabel; Gentilcore, Giusy; Mao, Yumeng; Nyström, Maria; Hansson, Johan; Masucci, Giuseppe V; Kiessling, Rolf
2013-09-01
Blocking the immune checkpoint molecule CTL antigen-4 (CTLA-4) with ipilimumab has proven to induce long-lasting clinical responses in patients with metastatic melanoma. To study the early response that takes place after CTLA-4 blockade, peripheral blood immune monitoring was conducted in five patients undergoing ipilimumab treatment at baseline, three and nine weeks after administration of the first dose. Along with T-cell population analysis, this work was primarily focused on an in-depth study of the myeloid-derived suppressor cell (MDSC) populations. Ipilimumab treatment resulted in lower frequencies of regulatory T cells along with reduced expression levels of PD-1 at the nine-week time point. Three weeks after the initial ipilimumab dose, the frequency of granulocytic MDSCs was significantly reduced and was followed by a reduction in the frequency of arginase1-producing CD3(-) cells, indicating an indirect in trans effect that should be taken into account for future evaluations of ipilimumab mechanisms of action.
Visual field test simulation and error in threshold estimation.
Spenceley, S E; Henson, D B
1996-01-01
AIM: To establish, via computer simulation, the effects of patient response variability and staircase starting level upon the accuracy and repeatability of static full threshold visual field tests. METHOD: Patient response variability, defined by the standard deviation of the frequency of seeing versus stimulus intensity curve, is varied from 0.5 to 20 dB (in steps of 0.5 dB) with staircase starting levels ranging from 30 dB below to 30 dB above the patient's threshold (in steps of 10 dB). Fifty-two threshold estimates are derived for each condition and the error of each estimate calculated (difference between the true threshold and the threshold estimate derived from the staircase procedure). The mean and standard deviation of the errors are then determined for each condition. The results from a simulated quadrantic defect (response variability set to typical values for a patient with glaucoma) are presented using two different algorithms. The first corresponds with that normally used when performing a full threshold examination while the second uses results from an earlier simulated full threshold examination for the staircase starting values. RESULTS: The mean error in threshold estimates was found to be biased towards the staircase starting level. The extent of the bias was dependent upon patient response variability. The standard deviation of the error increased both with response variability and staircase starting level. With the routinely used full threshold strategy the quadrantic defect was found to have a large mean error in estimated threshold values and an increase in the standard deviation of the error along the edge of the defect. When results from an earlier full threshold test are used as staircase starting values this error and increased standard deviation largely disappeared. CONCLUSION: The staircase procedure widely used in threshold perimetry increased the error and the variability of threshold estimates along the edges of defects. Using
Khine, Soe Minn; Houra, Tomoya; Tagawa, Masato
2013-04-01
In temperature measurement of non-isothermal fluid flows by a contact-type temperature sensor, heat conduction along the sensor body can cause significant measurement error which is called "heat-conduction error." The conventional formula for estimating the heat-conduction error was derived under the condition that the fluid temperature to be measured is uniform. Thus, if we apply the conventional formula to a thermal field with temperature gradient, the heat-conduction error will be underestimated. In the present study, we have newly introduced a universal physical model of a temperature-measurement system to estimate accurately the heat-conduction error even if a temperature gradient exists in non-isothermal fluid flows. Accordingly, we have been able to successfully derive a widely applicable estimation and/or evaluation formula of the heat-conduction error. Then, we have verified experimentally the effectiveness of the proposed formula using the two non-isothermal fields-a wake flow formed behind a heated cylinder and a candle flame-whose fluid-dynamical characteristics should be quite different. As a result, it is confirmed that the proposed formula can represent accurately the experimental behaviors of the heat-conduction error which cannot be explained appropriately by the existing formula. In addition, we have analyzed theoretically the effects of the heat-conduction error on the fluctuating temperature measurement of a non-isothermal unsteady fluid flow to derive the frequency response of the temperature sensor to be used. The analysis result shows that the heat-conduction error in temperature-fluctuation measurement appears only in a low-frequency range. Therefore, if the power-spectrum distribution of temperature fluctuations to be measured is sufficiently away from the low-frequency range, the heat-conduction error has virtually no effect on the temperature-fluctuation measurements even by the temperature sensor accompanying the heat-conduction error in
A Robust Sampling Frequency Offset Estimator for WLAN-OFDM
NASA Astrophysics Data System (ADS)
You, Young-Hwan; Hwang, Taewon; Jeong, Kwang-Soo; Yi, Jae-Hoon
This letter presents a noise-robust sampling frequency offset (SFO) estimation scheme for OFDM-based WLAN systems. Mean square error of the proposed estimation scheme is derived and simulation results are provided to verify our analysis. The proposed SFO estimator has an improved performance over the existing schemes with a reduction of the estimation range.
Error analysis of tissue resistivity measurement.
Tsai, Jang-Zern; Will, James A; Hubbard-Van Stelle, Scott; Cao, Hong; Tungjitkusolmun, Supan; Choy, Young Bin; Haemmerich, Dieter; Vorperian, Vicken R; Webster, John G
2002-05-01
We identified the error sources in a system for measuring tissue resistivity at eight frequencies from 1 Hz to 1 MHz using the four-terminal method. We expressed the measured resistivity with an analytical formula containing all error terms. We conducted practical error measurements with in-vivo and bench-top experiments. We averaged errors at all frequencies for all measurements. The standard deviations of error of the quantization error of the 8-bit digital oscilloscope with voltage averaging, the nonideality of the circuit, the in-vivo motion artifact and electrical interference combined to yield an error of +/- 1.19%. The dimension error in measuring the syringe tube for measuring the reference saline resistivity added +/- 1.32% error. The estimation of the working probe constant by interpolating a set of probe constants measured in reference saline solutions added +/- 0.48% error. The difference in the current magnitudes used during the probe calibration and that during the tissue resistivity measurement caused +/- 0.14% error. Variation of the electrode spacing, alignment, and electrode surface property due to the insertion of electrodes into the tissue caused +/- 0.61% error. We combined the above errors to yield an overall standard deviation error of the measured tissue resistivity of +/- 1.96%.
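Assuming the listed error sources are independent, they combine in quadrature. The root-sum-square of the five standard deviations reported above reproduces the overall figure to within rounding (about 1.94% vs. the reported ±1.96%; the small gap presumably reflects rounding of the individual terms):

```python
import math

# Independent error sources reported in the abstract (percent, one sigma)
sources = {
    "quantization, circuit, motion artifact, interference": 1.19,
    "syringe tube dimension (reference saline)":            1.32,
    "probe-constant interpolation":                         0.48,
    "calibration vs. measurement current magnitude":        0.14,
    "electrode spacing, alignment, surface property":       0.61,
}

# Uncorrelated errors add in quadrature (root-sum-square)
total = math.sqrt(sum(e ** 2 for e in sources.values()))
```

Quadrature addition is the standard error-budget rule for uncorrelated sources; a single dominant term (here the two ~1.2-1.3% terms) largely sets the total, so reducing the small terms further would barely improve the overall accuracy.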
Prejac, J; Višnjević, V; Drmić, S; Skalny, A A; Mimica, N; Momčilović, B
2014-04-01
Today, human iodine deficiency is next to iron the most common nutritional deficiency in developed European and underdeveloped third world countries, respectively. A current biological indicator of iodine status is urinary iodine that reflects the very recent iodine exposure, whereas some long term indicator of iodine status remains to be identified. We analyzed hair iodine in a prospective, observational, cross-sectional, and exploratory study involving 870 apparently healthy Croatians (270 men and 600 women). Hair iodine was analyzed with the inductively coupled plasma mass spectrometry (ICP MS). Population (n870) hair iodine (IH) respective median was 0.499μgg(-1) (0.482 and 0.508μgg(-1)) for men and women, respectively, suggesting no sex related difference. We studied the hair iodine uptake by the logistic sigmoid saturation curve of the median derivatives to assess iodine deficiency, adequacy and excess. We estimated the overt iodine deficiency to occur when hair iodine concentration is below 0.15μgg(-1). Then there was a saturation range interval of about 0.15-2.0μgg(-1) (r(2)=0.994). Eventually, the sigmoid curve became saturated at about 2.0μgg(-1) and upward, suggesting excessive iodine exposure. Hair appears to be a valuable and robust long term biological indicator tissue for assessing the iodine body status. We propose adequate iodine status to correspond with the hair iodine (IH) uptake saturation of 0.565-0.739μgg(-1) (55-65%).
Measurement error models with interactions.
Midthune, Douglas; Carroll, Raymond J; Freedman, Laurence S; Kipnis, Victor
2016-04-01
An important use of measurement error models is to correct regression models for bias due to covariate measurement error. Most measurement error models assume that the observed error-prone covariate (W) is a linear function of the unobserved true covariate (X) plus other covariates (Z) in the regression model. In this paper, we consider models for W that include interactions between X and Z. We derive the conditional distribution of X given W and Z and use it to extend the method of regression calibration to this class of measurement error models. We apply the model to dietary data and test whether self-reported dietary intake includes an interaction between true intake and body mass index. We also perform simulations to compare the model to simpler approximate calibration models. Published by Oxford University Press 2015. This work is written by (a) US Government employee(s) and is in the public domain in the US.
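In the simplest no-interaction case, the classical error model W = X + U, which the paper generalizes to include X-by-Z interactions, regression calibration reduces to dividing the naive slope by the reliability ratio. The variances and slope below are illustrative numbers, not from the paper:

```python
var_x = 1.0   # variance of the true covariate X
var_u = 0.5   # variance of the measurement error U, with W = X + U, U independent of X

# Reliability ratio: the slope of E[X | W] under the classical error model,
# i.e. how much the observed covariate attenuates the true signal.
reliability = var_x / (var_x + var_u)

beta_naive = 0.6                            # slope from regressing Y on error-prone W
beta_corrected = beta_naive / reliability   # regression-calibration correction
```

With interactions present, E[X | W, Z] is no longer a simple rescaling of W, which is why the paper must derive the full conditional distribution of X given W and Z before calibrating.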
NASA Technical Reports Server (NTRS)
Mcruer, D. T.; Clement, W. F.; Allen, R. W.
1981-01-01
Human errors tend to be treated in terms of clinical and anecdotal descriptions, from which remedial measures are difficult to derive. Correction of the sources of human error requires an attempt to reconstruct underlying and contributing causes of error from the circumstantial causes cited in official investigative reports. A comprehensive analytical theory of the cause-effect relationships governing propagation of human error is indispensable to a reconstruction of the underlying and contributing causes. A validated analytical theory of the input-output behavior of human operators involving manual control, communication, supervisory, and monitoring tasks which are relevant to aviation, maritime, automotive, and process control operations is highlighted. This theory of behavior, both appropriate and inappropriate, provides an insightful basis for investigating, classifying, and quantifying the needed cause-effect relationships governing propagation of human error.
Raucci, Frank J; Parra, David A; Christensen, Jason T; Hernandez, Lazaro E; Markham, Larry W; Xu, Meng; Slaughter, James C; Soslow, Jonathan H
2017-08-02
Extracellular volume fraction (ECV) is altered in pathological cardiac remodeling and predicts death and arrhythmia. ECV can be quantified using cardiovascular magnetic resonance (CMR) T1 mapping but calculation requires a measured hematocrit (Hct). The longitudinal relaxation of blood has been used in adults to generate a synthetic Hct (estimate of true Hct) but has not been validated in pediatric populations. One hundred fourteen children and young adults underwent a total of 163 CMRs with T1 mapping. The majority of subjects had a measured Hct the same day (N = 146). Native and post-contrast T1 were determined in blood pool, septum, and free wall of mid-LV, avoiding areas of late gadolinium enhancement. Synthetic Hct and ECV were calculated, and intraclass correlation coefficient (ICC) and linear regression were used to compare measured and synthetic values. The mean age was 16.4 ± 6.4 years and mean left ventricular ejection fraction was 59% ± 9%. The mean measured Hct was 41.8 ± 3.0% compared to the mean synthetic Hct of 43.2% ± 2.9% (p < 0.001, ICC 0.46 [0.27, 0.52]) with the previously published model and 41.8% ± 1.4% (p < 0.001, ICC 0.28 [0.13, 0.42]) with the locally-derived model. Mean measured mid-free wall ECV was 30.5% ± 4.8% and mean synthetic mid-free wall ECV of the local model was 29.7% ± 4.6% (p < 0.001, ICC 0.93 [0.91, 0.95]). Correlations were not affected by heart rate and did not significantly differ in subpopulation analysis. While the ICC was strong, differences between measured and synthetic ECV ranged from -8.4% to 4.3% in the septum and -12.6% to 15.8% in the free wall. Using our laboratory's normal cut-off of 28.5%, 59 patients (37%) were miscategorized (53 false negatives, 6 false positives) with the published-model ECV. The local model had 37 miscategorizations (20 false negatives, 17 false positives), significantly fewer but still a substantial number (23%). Our data suggest that use of synthetic Hct for the
A concatenated coding scheme for error control
NASA Technical Reports Server (NTRS)
Lin, S.
1985-01-01
A concatenated coding scheme for error control in data communications was analyzed. The inner code is used for both error correction and detection; the outer code is used only for error detection. A retransmission is requested if either the inner code decoder fails to make a successful decoding or the outer code decoder detects the presence of errors after the inner code decoding. The probability of undetected error of the proposed scheme is derived, and an efficient method for computing this probability is presented. The throughput efficiency of the proposed error control scheme, incorporated with a selective-repeat ARQ retransmission strategy, is analyzed.
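For a detection-only code, the undetected-error probability has a closed form in simple cases. As an illustration only (the scheme above analyzes a different, stronger code pair), a single overall parity bit on a length-n block fails to detect exactly those patterns with an even, nonzero number of bit errors:

```python
def p_undetected_parity(n, p):
    """Probability that a length-n block protected by one overall parity bit
    is accepted despite errors: an even, nonzero number of bit errors on a
    binary symmetric channel with crossover probability p."""
    p_even = (1.0 + (1.0 - 2.0 * p) ** n) / 2.0   # P(even # of errors), incl. zero
    p_zero = (1.0 - p) ** n                        # P(no errors at all)
    return p_even - p_zero

p_ud = p_undetected_parity(n=32, p=1e-3)   # dominated by the two-error patterns
```

The identity P(even number of errors) = (1 + (1 - 2p)^n) / 2 follows from averaging the binomial expansions of (q + p)^n and (q - p)^n with q = 1 - p; the same weight-enumerator style of calculation, applied to the concatenated code's actual distance structure, is what the abstract's "efficient method" computes.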
NASA Astrophysics Data System (ADS)
Orem, Caitlin A.; Pelletier, Jon D.
2016-11-01
Flood-envelope curves (FECs) are useful for constraining the upper limit of possible flood discharges within drainage basins in a particular hydroclimatic region. Their usefulness, however, is limited by their lack of a well-defined recurrence interval. In this study we use radar-derived precipitation estimates to develop an alternative to the FEC method, i.e., the frequency-magnitude-area-curve (FMAC) method that incorporates recurrence intervals. The FMAC method is demonstrated in two well-studied US drainage basins, i.e., the Upper and Lower Colorado River basins (UCRB and LCRB, respectively), using Stage III Next-Generation-Radar (NEXRAD) gridded products and the diffusion-wave flow-routing algorithm. The FMAC method can be applied worldwide using any radar-derived precipitation estimates. In the FMAC method, idealized basins of similar contributing area are grouped together for frequency-magnitude analysis of precipitation intensity. These data are then routed through the idealized drainage basins of different contributing areas, using contributing-area-specific estimates for channel slope and channel width. Our results show that FMACs of precipitation discharge are power-law functions of contributing area with an average exponent of 0.82 ± 0.06 for recurrence intervals from 10 to 500 years. We compare our FMACs to published FECs and find that for wet antecedent-moisture conditions, the 500-year FMAC of flood discharge in the UCRB is on par with the US FEC for contributing areas of ~10² to 10³ km². FMACs of flood discharge for the LCRB exceed the published FEC for the LCRB for contributing areas in the range of ~10³ to 10⁴ km². The FMAC method retains the power of the FEC method for constraining flood hazards in basins that are ungauged or have short flood records, yet it has the added advantage that it includes recurrence-interval information necessary for estimating event probabilities.
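The reported power-law relation Q = c · A^b can be recovered from (area, discharge) pairs by a log-log least-squares fit. The data below are synthetic, generated with the abstract's average exponent b = 0.82; they are not the NEXRAD-derived values.

```python
import numpy as np

# Sketch: recover the FMAC power-law exponent Q = c * A**b via log-log
# least squares. The sample data are SYNTHETIC, generated with b = 0.82.

rng = np.random.default_rng(0)
area_km2 = np.logspace(2, 4, 25)        # contributing areas, 10^2..10^4 km^2
b_true, c_true = 0.82, 5.0
discharge = c_true * area_km2**b_true * rng.lognormal(0.0, 0.05, area_km2.size)

b_fit, log_c_fit = np.polyfit(np.log(area_km2), np.log(discharge), 1)
print(abs(b_fit - b_true) < 0.05)       # exponent recovered near 0.82
```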
Reducing errors in emergency surgery.
Watters, David A K; Truskett, Philip G
2013-06-01
Errors are to be expected in health care. Adverse events occur in around 10% of surgical patients and may be even more common in emergency surgery. There is little formal teaching on surgical error in surgical education and training programmes despite their frequency. This paper reviews surgical error and provides a classification system to facilitate learning. The approach and language used to enable teaching about surgical error were developed through a review of key literature and consensus by the founding faculty of the Management of Surgical Emergencies course, currently delivered by General Surgeons Australia. Errors may be classified as being the result of commission, omission or inition. An error of inition is a failure of effort or will and is a failure of professionalism. The risk of error can be minimized by good situational awareness, matching perception to reality, and, during treatment, reassessing the patient, team and plan. It is important to recognize and acknowledge an error when it occurs and then to respond appropriately. The response will involve rectifying the error where possible but also disclosing, reporting and reviewing at a system level all the root causes. This should be done without shaming or blaming. However, the individual surgeon still needs to reflect on their own contribution and performance. A classification of surgical error has been developed that promotes understanding of how the error was generated, and utilizes a language that encourages reflection, reporting and response by surgeons and their teams. © 2013 The Authors. ANZ Journal of Surgery © 2013 Royal Australasian College of Surgeons.
Evaluation of GPS Standard Point Positioning with Various Ionospheric Error Mitigation Techniques
NASA Astrophysics Data System (ADS)
Panda, Sampad K.; Gedam, Shirish S.
2016-12-01
The present paper investigates the accuracy of single- and dual-frequency Global Positioning System (GPS) standard point positioning solutions employing different ionosphere error mitigation techniques. The total electron content (TEC) in the ionosphere is the prominent delay error source in GPS positioning, and its elimination is essential for obtaining a relatively precise positioning solution. The estimated delay error from different ionosphere models and maps, such as the Klobuchar model, global ionosphere models, and vertical TEC maps, is compared with the locally derived ionosphere error following the ion density and frequency dependence of the delay error. Finally, the positional accuracy of the single- and dual-frequency GPS point positioning solutions is probed through different ionospheric mitigation methods, including exploitation of models, maps, and ionosphere-free linear combinations and removal of higher-order ionospheric effects. The results suggest the superiority of global ionosphere maps for the single-frequency solution, whereas for the dual-frequency measurement the ionosphere-free linear combination with prior removal of higher-order ionosphere effects from global ionosphere maps and geomagnetic reference fields resulted in improved positioning quality among the chosen mitigation techniques. Conspicuously, the susceptibility of the height component to different ionospheric mitigation methods is demonstrated in this study, which may assist users in selecting an appropriate technique for precise GPS positioning measurements.
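The dual-frequency ionosphere-free linear combination referenced above exploits the fact that the first-order ionospheric delay scales as 1/f². A minimal sketch using the GPS L1/L2 carrier frequencies follows; the pseudorange and delay values are synthetic.

```python
# Sketch of the first-order ionosphere-free pseudorange combination:
# IF = (f1^2 * P1 - f2^2 * P2) / (f1^2 - f2^2).
# Range and delay values below are SYNTHETIC, for illustration only.

F1, F2 = 1575.42e6, 1227.60e6        # GPS L1, L2 carrier frequencies (Hz)

def iono_free(p1, p2):
    """First-order ionosphere-free pseudorange combination (metres)."""
    g = F1**2 / (F1**2 - F2**2)
    return g * p1 - (g - 1.0) * p2

true_range = 22_000_000.0            # metres (synthetic)
iono_l1 = 5.0                        # first-order delay on L1 (m, synthetic)
p1 = true_range + iono_l1
p2 = true_range + iono_l1 * (F1 / F2)**2   # delay scales as 1/f^2

print(abs(iono_free(p1, p2) - true_range) < 1e-3)  # first-order delay removed
```

Higher-order ionospheric terms do not cancel in this combination, which is why the abstract pairs it with a separate higher-order correction step.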
Rudolph, Berenice M; Loquai, Carmen; Gerwe, Alexander; Bacher, Nicole; Steinbrink, Kerstin; Grabbe, Stephan; Tuettenberg, Andrea
2014-03-01
Myeloid-derived suppressor cells (MDSC) are a heterogeneous cell population characterized by immunosuppressive activity. Elevated levels of MDSC in peripheral blood are found in inflammatory diseases as well as in malignant tumors where they are supposed to be major contributors to mechanisms of tumor-associated tolerance. We investigated the frequency and function of MDSC in peripheral blood of melanoma patients and observed an accumulation of CD11b(+) CD33(+) CD14(+) HLA-DR(low) MDSC in all stages of disease (I-IV), including early stage I patients. Disease progression and enhanced tumor burden did not result in a further increase in frequencies or change in phenotype of MDSC. By investigation of specific MDSC-associated cytokines in patients' sera, we found an accumulation of IL-8 in all stages of disease. T-cell proliferation assays revealed that MDSC critically contribute to suppressed antigen-specific T-cell reactivity and thus might explain the frequently observed transient effects of immunotherapeutic strategies in melanoma patients.
Huang, Xiang; Cui, Shiyun; Shu, Yongqian
2016-02-01
The objective of this study was to investigate the immunomodulatory effect of cisplatin (DDP) on the frequency, phenotype and function of myeloid-derived suppressor cells (MDSC) in a murine B16 melanoma model. C57BL/6 mice were inoculated with B16 cells to establish the murine melanoma model and randomly received treatment with different doses of DDP. The percentages and phenotype of MDSC after DDP treatment were detected by flow cytometry. The immunoinhibitory function of MDSC was analyzed by assessing the immune responses of cocultured effector cells through CFSE-labeling assay, detection of interferon-γ production and MTT cytotoxic assay, respectively. Tumor growth and mice survival were monitored to evaluate the antitumor effect of combined DDP and adoptive cytokine-induced killer (CIK) cell therapy. DDP treatment selectively decreased the percentages, modulated the surface molecules and attenuated the immunoinhibitory effects of MDSC in murine melanoma model. The combination of DDP treatment and CIK therapy exerted synergistic antitumor effect against B16 melanoma. DDP treatment selectively downregulated the frequency and immunoinhibitory function of MDSC in B16 melanoma model, indicating the potential mechanisms mediating its immunomodulatory effect.
Køllgaard, Tania; Ugurel-Becker, Selma; Idorn, Manja; Andersen, Mads Hald; Becker, Jürgen C; Straten, Per Thor
2015-01-01
Various subsets of immune regulatory cells are suggested to influence the outcome of therapeutic antigen-specific anti-tumor vaccinations. We performed an exploratory analysis of a possible correlation of pre-vaccination Th17 cells, MDSCs, and Tregs with both vaccination-induced T-cell responses as well as clinical outcome in metastatic melanoma patients vaccinated with survivin-derived peptides. Notably, we observed dysfunctional Th1 and cytotoxic T cells, i.e. down-regulation of the CD3ζ chain (p=0.001) and an impaired IFNγ-production (p=0.001) in patients compared to healthy donors, suggesting an altered activity of immune regulatory cells. Moreover, the frequencies of Th17 cells (p=0.03) and Tregs (p=0.02) were elevated as compared to healthy donors. IL-17-secreting CD4+ T cells displayed an impact on the immunological and clinical effects of vaccination: patients characterized by high frequencies of Th17 cells at pre-vaccination were more likely to develop survivin-specific T-cell reactivity post-vaccination (p=0.03). Furthermore, the frequencies of Th17 (p=0.09) and Th17/IFNγ+ (p=0.19) cells were associated with patient survival after vaccination. In summary, our explorative, hypothesis-generating study demonstrated that immune regulatory cells, in particular Th17 cells, play a relevant role for generation of the vaccine-induced anti-tumor immunity in cancer patients, hence warranting further investigation to test for validity as predictive biomarkers.
Dross, Sandra E; Munson, Paul V; Kim, Se Eun; Bratt, Debra L; Tunggal, Hillary C; Gervassi, Ana L; Fuller, Deborah H; Horton, Helen
2017-01-15
During chronic lentiviral infection, poor clinical outcomes correlate both with systemic inflammation and poor proliferative ability of HIV-specific T cells; however, the connection between the two is not clear. Myeloid-derived suppressor cells (MDSC), which expand during states of elevated circulating inflammatory cytokines, may link the systemic inflammation and poor T cell function characteristic of lentiviral infections. Although MDSC are partially characterized in HIV and SIV infection, questions remain regarding their persistence, activity, and clinical significance. We monitored MDSC frequency and function in SIV-infected rhesus macaques. Low MDSC frequency was observed prior to SIV infection. Post-SIV infection, MDSC were elevated in acute infection and persisted during 7 mo of combination antiretroviral drug therapy (cART). After cART interruption, we observed MDSC expansion of surprising magnitude, the majority being granulocytic MDSC. At all stages of infection, granulocytic MDSC suppressed CD4+ and CD8+ T cell proliferation in response to polyclonal or SIV-specific stimulation. In addition, MDSC frequency correlated significantly with circulating inflammatory cytokines. Acute and post-cART levels of viremia were similar; however, the levels of inflammatory cytokines and MDSC were more pronounced post-cART. Expanded MDSC during SIV infection, especially during the post-cART inflammatory cytokine surge, likely limit cellular responses to infection. As many HIV curative strategies require cART interruption to determine efficacy, our work suggests treatment interruption-induced MDSC may especially undermine the effectiveness of such strategies. MDSC depletion may enhance T cell responses to lentiviral infection and the effectiveness of curative approaches. Copyright © 2017 by The American Association of Immunologists, Inc.
Operational Interventions to Maintenance Error
NASA Technical Reports Server (NTRS)
Kanki, Barbara G.; Walter, Diane; Dulchinos, VIcki
1997-01-01
A significant proportion of aviation accidents and incidents are known to be tied to human error. However, research of flight operational errors has shown that so-called pilot error often involves a variety of human factors issues and not a simple lack of individual technical skills. In aircraft maintenance operations, there is similar concern that maintenance errors which may lead to incidents and accidents are related to a large variety of human factors issues. Although maintenance error data and research are limited, industry initiatives involving human factors training in maintenance have become increasingly accepted as one type of maintenance error intervention. Conscientious efforts have been made in re-inventing the "team" concept for maintenance operations and in tailoring programs to fit the needs of technical operations. Nevertheless, there remains a dual challenge: 1) to develop human factors interventions which are directly supported by reliable human error data, and 2) to integrate human factors concepts into the procedures and practices of everyday technical tasks. In this paper, we describe several varieties of human factors interventions and focus on two specific alternatives which target problems related to procedures and practices; namely, 1) structured on-the-job training and 2) procedure re-design. We hope to demonstrate that the key to leveraging the impact of these solutions comes from focused interventions; that is, interventions which are derived from a clear understanding of specific maintenance errors, their operational context and human factors components.
NASA Astrophysics Data System (ADS)
Michauk, Christine; Gauss, Jürgen
2007-07-01
An analytic scheme for the computation of scalar-relativistic corrections to nuclear forces is presented. Relativistic corrections are included via a perturbative treatment involving the mass-velocity and the one-electron and two-electron Darwin terms. Such a scheme requires mixed second derivatives of the nonrelativistic energy with respect to the relativistic perturbation and the nuclear coordinates and can be implemented using available second-derivative techniques. Our implementation for Hartree-Fock self-consistent field, second-order Møller-Plesset perturbation theory, as well as the coupled-cluster level is used to investigate the relativistic effects on the geometrical parameters and harmonic vibrational frequencies for a set of molecules containing light elements (HX, X =F, Cl, Br; H2X, X =O, S; HXY, X =O, S and Y =F, Cl, Br). The focus of our calculations is the basis-set dependence of the corresponding relativistic effects, additivity of electron correlation and relativistic effects, and the importance of core correlation on relativistic effects.
Michauk, Christine; Gauss, Jürgen
2007-07-28
An analytic scheme for the computation of scalar-relativistic corrections to nuclear forces is presented. Relativistic corrections are included via a perturbative treatment involving the mass-velocity and the one-electron and two-electron Darwin terms. Such a scheme requires mixed second derivatives of the nonrelativistic energy with respect to the relativistic perturbation and the nuclear coordinates and can be implemented using available second-derivative techniques. Our implementation for Hartree-Fock self-consistent field, second-order Moller-Plesset perturbation theory, as well as the coupled-cluster level is used to investigate the relativistic effects on the geometrical parameters and harmonic vibrational frequencies for a set of molecules containing light elements (HX, X=F, Cl, Br; H2X, X=O, S; HXY, X=O, S and Y=F, Cl, Br). The focus of our calculations is the basis-set dependence of the corresponding relativistic effects, additivity of electron correlation and relativistic effects, and the importance of core correlation on relativistic effects.
Sakurai, Tomonori; Narita, Eijiro; Shinohara, Naoki; Miyakoshi, Junji
2012-12-01
The increased use of induction heating (IH) cooktops in Japan and Europe has raised public concern about potential health effects of the magnetic fields generated by IH cooktops. In this study, we evaluated the effects of intermediate frequency (IF) magnetic fields generated by IH cooktops on gene expression profiles. Human fetus-derived astroglia cells were exposed to magnetic fields at 23 kHz and 100 µT(rms) for 2, 4, and 6 h and gene expression profiles in cells were assessed using cDNA microarray. There were no detectable effects of the IF magnetic fields at 23 kHz on the gene expression profile, whereas the heat treatment at 43 °C for 2 h, as a positive control, affected gene expression, including induction of heat shock proteins. Principal component analysis and hierarchical analysis showed that the gene profiles of IF-exposed groups were similar to the sham-exposed group and different from the heat treatment group. These results demonstrated that exposure of human fetus-derived astroglia cells to an IF magnetic field at 23 kHz and 100 µT(rms) for up to 6 h did not induce detectable changes in gene expression profile.
Phase velocity limit of high-frequency photon density waves
NASA Astrophysics Data System (ADS)
Haskell, Richard C.; Svaasand, Lars O.; Madsen, Sten; Rojas, Fabio E.; Feng, T.-C.; Tromberg, Bruce J.
1995-05-01
In frequency-domain photon migration (FDPM), two factors make high modulation frequencies desirable. First, with frequencies as high as a few GHz, the phase lag versus frequency plot has sufficient curvature to yield both the scattering and absorption coefficients of the tissue under examination. Second, because of increased attenuation, high frequency photon density waves probe smaller volumes, an asset in small volume in vivo or in vitro studies. This trend toward higher modulation frequencies has led us to re-examine the derivation of the standard diffusion equation (SDE) from the Boltzmann transport equation. We find that a second-order time-derivative term, ordinarily neglected in the derivation, can be significant above 1 GHz for some biological tissue. The revised diffusion equation, including the second-order time-derivative, is often termed the P1 equation. We compare the dispersion relation of the P1 equation with that of the SDE. The P1 phase velocity is slower than that predicted by the SDE; in fact, the SDE phase velocity is unbounded with increasing modulation frequency, while the P1 phase velocity approaches c/sqrt(3). This limiting velocity is attained only at modulation frequencies with periods shorter than the mean time between scatterings of a photon, a frequency regime that probes the medium beyond the applicability of diffusion theory. Finally we caution that values for optical properties deduced from FDPM data at high frequencies using the SDE can be in error by 30% or more.
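The dispersion comparison can be sketched numerically. The sketch below uses typical soft-tissue optical properties and a simplified P1 (telegraph-type) dispersion relation that keeps the second time derivative but drops absorption cross-terms, so it is illustrative rather than the paper's exact model.

```python
import numpy as np

# Numeric sketch: phase velocity of photon density waves under the standard
# diffusion equation (SDE) versus a simplified P1 (telegraph-type) relation.
# Optical properties are typical soft-tissue values; ILLUSTRATIVE only.

c = 2.2e10                         # speed of light in tissue, cm/s
mua, musp = 0.1, 10.0              # absorption / reduced scattering, 1/cm
D = 1.0 / (3.0 * (mua + musp))     # diffusion coefficient, cm

def vp_sde(f):
    w = 2 * np.pi * f
    k = np.sqrt((1j * w / c - mua) / D)          # SDE: D k^2 = i w/c - mua
    return w / k.real

def vp_p1(f):
    w = 2 * np.pi * f
    # P1 adds the second time-derivative term (3 w^2 / c^2) to the dispersion
    k = np.sqrt(3 * w**2 / c**2 + 1j * w / (c * D) - mua / D)
    return w / k.real

for f in (1e8, 1e9, 1e10, 1e12):
    assert vp_p1(f) < vp_sde(f)    # P1 phase velocity is always the slower one
print(vp_p1(1e14) / (c / np.sqrt(3)))  # approaches 1: bounded by c/sqrt(3)
```

At GHz frequencies the SDE velocity already exceeds the P1 value, and it grows without bound as frequency increases, whereas the P1 velocity saturates below c/sqrt(3), consistent with the abstract.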
NASA Astrophysics Data System (ADS)
Marra, Francesco; Morin, Efrat; Peleg, Nadav; Mei, Yiwen; Anagnostou, Emmanouil N.
2016-04-01
Intensity-duration-frequency (IDF) curves are used in flood risk management and hydrological design studies to relate the characteristics of a rainfall event to the probability of its occurrence. The usual approach relies on long records of raingauge data providing accurate estimates of the IDF curves for a specific location, but whose representativeness decreases with distance. Radar rainfall estimates have recently been tested over the Eastern Mediterranean area, characterized by steep climatological gradients, showing that radar IDF curves generally lay within the raingauge confidence interval and that radar is able to identify the climatology of extremes. Recent availability of relatively long records (>15 years) of high resolution satellite rainfall information allows to explore the spatial distribution of extreme rainfall with increased detail over wide areas, thus providing new perspectives for the study of precipitation regimes and promising both practical and theoretical implications. This study aims to (i) identify IDF curves obtained from radar rainfall estimates and (ii) identify and assess IDF curves obtained from two high resolution satellite retrieval algorithms (CMORPH and PERSIANN) over the Eastern Mediterranean region. To do so, we derive IDF curves fitting a GEV distribution to the annual maxima series from 23 years (1990-2013) of carefully corrected data from a C-Band radar located in Israel (covering Mediterranean to arid climates) as well as from 15 years (1998-2014) of gauge-adjusted high-resolution CMORPH and 10 years (2003-2013) of gauge-adjusted high-resolution PERSIANN data. We present the obtained IDF curves and we compare the curves obtained from the satellite algorithms to the ones obtained from the radar during overlapping periods; this analysis will draw conclusions on the reliability of the two satellite datasets for deriving rainfall frequency analysis over the region and provide IDF corrections. We compare then the curves obtained
Study of geopotential error models used in orbit determination error analysis
NASA Technical Reports Server (NTRS)
Yee, C.; Kelbel, D.; Lee, T.; Samii, M. V.; Mistretta, G. D.; Hart, R. C.
1991-01-01
The uncertainty in the geopotential model is currently one of the major error sources in the orbit determination of low-altitude Earth-orbiting spacecraft. The results of an investigation of different geopotential error models and modeling approaches currently used for operational orbit error analysis support at the Goddard Space Flight Center (GSFC) are presented, with emphasis placed on sequential orbit error analysis using a Kalman filtering algorithm. Several geopotential models, known as the Goddard Earth Models (GEMs), were developed and used at GSFC for orbit determination. The errors in the geopotential models arise from the truncation errors that result from the omission of higher order terms (omission errors) and the errors in the spherical harmonic coefficients themselves (commission errors). At GSFC, two error modeling approaches were operationally used to analyze the effects of geopotential uncertainties on the accuracy of spacecraft orbit determination - the lumped error modeling and uncorrelated error modeling. The lumped error modeling approach computes the orbit determination errors on the basis of either the calibrated standard deviations of a geopotential model's coefficients or the weighted difference between two independently derived geopotential models. The uncorrelated error modeling approach treats the errors in the individual spherical harmonic components as uncorrelated error sources and computes the aggregate effect using a combination of individual coefficient effects. This study assesses the reasonableness of the two error modeling approaches in terms of global error distribution characteristics and orbit error analysis results. Specifically, this study presents the global distribution of geopotential acceleration errors for several gravity error models and assesses the orbit determination errors resulting from these error models for three types of spacecraft - the Gamma Ray Observatory, the Ocean Topography Experiment, and the Cosmic
NASA Astrophysics Data System (ADS)
Takemura, Shunsuke; Furumura, Takashi
2013-04-01
We studied the scattering properties of high-frequency seismic waves due to the distribution of small-scale velocity fluctuations in the crust and upper mantle beneath Japan based on an analysis of three-component short-period seismograms and comparison with finite difference method (FDM) simulation of seismic wave propagation using various stochastic random velocity fluctuation models. Using a large number of dense High-Sensitivity Seismograph network waveform data of 310 shallow crustal earthquakes, we examined the P-wave energy partition of transverse component (PEPT), which is caused by scattering of the seismic wave in heterogeneous structure, as a function of frequency and hypocentral distances. At distances of less than D = 150 km, the PEPT increases with increasing frequency and is approximately constant in the range from D = 50 to 150 km. The PEPT was found to increase suddenly at distances of over D = 150 km and was larger in the high-frequency band (f > 4 Hz). Therefore, strong scattering of the P wave may occur around the propagation path (upper crust, lower crust and around the Moho discontinuity) of the P-wave first arrival phase at distances larger than D = 150 km. We also found a regional difference in the PEPT value, whereby the PEPT value is large at the backarc side of northeastern Japan compared with southwestern Japan and the forearc side of northeastern Japan. These PEPT results, which were derived from shallow earthquakes, indicate that the shallow structure of heterogeneity at the backarc side of northeastern Japan is stronger and more complex compared with other areas. These hypotheses, that is, the depth and regional change of small-scale velocity fluctuations, are examined by 3-D FDM simulation using various heterogeneous structure models. By comparing the observed feature of the PEPT with simulation results, we found that strong seismic wave scattering occurs in the lower crust due to relatively higher velocity and stronger heterogeneities
Single antenna phase errors for NAVSPASUR receivers
NASA Astrophysics Data System (ADS)
Andrew, M. D.; Wadiak, E. J.
1988-11-01
Interferometrics Inc. has investigated the phase errors on single antenna NAVSPASUR data. We find that the single antenna phase errors are well modeled as a function of signal strength only. The phase errors associated with data from the Kickapoo transmitter are larger than the errors from the low-power transmitters (i.e., Gila River and Jordan Lake). Further, the errors in the phase data associated with the Kickapoo transmitter show significant variability among data taken on different days. We have applied a quadratic polynomial fit to the single antenna phases to derive the Doppler shift and chirp, and we have estimated the formal errors associated with these quantities. These formal errors have been parameterized as a function of peak signal strength and number of data frames. We find that for a typical satellite observation the derived Doppler shift has a formal error of approx. 0.2 Hz and the derived chirp has a formal error of less than or approx. 1 Hz/sec. There is a clear systematic bias in the derived chirp for targets illuminated by the Kickapoo transmitter. Near-field effects probably account for the larger phase errors and the chirp bias of the Kickapoo transmitter.
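The quadratic-fit step described above can be sketched directly: for phase φ(t) = φ₀ + 2π(f_d·t + ½·c·t²), the linear coefficient gives the Doppler shift f_d and twice the quadratic coefficient (over 2π) gives the chirp c. The values below are synthetic, not NAVSPASUR data.

```python
import numpy as np

# Sketch: derive Doppler shift and chirp from phase samples by a quadratic
# polynomial fit. Phase data are SYNTHETIC and noiseless for illustration.

t = np.linspace(0.0, 1.0, 200)                  # seconds
f_doppler, chirp = 120.0, -0.8                  # Hz, Hz/s (synthetic)
phase = 2 * np.pi * (f_doppler * t + 0.5 * chirp * t**2) + 0.3

a2, a1, a0 = np.polyfit(t, phase, 2)            # phase ~ a2 t^2 + a1 t + a0
f_est = a1 / (2 * np.pi)                        # Doppler shift (Hz)
chirp_est = 2 * a2 / (2 * np.pi)                # chirp (Hz/s)
print(round(f_est, 3), round(chirp_est, 3))     # -> 120.0 -0.8
```

With noisy phases the same fit yields the formal errors the abstract parameterizes, via the covariance of the least-squares coefficients.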
Computing Instantaneous Frequency by normalizing Hilbert Transform
NASA Technical Reports Server (NTRS)
Huang, Norden E. (Inventor)
2005-01-01
This invention presents Normalized Amplitude Hilbert Transform (NAHT) and Normalized Hilbert Transform (NHT), both of which are new methods for computing Instantaneous Frequency. This method is designed specifically to circumvent the limitation set by the Bedrosian and Nuttall Theorems, and to provide a sharp local measure of error when the quadrature and the Hilbert Transform do not agree. Motivation for this method is that straightforward application of the Hilbert Transform followed by taking the derivative of the phase-angle as the Instantaneous Frequency (IF) leads to a common mistake made up to this date. In order to make the Hilbert Transform method work, the data has to obey certain restrictions.
Computing Instantaneous Frequency by normalizing Hilbert Transform
Huang, Norden E.
2005-05-31
This invention presents Normalized Amplitude Hilbert Transform (NAHT) and Normalized Hilbert Transform (NHT), both of which are new methods for computing Instantaneous Frequency. This method is designed specifically to circumvent the limitation set by the Bedrosian and Nuttall Theorems, and to provide a sharp local measure of error when the quadrature and the Hilbert Transform do not agree. Motivation for this method is that straightforward application of the Hilbert Transform followed by taking the derivative of the phase-angle as the Instantaneous Frequency (IF) leads to a common mistake made up to this date. In order to make the Hilbert Transform method work, the data has to obey certain restrictions.
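The "straightforward application" that the patent improves on can be sketched as follows: form the analytic signal via the Hilbert transform, then differentiate the unwrapped phase. This works here only because the test signal is a clean, narrow-band mono-component, which is exactly the restriction the Bedrosian and Nuttall theorems formalize; the sketch is not the patented NAHT/NHT procedure.

```python
import numpy as np

# Naive instantaneous frequency via the analytic signal (FFT-based Hilbert
# transform), shown as the baseline the patented normalization methods improve.

def analytic_signal(x):
    """Analytic signal via FFT (equivalent to the discrete Hilbert transform)."""
    n = x.size
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

fs = 1000.0                                    # sampling rate (Hz)
t = np.arange(0, 1.0, 1.0 / fs)
x = np.cos(2 * np.pi * 50.0 * t)               # clean 50 Hz mono-component

phase = np.unwrap(np.angle(analytic_signal(x)))
inst_freq = np.diff(phase) * fs / (2 * np.pi)  # Hz
print(round(float(np.median(inst_freq)), 1))   # -> 50.0
```

For amplitude-modulated or wide-band data this naive estimate breaks down, which is the failure mode the normalization in NAHT/NHT is designed to detect and avoid.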
NASA Technical Reports Server (NTRS)
Blucker, T. J.; Ferry, W. W.
1971-01-01
An error model is described for the Apollo 15 sun compass, a contingency navigational device. Field test data are presented along with significant results of the test. The errors reported include a random error resulting from tilt in leveling the sun compass, a random error because of observer sighting inaccuracies, a bias error because of mean tilt in compass leveling, a bias error in the sun compass itself, and a bias error because the device is leveled to the local terrain slope.
NASA Astrophysics Data System (ADS)
Brasington, J.; Hicks, M.; Wheaton, J. M.; Williams, R. D.; Vericat, D.
2013-12-01
Repeat surveys of channel morphology provide a means to quantify fluvial sediment storage and enable inferences about changes in long-term sediment supply, watershed delivery and bed level adjustment; information vital to support effective river and land management. Over shorter time-scales, direct differencing of fluvial terrain models may also offer a route to predict reach-averaged sediment transport rates and quantify the patterns of channel morphodynamics and the processes that force them. Recent and rapid advances in geomatics have facilitated these goals by enabling the acquisition of topographic data at spatial resolutions and precisions suitable for characterising river morphology at the scale of individual grains over multi-kilometre reaches. Despite improvements in topographic surveying, inverting the terms of the sediment budget to derive estimates of sediment transport and link these to morphodynamic processes is, nonetheless, often confounded by limited knowledge of either the sediment supply or efflux across a boundary of the control volume, or unobserved cut-and-fill taking place between surveys. This latter problem is particularly poorly constrained, as field logistics frequently preclude surveys at a temporal frequency sufficient to capture changes in sediment storage associated with each competent event, let alone changes during individual floods. In this paper, we attempt to quantify the principal sources of uncertainty in morphologically-derived bedload transport rates for the large, labile, gravel-bed braided Rees River which drains the Southern Alps of NZ. During the austral summer of 2009-10, a unique timeseries of 10 high quality DEMs was derived for a 3 x 0.7 km reach of the Rees, using a combination of mobile terrestrial laser scanning, aDcp soundings and aerial image analysis. Complementary measurements of the forcing flood discharges and estimates of event-based particle step lengths were also acquired during the field campaign
Unforced errors and error reduction in tennis
Brody, H
2006-01-01
Only at the highest level of tennis is the number of winners comparable to the number of unforced errors. As the average player loses many more points due to unforced errors than due to winners by an opponent, if the rate of unforced errors can be reduced, it should lead to an increase in points won. This article shows how players can improve their game by understanding and applying the laws of physics to reduce the number of unforced errors. PMID:16632568
Rubin, G; George, A; Chinn, D; Richardson, C
2003-01-01
Objective: To describe a classification of errors and to assess the feasibility and acceptability of a method for recording staff reported errors in general practice. Design: An iterative process in a pilot practice was used to develop a classification of errors. This was incorporated in an anonymous self-report form which was then used to collect information on errors during June 2002. The acceptability of the reporting process was assessed using a self-completion questionnaire. Setting: UK general practice. Participants: Ten general practices in the North East of England. Main outcome measures: Classification of errors, frequency of errors, error rates per 1000 appointments, acceptability of the process to participants. Results: 101 events were used to create an initial error classification. This contained six categories: prescriptions, communication, appointments, equipment, clinical care, and "other" errors. Subsequently, 940 errors were recorded in a single 2 week period from 10 practices, providing additional information. 42% (397/940) were related to prescriptions, although only 6% (22/397) of these were medication errors. Communication errors accounted for 30% (282/940) of errors and clinical errors 3% (24/940). The overall error rate was 75.6/1000 appointments (95% CI 71 to 80). The method of error reporting was found to be acceptable by 68% (36/53) of respondents with only 8% (4/53) finding the process threatening. Conclusion: We have developed a classification of errors and described a practical and acceptable method for reporting them that can be used as part of the process of risk management. Errors are common and, although all have the potential to lead to an adverse event, most are administrative. PMID:14645760
Errors, error detection, error correction and hippocampal-region damage: data and theories.
MacKay, Donald G; Johnson, Laura W
2013-11-01
This review and perspective article outlines 15 observational constraints on theories of errors, error detection, and error correction, and their relation to hippocampal-region (HR) damage. The core observations come from 10 studies with H.M., an amnesic with cerebellar and HR damage but virtually no neocortical damage. Three studies examined the detection of errors planted in visual scenes (e.g., a bird flying in a fish bowl in a school classroom) and sentences (e.g., I helped themselves to the birthday cake). In all three experiments, H.M. detected reliably fewer errors than carefully matched memory-normal controls. Other studies examined the detection and correction of self-produced errors, with controls for comprehension of the instructions, impaired visual acuity, temporal factors, motoric slowing, forgetting, excessive memory load, lack of motivation, and deficits in visual scanning or attention. In these studies, H.M. corrected reliably fewer errors than memory-normal and cerebellar controls, and his uncorrected errors in speech, object naming, and reading aloud exhibited two consistent features: omission and anomaly. For example, in sentence production tasks, H.M. omitted one or more words in uncorrected encoding errors that rendered his sentences anomalous (incoherent, incomplete, or ungrammatical) reliably more often than controls. Besides explaining these core findings, the theoretical principles discussed here explain H.M.'s retrograde amnesia for once familiar episodic and semantic information; his anterograde amnesia for novel information; his deficits in visual cognition, sentence comprehension, sentence production, sentence reading, and object naming; and effects of aging on his ability to read isolated low frequency words aloud. These theoretical principles also explain a wide range of other data on error detection and correction and generate new predictions for future test.
Error field penetration and locking to the backward propagating wave
Finn, John M.; Cole, Andrew J.; Brennan, Dylan P.
2015-12-30
In this letter we investigate error field penetration, or locking, behavior in plasmas having stable tearing modes with finite real frequencies ω_r in the plasma frame. In particular, we address the fact that locking can drive a significant equilibrium flow. We show that this occurs at a velocity slightly above v = ω_r/k, corresponding to the interaction with a backward propagating tearing mode in the plasma frame. Results are discussed for a few typical tearing mode regimes, including a new derivation showing that the existence of real frequencies occurs for viscoresistive tearing modes, in an analysis including the effects of pressure gradient, curvature and parallel dynamics. The general result of locking to a finite velocity flow is applicable to a wide range of tearing mode regimes, indeed any regime where real frequencies occur.
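The locking velocity quoted in this abstract follows from a simple Doppler argument. A sketch, writing the abstract's w_r as ω_r (the sign convention for the backward branch is our assumption, not taken from the paper):

```latex
% Plasma-frame mode frequency \omega_r, wavenumber k, equilibrium flow v.
% Lab-frame frequency of the backward propagating branch:
\omega_{\mathrm{lab}} = k v - \omega_r
% Locking to the static error field requires \omega_{\mathrm{lab}} \approx 0,
% hence the flow velocity at which penetration occurs:
v \approx \frac{\omega_r}{k}
```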
A concatenated coding scheme for error control
NASA Technical Reports Server (NTRS)
Kasami, T.; Fujiwara, T.; Lin, S.
1986-01-01
In this paper, a concatenated coding scheme for error control in data communications is presented and analyzed. In this scheme, the inner code is used for both error correction and detection; however, the outer code is used only for error detection. A retransmission is requested if either the inner code decoder fails to make a successful decoding or the outer code decoder detects the presence of errors after the inner code decoding. Probability of undetected error (or decoding error) of the proposed scheme is derived. An efficient method for computing this probability is presented. Throughput efficiency of the proposed error control scheme incorporated with a selective-repeat ARQ retransmission strategy is also analyzed. Three specific examples are presented. One of the examples is proposed for error control in the NASA Telecommand System.
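The accept/retransmit rule described in this abstract can be sketched in a few lines of Python (an illustrative toy of the decision logic only, not the authors' analysis; the function name is ours):

```python
# Concatenated scheme: inner code corrects and detects; outer code only
# detects. A retransmission is requested if either stage flags trouble.
def receive(inner_ok: bool, outer_detects_errors: bool) -> str:
    """Return 'accept' or 'retransmit' per the concatenated-code rule."""
    if not inner_ok:                # inner decoder fails to decode -> ARQ
        return "retransmit"
    if outer_detects_errors:        # outer code flags residual errors -> ARQ
        return "retransmit"
    return "accept"                 # word delivered to the user

assert receive(inner_ok=False, outer_detects_errors=False) == "retransmit"
assert receive(inner_ok=True,  outer_detects_errors=True)  == "retransmit"
assert receive(inner_ok=True,  outer_detects_errors=False) == "accept"
```

An undetected error, the quantity whose probability the paper derives, corresponds to the "accept" branch being taken while the delivered word is still wrong.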
Identification errors in pathology and laboratory medicine.
Valenstein, Paul N; Sirota, Ronald L
2004-12-01
Identification errors involve misidentification of a patient or a specimen. Either has the potential to cause patients harm. Identification errors can occur during any part of the test cycle; however, most occur in the preanalytic phase. Patient identification errors in transfusion medicine occur in 0.05% of specimens; for general laboratory specimens the rate is much higher, around 1%. Anatomic pathology, which involves multiple specimen transfers and hand-offs, may have the highest identification error rate. Certain unavoidable cognitive failures lead to identification errors. Technology, ranging from bar-coded specimen labels to radio frequency identification tags, can be incorporated into protective systems that have the potential to detect and correct human error and reduce the frequency with which patients and specimens are misidentified.
Remediating Common Math Errors.
ERIC Educational Resources Information Center
Wagner, Rudolph F.
1981-01-01
Explanations and remediation suggestions for five types of mathematics errors due either to perceptual or cognitive difficulties are given. Error types include directionality problems, mirror writing, visually misperceived signs, diagnosed directionality problems, and mixed process errors. (CL)
Feedback Error Learning with Insufficient Excitation
NASA Astrophysics Data System (ADS)
Alali, Basel; Hirata, Kentaro; Sugimoto, Kenji
This letter studies the tracking error in a Multi-input Multi-output Feedback Error Learning (MIMO-FEL) system having insufficient excitation. It is shown that the error converges to zero exponentially even if the reference signal lacks the persistent excitation (PE) condition. Furthermore, by making full use of this fast convergence, we estimate the plant parameters while in operation based on the frequency response. Simulation results show the effectiveness of the proposed method compared to a conventional approach.
Qiu, Feiyuan; He, Xueling; Yao, Xiaolin; Li, Kai; Kuang, Wei; Wu, Wenchao; Li, Liang
2012-06-01
Mesenchymal stem cells (MSCs) are multipotent stem cells that differentiate into a variety of cell types. Low frequency pulsed electromagnetic field (LFPEMF) therapy can cause biochemical changes at the cellular level that accelerate tissue repair in mammals. We therefore tested the hypothesis that LFPEMFs can promote chondrogenic differentiation of rat bone marrow-derived mesenchymal stem cells (rBMSCs) in vitro. The rBMSCs were isolated by the adherence method, and third-generation rBMSCs were randomly divided into LFPEMF groups, a chondrocyte-induced group and a control group. The LFPEMF groups, cultured in complete medium, were exposed to 50 Hz, 1 mT PEMFs for 30 min every day for 10, 15 and 20 d, respectively. The chondrocyte-induced group was treated with chondrogenic media, while the control group was cultured with complete medium only. The mRNA expression of type II collagen (Col II) and aggrecan was determined by real-time fluorescent quantitative PCR. The protein expression of Col II and aggrecan was detected with toluidine blue staining or immunocytochemical staining, respectively. The results showed that the mRNA and protein expression levels of Col II and aggrecan were significantly higher in the LFPEMF and chondrocyte-induced groups than in the control group. This suggests that LFPEMFs can promote chondrogenic differentiation of rBMSCs in vitro.
NASA Astrophysics Data System (ADS)
Zhang, Xiaotong; Van de Moortele, Pierre-Francois; Liu, Jiaen; Schmitter, Sebastian; He, Bin
2014-12-01
The Electrical Properties Tomography (EPT) technique utilizes measurable radio frequency (RF) coil induced magnetic fields (B1 fields) in a Magnetic Resonance Imaging (MRI) system to quantitatively reconstruct the local electrical properties (EP) of biological tissues. Information derived from the same data set, e.g., the complex B1 distribution used for electric field calculation, can be used to estimate, on a subject-specific basis, the local Specific Absorption Rate (SAR). SAR plays a significant role in RF pulse design for high-field MRI applications, where maximum local tissue heating remains one of the most constraining limits. The purpose of the present work is to investigate the feasibility of such B1-based local SAR estimation, expanding on previously proposed EPT approaches. To this end, B1 calibration was obtained in a gelatin phantom at 7 T with a multi-channel transmit coil, under a particular multi-channel B1-shim setting (B1-shim I). Using this unique set of B1 calibration data, the local SAR distribution was subsequently predicted for B1-shim I, as well as for another B1-shim setting (B1-shim II), considering a specific set of parameters for a heating MRI protocol consisting of RF pulses played at a 1% duty cycle. Local SAR results, which could not be directly measured with MRI, were subsequently converted into temperature changes, which in turn were validated against temperature changes measured by MRI thermometry based on the proton chemical shift.
Motion measurement errors and autofocus in bistatic SAR.
Rigling, Brian D; Moses, Randolph L
2006-04-01
This paper discusses the effect of motion measurement errors (MMEs) on measured bistatic synthetic aperture radar (SAR) phase history data that has been motion compensated to the scene origin. We characterize the effect of low-frequency MMEs on bistatic SAR images, and, based on this characterization, we derive limits on the allowable MMEs to be used as system specifications. Finally, we demonstrate that proper orientation of a bistatic SAR image during the image formation process allows application of monostatic SAR autofocus algorithms in postprocessing to mitigate image defocus.
A Review of Errors in the Journal Abstract
ERIC Educational Resources Information Center
Lee, Eunpyo; Kim, Eun-Kyung
2013-01-01
(percentage) of abstracts that involved errors, the most erroneous part of the abstract, and the types and frequency of errors. Also, the purpose was expanded to compare the results with those of the previous…
Error analysis of quartz crystal resonator applications
Lucklum, R.; Behling, C.; Hauptmann, P.; Cernosek, R.W.; Martin, S.J.
1996-12-31
Quartz crystal resonators in chemical sensing applications are usually configured as the frequency determining element of an electrical oscillator. By contrast, the shear modulus determination of a polymer coating needs a complete impedance analysis. The first part of this contribution reports the error made if common approximations are used to relate the frequency shift to the sorbed mass. In the second part the authors discuss different error sources in the procedure to determine shear parameters.
Errors associated with outpatient computerized prescribing systems
Rothschild, Jeffrey M; Salzberg, Claudia; Keohane, Carol A; Zigmont, Katherine; Devita, Jim; Gandhi, Tejal K; Dalal, Anuj K; Bates, David W; Poon, Eric G
2011-01-01
Objective To report the frequency, types, and causes of errors associated with outpatient computer-generated prescriptions, and to develop a framework to classify these errors to determine which strategies have greatest potential for preventing them. Materials and methods This is a retrospective cohort study of 3850 computer-generated prescriptions received by a commercial outpatient pharmacy chain across three states over 4 weeks in 2008. A clinician panel reviewed the prescriptions using a previously described method to identify and classify medication errors. Primary outcomes were the incidence of medication errors; potential adverse drug events, defined as errors with potential for harm; and rate of prescribing errors by error type and by prescribing system. Results Of 3850 prescriptions, 452 (11.7%) contained 466 total errors, of which 163 (35.0%) were considered potential adverse drug events. Error rates varied by computerized prescribing system, from 5.1% to 37.5%. The most common error was omitted information (60.7% of all errors). Discussion About one in 10 computer-generated prescriptions included at least one error, of which a third had potential for harm. This is consistent with the literature on manual handwritten prescription error rates. The number, type, and severity of errors varied by computerized prescribing system, suggesting that some systems may be better at preventing errors than others. Conclusions Implementing a computerized prescribing system without comprehensive functionality and processes in place to ensure meaningful system use does not decrease medication errors. The authors offer targeted recommendations on improving computerized prescribing systems to prevent errors. PMID:21715428
Chua, S S; Tea, M H; Rahman, M H A
2009-04-01
Drug administration errors were the second most frequent type of medication error, after prescribing errors, but the latter were often intercepted; hence, administration errors were more likely to reach the patients. Therefore, this study was conducted to determine the frequency and types of drug administration errors in a Malaysian hospital ward. This is a prospective study that involved direct, undisguised observations of drug administrations in a hospital ward. A researcher was stationed in the ward under study for 15 days to observe all drug administrations, which were recorded in a data collection form and then compared with the drugs prescribed for the patient. A total of 1118 opportunities for errors were observed and 127 administrations had errors. This gave an error rate of 11.4% [95% confidence interval (CI) 9.5-13.3]. If incorrect time errors were excluded, the error rate was reduced to 8.7% (95% CI 7.1-10.4). The most common types of drug administration errors were incorrect time (25.2%), followed by incorrect technique of administration (16.3%) and unauthorized drug errors (14.1%). In terms of clinical significance, 10.4% of the administration errors were considered potentially life-threatening. Intravenous routes were more likely to be associated with an administration error than oral routes (21.3% vs. 7.9%, P < 0.001). The study indicates that the frequency of drug administration errors in developing countries such as Malaysia is similar to that in developed countries. Incorrect time errors were also the most common type of drug administration error. A non-punitive system of reporting medication errors should be established to encourage more information to be documented so that a risk management protocol can be developed and implemented.
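The error rate and confidence interval reported in this abstract can be reproduced from the raw counts. A sketch using a normal-approximation interval (the abstract's upper bound of 13.3 differs in the last digit, presumably due to a slightly different interval method or rounding):

```python
import math

def rate_with_ci(errors, opportunities, z=1.96):
    """Point estimate and normal-approximation 95% CI for an error rate."""
    p = errors / opportunities
    se = math.sqrt(p * (1 - p) / opportunities)
    return p, p - z * se, p + z * se

# Figures reported in the abstract: 127 erroneous administrations
# out of 1118 observed opportunities for error.
p, lo, hi = rate_with_ci(127, 1118)
print(f"{100*p:.1f}% (95% CI {100*lo:.1f}-{100*hi:.1f})")  # → 11.4% (95% CI 9.5-13.2)
```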
Impact of Measurement Error on Synchrophasor Applications
Liu, Yilu; Gracia, Jose R.; Ewing, Paul D.; Zhao, Jiecheng; Tan, Jin; Wu, Ling; Zhan, Lingwei
2015-07-01
Phasor measurement units (PMUs), a type of synchrophasor, are powerful diagnostic tools that can help avert catastrophic failures in the power grid. Because of this, PMU measurement errors are particularly worrisome. This report examines the internal and external factors contributing to PMU phase angle and frequency measurement errors and gives a reasonable explanation for them. It also analyzes the impact of those measurement errors on several synchrophasor applications: event location detection, oscillation detection, islanding detection, and dynamic line rating. The primary finding is that dynamic line rating is more likely to be influenced by measurement error. Other findings include the possibility of reporting nonoscillatory activity as an oscillation as the result of error, failing to detect oscillations submerged by error, and the unlikely impact of error on event location and islanding detection.
Characteristics and costs of surgical scheduling errors.
Wu, Rebecca L; Aufses, Arthur H
2012-10-01
Errors that increase the risk of wrong-side/-site procedures not only occur the day of surgery but also are often introduced much earlier during the scheduling process. The frequency of these booking errors and their effects are unclear. All surgical scheduling errors reported in the institution's medical event reporting system from January 1, 2011, to July 31, 2011, were analyzed. Focus groups with operating room nurses were held to discuss delays caused by scheduling errors. Of 17,606 surgeries, there were 151 (.86%) booking errors. The most common errors were wrong side (55, 36%), incomplete (38, 25%), and wrong approach (25, 17%). Focus group participants said incomplete and wrong-approach bookings resulted in the longest delays, averaging 20 minutes and costing at least $320. Although infrequent, scheduling errors disrupt operating room team dynamics, causing delays and bearing substantial costs. Further research is necessary to develop tools for more accurate scheduling. Copyright © 2012 Elsevier Inc. All rights reserved.
Evaluating the impact of genotype errors on rare variant tests of association
Cook, Kaitlyn; Benitez, Alejandra; Fu, Casey; Tintle, Nathan
2014-01-01
The new class of rare variant tests has usually been evaluated assuming perfect genotype information. In reality, rare variant genotypes may be incorrect, and so rare variant tests should be robust to imperfect data. Errors and uncertainty in SNP genotyping are already known to dramatically impact statistical power for single marker tests on common variants and, in some cases, inflate the type I error rate. Recent results show that uncertainty in genotype calls derived from sequencing reads are dependent on several factors, including read depth, calling algorithm, number of alleles present in the sample, and the frequency at which an allele segregates in the population. We have recently proposed a general framework for the evaluation and investigation of rare variant tests of association, classifying most rare variant tests into one of two broad categories (length or joint tests). We use this framework to relate factors affecting genotype uncertainty to the power and type I error rate of rare variant tests. We find that non-differential genotype errors (an error process that occurs independent of phenotype) decrease power, with larger decreases for extremely rare variants, and for the common homozygote to heterozygote error. Differential genotype errors (an error process that is associated with phenotype status) lead to inflated type I error rates, which are more likely to occur at sites with more common homozygote to heterozygote errors than vice versa. Finally, our work suggests that certain rare variant tests and study designs may be more robust to the inclusion of genotype errors. Further work is needed to directly integrate genotype calling algorithm decisions, study costs and test statistic choices to provide comprehensive design and analysis advice which appropriately accounts for the impact of genotype errors. PMID:24744770
NASA Astrophysics Data System (ADS)
Astin, Ivan
This survey considers those studies conducted into estimating errors in satellite-derived large-scale space-time means (of the order of 250 km by 250 km by a month) for rainfall, cloud cover, sea surface processes and the Earth's radiation budget, resulting from their incomplete coverage of the space-time volume over which the mean is evaluated. Many of these studies have focused on estimating the errors in space-time means post satellite launch and compare mean data derived from such satellites with that from an independent data set. Pre-launch studies tend to involve computer simulations of a satellite overflying and sampling from an existing data set, and hence the two approaches give values for sampling errors for specific cases. However, more generic sampling papers exist that allow the exact evaluation of sampling errors for any instrument or combination of instruments if their sampling characteristics and the auto-correlation of the parameter field are known. These generic and simulation techniques have been used together on the same data sets, are found to give very similar values for the sampling error, and are presented here. Also considered are studies in which data from several satellites, or satellite and ground based measurements, are combined to improve estimates of the above means. This improvement is brought about not only by increased spatial and temporal coverage but also by a reduction in retrieval error.
GP-B error modeling and analysis
NASA Technical Reports Server (NTRS)
1984-01-01
The analysis and modeling for the Gravity Probe B (GP-B) experiment is reported. The finite-wordlength induced errors in the Kalman filtering computation were refined. Errors in the crude result were corrected, improved derivation steps are taken, and better justifications are given. The errors associated with the suppression of 1/f noise were analyzed by rolling the spacecraft and then performing a derolling operation by computation.
NASA Astrophysics Data System (ADS)
Orem, C. A.; Pelletier, J. D.
2012-12-01
Flood-envelope curves, i.e. plots of measured flood discharges versus contributing area for many drainage basins in a given hydroclimatic region, are useful for constraining the upper limit of possible discharges in that region. Their usefulness, however, is limited by the lack of recurrence interval information. In this study, we show that frequency-magnitude-area (FMA) curves can be constructed for precipitation and flood discharges using Stage III Next-Generation Radar (NEXRAD) precipitation estimates and flow-routing algorithms. These FMA curves constrain extreme flood discharges in drainage basins within a region and also provide recurrence interval information. The methods in this study follow the flood-envelope curve approach in that drainage basins of similar size are grouped and data aggregated into one population. We improve on the flood-envelope curve approach by assigning a recurrence interval and errors to flood magnitudes based on a large population of NEXRAD observations taken over time and space. We demonstrate the application of these methods by quantifying the FMA curves for the Upper and Lower Colorado River Basins. Results show that areally-averaged precipitation rates are power-law functions of drainage basin area for a wide range of recurrence intervals. Regression analyses give an average exponent of approximately 0.77 ± 0.04. FMA curves of flood discharges are not power-law functions of area, but instead exhibit the characteristic concave-down shape of published flood-envelope curves in log-log space. The concave-down shape is due to both hydrodynamic and geomorphic dispersion, but not limitations on the increase in precipitation with increasing area, as evidenced by the power-law relationship between area and precipitation rate. Flood discharges calculated by our method are comparable to, but slightly higher than, those reported in the literature for our study regions, suggesting that previously published flood-envelope curves for these
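The power-law relationship this abstract reports (exponent approximately 0.77 ± 0.04) is the slope of a straight-line fit in log-log space. A minimal sketch on synthetic data; the prefactor, units and area range are invented for illustration, and the negative sign (precipitation rate decreasing with area) is our reading of the abstract:

```python
import numpy as np

# Synthetic FMA-style data: areally averaged precipitation rate that
# decays as a power law of drainage area, P = c * A**(-b).
b_true = 0.77
area = np.logspace(1, 5, 50)            # hypothetical basin areas, km^2
precip = 120.0 * area ** (-b_true)      # hypothetical rates, mm/h

# A degree-1 fit in log-log space recovers the power-law exponent.
slope, intercept = np.polyfit(np.log10(area), np.log10(precip), 1)
print(f"{-slope:.2f}")  # → 0.77
```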
Radley, David C; Wasserman, Melanie R; Olsho, Lauren Ew; Shoemaker, Sarah J; Spranca, Mark D; Bradshaw, Bethany
2013-05-01
Medication errors in hospitals are common, expensive, and sometimes harmful to patients. This study's objective was to derive a nationally representative estimate of medication error reduction in hospitals attributable to electronic prescribing through computerized provider order entry (CPOE) systems. We conducted a systematic literature review and applied random-effects meta-analytic techniques to derive a summary estimate of the effect of CPOE on medication errors. This pooled estimate was combined with data from the 2006 American Society of Health-System Pharmacists Annual Survey, the 2007 American Hospital Association Annual Survey, and the latter's 2008 Electronic Health Record Adoption Database supplement to estimate the percentage and absolute reduction in medication errors attributable to CPOE. Processing a prescription drug order through a CPOE system decreases the likelihood of error on that order by 48% (95% CI 41% to 55%). Given this effect size, and the degree of CPOE adoption and use in hospitals in 2008, we estimate a 12.5% reduction in medication errors, or ∼17.4 million medication errors averted in the USA in 1 year. Our findings suggest that CPOE can substantially reduce the frequency of medication errors in inpatient acute-care settings; however, it is unclear whether this translates into reduced harm for patients. Despite CPOE systems' effectiveness at preventing medication errors, adoption and use in US hospitals remain modest. Current policies to increase CPOE adoption and use will likely prevent millions of additional medication errors each year. Further research is needed to better characterize links to patient harm.
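The headline figures in this abstract can be related by a back-of-envelope consistency check (a sketch; the effective-exposure share derived below is our inference, not a number reported by the study):

```python
# If a fraction f of all inpatient orders is processed through CPOE,
# and CPOE cuts the error likelihood on those orders by 48%, the
# expected overall reduction is f * 0.48. Inverting with the reported
# national reduction of 12.5% gives the implied effective exposure.
per_order_reduction = 0.48      # pooled CPOE effect per processed order
overall_reduction = 0.125       # national reduction reported

f = overall_reduction / per_order_reduction
print(round(f, 2))  # → 0.26
```

That is, the 2008 adoption-and-use data imply roughly a quarter of orders effectively benefited from CPOE, consistent with the abstract's remark that adoption and use remained modest.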
Diagnostic errors in emergency departments.
Tudela, Pere; Carreres, Anna; Ballester, Mònica
2017-08-22
Diagnostic errors have to be recognised as a possible adverse event inherent to clinical activity and incorporated as another quality indicator. Different sources of information report their frequency, although it may still be underestimated. Contrary to what one might expect, in most cases they do not occur in infrequent diseases. Causes can be complex and multifactorial, involving individual cognitive aspects as well as the health system. These errors can have an important clinical and socioeconomic impact. It is necessary to learn from diagnostic errors in order to develop an accurate and reliable system with a high standard of quality. Copyright © 2017 Elsevier España, S.L.U. All rights reserved.
Santos, S; Oliveira, A; Pinho, C; Casal, S; Lopes, C
2014-05-01
Evidence on the association between fatty acids and adiponectin and leptin concentrations is scarce and inconsistent, which may in part be due to limitations of dietary reporting methods. We aimed to estimate the association of fatty acids, derived from a food frequency questionnaire (FFQ) and measured in the erythrocyte membrane, with adiponectin and leptin concentrations. We studied 330 non-institutionalized inhabitants of Porto (52.4% women; age range: 26-64 years) evaluated in 2010-2011, as part of the EPIPorto cohort study. Fatty acids were derived from a validated semiquantitative FFQ and measured in the erythrocyte membrane by gas chromatography. Serum concentrations of adiponectin and leptin were determined through radioimmunoassay. Regression coefficients (β) and 95% confidence intervals (95% CI) were obtained from linear regression models, after controlling for gender, age, education, leisure time physical activity and total body fat percentage (obtained from dual energy X-ray absorptiometry). Fatty acids measured by FFQ showed no significant associations with both adipokines. Lauric and linoleic acids, measured in the erythrocyte membrane, were significantly and positively associated with adiponectin (β=0.292, 95% CI: 0.168-0.416; β=0.150, 95% CI: 0.020-0.280) and leptin (β=0.071, 95% CI: 0.003-0.138; β=0.071, 95% CI: 0.002-0.140), whereas total n-3, eicosapentaenoic and docosahexaenoic acids were significantly but negatively associated with adiponectin (β=-0.289, 95% CI: -0.420 to -0.159; β=-0.174, 95% CI -0.307 to -0.040; β=-0.253, 95% CI -0.383 to -0.124) and leptin (β=-0.151, 95% CI: -0.220 to -0.083; β=-0.080, 95% CI: -0.151 to -0.009; β=-0.146, 95% CI: -0.214 to -0.078). Positive significant associations of palmitic and trans-fatty acids with adiponectin were also observed. A positive association of lauric and linoleic acids and a negative association of total n-3 fatty acids with both adipokines were observed only with fatty acids
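Regression coefficients with 95% confidence intervals of the form reported here (β, 95% CI) come from ordinary least squares. A minimal single-predictor sketch with simulated data (the study itself adjusted for several covariates; names and values below are illustrative):

```python
import numpy as np

# Illustrative OLS fit: slope and 95% CI for one standardized predictor.
rng = np.random.default_rng(4)
n = 330                                   # same sample size as the study
x = rng.normal(0, 1, n)                   # e.g. a standardized fatty acid level
y = 0.15 * x + rng.normal(0, 0.5, n)      # e.g. log adipokine concentration

X = np.column_stack([np.ones(n), x])      # design matrix with intercept
beta, res_ss = np.linalg.lstsq(X, y, rcond=None)[:2]
dof = n - X.shape[1]
sigma2 = res_ss[0] / dof                  # residual variance estimate
se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
ci = (beta[1] - 1.96 * se, beta[1] + 1.96 * se)
```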
Soft-decision decoding techniques for linear block codes and their error performance analysis
NASA Technical Reports Server (NTRS)
Lin, Shu
1996-01-01
The first paper presents a new minimum-weight trellis-based soft-decision iterative decoding algorithm for binary linear block codes. The second paper derives an upper bound on the probability of block error for multilevel concatenated codes (MLCC). The bound evaluates the difference in performance for different decompositions of some codes. The third paper investigates the bit error probability for maximum likelihood decoding of binary linear codes. The fourth and final paper included in this report concerns itself with the construction of multilevel concatenated block modulation codes using a multilevel concatenation scheme for the frequency non-selective Rayleigh fading channel.
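Bounds of the kind the second paper derives are typically built from a code's weight spectrum. As a generic illustration, here is the classic union bound for soft-decision ML decoding on the AWGN channel (a standard textbook bound, not the report's tighter MLCC bound):

```python
import math

def q_func(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def union_bound_ml(weight_enum, n, k, ebno_db):
    """Union upper bound on block error probability for soft-decision ML
    decoding of an (n, k) binary linear code on the AWGN channel.
    weight_enum maps nonzero codeword weight w -> multiplicity A_w."""
    rate = k / n
    ebno = 10 ** (ebno_db / 10)
    return sum(a_w * q_func(math.sqrt(2 * w * rate * ebno))
               for w, a_w in weight_enum.items())

# (7,4) Hamming code: 7 codewords of weight 3, 7 of weight 4, 1 of weight 7.
hamming_spectrum = {3: 7, 4: 7, 7: 1}
p_block = union_bound_ml(hamming_spectrum, 7, 4, ebno_db=6.0)
```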
AUTOMATIC FREQUENCY CONTROL SYSTEM
Hansen, C.F.; Salisbury, J.D.
1961-01-10
A control is described for automatically matching the frequency of a resonant cavity to that of a driving oscillator. The driving oscillator is disconnected from the cavity and a secondary oscillator is actuated in which the cavity is the frequency determining element. A low frequency is mixed with the output of the driving oscillator and the resultant lower and upper sidebands are separately derived. The frequencies of the sidebands are compared with the secondary oscillator frequency, deriving a servo control signal to adjust a tuning element in the cavity and match the cavity frequency to that of the driving oscillator. The driving oscillator may then be connected to the cavity.
Adjoint Error Estimation for Linear Advection
Connors, J M; Banks, J W; Hittinger, J A; Woodward, C S
2011-03-30
An a posteriori error formula is described when a statistical measurement of the solution to a hyperbolic conservation law in 1D is estimated by finite volume approximations. This is accomplished using adjoint error estimation. In contrast to previously studied methods, the adjoint problem is divorced from the finite volume method used to approximate the forward solution variables. An exact error formula and computable error estimate are derived based on an abstractly defined approximation of the adjoint solution. This framework allows the error to be computed to an arbitrary accuracy given a sufficiently well resolved approximation of the adjoint solution. The accuracy of the computable error estimate provably satisfies an a priori error bound for sufficiently smooth solutions of the forward and adjoint problems. The theory does not currently account for discontinuities. Computational examples are provided that show support of the theory for smooth solutions. The application to problems with discontinuities is also investigated computationally.
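The adjoint machinery described above can be illustrated in its simplest linear-algebra analogue, where the adjoint-weighted residual recovers the functional error exactly (a toy setting chosen for clarity, not the paper's finite-volume discretization of a conservation law):

```python
import numpy as np

# For A u = f and a functional J(u) = g.u, solving the adjoint system
# A^T phi = g gives the exact error identity J(u) - J(u_h) = phi . r,
# where r = f - A u_h is the residual of the approximate solution u_h.
rng = np.random.default_rng(0)
n = 50
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # well-conditioned system
f = rng.standard_normal(n)
g = rng.standard_normal(n)                          # functional weights

u_exact = np.linalg.solve(A, f)
u_h = u_exact + 1e-3 * rng.standard_normal(n)       # perturbed "numerical" solution

residual = f - A @ u_h
phi = np.linalg.solve(A.T, g)                       # adjoint solution

error_true = g @ u_exact - g @ u_h
error_estimate = phi @ residual                     # matches error_true exactly
```

The identity holds because phi.r = (A^T phi).(u - u_h) = g.(u - u_h); in the PDE setting the analogous estimate is only as accurate as the adjoint approximation, which is the paper's point.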
Cayot-Constantin, S; Constantin, J-M; Perez, J-P; Chevallier, P; Clapson, P; Bazin, J-E
2010-03-01
To assess the usefulness and feasibility of software that supervises the continuous infusion rates of drugs administered by pump in the ICU. Follow-up of practices and a survey in three intensive care units using the Guardrails(TM) software (Asena GH, Alaris), which enforces limits on pump infusion-rate settings. First, evaluation and quantification of the number of infusion-rate adjustments reaching the maximal upper limit (considered infusion-rate errors stopped by the software). Second, assessment of staff acceptance of such a system through a blinded questionnaire and quantification of the number of pump programs set up with the software. The number of administrations started with the study pumps in the three units (11 beds) during the study period was 63,069, of which 42,694 (67.7%) used the software. The number of potential continuous infusion-rate errors was 11, corresponding to an infusion-rate error rate of 26 per 100,000. KCl and insulin were involved in two and five cases, respectively. Eighty percent of the nurses considered infusion-rate errors rare or exceptional but potentially harmful, and felt that software supervising continuous infusion rates could improve safety. The risk of infusion-rate errors with drugs administered continuously by pump in the ICU is rare but potentially harmful. Software that controls continuous infusion rates could be useful. Copyright (c) 2010 Elsevier Masson SAS. All rights reserved.
Performance Metrics, Error Modeling, and Uncertainty Quantification
NASA Technical Reports Server (NTRS)
Tian, Yudong; Nearing, Grey S.; Peters-Lidard, Christa D.; Harrison, Kenneth W.; Tang, Ling
2016-01-01
A common set of statistical metrics has been used to summarize the performance of models or measurements, the most widely used being bias, mean square error, and the linear correlation coefficient. They assume linear, additive, Gaussian errors, and they are interdependent, incomplete, and incapable of directly quantifying uncertainty. The authors demonstrate that these metrics can be directly derived from the parameters of the simple linear error model. Since a correct error model captures the full error information, it is argued that the specification of a parametric error model should be an alternative to the metrics-based approach. The error-modeling methodology is applicable to both linear and nonlinear errors, while the metrics are only meaningful for linear errors. In addition, the error model expresses the error structure more naturally, and directly quantifies uncertainty. This argument is further explained by highlighting the intrinsic connections between the performance metrics, the error model, and the joint distribution between the data and the reference.
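A small simulation makes the central claim concrete: given the parameters (a, b, s) of a linear, additive error model x = a + b*t + eps, the standard metrics follow directly, matching their data-based counterparts (parameter values below are illustrative):

```python
import numpy as np

# Linear error model x = a + b*t + eps, eps ~ N(0, s^2).
rng = np.random.default_rng(1)
a, b, s = 0.5, 0.9, 0.3
t = rng.normal(10.0, 2.0, 200_000)        # reference ("truth")
x = a + b * t + rng.normal(0, s, t.size)  # measurement

mu_t, var_t = t.mean(), t.var()

# Metrics derived from the error-model parameters alone:
bias_model = a + (b - 1) * mu_t
mse_model = bias_model**2 + (b - 1)**2 * var_t + s**2
corr_model = b * np.sqrt(var_t) / np.sqrt(b**2 * var_t + s**2)

# The same metrics computed directly from the data:
bias_data = (x - t).mean()
mse_data = ((x - t) ** 2).mean()
corr_data = np.corrcoef(x, t)[0, 1]
```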
Fathi, Ezzatollah; Farahzadi, Raheleh; Rahbarghazi, Reza; Samadi Kafil, Hossein; Yolmeh, Rahman
2017-01-01
Zinc as an essential trace element was reported to be involved in regulation of the growth and aging of cells. In this study, rat adipose-derived mesenchymal stem cells were exposed to extremely low frequency electromagnetic field (ELF-EMF) of 50 Hz and 20 mT to evaluate whether exposure to ELF-EMF in the presence of zinc sulfate (ZnSO4) affects the telomerase reverse transcriptase (TERT) gene expression and aging in mesenchymal stem cells (MSCs). The cell plates were divided into four groups including group I (control without ZnSO4 and ELF-EMF exposure); group II (ELF-EMF-exposure without ZnSO4); group III (ZnSO4 treatment without ELF-EMF exposure) and group ІV (ELF-EMF exposure with ZnSO4). In the presence of different concentrations of ZnSO4, cells viability, TERT gene expression and percentage of senescent cells were evaluated using colorimetric assay, real-time PCR and senescence-associated β-galactosidase activity assay, respectively. In this experiment, cells were exposed to ELF-EMF for 30 min per day for 21 days in the presence and absence of ZnSO4. The results revealed that ELF-EMF leads to a decrease in the expression of TERT gene and increase in the percentage of senescent cells. However, the ZnSO4 could significantly increase the TERT gene expression and decrease the aging of ELF-EMF-exposed MSCs. It seems that ZnSO4 may be a beneficial agent to delay aging of ELF-EMF-exposed MSCs due to the induction of TERT gene expression.
NASA Technical Reports Server (NTRS)
Lichtenstein, Jacob H.; Williams, James L.
1961-01-01
A low-speed investigation has been conducted in the Langley stability tunnel to study the effects of frequency and amplitude of sideslipping motion on the lateral stability derivatives of a 60 deg. delta wing, a 45 deg. sweptback wing, and an unswept wing. The investigation was made for values of the reduced-frequency parameter of 0.066 and 0.218 and for a range of amplitudes from +/- 2 to +/- 6 deg. The results of the investigation indicated that increasing the frequency of the oscillation generally produced an appreciable change in magnitude of the lateral oscillatory stability derivatives in the higher angle-of-attack range. This effect was greatest for the 60 deg. delta wing and smallest for the unswept wing and generally resulted in a more linear variation of these derivatives with angle of attack. For the relatively high frequency at which the amplitude was varied, there appeared to be little effect on the measured derivatives as a result of the change in amplitude of the oscillation.
Medication errors: prescribing faults and prescription errors
Velo, Giampaolo P; Minuz, Pietro
2009-01-01
Medication errors are common in general practice and in hospitals. Both errors in the act of writing (prescription errors) and prescribing faults due to erroneous medical decisions can result in harm to patients. Any step in the prescribing process can generate errors. Slips, lapses, or mistakes are sources of errors, as in unintended omissions in the transcription of drugs. Faults in dose selection, omitted transcription, and poor handwriting are common. Inadequate knowledge or competence and incomplete information about clinical characteristics and previous treatment of individual patients can result in prescribing faults, including the use of potentially inappropriate medications. An unsafe working environment, complex or undefined procedures, and inadequate communication among health-care personnel, particularly between doctors and nurses, have been identified as important underlying factors that contribute to prescription errors and prescribing faults. Active interventions aimed at reducing prescription errors and prescribing faults are strongly recommended. These should be focused on the education and training of prescribers and the use of on-line aids. The complexity of the prescribing procedure should be reduced by introducing automated systems or uniform prescribing charts, in order to avoid transcription and omission errors. Feedback control systems and immediate review of prescriptions, which can be performed with the assistance of a hospital pharmacist, are also helpful. Audits should be performed periodically. PMID:19594530
ERIC Educational Resources Information Center
Gressang, Jane E.
2010-01-01
Second language (L2) learners notoriously have trouble using articles in their target languages (e.g., "a", "an", "the" in English). However, researchers disagree about the patterns and causes of these errors. Past studies have found that L2 English learners: (1) Predominantly omit articles (White 2003, Robertson 2000), (2) Overuse "the" (Huebner…
Frequentist Standard Errors of Bayes Estimators.
Lee, DongHyuk; Carroll, Raymond J; Sinha, Samiran
2017-09-01
Frequentist standard errors are a measure of uncertainty of an estimator, and the basis for statistical inferences. Frequentist standard errors can also be derived for Bayes estimators. However, except in special cases, the computation of the standard error of Bayesian estimators requires bootstrapping, which in combination with Markov chain Monte Carlo (MCMC) can be highly time consuming. We discuss an alternative approach for computing frequentist standard errors of Bayesian estimators, including importance sampling. Through several numerical examples we show that our approach can be much more computationally efficient than the standard bootstrap.
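For conjugate models the posterior mean is closed-form, so the bootstrap standard error discussed above is cheap to compute. A sketch with a normal-mean model (illustrative of the idea only; the expensive case the authors address is bootstrap wrapped around MCMC, and their remedy is importance sampling, not shown here):

```python
import numpy as np

# Frequentist (bootstrap) standard error of a Bayes estimator:
# conjugate normal model with known sd and prior N(mu0, tau^2),
# so the posterior mean has a closed form and no MCMC is needed.
rng = np.random.default_rng(2)
sigma, mu0, tau = 1.0, 0.0, 2.0
data = rng.normal(1.5, sigma, 100)

def posterior_mean(x):
    n = x.size
    prec = n / sigma**2 + 1 / tau**2
    return (n * x.mean() / sigma**2 + mu0 / tau**2) / prec

# Bootstrap the data, recompute the Bayes estimator each time.
boot = np.array([posterior_mean(rng.choice(data, data.size, replace=True))
                 for _ in range(2000)])
se_boot = boot.std(ddof=1)   # frequentist SE of the Bayes estimator
```

For this model the SE is close to the shrunken standard error of the mean, roughly 0.1 with n = 100.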
Performance analysis of a concatenated coding scheme for error control
NASA Technical Reports Server (NTRS)
Costello, D. J., Jr.; Lin, S.; Kasami, T.
1983-01-01
A concatenated coding scheme for error control in data communications is analyzed. In this scheme, the inner code is used for both error correction and detection; the outer code is used only for error detection. A retransmission is requested if the outer code detects the presence of errors after the inner code decoding. The probability of undetected error is derived and bounded. A particular example, proposed for the planetary program, is analyzed.
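The outer code's detection-failure probability, which drives the undetected-error analysis, has a classic weight-spectrum expression on a binary symmetric channel. A sketch of that standard formula (not the report's specific bound for the concatenated scheme):

```python
def undetected_error_prob(weight_enum, n, p):
    """Probability that an (n, k) linear block code used purely for error
    detection fails to detect errors on a BSC with crossover probability p:
    P_ud = sum over nonzero codeword weights w of A_w * p^w * (1-p)^(n-w).
    weight_enum maps weight w -> multiplicity A_w."""
    return sum(a_w * p**w * (1 - p) ** (n - w)
               for w, a_w in weight_enum.items())

# (7,4) Hamming code weight spectrum (nonzero words): A_3=7, A_4=7, A_7=1.
spectrum = {3: 7, 4: 7, 7: 1}
p_ud = undetected_error_prob(spectrum, 7, p=0.01)
```

An undetected error occurs exactly when the channel error pattern is itself a nonzero codeword, hence the sum over the weight spectrum.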
Elliott, C.J.; McVey, B. ); Quimby, D.C. )
1990-01-01
The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of {plus minus}25{mu}m, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.
Prescribing Errors Involving Medication Dosage Forms
Lesar, Timothy S
2002-01-01
CONTEXT Prescribing errors involving medication dose formulations have been reported to occur frequently in hospitals. No systematic evaluations of the characteristics of errors related to medication dosage formulation have been performed. OBJECTIVE To quantify the characteristics, frequency, and potential adverse patient effects of prescribing errors involving medication dosage forms. DESIGN Evaluation of all detected medication prescribing errors involving or related to medication dosage forms in a 631-bed tertiary care teaching hospital. MAIN OUTCOME MEASURES Type, frequency, and potential for adverse effects of prescribing errors involving or related to medication dosage forms. RESULTS A total of 1,115 clinically significant prescribing errors involving medication dosage forms were detected during the 60-month study period. The annual number of detected errors increased throughout the study period. Detailed analysis of the 402 errors detected during the last 16 months of the study demonstrated the most common errors to be: failure to specify controlled release formulation (total of 280 cases; 69.7%) both when prescribing using the brand name (148 cases; 36.8%) and when prescribing using the generic name (132 cases; 32.8%); and prescribing controlled delivery formulations to be administered per tube (48 cases; 11.9%). The potential for adverse patient outcome was rated as potentially “fatal or severe” in 3 cases (0.7%), and “serious” in 49 cases (12.2%). Errors most commonly involved cardiovascular agents (208 cases; 51.7%). CONCLUSIONS Hospitalized patients are at risk for adverse outcomes due to prescribing errors related to inappropriate use of medication dosage forms. This information should be considered in the development of strategies to prevent adverse patient outcomes resulting from such errors. PMID:12213138
Metabolism - inborn errors of ... Bodamer OA. Approach to inborn errors of metabolism. In: Goldman L, Schafer AI, eds. Goldman's Cecil Medicine . 25th ed. Philadelphia, PA: Elsevier Saunders; 2015:chap 205. Rezvani I, Rezvani G. An ...
Accounting for Interlanguage Errors
ERIC Educational Resources Information Center
Benetti, Jean N.
1978-01-01
A study was conducted to test various explanations of the error of unmarked noun plurals made by first generation Italian immigrants. The error appeared to be "fossilized" or not eradicated over a period of time. (SW)
Ainsworth, Nathan G; Grijalva, Prof. Santiago
2013-01-01
This paper discusses a proposed frequency restoration controller which operates as an outer loop to frequency droop for voltage-source inverters. By quasi-equilibrium analysis, we show that the proposed controller is able to provide arbitrarily small steady-state frequency error while maintaining power sharing between inverters without the need for communication or centralized control. We derive the rate of convergence, discuss design considerations (including a fundamental trade-off that must be made in design), present a design procedure to meet a maximum frequency error requirement, and show simulation results verifying our analysis and design method. The proposed controller will allow flexible plug-and-play inverter-based networks to meet a specified maximum frequency error requirement.
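The trade-off described, arbitrarily small steady-state frequency error without communication, can be seen in a toy single-inverter model with droop plus a leaky restoration integrator (the model and gains below are illustrative, not the paper's controller or design procedure):

```python
# Droop control with a leaky-integrator restoration outer loop.
# Droop alone leaves a steady error of -m*P; the restoration input u
# shrinks it to -m*P*leak/(leak + k), arbitrarily small for small leak.
w0, m, P = 60.0, 0.05, 10.0   # nominal frequency (Hz), droop gain, load power
k, leak = 50.0, 0.5           # restoration gain and integrator leak

u, dt = 0.0, 1e-3
for _ in range(200_000):      # forward-Euler integration to steady state
    w = w0 - m * P + u        # droop law plus restoration input
    u += dt * (-k * (w - w0) - leak * u)

steady_error = w - w0         # analytically: -m*P*leak/(leak + k)
```

Shrinking the leak reduces the steady-state error but slows disturbance rejection, which mirrors the kind of fundamental design trade-off the paper analyzes.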
2017-04-01
to create the optical beat frequencies measured by the heterodyne system and the associated drift of their center frequencies. This drift was...nonoscillating speaker cone. To assess the effects associated with the thermal drift of the 2 lasers, and their deviation from an idealized...demonstrates the methodology toward how the US Army Research Laboratory’s scientists could use FCPDV to eliminate the directional ambiguity associated with
Drug Errors in Anaesthesiology
Jain, Rajnish Kumar; Katiyar, Sarika
2009-01-01
Medication errors are a leading cause of morbidity and mortality in hospitalized patients. The incidence of these drug errors during anaesthesia is not certain. They impose a considerable financial burden on health care systems, apart from the losses to patients. Common causes of these errors and their prevention are discussed. PMID:20640103
An error management system in a veterinary clinical laboratory.
Hooijberg, Emma; Leidinger, Ernst; Freeman, Kathleen P
2012-05-01
Error recording and management is an integral part of a clinical laboratory quality management system. Analysis and review of recorded errors lead to corrective and preventive actions through modification of existing processes and, ultimately, to quality improvement. Laboratory errors can be divided into preanalytical, analytical, and postanalytical errors depending on where in the laboratory cycle the errors occur. The purpose of the current report is to introduce an error management system in use in a veterinary diagnostic laboratory as well as to examine the amount and types of error recorded during the 8-year period from 2003 to 2010. Annual error reports generated during this period by the error recording system were reviewed, and annual error rates were calculated. In addition, errors were divided into preanalytical, analytical, postanalytical, and "other" categories, and their frequency was examined. Data were further compared to that available from human diagnostic laboratories. Finally, sigma metrics were calculated for the various error categories. Annual error rates per total number of samples ranged from 1.3% in 2003 to 0.7% in 2010. Preanalytical errors ranged from 52% to 77%, analytical from 4% to 14%, postanalytical from 9% to 21%, and other error from 6% to 19% of total errors. Sigma metrics ranged from 4.1 to 4.7. All data were comparable to that reported in human clinical laboratories. The incremental annual reduction of error shows that use of an error management system led to quality improvement.
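Sigma metrics of the kind reported can be computed from an error rate using the conventional 1.5-sigma long-term shift. A sketch using the abstract's annual rates (the study's category-specific metrics of 4.1 to 4.7 use different denominators, so the values here differ; the formula is the common convention, assumed rather than taken from the paper):

```python
from statistics import NormalDist

def sigma_metric(error_rate, shift=1.5):
    """Short-term sigma for a given defect rate, applying the conventional
    1.5-sigma shift used in Six Sigma quality calculations."""
    return NormalDist().inv_cdf(1 - error_rate) + shift

# Annual error rates per total samples reported in the abstract:
sigma_2003 = sigma_metric(0.013)   # 1.3% of samples in 2003
sigma_2010 = sigma_metric(0.007)   # 0.7% of samples in 2010
```

The 2003-to-2010 drop in error rate translates to roughly a quarter-sigma improvement on this scale.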
Medication errors: definitions and classification
Aronson, Jeffrey K
2009-01-01
To understand medication errors and to identify preventive strategies, we need to classify them and define the terms that describe them. The four main approaches to defining technical terms consider etymology, usage, previous definitions, and the Ramsey–Lewis method (based on an understanding of theory and practice). A medication error is ‘a failure in the treatment process that leads to, or has the potential to lead to, harm to the patient’. Prescribing faults, a subset of medication errors, should be distinguished from prescription errors. A prescribing fault is ‘a failure in the prescribing [decision-making] process that leads to, or has the potential to lead to, harm to the patient’. The converse of this, ‘balanced prescribing’ is ‘the use of a medicine that is appropriate to the patient's condition and, within the limits created by the uncertainty that attends therapeutic decisions, in a dosage regimen that optimizes the balance of benefit to harm’. This excludes all forms of prescribing faults, such as irrational, inappropriate, and ineffective prescribing, underprescribing and overprescribing. A prescription error is ‘a failure in the prescription writing process that results in a wrong instruction about one or more of the normal features of a prescription’. The ‘normal features’ include the identity of the recipient, the identity of the drug, the formulation, dose, route, timing, frequency, and duration of administration. Medication errors can be classified, invoking psychological theory, as knowledge-based mistakes, rule-based mistakes, action-based slips, and memory-based lapses. This classification informs preventive strategies. PMID:19594526
Performance Errors in Weight Training and Their Correction.
ERIC Educational Resources Information Center
Downing, John H.; Lander, Jeffrey E.
2002-01-01
Addresses general performance errors in weight training, also discussing each category of error separately. The paper focuses on frequency and intensity, incorrect training velocities, full range of motion, and symmetrical training. It also examines specific errors related to the bench press, squat, military press, and bent- over and seated row…
The Nature of Error in Adolescent Student Writing
ERIC Educational Resources Information Center
Wilcox, Kristen Campbell; Yagelski, Robert; Yu, Fang
2014-01-01
This study examined the nature and frequency of error in high school native English speaker (L1) and English learner (L2) writing. Four main research questions were addressed: Are there significant differences in students' error rates in English language arts (ELA) and social studies? Do the most common errors made by students differ in ELA…
Regression calibration with heteroscedastic error variance.
Spiegelman, Donna; Logan, Roger; Grove, Douglas
2011-01-01
The problem of covariate measurement error with heteroscedastic measurement error variance is considered. Standard regression calibration assumes that the measurement error has a homoscedastic measurement error variance. An estimator is proposed to correct regression coefficients for covariate measurement error with heteroscedastic variance. Point and interval estimates are derived. Validation data containing the gold standard must be available. This estimator is a closed-form correction of the uncorrected primary regression coefficients, which may be of logistic or Cox proportional hazards model form, and is closely related to the version of regression calibration developed by Rosner et al. (1990). The primary regression model can include multiple covariates measured without error. The use of these estimators is illustrated in two data sets, one taken from occupational epidemiology (the ACE study) and one taken from nutritional epidemiology (the Nurses' Health Study). In both cases, although there was evidence of moderate heteroscedasticity, there was little difference in estimation or inference using this new procedure compared to standard regression calibration. It is shown theoretically that unless the relative risk is large or measurement error severe, standard regression calibration approximations will typically be adequate, even with moderate heteroscedasticity in the measurement error model variance. In a detailed simulation study, standard regression calibration performed either as well as or better than the new estimator. When the disease is rare and the errors normally distributed, or when measurement error is moderate, standard regression calibration remains the method of choice.
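Classical regression calibration, which the proposed estimator extends to heteroscedastic error variance, corrects the naive (attenuated) slope by a calibration factor estimated from validation data containing the gold standard. A homoscedastic single-covariate sketch (simulated data; the paper's setting also covers logistic and Cox models):

```python
import numpy as np

# Regression calibration with a validation subsample.
rng = np.random.default_rng(3)
n = 50_000
x_true = rng.normal(0, 1, n)
x_obs = x_true + rng.normal(0, 0.7, n)    # covariate measured with error
y = 2.0 * x_true + rng.normal(0, 1, n)    # outcome model, true slope = 2

beta_naive = np.polyfit(x_obs, y, 1)[0]   # attenuated toward zero

# Validation data with the gold standard: regress truth on the surrogate
# to estimate the calibration slope lambda = cov(x_true, x_obs)/var(x_obs).
val = slice(0, 5_000)
lam = np.polyfit(x_obs[val], x_true[val], 1)[0]

beta_corrected = beta_naive / lam         # regression-calibration estimate
```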
Larson, Michael J; Fair, Joseph E; Good, Daniel A; Baldwin, Scott A
2010-05-01
Recent research suggests a relationship between empathy and error processing. Error processing is an evaluative control function that can be measured using post-error response time slowing and the error-related negativity (ERN) and post-error positivity (Pe) components of the event-related potential (ERP). Thirty healthy participants completed two measures of empathy, the Interpersonal Reactivity Index (IRI) and the Empathy Quotient (EQ), and a modified Stroop task. Post-error slowing was associated with increased empathic personal distress on the IRI. ERN amplitude was related to overall empathy score on the EQ and the fantasy subscale of the IRI. The Pe and measures of empathy were not related. Results remained consistent when negative affect was controlled via partial correlation, with an additional relationship between ERN amplitude and empathic concern on the IRI. Findings support a connection between empathy and error processing mechanisms.
Error analysis of compensation cutting technique for wavefront error of KH2PO4 crystal.
Tie, Guipeng; Dai, Yifan; Guan, Chaoliang; Zhu, Dengchao; Song, Bing
2013-09-20
Because the wavefront error of KH(2)PO(4) (KDP) crystal is difficult to control through the face fly cutting process, owing to surface shape deformation during vacuum suction, an error compensation technique based on a spiral turning method is put forward. An in situ measurement device is applied to measure the deformed surface shape after vacuum suction, and the initial surface figure error, which is obtained off-line, is added to the in situ surface shape to obtain the final surface figure to be compensated. A three-axis servo technique is then utilized to cut the final surface shape. In addition to error sources common to traditional cutting processes, such as error in the straightness of guide ways, spindle rotation error, and error caused by ambient environment variation, three other errors, the in situ measurement error, the position deviation error, and the servo-following error, are the main sources affecting compensation accuracy. This paper discusses the effect of these three errors on compensation accuracy and provides strategies to improve the final surface quality. Experimental verification was carried out on one piece of KDP crystal with a size of Φ270 mm×11 mm. After one compensation process, the peak-to-valley value of the transmitted wavefront error dropped from 1.9λ (λ=632.8 nm) to approximately 1/3λ, and the mid-spatial-frequency error did not worsen when the frequency of the cutting tool trajectory was controlled by use of a low-pass filter.
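The abstract's final step low-pass filters the tool trajectory to keep mid-spatial-frequency error in check. The actual filter is not specified, so this sketch uses a simple centered moving average as a stand-in:

```python
# Stand-in low-pass filter for a tool trajectory: a centered moving
# average attenuates high-frequency commands. The paper's actual
# filter design is not specified in the abstract; this is illustrative.

def moving_average(signal, window):
    """Centered moving average; edges use the available neighbours."""
    half = window // 2
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out
```

Applied to a trajectory sampled along the spiral path, the window length sets the spatial-frequency cutoff: longer windows suppress more of the mid-frequency content.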
Analysis of Medication Error Reports
Whitney, Paul D.; Young, Jonathan; Santell, John; Hicks, Rodney; Posse, Christian; Fecht, Barbara A.
2004-11-15
In medicine, as in many areas of research, technological innovation and the shift from paper based information to electronic records has created a climate of ever increasing availability of raw data. There has been, however, a corresponding lag in our abilities to analyze this overwhelming mass of data, and classic forms of statistical analysis may not allow researchers to interact with data in the most productive way. This is true in the emerging area of patient safety improvement. Traditionally, a majority of the analysis of error and incident reports has been carried out based on an approach of data comparison, and starts with a specific question which needs to be answered. Newer data analysis tools have been developed which allow the researcher to not only ask specific questions but also to “mine” data: approach an area of interest without preconceived questions, and explore the information dynamically, allowing questions to be formulated based on patterns brought up by the data itself. Since 1991, United States Pharmacopeia (USP) has been collecting data on medication errors through voluntary reporting programs. USP’s MEDMARXsm reporting program is the largest national medication error database and currently contains well over 600,000 records. Traditionally, USP has conducted an annual quantitative analysis of data derived from “pick-lists” (i.e., items selected from a list of items) without an in-depth analysis of free-text fields. In this paper, the application of text analysis and data analysis tools used by Battelle to analyze the medication error reports already analyzed in the traditional way by USP is described. New insights and findings were revealed including the value of language normalization and the distribution of error incidents by day of the week. The motivation for this effort is to gain additional insight into the nature of medication errors to support improvements in medication safety.
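The "language normalization" step mentioned above can be pictured as lowercasing, stripping punctuation, and expanding known abbreviations before mining free-text fields. A toy sketch; the synonym table and report text are invented, not MEDMARX content:

```python
# Toy illustration of free-text normalization for error reports:
# lowercase, tokenize, and expand a (hypothetical) abbreviation table.

import re

SYNONYMS = {"iv": "intravenous", "hctz": "hydrochlorothiazide"}

def normalize(report):
    """Lowercase, strip punctuation, and expand known abbreviations."""
    words = re.findall(r"[a-z0-9]+", report.lower())
    return " ".join(SYNONYMS.get(w, w) for w in words)

normalized = normalize("Wrong IV rate; HCTZ omitted.")
```

Normalized text lets reports that describe the same drug or route in different shorthand be counted together, which is the kind of insight (e.g., incident distribution by weekday) the Battelle analysis surfaced.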
Grammatical Errors Produced by English Majors: The Translation Task
ERIC Educational Resources Information Center
Mohaghegh, Hamid; Zarandi, Fatemeh Mahmoudi; Shariati, Mohammad
2011-01-01
This study investigated the frequency of the grammatical errors related to the four categories of preposition, relative pronoun, article, and tense using the translation task. In addition, the frequencies of these grammatical errors in different categories and in each category were examined. The quantitative component of the study further looked…
Perceptual Bias in Speech Error Data Collection: Insights from Spanish Speech Errors
ERIC Educational Resources Information Center
Perez, Elvira; Santiago, Julio; Palma, Alfonso; O'Seaghdha, Padraig G.
2007-01-01
This paper studies the reliability and validity of naturalistic speech errors as a tool for language production research. Possible biases when collecting naturalistic speech errors are identified and specific predictions derived. These patterns are then contrasted with published reports from Germanic languages (English, German and Dutch) and one…
Aircraft system modeling error and control error
NASA Technical Reports Server (NTRS)
Kulkarni, Nilesh V. (Inventor); Kaneshige, John T. (Inventor); Krishnakumar, Kalmanje S. (Inventor); Burken, John J. (Inventor)
2012-01-01
A method for modeling error-driven adaptive control of an aircraft. Normal aircraft plant dynamics is modeled, using an original plant description in which a controller responds to a tracking error e(k) to drive the component to a normal reference value according to an asymptote curve. Where the system senses that (1) at least one aircraft plant component is experiencing an excursion and (2) the return of this component value toward its reference value is not proceeding according to the expected controller characteristics, neural network (NN) modeling of aircraft plant operation may be changed. However, if (1) is satisfied but the error component is returning toward its reference value according to expected controller characteristics, the NN will continue to model operation of the aircraft plant according to an original description.
Diagnostic errors in interactive telepathology.
Stauch, G; Schweppe, K W; Kayser, K
2000-01-01
Telepathology (TP), a service providing pathology at a distance, is now widely used and is integrated into the daily workflow of numerous pathologists. In Germany, 15 departments of pathology currently use the telepathology technique for frozen section service; however, a commonly recognised quality standard for diagnostic accuracy is still missing. As a first step, the Aurich working group uses a TP system for frozen section service to analyse the frequency and sources of errors in TP frozen section diagnoses, in order to evaluate the quality of frozen section slides, the important components of image quality, and their influence on diagnostic accuracy. The authors point to the necessity of an optimal training program for all participants in this service in order to reduce the risk of diagnostic errors. In addition, there is a need for optimal cooperation among all partners involved in the TP service.
Errors in finite-difference computations on curvilinear coordinate systems
NASA Technical Reports Server (NTRS)
Mastin, C. W.; Thompson, J. F.
1980-01-01
Curvilinear coordinate systems were used extensively to solve partial differential equations on arbitrary regions. An analysis of truncation error in the computation of derivatives revealed why numerical results may be erroneous. A more accurate method of computing derivatives is presented.
Sources of situation awareness errors in aviation.
Jones, D G; Endsley, M R
1996-06-01
Situation Awareness (SA) is a crucial factor in effective decision-making, especially in the dynamic flight environment. Consequently, an understanding of the types of SA errors that occur in this environment is beneficial. This study uses reports from the Aviation Safety Reporting System (ASRS) database (accessed by the term "situational awareness") to investigate the types of SA errors that occur in aviation. The errors were classified into one of three major categories: Level 1 (failure to correctly perceive the information), Level 2 (failure to comprehend the situation), or Level 3 (failure to project the situation into the future). Of the errors identified, 76.3% were Level 1 SA errors, 20.3% were Level 2, and 3.4% were Level 3. Level 1 SA errors occurred when relevant data were not available, when data were hard to discriminate or detect, when a failure to monitor or observe data occurred, when presented information was misperceived, or when memory loss occurred. Level 2 SA errors involved a lack of or an incomplete mental model, the use of an incorrect mental model, over-reliance on default values, and miscellaneous other factors. Level 3 errors involved either an overprojection of current trends or miscellaneous other factors. These results give an indication of the types and frequency of SA errors that occur in aviation, with failure to monitor or observe available information forming the largest single category. Many other causal factors are also indicated, however, including vigilance, automation problems, and poor mental models.
Estimating diversity via frequency ratios.
Willis, Amy; Bunge, John
2015-12-01
We wish to estimate the total number of classes in a population based on sample counts, especially in the presence of high latent diversity. Drawing on probability theory that characterizes distributions on the integers by ratios of consecutive probabilities, we construct a nonlinear regression model for the ratios of consecutive frequency counts. This allows us to predict the unobserved count and hence estimate the total diversity. We believe that this is the first approach to depart from the classical mixed Poisson model in this problem. Our method is geometrically intuitive and yields good fits to data with reasonable standard errors. It is especially well-suited to analyzing high diversity datasets derived from next-generation sequencing in microbial ecology. We demonstrate the method's performance in this context and via simulation, and we present a dataset for which our method outperforms all competitors.
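The core move is to regress ratios of consecutive frequency counts and extrapolate to predict the unobserved zero count. A deliberately simplified linear-ratio sketch (the paper's model is nonlinear; this variant, and the counts used, are assumptions for illustration only):

```python
# Simplified frequency-ratio estimator: fit a line to the ratios
# r_j = f_{j+1} / f_j against j, extrapolate the intercept at j = 0
# to predict f_1 / f_0, and hence the unobserved count f_0.
# The paper uses a nonlinear regression; this linear version is a toy.

def estimate_total_classes(freq_counts):
    """freq_counts[j] = number of classes observed exactly j+1 times."""
    ratios = [b / a for a, b in zip(freq_counts, freq_counts[1:])]
    js = list(range(1, len(ratios) + 1))
    n = len(js)
    mj, mr = sum(js) / n, sum(ratios) / n
    sxx = sum((j - mj) ** 2 for j in js)
    slope = (sum((j - mj) * (r - mr) for j, r in zip(js, ratios)) / sxx
             if sxx else 0.0)
    r0 = mr - slope * mj          # extrapolated ratio f_1 / f_0 at j = 0
    f0 = freq_counts[0] / r0      # predicted number of unseen classes
    return sum(freq_counts) + f0  # observed richness + unseen classes
```

For geometric-like counts [20, 10, 5] the ratios are constant at 0.5, so the model predicts 40 unseen classes on top of the 35 observed.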
Image defects from surface and alignment errors in grazing incidence telescopes
NASA Technical Reports Server (NTRS)
Saha, Timo T.
1989-01-01
The rigid body motions and low frequency surface errors of grazing incidence Wolter telescopes are studied. The analysis is based on surface error descriptors proposed by Paul Glenn. In his analysis, the alignment and surface errors are expressed in terms of Legendre-Fourier polynomials. Individual terms in the expression correspond to rigid body motions (decenter and tilt) and low spatial frequency surface errors of mirrors. With the help of the Legendre-Fourier polynomials and the geometry of grazing incidence telescopes, exact and approximated first order equations are derived in this paper for the components of the ray intercepts at the image plane. These equations are then used to calculate the sensitivities of Wolter type I and II telescopes for the rigid body motions and surface deformations. The rms spot diameters calculated from this theory and OSAC ray tracing code agree very well. This theory also provides a tool to predict how rigid body motions and surface errors of the mirrors compensate each other.
Statistics of the residual refraction errors in laser ranging data
NASA Technical Reports Server (NTRS)
Gardner, C. S.
1977-01-01
A theoretical model for the range error covariance was derived by assuming that the residual refraction errors are due entirely to errors in the meteorological data which are used to calculate the atmospheric correction. The properties of the covariance function are illustrated by evaluating the theoretical model for the special case of a dense network of weather stations uniformly distributed within a circle.
Error estimates of numerical solutions for a cyclic plasticity problem
NASA Astrophysics Data System (ADS)
Han, W.
A cyclic plasticity problem is numerically analyzed in [13], where a sub-optimal order error estimate is shown for a spatially discrete scheme. In this note, we prove an optimal order error estimate for the spatially discrete scheme under the same solution regularity condition. We also derive an error estimate for a fully discrete scheme for solving the plasticity problem.
Everyday Memory Errors in Older Adults
Ossher, Lynn; Flegal, Kristin E.; Lustig, Cindy
2012-01-01
Despite concern about cognitive decline in old age, few studies document the types and frequency of memory errors older adults make in everyday life. In the present study, 105 healthy older adults completed the Everyday Memory Questionnaire (EMQ; Sunderland, Harris, & Baddeley, 1983), indicating what memory errors they had experienced in the last 24 hours, the Memory Self-Efficacy Questionnaire (MSEQ; West, Thorn, & Bagwell, 2003), and other neuropsychological and cognitive tasks. EMQ and MSEQ scores were unrelated and made separate contributions to variance on the Mini Mental State Exam (MMSE; Folstein, Folstein, & McHugh, 1975), suggesting separate constructs. Tip-of-the-tongue errors were the most commonly reported, and the EMQ Faces/Places and New Things subscales were most strongly related to MMSE. These findings may help training programs target memory errors commonly experienced by older adults, and suggest which types of memory errors could indicate cognitive declines of clinical concern. PMID:22694275
NASA Technical Reports Server (NTRS)
Hung, C. K.
1978-01-01
A selective repeat automatic repeat request (ARQ) system was implemented under software control in the Ground Communications Facility error detection and correction (EDC) assembly at JPL and the comm monitor and formatter (CMF) assembly at the DSSs. The CMF and EDC significantly improved real time data quality and significantly reduced the post-pass time required for replay of blocks originally received in error. Since the remote mission operation centers (RMOCs) do not provide compatible error correction equipment, error correction will not be used on the RMOC-JPL high speed data (HSD) circuits. The real time error correction capability will correct error burst or outage of two loop-times or less for each DSS-JPL HSD circuit.
NASA Technical Reports Server (NTRS)
Buechler, W.; Tucker, A. G.
1981-01-01
Several methods were employed to detect both the occurrence and the source of errors in the operational software of the AN/SLQ-32, a large embedded real-time electronic warfare command and control system for the ROLM 1606 computer. The ROLM computer provides information about invalid addressing, improper use of privileged instructions, stack overflows, and unimplemented instructions. Additionally, software techniques were developed to detect invalid jumps, indices out of range, infinite loops, stack underflows, and field size errors. Finally, data are saved to provide information about the status of the system when an error is detected, including I/O buffers, interrupt counts, stack contents, and recently passed locations. The various errors detected, techniques to assist in debugging problems, and segment simulation on a non-target computer are discussed. These error detection techniques were a major factor in finding the primary cause of error in 98% of over 500 system dumps.
Olson, Eric J.
2013-06-11
An apparatus, program product, and method that run an algorithm on a hardware based processor, generate a hardware error as a result of running the algorithm, generate an algorithm output for the algorithm, compare the algorithm output to another output for the algorithm, and detect the hardware error from the comparison. The algorithm is designed to cause the hardware based processor to heat to a degree that increases the likelihood of hardware errors to manifest, and the hardware error is observable in the algorithm output. As such, electronic components may be sufficiently heated and/or sufficiently stressed to create better conditions for generating hardware errors, and the output of the algorithm may be compared at the end of the run to detect a hardware error that occurred anywhere during the run that may otherwise not be detected by traditional methodologies (e.g., due to cooling, insufficient heat and/or stress, etc.).
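The detection idea is output comparison: run a deterministic workload that heats and stresses the processor, then compare its result against a reference run; any mismatch flags a hardware error that occurred somewhere during the run. A minimal sketch in which the workload is a stand-in, not the patented algorithm:

```python
# Sketch of end-of-run output comparison for hardware error detection.
# The arithmetic loop is a hypothetical stand-in for the patent's
# heat-generating algorithm; only the compare-outputs idea is shown.

def stress_workload(n):
    """Deterministic arithmetic loop; identical output on healthy hardware."""
    acc = 0
    for i in range(1, n + 1):
        acc = (acc * 31 + i * i) % 1_000_000_007
    return acc

def detect_hardware_error(reference_output, n):
    """True if this run's output disagrees with the reference output."""
    return stress_workload(n) != reference_output

reference = stress_workload(10_000)
faulty = detect_hardware_error(reference, 10_000)  # False on healthy hardware
```

Because any bit flip anywhere in the accumulation propagates to the final value, a single end-of-run comparison can catch transient faults that momentary checks would miss.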
Kim, D O; Parham, K
1991-03-01
We examined a measure of discriminability in auditory nerve (AN) population responses that may underlie behavioral frequency discrimination of high-frequency pure tones in the cat. Population responses of high- (≥15 spikes/s) and low- (<15 spikes/s) spontaneous rate (SR) AN fibers in unanesthetized decerebrate cats to 5 kHz pure tones were measured in the form of the mean, mu, and standard deviation, sigma, of spike counts for 0.2 s tone bursts. The AN responses were analyzed in terms of a d'e(x, delta x) associated with adjoining cochlear places, defined in the manner of signal detection theory. We also examined sigma d'e(x, delta x), a spatial summation of the discriminability measure. The major findings are: (1) the d'e(x, delta x) function conveys information about 5 kHz pure tone frequency over a region of +/- 0.5 to 1.0 octave, or +/- 1.67 to 3.33 mm, around the characteristic place (CP), with the region being narrower at lower stimulus levels; (2) at 30 dB SPL, the integrated d'e(x, delta x) discriminability scores are similar for the apical and basal regions surrounding the CP whereas, at 70 dB SPL, the scores are higher for the apical region than for the basal region; and (3) at 50 and 70 dB SPL, the integrated d'e(x, delta x) discriminability scores of low-SR fibers were higher than those of high-SR fibers although, at 30 dB SPL, the latter were higher than the former. By using the cat cochlear frequency-place relationship and the inner hair cell (IHC) spacing, we interpret that the cat's frequency difference limen, delta f/f = 0.0088 at 4 kHz [Elliott et al., 1960, J. Acoust. Soc. Am. 32, 380-384], corresponds to a shift of the cochlear excitation profile by 4.5 IHCs. From the present analysis of AN responses, we conclude that, for high-frequency pure tones, the d'e(x, delta x) code, an example of a rate-place code of frequency, provides sufficient information to support the cat's behavioral frequency discrimination.
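The discriminability index compares spike-count distributions at adjoining cochlear places via their means and standard deviations. A minimal sketch of the standard signal-detection form of d'; the numbers are illustrative, not the cat data:

```python
# d' between two spike-count distributions, in the manner of signal
# detection theory: difference of means over the RMS of the two
# standard deviations. Values here are invented for illustration.

import math

def d_prime(mu1, sigma1, mu2, sigma2):
    """d' = (mu1 - mu2) / sqrt((sigma1^2 + sigma2^2) / 2)."""
    return (mu1 - mu2) / math.sqrt((sigma1 ** 2 + sigma2 ** 2) / 2)
```

Summing such place-to-place d' values along the cochlea gives the spatially integrated discriminability score the abstract analyzes.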
Entropic error-disturbance relations
NASA Astrophysics Data System (ADS)
Coles, Patrick; Furrer, Fabian
2014-03-01
We derive an entropic error-disturbance relation for a sequential measurement scenario as originally considered by Heisenberg, and we discuss how our relation could be tested using existing experimental setups. Our relation is valid for discrete observables, such as spin, as well as continuous observables, such as position and momentum. The novel aspect of our relation compared to earlier versions is its clear operational interpretation and the quantification of error and disturbance using entropic quantities. This directly relates the measurement uncertainty, a fundamental property of quantum mechanics, to information theoretical limitations and offers potential applications in for instance quantum cryptography. PC is funded by National Research Foundation Singapore and Ministry of Education Tier 3 Grant ``Random numbers from quantum processes'' (MOE2012-T3-1-009). FF is funded by Japan Society for the Promotion of Science, KAKENHI grant No. 24-02793.
Medication errors recovered by emergency department pharmacists.
Rothschild, Jeffrey M; Churchill, William; Erickson, Abbie; Munz, Kristin; Schuur, Jeremiah D; Salzberg, Claudia A; Lewinski, Daniel; Shane, Rita; Aazami, Roshanak; Patka, John; Jaggers, Rondell; Steffenhagen, Aaron; Rough, Steve; Bates, David W
2010-06-01
We assess the impact of emergency department (ED) pharmacists on reducing potentially harmful medication errors. We conducted this observational study in 4 academic EDs. Trained pharmacy residents observed a convenience sample of ED pharmacists' activities. The primary outcome was medication errors recovered by pharmacists, including errors intercepted before reaching the patient (near miss or potential adverse drug event), caught after reaching the patient but before causing harm (mitigated adverse drug event), or caught after some harm but before further or worsening harm (ameliorated adverse drug event). Pairs of physician and pharmacist reviewers confirmed recovered medication errors and assessed their potential for harm. Observers were unblinded and clinical outcomes were not evaluated. We conducted 226 observation sessions spanning 787 hours and observed pharmacists reviewing 17,320 medications ordered or administered to 6,471 patients. We identified 504 recovered medication errors, or 7.8 per 100 patients and 2.9 per 100 medications. Most of the recovered medication errors were intercepted potential adverse drug events (90.3%), with fewer mitigated adverse drug events (3.9%) and ameliorated adverse drug events (0.2%). The potential severities of the recovered errors were most often serious (47.8%) or significant (36.2%). The most common medication classes associated with recovered medication errors were antimicrobial agents (32.1%), central nervous system agents (16.2%), and anticoagulant and thrombolytic agents (14.1%). The most common error types were dosing errors, drug omission, and wrong frequency errors. ED pharmacists can identify and prevent potentially harmful medication errors. Controlled trials are necessary to determine the net costs and benefits of ED pharmacist staffing on safety, quality, and costs, especially important considerations for smaller EDs and pharmacy departments. Copyright (c) 2009 American College of Emergency Physicians
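The headline rates follow directly from the reported counts (504 recovered errors over 6,471 patients and 17,320 medications); a quick arithmetic check:

```python
# Recompute the abstract's recovered-error rates from its raw counts.

def rate_per_100(events, denominator):
    """Events per 100 units of the denominator, rounded to one decimal."""
    return round(100 * events / denominator, 1)

per_patients = rate_per_100(504, 6471)      # per 100 patients
per_medications = rate_per_100(504, 17320)  # per 100 medications reviewed
```

Both reproduce the abstract's figures of 7.8 per 100 patients and 2.9 per 100 medications.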
Simulating Bosonic Baths with Error Bars
NASA Astrophysics Data System (ADS)
Woods, M. P.; Cramer, M.; Plenio, M. B.
2015-09-01
We derive rigorous truncation-error bounds for the spin-boson model and its generalizations to arbitrary quantum systems interacting with bosonic baths. For the numerical simulation of such baths, the truncation of both the number of modes and the local Hilbert-space dimensions is necessary. We derive superexponential Lieb-Robinson-type bounds on the error when restricting the bath to finitely many modes and show how the error introduced by truncating the local Hilbert spaces may be efficiently monitored numerically. In this way we give error bounds for approximating the infinite system by a finite-dimensional one. As a consequence, numerical simulations such as the time-evolving density with orthogonal polynomials algorithm (TEDOPA) now allow for the fully certified treatment of the system-environment interaction.
Error and adjustment of reflecting prisms
NASA Astrophysics Data System (ADS)
Mao, Wenwei
1997-12-01
A manufacturing error in the orientation of the working planes of a reflecting prism, such as an angle error or an edge error, will cause the optical axis to deviate and the image to lean. So will an adjustment (position) error of a reflecting prism. A universal method for calculating the optical axis deviation and the image lean caused by the manufacturing error of a reflecting prism is presented; it is suited to all types of reflecting prisms. A means of offsetting the position error against the manufacturing error of a reflecting prism and the resulting changes of image orientation is discussed. For the calculation to be feasible, a surface named the 'separating surface' is introduced just in front of the real exit face of a real prism. It is the image of the entrance face formed by all reflecting surfaces of the real prism. It can be used to separate the image orientation change caused by the error of the prism's reflecting surfaces from the image orientation change caused by the error of the prism's refracting surface. Based on ray tracing, a set of simple and explicit formulas for the optical axis deviation and the image lean of a general optical wedge is derived.
NASA Technical Reports Server (NTRS)
Briggs, Hugh C.
2008-01-01
An error budget is a commonly used tool in the design of complex aerospace systems. It represents system performance requirements in terms of allowable errors and flows these down through a hierarchical structure to lower assemblies and components. The requirements may simply be 'allocated' based upon heuristics or experience, or they may be designed through use of physics-based models. This paper presents a basis for developing an error budget for models of the system, as opposed to the system itself. The need for model error budgets arises when system models are a principal design agent, as is increasingly common for poorly testable high performance space systems.
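The flow-down described above is often implemented as a root-sum-square roll-up of independent error terms through the hierarchy, checked against the top-level allocation. A minimal sketch; the hierarchy and numbers are illustrative, not any specific flight system's budget:

```python
# Sketch of an error-budget roll-up: leaf error terms combine in
# quadrature (RSS) up a two-level hierarchy. Structure and values
# are invented for illustration; real budgets track correlations too.

import math

def rss(terms):
    """Root-sum-square combination of independent error terms."""
    return math.sqrt(sum(t ** 2 for t in terms))

budget = {
    "pointing": ["sensor_noise", "thermal_drift"],
    "alignment": ["mount", "gravity_sag"],
}
leaf_errors = {"sensor_noise": 3.0, "thermal_drift": 4.0,
               "mount": 5.0, "gravity_sag": 12.0}

subsystem = {name: rss([leaf_errors[leaf] for leaf in leaves])
             for name, leaves in budget.items()}
total = rss(subsystem.values())  # compare against the top-level allocation
```

The quadrature combination is what makes the budget a sensitivity tool: doubling one leaf term shows immediately how much top-level margin it consumes.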
Danielmeier, Claudia; Ullsperger, Markus
2011-01-01
When our brain detects an error, this process changes how we react on ensuing trials. People show post-error adaptations, potentially to improve their performance in the near future. At least three types of behavioral post-error adjustments have been observed. These are post-error slowing (PES), post-error reduction of interference, and post-error improvement in accuracy (PIA). Apart from these behavioral changes, post-error adaptations have also been observed on a neuronal level with functional magnetic resonance imaging and electroencephalography. Neuronal post-error adaptations comprise activity increase in task-relevant brain areas, activity decrease in distracter-encoding brain areas, activity modulations in the motor system, and mid-frontal theta power increases. Here, we review the current literature with respect to these post-error adjustments, discuss under which circumstances these adjustments can be observed, and whether the different types of adjustments are linked to each other. We also evaluate different approaches for explaining the functional role of PES. In addition, we report reanalyzed and follow-up data from a flanker task and a moving dots interference task showing (1) that PES and PIA are not necessarily correlated, (2) that PES depends on the response–stimulus interval, and (3) that PES is reliable on a within-subject level over periods as long as several months. PMID:21954390
ERIC Educational Resources Information Center
Burrows, J. K.
Research on error patterns associated with whole number computation is reviewed. Details of the results of some of the individual studies cited are given in the appendices. In Appendix A, 33 addition errors, 27 subtraction errors, 41 multiplication errors, and 41 division errors are identified, and the frequency of these errors made by 352…
An analytic technique for statistically modeling random atomic clock errors in estimation
NASA Technical Reports Server (NTRS)
Fell, P. J.
1981-01-01
Minimum variance estimation requires that the statistics of random observation errors be modeled properly. If measurements are derived through the use of atomic frequency standards, then one source of error affecting the observable is random fluctuation in frequency. This is the case, for example, with range and integrated Doppler measurements from satellites of the Global Positioning System and with baseline determination for geodynamic applications. An analytic method is presented which approximates the statistics of this random process. The procedure starts with a model of the Allan variance for a particular oscillator and develops the statistics of range and integrated Doppler measurements. A series of five first-order Markov processes is used to approximate the power spectral density obtained from the Allan variance.
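Each term in such a series is a first-order Gauss-Markov (exponentially correlated) process. A sketch of generating one such process; the correlation time and noise level are illustrative assumptions, not values fitted to any oscillator's Allan variance:

```python
# One first-order Gauss-Markov process of the kind summed (five of
# them, in the paper) to approximate the clock-noise power spectrum.
# dt, tau, and sigma here are invented example values.

import math
import random

def gauss_markov(n, dt, tau, sigma, seed=0):
    """First-order Gauss-Markov sequence: x_{k+1} = phi * x_k + w_k."""
    rng = random.Random(seed)
    phi = math.exp(-dt / tau)                # correlation over one step
    q = sigma * math.sqrt(1.0 - phi ** 2)    # keeps steady-state std = sigma
    x, out = 0.0, []
    for _ in range(n):
        x = phi * x + q * rng.gauss(0.0, 1.0)
        out.append(x)
    return out

samples = gauss_markov(n=1000, dt=1.0, tau=50.0, sigma=2.0)
```

Summing several such processes with different correlation times tau shapes the composite power spectral density to match the one implied by the Allan variance model.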
ERIC Educational Resources Information Center
Saavedra, Pedro; Kuchak, JoAnn
An error-prone model (EPM) to predict financial aid applicants who are likely to misreport on Basic Educational Opportunity Grant (BEOG) applications was developed, based on interviews conducted with a quality control sample of 1,791 students during 1978-1979. The model was designed to identify corrective methods appropriate for different types of…
NASA Astrophysics Data System (ADS)
Liu, C. L.; Kirchengast, G.; Zhang, K. F.; Norman, R.; Li, Y.; Zhang, S. C.; Carter, B.; Fritzer, J.; Schwaerz, M.; Choy, S. L.; Wu, S. Q.; Tan, Z. X.
2013-09-01
Global Navigation Satellite System (GNSS) radio occultation (RO) is an innovative meteorological remote sensing technique for measuring atmospheric parameters such as refractivity, temperature, water vapour and pressure for the improvement of numerical weather prediction (NWP) and global climate monitoring (GCM). GNSS RO has many unique characteristics including global coverage, long-term stability of observations, as well as high accuracy and high vertical resolution of the derived atmospheric profiles. One of the main error sources in GNSS RO observations that significantly affect the accuracy of the derived atmospheric parameters in the stratosphere is the ionospheric error. In order to mitigate the effect of this error, the linear ionospheric correction approach for dual-frequency GNSS RO observations is commonly used. However, the residual ionospheric errors (RIEs) can be still significant, especially when large ionospheric disturbances occur and prevail such as during the periods of active space weather. In this study, the RIEs were investigated under different local time, propagation direction and solar activity conditions and their effects on RO bending angles are characterised using end-to-end simulations. A three-step simulation study was designed to investigate the characteristics of the RIEs through comparing the bending angles with and without the effects of the RIEs. This research forms an important step forward in improving the accuracy of the atmospheric profiles derived from the GNSS RO technique.
Temprana, E; Myslivets, E; Liu, L; Ataie, V; Wiberg, A; Kuo, B P P; Alic, N; Radic, S
2015-08-10
We demonstrate a two-fold reach extension of a 16 GBaud 16-quadrature amplitude modulation (QAM) wavelength division multiplexed (WDM) system over a standard single-mode fiber link amplified only by erbium-doped fiber amplifiers (EDFAs). The result is enabled by transmitter-side digital backpropagation and frequency-referenced carriers drawn from a parametric comb.
Single Antenna Phase Errors for NAVSPASUR Receivers
1988-11-30
with data from the Kickapoo transmitter are larger than the errors from the low-power transmitters (i.e., Gila River and Jordan Lake). Further, the...errors in the phase data associated with the Kickapoo transmitter show significant variability among data taken on different days. We have applied a...a clear systematic bias in the derived chirp for targets illuminated by the Kickapoo transmitter. Near-field effects probably account for the larger
A spectral filter for ESMR's sidelobe errors
NASA Technical Reports Server (NTRS)
Chesters, D.
1979-01-01
Fourier analysis was used to remove periodic errors from a series of NIMBUS-5 electronically scanned microwave radiometer brightness temperatures. The observations were all taken from the midnight orbits over fixed sites in the Australian grasslands. The angular dependence of the data indicates calibration errors consisted of broad sidelobes and some miscalibration as a function of beam position. Even though an angular recalibration curve cannot be derived from the available data, the systematic errors can be removed with a spectral filter. The 7 day cycle in the drift of the orbit of NIMBUS-5, coupled to the look-angle biases, produces an error pattern with peaks in its power spectrum at the weekly harmonics. About plus or minus 4 K of error is removed by simply blocking the variations near two- and three-cycles-per-week.
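The blocking operation described above can be sketched in a few lines (a hedged illustration on synthetic numbers, not the actual ESMR processing chain; the one-neighbour blanking width is an assumption):

```python
import numpy as np

def block_weekly_harmonics(series, samples_per_week, cycles=(2, 3)):
    """Blank the FFT bins at the given cycles-per-week harmonics and their
    immediate neighbours, then invert back to the time domain."""
    n = len(series)
    spec = np.fft.rfft(series)
    for c in cycles:
        k = int(round(c * n / samples_per_week))  # bin index of c cycles/week
        spec[max(k - 1, 0):k + 2] = 0.0           # blank bin and neighbours
    return np.fft.irfft(spec, n)
```

Applied to eight weeks of daily brightness temperatures carrying a periodic look-angle error at two and three cycles per week, the filter leaves the mean signal intact while removing the weekly-harmonic error pattern.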
Frequency domain measurement systems
NASA Technical Reports Server (NTRS)
Eischer, M. C.
1978-01-01
Stable frequency sources and signal processing blocks were characterized by their noise spectra, both discrete and random, in the frequency domain. Conventional measures are outlined, and systems for performing the measurements are described. Broad coverage of system configurations which were found useful is given. Their functioning and areas of application are discussed briefly. Particular attention is given to some of the potential error sources in the measurement procedures, system configurations, double-balanced mixer phase detectors, and application of measuring instruments.
Fisher classifier and its probability of error estimation
NASA Technical Reports Server (NTRS)
Chittineni, C. B.
1979-01-01
Computationally efficient expressions are derived for estimating the probability of error using the leave-one-out method. The optimal threshold for the classification of patterns projected onto Fisher's direction is derived. A simple generalization of the Fisher classifier to multiple classes is presented. Computational expressions are developed for estimating the probability of error of the multiclass Fisher classifier.
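A minimal two-class sketch of the ideas involved (illustrative only: the paper derives computationally efficient leave-one-out expressions and a multiclass generalization, whereas this naive version simply refits per held-out sample, and the midpoint threshold is a simplifying assumption):

```python
import numpy as np

def fisher_direction(X1, X2):
    """Fisher direction w = Sw^{-1} (m1 - m2) from the pooled within-class scatter."""
    m1, m2 = X1.mean(0), X2.mean(0)
    Sw = np.cov(X1.T, bias=True) * len(X1) + np.cov(X2.T, bias=True) * len(X2)
    return np.linalg.solve(Sw, m1 - m2)

def classify(X1, X2, x):
    """Project onto the Fisher direction and threshold at the projected-means midpoint."""
    w = fisher_direction(X1, X2)
    thresh = 0.5 * (w @ X1.mean(0) + w @ X2.mean(0))
    return 1 if w @ x > thresh else 2

def loo_error(X1, X2):
    """Leave-one-out error estimate by naive refitting (not the paper's closed form)."""
    errs = 0
    for i in range(len(X1)):
        errs += classify(np.delete(X1, i, 0), X2, X1[i]) != 1
    for i in range(len(X2)):
        errs += classify(X1, np.delete(X2, i, 0), X2[i]) != 2
    return errs / (len(X1) + len(X2))
```

On well-separated clusters the leave-one-out estimate is zero, matching the intuition that each held-out pattern still projects on the correct side of the threshold.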
Relationships between GPS-signal propagation errors and EISCAT observations
NASA Astrophysics Data System (ADS)
Jakowski, N.; Sardon, E.; Engler, E.; Jungstand, A.; Klähn, D.
1996-12-01
When travelling through the ionosphere the signals of space-based radio navigation systems such as the Global Positioning System (GPS) are subject to modifications in amplitude, phase and polarization. In particular, phase changes due to refraction lead to propagation errors of up to 50 m for single-frequency GPS users. If both the L1 and the L2 frequencies transmitted by the GPS satellites are measured, first-order range error contributions of the ionosphere can be determined and removed by difference methods. The ionospheric contribution is proportional to the total electron content (TEC) along the ray path between satellite and receiver. Using about ten European GPS receiving stations of the International GPS Service for Geodynamics (IGS), the TEC over Europe is estimated within the geographic range −20° ≤
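The first-order dual-frequency correction mentioned above can be sketched as follows (a simplified group-delay model using the standard 40.3 m³/s² ionospheric constant; a sketch, not an IGS processing chain):

```python
# GPS carrier frequencies in Hz and the first-order ionospheric constant.
F1, F2 = 1575.42e6, 1227.60e6
K = 40.3  # m^3 / s^2

def iono_free(p1, p2):
    """First-order ionosphere-free pseudorange combination of L1/L2 ranges."""
    return (F1**2 * p1 - F2**2 * p2) / (F1**2 - F2**2)

def tec_from_ranges(p1, p2):
    """Slant TEC (electrons/m^2) recovered from the dual-frequency range
    difference, since the ionospheric delay scales as K * TEC / f^2."""
    return (p2 - p1) * F1**2 * F2**2 / (K * (F1**2 - F2**2))
```

Because the first-order delay scales exactly as 1/f², the combination cancels it term by term, which is why the difference of the two measured ranges is directly proportional to TEC.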
Approximate Minimum Bit Error Rate Equalization for Fading Channels
NASA Astrophysics Data System (ADS)
Kovacs, Lorant; Levendovszky, Janos; Olah, Andras; Treplan, Gergely
2010-12-01
A novel channel equalizer algorithm is introduced for wireless communication systems to combat channel distortions resulting from multipath propagation. The novel algorithm is based on minimizing the bit error rate (BER) using a fast approximation of its gradient with respect to the equalizer coefficients. This approximation is obtained by estimating the exponential summation in the gradient with only some carefully chosen dominant terms. The paper derives an algorithm to calculate these dominant terms in real-time. Summing only these dominant terms provides a highly accurate approximation of the true gradient. Combined with a fast adaptive channel state estimator, the new equalization algorithm yields better performance than the traditional zero forcing (ZF) or minimum mean square error (MMSE) equalizers. The performance of the new method is tested by simulations performed on standard wireless channels. From the performance analysis one can infer that the new equalizer is capable of efficient channel equalization and maintaining a relatively low bit error probability in the case of channels corrupted by frequency selectivity. Hence, the new algorithm can contribute to ensuring QoS communication over highly distorted channels.
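The dominant-term idea can be pictured with a BPSK sketch (assumptions: a Gaussian Q-function BER model, and a keep-the-k-smallest-margins selection rule standing in for the paper's real-time dominant-term algorithm):

```python
import numpy as np

def approx_ber_gradient(w, X, s, sigma, k):
    """Gradient of the Q-function BER estimate for a BPSK linear equalizer,
    keeping only the k dominant terms of the exponential sum (samples with
    the smallest decision margins dominate the gradient)."""
    margins = s * (X @ w)                  # signed margins s_i * w^T x_i
    idx = np.argsort(np.abs(margins))[:k]  # dominant = closest to boundary
    coeff = np.exp(-margins[idx] ** 2 / (2 * sigma ** 2))
    coeff /= np.sqrt(2 * np.pi) * sigma
    return -(coeff[:, None] * s[idx, None] * X[idx]).sum(axis=0) / len(s)
```

With k equal to the number of samples this reduces to the exact gradient of the BER estimate; smaller k trades a little accuracy for a much cheaper update inside a gradient-descent loop over the equalizer taps.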
NASA Astrophysics Data System (ADS)
Jia, Mei-Hui; Wang, Cheng-Lin; Ren, Bin
2017-07-01
The stress, strain and vibration characteristics of rotor parts change significantly under high acceleration, and manufacturing error is one of the most important reasons. However, this problem has not yet been systematically studied. Taking a rotor with an acceleration of 150,000 g as the object, the effects of manufacturing errors on rotor mechanical properties and dynamic characteristics are examined through selection of the key affecting factors. By establishing the force balance equation of a rotor infinitesimal unit, a theoretical stress-calculation model based on the slice method is proposed, and a formula for the rotor stress at any point is derived. A finite element model (FEM) of a rotor with holes is established including manufacturing errors. The changes of the stresses and strains of the rotor under parallelism and symmetry errors are analyzed, which verifies the validity of the theoretical model. A pre-stressed modal analysis is performed based on the aforementioned static analysis, and the key dynamic characteristics are analyzed. The results demonstrate that, as the parallelism and symmetry errors increase, the equivalent stresses and strains of the rotor increase slowly and linearly, with the highest growth rate not exceeding 4%; the maximum change rate of the natural frequency is 0.1%, and the rotor vibration mode is not significantly affected. The FEM construction method for rotors with manufacturing errors can be used for quantitative research on rotor characteristics, which will assist in the active control of rotor component reliability under high acceleration.
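The slice idea can be illustrated on the simplest possible case, a thin radial rod rotating at angular speed ω and free at the outer radius R (a toy one-dimensional stand-in, not the paper's rotor model): the stress at radius r balances the centrifugal load of all material outside r, and summing the slices reproduces the closed form σ(r) = ρω²(R² − r²)/2.

```python
import numpy as np

def slice_stress(rho, omega, R, r, n_slices=20000):
    """Slice method: sum the centrifugal load rho * omega^2 * x dx of every
    slice outside radius r (thin rotating rod, free outer end)."""
    x = np.linspace(r, R, n_slices)
    load = rho * omega ** 2 * x
    return float(np.sum(0.5 * (load[1:] + load[:-1]) * np.diff(x)))

def analytic_stress(rho, omega, R, r):
    """Closed form obtained by integrating the same force balance equation."""
    return 0.5 * rho * omega ** 2 * (R ** 2 - r ** 2)
```

Since the integrand is linear in x, the slice (trapezoid) sum matches the closed form to floating-point precision, which is the kind of cross-check the paper performs between its slice model and the FEM.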
Twenty Questions about Student Errors.
ERIC Educational Resources Information Center
Fisher, Kathleen M.; Lipson, Joseph Isaac
1986-01-01
Discusses the value of studying errors made by students in the process of learning science. Addresses 20 research questions dealing with student learning errors. Attempts to characterize errors made by students and clarify some terms used in error research. (TW)
Action errors, error management, and learning in organizations.
Frese, Michael; Keith, Nina
2015-01-03
Every organization is confronted with errors. Most errors are corrected easily, but some may lead to negative consequences. Organizations often focus on error prevention as a single strategy for dealing with errors. Our review suggests that error prevention needs to be supplemented by error management--an approach directed at effectively dealing with errors after they have occurred, with the goal of minimizing negative and maximizing positive error consequences (examples of the latter are learning and innovations). After defining errors and related concepts, we review research on error-related processes affected by error management (error detection, damage control). Empirical evidence on positive effects of error management in individuals and organizations is then discussed, along with emotional, motivational, cognitive, and behavioral pathways of these effects. Learning from errors is central, but like other positive consequences, learning occurs under certain circumstances--one being the development of a mind-set of acceptance of human error.
ERIC Educational Resources Information Center
Ware, Elizabeth A.; Uttal, David H.; DeLoache, Judy S.
2010-01-01
Young children occasionally make "scale errors"--they attempt to fit their bodies into extremely small objects or attempt to fit a larger object into another, tiny, object. For example, a child might try to sit in a dollhouse-sized chair or try to stuff a large doll into it. Scale error research was originally motivated by parents' and…
Dandona, R.; Dandona, L.
2001-01-01
Recent data suggest that a large number of people are blind in different parts of the world due to high refractive error because they are not using appropriate refractive correction. Refractive error as a cause of blindness has been recognized only recently with the increasing use of presenting visual acuity for defining blindness. In addition to blindness due to naturally occurring high refractive error, inadequate refractive correction of aphakia after cataract surgery is also a significant cause of blindness in developing countries. Blindness due to refractive error in any population suggests that eye care services in general in that population are inadequate since treatment of refractive error is perhaps the simplest and most effective form of eye care. Strategies such as vision screening programmes need to be implemented on a large scale to detect individuals suffering from refractive error blindness. Sufficient numbers of personnel to perform reasonable quality refraction need to be trained in developing countries. Also adequate infrastructure has to be developed in underserved areas of the world to facilitate the logistics of providing affordable reasonable-quality spectacles to individuals suffering from refractive error blindness. Long-term success in reducing refractive error blindness worldwide will require attention to these issues within the context of comprehensive approaches to reduce all causes of avoidable blindness. PMID:11285669
ERIC Educational Resources Information Center
Richmond, Kent C.
Students of English as a second language (ESL) often come to the classroom with little or no experience in writing in any language and with inaccurate assumptions about writing. Rather than correct these assumptions, teachers often seem to unwittingly reinforce them, actually inducing errors into their students' work. Teacher-induced errors occur…
ERIC Educational Resources Information Center
Metcalfe, Janet
2017-01-01
Although error avoidance during learning appears to be the rule in American classrooms, laboratory studies suggest that it may be a counterproductive strategy, at least for neurologically typical students. Experimental investigations indicate that errorful learning followed by corrective feedback is beneficial to learning. Interestingly, the…
Quantum error correction of observables
Beny, Cedric; Kempf, Achim; Kribs, David W.
2007-10-15
A formalism for quantum error correction based on operator algebras was introduced by us earlier [Phys. Rev. Lett. 98, 10052 (2007)] via consideration of the Heisenberg picture for quantum dynamics. The resulting theory allows for the correction of hybrid quantum-classical information and does not require an encoded state to be entirely in one of the corresponding subspaces or subsystems. Here, we provide detailed proofs for our earlier results, derive more results, and elucidate key points with expanded discussions. We also present several examples and indicate how the theory can be extended to operator spaces and general positive operator-valued measures.
Burnout and medical errors among American surgeons.
Shanafelt, Tait D; Balch, Charles M; Bechamps, Gerald; Russell, Tom; Dyrbye, Lotte; Satele, Daniel; Collicott, Paul; Novotny, Paul J; Sloan, Jeff; Freischlag, Julie
2010-06-01
To evaluate the relationship between burnout and perceived major medical errors among American surgeons. Despite efforts to improve patient safety, medical errors by physicians remain a common cause of morbidity and mortality. Members of the American College of Surgeons were sent an anonymous, cross-sectional survey in June 2008. The survey included self-assessment of major medical errors, a validated depression screening tool, and standardized assessments of burnout and quality of life (QOL). Of 7905 participating surgeons, 700 (8.9%) reported concern they had made a major medical error in the last 3 months. Over 70% of surgeons attributed the error to individual rather than system level factors. Reporting an error during the last 3 months had a large, statistically significant adverse relationship with mental QOL, all 3 domains of burnout (emotional exhaustion, depersonalization, and personal accomplishment) and symptoms of depression. Each one point increase in depersonalization (scale range, 0-33) was associated with an 11% increase in the likelihood of reporting an error while each one point increase in emotional exhaustion (scale range, 0-54) was associated with a 5% increase. Burnout and depression remained independent predictors of reporting a recent major medical error on multivariate analysis that controlled for other personal and professional factors. The frequency of overnight call, practice setting, method of compensation, and number of hours worked were not associated with errors on multivariate analysis. Major medical errors reported by surgeons are strongly related to a surgeon's degree of burnout and their mental QOL. Studies are needed to determine how to reduce surgeon distress and how to support surgeons when medical errors occur.
Chan, Adeline; Yan, Jun; Csurhes, Peter; Greer, Judith; McCombe, Pamela
2015-09-15
The aim of this study was to measure the levels of circulating BDNF and the frequency of BDNF-producing T cells after acute ischaemic stroke. Serum BDNF levels were measured by ELISA. Flow cytometry was used to enumerate peripheral blood leukocytes that were labelled with antibodies against markers of T cells, T regulatory cells (Tregs), and intracellular BDNF. There was a slight increase in serum BDNF levels after stroke. There was no overall difference between stroke patients and controls in the frequency of CD4(+) and CD8(+) BDNF(+) cells, although a subgroup of stroke patients showed high frequencies of these cells. However, there was an increase in the percentage of BDNF(+) Treg cells in the CD4(+) population in stroke patients compared to controls. Patients with high percentages of CD4(+) BDNF(+) Treg cells had a better outcome at 6 months than those with lower levels. These groups did not differ in age, gender or initial stroke severity. Enhancement of BDNF production after stroke could be a useful means of improving neuroprotection and recovery after stroke.
Tsvetkov, D.Y.
1983-01-01
Estimates of the frequency of type I and II supernovae occurring in galaxies of different types are derived from observational material acquired by the supernova patrol of the Shternberg Astronomical Institute.
Hosein, Mervyn; Mohiuddin, Sidra; Fatima, Nazish
2015-01-01
Background: Oral submucous fibrosis (OSMF) is a chronic, premalignant condition of the oral mucosa and one of the commonest potentially malignant disorders amongst the Asian population. The objective of this study was to investigate the association of etiologic factors with: age, frequency, duration of consumption of areca nut and its derivatives, and the severity of clinical manifestations. Methods: A cross-sectional, multicentric study was conducted over 8 years on clinically diagnosed OSMF cases (n = 765) from both public and private tertiary care centers. Sample size was determined by the World Health Organization sample size calculator. Consumption of areca nut in different forms, frequency of daily usage, years of chewing, degree of mouth opening and duration of the condition were recorded. Level of significance was kept at P ≤ 0.05. Results: A total of 765 patients with OSMF were examined, of whom 396 (51.8%) were male and 369 (48.2%) female, with a mean age of 29.17 years. Mild OSMF was seen in 61 cases (8.0%), moderate OSMF in 353 (46.1%) and severe OSMF in 417 (54.5%) subjects. Areca nut and other derivatives were most frequently consumed and showed a significant risk for the severity of OSMF (P ≤ 0.0001). Age of the sample and duration of chewing years were also significant (P = 0.012). Conclusions: The relative risk of OSMF increased with duration and frequency of areca nut consumption, especially from an early age of onset. PMID:26473161
Multipath induced errors in meteorological Doppler/interferometer location systems
NASA Technical Reports Server (NTRS)
Wallace, R. G.
1984-01-01
One application of an RF interferometer aboard a low-orbiting spacecraft to determine the location of ground-based transmitters is in tracking high-altitude balloons for meteorological studies. A source of error in this application is reflection of the signal from the sea surface. Through propagation and signal analysis, the magnitude of the reflection-induced error in both Doppler frequency measurements and interferometer phase measurements was estimated. The theory of diffuse scattering from random surfaces was applied to obtain the power spectral density of the reflected signal. The processing of the combined direct and reflected signals was then analyzed to find the statistics of the measurement error. It was found that the error varies greatly during the satellite overpass and attains its maximum value at closest approach. The maximum values of interferometer phase error and Doppler frequency error found for the system configuration considered were comparable to thermal noise-induced error.
NASA Astrophysics Data System (ADS)
Takemura, S.; Furumura, T.
2010-12-01
In order to understand the distribution of small-scale heterogeneities in the crust and upper-mantle structure, we analyze three-component seismograms recorded by Hi-net in Japan. We examined the relative strength of the P wave in the transverse (T) component and its change as a function of frequency and propagation distance, which is strongly related to the strength of seismic wave scattering in the lithosphere. We analyzed 53,220 Hi-net records from 310 shallow (h<30 km) crustal earthquakes with MJMA=2.0-5.3. The three-component seismograms are first band-pass filtered with passbands of f=1-2, 2-4, 4-8, 8-16 and 16-32 Hz, and the Hilbert transform is then used to synthesize the envelope of each component. The energy partition (EP) of the P wave in the T component relative to the total P-wave energy is evaluated in a 3-s time window around the P wave. The estimated EP value is almost constant at 0.2 in high frequencies (8-16 Hz) at shorter distances, while it is 0.07 in low frequencies (1-2 Hz); the EP value thus shows a clear frequency dependence. At distances over 150 km, however, EP values gradually increase with increasing distance; in high frequencies (8-16, 16-32 Hz) they asymptotically approach 0.33 from 0.2, i.e., equipartitioning of P-wave energy into the three components. This may be because the Pn phase dominates at larger hypocentral distances. In order to examine differences in EP among areas of Japan, which would be related to the strength of crustal heterogeneities in each area, we divided Japan into three regions: the fore-arc side of Tohoku, the back-arc side of Tohoku, and the Chugoku-Shikoku area. The difference in EP among these areas is clearly seen in the high-frequency (4-8 Hz) band, where a larger EP (0.2) was obtained on the back-arc side of Tohoku relative to a smaller EP (0.1) on the fore-arc side of Tohoku and in Chugoku-Shikoku. This is consistent with the results of Carcole and Sato (2009), who estimated the strength of crustal
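The band-pass / Hilbert-envelope / energy-partition chain can be sketched as follows (component naming and window handling are simplified assumptions, not Hi-net processing code; the ideal FFT band-pass stands in for whatever filter the authors used):

```python
import numpy as np

def bandpass(x, fs, f_lo, f_hi):
    """Ideal band-pass: zero every FFT bin outside [f_lo, f_hi] Hz."""
    spec = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), 1.0 / fs)
    spec[(f < f_lo) | (f > f_hi)] = 0.0
    return np.fft.irfft(spec, len(x))

def envelope(x):
    """Envelope as |analytic signal| (FFT-based Hilbert transform)."""
    n = len(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.abs(np.fft.ifft(np.fft.fft(x) * h))

def energy_partition(t_comp, r_comp, z_comp):
    """EP: envelope energy of the transverse component over the total
    three-component energy, inside a P-wave window chosen by the caller."""
    et = np.sum(envelope(t_comp) ** 2)
    er = np.sum(envelope(r_comp) ** 2)
    ez = np.sum(envelope(z_comp) ** 2)
    return et / (et + er + ez)
```

With no scattering the transverse component carries no P energy and EP is 0; full equipartition over the three components gives the 0.33 asymptote the abstract reports.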
Multielevation calibration of frequency-domain electromagnetic data
Minsley, Burke J.; Kass, M. Andy; Hodges, Greg; Smith, Bruce D.
2014-01-01
Systematic calibration errors must be taken into account because they can substantially impact the accuracy of inverted subsurface resistivity models derived from frequency-domain electromagnetic data, resulting in potentially misleading interpretations. We have developed an approach that uses data acquired at multiple elevations over the same location to assess calibration errors. A significant advantage is that this method does not require prior knowledge of subsurface properties from borehole or ground geophysical data (though these can be readily incorporated if available), and is, therefore, well suited to remote areas. The multielevation data were used to solve for calibration parameters and a single subsurface resistivity model that are self consistent over all elevations. The deterministic and Bayesian formulations of the multielevation approach illustrate parameter sensitivity and uncertainty using synthetic- and field-data examples. Multiplicative calibration errors (gain and phase) were found to be better resolved at high frequencies and when data were acquired over a relatively conductive area, whereas additive errors (bias) were reasonably resolved over conductive and resistive areas at all frequencies. The Bayesian approach outperformed the deterministic approach when estimating calibration parameters using multielevation data at a single location; however, joint analysis of multielevation data at multiple locations using the deterministic algorithm yielded the most accurate estimates of calibration parameters. Inversion results using calibration-corrected data revealed marked improvement in misfit, lending added confidence to the interpretation of these models.
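A toy version of the multielevation idea (the exponential "forward model" is a made-up stand-in for a real frequency-domain EM forward computation, and `calibrate` is a sketch, not the paper's deterministic or Bayesian algorithm): because gain, bias and the subsurface model shape the response differently as elevation changes, data at several heights over one location can pin down all three at once.

```python
import numpy as np

def toy_forward(rho, h):
    """Hypothetical stand-in for an EM forward model: the response shape
    versus elevation h depends on the subsurface resistivity rho."""
    return np.exp(-h / rho)

def calibrate(h, y, rho_grid):
    """Fit a gain g, bias b and one resistivity rho so that
    y = g * toy_forward(rho, h) + b is self-consistent at all elevations:
    grid over rho, linear least squares for (g, b) at each candidate."""
    best = None
    for rho in rho_grid:
        d = toy_forward(rho, h)
        A = np.column_stack([d, np.ones_like(d)])
        gb, *_ = np.linalg.lstsq(A, y, rcond=None)
        misfit = float(np.sum((A @ gb - y) ** 2))
        if best is None or misfit < best[0]:
            best = (misfit, gb[0], gb[1], rho)
    return best[1], best[2], best[3]  # gain, bias, resistivity
```

The key design point mirrors the abstract: no borehole or ground truth is needed, only the internal consistency of the multielevation data.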
Reduction of Maintenance Error Through Focused Interventions
NASA Technical Reports Server (NTRS)
Kanki, Barbara G.; Walter, Diane; Rosekind, Mark R. (Technical Monitor)
1997-01-01
It is well known that a significant proportion of aviation accidents and incidents are tied to human error. In flight operations, research of operational errors has shown that so-called "pilot error" often involves a variety of human factors issues and not a simple lack of individual technical skills. In aircraft maintenance operations, there is similar concern that maintenance errors which may lead to incidents and accidents are related to a large variety of human factors issues. Although maintenance error data and research are limited, industry initiatives involving human factors training in maintenance have become increasingly accepted as one type of maintenance error intervention. Conscientious efforts have been made in re-inventing the "team" concept for maintenance operations and in tailoring programs to fit the needs of technical operations. Nevertheless, there remains a dual challenge: to develop human factors interventions which are directly supported by reliable human error data, and to integrate human factors concepts into the procedures and practices of everyday technical tasks. In this paper, we describe several varieties of human factors interventions and focus on two specific alternatives which target problems related to procedures and practices; namely, 1) structured on-the-job training and 2) procedure re-design. We hope to demonstrate that the key to leveraging the impact of these solutions comes from focused interventions; that is, interventions which are derived from a clear understanding of specific maintenance errors, their operational context and human factors components.
Low-dimensional Representation of Error Covariance
NASA Technical Reports Server (NTRS)
Tippett, Michael K.; Cohn, Stephen E.; Todling, Ricardo; Marchesin, Dan
2000-01-01
Ensemble and reduced-rank approaches to prediction and assimilation rely on low-dimensional approximations of the estimation error covariances. Here stability properties of the forecast/analysis cycle for linear, time-independent systems are used to identify factors that cause the steady-state analysis error covariance to admit a low-dimensional representation. A useful measure of forecast/analysis cycle stability is the bound matrix, a function of the dynamics, observation operator and assimilation method. Upper and lower estimates for the steady-state analysis error covariance matrix eigenvalues are derived from the bound matrix. The estimates generalize to time-dependent systems. If much of the steady-state analysis error variance is due to a few dominant modes, the leading eigenvectors of the bound matrix approximate those of the steady-state analysis error covariance matrix. The analytical results are illustrated in two numerical examples where the Kalman filter is carried to steady state. The first example uses the dynamics of a generalized advection equation exhibiting nonmodal transient growth. Failure to observe growing modes leads to increased steady-state analysis error variances. Leading eigenvectors of the steady-state analysis error covariance matrix are well approximated by leading eigenvectors of the bound matrix. The second example uses the dynamics of a damped baroclinic wave model. The leading eigenvectors of a lowest-order approximation of the bound matrix are shown to approximate well the leading eigenvectors of the steady-state analysis error covariance matrix.
Uncorrected refractive errors.
Naidoo, Kovin S; Jaggernath, Jyoti
2012-01-01
Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error; of which 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low-and-middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship. PMID:22944755
System contributions to error.
Adams, J G; Bohan, J S
2000-11-01
An unacceptably high rate of medical error occurs in the emergency department (ED). Professional accountability requires that EDs be managed to systematically eliminate error. This requires advocacy and leadership at every level of the specialty and at each institution in order to be effective and sustainable. At the same time, the significant operational challenges that face the ED, such as excessive patient care requirements, should be recognized if error reduction efforts are to remain credible. Proper staffing levels, for example, are an important prerequisite if medical error is to be minimized. Even at times of low volume, however, medical error is probably common. Engineering human factors and operational procedures, promoting team coordination, and standardizing care processes can decrease error and are strongly promoted. Such efforts should be coupled to systematic analysis of errors that occur. Reliable reporting is likely only if the system is based within the specialty to help ensure proper analysis and decrease threat. Ultimate success will require dedicated effort, continued advocacy, and promotion of research.
Refractive errors and schizophrenia.
Caspi, Asaf; Vishne, Tali; Reichenberg, Abraham; Weiser, Mark; Dishon, Ayelet; Lubin, Gadi; Shmushkevitz, Motti; Mandel, Yossi; Noy, Shlomo; Davidson, Michael
2009-02-01
Refractive errors (myopia, hyperopia and amblyopia), like schizophrenia, have a strong genetic cause, and dopamine has been proposed as a potential mediator in their pathophysiology. The present study explored the association between refractive errors in adolescence and schizophrenia, and the potential familiality of this association. The Israeli Draft Board carries out a mandatory standardized visual acuity assessment. 678,674 males consecutively assessed by the Draft Board and found to be psychiatrically healthy at age 17 were followed for psychiatric hospitalization with schizophrenia using the Israeli National Psychiatric Hospitalization Case Registry. Sib-ships were also identified within the cohort. There was a negative association between refractive errors and later hospitalization for schizophrenia. Future male schizophrenia patients were about half as likely to have refractive errors as never-hospitalized individuals, controlling for intelligence, years of education and socioeconomic status [adjusted Hazard Ratio=.55; 95% confidence interval .35-.85]. The non-schizophrenic male siblings of schizophrenia patients also had a lower prevalence of refractive errors compared to never-hospitalized individuals. Presence of refractive errors in adolescence is related to lower risk for schizophrenia. The familiality of this association suggests that refractive errors may be associated with the genetic liability to schizophrenia.
The relationship between rate of venous sampling and visible frequency of hormone pulses.
De Nicolao, G; Guardabasso, V; Rocchetti, M
1990-11-01
In this paper, a stochastic model of episodic hormone secretion is used to quantify the effect of the sampling rate on the frequency of pulses that can be detected by objective computer methods in time series of plasma hormone concentrations. Occurrence times of secretion pulses are modeled as recurrent events, with interpulse intervals described by Erlang distributions. In this way, a variety of secretion patterns, ranging from Poisson events to periodic pulses, can be studied. The notion of visible and invisible pulses is introduced and the relationship between true pulse frequency and mean visible pulse frequency is analytically derived. It is shown that a given visible pulse frequency can correspond to two distinct true frequencies. In order to compensate for the 'invisibility error', an algorithm based on the analysis of the original series and its undersampled subsets is proposed, and the derived computer program is tested on simulated and clinical data.
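The renewal-process model described above can be sketched in a few lines of Python. This is an illustrative toy, not the authors' program: Erlang interpulse intervals are simulated as sums of exponentials, and a pulse is counted as "visible" only when its sampling bin contains no other pulse, a simplified stand-in for the pulse-merging effect the paper analyzes. All parameter values are invented for illustration.

```python
import random

random.seed(0)

def erlang_interval(k, rate):
    # Erlang(k, rate) is the sum of k independent exponential(rate) variables
    return sum(random.expovariate(rate) for _ in range(k))

def simulate_pulse_times(k, rate, t_end):
    # Pulse occurrence times as a renewal process with Erlang interpulse intervals
    times, t = [], 0.0
    while True:
        t += erlang_interval(k, rate)
        if t > t_end:
            return times
        times.append(t)

def visible_pulse_fraction(pulse_times, sampling_interval, t_end):
    # Simplified visibility rule: pulses falling between the same two
    # consecutive samples merge into a single apparent pulse.
    n_bins = int(t_end / sampling_interval)
    counts = [0] * n_bins
    for t in pulse_times:
        b = int(t / sampling_interval)
        if b < n_bins:
            counts[b] += 1
    apparent = sum(1 for c in counts if c > 0)
    return apparent / max(len(pulse_times), 1)

# Mean interpulse interval k/rate = 60 min, observed over 24 h,
# sampled every 20 min (hypothetical values)
pulses = simulate_pulse_times(k=3, rate=3.0 / 60.0, t_end=24 * 60.0)
frac = visible_pulse_fraction(pulses, sampling_interval=20.0, t_end=24 * 60.0)
print(f"true pulses: {len(pulses)}, visible fraction: {frac:.2f}")
```

Sweeping `sampling_interval` upward makes the visible fraction fall, mimicking the undersampling regime in which one visible frequency maps to two true frequencies.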
NASA Technical Reports Server (NTRS)
1987-01-01
In a complex computer environment there is ample opportunity for error, a mistake by a programmer, or a software-induced undesirable side effect. In insurance, errors can cost a company heavily, so protection against inadvertent change is a must for the efficient firm. The data processing center at Transport Life Insurance Company has taken a step to guard against accidental changes by adopting a software package called EQNINT (Equations Interpreter Program). EQNINT cross checks the basic formulas in a program against the formulas that make up the major production system. EQNINT assures that formulas are coded correctly and helps catch errors before they affect the customer service or its profitability.
Mazhar, Faizan; Akram, Shahzad; Al-Osaimi, Yousif A; Haider, Nafis
2017-01-01
Medication reconciliation is a major component of safe patient care. One of the main problems in the implementation of a medication reconciliation process is the lack of human resources. With limited resources, it is better to target medication reconciliation resources to patients who will derive the most benefit from it. The primary objective of this study was to determine the frequency and types of medication reconciliation errors identified by pharmacists performing medication reconciliation at admission. Each medication error was rated for its potential to cause patient harm during hospitalization. A secondary objective was to determine risk factors associated with medication reconciliation errors. This was a prospective, single-center pilot study conducted in the internal medicine and surgical wards of a tertiary care teaching hospital in the Eastern province of Saudi Arabia. A clinical pharmacist took the best possible medication history of patients admitted to medical and surgical services and compared it with the medication orders at hospital admission; any identified discrepancies were noted and analyzed for reconciliation errors. Multivariate logistic regression was performed to determine the risk factors related to reconciliation errors. A total of 328 patients (138 in surgical and 198 in medical) were included in the study. For the 1419 medications recorded, 1091 discrepancies were discovered, of which 491 (41.6%) were reconciliation errors. The errors affected 177 patients (54%). The incidence of reconciliation errors was 25.1% in the medical patient group and 32.0% in the surgical group (p<0.001). In both groups, the most frequent reconciliation error was omission (43.5% and 51.2%, respectively). Lipid-lowering (12.4%) and antihypertensive agents were most commonly involved. If undetected, 43.6% of order errors were rated as potentially requiring increased monitoring or intervention to preclude harm; 17.7% were rated as potentially harmful. A multivariate
NASA Astrophysics Data System (ADS)
Xiao, Zhili; Tan, Chao; Dong, Feng
2017-08-01
Magnetic induction tomography (MIT) is a promising technique for continuous monitoring of intracranial hemorrhage due to its contactless nature, low cost and capacity to penetrate the high-resistivity skull. The inter-tissue inductive coupling increases with frequency, which may lead to errors in multi-frequency imaging at high frequency. The effect of inter-tissue inductive coupling was investigated to improve the multi-frequency imaging of hemorrhage. An analytical model of inter-tissue inductive coupling based on the equivalent circuit was established. A set of new multi-frequency decomposition equations separating the phase shift of hemorrhage from other brain tissues was derived by employing the coupling information to improve the multi-frequency imaging of intracranial hemorrhage. The decomposition error and imaging error are both decreased after considering the inter-tissue inductive coupling information. The study reveals that the introduction of inter-tissue inductive coupling can reduce the errors of multi-frequency imaging, promoting the development of intracranial hemorrhage monitoring by multi-frequency MIT.
Error Detection Processes during Observational Learning
ERIC Educational Resources Information Center
Badets, Arnaud; Blandin, Yannick; Wright, David L.; Shea, Charles H.
2006-01-01
The purpose of this experiment was to determine whether a faded knowledge of results (KR) frequency during observation of a model's performance enhanced error detection capabilities. During the observation phase, participants observed a model performing a timing task and received KR about the model's performance on each trial or on one of two…
Verb-Form Errors in EAP Writing
ERIC Educational Resources Information Center
Wee, Roselind; Sim, Jacqueline; Jusoff, Kamaruzaman
2010-01-01
This study was conducted to identify and describe the written verb-form errors found in the EAP writing of 39 second year learners pursuing a three-year Diploma Programme from a public university in Malaysia. Data for this study, which were collected from a written 350-word discursive essay, were analyzed to determine the types and frequency of…
A synchronization technique for generalized frequency division multiplexing
NASA Astrophysics Data System (ADS)
Gaspar, Ivan S.; Mendes, Luciano L.; Michailow, Nicola; Fettweis, Gerhard
2014-12-01
Generalized frequency division multiplexing (GFDM) is a block filtered multicarrier modulation scheme recently proposed for future wireless communication systems. It generalizes the concept of orthogonal frequency division multiplexing (OFDM), featuring multiple circularly pulse-shaped subsymbols per subcarrier. This paper presents an algorithm for GFDM synchronization and investigates the use of a preamble that consists of two identical parts combined with a windowing process in order to satisfy low out-of-band radiation requirements. The performance of time and frequency estimation, with and without windowing, is evaluated in terms of the statistical properties of residual offsets and the impact on symbol error rate over frequency-selective channels. A flexible metric that quantifies the penalty of misalignments is derived. The results show that this approach performs practically as well as state-of-the-art OFDM schemes known in the literature, while it can additionally reduce the sidelobes of the spectrum emission.
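A preamble built from two identical halves admits a Schmidl & Cox-style timing metric: correlate each window's first half against its second half and normalize by the second half's energy. The baseband sketch below is a toy, not the paper's GFDM algorithm; the QPSK-like preamble, noise level, and offsets are invented for illustration, and without a cyclic prefix the metric forms a broad plateau around the true start, so the argmax only localizes it coarsely.

```python
import cmath
import math
import random

random.seed(3)

def timing_metric(rx, half):
    # Schmidl & Cox-style metric for a preamble of two identical halves:
    # M(d) = |P(d)|^2 / R(d)^2, where P correlates the two half-windows
    # and R is the second half-window's energy.
    metrics = []
    for d in range(len(rx) - 2 * half):
        p = sum(rx[d + m].conjugate() * rx[d + m + half] for m in range(half))
        r = sum(abs(rx[d + m + half]) ** 2 for m in range(half))
        metrics.append(abs(p) ** 2 / (r * r) if r > 0 else 0.0)
    return metrics

half = 32
# Preamble: a random QPSK-like half, transmitted twice (two identical parts)
part = [cmath.exp(1j * random.choice([0.25, 0.75, 1.25, 1.75]) * math.pi)
        for _ in range(half)]
preamble = part + part

# Received stream: noise, then the preamble at a known offset, then noise
offset = 100
def noise():
    return complex(random.gauss(0, 0.05), random.gauss(0, 0.05))

rx = ([noise() for _ in range(offset)]
      + [s + noise() for s in preamble]
      + [noise() for _ in range(60)])

m = timing_metric(rx, half)
d_hat = max(range(len(m)), key=lambda d: m[d])
print(f"metric at true offset: {m[offset]:.2f}, argmax: {d_hat}")
```

In noise-only regions the metric stays near zero, while across the preamble it approaches one; practical schemes sharpen the plateau (e.g. with windowing, as the paper investigates) before picking the timing instant.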
Simulation of probability distributions commonly used in hydrological frequency analysis
NASA Astrophysics Data System (ADS)
Cheng, Ke-Sheng; Chiang, Jie-Lun; Hsu, Chieh-Wei
2007-01-01
Random variable simulation has been applied to many applications in hydrological modelling, flood risk analysis, environmental impact assessment, etc. However, computer codes for simulation of distributions commonly used in hydrological frequency analysis are not available in most software libraries. This paper presents a frequency-factor-based method for random number generation of five distributions (normal, log-normal, extreme-value type I, Pearson type III and log-Pearson type III) commonly used in hydrological frequency analysis. The proposed method is shown to produce random numbers of desired distributions through three means of validation: (1) graphical comparison of cumulative distribution functions (CDFs) and empirical CDFs derived from generated data; (2) properties of estimated parameters; (3) type I error of goodness-of-fit test. An advantage of the method is that it does not require CDF inversion, and the frequency factors of the five commonly used distributions involve only the standard normal deviate.
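The frequency-factor idea — draw a standard normal deviate z, map it to a frequency factor K, and set x = μ + Kσ, with no CDF inversion of the target distribution — can be illustrated for the Pearson type III case. The Wilson-Hilferty approximation used below is a common textbook mapping assumed here for illustration; the paper's exact frequency factors may differ.

```python
import random

random.seed(42)

def pearson3_frequency_factor(z, skew):
    # Wilson-Hilferty approximation mapping a standard normal deviate z to
    # the Pearson type III frequency factor K for skew coefficient Cs
    # (assumption: adequate for moderate |Cs|, as in hydrology texts):
    # K = (2/Cs) * [(1 + Cs*z/6 - Cs^2/36)^3 - 1]
    if abs(skew) < 1e-9:
        return z  # zero skew reduces to the normal case
    c = skew / 6.0
    return (2.0 / skew) * ((1.0 + c * z - c * c) ** 3 - 1.0)

def sample_pearson3(mean, std, skew, n):
    # Frequency-factor sampling: x = mean + K(z) * std, z ~ N(0, 1)
    return [mean + pearson3_frequency_factor(random.gauss(0.0, 1.0), skew) * std
            for _ in range(n)]

# Hypothetical flood-like population: mean 100, std 30, skew 1.0
xs = sample_pearson3(mean=100.0, std=30.0, skew=1.0, n=20000)
m = sum(xs) / len(xs)
s = (sum((x - m) ** 2 for x in xs) / (len(xs) - 1)) ** 0.5
print(f"sample mean ~ {m:.1f}, sample std ~ {s:.1f}")
```

The same pattern covers the other four distributions by swapping in their K(z) mappings (for the log-Pearson III, apply the recipe to the logarithms of the data), which is why the method needs only a normal generator.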
Ota, Satoshi; Kitaguchi, Ryoichi; Takeda, Ryoji; Yamada, Tsutomu; Takemura, Yasushi
2016-09-10
The dependence of magnetic relaxation on particle parameters, such as the size and anisotropy, has been conventionally discussed. In addition, the influences of external conditions, such as the intensity and frequency of the applied field, the surrounding viscosity, and the temperature on the magnetic relaxation have been researched. According to one of the basic theories regarding magnetic relaxation, the faster type of relaxation dominates the process. However, in this study, we reveal that Brownian and Néel relaxations coexist and that Brownian relaxation can occur after Néel relaxation despite having a longer relaxation time. To understand the mechanisms of Brownian rotation, alternating current (AC) hysteresis loops were measured in magnetic fluids of different viscosities. These loops conveyed the amplitude and phase delay of the magnetization. In addition, the intrinsic loss power (ILP) was calculated using the area of the AC hysteresis loops. The ILP also showed the magnetization response regarding the magnetic relaxation over a wide frequency range. To develop biomedical applications of magnetic nanoparticles, such as hyperthermia and magnetic particle imaging, it is necessary to understand the mechanisms of magnetic relaxation.
Fallin, Daniele; Schork, Nicholas J.
2000-01-01
Haplotype analyses have become increasingly common in genetic studies of human disease because of their ability to identify unique chromosomal segments likely to harbor disease-predisposing genes. The study of haplotypes is also used to investigate many population processes, such as migration and immigration rates, linkage-disequilibrium strength, and the relatedness of populations. Unfortunately, many haplotype-analysis methods require phase information that can be difficult to obtain from samples of nonhaploid species. There are, however, strategies for estimating haplotype frequencies from unphased diploid genotype data collected on a sample of individuals that make use of the expectation-maximization (EM) algorithm to overcome the missing phase information. The accuracy of such strategies, compared with other phase-determination methods, must be assessed before their use can be advocated. In this study, we consider and explore sources of error between EM-derived haplotype frequency estimates and their population parameters, noting that much of this error is due to sampling error, which is inherent in all studies, even when phase can be determined. In light of this, we focus on the additional error between haplotype frequencies within a sample data set and EM-derived haplotype frequency estimates incurred by the estimation procedure. We assess the accuracy of haplotype frequency estimation as a function of a number of factors, including sample size, number of loci studied, allele frequencies, and locus-specific allelic departures from Hardy-Weinberg and linkage equilibrium. We point out the relative impacts of sampling error and estimation error, calling attention to the pronounced accuracy of EM estimates once sampling error has been accounted for. We also suggest that many factors that may influence accuracy can be assessed empirically within a data set—a fact that can be used to create “diagnostics” that a user can turn to for assessing potential
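A minimal two-locus version of the EM strategy makes the estimation step concrete. This is a didactic sketch, not the authors' procedure: with two biallelic loci, only the double heterozygote has ambiguous phase, so it alone is fractionally assigned in the E-step. The genotype encoding and the coupling-heavy sample counts below are hypothetical.

```python
def em_haplotypes(genotype_counts, n_iter=200):
    # EM for two biallelic loci (alleles A/a and B/b): estimate the four
    # haplotype frequencies (AB, Ab, aB, ab) from unphased genotype counts.
    # genotype_counts maps (g1, g2), with g = copies of A at locus 1 /
    # B at locus 2 in {0, 1, 2}, to observed counts.
    p = {"AB": 0.25, "Ab": 0.25, "aB": 0.25, "ab": 0.25}  # uniform start
    for _ in range(n_iter):
        c = {h: 0.0 for h in p}
        for (g1, g2), n in genotype_counts.items():
            if (g1, g2) == (1, 1):
                # E-step: split the double heterozygote AaBb between its two
                # phase resolutions (AB/ab vs Ab/aB) by current frequencies
                w_cis = p["AB"] * p["ab"]
                w_trans = p["Ab"] * p["aB"]
                tot = w_cis + w_trans
                c["AB"] += n * w_cis / tot
                c["ab"] += n * w_cis / tot
                c["Ab"] += n * w_trans / tot
                c["aB"] += n * w_trans / tot
            else:
                # All other genotypes contribute fixed haplotype counts
                h1 = ("A" if g1 >= 1 else "a") + ("B" if g2 >= 1 else "b")
                h2 = ("A" if g1 == 2 else "a") + ("B" if g2 == 2 else "b")
                c[h1] += n
                c[h2] += n
        total = sum(c.values())
        p = {h: c[h] / total for h in c}  # M-step: renormalize
    return p

# Hypothetical sample of 100 individuals with strong A-B coupling
counts = {(2, 2): 30, (1, 1): 40, (0, 0): 20, (2, 1): 5, (1, 2): 5}
est = em_haplotypes(counts)
print({h: round(f, 3) for h, f in est.items()})
```

With many loci the number of phase resolutions per multi-heterozygote grows exponentially, which is where the accuracy factors the paper studies (sample size, allele frequencies, departures from equilibrium) begin to matter.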
Estimating Bias Error Distributions
NASA Technical Reports Server (NTRS)
Liu, Tian-Shu; Finley, Tom D.
2001-01-01
This paper formulates the general methodology for estimating the bias error distribution of a device in a measuring domain from less accurate measurements when a minimal number of standard values (typically two values) are available. A new perspective is that the bias error distribution can be found as a solution of an intrinsic functional equation in a domain. Based on this theory, the scaling- and translation-based methods for determining the bias error distribution are developed. These methods are applicable to virtually any device as long as the bias error distribution of the device can be sufficiently described by a power series (a polynomial) or a Fourier series in a domain. These methods have been validated through computational simulations and laboratory calibration experiments for a number of different devices.
NASA Astrophysics Data System (ADS)
Rees, P. C. T.; Chipperfield, A. J.; Draper, P. W.
This document describes the Error Message Service, EMS, and its use in system software. The purpose of EMS is to provide facilities for constructing and storing error messages for future delivery to the user -- usually via the Starlink Error Reporting System, ERR (see SUN/104). EMS can be regarded as a simplified version of ERR without the binding to any software environment (e.g., for message output or access to the parameter and data systems). The routines in this library conform to the error reporting conventions described in SUN/104. A knowledge of these conventions, and of the ADAM system (see SG/4), is assumed in what follows. This document is intended for Starlink systems programmers and can safely be ignored by applications programmers and users.
NASA Astrophysics Data System (ADS)
Lidar, Daniel A.; Brun, Todd A.
2013-09-01
Prologue; Preface; Part I. Background: 1. Introduction to decoherence and noise in open quantum systems Daniel Lidar and Todd Brun; 2. Introduction to quantum error correction Dave Bacon; 3. Introduction to decoherence-free subspaces and noiseless subsystems Daniel Lidar; 4. Introduction to quantum dynamical decoupling Lorenza Viola; 5. Introduction to quantum fault tolerance Panos Aliferis; Part II. Generalized Approaches to Quantum Error Correction: 6. Operator quantum error correction David Kribs and David Poulin; 7. Entanglement-assisted quantum error-correcting codes Todd Brun and Min-Hsiu Hsieh; 8. Continuous-time quantum error correction Ognyan Oreshkov; Part III. Advanced Quantum Codes: 9. Quantum convolutional codes Mark Wilde; 10. Non-additive quantum codes Markus Grassl and Martin Rötteler; 11. Iterative quantum coding systems David Poulin; 12. Algebraic quantum coding theory Andreas Klappenecker; 13. Optimization-based quantum error correction Andrew Fletcher; Part IV. Advanced Dynamical Decoupling: 14. High order dynamical decoupling Zhen-Yu Wang and Ren-Bao Liu; 15. Combinatorial approaches to dynamical decoupling Martin Rötteler and Pawel Wocjan; Part V. Alternative Quantum Computation Approaches: 16. Holonomic quantum computation Paolo Zanardi; 17. Fault tolerance for holonomic quantum computation Ognyan Oreshkov, Todd Brun and Daniel Lidar; 18. Fault tolerant measurement-based quantum computing Debbie Leung; Part VI. Topological Methods: 19. Topological codes Héctor Bombín; 20. Fault tolerant topological cluster state quantum computing Austin Fowler and Kovid Goyal; Part VII. Applications and Implementations: 21. Experimental quantum error correction Dave Bacon; 22. Experimental dynamical decoupling Lorenza Viola; 23. Architectures Jacob Taylor; 24. Error correction in quantum communication Mark Wilde; Part VIII. Critical Evaluation of Fault Tolerance: 25. Hamiltonian methods in QEC and fault tolerance Eduardo Novais, Eduardo Mucciolo and
More systematic errors in the measurement of power spectral density
NASA Astrophysics Data System (ADS)
Mack, Chris A.
2015-07-01
Power spectral density (PSD) analysis is an important part of understanding line-edge and linewidth roughness in lithography. But uncertainty in the measured PSD, both random and systematic, complicates interpretation. It is essential to understand and quantify the sources of the measured PSD's uncertainty and to develop mitigation strategies. Both analytical derivations and simulations of rough features are used to evaluate data window functions for reducing spectral leakage and to understand the impact of data detrending on biases in PSD, autocovariance function (ACF), and height-to-height covariance function measurement. A generalized Welch window was found to be best among the windows tested. Linear detrending for line-edge roughness measurement results in underestimation of the low-frequency PSD and errors in the ACF and height-to-height covariance function. Measuring multiple edges per scanning electron microscope image reduces this detrending bias.
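A plain-Python sketch of the measurement chain — linear detrending followed by a windowed periodogram — illustrates the two mitigations discussed. The AR(1) roughness model, the trend slope, and the use of the classic (rather than generalized) Welch window are illustrative assumptions, not the paper's exact setup.

```python
import math
import random

random.seed(7)

def welch_window(n):
    # Classic Welch (parabolic) window; the paper evaluates a generalized
    # variant, but the standard form illustrates leakage reduction.
    h = (n - 1) / 2.0
    return [1.0 - ((i - h) / h) ** 2 for i in range(n)]

def psd(samples, window=None, dx=1.0):
    # One-sided periodogram estimate of the power spectral density
    # (direct DFT; fine for small n)
    n = len(samples)
    w = window if window else [1.0] * n
    wsum2 = sum(v * v for v in w)
    xs = [s * v for s, v in zip(samples, w)]
    out = []
    for k in range(n // 2 + 1):
        re = sum(x * math.cos(-2 * math.pi * k * i / n) for i, x in enumerate(xs))
        im = sum(x * math.sin(-2 * math.pi * k * i / n) for i, x in enumerate(xs))
        out.append(dx * (re * re + im * im) / wsum2)
    return out

# Synthetic rough edge: AR(1) correlated noise plus a linear tilt
# (standing in for, e.g., image drift)
n = 256
edge, v = [], 0.0
for i in range(n):
    v = 0.9 * v + random.gauss(0.0, 1.0)
    edge.append(v + 0.05 * i)

# Least-squares linear detrend before the PSD suppresses the spurious
# low-frequency power the tilt would otherwise inject (while, per the
# paper, also biasing the true low-frequency content downward)
mean = sum(edge) / n
ctr = (n - 1) / 2.0
slope = (sum((i - ctr) * (e - mean) for i, e in enumerate(edge))
         / sum((i - ctr) ** 2 for i in range(n)))
detrended = [e - mean - slope * (i - ctr) for i, e in enumerate(edge)]

p_rect = psd(detrended)
p_welch = psd(detrended, welch_window(n))
print(f"low-freq PSD: rect={p_rect[1]:.2f}, welch={p_welch[1]:.2f}")
```

Comparing `p_rect` and `p_welch` at high frequencies shows the window suppressing leakage from the strong low-frequency roughness; averaging over multiple simulated edges would mimic the multi-edge-per-image averaging the paper recommends.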
NASA Astrophysics Data System (ADS)
von Clarmann, T.