Science.gov

Sample records for frequency derived error

  1. Impacts of frequency increment errors on frequency diverse array beampattern

    NASA Astrophysics Data System (ADS)

    Gao, Kuandong; Chen, Hui; Shao, Huaizong; Cai, Jingye; Wang, Wen-Qin

    2015-12-01

    Different from a conventional phased array, which provides only an angle-dependent beampattern, a frequency diverse array (FDA) employs a small frequency increment across the antenna elements and thus yields a range-angle-dependent beampattern. However, due to imperfect electronic devices, it is difficult to ensure accurate frequency increments, and consequently the array performance will be degraded by unavoidable frequency increment errors. In this paper, we investigate the impacts of frequency increment errors on the FDA beampattern. We derive the beampattern errors caused by deterministic frequency increment errors. For stochastic frequency increment errors, the corresponding upper and lower bounds of the FDA beampattern error are derived and verified by numerical results. Furthermore, the statistical characteristics of the FDA beampattern with random frequency increment errors obeying Gaussian and uniform distributions are also investigated.
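
    The range-angle coupling described above, and the way per-element frequency errors perturb it, can be sketched numerically. This is a minimal illustration, not the paper's model: the carrier, increment, error level, and the simple one-way phase model below are all assumptions.

```python
import numpy as np

c = 3.0e8        # speed of light (m/s)
N = 16           # number of array elements (assumed)
f0 = 10e9        # carrier frequency (assumed)
df = 3e3         # frequency increment across elements (assumed)
d = c / f0 / 2   # half-wavelength element spacing

def beampattern(theta, R, ferr):
    """Normalized FDA array factor at angle theta (rad) and range R (m),
    with per-element frequency increment errors ferr (Hz)."""
    n = np.arange(N)
    fn = f0 + n * df + ferr
    # range-dependent phase from the element frequencies plus the usual
    # angle-dependent phase from the element spacing
    phase = -2 * np.pi * fn * R / c + 2 * np.pi * f0 * n * d * np.sin(theta) / c
    return np.abs(np.exp(1j * phase).sum()) / N

rng = np.random.default_rng(0)
err = rng.normal(0.0, 0.05 * df, N)          # Gaussian increment errors
ideal = beampattern(0.0, 20e3, np.zeros(N))
perturbed = beampattern(0.0, 20e3, err)      # degraded by the random errors
```

    Comparing `ideal` and `perturbed` over a grid of (theta, R) reproduces the qualitative effect the abstract analyzes: the mainlobe location and sidelobe structure shift with the increment errors.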

  2. Frequency of pediatric medication administration errors and contributing factors.

    PubMed

    Ozkan, Suzan; Kocaman, Gulseren; Ozturk, Candan; Seren, Seyda

    2011-01-01

    This study examined the frequency of pediatric medication administration errors and contributing factors. This research used the undisguised observation method and Critical Incident Technique. Errors and contributing factors were classified through the Organizational Accident Model. Errors were made in 36.5% of the 2344 doses that were observed. The most frequent errors were those associated with administration at the wrong time. According to the results of this study, errors arise from problems within the system.

  3. The Relative Frequency of Spanish Pronunciation Errors.

    ERIC Educational Resources Information Center

    Hammerly, Hector

    Types of hierarchies of pronunciation difficulty are discussed, and a hierarchy based on contrastive analysis plus informal observation is proposed. This hierarchy is less one of initial difficulty than of error persistence. One feature of this hierarchy is that, because of lesser learner awareness and very limited functional load, errors…

  4. Assessment of Errors in AMSR-E Derived Soil Moisture

    NASA Astrophysics Data System (ADS)

    Champagne, C.; McNairn, H.; Berg, A.; de Jeu, R. A.

    2009-05-01

    Soil moisture derived from passive microwave satellites provides information at a coarse spatial scale, but with temporally frequent, global coverage that can be used for monitoring applications over agricultural regions. Passive microwave satellites measure surface brightness temperature, which at low frequencies is largely a function of vegetation water content (directly related to the vegetation optical depth), surface temperature and surface soil moisture. Retrieval algorithms for global soil moisture data sets by necessity require limited site-specific information to derive these parameters, and as such may show variations in local accuracy. The objective of this study is to examine the errors in passive microwave soil moisture data over agricultural sites in Canada to provide guidelines on data quality assessment for using these data sets in monitoring applications. Global gridded soil moisture was acquired from the AMSR-E satellite using the Land Parameter Retrieval Model, LPRM (Owe et al., 2008). The LPRM model derives surface soil moisture through an iterative optimization procedure, using a polarization difference index to estimate vegetation optical depth and surface dielectric constant at frequencies of 6.9 and 10.7 GHz. The LPRM model requires no a priori information on surface conditions, but retrieval errors are expected to increase as the amount of open water and dense vegetation within each pixel increases (Owe et al., 2008). Satellite-derived LPRM soil moisture values were used to assess changes in soil moisture retrieval accuracy over the 2007 growing season for a largely agricultural site near Guelph (Ontario), Canada. Accuracy was determined by validating LPRM soil moisture against a network of 16 in-situ monitoring sites distributed at the pixel scale for AMSR-E. Changes in squared error and pairwise correlation coefficient between satellite and in-situ surface soil moisture were assessed against changes in satellite orbit and
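
    The validation described above reduces, per site, to error and correlation statistics between the retrieval and the in-situ network. A minimal sketch with synthetic numbers (the noise level and sample count are assumptions, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(1)
insitu = rng.uniform(0.1, 0.4, 60)                # network soil moisture (m^3/m^3)
satellite = insitu + rng.normal(0.0, 0.04, 60)    # hypothetical LPRM-like retrieval

# the two standard accuracy metrics used in such assessments
rmse = np.sqrt(np.mean((satellite - insitu) ** 2))
r = np.corrcoef(satellite, insitu)[0, 1]
```

    Computing these statistics per month or per orbit (ascending vs descending) is how seasonal and orbit-related changes in retrieval accuracy are exposed.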

  5. Antenna pointing systematic error model derivations

    NASA Technical Reports Server (NTRS)

    Guiar, C. N.; Lansing, F. L.; Riggs, R.

    1987-01-01

    The pointing model used to represent and correct systematic errors for the Deep Space Network (DSN) antennas is presented. Analytical expressions are given in both azimuth-elevation (az-el) and hour angle-declination (ha-dec) mounts for RF axis collimation error, encoder offset, nonorthogonality of axes, axis plane tilt, and structural flexure due to gravity loading. While the residual pointing errors (rms) after correction appear to be within the ten percent of the half-power beamwidth criterion commonly set for good pointing accuracy, the DSN has embarked on an extensive pointing improvement and modeling program aiming toward an order of magnitude higher pointing precision.
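
    A systematic pointing model of the kind described is a small set of analytic terms fit to observed pointing residuals by least squares. The sketch below uses simplified illustrative terms (a constant encoder offset, a cos-elevation gravity flexure term, and a sin-elevation term) with made-up coefficients; it is not the DSN's actual model:

```python
import numpy as np

rng = np.random.default_rng(2)
el = np.deg2rad(np.linspace(10.0, 80.0, 40))    # elevation samples (rad)

# illustrative "true" coefficients: encoder offset, gravity flexure, axis term
true = np.array([0.002, -0.010, 0.005])

# design matrix of the systematic error terms evaluated at each elevation
A = np.column_stack([np.ones_like(el), np.cos(el), np.sin(el)])
d_el = A @ true + rng.normal(0.0, 1e-5, el.size)   # observed elevation errors

# least-squares fit recovers the model coefficients from the residuals
coef, *_ = np.linalg.lstsq(A, d_el, rcond=None)
```

    Once fitted, the model is evaluated at each commanded position and subtracted, which is what leaves the small rms residuals quoted in the abstract.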

  6. Digital frequency error detectors for OQPSK satellite modems

    NASA Astrophysics Data System (ADS)

    Ahmad, J.; Jeans, T. G.; Evans, B. G.

    1991-09-01

    Two algorithms for frequency error detection in OQPSK satellite modems are presented. The results of computer simulations in respect of acquisition and noise performance are given. These algorithms are suitable for DSP implementation and are applicable to mobile satellite systems in which significant Doppler shift is experienced.
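
    The abstract does not spell out the two algorithms, but a classic DSP-friendly frequency error detector of this family is the cross-product discriminator, whose output for a pure carrier is proportional to the sine of the per-sample phase increment. A sketch under assumed sample rate and offset:

```python
import numpy as np

fs = 100e3     # baseband sample rate (Hz), assumed
df = 500.0     # residual carrier frequency offset to detect (Hz)

n = np.arange(2000)
z = np.exp(2j * np.pi * df / fs * n)   # noiseless baseband carrier

# cross-product discriminator: e[n] = I[n-1]*Q[n] - I[n]*Q[n-1]
#                                   = sin(2*pi*df/fs) for a pure carrier
e = z.real[:-1] * z.imag[1:] - z.real[1:] * z.imag[:-1]
df_hat = fs * np.arcsin(e.mean()) / (2 * np.pi)
```

    In a modem loop, `e.mean()` would drive an NCO rather than be inverted directly; the arcsin step here just shows that the detector output maps back to the frequency offset, Doppler included.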

  7. Estimate error of frequency-dependent Q introduced by linear regression and its nonlinear implementation

    NASA Astrophysics Data System (ADS)

    Li, Guofa; Huang, Wei; Zheng, Hao; Zhang, Baoqing

    2016-02-01

    The spectral ratio method (SRM) is widely used to estimate quality factor Q via the linear regression of seismic attenuation under the assumption of a constant Q. However, the estimate error will be introduced when this assumption is violated. For the frequency-dependent Q described by a power-law function, we derived the analytical expression of estimate error as a function of the power-law exponent γ and the ratio of the bandwidth to the central frequency σ. Based on the theoretical analysis, we found that the estimate errors are mainly dominated by the exponent γ, and less affected by the ratio σ. This phenomenon implies that the accuracy of the Q estimate can hardly be improved by adjusting the width and range of the frequency band. Hence, we proposed a two-parameter regression method to estimate the frequency-dependent Q from the nonlinear seismic attenuation. The proposed method was tested using the direct waves acquired by a near-surface cross-hole survey, and its reliability was evaluated in comparison with the result of SRM.
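
    The SRM regression step itself is simple: for constant Q, the log spectral ratio of an attenuated spectrum to a reference spectrum is linear in frequency with slope -πt/Q, so a straight-line fit returns Q. A sketch with assumed band, travel time, and Q:

```python
import numpy as np

f = np.linspace(10.0, 60.0, 51)   # frequency band (Hz), assumed
Q_true = 80.0                     # constant quality factor, assumed
t = 0.5                           # travel time (s), assumed

# spectral ratio method: ln(A2(f)/A1(f)) = -(pi * t / Q) * f + const,
# so the slope of a linear fit against f yields Q
log_ratio = -(np.pi * t / Q_true) * f + 0.3

slope, intercept = np.polyfit(f, log_ratio, 1)
Q_est = -np.pi * t / slope
```

    When Q is frequency dependent (the power-law case the paper analyzes), `log_ratio` is no longer linear in f and this single-slope fit is biased, which is exactly the estimate error the authors derive.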

  8. Effect of photogrammetric reading error on slope-frequency distributions. [obtained from Apollo 17 mission

    NASA Technical Reports Server (NTRS)

    Moore, H. J.; Wu, S. C.

    1973-01-01

    The effect of reading error on two hypothetical slope frequency distributions and two slope frequency distributions from actual lunar data was examined in order to ensure that these errors do not cause excessive overestimates of algebraic standard deviations for the slope frequency distributions. The errors introduced are insignificant when the reading error is small and the slope length is large. A method for correcting the errors in slope frequency distributions is presented and applied to 11 distributions obtained from Apollo 15, 16, and 17 panoramic camera photographs and Apollo 16 metric camera photographs.
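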

  9. The second-order Rytov approximation and residual error in dual-frequency satellite navigation systems

    NASA Astrophysics Data System (ADS)

    Kim, B. C.; Tinin, M. V.

    The second-order Rytov approximation has been used to determine ionospheric corrections for the phase path up to third order. We show the transition of the derived expressions to previous results obtained within the ray approximation using the second-order approximation of perturbation theory by solving the eikonal equation. The resulting equation for the phase path is used to determine the residual ionospheric first-, second- and third-order errors of a dual-frequency navigation system, with diffraction effects taken into account. Formulas are derived for the biases and variances of these errors, and these formulas are analyzed and modeled for a turbulent ionosphere. The modeling results show that the third-order error that is determined by random irregularities can be dominant in the residual errors. In particular, the role of random irregularities is enhanced for small elevation angles. Furthermore, in the case of small angles the role of diffraction effects increases. It is pointed out that a need to pass on to diffraction formulas arises when the Fresnel radius exceeds the inner scale of turbulence.
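
    The baseline that the residual first-, second- and third-order errors are measured against is the standard dual-frequency correction: because the first-order ionospheric delay scales as 1/f², a linear combination of the two frequencies cancels it exactly. A sketch with GPS L1/L2 values (the range and TEC-like term are assumed):

```python
# GPS L1/L2 carrier frequencies (Hz)
f1, f2 = 1575.42e6, 1227.60e6

rho = 20200e3            # true geometric range (m), assumed
iono = 40.3 * 3.0e17     # first-order ionospheric term (m * Hz^2), assumed TEC

# pseudoranges carry a first-order ionospheric delay proportional to 1/f^2
P1 = rho + iono / f1**2
P2 = rho + iono / f2**2

# ionosphere-free combination cancels the first-order term exactly,
# leaving the higher-order and diffraction residuals the paper analyzes
P_if = (f1**2 * P1 - f2**2 * P2) / (f1**2 - f2**2)
```

    The paper's contribution is to characterize what this combination does not remove: higher-order terms and, for irregularities near the Fresnel scale, diffraction effects.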

  10. Compensation of body shake errors in terahertz beam scanning single frequency holography for standoff personnel screening

    NASA Astrophysics Data System (ADS)

    Liu, Wei; Li, Chao; Sun, Zhao-Yang; Zhao, Yu; Wu, Shi-You; Fang, Guang-You

    2016-08-01

    In the terahertz (THz) band, the inherent shake of the human body may strongly impair the image quality of a beam scanning single frequency holography system for personnel screening. To realize accurate shake compensation in imaging processing, it is quite necessary to develop a high-precision measurement system. However, in many cases, different parts of a human body may shake to different extents, which greatly increases the difficulty of reasonably measuring body shake errors for image reconstruction. In this paper, a body shake error compensation algorithm based on the raw data is proposed. To analyze the effect of body shake on the raw data, a model of the echoed signal is rebuilt, considering both the beam scanning mode and the body shake. According to the rebuilt signal model, we derive a body shake error estimation method to compensate for the phase error. Simulations on the reconstruction of point targets with shake errors and proof-of-principle experiments on the human body in the 0.2-THz band are both performed to confirm the effectiveness of the proposed body shake compensation algorithm. Project supported by the Knowledge Innovation Program of the Chinese Academy of Sciences (Grant No. YYYJ-1123).

  11. PULSAR TIMING ERRORS FROM ASYNCHRONOUS MULTI-FREQUENCY SAMPLING OF DISPERSION MEASURE VARIATIONS

    SciTech Connect

    Lam, M. T.; Cordes, J. M.; Chatterjee, S.; Dolch, T.

    2015-03-10

    Free electrons in the interstellar medium cause frequency-dependent delays in pulse arrival times due to both scattering and dispersion. Multi-frequency measurements are used to estimate and remove dispersion delays. In this paper, we focus on the effect of any non-simultaneity of multi-frequency observations on dispersive delay estimation and removal. Interstellar density variations combined with changes in the line of sight from pulsar and observer motions cause dispersion measure (DM) variations with an approximately power-law power spectrum, augmented in some cases by linear trends. We simulate time series, estimate the magnitude and statistical properties of timing errors that result from non-simultaneous observations, and derive prescriptions for data acquisition that are needed in order to achieve a specified timing precision. For nearby, highly stable pulsars, measurements need to be simultaneous to within about one day in order for the timing error from asynchronous DM correction to be less than about 10 ns. We discuss how timing precision improves when increasing the number of dual-frequency observations used in DM estimation for a given epoch. For a Kolmogorov wavenumber spectrum, we find about a factor of two improvement in timing precision when increasing from two to three observations, but diminishing returns thereafter.
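
    The dual-frequency DM estimate at the heart of this works as follows: the cold-plasma dispersive delay scales as DM/f², so two arrival times at different frequencies determine DM. A sketch with the standard dispersion constant and assumed frequencies:

```python
# cold-plasma dispersion: t(f) = K * DM / f^2
K = 4.149e3              # dispersion constant (s MHz^2 pc^-1 cm^3)
DM_true = 30.0           # dispersion measure (pc cm^-3), assumed
f1, f2 = 1400.0, 430.0   # observing frequencies (MHz), assumed

t1 = K * DM_true / f1**2   # dispersive delays (s)
t2 = K * DM_true / f2**2

# dual-frequency DM estimate from the arrival-time difference
DM_est = (t2 - t1) / (K * (f2**-2 - f1**-2))
```

    If the two observations are not simultaneous, t1 and t2 sample different realizations of the time-varying DM, so `DM_est` is biased; propagating that bias through the delay formula gives the asynchronous-sampling timing errors the paper quantifies.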

  12. Single trial time-frequency domain analysis of error processing in post-traumatic stress disorder.

    PubMed

    Clemans, Zachary A; El-Baz, Ayman S; Hollifield, Michael; Sokhadze, Estate M

    2012-09-13

    Error processing studies in psychology and psychiatry are relatively common. Event-related potentials (ERPs) are often used as measures of error processing, two such response-locked ERPs being the error-related negativity (ERN) and the error-related positivity (Pe). The ERN and Pe occur following a committed error in reaction time tasks as low-frequency (4-8 Hz) electroencephalographic (EEG) oscillations registered at the midline fronto-central sites. We created an alternative method for analyzing error processing using time-frequency analysis in the form of a wavelet transform. A study was conducted in which subjects with PTSD and healthy controls completed a forced-choice task. Single trial EEG data from errors in the task were processed using a continuous wavelet transform. Coefficients from the transform that corresponded to the theta range were averaged to isolate a theta waveform in the time-frequency domain. Measures called the time-frequency ERN and Pe were obtained from these waveforms for five different channels and then averaged to obtain a single time-frequency ERN and Pe for each error trial. A comparison of the amplitude and latency of the time-frequency ERN and Pe between the PTSD and control groups was performed. A significant group effect was found on the amplitude of both measures. These results indicate that the developed single trial time-frequency error analysis method is suitable for examining error processing in PTSD and possibly other psychiatric disorders.
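
    The theta-band isolation step can be sketched with a minimal complex Morlet transform: compute coefficients at 4-8 Hz and average their magnitudes. The sampling rate, wavelet width, and synthetic signal below are illustrative assumptions, not the study's parameters:

```python
import numpy as np

fs = 250.0                          # EEG sampling rate (Hz), assumed
t = np.arange(0.0, 1.5, 1.0 / fs)
eeg = np.sin(2 * np.pi * 6.0 * t)   # synthetic 6 Hz (theta-band) "error trial"

def theta_power(x, fs, freqs=(4, 5, 6, 7, 8), n_cycles=5):
    """Average magnitude of complex Morlet coefficients over the theta band."""
    out = []
    for f in freqs:
        sigma = n_cycles / (2 * np.pi * f)
        tw = np.arange(-3 * sigma, 3 * sigma, 1.0 / fs)
        wavelet = np.exp(2j * np.pi * f * tw) * np.exp(-tw**2 / (2 * sigma**2))
        wavelet /= np.abs(wavelet).sum()           # L1 normalization
        out.append(np.abs(np.convolve(x, wavelet, mode="same")))
    return np.mean(out, axis=0)

theta = theta_power(eeg, fs)
```

    The time-frequency ERN and Pe would then be read off as the extrema of such theta waveforms in the post-response window, channel-averaged as described in the abstract.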

  13. Method for measuring the phase error distribution of a wideband arrayed waveguide grating in the frequency domain.

    PubMed

    Takada, Kazumasa; Satoh, Shin-ichi

    2006-02-01

    We describe a method for measuring the phase error distribution of an arrayed waveguide grating (AWG) in the frequency domain when the free spectral range (FSR) of the AWG is so wide that it cannot be covered by one tunable laser source. Our method is to sweep the light frequency in the neighborhoods of two successive peaks in the AWG transmission spectrum by using two laser sources with different tuning ranges. The method was confirmed experimentally by applying it to a 160 GHz spaced AWG with an FSR of 11 THz. The variations in the derived phase error data were very small at +/-0.02 rad around the central arrayed waveguides.

  14. A Dual Frequency Carrier Phase Error Difference Checking Algorithm for the GNSS Compass

    PubMed Central

    Liu, Shuo; Zhang, Lei; Li, Jian

    2016-01-01

    The performance of the Global Navigation Satellite System (GNSS) compass is related to the quality of carrier phase measurement. How to process the carrier phase error properly is important to improve the GNSS compass accuracy. In this work, we propose a dual frequency carrier phase error difference checking algorithm for the GNSS compass. The algorithm aims at eliminating large carrier phase error in dual frequency double differenced carrier phase measurement according to the error difference between two frequencies. The advantage of the proposed algorithm is that it does not need additional environment information and has a good performance on multiple large errors compared with previous research. The core of the proposed algorithm is removing the geographical distance from the dual frequency carrier phase measurement, then the carrier phase error is separated and detectable. We generate the Double Differenced Geometry-Free (DDGF) measurement according to the characteristic that the different frequency carrier phase measurements contain the same geometrical distance. Then, we propose the DDGF detection to detect the large carrier phase error difference between two frequencies. The theoretical performance of the proposed DDGF detection is analyzed. An open sky test, a manmade multipath test and an urban vehicle test were carried out to evaluate the performance of the proposed algorithm. The result shows that the proposed DDGF detection is able to detect large error in dual frequency carrier phase measurement by checking the error difference between two frequencies. After the DDGF detection, the accuracy of the baseline vector is improved in the GNSS compass. PMID:27886153
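
    The core idea, cancelling the shared geometric distance so that only the inter-frequency error difference remains, can be sketched directly. The noise levels, epochs, and the MAD-based threshold below are illustrative assumptions; they stand in for the paper's DDGF detection statistic:

```python
import numpy as np

c = 299792458.0
lam1, lam2 = c / 1575.42e6, c / 1227.60e6   # GPS L1/L2 wavelengths (m)

rng = np.random.default_rng(3)
geom = 12.345 + 0.001 * np.arange(100)      # shared DD geometric distance (m)
dd1 = geom + rng.normal(0.0, 0.003, 100)    # DD carrier phase, frequency 1 (m)
dd2 = geom + rng.normal(0.0, 0.003, 100)    # DD carrier phase, frequency 2 (m)
dd1[40] += 0.5 * lam1                       # inject one large carrier phase error

# geometry-free combination: the shared geometric distance cancels,
# leaving only the error difference between the two frequencies
gf = dd1 - dd2
resid = np.abs(gf - np.median(gf))
flags = resid > 4 * 1.4826 * np.median(resid)   # robust (MAD-based) threshold
```

    Epochs flagged this way are excluded before the baseline vector is solved, which is how the detection improves compass accuracy without needing extra environment information.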

  15. A Dual Frequency Carrier Phase Error Difference Checking Algorithm for the GNSS Compass.

    PubMed

    Liu, Shuo; Zhang, Lei; Li, Jian

    2016-11-24

    The performance of the Global Navigation Satellite System (GNSS) compass is related to the quality of carrier phase measurement. How to process the carrier phase error properly is important to improve the GNSS compass accuracy. In this work, we propose a dual frequency carrier phase error difference checking algorithm for the GNSS compass. The algorithm aims at eliminating large carrier phase error in dual frequency double differenced carrier phase measurement according to the error difference between two frequencies. The advantage of the proposed algorithm is that it does not need additional environment information and has a good performance on multiple large errors compared with previous research. The core of the proposed algorithm is removing the geographical distance from the dual frequency carrier phase measurement, then the carrier phase error is separated and detectable. We generate the Double Differenced Geometry-Free (DDGF) measurement according to the characteristic that the different frequency carrier phase measurements contain the same geometrical distance. Then, we propose the DDGF detection to detect the large carrier phase error difference between two frequencies. The theoretical performance of the proposed DDGF detection is analyzed. An open sky test, a manmade multipath test and an urban vehicle test were carried out to evaluate the performance of the proposed algorithm. The result shows that the proposed DDGF detection is able to detect large error in dual frequency carrier phase measurement by checking the error difference between two frequencies. After the DDGF detection, the accuracy of the baseline vector is improved in the GNSS compass.

  16. EEG error potentials detection and classification using time-frequency features for robot reinforcement learning.

    PubMed

    Boubchir, Larbi; Touati, Youcef; Daachi, Boubaker; Chérif, Arab Ali

    2015-08-01

    In thought-based steering of robots, error potentials (ErrP) can appear when the action resulting from the brain-machine interface (BMI) classifier/controller does not correspond to the user's thought. Using the Steady State Visual Evoked Potentials (SSVEP) techniques, ErrP, which appear when a classification error occurs, are not easily recognizable by only examining the temporal or frequency characteristics of EEG signals. A supplementary classification process is therefore needed to identify them in order to stop the course of the action and back up to a recovery state. This paper presents a set of time-frequency (t-f) features for the detection and classification of EEG ErrP in extra-brain activities due to misclassification observed by a user exploiting non-invasive BMI and robot control in the task space. The proposed features are able to characterize and detect ErrP activities in the t-f domain. These features are derived from the information embedded in the t-f representation of EEG signals, and include the Instantaneous Frequency (IF), t-f information complexity, SVD information, energy concentration and sub-bands' energies. The experiment results on real EEG data show that the use of the proposed t-f features for detecting and classifying EEG ErrP achieved an overall classification accuracy up to 97% for 50 EEG segments using 2-class SVM classifier.

  17. Research on controlling middle spatial frequency error of high gradient precise aspheric by pitch tool

    NASA Astrophysics Data System (ADS)

    Wang, Jia; Hou, Xi; Wan, Yongjian; Shi, Chunyan; Zhong, Xianyun

    2016-09-01

    Extreme optical fabrication projects such as EUV and X-ray optic systems, which are representative of today's advanced optical manufacturing technology, have special requirements for optical surface quality. In synchrotron radiation (SR) beamlines, mirrors of high shape accuracy are always used at grazing incidence. In nanolithography systems, middle spatial frequency errors always lead to small-angle scattering or flare that reduces the contrast of the image. The slope error is defined over a given horizontal length as the increase or decrease in form error at the end point relative to the starting point. The quality of reflective optical elements can be described by their deviation from the ideal shape at different spatial frequencies. Usually one distinguishes between the figure error, the low spatial frequency part ranging from the aperture length down to 1 mm, and the mid and high spatial frequency parts from 1 mm to 1 μm and from 1 μm to some 10 nm, respectively. Firstly, this paper discusses the relationship between the slope error and the middle spatial frequency error, both of which describe the optical surface error along the form profile. Then, experimental research is conducted on a high gradient precise aspheric surface with a pitch tool, aiming to restrain the middle spatial frequency error.

  18. Endodontic Procedural Errors: Frequency, Type of Error, and the Most Frequently Treated Tooth.

    PubMed

    Yousuf, Waqas; Khan, Moiz; Mehdi, Hasan

    2015-01-01

    Introduction. The aim of this study is to determine the most common endodontically treated tooth and the most common error produced during treatment and to note the association of particular errors with particular teeth. Material and Methods. Periapical radiographs were taken of all the included teeth and were stored and assessed using DIGORA Optime. Teeth in each group were evaluated for presence or absence of procedural errors (i.e., overfill, underfill, ledge formation, perforations, apical transportation, and/or instrument separation) and the most frequently treated tooth was also noted. Results. A total of 1748 root canal treated teeth were assessed, out of which 574 (32.8%) contained a procedural error. Out of these, 397 (22.7%) were overfilled, 155 (8.9%) were underfilled, 16 (0.9%) had instrument separation, and 7 (0.4%) had apical transportation. The most frequently treated tooth was the right permanent mandibular first molar (11.3%). The least commonly treated teeth were the permanent mandibular third molars (0.1%). Conclusion. Practitioners should show greater care to maintain accuracy of the working length throughout the procedure, as errors in length accounted for the vast majority of errors, and special care should be taken when working on molars.

  19. To Err is Normable: The Computation of Frequency-Domain Error Bounds from Time-Domain Data

    NASA Technical Reports Server (NTRS)

    Hartley, Tom T.; Veillette, Robert J.; DeAbreuGarcia, J. Alexis; Chicatelli, Amy; Hartmann, Richard

    1998-01-01

    This paper exploits the relationships among the time-domain and frequency-domain system norms to derive information useful for modeling and control design, given only the system step response data. A discussion of system and signal norms is included. The proposed procedures involve only simple numerical operations, such as the discrete approximation of derivatives and integrals, and the calculation of matrix singular values. The resulting frequency-domain and Hankel-operator norm approximations may be used to evaluate the accuracy of a given model, and to determine model corrections to decrease the modeling errors.
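
    The numerical pipeline the abstract describes, differentiate the step response to recover the impulse response (Markov parameters), form a Hankel matrix, and take singular values, can be sketched for a first-order discrete system where the Hankel norm is known in closed form. The system itself is an assumed example, not one from the paper:

```python
import numpy as np

# step response of the stable first-order system y[k+1] = a*y[k] + (1-a)*u[k]
a = 0.8
k = np.arange(60)
step = 1.0 - a ** (k + 1)       # unit-step response samples s[1], s[2], ...

# discrete derivative of the step response recovers the impulse response
# (Markov parameters h[1], h[2], ...)
h = np.diff(np.concatenate(([0.0], step)))

# the largest singular value of the Hankel matrix of Markov parameters
# approximates the system's Hankel norm
m = 25
H = np.array([[h[i + j] for j in range(m)] for i in range(m)])
hankel_norm = np.linalg.svd(H, compute_uv=False)[0]
```

    For this system the Hankel norm is (1-a)/(1-a²) = 1/(1+a), and the truncated Hankel matrix reproduces it to within the tail a^(2m); with measured step data, the same singular values bound the modeling error as described in the paper.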

  20. Evaluating Error of LIDAR Derived dem Interpolation for Vegetation Area

    NASA Astrophysics Data System (ADS)

    Ismail, Z.; Khanan, M. F. Abdul; Omar, F. Z.; Rahman, M. Z. Abdul; Mohd Salleh, M. R.

    2016-09-01

    Light Detection and Ranging or LiDAR data is a source for deriving digital terrain models, while a Digital Elevation Model or DEM is usable within a Geographical Information System or GIS. The aim of this study is to evaluate the accuracy of LiDAR derived DEMs generated with different interpolation methods and slope classes. Initially, the study area is divided into three slope classes: (a) slope class one (0° - 5°), (b) slope class two (6° - 10°) and (c) slope class three (11° - 15°). Secondly, each slope class is tested using three distinct interpolation methods: (a) Kriging, (b) Inverse Distance Weighting (IDW) and (c) Spline. Next, accuracy assessment is done based on field survey tachymetry data. The findings reveal that the overall Root Mean Square Error or RMSE for Kriging provided the lowest value of 0.727 m for both 0.5 m and 1 m spatial resolutions of the oil palm area, followed by Spline with values of 0.734 m for 0.5 m spatial resolution and 0.747 m for a spatial resolution of 1 m. Concurrently, IDW provided the highest RMSE value of 0.784 m for both spatial resolutions of 0.5 and 1 m. For the rubber area, Spline provided the lowest RMSE value of 0.746 m for 0.5 m spatial resolution and 0.760 m for 1 m spatial resolution. The highest RMSE for the rubber area is IDW with a value of 1.061 m for both spatial resolutions. Finally, Kriging gave an RMSE value of 0.790 m for both spatial resolutions.
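
    Of the three interpolators compared, IDW is the simplest to sketch, and the RMSE-against-check-points validation is the same for all of them. The planar terrain, point counts, and power parameter below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
pts = rng.uniform(0.0, 100.0, (50, 2))              # LiDAR ground points (x, y)
z = 10.0 + 0.05 * pts[:, 0] + 0.02 * pts[:, 1]      # planar terrain elevations (m)

def idw(xy, pts, z, power=2.0):
    """Inverse Distance Weighting interpolation at location xy."""
    d = np.linalg.norm(pts - xy, axis=1)
    if d.min() < 1e-12:            # exact hit on a data point
        return z[d.argmin()]
    w = 1.0 / d ** power
    return (w * z).sum() / w.sum()

# accuracy assessment against independent check points (as with tachymetry data)
checks = rng.uniform(10.0, 90.0, (20, 2))
z_true = 10.0 + 0.05 * checks[:, 0] + 0.02 * checks[:, 1]
z_hat = np.array([idw(c, pts, z) for c in checks])
rmse = np.sqrt(np.mean((z_hat - z_true) ** 2))
```

    Repeating the RMSE computation per interpolator and per slope class yields comparison tables like the one summarized in the abstract.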

  1. Low Frequency Error Analysis and Calibration for High-Resolution Optical Satellite's Uncontrolled Geometric Positioning

    NASA Astrophysics Data System (ADS)

    Wang, Mi; Fang, Chengcheng; Yang, Bo; Cheng, Yufeng

    2016-06-01

    The low frequency error is a key factor affecting the uncontrolled geometric processing accuracy of high-resolution optical imagery. To guarantee the geometric quality of imagery, this paper presents an on-orbit calibration method for the low frequency error based on a geometric calibration field. Firstly, we introduce the overall flow of low frequency error on-orbit analysis and calibration, which includes optical axis angle variation detection for the star sensor, relative calibration among star sensors, multi-star sensor information fusion, and low frequency error model construction and verification. Secondly, we use the optical axis angle change detection method to analyze the law of low frequency error variation. Thirdly, we use relative calibration and information fusion among star sensors, respectively, to realize datum unification and high precision attitude output. Finally, we realize low frequency error model construction and optimal estimation of model parameters based on the DEM/DOM of the geometric calibration field. To evaluate the performance of the proposed calibration method, real data from a certain satellite type are used. Test results demonstrate that the calibration model in this paper describes the law of low frequency error variation well. The uncontrolled geometric positioning accuracy of the high-resolution optical image in the WGS-84 Coordinate System is obviously improved after the step-wise calibration.

  2. "Coded and Uncoded Error Feedback: Effects on Error Frequencies in Adult Colombian EFL Learners' Writing"

    ERIC Educational Resources Information Center

    Sampson, Andrew

    2012-01-01

    This paper reports on a small-scale study into the effects of uncoded correction (writing the correct forms above each error) and coded annotations (writing symbols that encourage learners to self-correct) on Colombian university-level EFL learners' written work. The study finds that while both coded annotations and uncoded correction appear to…

  3. Frequency, types, and direct related costs of medication errors in an academic nephrology ward in Iran.

    PubMed

    Gharekhani, Afshin; Kanani, Negin; Khalili, Hossein; Dashti-Khavidaki, Simin

    2014-09-01

    Medication errors are ongoing problems among hospitalized patients, especially those with multiple co-morbidities and polypharmacy, such as patients with renal diseases. This study evaluated the frequency, types and direct related costs of medication errors in a nephrology ward and the role played by clinical pharmacists. During this study, clinical pharmacists detected, managed, and recorded the medication errors. Prescribing errors, including inappropriate drug, dose, or treatment duration, were gathered. To assess transcription errors, the equivalence of nursing charts and physicians' orders was evaluated. Administration errors were assessed by observing drug preparation, storage, and administration by nurses. The changes in medication costs after implementing clinical pharmacists' interventions were compared with the calculated medication costs had the medication errors continued up to patients' discharge time. More than 85% of patients experienced a medication error. The rate of medication errors was 3.5 errors per patient and 0.18 errors per ordered medication. More than 95% of medication errors occurred at the prescribing stage. The most common prescribing errors were omission (26.9%), unauthorized drugs (18.3%), and low drug dosage or frequency (17.3%). Most of the medication errors involved cardiovascular drugs (24%), followed by vitamins and electrolytes (22.1%) and antimicrobials (18.5%). The number of medication errors was correlated with the number of ordered medications and the length of hospital stay. Clinical pharmacists' interventions decreased patients' direct medication costs by 4.3%. About 22% of medication errors led to patient harm. In conclusion, clinical pharmacists' contributions in nephrology wards were of value in preventing medication errors and reducing medication costs.

  4. Sampling Errors in Satellite-derived Infrared Sea Surface Temperatures

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Minnett, P. J.

    2014-12-01

    Sea Surface Temperature (SST) measured from satellites has been playing a crucial role in understanding geophysical phenomena. Generating SST Climate Data Records (CDRs) imposes the most stringent requirements on data accuracy. For infrared SSTs, sampling uncertainties caused by cloud presence and persistence generate errors. In addition, for sensors with narrow swaths, the swath gap acts as another source of sampling error. This study is concerned with quantifying and understanding such sampling errors, which are important for SST CDR generation and for a wide range of satellite SST users. In order to quantify these errors, a reference Level 4 SST field (Multi-scale Ultra-high Resolution SST) is sampled using realistic swath and cloud masks of the Moderate Resolution Imaging Spectroradiometer (MODIS) and the Advanced Along Track Scanning Radiometer (AATSR). Global and regional SST uncertainties are studied by assessing the sampling error at different temporal and spatial resolutions (seven spatial resolutions from 4 km to 5.0° at the equator and five temporal resolutions from daily to monthly). Global annual and seasonal mean sampling errors are large in the high latitude regions, especially the Arctic, and have geographical distributions that are most likely related to stratus cloud occurrence and persistence. The region between 30°N and 30°S has smaller errors compared to higher latitudes, except for the Tropical Instability Wave area, where persistent negative errors are found. Important differences in sampling errors are also found between the broad and narrow swath scan patterns and between day and night fields. This is the first time that realistic magnitudes of the sampling errors have been quantified. Future improvement in the accuracy of SST products will benefit from this quantification.
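
    The quantification strategy, sample a complete reference field through a realistic observation mask and compare statistics of the observed pixels against the full field, reduces to a few lines. The Gaussian field and 40% masking fraction below are illustrative assumptions standing in for the L4 reference and MODIS/AATSR masks:

```python
import numpy as np

rng = np.random.default_rng(5)
sst = 20.0 + 2.0 * rng.standard_normal((100, 100))   # reference L4 SST field (degC)

# simulated cloud/swath-gap mask: ~40% of pixels unobserved
cloud = rng.random((100, 100)) < 0.4

# sampling error: statistic of the observed pixels vs. the full field
full_mean = sst.mean()
sampled_mean = sst[~cloud].mean()
sampling_error = sampled_mean - full_mean
```

    Real cloud masks are spatially correlated and persistent rather than random, which is why the study finds large, geographically structured errors in regions of persistent stratus rather than the near-zero error this random mask would suggest.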

  5. Frequency of medication errors in an emergency department of a large teaching hospital in southern Iran

    PubMed Central

    Vazin, Afsaneh; Zamani, Zahra; Hatam, Nahid

    2014-01-01

This study was conducted to determine the frequency of medication errors (MEs) occurring in the tertiary care emergency department (ED) of a large academic hospital in Iran. The incidence of MEs was determined through the disguised direct observation method conducted by a trained observer. A total of 1,031 medication doses administered to 202 patients admitted to the tertiary care ED were observed over the course of 54 six-hour shifts. Following collection of the data and analysis of the errors with the assistance of a clinical pharmacist, the frequency of errors in the different stages was reported and analyzed in SPSS-21 software. For the 202 patients and the 1,031 medication doses evaluated in the present study, 707 MEs (68.5%) were recorded in total; in other words, 3.5 errors per patient and almost 0.69 errors per dose occurred, with the highest frequency of errors pertaining to cardiovascular (27.2%) and antimicrobial (23.6%) medications. The highest rate of errors occurred during the administration phase of the medication use process, with a share of 37.6%, followed by prescription and transcription errors with shares of 21.1% and 10%, respectively. Omission (7.6%) and wrong-time errors (4.4%) were the most frequent administration errors. Less experienced nurses (P=0.04), a higher patient-to-nurse ratio (P=0.017), and morning shifts (P=0.035) were positively related to administration errors. Administration errors marked the highest share of MEs occurring in the different medication use processes. Increasing the number of nurses and assigning the more experienced among them to EDs can help reduce nursing errors. Addressing the shortcomings with further research should result in a reduction of MEs in EDs. PMID:25525391

  6. Disentangling the impacts of outcome valence and outcome frequency on the post-error slowing

    PubMed Central

    Wang, Lijun; Tang, Dandan; Zhao, Yuanfang; Hitchman, Glenn; Wu, Shanshan; Tan, Jinfeng; Chen, Antao

    2015-01-01

Post-error slowing (PES) reflects efficient outcome monitoring, manifested as slower reaction times after errors. The cognitive control account assumes that PES depends on error information, whereas the orienting account posits that it depends on error frequency. This raises the question of how outcome valence and outcome frequency separably influence the generation of PES. To address this issue, we employed an observation-execution task, varied the probability of the observed errors committed by the "partner" (50/50 and 20/80, correct/error), and investigated the corresponding behavioral and neural effects. On each trial, participants first viewed the outcome of a flanker run that was supposedly performed by a "partner", and then performed a flanker run themselves. We observed PES in both error rate conditions. However, electroencephalographic data suggested that the error-related potentials (oERN and oPe) and the rhythmic oscillation associated with attentional processes (alpha band) were sensitive to outcome valence and outcome frequency, respectively. Importantly, oERN amplitude was positively correlated with PES. Taken together, these findings support the assumption of the cognitive control account, suggesting that outcome valence and outcome frequency are both involved in PES. Moreover, the generation of PES is indexed by the oERN, whereas the modulation of PES size is reflected in the alpha band. PMID:25732237
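The PES measure itself is simple arithmetic on trial sequences: mean reaction time on trials that follow an error minus mean reaction time on trials that follow a correct response. A minimal numpy sketch on synthetic data (the 35 ms slowing, trial counts, and RT distribution are assumptions built into the simulation, not values from the study):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical single-subject data: per-trial accuracy and reaction time (ms).
n = 400
correct = rng.random(n) < 0.8            # roughly the 20/80 error-rate condition
rt = rng.normal(450.0, 40.0, n)
rt[1:][~correct[:-1]] += 35.0            # build slowing after errors into the data

# Post-error slowing: RT after an error minus RT after a correct trial.
post_error_rt = rt[1:][~correct[:-1]].mean()
post_correct_rt = rt[1:][correct[:-1]].mean()
pes = post_error_rt - post_correct_rt
print(f"PES = {pes:.1f} ms")
```

The same contrast is what the study correlates with single-trial oERN amplitude.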

  7. Error-free demodulation of pixelated carrier frequency interferograms.

    PubMed

    Servin, M; Estrada, J C

    2010-08-16

Recently, pixelated spatial-carrier interferograms have come into use in optical metrology and are an industry standard nowadays. The main feature of these interferometers is that each pixel of the video camera may be phase-modulated by any (however fixed) desired angle within [0, 2π] radians. The phase at each pixel is shifted without cross-talk from its immediate neighborhood. This has opened new possibilities for experimental spatial wavefront modulation not available before, because we are no longer constrained to introducing a spatial carrier using a tilted plane. Any useful mathematical model that phase-modulates the testing wavefront on a pixel-wise basis can be used. However, until now these pixelated interferograms have not been demodulated in a way that obtains an error-free (exact) wavefront estimation. The purpose of this paper is to offer the general theory that allows one to demodulate, in an exact way, pixelated spatial-carrier interferograms modulated by any thinkable two-dimensional phase carrier.
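The key enabling fact is that the interference model I = A + B·cos(φ + δ) is linear in (A, B·cosφ, B·sinφ) once the per-pixel carrier angles δ are known, so arbitrary (not just tilt-generated) carriers can be inverted exactly by least squares. The following is a toy noise-free sketch of that idea over one neighborhood of pixels with locally constant test phase; it is not the paper's actual algorithm, and all values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical neighborhood of K pixels, each phase-shifted by a known but
# arbitrary carrier angle in [0, 2*pi), with locally constant test phase phi.
K = 9
delta = rng.uniform(0.0, 2.0 * np.pi, K)   # per-pixel carrier angles
A, B, phi = 1.0, 0.7, 0.9                  # background, modulation, test phase
I = A + B * np.cos(phi + delta)            # recorded intensities (noise-free)

# cos(phi + d) = cos(phi)cos(d) - sin(phi)sin(d), so I is linear in the
# unknowns [A, B*cos(phi), B*sin(phi)]: solve by least squares.
M = np.column_stack([np.ones(K), np.cos(delta), -np.sin(delta)])
a, c, s = np.linalg.lstsq(M, I, rcond=None)[0]
phi_hat = np.arctan2(s, c)
print(f"recovered phase: {phi_hat:.6f} rad")
```

With noise-free data and at least three distinct carrier angles, the recovery is exact, which is the sense of "error-free" demodulation in the abstract.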

  8. Phase-modulation method for AWG phase-error measurement in the frequency domain.

    PubMed

    Takada, Kazumasa; Hirose, Tomohiro

    2009-12-15

    We report a phase-modulation method for measuring arrayed waveguide grating (AWG) phase error in the frequency domain. By combining the method with a digital sampling technique that we have already reported, we can measure the phase error within an accuracy of +/-0.055 rad for the center 90% waveguides in the array even when no carrier frequencies are generated in the beat signal from the interferometer.

  9. On low-frequency errors of uniformly modulated filtered white-noise models for ground motions

    USGS Publications Warehouse

    Safak, Erdal; Boore, David M.

    1988-01-01

Low-frequency errors of a commonly used non-stationary stochastic model (the uniformly modulated filtered white-noise model) for earthquake ground motions are investigated. It is shown both analytically and by numerical simulation that uniformly modulated filtered white-noise-type models systematically overestimate the spectral response for periods longer than the effective duration of the earthquake, because of the built-in low-frequency errors in the model. The errors, which are significant for low-magnitude short-duration earthquakes, can be eliminated by using filtered shot-noise-type models (i.e., white noise modulated by the envelope first, and then filtered).
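The distinction between the two models is purely one of operation order: the flawed model filters the white noise and then applies the envelope, while the filtered shot-noise model applies the envelope first and then filters, so the filter can shape the low-frequency content of the modulated process. A minimal sketch of the two constructions (the envelope shape, filter, and all parameters are illustrative assumptions, not those of the paper):

```python
import numpy as np

rng = np.random.default_rng(7)

dt, n = 0.01, 4000
t = np.arange(n) * dt
envelope = (t / 2.0) * np.exp(1.0 - t / 2.0)        # simple ground-motion envelope
white = rng.normal(0.0, 1.0, n)

# A damped-oscillator "site" filter represented by its impulse response.
f0, zeta = 2.5, 0.6                                  # Hz, damping ratio
w0 = 2.0 * np.pi * f0
h = np.exp(-zeta * w0 * t) * np.sin(w0 * np.sqrt(1.0 - zeta**2) * t)

# Uniformly modulated filtered white noise: filter first, modulate after.
umfwn = envelope * np.convolve(white, h)[:n] * dt

# Filtered shot noise: modulate the white noise first, then filter.
fsn = np.convolve(envelope * white, h)[:n] * dt
```

In the paper's analysis, it is the first construction whose long-period response spectrum is systematically biased high.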

  10. Analysis of measured data of human body based on error correcting frequency

    NASA Astrophysics Data System (ADS)

    Jin, Aiyan; Peipei, Gao; Shang, Xiaomei

    2014-04-01

Anthropometry is the measurement of all parts of the human body surface, and the measured data are the basis for analysis and study of the human body, for the establishment and modification of garment sizes, and for the formulation and implementation of online clothing stores. In this paper, several groups of measured data are gathered, and the data errors are analyzed by examining the error frequency and by using the analysis-of-variance method from mathematical statistics. The accuracy of the measured data, the difficulty of measuring different parts of the human body, the causes of data errors, and the key points for minimizing errors are also discussed. This paper analyzes the measured data based on error frequency and, in a way, provides reference elements to promote the development of the garment industry.

  11. Error Bounds for Quadrature Methods Involving Lower Order Derivatives

    ERIC Educational Resources Information Center

    Engelbrecht, Johann; Fedotov, Igor; Fedotova, Tanya; Harding, Ansie

    2003-01-01

    Quadrature methods for approximating the definite integral of a function f(t) over an interval [a,b] are in common use. Examples of such methods are the Newton-Cotes formulas (midpoint, trapezoidal and Simpson methods etc.) and the Gauss-Legendre quadrature rules, to name two types of quadrature. Error bounds for these approximations involve…

  12. Direct measurement of the poliovirus RNA polymerase error frequency in vitro

    SciTech Connect

    Ward, C.D.; Stokes, M.A.M.; Flanegan, J.B. )

    1988-02-01

The fidelity of RNA replication by the poliovirus RNA-dependent RNA polymerase was examined by copying homopolymeric RNA templates in vitro. The poliovirus RNA polymerase was extensively purified and used to copy poly(A), poly(C), or poly(I) templates with equimolar concentrations of noncomplementary and complementary ribonucleotides. The error frequency was expressed as the amount of noncomplementary nucleotide incorporated divided by the total amount of complementary and noncomplementary nucleotide incorporated. The polymerase error frequencies were very high and depended on the specific reaction conditions. The activity of the polymerase on poly(U) and poly(G) was too low to measure error frequencies on these templates. A fivefold increase in the error frequency was observed when the reaction conditions were changed from 3.0 mM Mg²⁺ (pH 7.0) to 7.0 mM Mg²⁺ (pH 8.0). This increase in the error frequency correlates with an eightfold increase in the elongation rate that was observed under the same conditions in a previous study.
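The error-frequency definition in the abstract is a simple ratio of incorporation amounts. A one-step illustration with hypothetical counts (the numbers below are invented for the example, not measurements from the paper):

```python
# Hypothetical incorporation amounts (arbitrary units) from a poly(A)-templated
# reaction: the complementary nucleotide is UMP, the noncomplementary one CMP.
complementary = 5400.0
noncomplementary = 1.2

# Error frequency = noncomplementary / (complementary + noncomplementary).
error_frequency = noncomplementary / (complementary + noncomplementary)
print(f"error frequency: 1 in {1.0 / error_frequency:.0f}")
```

Comparing such ratios across Mg²⁺ and pH conditions is how the fivefold change reported in the abstract is expressed.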

  13. Performance analysis for time-frequency MUSIC algorithm in presence of both additive noise and array calibration errors

    NASA Astrophysics Data System (ADS)

    Khodja, Mohamed; Belouchrani, Adel; Abed-Meraim, Karim

    2012-12-01

This article deals with the application of Spatial Time-Frequency Distributions (STFD) to the direction finding problem using the Multiple Signal Classification (MUSIC) algorithm. A comparative performance analysis is performed for the method under consideration with respect to the method using the data covariance matrix, when the received array signals are subject to calibration errors in a non-stationary environment. A unified analytical expression of the Direction Of Arrival (DOA) estimation error is derived for both methods. Numerical results show the effect of the parameters appearing in the derived expression on the algorithm performance. It is particularly observed that for low Signal-to-Noise Ratio (SNR) and high Signal-to-sensor-Perturbation Ratio (SPR) the STFD method gives better performance, while for high SNR and the same SPR both methods give similar performance.
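For readers unfamiliar with the baseline being compared against, here is a minimal sketch of covariance-based MUSIC for one source on a uniform linear array (the STFD variant replaces the sample covariance with a spatial time-frequency distribution matrix). Array geometry, SNR, and snapshot count below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(11)

# Narrowband MUSIC on an 8-element half-wavelength ULA, one source at 20 deg.
M, n_snap, theta_true = 8, 200, np.deg2rad(20.0)
a = lambda th: np.exp(1j * np.pi * np.arange(M) * np.sin(th))   # steering vector

S = (rng.normal(size=n_snap) + 1j * rng.normal(size=n_snap)) / np.sqrt(2.0)
noise = (rng.normal(size=(M, n_snap)) + 1j * rng.normal(size=(M, n_snap))) * 0.1
X = np.outer(a(theta_true), S) + noise

R = X @ X.conj().T / n_snap                  # sample covariance matrix
w, V = np.linalg.eigh(R)                     # ascending eigenvalues
En = V[:, :-1]                               # noise subspace (one source)

# MUSIC pseudo-spectrum: peaks where the steering vector is orthogonal to En.
grid = np.deg2rad(np.linspace(-90.0, 90.0, 1801))
spectrum = np.array([1.0 / np.linalg.norm(En.conj().T @ a(th))**2 for th in grid])
theta_hat = np.rad2deg(grid[np.argmax(spectrum)])
print(f"estimated DOA: {theta_hat:.1f} deg")
```

Calibration errors enter this picture as perturbations of the steering vector `a`, which is what the article's unified error expression quantifies.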

  14. Real-time drift error compensation in a self-reference frequency-scanning fiber interferometer

    NASA Astrophysics Data System (ADS)

    Tao, Long; Liu, Zhigang; Zhang, Weibo; Liu, Zhe; Hong, Jun

    2017-01-01

    In order to eliminate the fiber drift errors in a frequency-scanning fiber interferometer, we propose a self-reference frequency-scanning fiber interferometer composed of two fiber Michelson interferometers sharing common optical paths of fibers. One interferometer defined as reference interferometer is used to monitor the optical path length drift in real time and establish a measurement fixed origin. The other is used as a measurement interferometer to acquire the information from the target. Because the measured optical path differences of the reference and measurement interferometers by frequency-scanning interferometry include the same fiber drift errors, the errors can be eliminated by subtraction of the former optical path difference from the latter optical path difference. A prototype interferometer was developed in our research, and experimental results demonstrate its robustness and stability.

  15. Online public reactions to frequency of diagnostic errors in US outpatient care

    PubMed Central

    Giardina, Traber Davis; Sarkar, Urmimala; Gourley, Gato; Modi, Varsha; Meyer, Ashley N.D.; Singh, Hardeep

    2016-01-01

    Background Diagnostic errors pose a significant threat to patient safety but little is known about public perceptions of diagnostic errors. A study published in BMJ Quality & Safety in 2014 estimated that diagnostic errors affect at least 5% of US adults (or 12 million) per year. We sought to explore online public reactions to media reports on the reported frequency of diagnostic errors in the US adult population. Methods We searched the World Wide Web for any news article reporting findings from the study. We then gathered all the online comments made in response to the news articles to evaluate public reaction to the newly reported diagnostic error frequency (n=241). Two coders conducted content analyses of the comments and an experienced qualitative researcher resolved differences. Results Overall, there were few comments made regarding the frequency of diagnostic errors. However, in response to the media coverage, 44 commenters shared personal experiences of diagnostic errors. Additionally, commentary centered on diagnosis-related quality of care as affected by two emergent categories: (1) US health care providers (n=79; 63 commenters) and (2) US health care reform-related policies, most commonly the Affordable Care Act (ACA) and insurance/reimbursement issues (n=62; 47 commenters). Conclusion The public appears to have substantial concerns about the impact of the ACA and other reform initiatives on the diagnosis-related quality of care. However, policy discussions on diagnostic errors are largely absent from the current national conversation on improving quality and safety. Because outpatient diagnostic errors have emerged as a major safety concern, researchers and policymakers should consider evaluating the effects of policy and practice changes on diagnostic accuracy. PMID:27347474

  16. Computational procedures for evaluating the sensitivity derivatives of vibration frequencies and Eigenmodes of framed structures

    NASA Technical Reports Server (NTRS)

    Fetterman, Timothy L.; Noor, Ahmed K.

    1987-01-01

    Computational procedures are presented for evaluating the sensitivity derivatives of the vibration frequencies and eigenmodes of framed structures. Both a displacement and a mixed formulation are used. The two key elements of the computational procedure are: (a) Use of dynamic reduction techniques to substantially reduce the number of degrees of freedom; and (b) Application of iterative techniques to improve the accuracy of the derivatives of the eigenmodes. The two reduction techniques considered are the static condensation and a generalized dynamic reduction technique. Error norms are introduced to assess the accuracy of the eigenvalue and eigenvector derivatives obtained by the reduction techniques. The effectiveness of the methods presented is demonstrated by three numerical examples.
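The eigenvalue half of such sensitivity computations rests on the standard formula dλ/db = φᵀ(∂K/∂b − λ ∂M/∂b)φ for a mass-normalized mode φ. A minimal 2-DOF spring-mass sketch checking it against a finite difference (this illustrates the classical formula only, not the paper's dynamic reduction or iterative eigenvector refinement; all numbers are assumptions):

```python
import numpy as np

m, k1, k2 = 2.0, 100.0, 60.0

def lowest_mode(k1):
    # 2-DOF spring-mass chain with M = m*I; k1 is the ground-spring stiffness.
    K = np.array([[k1 + k2, -k2], [-k2, k2]])
    mu, V = np.linalg.eigh(K)
    lam = mu[0] / m                      # lowest eigenvalue (omega^2)
    phi = V[:, 0] / np.sqrt(m)           # mass-normalized: phi^T M phi = 1
    return lam, phi

lam, phi = lowest_mode(k1)
dK_dk1 = np.array([[1.0, 0.0], [0.0, 0.0]])

# Analytic sensitivity: d(lam)/d(k1) = phi^T (dK/dk1 - lam * dM/dk1) phi; dM = 0.
dlam = phi @ dK_dk1 @ phi

# Reference: central finite difference of the eigenvalue itself.
h = 1e-4
dlam_fd = (lowest_mode(k1 + h)[0] - lowest_mode(k1 - h)[0]) / (2.0 * h)
print(dlam, dlam_fd)
```

Eigenvector derivatives are the harder part, which is why the paper pairs reduction techniques with iterative accuracy improvement for the modes.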

  17. Efficient simulation for fixed-receiver bistatic SAR with time and frequency synchronization errors

    NASA Astrophysics Data System (ADS)

    Yan, Feifei; Chang, Wenge; Li, Xiangyang

    2015-12-01

    Raw signal simulation is a useful tool for synthetic aperture radar (SAR) system design, mission planning, processing algorithm testing, and inversion algorithm design. Time and frequency synchronization is the key technique of bistatic SAR (BiSAR) system, and raw data simulation is an effective tool for verifying the time and frequency synchronization techniques. According to the two-dimensional (2-D) frequency spectrum of fixed-receiver BiSAR, a rapid raw data simulation approach with time and frequency synchronization errors is proposed in this paper. Through 2-D inverse Stolt transform in 2-D frequency domain and phase compensation in range-Doppler frequency domain, this method can significantly improve the efficiency of scene raw data simulation. Simulation results of point targets and extended scene are presented to validate the feasibility and efficiency of the proposed simulation approach.

  18. Correction of mid-spatial-frequency errors by smoothing in spin motion for CCOS

    NASA Astrophysics Data System (ADS)

    Zhang, Yizhong; Wei, Chaoyang; Shao, Jianda; Xu, Xueke; Liu, Shijie; Hu, Chen; Zhang, Haichao; Gu, Haojin

    2015-08-01

Smoothing is a convenient and efficient way to correct mid-spatial-frequency errors, and quantifying the smoothing effect allows improvements in efficiency when finishing precision optics. A series of spin-motion experiments is performed to study the smoothing of mid-spatial-frequency errors: some use the same pitch tool at different spinning speeds, and others use the same spinning speed with different tools. Shu's model is introduced and improved to describe and compare the smoothing efficiency at different spinning speeds and with different tools. The experimental results show that the mid-spatial-frequency errors on the initial surface were nearly smoothed out by the spin-motion process, and that the number of smoothing passes can be estimated by the model before processing. The method was also applied to smooth an aspherical component with an obvious mid-spatial-frequency error left by Magnetorheological Finishing. As a result, a high-precision aspheric optical component was obtained with PV = 0.1λ and RMS = 0.01λ.

  19. Random Numbers Demonstrate the Frequency of Type I Errors: Three Spreadsheets for Class Instruction

    ERIC Educational Resources Information Center

    Duffy, Sean

    2010-01-01

    This paper describes three spreadsheet exercises demonstrating the nature and frequency of type I errors using random number generation. The exercises are designed specifically to address issues related to testing multiple relations using correlation (Demonstration I), t tests varying in sample size (Demonstration II) and multiple comparisons…

  20. Derived flood frequency distributions considering individual event hydrograph shapes

    NASA Astrophysics Data System (ADS)

    Hassini, Sonia; Guo, Yiping

    2017-04-01

Derived in this paper is the frequency distribution of the peak discharge rate of a random runoff event from a small urban catchment. The derivation follows the derived probability distribution procedure and incorporates a catchment rainfall-runoff model with approximating shapes for individual runoff event hydrographs. In the past, only simple triangular runoff event hydrograph shapes were used; in this study, approximating hydrograph shapes that better represent the full range of possibilities are considered. The resulting closed-form mathematical equations are converted to the flood frequency distributions commonly required in urban stormwater management studies. The analytically determined peak discharge rates of different return periods, for a wide range of hypothetical catchment conditions, were compared to those determined from design storm modeling. The newly derived equations generate results that are closer to those from design storm modeling and provide a better alternative for use in urban stormwater management studies.
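A Monte Carlo analogue conveys what the derived-distribution procedure does analytically: assume distributions for the random event inputs, transform each event to a peak discharge through a hydrograph-shape assumption, and read flood quantiles off the resulting peak-flow distribution. Everything below (exponential depth/duration, runoff coefficient, the crude volume-over-duration peak rule) is an invented illustration, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical small urban catchment with exponentially distributed
# event depths and durations.
n_events_per_year, years = 60, 10_000
depth = rng.exponential(8.0, n_events_per_year * years)      # mm
duration = rng.exponential(6.0, depth.size)                  # h
area_ha, C, tc = 12.0, 0.55, 1.5                             # tc: time of concentration (h)

# Peak flow of one event (m^3/s): runoff volume spread over max(duration, tc),
# a stand-in for an assumed hydrograph shape.
volume = C * (depth / 1000.0) * area_ha * 10_000.0           # m^3
qp = volume / (np.maximum(duration, tc) * 3600.0)

# Annual maxima give the empirical flood frequency distribution.
annual_max = qp.reshape(years, n_events_per_year).max(axis=1)
q10 = np.quantile(annual_max, 1.0 - 1.0 / 10.0)              # 10-year flood
print(f"10-year peak discharge ~ {q10:.2f} m^3/s")
```

The paper's contribution is to carry this transformation through in closed form for better hydrograph-shape approximations, avoiding simulation entirely.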

  1. A statistical comparison of EEG time- and time-frequency domain representations of error processing.

    PubMed

    Munneke, Gert-Jan; Nap, Tanja S; Schippers, Eveline E; Cohen, Michael X

    2015-08-27

    Successful behavior relies on error detection and subsequent remedial adjustment of behavior. Researchers have identified two electrophysiological signatures of error processing: the time-domain error-related negativity (ERN), and the time-frequency domain increased power in the delta/theta frequency bands (~2-8 Hz). The relationship between these two signatures is not entirely clear: on the one hand they occur after the same type of event and with similar latency, but on the other hand, the time-domain ERP component contains only phase-locked activity whereas the time-frequency response additionally contains non-phase-locked dynamics. Here we examined the ERN and error-related delta/theta activity in relation to each other, focusing on within-subject analyses that utilize single-trial data. Using logistic regression, we constructed three statistical models in which the accuracy of each trial was predicted from the ERN, delta/theta power, or both. We found that both the ERN and delta/theta power worked roughly equally well as predictors of single-trial accuracy (~70% accurate prediction). Furthermore, a model including both measures provided a stronger overall prediction compared to either model alone. Based on these findings two conclusions are drawn: first, the phase-locked part of the EEG signal appears to be roughly as predictive of single-trial response accuracy as the non-phase-locked part; second, the single-trial ERP and delta/theta power contain both overlapping and independent information.
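The statistical models in this record are single-trial logistic regressions: trial accuracy regressed on ERN amplitude, delta/theta power, or both. A self-contained numpy sketch on synthetic data (feature effect sizes, trial counts, and the plain gradient-descent fit are all assumptions; the study's exact model specification may differ):

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical single-trial features: ERN amplitude and theta power, each
# carrying partly overlapping information about whether the trial was an error.
n = 2000
error_trial = rng.random(n) < 0.5
ern = rng.normal(0.0, 1.0, n) - 1.5 * error_trial      # more negative on errors
theta = rng.normal(0.0, 1.0, n) + 1.2 * error_trial    # higher power on errors
X = np.column_stack([np.ones(n), ern, theta])
y = error_trial.astype(float)

# Plain logistic regression fitted by gradient descent (no external libraries).
w = np.zeros(3)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.1 * (X.T @ (p - y)) / n

accuracy = np.mean((1.0 / (1.0 + np.exp(-X @ w)) > 0.5) == error_trial)
print(f"single-trial classification accuracy: {accuracy:.2f}")
```

Fitting the same model with each predictor alone and comparing prediction accuracy mirrors the study's test of whether the phase-locked and non-phase-locked signals carry independent information.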

  2. Error analysis for semi-analytic displacement derivatives with respect to shape and sizing variables

    NASA Technical Reports Server (NTRS)

    Fenyes, Peter A.; Lust, Robert V.

    1989-01-01

Sensitivity analysis is fundamental to the solution of structural optimization problems. Consequently, much research has focused on the efficient computation of static displacement derivatives. As originally developed, these methods relied on analytical representations of the derivatives of the structural stiffness matrix K with respect to the design variables b_i. To extend these methods to complex finite element formulations and to facilitate their implementation into structural optimization programs built on general finite element analysis codes, the semi-analytic method was developed. In this method the matrix ∂K/∂b_i is approximated by finite differences. Although it is well known that the accuracy of the semi-analytic method depends on the finite difference parameter, recent work has suggested that more fundamental inaccuracies exist in the method when it is used for shape optimization. Another study has argued qualitatively that these errors are related to nonuniform errors in the stiffness matrix derivatives. Here the accuracy of the semi-analytic method is investigated. A general framework is developed for the error analysis, and it is then shown analytically that the errors in the method are entirely accounted for by errors in ∂K/∂b_i. Furthermore, it is demonstrated that acceptable accuracy in the derivatives can be obtained through careful selection of the finite difference parameter.
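The semi-analytic recipe itself is compact: for Ku = F, differentiate to get du/db = K⁻¹(∂F/∂b − (∂K/∂b)u), and replace only ∂K/∂b by a finite difference. A 2-DOF spring-chain sketch comparing it with an overall finite difference of the displacements (the structure and all numbers are illustrative assumptions):

```python
import numpy as np

k2, F = 60.0, np.array([0.0, 10.0])

def K(b):
    # Stiffness matrix of a 2-DOF spring chain; the design variable b is the
    # ground-spring stiffness.
    return np.array([[b + k2, -k2], [-k2, k2]])

b = 100.0
u = np.linalg.solve(K(b), F)

# Semi-analytic method: finite-difference only dK/db, then solve analytically
# du/db = -K^{-1} (dK/db) u   (the load F is independent of b here).
h = 1e-5
dK_db = (K(b + h) - K(b - h)) / (2.0 * h)
du_db = -np.linalg.solve(K(b), dK_db @ u)

# Reference: overall central finite difference of the displacements.
du_fd = (np.linalg.solve(K(b + h), F) - np.linalg.solve(K(b - h), F)) / (2.0 * h)
print(du_db, du_fd)
```

The paper's point is that any discrepancy between the two columns above traces back entirely to the finite-difference error in ∂K/∂b.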

  3. Where is the effect of frequency in word production? Insights from aphasic picture naming errors

    PubMed Central

    Kittredge, Audrey K.; Dell, Gary S.; Verkuilen, Jay; Schwartz, Myrna F.

    2010-01-01

    Some theories of lexical access in production locate the effect of lexical frequency at the retrieval of a word’s phonological characteristics, as opposed to the prior retrieval of a holistic representation of the word from its meaning. Yet there is evidence from both normal and aphasic individuals that frequency may influence both of these retrieval processes. This inconsistency is especially relevant in light of recent attempts to determine the representation of another lexical property, age of acquisition or AoA, whose effect is similar to that of frequency. To further explore the representations of these lexical variables in the word retrieval system, we performed hierarchical, multinomial logistic regression analyses of 50 aphasic patients’ picture-naming responses. While both log frequency and AoA had a significant influence on patient accuracy and led to fewer phonologically related errors and omissions, only log frequency had an effect on semantically related errors. These results provide evidence for a lexical access process sensitive to frequency at all stages, but with AoA having a more limited effect. PMID:18704797

  4. Phoneme frequency effects in jargon aphasia: a phonological investigation of nonword errors.

    PubMed

    Robson, Jo; Pring, Tim; Marshall, Jane; Chiat, Shula

    2003-04-01

This study investigates the nonwords produced by a jargon speaker, LT. Despite presenting with severe neologistic jargon, LT can produce discrete responses in picture naming tasks, thus allowing the properties of his jargon to be investigated. This ability was exploited in two naming tasks. The first showed that LT's nonword errors are related to their targets despite being generally unrecognizable. This relatedness appears to be a general property of his errors, suggesting that they are produced by lexical rather than nonlexical means. The second naming task used a set of stimuli controlled for their phonemic content, allowing an investigation of target phonology at the level of individual phonemes. Nonword responses maintained the English distribution of consonants and showed a significant relationship to the target phonologies. A strong influence of phoneme frequency was identified. High-frequency consonants showed a pattern of frequent but indiscriminate use. Low-frequency consonants were realised less often but were largely restricted to target-related contexts, rarely appearing as error phonology. The findings are explained within a lexical activation network, with the proposal that the resting levels of phoneme nodes are frequency sensitive. Predictions for recovery from jargon aphasia and suggestions for future investigations are made.

  5. Experiments and error analysis of laser ranging based on frequency-sweep polarization modulation

    NASA Astrophysics Data System (ADS)

    Gao, Shuyuan; Ji, Rongyi; Li, Yao; Cheng, Zhi; Zhou, Weihu

    2016-11-01

Frequency-sweep polarization modulation ranging uses a polarization-modulated laser beam to determine the distance to the target: the modulation frequency is swept, the frequency values at which the transmitted and received signals are in phase are measured, and the distance is calculated from these values. This method achieves much higher theoretical measuring accuracy than the phase-difference method because it avoids direct phase measurement. However, the actual accuracy of the system is limited, since additional phase retardation occurs in the measuring optical path when optical elements are imperfectly manufactured or installed. In this paper, the working principle of the frequency-sweep polarization modulation ranging method is analyzed, a transmission model of the polarization state in the light path is built based on Jones matrix theory, and the additional phase retardation of the λ/4 wave plate and the PBS, together with its impact on measuring performance, is analyzed. Theoretical results show that the wave plate's azimuth error dominates the limitation of ranging accuracy. According to the system design index, element tolerances and an error-correction method for the system are proposed; a ranging system is built and a ranging experiment is performed. Experimental results show that with the proposed tolerances the system can satisfy the accuracy requirement. The present work has guide value for further research on system design and error distribution.
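The distance recovery behind such in-phase measurements can be shown in a few lines: transmitted and received signals are in phase whenever the round-trip delay is an integer number of modulation periods, i.e. at frequencies f_n = n·c/(2d), so the spacing of successive in-phase frequencies gives d = c/(2Δf). A sketch with an assumed target distance (the idealization ignores the retardation errors the paper analyzes):

```python
import numpy as np

c = 299_792_458.0                     # speed of light, m/s
d_true = 3.75                         # hypothetical target distance, m

# Modulation frequencies at which transmitted and received signals are in
# phase: f_n = n * c / (2 d), one per extra round-trip modulation period.
n = np.arange(40, 46)
f_in_phase = n * c / (2.0 * d_true)

# Distance from the mean spacing of successive in-phase frequencies.
delta_f = np.diff(f_in_phase).mean()
d_est = c / (2.0 * delta_f)
print(f"estimated distance: {d_est:.6f} m")
```

Imperfect optics shift where the in-phase condition is detected, which is why the wave-plate azimuth tolerance dominates the real accuracy budget.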

  6. Error detection and correction for a multiple frequency quaternary phase shift keyed signal

    NASA Astrophysics Data System (ADS)

    Hopkins, Kevin S.

    1989-06-01

A multiple frequency quaternary phase shift keyed (MFQPSK) signaling system was developed and experimentally tested in a controlled environment. In order to ensure that the quality of the received signal is such that information recovery is possible, error detection/correction (EDC) must be used. The various available EDC coding schemes are reviewed and their application to the MFQPSK signaling system is analyzed. Hamming, Golay, Bose-Chaudhuri-Hocquenghem (BCH), and Reed-Solomon (R-S) block codes, as well as convolutional codes, are presented and analyzed in the context of specific MFQPSK system parameters. A computer program was developed to compute bit error probabilities as a function of signal-to-noise ratio. Results demonstrate that various EDC schemes are suitable for the MFQPSK signal structure, and that significant performance improvements are possible with the use of certain error correction codes.
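The simplest of the block codes reviewed above, Hamming(7,4), already shows the mechanism: four data bits gain three parity bits, and the syndrome of a received word points at any single flipped bit. A small self-contained sketch (generic textbook code, not tied to the MFQPSK parameters of the report):

```python
import numpy as np

# Hamming(7,4): corrects any single bit error in a 7-bit codeword.
G = np.array([[1,0,0,0,1,1,0],
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]])        # generator matrix [I4 | P]
H = np.array([[1,1,0,1,1,0,0],
              [1,0,1,1,0,1,0],
              [0,1,1,1,0,0,1]])        # parity-check matrix [P^T | I3]

def encode(data4):
    return (np.array(data4) @ G) % 2

def decode(word7):
    syndrome = (H @ word7) % 2
    if syndrome.any():                 # syndrome matches the bad bit's H column
        err = next(i for i in range(7) if (H[:, i] == syndrome).all())
        word7 = word7.copy()
        word7[err] ^= 1
    return word7[:4]

code = encode([1, 0, 1, 1])
corrupted = code.copy()
corrupted[2] ^= 1                      # single channel error
print(decode(corrupted))               # recovers [1 0 1 1]
```

The report's comparison then weighs such correction gains against the rate loss for each code family at the MFQPSK operating SNRs.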

  7. Performance evaluation of pitch lap in correcting mid-spatial-frequency errors under different smoothing parameters

    NASA Astrophysics Data System (ADS)

    Xu, Lichao; Wan, Yongjian; Liu, Haitao; Wang, Jia

    2016-10-01

Smoothing is a convenient and efficient way to restrain middle-spatial-frequency (MSF) errors. Based on experience, the lap diameter, rotation speed, lap pressure, and hardness of the pitch layer are important for correcting MSF errors. Therefore, nine groups of experiments are designed with the orthogonal method to confirm the significance of the above parameters. Based on Zhang's model, PV (peak-to-valley) and RMS (root mean square) versus processing cycles are analyzed before and after smoothing, together with the smoothing limit and smoothing rate with which different parameter sets correct MSF errors. Combined with the deviation analysis, we distinguish between dominant and subordinate parameters and find the optimal combination and governing law of the various parameters, so as to guide further research and fabrication.

  8. Robust nonstationary jammer mitigation for GPS receivers with instantaneous frequency error tolerance

    NASA Astrophysics Data System (ADS)

    Wang, Ben; Zhang, Yimin D.; Qin, Si; Amin, Moeness G.

    2016-05-01

In this paper, we propose a nonstationary jammer suppression method for GPS receivers when the signals are sparsely sampled. Missing data samples induce noise-like artifacts in the time-frequency (TF) distribution and ambiguity function of the received signals, which lead to reduced capability and degraded performance in jammer signature estimation and excision. In the proposed method, a data-dependent TF kernel is utilized to mitigate the artifacts, and sparse reconstruction methods are then applied to obtain instantaneous frequency (IF) estimates of the jammers. In addition, an error tolerance on the IF estimate is applied to achieve robust jammer suppression performance in the presence of IF estimation inaccuracy.

  9. Error Estimates Derived from the Data for Least-Squares Spline Fitting

    SciTech Connect

    Jerome Blair

    2007-06-25

The use of least-squares fitting by cubic splines for the purpose of noise reduction in measured data is studied. Splines with variable mesh size are considered. The error, the difference between the input signal and its estimate, is divided into two sources: the R-error, which depends only on the noise and increases with decreasing mesh size, and the F-error, which depends only on the signal and decreases with decreasing mesh size. The estimation of both errors as a function of time is demonstrated. The R-error estimation requires knowledge of the statistics of the noise and uses well-known methods. The primary contribution of the paper is a method for estimating the F-error that requires no prior knowledge of the signal except that it has four derivatives. It is calculated from the difference between two different spline fits to the data and is illustrated with Monte Carlo simulations and with an example.
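The F-error idea, estimating the signal-dependent truncation error from the difference between two fits at different mesh sizes, can be sketched without a spline library by using independent least-squares cubic fits on each mesh interval as a crude stand-in for a cubic-spline fit (no continuity between segments is enforced, unlike real splines; the signal, noise level, and mesh sizes are assumptions).

```python
import numpy as np

rng = np.random.default_rng(9)

t = np.linspace(0.0, 1.0, 801)
signal = np.sin(2.0 * np.pi * t) ** 3                # smooth "true" signal
data = signal + rng.normal(0.0, 0.05, t.size)        # noisy measurement

def piecewise_cubic_fit(t, y, n_seg):
    # Crude stand-in for a least-squares cubic-spline fit: independent cubic
    # LS fits on n_seg equal mesh intervals.
    fit = np.empty_like(y)
    edges = np.linspace(t[0], t[-1], n_seg + 1)
    for a, b in zip(edges[:-1], edges[1:]):
        m = (t >= a) & (t <= b)
        fit[m] = np.polyval(np.polyfit(t[m], y[m], 3), t[m])
    return fit

# F-error estimated pointwise from the difference between a coarse-mesh and
# a fine-mesh fit, in the spirit of the paper's method.
coarse = piecewise_cubic_fit(t, data, 4)
fine = piecewise_cubic_fit(t, data, 8)
f_error_estimate = np.abs(coarse - fine)
print(f"max estimated F-error: {f_error_estimate.max():.3f}")
```

Because the F-error shrinks with mesh size while the R-error grows, such a two-fit difference isolates the signal-dependent component without knowing the signal.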

  10. A Reduced-frequency Approach for Calculating Dynamic Derivatives

    NASA Technical Reports Server (NTRS)

    Murman, Scott M.

    2005-01-01

    Computational Fluid Dynamics (CFD) is increasingly being used to both augment and create an aerodynamic performance database for aircraft configurations. This aerodynamic database contains the response of the aircraft to varying flight conditions and control surface deflections. The current work presents a novel method for calculating dynamic stability derivatives which reduces the computational cost over traditional unsteady CFD approaches by an order of magnitude, while still being applicable to arbitrarily complex geometries over a wide range of flow regimes. The primary thesis of this work is that the response to a forced motion can often be represented with a small, predictable number of frequency components without loss of accuracy. By resolving only those frequencies of interest, the computational effort is significantly reduced so that the routine calculation of dynamic derivatives becomes practical. The current implementation uses this same non-linear, frequency-domain approach and extends the application to the 3-D Euler equations. The current work uses a Cartesian, embedded-boundary method to automate the generation of dynamic stability derivatives.
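The core frequency-domain idea can be shown with a single forced pitch oscillation: if the moment response is linear in the angle of attack and its rate, projecting the response onto the in-phase and out-of-phase components at the forcing frequency recovers the static and damping derivatives. This is a synthetic illustration of that projection only, not the paper's nonlinear frequency-domain Euler solver; all coefficients are assumed for the example.

```python
import numpy as np

# Hypothetical forced pitch oscillation; one frequency suffices to separate the
# in-phase (static) and out-of-phase (damping) contributions.
c_ref, V, omega, alpha0 = 1.0, 50.0, 8.0, np.deg2rad(1.0)
Cm_alpha, Cm_q = -0.8, -4.5                    # "true" derivatives to recover

t = np.linspace(0.0, 2.0 * np.pi / omega, 2048, endpoint=False)  # one period
alpha = alpha0 * np.sin(omega * t)
alpha_dot = alpha0 * omega * np.cos(omega * t)
Cm = Cm_alpha * alpha + Cm_q * alpha_dot * c_ref / (2.0 * V)     # linear response

# Fourier projection onto sin/cos at the forcing frequency.
in_phase = 2.0 * np.mean(Cm * np.sin(omega * t))
quadrature = 2.0 * np.mean(Cm * np.cos(omega * t))
Cm_alpha_est = in_phase / alpha0
Cm_q_est = quadrature * 2.0 * V / (alpha0 * omega * c_ref)
print(Cm_alpha_est, Cm_q_est)
```

Resolving only the handful of frequency components that matter, rather than time-marching the full transient, is what yields the order-of-magnitude cost reduction claimed in the abstract.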

  11. Determination of errors in derived magnetic field directions in geosynchronous orbit: results from a statistical approach

    DOE PAGES

    Chen, Yue; Cunningham, Gregory; Henderson, Michael

    2016-09-21

    This study aims to statistically estimate the errors in local magnetic field directions that are derived from electron directional distributions measured by Los Alamos National Laboratory geosynchronous (LANL GEO) satellites. First, by comparing derived and measured magnetic field directions along the GEO orbit to those calculated from three selected empirical global magnetic field models (including a static Olson and Pfitzer 1977 quiet magnetic field model, a simple dynamic Tsyganenko 1989 model, and a sophisticated dynamic Tsyganenko 2001 storm model), it is shown that the errors in both derived and modeled directions are at least comparable. Second, using a newly developed proxy method as well as comparing results from empirical models, we are able to provide for the first time circumstantial evidence showing that derived magnetic field directions should statistically match the real magnetic directions better, with averaged errors < ∼2°, than those from the three empirical models with averaged errors > ∼5°. In addition, our results suggest that the errors in derived magnetic field directions do not depend much on magnetospheric activity, in contrast to the empirical field models. Finally, as applications of the above conclusions, we show examples of electron pitch angle distributions observed by LANL GEO and also take the derived magnetic field directions as the real ones so as to test the performance of empirical field models along the GEO orbits, with results suggesting dependence on solar cycles as well as satellite locations. This study demonstrates the validity and value of the method that infers local magnetic field directions from particle spin-resolved distributions.

  12. Determination of errors in derived magnetic field directions in geosynchronous orbit: results from a statistical approach

    NASA Astrophysics Data System (ADS)

    Chen, Yue; Cunningham, Gregory; Henderson, Michael

    2016-09-01

    This study aims to statistically estimate the errors in local magnetic field directions that are derived from electron directional distributions measured by Los Alamos National Laboratory geosynchronous (LANL GEO) satellites. First, by comparing derived and measured magnetic field directions along the GEO orbit to those calculated from three selected empirical global magnetic field models (including a static Olson and Pfitzer 1977 quiet magnetic field model, a simple dynamic Tsyganenko 1989 model, and a sophisticated dynamic Tsyganenko 2001 storm model), it is shown that the errors in both derived and modeled directions are at least comparable. Second, using a newly developed proxy method as well as comparing results from empirical models, we are able to provide for the first time circumstantial evidence showing that derived magnetic field directions should statistically match the real magnetic directions better, with averaged errors < ˜ 2°, than those from the three empirical models with averaged errors > ˜ 5°. In addition, our results suggest that the errors in derived magnetic field directions do not depend much on magnetospheric activity, in contrast to the empirical field models. Finally, as applications of the above conclusions, we show examples of electron pitch angle distributions observed by LANL GEO and also take the derived magnetic field directions as the real ones so as to test the performance of empirical field models along the GEO orbits, with results suggesting dependence on solar cycles as well as satellite locations. This study demonstrates the validity and value of the method that infers local magnetic field directions from particle spin-resolved distributions.

  13. Sinusoidal Siemens star spatial frequency response measurement errors due to misidentified target centers

    SciTech Connect

    Birch, Gabriel Carisle; Griffin, John Clark

    2015-07-23

    Numerous methods are available to measure the spatial frequency response (SFR) of an optical system. A recent change to the ISO 12233 photography resolution standard includes a sinusoidal Siemens star test target. We take the sinusoidal Siemens star proposed by the ISO 12233 standard, measure system SFR, and perform an analysis of errors induced by incorrectly identifying the center of a test target. We show a closed-form solution for the radial profile intensity measurement given an incorrectly determined center and describe how this error reduces the measured SFR of the system. As a result, using the closed-form solution, we propose a two-step process by which test target centers are corrected and the measured SFR is restored to the nominal, correctly centered values.
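
The modulation loss from a misidentified center can be reproduced with a synthetic sinusoidal star. This is an illustration of the effect, not the paper's closed-form solution; the spoke count, sampling radius, and center offset are assumed values:

```python
import numpy as np

N = 36                       # cycle count of the sinusoidal Siemens star (assumed)
r = 50.0                     # sampling radius in pixels (assumed)
phi = np.linspace(0, 2 * np.pi, 4096, endpoint=False)

def star(x, y):
    # sinusoidal Siemens star centered at the origin
    theta = np.arctan2(y, x)
    return 0.5 + 0.5 * np.sin(N * theta)

def harmonic_amp(dx, dy):
    # sample a circle of radius r about an (incorrectly) assumed center (dx, dy)
    x = dx + r * np.cos(phi)
    y = dy + r * np.sin(phi)
    prof = star(x, y)
    # amplitude of the N-cycle Fourier component of the sampled radial profile
    return abs(2.0 * np.mean(prof * np.exp(-1j * N * phi)))

a_centered = harmonic_amp(0.0, 0.0)   # full modulation, ~0.5
a_offset = harmonic_amp(2.0, 0.0)     # reduced modulation from a 2-pixel center error
print(a_centered, a_offset)
```

The center error phase-modulates the sampled profile, spreading energy out of the N-cycle component and lowering the measured SFR, which is the degradation the two-step correction is designed to remove.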

  14. Sinusoidal Siemens star spatial frequency response measurement errors due to misidentified target centers

    DOE PAGES

    Birch, Gabriel Carisle; Griffin, John Clark

    2015-07-23

    Numerous methods are available to measure the spatial frequency response (SFR) of an optical system. A recent change to the ISO 12233 photography resolution standard includes a sinusoidal Siemens star test target. We take the sinusoidal Siemens star proposed by the ISO 12233 standard, measure system SFR, and perform an analysis of errors induced by incorrectly identifying the center of a test target. We show a closed-form solution for the radial profile intensity measurement given an incorrectly determined center and describe how this error reduces the measured SFR of the system. As a result, using the closed-form solution, we propose a two-step process by which test target centers are corrected and the measured SFR is restored to the nominal, correctly centered values.

  15. Sinusoidal Siemens star spatial frequency response measurement errors due to misidentified target centers

    NASA Astrophysics Data System (ADS)

    Birch, Gabriel C.; Griffin, John C.

    2015-07-01

    Numerous methods are available to measure the spatial frequency response (SFR) of an optical system. A recent change to the ISO 12233 photography resolution standard includes a sinusoidal Siemens star test target. We take the sinusoidal Siemens star proposed by the ISO 12233 standard, measure system SFR, and perform an analysis of errors induced by incorrectly identifying the center of a test target. We show a closed-form solution for the radial profile intensity measurement given an incorrectly determined center and describe how this error reduces the measured SFR of the system. Using the closed-form solution, we propose a two-step process by which test target centers are corrected and the measured SFR is restored to the nominal, correctly centered values.

  16. Improving transient performance of adaptive control architectures using frequency-limited system error dynamics

    NASA Astrophysics Data System (ADS)

    Yucelen, Tansel; De La Torre, Gerardo; Johnson, Eric N.

    2014-11-01

    Although adaptive control theory offers mathematical tools to achieve system performance without excessive reliance on dynamical system models, its applications to safety-critical systems can be limited due to poor transient performance and robustness. In this paper, we develop an adaptive control architecture to achieve stabilisation and command following of uncertain dynamical systems with improved transient performance. Our framework consists of a new reference system and an adaptive controller. The proposed reference system captures a desired closed-loop dynamical system behaviour modified by a mismatch term representing the high-frequency content between the uncertain dynamical system and this reference system, i.e., the system error. In particular, this mismatch term allows the frequency content of the system error dynamics to be limited, which is used to drive the adaptive controller. It is shown that this key feature of our framework yields fast adaptation without incurring high-frequency oscillations in the transient performance. We further show the effects of design parameters on the system performance, analyse closeness of the uncertain dynamical system to the unmodified (ideal) reference system, discuss robustness of the proposed approach with respect to time-varying uncertainties and disturbances, and make connections to gradient minimisation and classical control theory. A numerical example is provided to demonstrate the efficacy of the proposed architecture.

  17. On the errors in molecular dipole moments derived from accurate diffraction data.

    PubMed

    Coppens; Volkov; Abramov; Koritsanszky

    1999-09-01

    The error in the molecular dipole moment as derived from accurate X-ray diffraction data is shown to be origin dependent in the general case. It is independent of the choice of origin if an electroneutrality constraint is introduced, even when additional constraints are applied to the monopole populations. If a constraint is not applied to individual moieties, as is appropriate for multicomponent crystals or crystals containing molecular ions, the geometric center of the entity considered is a suitable choice of origin for the error treatment.
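
The origin-dependence argument is easy to verify with a point-charge toy model (hypothetical charges and positions; p = Σ qᵢrᵢ): the dipole moment is origin independent exactly when the net charge vanishes, which is what the electroneutrality constraint enforces.

```python
import numpy as np

positions = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])

def dipole(charges, origin):
    # p = sum_i q_i (r_i - origin)
    return np.sum(charges[:, None] * (positions - origin), axis=0)

shift = np.array([5.0, -2.0, 1.0])

# electroneutral set: net charge zero -> origin independent
q_neutral = np.array([0.4, -0.4, 0.3, -0.3])
p0 = dipole(q_neutral, np.zeros(3))
p_shift = dipole(q_neutral, shift)
print(np.allclose(p0, p_shift))          # True

# charged moiety (net charge +0.1) -> moment depends on the chosen origin
q_ion = np.array([0.4, -0.4, 0.3, -0.2])
print(np.allclose(dipole(q_ion, np.zeros(3)), dipole(q_ion, shift)))  # False
```

Shifting the origin by o changes p by −o·Σqᵢ, so for a molecular ion the moment (and hence its error) must be referred to a stated origin, such as the geometric center recommended above.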

  18. Wind Power Forecasting Error Frequency Analyses for Operational Power System Studies: Preprint

    SciTech Connect

    Florita, A.; Hodge, B. M.; Milligan, M.

    2012-08-01

    The examination of wind power forecasting errors is crucial for optimal unit commitment and economic dispatch of power systems with significant wind power penetrations. This scheduling process includes both renewable and nonrenewable generators, and the incorporation of wind power forecasts will become increasingly important as wind fleets constitute a larger portion of generation portfolios. This research considers the Western Wind and Solar Integration Study database of wind power forecasts and numerical actualizations. This database comprises more than 30,000 locations spread over the western United States, with a total wind power capacity of 960 GW. Error analyses for individual sites and for specific balancing areas are performed using the database, quantifying the fit to theoretical distributions through goodness-of-fit metrics. Insights into wind-power forecasting error distributions are established for various levels of temporal and spatial resolution, contrasts made among the frequency distribution alternatives, and recommendations put forth for harnessing the results. Empirical data are used to produce more realistic site-level forecasts than previously employed, such that higher resolution operational studies are possible. This research feeds into a larger work of renewable integration through the links wind power forecasting has with various operational issues, such as stochastic unit commitment and flexible reserve level determination.
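
The distribution-fitting step can be sketched as follows, using synthetic heavy-tailed errors in place of the WWSIS data. The Laplace/normal candidate pair and the Kolmogorov-Smirnov statistic as the goodness-of-fit metric are illustrative choices, not necessarily those of the study:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# synthetic wind power forecast errors: roughly zero-mean, heavier-tailed than Gaussian
errors = rng.laplace(loc=0.0, scale=0.05, size=5000)

fits = {}
for name, dist in [("norm", stats.norm), ("laplace", stats.laplace)]:
    params = dist.fit(errors)                      # maximum-likelihood fit
    ks = stats.kstest(errors, name, args=params)   # goodness-of-fit statistic
    fits[name] = ks.statistic

# the heavier-tailed candidate should fit these errors better (smaller KS distance)
print(fits)
```

Repeating such fits per site and per balancing area, and at several temporal resolutions, is what allows the frequency-distribution alternatives to be contrasted quantitatively.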

  19. Error Probability of MRC in Frequency Selective Nakagami Fading in the Presence of CCI and ACI

    NASA Astrophysics Data System (ADS)

    Rahman, Mohammad Azizur; Sum, Chin-Sean; Funada, Ryuhei; Sasaki, Shigenobu; Baykas, Tuncer; Wang, Junyi; Harada, Hiroshi; Kato, Shuzo

    An exact expression of error rate is developed for maximal ratio combining (MRC) in an independent but not necessarily identically distributed frequency selective Nakagami fading channel, taking into account inter-symbol, co-channel and adjacent channel interferences (ISI, CCI and ACI, respectively). The characteristic function (CF) method is adopted. While accurate analysis of MRC performance in a frequency selective channel taking ISI (and CCI) into account is scarce, such analysis for ACI has not been addressed yet. The general analysis presented in this paper solves a problem of past and present interest, which has so far been studied either approximately or in simulations. The exact method presented also lets us obtain an approximate error rate expression based on a Gaussian approximation (GA) of the interferences. It is shown that, especially when the channel is lightly faded, has fewer multipath components and a decaying delay profile, the GA may be substantially inaccurate at high signal-to-noise ratio. However, the exact results also reveal an important finding: there is a range of parameters where the simpler GA is reasonably accurate and hence the more involved exact expression can be dispensed with.

  20. Ionospheric error contribution to GNSS single-frequency navigation at the 2014 solar maximum

    NASA Astrophysics Data System (ADS)

    Orus Perez, Raul

    2017-04-01

    For single-frequency users of the global satellite navigation system (GNSS), one of the main error contributors is the ionospheric delay, which impacts the received signals. As is well-known, GPS and Galileo transmit global models to correct the ionospheric delay, while the international GNSS service (IGS) computes precise post-process global ionospheric maps (GIM) that are considered reference ionospheres. Moreover, accurate ionospheric maps have been recently introduced, which allow for the fast convergence of real-time precise point positioning (PPP) globally. Therefore, testing of the ionospheric models is a key issue for code-based single-frequency users, which constitute the main user segment. The testing proposed in this paper is straightforward: it uses PPP modeling applied to single- and dual-frequency code observations worldwide for 2014. The usage of PPP modeling allows us to quantify—for dual-frequency users—the degradation of the navigation solutions caused by noise and multipath with respect to the different ionospheric modeling solutions, and allows us, in turn, to obtain an independent assessment of the ionospheric models. Compared to the dual-frequency solutions, the GPS and Galileo ionospheric models present worse global performance, with horizontal root mean square (RMS) differences of 1.04 and 0.49 m and vertical RMS differences of 0.83 and 0.40 m, respectively. While very precise global ionospheric models can improve the dual-frequency solution globally, resulting in a horizontal RMS difference of 0.60 m and a vertical RMS difference of 0.74 m, they exhibit a strong dependence on the geographical location and ionospheric activity.

  1. Ionospheric error contribution to GNSS single-frequency navigation at the 2014 solar maximum

    NASA Astrophysics Data System (ADS)

    Orus Perez, Raul

    2016-11-01

    For single-frequency users of the global satellite navigation system (GNSS), one of the main error contributors is the ionospheric delay, which impacts the received signals. As is well-known, GPS and Galileo transmit global models to correct the ionospheric delay, while the international GNSS service (IGS) computes precise post-process global ionospheric maps (GIM) that are considered reference ionospheres. Moreover, accurate ionospheric maps have been recently introduced, which allow for the fast convergence of real-time precise point positioning (PPP) globally. Therefore, testing of the ionospheric models is a key issue for code-based single-frequency users, which constitute the main user segment. The testing proposed in this paper is straightforward: it uses PPP modeling applied to single- and dual-frequency code observations worldwide for 2014. The usage of PPP modeling allows us to quantify—for dual-frequency users—the degradation of the navigation solutions caused by noise and multipath with respect to the different ionospheric modeling solutions, and allows us, in turn, to obtain an independent assessment of the ionospheric models. Compared to the dual-frequency solutions, the GPS and Galileo ionospheric models present worse global performance, with horizontal root mean square (RMS) differences of 1.04 and 0.49 m and vertical RMS differences of 0.83 and 0.40 m, respectively. While very precise global ionospheric models can improve the dual-frequency solution globally, resulting in a horizontal RMS difference of 0.60 m and a vertical RMS difference of 0.74 m, they exhibit a strong dependence on the geographical location and ionospheric activity.

  2. Minimizing systematic errors in phytoplankton pigment concentration derived from satellite ocean color measurements

    SciTech Connect

    Martin, D.L.

    1992-01-01

    Water-leaving radiances and phytoplankton pigment concentrations are calculated from Coastal Zone Color Scanner (CZCS) total radiance measurements by separating atmospheric Rayleigh and aerosol radiances from the total radiance signal measured at the satellite. Multiple scattering interactions between Rayleigh and aerosol components together with other meteorologically-moderated radiances cause systematic errors in calculated water-leaving radiances and produce errors in retrieved phytoplankton pigment concentrations. This thesis developed techniques which minimize the effects of these systematic errors in Level IIA CZCS imagery. Results of previous radiative transfer modeling by Gordon and Castano are extended to predict the pixel-specific magnitude of systematic errors caused by Rayleigh-aerosol multiple scattering interactions. CZCS orbital passes in which the ocean is viewed through a modeled, physically realistic atmosphere are simulated mathematically and radiance-retrieval errors are calculated for a range of aerosol optical depths. Pixels which exceed an error threshold in the simulated CZCS image are rejected in a corresponding actual image. Meteorological phenomena also cause artifactual errors in CZCS-derived phytoplankton pigment concentration imagery. Unless data contaminated with these effects are masked and excluded from analysis, they will be interpreted as containing valid biological information and will contribute significantly to erroneous estimates of phytoplankton temporal and spatial variability. A method is developed which minimizes these errors through a sequence of quality-control procedures including the calculation of variable cloud-threshold radiances, the computation of the extent of electronic overshoot from bright reflectors, and the imposition of a buffer zone around clouds to exclude contaminated data.

  3. Wrongful Conviction: Perceptions of Criminal Justice Professionals Regarding the Frequency of Wrongful Conviction and the Extent of System Errors

    ERIC Educational Resources Information Center

    Ramsey, Robert J.; Frank, James

    2007-01-01

    Drawing on a sample of 798 Ohio criminal justice professionals (police, prosecutors, defense attorneys, judges), the authors examine respondents' perceptions regarding the frequency of system errors (i.e., professional error and misconduct suggested by previous research to be associated with wrongful conviction), and wrongful felony conviction.…

  4. Spatial Distribution of the Errors in Modeling the Mid-Latitude Critical Frequencies by Different Models

    NASA Astrophysics Data System (ADS)

    Kilifarska, N. A.

    There are models that describe the spatial distribution of the greatest frequency yielding reflection from the F2 ionospheric layer (foF2). However, the distribution of these models' errors over the globe, and how the errors depend on season, solar activity, etc., has been unknown until now. The aim of the present paper is therefore to compare the accuracy of the CCIR and URSI models and of a newly created theoretical model in describing the latitudinal and longitudinal variation of the mid-latitude maximum electron density. A comparison has been made between the above-mentioned models and all available vertical-incidence (VI) data from the Boulder data bank (between 35 deg and 70 deg). Data for three whole years with different solar activity - 1976 (F_10.7 = 73.6), 1981 (F_10.7 = 20.6), 1983 (F_10.7 = 119.6) - have been compared. The final results show that: 1. the areas with the greatest and smallest errors depend on UT, season and solar activity; 2. the error distributions of the CCIR and URSI models are very similar and do not coincide with that of the theoretical model. The latter result indicates that the theoretical model, described briefly below, may be a real alternative to the empirical CCIR and URSI models. The different spatial distributions of the models' errors give users a chance to choose the most appropriate model, depending on their needs. Taking into account that the theoretical model has equal accuracy in regions with many ionosonde stations and in regions without any, this result shows that our model can be used to improve the global mapping of the mid-latitude ionosphere. Moreover, if real values of the input aeronomical parameters (neutral composition, temperatures and winds) are used, it may be expected that this theoretical model can be applied for real-time or near-real-time mapping of the main ionospheric parameters (foF2 and hmF2).

  5. Random errors of oceanic monthly rainfall derived from SSM/I using probability distribution functions

    NASA Technical Reports Server (NTRS)

    Chang, Alfred T. C.; Chiu, Long S.; Wilheit, Thomas T.

    1993-01-01

    Global averages and random errors associated with the monthly oceanic rain rates derived from the Special Sensor Microwave/Imager (SSM/I) data using the technique developed by Wilheit et al. (1991) are computed. Accounting for the beam-filling bias, a global annual average rain rate of 1.26 m is obtained. The error estimation scheme is based on the existence of independent (morning and afternoon) estimates of the monthly mean. Calculations show overall random errors of about 50-60 percent for each 5 deg x 5 deg box. The results are insensitive to different sampling strategies (odd and even days of the month). Comparison of the SSM/I estimates with raingage data collected at the Pacific atoll stations showed a low bias of about 8 percent, a correlation of 0.7, and an rms difference of 55 percent.
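
The independent-estimate idea behind the error scheme can be sketched numerically (synthetic numbers, not SSM/I data): if morning and afternoon retrievals are independent with the same error variance, then var(am − pm) = 2σ², so the per-estimate random error σ can be recovered from the difference series alone.

```python
import numpy as np

rng = np.random.default_rng(7)
true_rate = 0.105            # "true" monthly mean for one grid box (arbitrary units)
sigma = 0.3 * true_rate      # per-estimate random error, assumed 30%
n_months = 2000

# two independent estimates of the same monthly mean (e.g., AM and PM overpasses)
am = true_rate + sigma * rng.standard_normal(n_months)
pm = true_rate + sigma * rng.standard_normal(n_months)

# var(am - pm) = 2 sigma^2 for independent, equal-variance estimates
sigma_est = np.std(am - pm, ddof=1) / np.sqrt(2.0)
monthly_mean = 0.5 * (am + pm)       # combined estimate with variance sigma^2 / 2
print(sigma_est)
```

The same algebra underlies the quoted 50-60 percent random errors: the difference of the two half-month samples isolates the random part while the common signal cancels.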

  6. Estimates of ocean forecast error covariance derived from Hessian Singular Vectors

    NASA Astrophysics Data System (ADS)

    Smith, Kevin D.; Moore, Andrew M.; Arango, Hernan G.

    2015-05-01

    Experience in numerical weather prediction suggests that singular value decomposition (SVD) of a forecast can yield useful a priori information about the growth of forecast errors. It has been shown formally that SVD using the inverse of the expected analysis error covariance matrix to define the norm at initial time yields the Empirical Orthogonal Functions (EOFs) of the forecast error covariance matrix at the final time. Because of their connection to the 2nd derivative of the cost function in 4-dimensional variational (4D-Var) data assimilation, the initial time singular vectors defined in this way are often referred to as the Hessian Singular Vectors (HSVs). In the present study, estimates of ocean forecast errors and forecast error covariance were computed using SVD applied to a baroclinically unstable temperature front in a re-entrant channel using the Regional Ocean Modeling System (ROMS). An identical twin approach was used in which a truth run of the model was sampled to generate synthetic hydrographic observations that were then assimilated into the same model started from an incorrect initial condition using 4D-Var. The 4D-Var system was run sequentially, and forecasts were initialized from each ocean analysis. SVD was performed on the resulting forecasts to compute the HSVs and corresponding EOFs of the expected forecast error covariance matrix. In this study, a reduced rank approximation of the inverse expected analysis error covariance matrix was used to compute the HSVs and EOFs based on the Lanczos vectors computed during the 4D-Var minimization of the cost function. This has the advantage that the entire spectrum of HSVs and EOFs in the reduced space can be computed. The associated singular value spectrum is found to yield consistent and reliable estimates of forecast error variance in the space spanned by the EOFs. In addition, at long forecast lead times the resulting HSVs and companion EOFs are able to capture many features of the actual
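
A reduced toy version of the EOF computation (a plain ensemble SVD, not the Lanczos-based Hessian singular vector machinery of the paper) looks like this; the error patterns and amplitudes are invented:

```python
import numpy as np

rng = np.random.default_rng(9)
n_state, n_members = 200, 50

# synthetic forecast errors dominated by two spatial patterns of unequal variance
p1 = np.sin(np.linspace(0, 3 * np.pi, n_state))
p2 = np.cos(np.linspace(0, 5 * np.pi, n_state))
errors = (np.outer(p1, 3.0 * rng.standard_normal(n_members))
          + np.outer(p2, 1.0 * rng.standard_normal(n_members))
          + 0.1 * rng.standard_normal((n_state, n_members)))

# EOFs of the forecast error covariance via SVD of the centered error ensemble
U, s, Vt = np.linalg.svd(errors - errors.mean(axis=1, keepdims=True),
                         full_matrices=False)
explained = s**2 / np.sum(s**2)

# the leading EOF should align with the dominant error pattern p1
align = abs(U[:, 0] @ p1) / (np.linalg.norm(U[:, 0]) * np.linalg.norm(p1))
print(explained[0], align)
```

The singular-value spectrum plays the same role here as in the paper: it ranks the directions of expected forecast error growth and quantifies how much error variance each EOF explains.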

  7. Demonstration of the frequency offset errors introduced by an incorrect setting of the Zeeman/magnetic field adjustment on the cesium beam frequency standard

    NASA Technical Reports Server (NTRS)

    Kaufmann, D. C.

    1976-01-01

    The fine frequency setting of a cesium beam frequency standard is accomplished by adjusting the C field control with the appropriate Zeeman frequency applied to the harmonic generator. A novice operator in the field, even when using the correct Zeeman frequency input, may mistakenly set the C field to any one of seven major Beam I peaks (fingers) represented by the Ramsey curve. This can result in frequency offset errors of as much as 2.5 parts in 10^10. The effects of maladjustment are demonstrated, and suggestions are discussed on how to avoid the subtle traps associated with C field adjustments.

  8. Low-Frequency Error Extraction and Compensation for Attitude Measurements from STECE Star Tracker.

    PubMed

    Lai, Yuwang; Gu, Defeng; Liu, Junhong; Li, Wenping; Yi, Dongyun

    2016-10-12

    The low frequency errors (LFE) of star trackers are the most penalizing errors for high-accuracy satellite attitude determination. Two test star trackers have been mounted on the Space Technology Experiment and Climate Exploration (STECE) satellite, a small satellite mission developed by China. To extract and compensate the LFE of the attitude measurements for the two test star trackers, a new approach, called the Fourier analysis combined with Vondrak filter method (FAVF), is proposed in this paper. Firstly, the LFE of the two test star trackers' attitude measurements are analyzed and extracted by the FAVF method. Remarkable orbital reproducibility features are found in both test star trackers' attitude measurements. Then, by using the reproducibility feature of the LFE, the two star trackers' LFE patterns are estimated effectively. Finally, based on the actual LFE pattern results, this paper presents a new LFE compensation strategy. The validity and effectiveness of the proposed LFE compensation algorithm is demonstrated by the significant improvement in the consistency between the two test star trackers. The root mean square (RMS) of the relative Euler angle residuals are reduced from [27.95'', 25.14'', 82.43''], 3σ to [16.12'', 15.89'', 53.27''], 3σ.
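
The extraction step can be caricatured with a plain Fourier low-pass in place of the full FAVF method; the orbit period, harmonic content, and noise level below are assumed, not STECE values:

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(0.0, 10 * 5400.0, 10.0)   # ten ~90-minute orbits, 10 s sampling
f_orb = 1.0 / 5400.0                    # orbital frequency, Hz (assumed)

# synthetic star-tracker residual: orbit-periodic LFE plus white measurement noise
lfe_true = (20.0 * np.sin(2 * np.pi * f_orb * t)
            + 8.0 * np.cos(2 * np.pi * 2 * f_orb * t))   # arcsec
residual = lfe_true + 5.0 * rng.standard_normal(t.size)

# crude LFE extraction: keep only Fourier components up to a few orbital harmonics
spec = np.fft.rfft(residual)
freqs = np.fft.rfftfreq(t.size, d=10.0)
spec[freqs > 4 * f_orb] = 0.0
lfe_est = np.fft.irfft(spec, n=t.size)

rms_before = np.sqrt(np.mean((residual - lfe_true) ** 2))
rms_after = np.sqrt(np.mean((lfe_est - lfe_true) ** 2))
print(rms_before, rms_after)
```

Because the LFE is orbit-reproducible, its energy sits in a handful of harmonics of the orbital frequency, which is what makes pattern estimation and subsequent compensation effective.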

  9. Modeling work zone crash frequency by quantifying measurement errors in work zone length.

    PubMed

    Yang, Hong; Ozbay, Kaan; Ozturk, Ozgur; Yildirimoglu, Mehmet

    2013-06-01

    Work zones are temporary traffic control zones that can potentially cause safety problems. Maintaining safety, while implementing necessary changes on roadways, is an important challenge traffic engineers and researchers have to confront. In this study, the risk factors in work zone safety evaluation were identified through the estimation of a crash frequency (CF) model. Measurement errors in explanatory variables of a CF model can lead to unreliable estimates of certain parameters. Among these, work zone length raises a major concern in this analysis because it may change as the construction schedule progresses generally without being properly documented. This paper proposes an improved modeling and estimation approach that involves the use of a measurement error (ME) model integrated with the traditional negative binomial (NB) model. The proposed approach was compared with the traditional NB approach. Both models were estimated using a large dataset that consists of 60 work zones in New Jersey. Results showed that the proposed improved approach outperformed the traditional approach in terms of goodness-of-fit statistics. Moreover it is shown that the use of the traditional NB approach in this context can lead to the overestimation of the effect of work zone length on the crash occurrence.
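
The attenuation effect that motivates the ME model can be demonstrated with a toy Poisson (rather than negative binomial) crash-frequency model; all parameters are invented. Classical measurement error in the length covariate biases its estimated effect toward zero, which is the overestimation/underestimation problem the integrated ME-NB approach addresses:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 20000
length_true = rng.standard_normal(n)                 # standardized work zone length
length_obs = length_true + rng.standard_normal(n)    # mismeasured length
y = rng.poisson(np.exp(0.2 + 0.8 * length_true))     # simulated crash counts

def poisson_fit(x, y, iters=50):
    # Newton-Raphson for a two-parameter log-linear Poisson model
    X = np.column_stack([np.ones_like(x), x])
    beta = np.zeros(2)
    for _ in range(iters):
        mu = np.exp(X @ beta)
        grad = X.T @ (y - mu)
        hess = X.T @ (X * mu[:, None])
        beta += np.linalg.solve(hess, grad)
    return beta

b_true = poisson_fit(length_true, y)   # slope near the true 0.8
b_obs = poisson_fit(length_obs, y)     # attenuated slope from measurement error
print(b_true[1], b_obs[1])
```

With a noise variance equal to the covariate variance, the estimated length effect shrinks by roughly half, illustrating why ignoring measurement error yields unreliable parameter estimates.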

  10. Low-Frequency Error Extraction and Compensation for Attitude Measurements from STECE Star Tracker

    PubMed Central

    Lai, Yuwang; Gu, Defeng; Liu, Junhong; Li, Wenping; Yi, Dongyun

    2016-01-01

    The low frequency errors (LFE) of star trackers are the most penalizing errors for high-accuracy satellite attitude determination. Two test star trackers have been mounted on the Space Technology Experiment and Climate Exploration (STECE) satellite, a small satellite mission developed by China. To extract and compensate the LFE of the attitude measurements for the two test star trackers, a new approach, called the Fourier analysis combined with Vondrak filter method (FAVF), is proposed in this paper. Firstly, the LFE of the two test star trackers’ attitude measurements are analyzed and extracted by the FAVF method. Remarkable orbital reproducibility features are found in both test star trackers’ attitude measurements. Then, by using the reproducibility feature of the LFE, the two star trackers’ LFE patterns are estimated effectively. Finally, based on the actual LFE pattern results, this paper presents a new LFE compensation strategy. The validity and effectiveness of the proposed LFE compensation algorithm is demonstrated by the significant improvement in the consistency between the two test star trackers. The root mean square (RMS) of the relative Euler angle residuals are reduced from [27.95′′, 25.14′′, 82.43′′], 3σ to [16.12′′, 15.89′′, 53.27′′], 3σ. PMID:27754320

  11. Lexical Frequency and Third-Graders' Stress Accuracy in Derived English Word Production

    ERIC Educational Resources Information Center

    Jarmulowicz, Linda; Taran, Valentina L.; Hay, Sarah E.

    2008-01-01

    This study examined the effects of lexical frequency on children's production of accurate primary stress in words derived with nonneutral English suffixes. Forty-four third-grade children participated in an elicited derived word task in which they produced high-frequency, low-frequency, and nonsense-derived words with stress-changing suffixes…

  12. Determination of carboxyhemoglobin in heated blood--sources of error and utility of derivative spectrophotometry.

    PubMed

    Fukui, Y; Matsubara, M; Akane, A; Hama, K; Matsubara, K; Takahashi, S

    1985-01-01

    The cause for discrepancies in results from different methods of the carboxyhemoglobin (HbCO) analysis on the blood from bodies of burn victims was investigated. Blood samples with 0, 50, and 100% carbon monoxide (CO) saturation were heated at various temperatures for some time and then analyzed. Carboxyhemoglobin content was determined by the fourth-derivative spectrophotometric method and compared with results from the usual two-wavelength method. For total hemoglobin measurement, the fourth-derivative technique and cyanmethemoglobin method were used. Turbidity in blood samples, which occurred when samples were heated above 50 degrees C, affected the analysis. At about 70 degrees C, coagulation and hemoglobin degeneration occurred accelerating the errors of determined values. The fourth-derivative technique, however, proved to be independent of the turbidity and would be useful for the analysis on the blood without hemoglobin degeneration.
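
Why a derivative technique is insensitive to turbidity can be sketched as follows: a broad, slowly varying (here, linear) baseline has an essentially zero fourth derivative, so it drops out of the analysis while narrow absorption bands survive. The band shape and the Savitzky-Golay differentiation settings are assumptions for the illustration, not the paper's instrument parameters:

```python
import numpy as np
from scipy.signal import savgol_filter

wl = np.linspace(450.0, 650.0, 2001)                    # wavelength grid, nm
band = np.exp(-0.5 * ((wl - 540.0) / 8.0) ** 2)         # narrow absorption band
baseline = 0.5 + 1e-3 * (wl - 450.0)                    # broad turbidity-like offset

spec_clear = band
spec_turbid = band + baseline

def d4(y):
    # fourth derivative via Savitzky-Golay smoothing differentiation
    return savgol_filter(y, window_length=101, polyorder=6, deriv=4,
                         delta=wl[1] - wl[0])

# the slowly varying baseline vanishes under the 4th derivative,
# so the clear and turbid spectra give the same derivative signal
diff = np.max(np.abs(d4(spec_turbid) - d4(spec_clear)))
peak = np.max(np.abs(d4(spec_clear)))
print(diff / peak)
```

Heat-induced turbidity adds a broad scattering background of exactly this slowly varying kind, which is why the fourth-derivative readings stay stable where the two-wavelength method drifts.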

  13. Spindle error motion measurement using concentric circle grating and sinusoidal frequency-modulated semiconductor lasers

    NASA Astrophysics Data System (ADS)

    Higuchi, Masato; Vu, Thanh-Tung; Aketagawa, Masato

    2016-11-01

    The conventional method of measuring radial, axial and angular spindle motion is complicated and requires a large space. A smaller instrument is preferable for accurate and practical measurement. A method of measuring spindle error motion using sinusoidal phase modulation and a concentric circle grating was described in the past. In that method, a concentric circle grating with a fine pitch is attached to the spindle. Three optical sensors are fixed under the grating and observe appropriate positions on it. Each optical sensor consists of a sinusoidally frequency-modulated semiconductor laser as the light source and two interferometers. One interferometer measures the axial spindle motion by detecting the interference fringe between the beam reflected from a fixed mirror and the 0th-order diffracted beam. The other interferometer measures the radial spindle motion by detecting the interference fringe between the ±2nd-order diffracted beams. With these optical sensors, three axial and three radial displacements of the grating can be measured. From these measured displacements, the axial, radial and angular spindle motions are calculated concurrently. In a previous experiment, concurrent measurement of one axial and one radial spindle displacement at 4 rpm was described. In this paper, sinusoidal frequency modulation, realized by modulating the injection current, is used instead of sinusoidal phase modulation, which simplifies the instrument. Furthermore, concurrent measurement of 5-axis (1 axial, 2 radial and 2 angular displacements) spindle motion at 4000 rpm is described.

  14. Frequency domain analysis of errors in cross-correlations of ambient seismic noise

    NASA Astrophysics Data System (ADS)

    Liu, Xin; Ben-Zion, Yehuda; Zigone, Dimitri

    2016-12-01

    We analyse random errors (variances) in cross-correlations of ambient seismic noise in the frequency domain, which differs from previous time domain methods. Extending previous theoretical results on the ensemble-averaged cross-spectrum, we estimate the confidence interval of the stacked cross-spectrum of a finite amount of data at each frequency, using non-overlapping windows of fixed length. The extended theory also connects the amplitude and phase variances with the variance of each complex spectrum value. Analysis of synthetic stationary ambient noise is used to estimate the confidence interval of the stacked cross-spectrum obtained with different lengths of noise data, corresponding to different numbers of evenly spaced windows of the same duration. This method allows estimating the signal-to-noise ratio (SNR) of a noise cross-correlation in the frequency domain, without specifying the filter bandwidth or signal/noise windows that are needed for time domain SNR estimations. Based on synthetic ambient noise data, we also compare the probability distributions, causal part amplitude and SNR of the stacked cross-spectrum obtained using one-bit normalization or pre-whitening with those obtained without these pre-processing steps. Natural continuous noise records contain both ambient noise and small earthquakes that are inseparable from the noise with the existing pre-processing steps. Using probability distributions of random cross-spectrum values based on the theoretical results provides an effective way to exclude such small earthquakes, and additional data segments (outliers) contaminated by signals of different statistics (e.g. rain, cultural noise), from continuous noise waveforms. This technique is applied to constrain values and uncertainties of the amplitude and phase velocity of the stacked noise cross-spectrum at different frequencies, using data from southern California at both regional scale (˜35 km) and a dense linear array (˜20 m) across the plate-boundary faults. A block bootstrap resampling method
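
    The frequency-domain SNR described above can be sketched as follows for stationary records split into non-overlapping windows; window length, tone frequency and noise level in the demo are arbitrary choices, not values from the paper.

```python
import cmath
import math
import random

def dft(x):
    """Naive O(n^2) DFT, adequate for short windows."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def stacked_cross_spectrum_snr(u, v, win):
    """Per-frequency SNR of the stacked cross-spectrum: |mean| divided by
    the standard error of the mean across non-overlapping windows."""
    m = min(len(u), len(v)) // win
    spectra = []
    for j in range(m):
        U = dft(u[j * win:(j + 1) * win])
        V = dft(v[j * win:(j + 1) * win])
        spectra.append([Uk * Vk.conjugate() for Uk, Vk in zip(U, V)])
    snr = []
    for k in range(win):
        vals = [s[k] for s in spectra]
        mean = sum(vals) / m
        var = sum(abs(val - mean) ** 2 for val in vals) / (m - 1)
        sem = math.sqrt(var / m)
        snr.append(abs(mean) / sem if sem > 0 else float("inf"))
    return snr

# Demo: a common tone (2 cycles per window) buried in independent noise
random.seed(0)
n, win = 128, 16
sig = [math.sin(2 * math.pi * 2 * t / win) for t in range(n)]
u = [s + random.gauss(0, 0.3) for s in sig]
v = [s + random.gauss(0, 0.3) for s in sig]
snr = stacked_cross_spectrum_snr(u, v, win)  # coherent bin stands out
```

    No filter bandwidth or signal window needs to be chosen: the coherent bin is identified directly by its SNR against the incoherent bins.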

  15. Propagation of Forecast Errors from the Sun to LEO Trajectories: How Does Drag Uncertainty Affect Conjunction Frequency?

    DTIC Science & Technology

    2014-09-01

    density is in turn strongly controlled by incident ultraviolet radiation from the sun. Accordingly, modeling and forecasting upper atmospheric density...Propagation of Forecast Errors from the Sun to LEO Trajectories: How Does Drag Uncertainty Affect Conjunction Frequency? John Emmert, Jeff Byers...trajectories of most objects in low-Earth orbit, and solar variability is the largest source of error in upper atmospheric density forecasts. There is

  16. The frequency of translational misreading errors in E. coli is largely determined by tRNA competition.

    PubMed

    Kramer, Emily B; Farabaugh, Philip J

    2007-01-01

    Estimates of missense error rates (misreading) during protein synthesis vary from 10(-3) to 10(-4) per codon. The experiments reporting these rates have measured several distinct errors using several methods and reporter systems. Variation in reported rates may reflect real differences in rates among the errors tested or in sensitivity of the reporter systems. To develop a more accurate understanding of the range of error rates, we developed a system to quantify the frequency of every possible misreading error at a defined codon in Escherichia coli. This system uses an essential lysine in the active site of firefly luciferase. Mutations in Lys529 result in up to a 1600-fold reduction in activity, but the phenotype varies with amino acid. We hypothesized that residual activity of some of the mutant genes might result from misreading of the mutant codons by tRNA(Lys) (UUUU), the cognate tRNA for the lysine codons, AAA and AAG. Our data validate this hypothesis and reveal details about relative missense error rates of near-cognate codons. The error rates in E. coli do, in fact, vary widely. One source of variation is the effect of competition by cognate tRNAs for the mutant codons; higher error frequencies result from lower competition from low-abundance tRNAs. We also used the system to study the effect of ribosomal protein mutations known to affect error rates and the effect of error-inducing antibiotics, finding that they affect misreading on only a subset of near-cognate codons and that their effect may be less general than previously thought.

  17. Error analysis for intrinsic quality factor measurement in superconducting radio frequency resonators

    NASA Astrophysics Data System (ADS)

    Melnychuk, O.; Grassellino, A.; Romanenko, A.

    2014-12-01

    In this paper, we discuss error analysis for intrinsic quality factor (Q0) and accelerating gradient (Eacc) measurements in superconducting radio frequency (SRF) resonators. The analysis is applicable for cavity performance tests that are routinely performed at SRF facilities worldwide. We review the sources of uncertainties along with the assumptions on their correlations and present uncertainty calculations with a more complete procedure for treatment of correlations than in previous publications [T. Powers, in Proceedings of the 12th Workshop on RF Superconductivity, SuP02 (Elsevier, 2005), pp. 24-27]. Applying this approach to cavity data collected at Vertical Test Stand facility at Fermilab, we estimated total uncertainty for both Q0 and Eacc to be at the level of approximately 4% for input coupler coupling parameter β1 in the [0.5, 2.5] range. Above 2.5 (below 0.5) Q0 uncertainty increases (decreases) with β1 whereas Eacc uncertainty, in contrast with results in Powers [in Proceedings of the 12th Workshop on RF Superconductivity, SuP02 (Elsevier, 2005), pp. 24-27], is independent of β1. Overall, our estimated Q0 uncertainty is approximately half as large as that in Powers [in Proceedings of the 12th Workshop on RF Superconductivity, SuP02 (Elsevier, 2005), pp. 24-27].
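
    A simplified version of such an uncertainty budget, using the standard decay-time relations Q_L = ω·τ and Q0 = (1 + β1)·Q_L and ignoring the correlated uncertainties that the paper treats in full, can be sketched as:

```python
import math

def q0_with_uncertainty(omega, tau, dtau, beta1, dbeta1):
    """Q0 from the loaded quality factor Q_L = omega * tau and the input
    coupling beta1: Q0 = (1 + beta1) * omega * tau. First-order propagation
    assuming uncorrelated inputs; the full analysis in the paper also
    treats correlated uncertainties, which this sketch omits."""
    q0 = (1.0 + beta1) * omega * tau
    rel = math.sqrt((dtau / tau) ** 2 + (dbeta1 / (1.0 + beta1)) ** 2)
    return q0, q0 * rel
```

    Note how the β1 term enters through 1/(1 + β1): the same absolute coupling error matters less at strong overcoupling, consistent with the β1-dependent behaviour reported above.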

  18. Error analysis for intrinsic quality factor measurement in superconducting radio frequency resonators.

    PubMed

    Melnychuk, O; Grassellino, A; Romanenko, A

    2014-12-01

    In this paper, we discuss error analysis for intrinsic quality factor (Q0) and accelerating gradient (Eacc) measurements in superconducting radio frequency (SRF) resonators. The analysis is applicable for cavity performance tests that are routinely performed at SRF facilities worldwide. We review the sources of uncertainties along with the assumptions on their correlations and present uncertainty calculations with a more complete procedure for treatment of correlations than in previous publications [T. Powers, in Proceedings of the 12th Workshop on RF Superconductivity, SuP02 (Elsevier, 2005), pp. 24-27]. Applying this approach to cavity data collected at Vertical Test Stand facility at Fermilab, we estimated total uncertainty for both Q0 and Eacc to be at the level of approximately 4% for input coupler coupling parameter β1 in the [0.5, 2.5] range. Above 2.5 (below 0.5) Q0 uncertainty increases (decreases) with β1 whereas Eacc uncertainty, in contrast with results in Powers [in Proceedings of the 12th Workshop on RF Superconductivity, SuP02 (Elsevier, 2005), pp. 24-27], is independent of β1. Overall, our estimated Q0 uncertainty is approximately half as large as that in Powers [in Proceedings of the 12th Workshop on RF Superconductivity, SuP02 (Elsevier, 2005), pp. 24-27].

  19. Deriving tight error-trade-off relations for approximate joint measurements of incompatible quantum observables

    NASA Astrophysics Data System (ADS)

    Branciard, Cyril

    2014-02-01

    The quantification of the "measurement uncertainty" aspect of Heisenberg's uncertainty principle—that is, the study of trade-offs between accuracy and disturbance, or between accuracies in an approximate joint measurement of two incompatible observables—has regained a lot of interest recently. Several approaches have been proposed and debated. In this paper we consider Ozawa's definitions for inaccuracies (as root-mean-square errors) in approximate joint measurements, and study how these are constrained in different cases, whether one specifies certain properties of the approximations—namely their standard deviations and/or their bias—or not. Extending our previous work [C. Branciard, Proc. Natl. Acad. Sci. USA 110, 6742 (2013), 10.1073/pnas.1219331110], we derive error-trade-off relations, which we prove to be tight for pure states. We show explicitly how all previously known relations for Ozawa's inaccuracies follow from ours. While our relations are in general not tight for mixed states, we show how these can be strengthened and how tight relations can still be obtained in that case.

  20. Every photon counts: improving low, mid, and high-spatial frequency errors on astronomical optics and materials with MRF

    NASA Astrophysics Data System (ADS)

    Maloney, Chris; Lormeau, Jean Pierre; Dumas, Paul

    2016-07-01

    Many astronomical sensing applications operate in low-light conditions; for these applications every photon counts. Controlling mid-spatial frequencies and surface roughness on astronomical optics is critical for mitigating scattering effects such as flare and energy loss. By improving these two frequency regimes, higher contrast images can be collected with improved efficiency. Classically, Magnetorheological Finishing (MRF) has offered an optical fabrication technique to correct low order errors as well as quilting/print-through errors left over in light-weighted optics from conventional polishing techniques. MRF is a deterministic, sub-aperture polishing process that has been used to improve figure on an ever-expanding assortment of optical geometries, such as planos, spheres, on- and off-axis aspheres, primary mirrors and freeform optics. Precision optics are routinely manufactured by this technology with sizes ranging from 5-2,000 mm in diameter. MRF can be used for form corrections, turning a sphere into an asphere or freeform, but more commonly for figure corrections, achieving figure errors as low as 1 nm RMS when careful metrology setups are used. Recent advancements in MRF technology have improved the polishing performance expected for astronomical optics in the low, mid and high spatial frequency regimes. Deterministic figure correction with MRF is compatible with most materials, including some recent examples on Silicon Carbide and RSA905 Aluminum. MRF also has the ability to produce 'perfectly-bad' compensating surfaces, which may be used to compensate for measured or modeled optical deformation from sources such as gravity or mounting. In addition, recent advances in MRF technology allow for corrections of mid-spatial wavelengths as small as 1 mm simultaneously with form error correction. Efficient mid-spatial frequency corrections make use of optimized process conditions, including raster polishing in combination with a small tool size. 
Furthermore, a novel MRF

  1. Deriving Animal Behaviour from High-Frequency GPS: Tracking Cows in Open and Forested Habitat

    PubMed Central

    de Weerd, Nelleke; van Langevelde, Frank; van Oeveren, Herman; Nolet, Bart A.; Kölzsch, Andrea; Prins, Herbert H. T.; de Boer, W. Fred

    2015-01-01

    The increasing spatiotemporal accuracy of Global Navigation Satellite Systems (GNSS) tracking systems opens the possibility to infer animal behaviour from tracking data. We studied the relationship between high-frequency GNSS data and behaviour, aimed at developing an easily interpretable classification method to infer behaviour from location data. Behavioural observations were carried out during tracking of cows (Bos Taurus) fitted with high-frequency GPS (Global Positioning System) receivers. Data were obtained in an open field and forested area, and movement metrics were calculated for 1 min, 12 s and 2 s intervals. We observed four behaviour types (Foraging, Lying, Standing and Walking). We subsequently used Classification and Regression Trees to classify the simultaneously obtained GPS data as these behaviour types, based on distances and turning angles between fixes. GPS data with a 1 min interval from the open field was classified correctly for more than 70% of the samples. Data from the 12 s and 2 s interval could not be classified successfully, emphasizing that the interval should be long enough for the behaviour to be defined by its characteristic movement metrics. Data obtained in the forested area were classified with a lower accuracy (57%) than the data from the open field, due to a larger positional error of GPS locations and differences in behavioural performance influenced by the habitat type. This demonstrates the importance of understanding the relationship between behaviour and movement metrics, derived from GNSS fixes at different frequencies and in different habitats, in order to successfully infer behaviour. When spatially accurate location data can be obtained, behaviour can be inferred from high-frequency GNSS fixes by calculating simple movement metrics and using easily interpretable decision trees. 
This allows for the combined study of animal behaviour and habitat use based on location data, and might make it possible to detect deviations
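
    The movement metrics these classification trees consume, step length and turning angle between consecutive fixes, are simple to compute. The sketch below assumes planar (projected) coordinates rather than raw latitude/longitude.

```python
import math

def movement_metrics(fixes):
    """Step lengths and turning angles from consecutive (x, y) GPS fixes,
    the two inputs used by the classification trees."""
    steps, angles, headings = [], [], []
    for (x0, y0), (x1, y1) in zip(fixes, fixes[1:]):
        steps.append(math.hypot(x1 - x0, y1 - y0))
        headings.append(math.atan2(y1 - y0, x1 - x0))
    for h0, h1 in zip(headings, headings[1:]):
        a = h1 - h0
        # wrap to (-pi, pi]
        while a <= -math.pi:
            a += 2 * math.pi
        while a > math.pi:
            a -= 2 * math.pi
        angles.append(a)
    return steps, angles
```

    A decision tree then splits on these values, e.g. long steps with small turns suggesting Walking, short steps with large turns suggesting Foraging; the actual thresholds are learned from the observations.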

  2. Deriving Animal Behaviour from High-Frequency GPS: Tracking Cows in Open and Forested Habitat.

    PubMed

    de Weerd, Nelleke; van Langevelde, Frank; van Oeveren, Herman; Nolet, Bart A; Kölzsch, Andrea; Prins, Herbert H T; de Boer, W Fred

    2015-01-01

    The increasing spatiotemporal accuracy of Global Navigation Satellite Systems (GNSS) tracking systems opens the possibility to infer animal behaviour from tracking data. We studied the relationship between high-frequency GNSS data and behaviour, aimed at developing an easily interpretable classification method to infer behaviour from location data. Behavioural observations were carried out during tracking of cows (Bos Taurus) fitted with high-frequency GPS (Global Positioning System) receivers. Data were obtained in an open field and forested area, and movement metrics were calculated for 1 min, 12 s and 2 s intervals. We observed four behaviour types (Foraging, Lying, Standing and Walking). We subsequently used Classification and Regression Trees to classify the simultaneously obtained GPS data as these behaviour types, based on distances and turning angles between fixes. GPS data with a 1 min interval from the open field was classified correctly for more than 70% of the samples. Data from the 12 s and 2 s interval could not be classified successfully, emphasizing that the interval should be long enough for the behaviour to be defined by its characteristic movement metrics. Data obtained in the forested area were classified with a lower accuracy (57%) than the data from the open field, due to a larger positional error of GPS locations and differences in behavioural performance influenced by the habitat type. This demonstrates the importance of understanding the relationship between behaviour and movement metrics, derived from GNSS fixes at different frequencies and in different habitats, in order to successfully infer behaviour. When spatially accurate location data can be obtained, behaviour can be inferred from high-frequency GNSS fixes by calculating simple movement metrics and using easily interpretable decision trees. 
This allows for the combined study of animal behaviour and habitat use based on location data, and might make it possible to detect deviations

  3. Radio Frequency Identification (RFID) in medical environment: Gaussian Derivative Frequency Modulation (GDFM) as a novel modulation technique with minimal interference properties.

    PubMed

    Rieche, Marie; Komenský, Tomás; Husar, Peter

    2011-01-01

    Radio Frequency Identification (RFID) systems in healthcare facilitate the possibility of contact-free identification and tracking of patients, medical equipment and medication. Thereby, patient safety will be improved and costs as well as medication errors will be reduced considerably. However, the application of RFID and other wireless communication systems has the potential to cause harmful electromagnetic disturbances on sensitive medical devices. This risk mainly depends on the transmission power and the method of data communication. In this contribution we point out the reasons for such incidents and give proposals to overcome these problems. To this end, a novel modulation and transmission technique called Gaussian Derivative Frequency Modulation (GDFM) is developed. Moreover, we carry out measurements to show the interference properties of different modulation schemes in comparison to our GDFM.
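
    The abstract does not define the modulation in detail, but one plausible reading of GDFM, a per-symbol frequency pulse shaped like the first derivative of a Gaussian, can be sketched as follows; all parameters are illustrative assumptions, not the authors' design.

```python
import math

def gdfm_waveform(bits, fc, fdev, samples_per_bit, fs):
    """Sketch of a frequency modulation whose per-symbol frequency pulse is
    a first derivative of a Gaussian (smooth, near-zero net frequency
    offset, hence a compact spectrum). This is one plausible reading of
    GDFM, not the authors' exact definition."""
    n = samples_per_bit
    # Gaussian-derivative frequency pulse spanning roughly +/-3 sigma
    pulse = [-t * math.exp(-t * t / 2)
             for t in ((i - n / 2) / (n / 6) for i in range(n))]
    signal, phase = [], 0.0
    for b in bits:
        sign = 1.0 if b else -1.0
        for p in pulse:
            phase += 2 * math.pi * (fc + sign * fdev * p) / fs
            signal.append(math.cos(phase))
    return signal
```

    The smooth frequency trajectory avoids the spectral splatter of hard frequency switching, which is the property that matters for limiting interference with medical devices.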

  4. System for simultaneously measuring 6DOF geometric motion errors using a polarization maintaining fiber-coupled dual-frequency laser.

    PubMed

    Cui, Cunxing; Feng, Qibo; Zhang, Bin; Zhao, Yuqiong

    2016-03-21

    A novel method for simultaneously measuring six degree-of-freedom (6DOF) geometric motion errors is proposed in this paper, and the corresponding measurement instrument is developed. Simultaneous measurement of 6DOF geometric motion errors using a polarization maintaining fiber-coupled dual-frequency laser is accomplished for the first time to the best of the authors' knowledge. Dual-frequency laser beams that are orthogonally linear polarized were adopted as the measuring datum. Positioning error measurement was achieved by heterodyne interferometry, and other 5DOF geometric motion errors were obtained by fiber collimation measurement. A series of experiments was performed to verify the effectiveness of the developed instrument. The experimental results showed that the stability and accuracy of the positioning error measurement are 31.1 nm and 0.5 μm, respectively. For the straightness error measurements, the stability and resolution are 60 and 40 nm, respectively, and the maximum deviation of repeatability is ± 0.15 μm in the x direction and ± 0.1 μm in the y direction. For pitch and yaw measurements, the stabilities are 0.03″ and 0.04″, the maximum deviations of repeatability are ± 0.18″ and ± 0.24″, and the accuracies are 0.4″ and 0.35″, respectively. The stability and resolution of roll measurement are 0.29″ and 0.2″, respectively, and the accuracy is 0.6″.
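
    The heterodyne positioning channel ultimately reduces to converting accumulated interference phase into displacement. The pass count below is an assumption about the optical layout, not a value from the paper.

```python
import math

def displacement_from_phase(delta_phi, wavelength, passes=2):
    """Convert accumulated heterodyne phase (radians) to displacement.
    An N-pass interferometer gains 2*pi of phase per wavelength/N of
    motion; the default of two passes is an assumed layout."""
    return delta_phi / (2.0 * math.pi) * wavelength / passes
```

    For a 633 nm source and a double-pass arm, one full fringe (2π of phase) corresponds to λ/2 ≈ 316.5 nm of travel, which sets the scale of the nanometre-level stabilities quoted above.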

  5. Exploring the Derivative Suffix Frequency Effect in Spanish Speaking Children

    ERIC Educational Resources Information Center

    Lázaro, Miguel; Acha, Joana; de la Rosa, Saray; García, Seila; Sainz, Javier

    2017-01-01

    This study was designed to examine the developmental course of the suffix frequency effect and its role in the development of automatic morpho-lexical access. In Spanish, a highly transparent language from an orthographic point of view, this effect has been shown to be facilitative in adults, but the evidence with children is still inconclusive. A…

  6. The Vertical Error Characteristics of GOES-derived Winds: Description and Impact on Numerical Weather Prediction

    NASA Technical Reports Server (NTRS)

    Rao, P. Anil; Velden, Christopher S.; Braun, Scott A.; Einaudi, Franco (Technical Monitor)

    2001-01-01

    Errors in the height assignment of some satellite-derived winds exist because the satellites sense radiation emitted from a finite layer of the atmosphere rather than a specific level. Potential problems in data assimilation may arise because the motion of a measured layer is often represented by a single-level value. In this research, cloud and water vapor motion winds that are derived from the Geostationary Operational Environmental Satellites (GOES winds) are compared to collocated rawinsonde observations (RAOBs). An important aspect of this work is that in addition to comparisons at each assigned height, the GOES winds are compared to the entire profile of the collocated RAOB data to determine the vertical error characteristics of the GOES winds. The impact of these results on numerical weather prediction is then investigated. The comparisons at individual vector height assignments indicate that the error of the GOES winds ranges from approx. 3 to 10 m/s and generally increases with height. However, if taken as a percentage of the total wind speed, accuracy is better at upper levels. As expected, comparisons with the entire profile of the collocated RAOBs indicate that clear-air water vapor winds represent deeper layers than do either infrared or water vapor cloud-tracked winds. This is because in cloud-free regions the signal from water vapor features may result from emittance over a thicker layer. To further investigate characteristics of the clear-air water vapor winds, they are stratified into two categories that are dependent on the depth of the layer represented by the vector. It is found that if the vertical gradient of moisture is smooth and uniform from near the height assignment upwards, the clear-air water vapor wind tends to represent a relatively deep layer. The information from the comparisons is then used in numerical model simulations of two separate events to determine the forecast impacts. Four simulations are performed for each case: 1) A

  7. STATISTICAL DISTRIBUTIONS OF PARTICULATE MATTER AND THE ERROR ASSOCIATED WITH SAMPLING FREQUENCY. (R828678C010)

    EPA Science Inventory

    The distribution of particulate matter (PM) concentrations has an impact on human health effects and the setting of PM regulations. Since PM is commonly sampled on less than daily schedules, the magnitude of sampling errors needs to be determined. Daily PM data from Spokane, W...
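
    The magnitude of such sampling errors can be explored with a toy simulation; the synthetic lognormal-like series below stands in for the Spokane record, which is not reproduced here.

```python
import math
import random

def subsample_error(daily, every):
    """Relative error in the long-term mean when sampling every `every`
    days instead of daily."""
    sampled = daily[::every]
    full = sum(daily) / len(daily)
    sub = sum(sampled) / len(sampled)
    return abs(sub - full) / full

# Synthetic lognormal-like daily PM series (illustrative only)
random.seed(1)
daily = [math.exp(random.gauss(2.5, 0.6)) for _ in range(365)]
err_1in3 = subsample_error(daily, 3)   # 1-in-3 day schedule
err_1in6 = subsample_error(daily, 6)   # 1-in-6 day schedule
```

    Because PM concentrations are skewed, the sparser schedule is more vulnerable to missing the high-concentration days that dominate the mean.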

  8. Report: Low Frequency Predictive Skill Despite Structural Instability and Model Error

    DTIC Science & Technology

    2013-09-30

    Structural Instability and Model Error Andrew J. Majda New York University Courant Institute of Mathematical Sciences 251 Mercer Street New York, NY...Majda and his DRI post doc Sapsis have achieved a potential major breakthrough with a new class of methods for UQ. Turbulent dynamical systems are...uncertain initial data. These key physical quantities are often characterized by the degrees of freedom which carry the largest energy or variance and

  9. Report: Low Frequency Predictive Skill Despite Structural Instability and Model Error

    DTIC Science & Technology

    2012-09-30

    Instability and Model Error Principal Investigator: Andrew J. Majda Institution: New York University Courant Institute of Mathematical ...for the Special Volume of Communications on Pure and Applied Mathematics for 75th Anniversary of the Courant Institute, April 12, 2012, doi: 10.1002

  10. Statistical Analysis of Instantaneous Frequency Scaling Factor as Derived From Optical Disdrometer Measurements At KQ Bands

    NASA Technical Reports Server (NTRS)

    Zemba, Michael; Nessel, James; Houts, Jacquelynne; Luini, Lorenzo; Riva, Carlo

    2016-01-01

    The rain rate data and statistics of a location are often used in conjunction with models to predict rain attenuation. However, the true attenuation is a function not only of rain rate, but also of the drop size distribution (DSD). Generally, models utilize an average drop size distribution (Laws and Parsons or Marshall and Palmer). However, individual rain events may deviate from these models significantly if their DSD is not well approximated by the average. Therefore, characterizing the relationship between the DSD and attenuation is valuable in improving modeled predictions of rain attenuation statistics. The DSD may also be used to derive the instantaneous frequency scaling factor and thus validate frequency scaling models. Since June of 2014, NASA Glenn Research Center (GRC) and the Politecnico di Milano (POLIMI) have jointly conducted a propagation study in Milan, Italy utilizing the 20 and 40 GHz beacon signals of the Alphasat TDP#5 Aldo Paraboni payload. The Ka- and Q-band beacon receivers provide a direct measurement of the signal attenuation while concurrent weather instrumentation provides measurements of the atmospheric conditions at the receiver. Among these instruments is a Thies Clima Laser Precipitation Monitor (optical disdrometer) which yields droplet size distributions (DSD); this DSD information can be used to derive a scaling factor that scales the measured 20 GHz data to expected 40 GHz attenuation. Given the capability to both predict and directly observe 40 GHz attenuation, this site is uniquely situated to assess and characterize such predictions. Previous work using this data has examined the relationship between the measured drop-size distribution and the measured attenuation of the link. The focus of this paper now turns to a deeper analysis of the scaling factor, including the prediction error as a function of attenuation level, correlation between the scaling factor and the rain rate, and the temporal variability of the drop size
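
    Under the common power-law model for specific rain attenuation, γ = k·R^α, an instantaneous 20→40 GHz scaling factor can be sketched as a simple ratio. The coefficients in the demo are placeholders, not the values used in the study; real ones would come from a model such as ITU-R P.838 or from the measured DSD.

```python
def scaling_factor(rain_rate, k20, a20, k40, a40):
    """Instantaneous 20 -> 40 GHz attenuation scaling from the power-law
    specific attenuation gamma = k * R**alpha. The k/alpha coefficients
    used in the demo below are placeholders, not official values."""
    g20 = k20 * rain_rate ** a20
    g40 = k40 * rain_rate ** a40
    return g40 / g20

# Placeholder coefficients, for illustration only
sf = scaling_factor(10.0, 0.07, 1.1, 0.35, 0.95)
```

    Because the exponents differ between the two frequencies, the scaling factor itself varies with rain rate, which is one reason a fixed scaling ratio fails for individual events.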

  11. Statistical Analysis of Instantaneous Frequency Scaling Factor as Derived From Optical Disdrometer Measurements At KQ Bands

    NASA Technical Reports Server (NTRS)

    Zemba, Michael; Nessel, James; Houts, Jacquelynne; Luini, Lorenzo; Riva, Carlo

    2016-01-01

    The rain rate data and statistics of a location are often used in conjunction with models to predict rain attenuation. However, the true attenuation is a function not only of rain rate, but also of the drop size distribution (DSD). Generally, models utilize an average drop size distribution (Laws and Parsons or Marshall and Palmer [1]). However, individual rain events may deviate from these models significantly if their DSD is not well approximated by the average. Therefore, characterizing the relationship between the DSD and attenuation is valuable in improving modeled predictions of rain attenuation statistics. The DSD may also be used to derive the instantaneous frequency scaling factor and thus validate frequency scaling models. Since June of 2014, NASA Glenn Research Center (GRC) and the Politecnico di Milano (POLIMI) have jointly conducted a propagation study in Milan, Italy utilizing the 20 and 40 GHz beacon signals of the Alphasat TDP#5 Aldo Paraboni payload. The Ka- and Q-band beacon receivers provide a direct measurement of the signal attenuation while concurrent weather instrumentation provides measurements of the atmospheric conditions at the receiver. Among these instruments is a Thies Clima Laser Precipitation Monitor (optical disdrometer) which yields droplet size distributions (DSD); this DSD information can be used to derive a scaling factor that scales the measured 20 GHz data to expected 40 GHz attenuation. Given the capability to both predict and directly observe 40 GHz attenuation, this site is uniquely situated to assess and characterize such predictions. Previous work using this data has examined the relationship between the measured drop-size distribution and the measured attenuation of the link [2]. 
The focus of this paper now turns to a deeper analysis of the scaling factor, including the prediction error as a function of attenuation level, correlation between the scaling factor and the rain rate, and the temporal variability of the drop

  12. MODIS Cloud Optical Property Retrieval Uncertainties Derived from Pixel-Level Radiometric Error Estimates

    NASA Technical Reports Server (NTRS)

    Platnick, Steven; Wind, Galina; Xiong, Xiaoxiong

    2011-01-01

    MODIS retrievals of cloud optical thickness and effective particle radius employ a well-known VNIR/SWIR solar reflectance technique. For this type of algorithm, we evaluate the uncertainty in simultaneous retrievals of these two parameters due to pixel-level (scene-dependent) radiometric error estimates as well as other tractable error sources.

  13. Frequency and Distribution of Refractive Error in Adult Life: Methodology and Findings of the UK Biobank Study

    PubMed Central

    Cumberland, Phillippa M.; Bao, Yanchun; Hysi, Pirro G.; Foster, Paul J.; Hammond, Christopher J.; Rahi, Jugnoo S.

    2015-01-01

    Purpose To report the methodology and findings of a large scale investigation of burden and distribution of refractive error, from a contemporary and ethnically diverse study of health and disease in adults, in the UK. Methods UK Biobank, a unique contemporary resource for the study of health and disease, recruited more than half a million people aged 40–69 years. A subsample of 107,452 subjects undertook an enhanced ophthalmic examination which provided autorefraction data (a measure of refractive error). Refractive error status was categorised using the mean spherical equivalent refraction measure. Information on socio-demographic factors (age, gender, ethnicity, educational qualifications and accommodation tenure) was reported at the time of recruitment by questionnaire and face-to-face interview. Results Fifty-four percent of participants aged 40–69 years had refractive error. Specifically, 27% had myopia (4% high myopia), which was more common amongst younger people, those of higher socio-economic status, higher educational attainment, or of White or Chinese ethnicity. The frequency of hypermetropia increased with age (7% at 40–44 years increasing to 46% at 65–69 years), was higher in women and its severity was associated with ethnicity (moderate or high hypermetropia at least 30% less likely in non-White ethnic groups compared to White). Conclusions Refractive error is a significant public health issue for the UK and this study provides contemporary data on adults for planning services, health economic modelling and monitoring of secular trends. Further investigation of risk factors is necessary to inform strategies for prevention. There is scope to do this through the planned longitudinal extension of the UK Biobank study. PMID:26430771
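
    Categorising refractive error from the mean spherical equivalent can be sketched as below; the cut-offs are common epidemiological conventions assumed for illustration and may differ from the study's exact definitions.

```python
def classify_refraction(mse):
    """Categorize mean spherical equivalent refraction (diopters).
    Cut-offs are common epidemiological conventions assumed here for
    illustration; the UK Biobank analysis may define them differently."""
    if mse <= -6.0:
        return "high myopia"
    if mse <= -0.75:
        return "myopia"
    if mse >= 1.0:
        return "hypermetropia"
    return "emmetropia"
```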

  14. Deriving comprehensive error breakdown for wide field adaptive optics systems using end-to-end simulations

    NASA Astrophysics Data System (ADS)

    Ferreira, F.; Gendron, E.; Rousset, G.; Gratadour, D.

    2016-07-01

    The future European Extremely Large Telescope (E-ELT) adaptive optics (AO) systems will aim at wide field correction and large sky coverage. Their performance will be improved by using post-processing techniques, such as point spread function (PSF) deconvolution. The PSF estimation involves characterization of the different error sources in the AO system. Such error contributors are difficult to estimate: simulation tools are a good way to do that. We have developed, within COMPASS (COMputing Platform for Adaptive opticS Systems), an end-to-end simulation tool using GPU (Graphics Processing Unit) acceleration, an estimation tool that provides a comprehensive error budget from the outputs of a single simulation run.

  15. Lower Bounds on the Frequency Estimation Error in Magnetically Coupled MEMS Resonant Sensors.

    PubMed

    Paden, Brad E

    2016-02-01

    MEMS inductor-capacitor (LC) resonant pressure sensors have revolutionized the treatment of abdominal aortic aneurysms. In contrast to electrostatically driven MEMS resonators, these magnetically coupled devices are wireless so that they can be permanently implanted in the body and can communicate to an external coil via pressure-induced frequency modulation. Motivated by the importance of these sensors in this and other applications, this paper develops relationships among sensor design variables, system noise levels, and overall system performance. Specifically, new models are developed that express the Cramér-Rao lower bound for the variance of resonator frequency estimates in terms of system variables through a system of coupled algebraic equations, which can be used in design and optimization. Further, models are developed for a novel mechanical resonator in addition to the LC-type resonators.
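
    For orientation, the classical Cramér-Rao bound for estimating the frequency of a single sinusoid in white Gaussian noise (the Rife-Boorstyn form) shows the characteristic 1/N³ scaling with record length; the paper's coupled-resonator model is more involved and is not reproduced here.

```python
import math

def crlb_frequency_variance(snr, n_samples, sample_rate):
    """Cramer-Rao lower bound on the variance of the frequency estimate of
    a single sinusoid in white Gaussian noise (classical Rife-Boorstyn
    result, used here only as an illustrative stand-in for the paper's
    magnetically coupled resonator model). snr = A**2 / (2 * sigma**2);
    returns a bound in Hz**2."""
    n = n_samples
    var_cycles2 = 12.0 / ((2 * math.pi) ** 2 * snr * n * (n * n - 1))
    return var_cycles2 * sample_rate ** 2
```

    Doubling the observation length reduces the variance bound roughly eightfold, which is why interrogation time is as important a design variable as coupling strength.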

  16. Error correction coding for frequency-hopping multiple-access spread spectrum communication systems

    NASA Technical Reports Server (NTRS)

    Healy, T. J.

    1982-01-01

    A communication system which would effect channel coding for frequency-hopped multiple-access is described. It is shown that in theory coding can increase the spectrum utilization efficiency of a system with mutual interference to 100 percent. Various coding strategies are discussed and some initial comparisons are given. Some of the problems associated with implementing the type of system described here are discussed.

  17. A Modified Error in Constitutive Equation Approach for Frequency-Domain Viscoelasticity Imaging Using Interior Data

    PubMed Central

    Diaz, Manuel I.; Aquino, Wilkins; Bonnet, Marc

    2015-01-01

    This paper presents a methodology for the inverse identification of linearly viscoelastic material parameters in the context of steady-state dynamics using interior data. The inverse problem of viscoelasticity imaging is solved by minimizing a modified error in constitutive equation (MECE) functional, subject to the conservation of linear momentum. The treatment is applicable to configurations where boundary conditions may be partially or completely underspecified. The MECE functional measures the discrepancy in the constitutive equations that connect kinematically admissible strains and dynamically admissible stresses, and also incorporates the measurement data in a quadratic penalty term. Regularization of the problem is achieved through a penalty parameter in combination with the discrepancy principle due to Morozov. Numerical results demonstrate the robust performance of the method in situations where the available measurement data is incomplete and corrupted by noise of varying levels. PMID:26388656
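As a sketch of the functional described above (notation assumed here, following the general MECE literature rather than quoted from the paper), the minimized quantity has the form

```latex
E(\mathbf{u},\boldsymbol{\sigma}) =
\frac{1}{2}\int_{\Omega}
\big(\boldsymbol{\sigma}-\mathbf{C}:\boldsymbol{\varepsilon}(\mathbf{u})\big)
:\mathbf{C}^{-1}:
\big(\boldsymbol{\sigma}-\mathbf{C}:\boldsymbol{\varepsilon}(\mathbf{u})\big)\,
\mathrm{d}\Omega
\;+\;\frac{\kappa}{2}\,\big\|\mathbf{u}-\mathbf{u}_{\mathrm{meas}}\big\|^{2},
```

minimized subject to conservation of linear momentum in steady-state dynamics at angular frequency omega. In the viscoelastic case the constitutive tensor C is complex-valued and frequency-dependent, and the penalty parameter kappa is selected via Morozov's discrepancy principle, as the abstract states.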

  18. The Use of Radar-Based Products for Deriving Extreme Rainfall Frequencies Using Regional Frequency Analysis with Application in South Louisiana

    NASA Astrophysics Data System (ADS)

    El-Dardiry, H. A.; Habib, E. H.

    2014-12-01

    Radar-based technologies have made spatially and temporally distributed quantitative precipitation estimates (QPE) available in an operational environment, in contrast to rain gauges. The floods identified through flash flood monitoring and prediction systems are subject to at least three sources of uncertainty: (a) rainfall estimation errors, (b) streamflow prediction errors due to model structural issues, and (c) errors in defining a flood event. The current study focuses on the first source of uncertainty and its effect on deriving important climatological characteristics of extreme rainfall statistics. Examples of such characteristics are rainfall amounts with certain Average Recurrence Intervals (ARI) or Annual Exceedance Probabilities (AEP), which are highly valuable for hydrologic and civil engineering design purposes. Gauge-based precipitation frequency estimates (PFE) have matured and been widely used over the last several decades. More recently, there has been a growing interest by the research community in exploring the use of radar-based rainfall products for developing PFE and understanding the associated uncertainties. This study uses radar-based multi-sensor precipitation estimates (MPE) for 11 years to derive PFEs corresponding to various return periods over a spatial domain that covers the state of Louisiana in the southern USA. The PFE estimation approach used in this study is based on fitting a generalized extreme value (GEV) distribution to extreme rainfall data based on annual maximum series (AMS). Among the estimation problems that may arise from fitting GEV distributions at each radar pixel are large variances and seriously biased quantile estimators. Hence, a regional frequency analysis (RFA) approach is applied. The RFA involves the use of data from different pixels surrounding each pixel within a defined homogeneous region. 
In this study, the region of influence approach along with the
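The return-level computation underlying the PFEs above can be sketched in a minimal stdlib form. This is only illustrative: it fits a Gumbel distribution (the GEV with shape parameter tending to zero) by the method of moments, not the regional L-moment procedure the study applies, and all names are assumptions.

```python
import math
from statistics import mean, stdev

EULER_GAMMA = 0.5772156649015329

def fit_gumbel_moments(annual_maxima):
    """Method-of-moments fit of a Gumbel distribution (GEV, shape -> 0)
    to an annual maximum series; returns (location, scale)."""
    s = stdev(annual_maxima)
    scale = s * math.sqrt(6.0) / math.pi
    loc = mean(annual_maxima) - EULER_GAMMA * scale
    return loc, scale

def gev_return_level(loc, scale, shape, ari_years):
    """Rainfall depth with average recurrence interval `ari_years`
    for a GEV(loc, scale, shape) fitted to annual maxima."""
    p = 1.0 - 1.0 / ari_years        # annual non-exceedance probability
    y = -math.log(p)
    if abs(shape) < 1e-9:            # Gumbel limit
        return loc - scale * math.log(y)
    return loc + scale / shape * (y ** (-shape) - 1.0)
```

A regional approach replaces the single-pixel fit above with pooled, scaled data from the surrounding homogeneous region, which is what tames the variance and quantile bias the abstract mentions.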

  19. Consistency of Crustal Loading Signals Derived from Models and GPS: Inferences for GPS Positioning Errors

    NASA Astrophysics Data System (ADS)

    Ray, J.; Collilieux, X.; Rebischung, P.; van Dam, T. M.; Altamimi, Z.

    2011-12-01

    After applying corrections for surface load displacements to a set of station position time series determined using the Global Positioning System (GPS), we are able to infer precise error floors for the determinations of weekly dN, dE, and dU components. The load corrections are a combination of NCEP atmosphere, ECCO non-tidal ocean, and LDAS surface water models, after detrending and averaging to the middle of each GPS week. These load corrections have been applied to the most current station time series from the International GNSS Service (IGS) for a global set of 706 stations, each having more than 100 weekly observations. The stacking of the weekly IGS frame solutions has taken utmost care to minimize aliasing of local load signals into the frame parameters to ensure the most reliable time series of individual station motions. For the first time, dN and dE horizontal components have been considered together with the height (dU) variations. By examining the distributions of annual amplitudes versus WRMS scatters for all 706 stations and all three local components, we find an empirical error floor of about 0.65, 0.7, and 2.2 mm for weekly dN, dE, and dU. Only the very best performing GPS stations approach these floors. Most stations have larger scatters due to other non-load errors. These global error floors have been verified by studying differences for a subset of 119 station pairs located within 25 km of each other. Of these, 19 pairs share a common antenna, which permits an estimate of the fundamental electronic noise in the GPS estimates: 0.4, 0.4, and 1.3 mm for dN, dE, and dU. The remaining 100 close pairs that do not share an antenna include this noise component as well as errors due to multipath, equipment differences, data modeling, etc., but not due to loading or direct orbit effects since those are removed by the differencing. The WRMS dN, dE, and dU differences for these close pairs imply station error floors of 0.8, 0.9, and 2.1 mm, respectively.

  20. Magnitude error bounds for sampled-data frequency response obtained from the truncation of an infinite series, and compensator improvement program

    NASA Technical Reports Server (NTRS)

    Mitchell, J. R.

    1972-01-01

    The frequency response method of analyzing control system performance is discussed, and the difficulty of obtaining the sampled frequency response of the continuous system is considered. An upper bound magnitude error equation is obtained which yields reasonable estimates of the actual error. Finalization of the compensator improvement program is also reported, and the program was used to design compensators for Saturn 5/S1-C dry workshop and Saturn 5/S1-C Skylab.

  1. A semiempirical error estimation technique for PWV derived from atmospheric radiosonde data

    NASA Astrophysics Data System (ADS)

    Castro-Almazán, Julio A.; Pérez-Jordán, Gabriel; Muñoz-Tuñón, Casiana

    2016-09-01

    A semiempirical method for estimating the error and optimum number of sampled levels in precipitable water vapour (PWV) determinations from atmospheric radiosoundings is proposed. Two terms have been considered: the uncertainties in the measurements and the sampling error. Also, the uncertainty has been separated into its variance and covariance components. The sampling and covariance components have been modelled from an empirical dataset of 205 high-vertical-resolution radiosounding profiles, measured with Vaisala RS80 and RS92 sondes at four different locations: Güímar (GUI) in Tenerife, at sea level, and the astronomical observatory at Roque de los Muchachos (ORM, 2300 m a.s.l.) on La Palma (both on the Canary Islands, Spain), Lindenberg (LIN) in continental Germany, and Ny-Ålesund (NYA) in the Svalbard Islands, within the Arctic Circle. The balloons at the ORM were launched during intensive and unique site-testing runs carried out in 1990 and 1995, while the data for the other sites were obtained from radiosounding stations operating for a period of 1 year (2013-2014). The PWV values ranged between ˜ 0.9 and ˜ 41 mm. The method sub-samples the profile for error minimization. The result is the minimum error and the optimum number of levels. The results obtained at the four sites studied showed that the ORM is the driest of the four locations and the one with the fastest vertical decay of PWV. The exponential autocorrelation pressure lags ranged from 175 hPa (ORM) to 500 hPa (LIN). The results show a coherent behaviour with no biases as a function of the profile. The final error is roughly proportional to PWV, whereas the optimum number of levels (N0) varies inversely. The value of N0 is less than 400 for 77 % of the profiles and the absolute errors are always < 0.6 mm. The median relative error is 2.0 ± 0.7 % and the 90th percentile P90 = 4.6 %. Therefore, whereas a radiosounding samples at least N0 uniform vertical levels, depending on the water
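The PWV quantity being error-budgeted above is an integral of specific humidity over pressure. A minimal sketch of that integration (trapezoidal rule; names and the exact level convention are assumptions, not taken from the paper):

```python
def precipitable_water_mm(pressure_hpa, specific_humidity):
    """Precipitable water vapour (mm) from a sounding profile by
    trapezoidal integration of specific humidity q over pressure:

        PWV = (1 / (g * rho_w)) * integral q dp

    pressure_hpa      -- pressures from surface upward (decreasing), hPa
    specific_humidity -- q at each level, kg/kg
    """
    G, RHO_W = 9.80665, 1000.0
    total = 0.0
    for i in range(len(pressure_hpa) - 1):
        dp = (pressure_hpa[i] - pressure_hpa[i + 1]) * 100.0  # hPa -> Pa
        q_mean = 0.5 * (specific_humidity[i] + specific_humidity[i + 1])
        total += q_mean * dp
    return total / (G * RHO_W) * 1000.0  # metres -> mm
```

Sub-sampling the profile, as the method above does, amounts to choosing how many levels enter this sum before the quadrature error grows faster than the measurement error shrinks.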

  2. Estimates of Mode-S EHS aircraft-derived wind observation errors using triple collocation

    NASA Astrophysics Data System (ADS)

    de Haan, Siebren

    2016-08-01

    Information on the accuracy of meteorological observation is essential to assess the applicability of the measurements. In general, accuracy information is difficult to obtain in operational situations, since the truth is unknown. One method to determine this accuracy is by comparison with the model equivalent of the observation. The advantage of this method is that all measured parameters can be evaluated, from 2 m temperature observation to satellite radiances. The drawback is that these comparisons also contain the (unknown) model error. By applying the so-called triple-collocation method to two independent observations at the same location in space and time, combined with model output, and assuming uncorrelated errors, the three error variances can be estimated. This method is applied in this study to estimate wind observation errors from aircraft, obtained utilizing information from air traffic control surveillance radar with Selective Mode Enhanced Surveillance capabilities (Mode-S EHS). Radial wind measurements from Doppler weather radar and wind vector measurements from sodar, together with equivalents from a non-hydrostatic numerical weather prediction model, are used to assess the accuracy of the Mode-S EHS wind observations. The Mode-S EHS wind (zonal and meridional) observation error is estimated to be less than 1.4 ± 0.1 m s-1 near the surface and around 1.1 ± 0.3 m s-1 at 500 hPa.
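The triple-collocation estimator referred to above can be sketched in its standard covariance form (a generic version, not necessarily the paper's exact formulation; names are illustrative). Each of the three systems is modelled as truth plus zero-mean, mutually uncorrelated error:

```python
import statistics

def triple_collocation_variances(x, y, z):
    """Error-variance estimates for three collocated measurement
    systems, assuming x = t + ex, y = t + ey, z = t + ez with
    mutually uncorrelated, zero-mean errors:

        var(ex) = Cov(x,x) - Cov(x,y)*Cov(x,z)/Cov(y,z)   (and cyclic)
    """
    def cov(a, b):
        ma, mb = statistics.fmean(a), statistics.fmean(b)
        return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / (len(a) - 1)
    return (cov(x, x) - cov(x, y) * cov(x, z) / cov(y, z),
            cov(y, y) - cov(y, x) * cov(y, z) / cov(x, z),
            cov(z, z) - cov(z, x) * cov(z, y) / cov(x, y))
```

The key point exploited in the study is that no system needs to be treated as truth: the model equivalent enters as an ordinary third member of the triple.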

  3. Comparison of High-Frequency Solar Irradiance: Ground Measured vs. Satellite-Derived

    SciTech Connect

    Lave, Matthew; Weekley, Andrew

    2016-11-21

    High-frequency solar variability is an important input to grid integration studies, but ground measurements are scarce. The high resolution irradiance algorithm (HRIA) can produce 4-second resolution global horizontal irradiance (GHI) samples at locations across North America. However, the HRIA has not been extensively validated. In this work, we evaluate the HRIA against a database of 10 high-frequency ground-based measurements of irradiance. The evaluation focuses on variability-based metrics. This results in a greater understanding of the errors in the HRIA as well as suggestions for its improvement.
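A typical variability-based metric of the kind mentioned above is the ramp-rate distribution of the GHI series. The sketch below is illustrative only and is not claimed to be the HRIA validation metric itself:

```python
def ramp_rates(ghi, dt_seconds=4.0, window_seconds=60.0):
    """Ramp-rate series for a GHI time series sampled every dt_seconds:
    the irradiance change over each `window_seconds` interval, a common
    variability metric in grid-integration studies."""
    step = int(window_seconds / dt_seconds)
    return [ghi[i + step] - ghi[i] for i in range(len(ghi) - step)]
```

Comparing the distribution (e.g. the high percentiles) of these ramps between satellite-derived and ground-measured series is one way to quantify whether an algorithm reproduces short-term variability rather than just mean irradiance.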

  4. Error analysis in the digital elevation model of Kuwait desert derived from repeat pass synthetic aperture radar interferometry

    NASA Astrophysics Data System (ADS)

    Rao, Kota S.; Al Jassar, Hala K.

    2010-09-01

    The aim of this paper is to analyze the errors in digital elevation models (DEMs) derived through repeat-pass SAR interferometry (InSAR). Out of 29 ASAR images available to us, 8 are selected for this study, forming a unique data set of 7 InSAR pairs with a single master image. The perpendicular baseline component (B⊥) varies between 200 and 400 m, suitable for generating good-quality DEMs. The temporal baseline (T) varies from 35 days to 525 days, allowing the effect of temporal decorrelation to be examined. All the DEMs are expected to be similar to each other spatially within the noise limits; however, they differ considerably from one another. The 7 DEMs are compared with the SRTM DEM for the estimation of errors. The spatial and temporal distribution of errors in the DEMs is analyzed through several case studies. Spatial and temporal variability of precipitable water vapour (PWV) is analysed. PWV corrections to the DEMs are implemented and found to have no significant effect; the reasons are explained. Temporal decorrelation of phases and soil moisture variations appear to influence the accuracy of the derived DEMs. It is suggested that installing a number of corner reflectors (CRs) and using the Permanent Scatterer approach may improve the accuracy of the results in desert test sites.

  5. Shallow Water Sediment Properties Derived from High-Frequency Shear and Interface Waves

    DTIC Science & Technology

    1992-04-10

    Ewing, John; Carter, Jerry A.; Sutton, George H.; Barstow, Noel, "Shallow Water Sediment Properties Derived From High-Frequency Shear and Interface Waves," J. Geophys. Res., B4, pages 4739-4762, April 10, 1992 (ONR contract N00014-88-C-1238; first author at Woods Hole). Only fragments of the scanned abstract survive; they mention velocity gradients and penetration depths [Nettleton, 1940] and a Q estimate of about 40 obtained from the amplitude falloff with range.

  6. General-form 3-3-3 interpolation kernel and its simplified frequency-response derivation

    NASA Astrophysics Data System (ADS)

    Deng, Tian-Bo

    2016-11-01

    An interpolation kernel is required in a wide variety of signal processing applications such as image interpolation and timing adjustment in digital communications. This article presents a general-form interpolation kernel called 3-3-3 interpolation kernel and derives its frequency response in a closed-form by using a simple derivation method. This closed-form formula is preliminary to designing various 3-3-3 interpolation kernels subject to a set of design constraints. The 3-3-3 interpolation kernel is formed through utilising the third-degree piecewise polynomials, and it is an even-symmetric function. Thus, it will suffice to consider only its right-hand side when deriving its frequency response. Since the right-hand side of the interpolation kernel contains three piecewise polynomials of the third degree, i.e. the degrees of the three piecewise polynomials are (3,3,3), we call it the 3-3-3 interpolation kernel. Once the general-form frequency-response formula is derived, we can systematically formulate the design of various 3-3-3 interpolation kernels subject to a set of design constraints, which are targeted for different interpolation applications. Therefore, the closed-form frequency-response expression is preliminary to the optimal design of various 3-3-3 interpolation kernels. We will use an example to show the optimal design of a 3-3-3 interpolation kernel based on the closed-form frequency-response expression.
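For readers unfamiliar with this family of kernels, the well-known Keys cubic convolution kernel is a simpler even-symmetric piecewise-cubic relative of the 3-3-3 kernel described above (it has two, not three, cubic pieces on the right-hand side). The sketch below evaluates it and computes its frequency response numerically, exploiting the same even-symmetry argument the abstract uses; it is not the paper's closed-form derivation.

```python
import math

def keys_cubic(t, a=-0.5):
    """Keys piecewise-cubic convolution kernel (support |t| <= 2),
    an even-symmetric function, so only t >= 0 need be defined."""
    t = abs(t)
    if t < 1:
        return (a + 2) * t**3 - (a + 3) * t**2 + 1
    if t < 2:
        return a * t**3 - 5 * a * t**2 + 8 * a * t - 4 * a
    return 0.0

def freq_response(kernel, f, support=2.0, n=4000):
    """Frequency response of an even-symmetric kernel,
    H(f) = 2 * integral_0^support k(t) * cos(2*pi*f*t) dt,
    evaluated with the trapezoidal rule."""
    dt = support / n
    total = 0.0
    for i in range(n + 1):
        t = i * dt
        w = 0.5 if i in (0, n) else 1.0
        total += w * kernel(t) * math.cos(2.0 * math.pi * f * t)
    return 2.0 * total * dt
```

Because the kernel integrates to one, H(0) = 1; design constraints of the kind the article formulates (interpolation conditions, flatness at f = 0) are imposed on exactly this kind of closed-form or sampled response.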

  7. Derivatives of buckling loads and vibration frequencies with respect to stiffness and initial strain parameters

    NASA Technical Reports Server (NTRS)

    Haftka, Raphael T.; Cohen, Gerald A.; Mroz, Zenon

    1990-01-01

    A uniform variational approach to sensitivity analysis of vibration frequencies and bifurcation loads of nonlinear structures is developed. Two methods of calculating the sensitivities of bifurcation buckling loads and vibration frequencies of nonlinear structures, with respect to stiffness and initial strain parameters, are presented. A direct method requires calculation of derivatives of the prebuckling state with respect to these parameters. An adjoint method bypasses the need for these derivatives by using instead the strain field associated with the second-order postbuckling state. An operator notation is used and the derivation is based on the principle of virtual work. The derivative computations are easily implemented in structural analysis programs. This is demonstrated by examples using a general purpose, finite element program and a shell-of-revolution program.

  8. Assessment of Error in Synoptic-Scale Diagnostics Derived from Wind Profiler and Radiosonde Network Data

    NASA Technical Reports Server (NTRS)

    Mace, Gerald G.; Ackerman, Thomas P.

    1996-01-01

    A topic of current practical interest is the accurate characterization of the synoptic-scale atmospheric state from wind profiler and radiosonde network observations. We have examined several related and commonly applied objective analysis techniques for performing this characterization and considered their associated level of uncertainty both from a theoretical and a practical standpoint. A case study is presented where two wind profiler triangles with nearly identical centroids and no common vertices produced strikingly different results during a 43-h period. We conclude that the uncertainty in objectively analyzed quantities can easily be as large as the expected synoptic-scale signal. In order to quantify the statistical precision of the algorithms, we conducted a realistic observing system simulation experiment using output from a mesoscale model. A simple parameterization for estimating the uncertainty in horizontal gradient quantities in terms of known errors in the objectively analyzed wind components and temperature is developed from these results.
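The triangle-based diagnostics discussed above rest on fitting a locally linear wind field to the three profiler sites and reading off horizontal gradients. A minimal sketch (plane fit by Cramer's rule; names and the linear-field assumption are illustrative, not the paper's full objective analysis):

```python
def plane_fit(xy, vals):
    """Fit vals = c0 + c1*x + c2*y through three points; return (c1, c2),
    the x- and y-gradients, via Cramer's rule."""
    (x1, y1), (x2, y2), (x3, y3) = xy
    det = lambda a, b, c, d, e, f, g, h, i: (
        a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g))
    D = det(1, x1, y1, 1, x2, y2, 1, x3, y3)
    v1, v2, v3 = vals
    c1 = det(1, v1, y1, 1, v2, y2, 1, v3, y3) / D
    c2 = det(1, x1, v1, 1, x2, v2, 1, x3, v3) / D
    return c1, c2

def triangle_divergence(stations_xy, u, v):
    """Horizontal divergence du/dx + dv/dy from winds at the vertices
    of a profiler triangle, assuming a locally linear wind field."""
    ux, _ = plane_fit(stations_xy, u)
    _, vy = plane_fit(stations_xy, v)
    return ux + vy
```

The case study's point is visible in this formulation: any observation error projects directly onto the fitted gradients, so two triangles with nearly identical centroids can yield very different divergences.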

  9. GOME total ozone and calibration error derived using Version 8 TOMS Algorithm

    NASA Astrophysics Data System (ADS)

    Gleason, J.; Wellemeyer, C.; Qin, W.; Ahn, C.; Gopalan, A.; Bhartia, P.

    2003-04-01

    The Global Ozone Monitoring Experiment (GOME) is a hyper-spectral satellite instrument measuring the ultraviolet backscatter at relatively high spectral resolution. GOME radiances have been slit averaged to emulate measurements of the Total Ozone Mapping Spectrometer (TOMS) made at discrete wavelengths and processed using the new TOMS Version 8 Ozone Algorithm. Compared to Differential Optical Absorption Spectroscopy (DOAS) techniques based on local structure in the Huggins Bands, the TOMS algorithm uses differential absorption between a pair of wavelengths including the local structure as well as the background continuum. This makes the TOMS Algorithm more sensitive to ozone, but it also makes the algorithm more sensitive to instrument calibration errors. While calibration adjustments are not needed for the fitting techniques like the DOAS employed in GOME algorithms, some adjustment is necessary when applying the TOMS Algorithm to GOME. Using spectral discrimination at near ultraviolet wavelength channels unabsorbed by ozone, the GOME wavelength dependent calibration drift is estimated and then checked using pair justification. In addition, the day one calibration offset is estimated based on the residuals of the Version 8 TOMS Algorithm. The estimated drift in the 2b detector of GOME is small through the first four years and then increases rapidly to +5% in normalized radiance at 331 nm relative to 385 nm by mid 2000. The 1b detector appears to be quite well behaved throughout this time period.

  10. The Relative Importance of Random Error and Observation Frequency in Detecting Trends in Upper Tropospheric Water Vapor

    NASA Technical Reports Server (NTRS)

    Whiteman, David N.; Vermeesch, Kevin C.; Oman, Luke D.; Weatherhead, Elizabeth C.

    2011-01-01

    Recent published work assessed the amount of time to detect trends in atmospheric water vapor over the coming century. We address the same question and conclude that under the most optimistic scenarios and assuming perfect data (i.e., observations with no measurement uncertainty) the time to detect trends will be at least 12 years at approximately 200 hPa in the upper troposphere. Our times to detect trends are therefore shorter than those recently reported and this difference is affected by data sources used, method of processing the data, geographic location and pressure level in the atmosphere where the analyses were performed. We then consider the question of how instrumental uncertainty plays into the assessment of time to detect trends. We conclude that due to the high natural variability in atmospheric water vapor, the amount of time to detect trends in the upper troposphere is relatively insensitive to instrumental random uncertainty and that it is much more important to increase the frequency of measurement than to decrease the random error in the measurement. This is put in the context of international networks such as the Global Climate Observing System (GCOS) Reference Upper-Air Network (GRUAN) and the Network for the Detection of Atmospheric Composition Change (NDACC) that are tasked with developing time series of climate quality water vapor data.
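The time-to-detect-trend calculations described above are commonly based on the approximation of Weatherhead et al. (1998); it is an assumption here that this is the relevant formula, and the unit conventions in the sketch (trend per year, noise standard deviation in the same units, lag-1 autocorrelation of the noise) are illustrative.

```python
import math

def years_to_detect_trend(trend_per_year, noise_std, autocorr):
    """Approximate number of years to detect a linear trend
    (Weatherhead et al., 1998):

        n* = [ (3.3 * sigma_N / |omega|) * sqrt((1+phi)/(1-phi)) ]^(2/3)

    trend_per_year -- trend magnitude omega
    noise_std      -- noise standard deviation sigma_N (same units)
    autocorr       -- lag-1 autocorrelation phi of the noise
    """
    factor = 3.3 * noise_std / abs(trend_per_year)
    return (factor * math.sqrt((1.0 + autocorr) / (1.0 - autocorr))) ** (2.0 / 3.0)
```

The formula makes the abstract's conclusion concrete: detection time grows with noise variance only as a 2/3 power but rises steeply with autocorrelation, so sampling more frequently (which lowers the effective phi and sigma_N of the averaged series) pays off more than reducing per-measurement random error.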

  11. Deriving frequency-dependent spatial patterns in MEG-derived resting state sensorimotor network: A novel multiband ICA technique.

    PubMed

    Nugent, Allison C; Luber, Bruce; Carver, Frederick W; Robinson, Stephen E; Coppola, Richard; Zarate, Carlos A

    2017-02-01

    Recently, independent components analysis (ICA) of resting state magnetoencephalography (MEG) recordings has revealed resting state networks (RSNs) that exhibit fluctuations of band-limited power envelopes. Most of the work in this area has concentrated on networks derived from the power envelope of beta bandpass-filtered data. Although research has demonstrated that most networks show maximal correlation in the beta band, little is known about how spatial patterns of correlations may differ across frequencies. This study analyzed MEG data from 18 healthy subjects to determine if the spatial patterns of RSNs differed between delta, theta, alpha, beta, gamma, and high gamma frequency bands. To validate our method, we focused on the sensorimotor network, which is well-characterized and robust in both MEG and functional magnetic resonance imaging (fMRI) resting state data. Synthetic aperture magnetometry (SAM) was used to project signals into anatomical source space separately in each band before a group temporal ICA was performed over all subjects and bands. This method preserved the inherent correlation structure of the data and reflected connectivity derived from single-band ICA, but also allowed identification of spatial spectral modes that are consistent across subjects. The implications of these results on our understanding of sensorimotor function are discussed, as are the potential applications of this technique. Hum Brain Mapp 38:779-791, 2017. © 2016 Wiley Periodicals, Inc.

  12. Effects of diffraction by ionospheric electron density irregularities on the range error in GNSS dual-frequency positioning and phase decorrelation

    NASA Astrophysics Data System (ADS)

    Gherm, Vadim E.; Zernov, Nikolay N.; Strangeways, Hal J.

    2011-06-01

    It can be important to determine the correlation of different frequency signals in L band that have followed transionospheric paths. In the future, both GPS and the new Galileo satellite system will broadcast three frequencies, enabling more advanced three-frequency correction schemes, so that knowledge of the correlations of different frequency pairs under scintillation conditions is desirable. Even at present, it would be helpful to know how dual-frequency Global Navigation Satellite System (GNSS) positioning can be affected by lack of correlation between the L1 and L2 signals. To treat this problem of signal correlation for the case of strong scintillation, a previously constructed simulator program, based on the hybrid method, has been further modified to simulate the fields for both frequencies on the ground, taking account of their cross correlation. Then, the errors in the two-frequency range finding method caused by scintillation have been estimated for particular ionospheric conditions and for a realistic fully three-dimensional model of the ionospheric turbulence. The results, which are presented for five different frequency pairs (L1/L2, L1/L3, L1/L5, L2/L3, and L2/L5), show the dependence of diffractional errors on the scintillation index S4: the errors diverge further from a linear relationship as scintillation effects strengthen, and may reach ten centimeters or more. The correlation of the phases at spaced frequencies has also been studied; the correlation coefficients for different pairs of frequencies depend on the procedure of phase retrieval, and reduce slowly as both the variance of the electron density fluctuations and cycle slips increase.
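For context, the dual-frequency combination whose residual errors are being quantified above is the standard first-order ionosphere-free combination; the sketch below (GPS L1/L2 frequencies, illustrative function names) removes the refractive 40.3*TEC/f^2 group delay exactly, which is precisely why only the diffractional (scintillation) part survives as the error studied in the paper.

```python
def ionosphere_free_range(p1, p2, f1=1575.42e6, f2=1227.60e6):
    """First-order ionosphere-free pseudorange combination for two
    carrier frequencies f1, f2 (Hz):

        P_IF = (f1^2 * P1 - f2^2 * P2) / (f1^2 - f2^2)

    Cancels the 40.3 * TEC / f^2 group-delay term; diffraction effects
    are not removed by this combination.
    """
    g1, g2 = f1 * f1, f2 * f2
    return (g1 * p1 - g2 * p2) / (g1 - g2)
```

With three broadcast frequencies, analogous pairwise combinations (the five pairs listed in the abstract) become available, each with a different noise amplification factor.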

  13. Derivation of formulas for root-mean-square errors in location, orientation, and shape in triangulation solution of an elongated object in space

    NASA Technical Reports Server (NTRS)

    Long, S. A. T.

    1974-01-01

    Formulas are derived for the root-mean-square (rms) displacement, slope, and curvature errors in an azimuth-elevation image trace of an elongated object in space, as functions of the number and spacing of the input data points and the rms elevation error in the individual input data points from a single observation station. Also, formulas are derived for the total rms displacement, slope, and curvature error vectors in the triangulation solution of an elongated object in space due to the rms displacement, slope, and curvature errors, respectively, in the azimuth-elevation image traces from different observation stations. The total rms displacement, slope, and curvature error vectors provide useful measure numbers for determining the relative merits of two or more different triangulation procedures applicable to elongated objects in space.

  14. Written Type and Token Frequency Measures of Fifty Spanish Derivational Morphemes.

    PubMed

    Lázaro, Miguel; Acha, Joana; Illera, Víctor; Sainz, Javier S

    2016-11-08

    Several databases of written language exist in Spanish that provide important information on the lexical and sublexical characteristics of words. However, there is no database with information on the productivity and frequency of use of derivational suffixes: sublexical units with an essential role in the formation of orthographic representations and lexical access. This work examines these two measures, known as type and token frequencies, for a series of 50 derivational suffixes and their corresponding orthographic endings. Derivational suffixes are differentiated from orthographic endings by eliminating pseudoaffixed words from the list of orthographic endings (cerveza [beer] is a simple word despite its ending in -eza). We provide separate data for child and adult populations, using two databases commonly accessed by psycholinguists conducting research in Spanish. We describe the filtering process used to obtain descriptive data that will provide information for future research on token and type frequencies of morphemes. This database is an important development for researchers focusing on the role of morphology in lexical acquisition and access.
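The two measures described above can be sketched directly. The example below (toy lexicon, illustrative names) counts a suffix's type frequency (distinct words bearing it) and token frequency (summed corpus frequency of those words); note that a purely orthographic match also catches pseudoaffixed words like cerveza, which is exactly the filtering problem the study addresses by hand.

```python
def suffix_type_token(lexicon, suffix):
    """Type and token frequency of a derivational suffix.

    lexicon -- mapping word -> corpus frequency
    Returns (type_frequency, token_frequency) for all words whose
    orthographic ending matches the suffix. Pseudoaffixed words are
    NOT excluded here; that requires manual filtering.
    """
    words = [w for w in lexicon if w.endswith(suffix)]
    return len(words), sum(lexicon[w] for w in words)
```

Running this on a lexicon before and after removing pseudoaffixed entries gives the orthographic-ending versus true-suffix counts that the database distinguishes.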

  15. Analysis on error of laser frequency locking for fiber optical receiver in direct detection wind lidar based on Fabry-Perot interferometer and improvements

    NASA Astrophysics Data System (ADS)

    Zhang, Feifei; Dou, Xiankang; Sun, Dongsong; Shu, Zhifeng; Xia, Haiyun; Gao, Yuanyuan; Hu, Dongdong; Shangguan, Mingjia

    2014-12-01

    Direct detection Doppler wind lidar (DWL) has demonstrated its capability for atmospheric wind detection from the troposphere to the stratosphere with high temporal and spatial resolution. We design and describe a fiber-based optical receiver for direct detection DWL. The locking error of the relative laser frequency is then analyzed; the dependent variables turn out to be the relative error of the calibrated constant and the slope of the transmission function. For high-accuracy measurement of the calibrated constant in a fiber-based system, an integrating sphere is employed for its uniform scattering. Moreover, temporally widening the pulsed laser allows more samples to be acquired by an analog-to-digital card of the same sampling rate. The result shows a relative error of 0.7% for the calibrated constant. For the transmission-function slope, a new improved locking filter for the Fabry-Perot interferometer was designed with a larger slope. With these two strategies, the locking error of the relative laser frequency is calculated to be about 3 MHz, which is equivalent to a radial velocity of about 0.53 m/s and demonstrates the effective improvement of frequency locking for a robust DWL.

  16. Spatial-carrier phase-shifting digital holography utilizing spatial frequency analysis for the correction of the phase-shift error.

    PubMed

    Tahara, Tatsuki; Shimozato, Yuki; Awatsuji, Yasuhiro; Nishio, Kenzo; Ura, Shogo; Matoba, Osamu; Kubota, Toshihiro

    2012-01-15

    We propose a single-shot digital holography technique in which the complex amplitude distribution is obtained by spatial-carrier phase-shifting (SCPS) interferometry together with correction of the phase-shift error inherent in this interferometry. The 0th-order diffraction wave and the conjugate image are removed by phase-shifting interferometry and a Fourier transform technique, respectively. The inherent error is corrected in the spatial frequency domain. The proposed technique does not require an iteration process to remove the unwanted images and has an advantage in field of view in comparison to the conventional SCPS technique.

  17. System for adjusting frequency of electrical output pulses derived from an oscillator

    DOEpatents

    Bartholomew, David B.

    2006-11-14

    A system for setting and adjusting a frequency of electrical output pulses derived from an oscillator in a network is disclosed. The system comprises an accumulator module configured to receive pulses from an oscillator and to output an accumulated value. An adjustor module is configured to store an adjustor value used to correct local oscillator drift. A digital adder adds values from the accumulator module to values stored in the adjustor module and outputs their sums to the accumulator module, where they are stored. The digital adder also outputs an electrical pulse to a logic module. The logic module is in electrical communication with the adjustor module and the network. The logic module may change the value stored in the adjustor module to compensate for local oscillator drift or change the frequency of output pulses. The logic module may also keep time and calculate drift.
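The accumulator/adjustor scheme described above behaves like a numerically controlled oscillator whose addend can be trimmed. The sketch below is an illustrative model only (names and the 32-bit width are assumptions, not taken from the patent): each oscillator tick adds the nominal increment plus the adjustor value, and an output pulse fires on accumulator overflow, so changing the adjustor value shifts the output pulse frequency to compensate for oscillator drift.

```python
class PulseGenerator:
    """Model of a phase accumulator with a drift-correction adjustor."""
    WIDTH = 32

    def __init__(self, nominal_increment, adjustor=0):
        self.increment = nominal_increment
        self.adjustor = adjustor
        self.acc = 0

    def tick(self):
        """One oscillator pulse; returns True when an output pulse fires."""
        self.acc += self.increment + self.adjustor
        if self.acc >> self.WIDTH:            # overflow -> output pulse
            self.acc &= (1 << self.WIDTH) - 1
            return True
        return False

def count_pulses(gen, n_ticks):
    """Number of output pulses emitted over n_ticks oscillator ticks."""
    return sum(gen.tick() for _ in range(n_ticks))
```

The mean output rate is (increment + adjustor) / 2^WIDTH pulses per tick, so a one-count change in the adjustor retunes the output frequency by one part in 2^WIDTH of the oscillator rate.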

  18. Performance analysis of multi-frequency topological derivative for reconstructing perfectly conducting cracks

    NASA Astrophysics Data System (ADS)

    Park, Won-Kwang

    2017-04-01

    This paper concerns a fast, one-step iterative technique of imaging extended perfectly conducting cracks with Dirichlet boundary condition. In order to reconstruct the shape of cracks from scattered field data measured at the boundary, we introduce a topological derivative-based electromagnetic imaging functional operated at several nonzero frequencies. The structure of the imaging functionals is carefully analyzed by establishing relationships with infinite series of Bessel functions for the configurations of both symmetric and non-symmetric incident field directions. Identified structure explains why the application of incident fields with symmetric direction operated at multiple frequencies guarantees a successful reconstruction. Various numerical simulations with noise-corrupted data are conducted to assess the performance, effectiveness, robustness, and limitations of the proposed technique.

  19. An efficient hybrid causative event-based approach for deriving the annual flood frequency distribution

    NASA Astrophysics Data System (ADS)

    Thyer, Mark; Li, Jing; Lambert, Martin; Kuczera, George; Metcalfe, Andrew

    2015-04-01

    Flood extremes are driven by highly variable and complex climatic and hydrological processes. Derived flood frequency methods are often used to predict the flood frequency distribution (FFD) because they can provide predictions in ungauged catchments and evaluate the impact of land-use or climate change. This study presents recent work on the development of a new derived flood frequency method called the hybrid causative events (HCE) approach. The advantage of the HCE approach is that it combines the accuracy of the continuous simulation approach with the computational efficiency of the event-based approaches. Derived flood frequency methods can be divided into two classes. Event-based approaches provide fast estimation but can also lead to prediction bias due to limitations of the inherent assumptions required to obtain input information (rainfall and catchment wetness) for events that cause large floods. Continuous simulation produces more accurate predictions, however, at the cost of massive computational time. The HCE method uses a short continuous simulation to provide inputs for a rainfall-runoff model running in an event-based fashion. A proof-of-concept pilot study showed that the HCE produces estimates of the flood frequency distribution with similar accuracy to the continuous simulation, but with dramatically reduced computation time. Recent work incorporated seasonality into the HCE approach and evaluated it with a more realistic set of eight sites from a wide range of climate zones, typical of Australia, using a virtual catchment approach. The seasonal hybrid-CE provided accurate predictions of the FFD for all sites. Comparison with the existing non-seasonal hybrid-CE showed that for some sites the non-seasonal hybrid-CE significantly over-predicted the FFD. Analysis of the underlying cause of whether a site had a high, low or no need to use seasonality found it was based on a combination of reasons that were difficult to predict a priori. Hence it is recommended

  20. General formalism for the efficient calculation of derivatives of EM frequency-domain responses and derivatives of the misfit

    NASA Astrophysics Data System (ADS)

    Pankratov, Oleg; Kuvshinov, Alexei

    2010-04-01

    Electromagnetic (EM) studies of the Earth have advanced significantly over the past few years. This progress was driven, in particular, by new developments in the methods of 3-D inversion of EM data. Due to the large scale of 3-D EM inverse problems, iterative gradient-type methods have mostly been employed. In these methods one has to repeatedly calculate the gradient of the penalty function (a sum of misfit and regularization terms) with respect to the model parameters. However, even with modern computational capabilities, the straightforward calculation of the misfit gradients based on numerical differentiation is extremely time consuming. A much more efficient and elegant way to calculate the gradient of the misfit is provided by the so-called `adjoint' approach, which is now widely used in many 3-D numerical schemes for inverting EM data of different types and origin. It allows the calculation of the misfit gradient for the price of only a few additional forward calculations. In spite of its popularity, we did not find in the literature any general description of the approach that would allow researchers to apply this methodology in a straightforward manner to their scenario of interest. In this paper, we present a formalism for the efficient calculation of the derivatives of EM frequency-domain responses and the derivatives of the misfit with respect to variations of 3-D isotropic/anisotropic conductivity. The approach is rather general; it works with single-site responses, multi-site responses and responses that include spatial derivatives of the EM field. The formalism also allows for various types of parametrization of the 3-D conductivity distribution. Using this methodology one can readily obtain appropriate formulae for specific sounding methods. To illustrate the concept we provide such formulae for a number of EM techniques: geomagnetic depth sounding (GDS), conventional and generalized magnetotellurics, the magnetovariational method, horizontal

  1. Frequency and origins of hemoglobin S mutation in African-derived Brazilian populations.

    PubMed

    De Mello Auricchio, Maria Teresa Balester; Vicente, João Pedro; Meyer, Diogo; Mingroni-Netto, Regina Célia

    2007-12-01

    Africans arrived in Brazil as slaves in great numbers, mainly after 1550. Before the abolition of slavery in Brazil in 1888, many communities, called quilombos, were formed by runaway or abandoned African slaves. These communities are presently referred to as remnants of quilombos, and many are still partially genetically isolated. These remnants can be regarded as relicts of the original African genetic contribution to the Brazilian population. In this study we assessed the frequencies and probable geographic origins of hemoglobin S (HBB*S) mutations in remnants of quilombo populations in the Ribeira River valley, São Paulo, Brazil, to reconstruct the history of African-derived populations in the region. We screened for HBB*S mutations in 11 quilombo populations (1,058 samples) and found HBB*S carrier frequencies that ranged from 0% to 14%. We analyzed beta-globin gene cluster haplotypes linked to the HBB*S mutation in 86 chromosomes and found the four known African haplotypes: 70 (81.4%) Bantu (Central African Republic), 7 (8.1%) Benin, 7 (8.1%) Senegal, and 2 (2.3%) Cameroon haplotypes. One sickle cell homozygote was Bantu/Bantu and two homozygotes had Bantu/Benin combinations. The high frequency of the sickle cell trait and the diversity of HBB*S-linked haplotypes indicate that Brazilian remnants of quilombos are interesting repositories of the genetic diversity present in the ancestral African populations.

  2. Errors in the estimation of approximate entropy and other recurrence-plot-derived indices due to the finite resolution of RR time series.

    PubMed

    García-González, Miguel A; Fernández-Chimeno, Mireya; Ramos-Castro, Juan

    2009-02-01

    An analysis of the errors due to the finite resolution of RR time series in the estimation of the approximate entropy (ApEn) is described. The quantification errors in the discrete RR time series produce considerable errors in the ApEn estimation (bias and variance) when the signal variability or the sampling frequency is low. Similar errors can be found in indices related to the quantification of recurrence plots. An easy way to calculate a figure of merit [the signal to resolution of the neighborhood ratio (SRN)] is proposed in order to predict when the bias in the indices could be high. When SRN is close to an integer value n, the bias is higher than when near n - 1/2 or n + 1/2. Moreover, if SRN is close to an integer value, the lower this value, the greater the bias is.
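
    The quantization bias described here is easy to reproduce. A minimal sketch, assuming the standard Pincus formulation of ApEn (m = 2, tolerance r = 0.2·SD, Chebyshev distance); the 128 Hz resolution and the synthetic RR series are illustrative, and reading SRN as the ratio of r to the quantization step is our interpretation:

```python
import numpy as np

def apen(x, m=2, r=None):
    """Approximate entropy (Pincus): phi(m) - phi(m+1)."""
    x = np.asarray(x, float)
    if r is None:
        r = 0.2 * x.std()
    N = len(x)
    def phi(mm):
        # all length-mm templates; fraction of templates within tolerance r of each
        templ = np.array([x[i:i + mm] for i in range(N - mm + 1)])
        counts = [np.mean(np.max(np.abs(templ - t), axis=1) <= r) for t in templ]
        return np.mean(np.log(counts))
    return phi(m) - phi(m + 1)

rng = np.random.default_rng(1)
rr = 0.8 + 0.05 * rng.standard_normal(400)   # synthetic RR series, seconds
delta = 1 / 128                              # resolution of a 128 Hz recorder
rr_q = np.round(rr / delta) * delta          # finite-resolution RR series
print(apen(rr), apen(rr_q))                  # quantization biases the estimate
```

    Comparing the two printed values for different `delta` (and hence different r/Δ ratios) reproduces the resolution-dependent bias the paper quantifies.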

  3. The effect of baseline pressure errors on an intracranial pressure-derived index: results of a prospective observational study

    PubMed Central

    2014-01-01

    Background In order to characterize the intracranial pressure-volume reserve capacity, the correlation coefficient (R) between the ICP wave amplitude (A) and the mean ICP level (P), the RAP index, has been used to improve the diagnostic value of ICP monitoring. Baseline pressure errors (BPEs), caused by spontaneous shifts or drifts in baseline pressure, cause erroneous readings of mean ICP. Consequently, BPEs could also affect ICP indices such as RAP, in which the mean ICP is incorporated. Methods A prospective, observational study was carried out on patients with aneurysmal subarachnoid hemorrhage (aSAH) undergoing ICP monitoring as part of their surveillance. Via the same burr hole in the skull, two separate ICP sensors were placed close to each other. For each consecutive 6-sec time window, the dynamic mean ICP wave amplitude (MWA; a measure of the amplitude of the single pressure waves) and the static mean ICP were computed. The RAP index was computed as the Pearson correlation coefficient between the MWA and the mean ICP for 40 6-sec time windows, i.e. for every subsequent 4-min period (method 1). We compared this approach with a method of calculating RAP using a 4-min moving window updated every 6 seconds (method 2). Results The study included 16 aSAH patients. We compared 43,653 4-min RAP observations of signals 1 and 2 (method 1), and 1,727,000 6-sec RAP observations (method 2). The two methods of calculating RAP produced similar results. Differences in RAP ≥0.4 in at least 7% of observations were seen in 5/16 (31%) patients. Moreover, the combination of a RAP of ≥0.6 in one signal and <0.6 in the other was seen in ≥13% of RAP observations in 4/16 (25%) patients, and in ≥8% in another 4/16 (25%) patients. The frequency of differences in RAP >0.2 was significantly associated with the frequency of BPEs (5 mmHg ≤ BPE <10 mmHg). Conclusions Simultaneous monitoring from two separate, close-by ICP sensors reveals significant differences in RAP that
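
    The windowed correlation of method 1 can be expressed compactly. A minimal sketch with hypothetical variable names; it assumes the MWA and mean ICP have already been computed per 6-s window:

```python
import numpy as np

def rap_method1(mwa, mean_icp, win=40):
    """RAP per consecutive 4-min period: Pearson correlation between the
    mean ICP wave amplitude (MWA) and mean ICP over 40 six-second windows."""
    mwa = np.asarray(mwa, float)
    mean_icp = np.asarray(mean_icp, float)
    return np.array([np.corrcoef(mwa[i:i + win], mean_icp[i:i + win])[0, 1]
                     for i in range(0, len(mwa) - win + 1, win)])
```

    Method 2 is the same computation with the 4-min window advanced by one 6-s step at a time instead of by 40.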

  4. Effect of norepinephrine dosage and calibration frequency on accuracy of pulse contour-derived cardiac output

    PubMed Central

    2011-01-01

    Introduction Continuous cardiac output monitoring is used for early detection of hemodynamic instability and guidance of therapy in critically ill patients. Recently, the accuracy of pulse contour-derived cardiac output (PCCO) has been questioned in different clinical situations. In this study, we examined agreement between PCCO and transcardiopulmonary thermodilution cardiac output (COTCP) in critically ill patients, with special emphasis on norepinephrine (NE) administration and the time interval between calibrations. Methods This prospective, observational study was performed with a sample of 73 patients (mean age, 63 ± 13 years) requiring invasive hemodynamic monitoring in a non-cardiac surgery intensive care unit. PCCO was recorded immediately before calibration by COTCP. Bland-Altman analysis was performed on data subsets comparing agreement between PCCO and COTCP according to NE dosage and the time interval between calibrations up to 24 hours. Further, central artery stiffness was calculated on the basis of the pulse pressure to stroke volume relationship. Results A total of 330 data pairs were analyzed. For all data pairs, the mean COTCP (±SD) was 8.2 ± 2.0 L/min. PCCO had a mean bias of 0.16 L/min with limits of agreement of -2.81 to 3.15 L/min (percentage error, 38%) when compared to COTCP. Whereas the bias between PCCO and COTCP was not significantly different between NE dosage categories or categories of time elapsed between calibrations, interchangeability (percentage error <30%) between methods was present only in the high NE dosage subgroup (≥0.1 μg/kg/min), as the percentage errors were 40%, 47% and 28% in the no NE, NE < 0.1 and NE ≥ 0.1 μg/kg/min subgroups, respectively. PCCO was not interchangeable with COTCP in subgroups of different calibration intervals. The high NE dosage group showed significantly increased central artery stiffness. Conclusions This study shows that NE dosage, but not the time interval between calibrations, has an
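
    The percentage error quoted in these results follows the usual Bland-Altman-based criterion: 1.96 x SD of the device differences divided by the mean reference cardiac output, with values below 30% read as interchangeability. A minimal sketch with illustrative numbers (not the study's data):

```python
import numpy as np

def percentage_error(test_co, ref_co):
    """Bland-Altman bias, limits of agreement, and percentage error
    (1.96 * SD of differences / mean reference cardiac output * 100)."""
    d = np.asarray(test_co, float) - np.asarray(ref_co, float)
    bias = d.mean()
    sd = d.std(ddof=1)
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)   # limits of agreement
    pe = 100.0 * 1.96 * sd / np.mean(ref_co)
    return bias, loa, pe
```

    With the study's figures (SD of differences near 1.5 L/min against a mean COTCP of 8.2 L/min) this formula reproduces a percentage error in the high-30s, above the 30% interchangeability threshold.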

  5. Fast and robust population transfer in two-level quantum systems with dephasing noise and/or systematic frequency errors

    NASA Astrophysics Data System (ADS)

    Lu, Xiao-Jing; Chen, Xi; Ruschhaupt, A.; Alonso, D.; Guérin, S.; Muga, J. G.

    2013-09-01

    We design, by invariant-based inverse engineering, driving fields that invert the population of a two-level atom in a given time, robustly with respect to dephasing noise and/or systematic frequency shifts. Without imposing constraints, optimal protocols are insensitive to the perturbations but require infinite energy. For a constrained value of the Rabi frequency, a flat π pulse is the protocol least sensitive to phase noise, but not to systematic frequency shifts, for which we describe and optimize a family of protocols.

  6. Validation of slant delays derived from single and dual frequency GPS data

    NASA Astrophysics Data System (ADS)

    Deng, Z.; Dick, G.; Zus, F.; Ge, M.; Bender, M.; Wickert, J.

    2010-05-01

    Improved knowledge of the humidity distribution is very important for a variety of atmospheric research applications. During the last years the potential of GPS-derived tropospheric products with high temporal resolution, e.g. zenith total delays (ZTD) and slant total delays (STD), has been demonstrated. The spatial resolution depends on the network density, which needs to be improved for meteorological applications such as high-resolution numerical forecast models. Another application is water vapor tomography, which can be used to resolve the spatial structure and temporal variations of the tropospheric water vapor; the GPS-derived STDs are used here as input data. To reconstruct reliable vertical profiles, a large number of STD observations covering the complete region from a wide range of angles is required. For economic reasons, network densification with single-frequency (SF) receivers is recommended. The Satellite-specific Epoch-differenced Ionospheric Delay model (SEID) has been developed at the Helmholtz Centre Potsdam, GFZ German Research Centre for Geosciences to estimate ionospheric corrections for SF receivers embedded in networks of dual-frequency (DF) receivers. With these corrections the SF GPS data can be processed in the same way as the DF data. It has been shown that the SEID model is sufficient for estimating tropospheric products as well as station coordinates from SF data. The easy implementation and the accuracy of SEID may speed up the densification of existing networks with SF receivers. After introducing the SEID model, the validation results of SF- and DF-derived tropospheric products will be presented. Currently the very sparse character of independent observations makes it difficult to assess the anticipated high quality of DF & SF STD data processed for a large network of continuously operating receivers. Therefore monitoring of GPS-derived STD data against weather analysis is an alternative. To compare STDs with their

  7. Large-scale derived flood frequency analysis based on continuous simulation

    NASA Astrophysics Data System (ADS)

    Dung Nguyen, Viet; Hundecha, Yeshewatesfa; Guse, Björn; Vorogushyn, Sergiy; Merz, Bruno

    2016-04-01

    There is an increasing need for spatially consistent flood risk assessments at the regional scale (several 100,000 km2), in particular in the insurance industry and for national risk reduction strategies. However, most large-scale flood risk assessments are composed of smaller-scale assessments and show spatial inconsistencies. To overcome this deficit, a large-scale flood model composed of a weather generator and catchment models was developed, reflecting the spatially inherent heterogeneity. The weather generator is a multisite and multivariate stochastic model capable of generating synthetic meteorological fields (precipitation, temperature, etc.) at daily resolution for the regional scale. These fields respect the observed autocorrelation, spatial correlation and covariance between the variables. They are used as input into the catchment models. A long-term simulation of this combined system enables the derivation of very long discharge series at many catchment locations, serving as a basis for spatially consistent flood risk estimates at the regional scale. This combined model was set up and validated for major river catchments in Germany. The weather generator was trained on 53 years of observation data at 528 stations covering not only Germany but also parts of France, Switzerland, the Czech Republic and Austria, with an aggregated spatial scale of 443,931 km2. 10,000 years of daily meteorological fields for the study area were generated. Likewise, rainfall-runoff simulations with SWIM were performed for the entire Elbe, Rhine, Weser, Donau and Ems catchments. The validation results illustrate a good performance of the combined system, as the simulated flood magnitudes and frequencies agree well with the observed flood data. Based on continuous simulation, this model chain is then used to estimate flood quantiles for the whole of Germany, including upstream headwater catchments in neighbouring countries. This continuous large-scale approach overcomes the several
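
    The final step of such a model chain, turning a long synthetic discharge series into a flood frequency curve, can be sketched as follows. This is a generic empirical annual-maximum analysis with Weibull plotting positions, not the authors' specific code:

```python
import numpy as np

def flood_frequency(daily_q, days_per_year=365):
    """Empirical flood frequency curve from a long daily discharge series:
    annual maxima are ranked and assigned Weibull return periods T = (n+1)/rank."""
    q = np.asarray(daily_q, float)
    n_years = len(q) // days_per_year
    amax = q[:n_years * days_per_year].reshape(n_years, days_per_year).max(axis=1)
    amax = np.sort(amax)[::-1]                       # largest flood first
    T = (n_years + 1) / np.arange(1, n_years + 1)    # return periods in years
    return T, amax
```

    With 10,000 simulated years, quantiles up to roughly the 1,000-year flood can be read off such a curve with modest sampling error, which is what makes the long continuous simulation attractive.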

  8. Topographic gravitational potential up to second-order derivatives: an examination of approximation errors caused by rock-equivalent topography (RET)

    NASA Astrophysics Data System (ADS)

    Kuhn, Michael; Hirt, Christian

    2016-09-01

    In gravity forward modelling, the concept of Rock-Equivalent Topography (RET) is often used to simplify the computation of gravity implied by rock, water, ice and other topographic masses. In the RET concept, topographic masses are compressed (approximated) into equivalent rock, allowing the use of a single constant mass-density value. Many studies acknowledge the approximate character of the RET, but few have yet attempted to quantify and analyse the approximation errors in detail for various gravity field functionals and heights of computation points. Here, we provide an in-depth examination of approximation errors associated with the RET compression for the topographic gravitational potential and its first- and second-order derivatives. Using the Earth2014 layered topography suite we apply Newtonian integration in the spatial domain in two variants: (a) rigorous forward modelling of all mass bodies, and (b) approximate modelling using the RET. The differences between the two variants, which reflect the RET approximation error, are formed and studied for an ensemble of 10 different gravity field functionals at three levels of altitude (on and 3 km above the Earth's surface and at 250 km satellite height). The approximation errors are found to be largest at the Earth's surface over RET compression areas (oceans, ice shields) and to increase for the first- and second-order derivatives. Relative errors, computed here as the ratio of the range of differences between the two variants to the range of the signal, are at the level of 0.06-0.08 % for the potential, ˜ 3-7 % for the first-order derivatives at the Earth's surface (˜ 0.1 % at satellite altitude). For the second-order derivatives, relative errors are below 1 % at satellite altitude, at the 10-20 % level at 3 km, and reach maximum values as large as ˜ 20 to 110 % near the surface. As such, the RET approximation errors may be acceptable for functionals computed far away from the Earth's surface or studies focussing on

  9. The Huygens Doppler Wind Experiment - Titan Winds Derived from Probe Radio Frequency Measurements

    NASA Astrophysics Data System (ADS)

    Bird, M. K.; Dutta-Roy, R.; Heyl, M.; Allison, M.; Asmar, S. W.; Folkner, W. M.; Preston, R. A.; Atkinson, D. H.; Edenhofer, P.; Plettemeier, D.; Wohlmuth, R.; Iess, L.; Tyler, G. L.

    2002-07-01

    A Doppler Wind Experiment (DWE) will be performed during the Titan atmospheric descent of the ESA Huygens Probe. The direction and strength of Titan's zonal winds will be determined with an accuracy better than 1 m s-1 from the start of mission at an altitude of ˜160 km down to the surface. The Probe's wind-induced horizontal motion will be derived from the residual Doppler shift of its S-band radio link to the Cassini Orbiter, corrected for all known orbit and propagation effects. It is also planned to record the frequency of the Probe signal using large ground-based antennas, thereby providing an additional component of the horizontal drift. In addition to the winds, DWE will obtain valuable information on the rotation, parachute swing and atmospheric buffeting of the Huygens Probe, as well as its position and attitude after Titan touchdown. The DWE measurement strategy relies on experimenter-supplied Ultra-Stable Oscillators to generate the transmitted signal from the Probe and to extract the frequency of the received signal on the Orbiter. Results of the first in-flight checkout, as well as the DWE Doppler calibrations conducted with simulated Huygens signals uplinked from ground (Probe Relay Tests), are described. Ongoing efforts to measure and model Titan's winds using various Earth-based techniques are briefly reviewed.

  11. Multi-frequency acoustic derivation of particle size using 'off-the-shelf' ADCPs.

    NASA Astrophysics Data System (ADS)

    Haught, D. R.; Wright, S. A.; Venditti, J. G.; Church, M. A.

    2015-12-01

    Suspended sediment particle size in rivers is of great interest due to its influence on riverine and coastal morphology, socio-economic viability, and ecological health and restoration. Prediction of suspended sediment transport from hydraulics remains a stubbornly difficult problem, particularly for the washload component, which is controlled by sediment supply from the drainage basin. This has led to a number of methods for continuously monitoring suspended sediment concentration and mean particle size, the most popular currently being hydroacoustic methods. Here, we explore the possibility of using theoretical inversion of the sonar equation to derive an estimate of mean particle size and standard deviation of the grain size distribution (GSD) using three 'off-the-shelf' acoustic Doppler current profilers (ADCPs) with frequencies of 300, 600 and 1200 kHz. The instruments were deployed in the sand-bedded reach of the Fraser River, British Columbia. We use bottle samples collected in the acoustic beams to test acoustic signal inversion methods. Concentrations range from 15-300 mg/L and the suspended load at the site is ~25% sand, ~75% silt/clay. Measured mean particle radius from samples ranged from 10-40 microns with relative standard deviations ranging from 0.75 to 2.5. Initial results indicate the acoustically derived mean particle radius compares well with the measured particle radius, using a theoretical inversion method adapted to the Fraser River sediment.

  12. Mean Expected Error in Prediction of Total Body Water: A True Accuracy Comparison between Bioimpedance Spectroscopy and Single Frequency Regression Equations

    PubMed Central

    Abtahi, Shirin; Abtahi, Farhad; Ellegård, Lars; Johannsson, Gudmundur; Bosaeus, Ingvar

    2015-01-01

    For several decades electrical bioimpedance (EBI) has been used to assess body fluid distribution and body composition. Despite the development of several different approaches for assessing total body water (TBW), it remains uncertain whether bioimpedance spectroscopic (BIS) approaches are more accurate than single frequency regression equations. The main objective of this study was to answer this question by calculating the expected accuracy of a single measurement for different EBI methods. The results of this study showed that all methods produced similarly high correlation and concordance coefficients, indicating good accuracy as a method. Even the limits of agreement produced from the Bland-Altman analysis indicated that the performance of the single frequency Sun prediction equations at the population level was close to the performance of both BIS methods; however, when comparing the Mean Absolute Percentage Error value between the single frequency prediction equations and the BIS methods, a significant difference was obtained, indicating slightly better accuracy for the BIS methods. Despite the higher accuracy of BIS methods over 50 kHz prediction equations at both the population and individual level, the magnitude of the improvement was small. Such a slight improvement in the accuracy of BIS methods is suggested to be insufficient to warrant their clinical use where the most accurate predictions of TBW are required, for example, when assessing fluid overload in dialysis. To reach expected errors below 4-5%, novel and individualized approaches must be developed to improve the accuracy of bioimpedance-based methods for the advent of innovative personalized health monitoring applications. PMID:26137489
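
    The Mean Absolute Percentage Error used for the comparison above is simply the mean of the absolute prediction errors relative to the reference value. A minimal sketch with hypothetical TBW values (litres), not data from the study:

```python
import numpy as np

def mape(predicted, reference):
    """Mean absolute percentage error of TBW predictions vs. a reference method
    (e.g. isotope dilution)."""
    p = np.asarray(predicted, float)
    r = np.asarray(reference, float)
    return float(100.0 * np.mean(np.abs(p - r) / r))

# e.g. predictions of 42 L and 38 L against a 40 L reference give 5% MAPE
print(mape([42.0, 38.0], [40.0, 40.0]))  # 5.0
```

    Unlike correlation or concordance coefficients, this metric reflects the expected error of a single measurement, which is why the study uses it to separate methods that look equivalent at the population level.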

  13. Impact of Primary Spherical Aberration, Spatial Frequency and Stiles Crawford Apodization on Wavefront determined Refractive Error: A Computational Study

    PubMed Central

    Xu, Renfeng; Bradley, Arthur; Thibos, Larry N.

    2013-01-01

    Purpose We tested the hypothesis that pupil apodization is the basis for the central pupil bias of spherical refractions in eyes with spherical aberration. Methods We employed Fourier computational optics in which we vary spherical aberration levels, pupil size, and pupil apodization (Stiles-Crawford effect) within the pupil function, from which point spread functions and optical transfer functions were computed. Through-focus analysis determined the refractive correction that optimized retinal image quality. Results For a large pupil (7 mm), as spherical aberration levels increase, refractions that optimize the visual Strehl ratio mirror refractions that maximize high spatial frequency modulation in the image, and both focus a near-paraxial region of the pupil. These refractions are not affected by Stiles-Crawford apodization. Refractions that optimize low spatial frequency modulation come close to minimizing wavefront RMS, and vary with the level of spherical aberration and the Stiles-Crawford effect. In the presence of significant levels of spherical aberration (e.g. C40 = 0.4 µm, 7 mm pupil), low spatial frequency refractions can induce a −0.7 D myopic shift compared to high spatial frequency refractions, and refractions that maximize image contrast of a 3 cycle per degree square-wave grating can cause a −0.75 D myopic drift relative to refractions that maximize image sharpness. Discussion Because of the small depth of focus associated with high spatial frequency stimuli, the large change in dioptric power across the pupil caused by spherical aberration limits the effective aperture contributing to the image of high spatial frequencies. Thus, when imaging high spatial frequencies, spherical aberration effectively induces an annular aperture defining that portion of the pupil contributing to a well-focused image. As spherical focus is manipulated during the refraction procedure, the dimensions of the annular aperture change. Image quality is maximized when the inner radius of the induced

  14. Classification of radiological errors in chest radiographs, using support vector machine on the spatial frequency features of false- negative and false-positive regions

    NASA Astrophysics Data System (ADS)

    Pietrzyk, Mariusz W.; Donovan, Tim; Brennan, Patrick C.; Dix, Alan; Manning, David J.

    2011-03-01

    Aim: To optimize automated classification of radiological errors during lung nodule detection from chest radiographs (CxR) using a support vector machine (SVM) run on the spatial frequency features extracted from the local background of selected regions. Background: The majority of unreported pulmonary nodules are visually detected but not recognized, as shown by the prolonged dwell time values at false-negative regions. Similarly, overestimated nodule locations capture substantial amounts of foveal attention. Spatial frequency properties of selected local backgrounds are correlated with human observer responses either in terms of accuracy in indicating abnormality position or in the precision of visually sampling the medical images. Methods: Seven radiologists participated in eye tracking experiments conducted under conditions of pulmonary nodule detection from a set of 20 postero-anterior CxR. The most dwelled-upon locations were identified and subjected to spatial frequency (SF) analysis. The image-based features of selected ROIs were extracted with the un-decimated Wavelet Packet Transform. An analysis of variance was run to select SF features, and an SVM schema was implemented to classify false-negative and false-positive regions among all ROIs. Results: A relatively high overall accuracy was obtained for each individually developed Wavelet-SVM algorithm, with over 90% average correct ratio for error recognition from all prolonged dwell locations. Conclusion: The preliminary results show that combined eye-tracking and image-based features can be used for automated detection of radiological error with an SVM. The work is still in progress and not all analytical procedures have been completed, which might have an effect on the specificity of the algorithm.

  15. Error Analysis of Clay-Rock Water Content Estimation with Broadband High-Frequency Electromagnetic Sensors—Air Gap Effect

    PubMed Central

    Bore, Thierry; Wagner, Norman; Delepine Lesoille, Sylvie; Taillade, Frederic; Six, Gonzague; Daout, Franck; Placko, Dominique

    2016-01-01

    Broadband electromagnetic frequency or time domain sensor techniques present high potential for quantitative water content monitoring in porous media. Prior to in situ application, knowledge of the impact of the relationship between the broadband electromagnetic properties of the porous material (clay-rock) and the water content on the frequency or time domain sensor response is required. For this purpose, dielectric properties of intact clay-rock samples, experimentally determined in the frequency range from 1 MHz to 10 GHz, were used as input data in 3-D numerical frequency domain finite element field calculations to model the one-port broadband frequency or time domain transfer function for a three-rod sensor embedded in the clay-rock. The sensor response in terms of the reflection factor was analyzed in the time domain with classical travel time analysis in combination with an empirical model according to the Topp equation, as well as the theoretical Lichtenecker and Rother model (LRM), to estimate the volumetric water content. The mixture equation considering the appropriate porosity of the investigated material provides a practical and efficient approach for water content estimation based on classical travel time analysis with the onset method. The inflection method is not recommended for water content estimation in electrically dispersive and absorptive material. Moreover, the results clearly indicate that effects due to coupling of the sensor to the material cannot be neglected. Coupling problems caused by an air gap lead to dramatic effects on water content estimation, even for submillimeter gaps. Thus, the quantitative determination of the in situ water content requires careful sensor installation in order to reach perfect probe-clay-rock coupling. PMID:27096865
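
    For reference, the empirical Topp et al. (1980) calibration mentioned in this abstract maps the apparent relative permittivity obtained from travel time analysis to volumetric water content. A sketch with the coefficients from the original publication; note it was calibrated for mineral soils, so its transferability to clay-rock is exactly what the paper examines:

```python
def topp_vwc(eps_r):
    """Volumetric water content (m3/m3) from apparent relative permittivity,
    Topp et al. (1980) empirical third-order polynomial."""
    return (-5.3e-2 + 2.92e-2 * eps_r
            - 5.5e-4 * eps_r ** 2 + 4.3e-6 * eps_r ** 3)

# The permittivity itself comes from the two-way travel time t along a probe of
# length L: eps_r = (c * t / (2 * L))**2, with c the vacuum speed of light.
```

    Water (eps_r of about 81) maps to a water content near 1, while low-permittivity dry material maps to values near zero, which is the qualitative behaviour the travel-time analysis relies on.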

  16. Error Analysis of Clay-Rock Water Content Estimation with Broadband High-Frequency Electromagnetic Sensors--Air Gap Effect.

    PubMed

    Bore, Thierry; Wagner, Norman; Lesoille, Sylvie Delepine; Taillade, Frederic; Six, Gonzague; Daout, Franck; Placko, Dominique

    2016-04-18

    Broadband electromagnetic frequency- or time-domain sensor techniques have high potential for quantitative water content monitoring in porous media. Prior to in situ application, the impact of the relationship between the broadband electromagnetic properties of the porous material (clay rock) and its water content on the frequency- or time-domain sensor response must be characterized. For this purpose, dielectric properties of intact clay-rock samples, experimentally determined in the frequency range from 1 MHz to 10 GHz, were used as input data in 3-D numerical frequency-domain finite-element field calculations to model the one-port broadband frequency- or time-domain transfer function of a three-rod sensor embedded in the clay rock. The sensor response, in terms of the reflection factor, was analyzed in the time domain with classical travel-time analysis in combination with an empirical model (the Topp equation) as well as the theoretical Lichtenecker and Rother model (LRM) to estimate the volumetric water content. The mixture equation, with the appropriate porosity of the investigated material, provides a practical and efficient approach for water content estimation based on classical travel-time analysis with the onset method. The inflection method is not recommended for water content estimation in electrically dispersive and absorptive material. Moreover, the results clearly indicate that effects due to coupling of the sensor to the material cannot be neglected. Coupling problems caused by an air gap have dramatic effects on water content estimation, even for submillimeter gaps. Thus, quantitative determination of the in situ water content requires careful sensor installation in order to achieve perfect probe-to-clay-rock coupling.
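The two mixing relations named in the abstract can be sketched numerically. The sketch below uses the standard published Topp coefficients and a three-phase (solid/water/air) inversion of the Lichtenecker-Rother rule; the permittivity and porosity values in the example are illustrative, not from the paper:

```python
def topp_water_content(eps_r):
    """Volumetric water content (m3/m3) from apparent relative permittivity
    via the empirical Topp et al. (1980) equation."""
    return (-5.3e-2
            + 2.92e-2 * eps_r
            - 5.5e-4 * eps_r**2
            + 4.3e-6 * eps_r**3)

def lrm_water_content(eps_eff, eps_solid, eps_water, porosity,
                      eps_air=1.0, alpha=0.5):
    """Invert the Lichtenecker-Rother mixing rule
    eps_eff**a = sum(f_i * eps_i**a) for the volumetric water fraction
    of a solid/water/air medium; alpha = 0.5 recovers the CRIM model."""
    num = (eps_eff**alpha
           - (1.0 - porosity) * eps_solid**alpha
           - porosity * eps_air**alpha)
    # water replaces air in the pore space:
    # theta * (eps_water**a - eps_air**a) = num
    return num / (eps_water**alpha - eps_air**alpha)

print(round(topp_water_content(20.0), 3))  # -> 0.345
```

For a measured apparent permittivity of 20, both routes give a water content around 0.3 m3/m3, the regime where the empirical and mixing-model estimates in the paper are compared.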

  17. Fluctuating neutron star magnetosphere: braking indices of eight pulsars, frequency second derivatives of 222 pulsars and 15 magnetars

    NASA Astrophysics Data System (ADS)

    Ou, Z. W.; Tong, H.; Kou, F. F.; Ding, G. Q.

    2016-04-01

    Eight pulsars have low braking indices, which challenge the magnetic dipole braking model of pulsars. 222 pulsars and 15 magnetars show an anomalous distribution of frequency second derivatives, which also contradicts the classical understanding. How neutron star magnetospheric activities affect these two phenomena is investigated using the wind braking model of pulsars, based on the observational evidence that pulsar timing is correlated with emission and that both aspects reflect the magnetospheric activities. Fluctuations are unavoidable in a physical neutron star magnetosphere. Young pulsars have meaningful braking indices, while for old pulsars and magnetars the fluctuation term dominates the frequency second derivative. The model explains both the braking indices and the frequency second derivatives of pulsars uniformly. The braking indices of the eight pulsars are the combined effect of magnetic dipole radiation and the particle wind. During the lifetime of a pulsar, its braking index will evolve from three to one. Pulsars with low braking indices may put strong constraints on the particle acceleration process in the neutron star magnetosphere. The effect of pulsar death should be considered during the long-term rotational evolution of pulsars. An equation like the Langevin equation for Brownian motion is derived for pulsar spin-down. The fluctuation in the neutron star magnetosphere can be either periodic or random; both result in an anomalous frequency second derivative, with similar outcomes. The magnetospheric activities of magnetars are always stronger than those of normal pulsars.
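The braking index discussed in the abstract is defined from the rotation frequency and its first two derivatives. A minimal sketch, with illustrative Crab-like numbers rather than the paper's data:

```python
def braking_index(nu, nudot, nuddot):
    """n = nu * nuddot / nudot**2; n = 3 for pure magnetic dipole
    braking, n = 1 for a pure particle wind."""
    return nu * nuddot / nudot**2

# Illustrative Crab-like values (Hz, Hz/s, Hz/s^2), not from the paper
n = braking_index(nu=29.95, nudot=-3.77e-10, nuddot=1.18e-20)
print(round(n, 2))  # -> 2.49
```

A value between 1 and 3, as here, is consistent with the combined dipole-plus-wind braking the paper describes.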

  18. A Derivation of the Dick Effect from Control-Loop Models for Periodically Interrogated Passive Frequency Standards

    NASA Technical Reports Server (NTRS)

    Greenhall, Charles A.

    1996-01-01

    The phase of a frequency standard that uses periodic interrogation and control of a local oscillator (LO) is degraded by a long-term random-walk component induced by downconversion of LO noise into the loop passband. The Dick formula for the noise level of this degradation can be derived from explicit solutions of two LO control-loop models. A summary of the derivations is given here.

  19. Sampling errors for satellite-derived tropical rainfall - Monte Carlo study using a space-time stochastic model

    NASA Technical Reports Server (NTRS)

    Bell, Thomas L.; Abdullah, A.; Martin, Russell L.; North, Gerald R.

    1990-01-01

    Estimates of monthly average rainfall based on satellite observations from a low earth orbit will differ from the true monthly average because the satellite observes a given area only intermittently. This sampling error inherent in satellite monitoring of rainfall would occur even if the satellite instruments could measure rainfall perfectly. The size of this error is estimated for a satellite system being studied at NASA, the Tropical Rainfall Measuring Mission (TRMM). First, the statistical description of rainfall on scales from 1 to 1000 km is examined in detail, based on rainfall data from the Global Atmospheric Research Project Atlantic Tropical Experiment (GATE). A TRMM-like satellite is flown over a two-dimensional time-evolving simulation of rainfall using a stochastic model with statistics tuned to agree with GATE statistics. The distribution of sampling errors found from many months of simulated observations is found to be nearly normal, even though the distribution of area-averaged rainfall is far from normal. For a range of orbits likely to be employed in TRMM, sampling error is found to be less than 10 percent of the mean for rainfall averaged over a 500 x 500 sq km area.
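The Monte Carlo idea behind the study can be illustrated with a toy calculation: sample a synthetic rain time series at a fixed revisit interval and compare the sparse-sample mean with the full mean. The field below is a simple random series, not the GATE-tuned space-time stochastic model of the paper:

```python
import random

random.seed(0)
HOURS = 720                      # one month of hourly "true" area-average rain
truth = [max(0.0, random.gauss(0.2, 0.5)) for _ in range(HOURS)]
true_mean = sum(truth) / HOURS

REVISIT = 12                     # the satellite sees the area every ~12 h
errors = []
for offset in range(REVISIT):    # Monte Carlo over overpass phasing
    samples = truth[offset::REVISIT]
    est = sum(samples) / len(samples)
    errors.append(est - true_mean)

# RMS sampling error of the monthly mean, relative to the true mean
rms = (sum(e * e for e in errors) / len(errors)) ** 0.5
print(f"relative sampling error: {rms / true_mean:.1%}")
```

Even this crude version shows the paper's central point: the error comes purely from intermittent viewing, not from any measurement imperfection.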

  20. Large Scale Parameter Estimation Problems in Frequency-Domain Elastodynamics Using an Error in Constitutive Equation Functional

    PubMed Central

    Banerjee, Biswanath; Walsh, Timothy F.; Aquino, Wilkins; Bonnet, Marc

    2012-01-01

    This paper presents the formulation and implementation of an Error in Constitutive Equations (ECE) method suitable for large-scale inverse identification of linear elastic material properties in the context of steady-state elastodynamics. In ECE-based methods, the inverse problem is postulated as an optimization problem in which the cost functional measures the discrepancy in the constitutive equations that connect kinematically admissible strains and dynamically admissible stresses. Furthermore, in a more recent modality of this methodology introduced by Feissel and Allix (2007), referred to as the Modified ECE (MECE), the measured data is incorporated into the formulation as a quadratic penalty term. We show that a simple and efficient continuation scheme for the penalty term, suggested by the theory of quadratic penalty methods, can significantly accelerate the convergence of the MECE algorithm. Furthermore, a (block) successive over-relaxation (SOR) technique is introduced, enabling the use of existing parallel finite element codes with minimal modification to solve the coupled system of equations that arises from the optimality conditions in MECE methods. Our numerical results demonstrate that the proposed methodology can successfully reconstruct the spatial distribution of elastic material parameters from partial and noisy measurements in as few as ten iterations in a 2D example and fifty in a 3D example. We show (through numerical experiments) that the proposed continuation scheme can improve the rate of convergence of MECE methods by at least an order of magnitude versus the alternative of using a fixed penalty parameter. Furthermore, the proposed block SOR strategy coupled with existing parallel solvers produces a computationally efficient MECE method that can be used for large scale materials identification problems, as demonstrated on a 3D example involving about 400,000 unknown moduli. Finally, our numerical results suggest that the proposed MECE

  1. A Derivation of the Long-Term Degradation of a Pulsed Atomic Frequency Standard from a Control-Loop Model

    NASA Technical Reports Server (NTRS)

    Greenhall, C. A.

    1996-01-01

    The phase of a frequency standard that uses periodic interrogation and control of a local oscillator (LO) is degraded by a long-term random-walk component induced by downconversion of LO noise into the loop passband. The Dick formula for the noise level of this degradation is derived from an explicit solution of an LO control-loop model.
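For reference, the Dick formula mentioned in the abstract is commonly written in the general Dick-effect literature (this form is not reproduced from the report itself) with $g(t)$ the loop's sensitivity function over the interrogation cycle time $T_c$, $g_0$ its cycle average, $g_{cm}$ and $g_{sm}$ its cosine and sine Fourier coefficients, and $S_y^{\mathrm{LO}}$ the one-sided power spectral density of the LO's fractional frequency noise:

```latex
\sigma_y^2(\tau) \;=\; \frac{1}{\tau}
  \sum_{m=1}^{\infty} \frac{g_{cm}^2 + g_{sm}^2}{g_0^2}\,
  S_y^{\mathrm{LO}}\!\left(\frac{m}{T_c}\right)
```

The $1/\tau$ dependence of this Allan variance is the white-frequency (random-walk phase) degradation the abstract describes.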

  2. Error in Radar-Derived Soil Moisture due to Roughness Parameterization: An Analysis Based on Synthetical Surface Profiles

    PubMed Central

    Lievens, Hans; Vernieuwe, Hilde; Álvarez-Mozos, Jesús; De Baets, Bernard; Verhoest, Niko E.C.

    2009-01-01

    In the past decades, many studies on soil moisture retrieval from SAR demonstrated a poor correlation between the top-layer soil moisture content and observed backscatter coefficients, which has mainly been attributed to difficulties involved in the parameterization of surface roughness. The present paper describes a theoretical study, performed on synthetic surface profiles, which investigates how errors in roughness parameters are introduced by standard measurement techniques, and how they propagate through the commonly used Integral Equation Model (IEM) into a corresponding soil moisture retrieval error for some of the currently most used SAR configurations. Key aspects influencing the error in the roughness parameterization, and consequently in soil moisture retrieval, are: the length of the surface profile, the number of profile measurements, the horizontal and vertical accuracy of profile measurements, and the removal of trends along profiles. Moreover, it is found that soil moisture retrieval with a C-band configuration is generally less sensitive to inaccuracies in roughness parameterization than retrieval with an L-band configuration. PMID:22399956

  3. Programming Errors in APL.

    ERIC Educational Resources Information Center

    Kearsley, Greg P.

    This paper discusses and provides some preliminary data on errors in APL programming. Data were obtained by analyzing listings of 148 complete and partial APL sessions collected from student terminal rooms at the University of Alberta. Frequencies of errors for the various error messages are tabulated. The data, however, are limited because they…

  4. Reducing epistemic errors in water quality modelling through high-frequency data and stakeholder collaboration: the case of an industrial spill

    NASA Astrophysics Data System (ADS)

    Krueger, Tobias; Inman, Alex; Paling, Nick

    2014-05-01

    Catchment management, as driven by legislation such as the EU WFD or grassroots initiatives, requires the apportionment of in-stream pollution to point and diffuse sources so that mitigation measures can be targeted and costs and benefits shared. Source apportionment is typically done via modelling. Given model imperfections and input data errors, it has become state-of-the-art to employ an uncertainty framework. However, what is not easily incorporated in such a framework, and currently much discussed in hydrology, are epistemic uncertainties, i.e. those uncertainties that relate to lack of knowledge about processes and data. For example, what if an otherwise negligible source suddenly matters because of an accidental pollution incident? In this paper we present such a case of epistemic error, an industrial spill ignored in a water quality model, demonstrate the bias of the resulting model simulations, and show how the error was discovered somewhat incidentally by auxiliary high-frequency data and finally corrected through the collective intelligence of a stakeholder network. We suggest that accidental pollution incidents like this are a widespread, though largely ignored, problem. Hence our discussion will reflect on the practice of catchment monitoring, modelling and management in general. The case itself occurred as part of ongoing modelling support in the Tamar catchment, one of the priority catchments of the UK government's new, more decentralised and collaborative approach to managing water resources. An Extended Export Coefficient Model (ECM+) had been developed with stakeholders to simulate transfers of nutrients (N & P), sediment and Faecal Coliforms from land to water and down the river network as a function of sewage treatment options, land use, livestock densities and farm management practices.
In the process of updating the model for the hydrological years 2008-2012 an over-prediction of the annual average P concentration by the model was found at

  5. An analysis of the effects of secondary reflections on dual-frequency reflectometers

    NASA Technical Reports Server (NTRS)

    Hearn, C. P.; Cockrell, C. R.; Harrah, S. D.

    1990-01-01

    The error-producing mechanism involving secondary reflections in a dual-frequency, distance measuring reflectometer is examined analytically. Equations defining the phase, and hence distance, error are derived. The error-reducing potential of frequency-sweeping is demonstrated. It is shown that a single spurious return can be completely nullified by optimizing the sweep width.
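The ranging principle behind a dual-frequency reflectometer can be sketched as follows: range follows from the round-trip phase difference between the two tones, d = c*dphi/(4*pi*df), so any spurious phase contribution from a secondary reflection maps directly into a distance error. The numbers below are illustrative, not from the report:

```python
import math

C = 299_792_458.0                    # speed of light, m/s

def distance(dphi_rad, delta_f_hz):
    """Range from the round-trip phase difference between two tones."""
    return C * dphi_rad / (4 * math.pi * delta_f_hz)

df = 100e6                           # 100 MHz tone separation
d_true = 15.0                        # metres to the target
dphi_true = 4 * math.pi * df * d_true / C

# a secondary reflection perturbs the measured phase by, say, 10 mrad
dphi_meas = dphi_true + 0.01
err = distance(dphi_meas, df) - d_true
print(f"distance error: {err * 1000:.1f} mm")  # -> distance error: 2.4 mm
```

Sweeping the frequency separation, as the report proposes, averages this phase perturbation over many values of df, which is why a single spurious return can be nullified.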

  6. Errors of Omission in English-Speaking Children's Production of Plurals and the Past Tense: The Effects of Frequency, Phonology, and Competition.

    PubMed

    Matthews, Danielle E; Theakston, Anna L

    2006-11-12

    How do English-speaking children inflect nouns for plurality and verbs for the past tense? We assess theoretical answers to this question by considering errors of omission, which occur when children produce a stem in place of its inflected counterpart (e.g., saying "dress" to refer to 5 dresses). A total of 307 children (aged 3;11-9;9) participated in 3 inflection studies. In Study 1, we show that errors of omission occur until the age of 7 and are more likely with both sibilant regular nouns (e.g., dress) and irregular nouns (e.g., man) than regular nouns (e.g., dog). Sibilant nouns are more likely to be inflected if they are high frequency. In Studies 2 and 3, we show that similar effects apply to the inflection of verbs and that there is an advantage for "regular-like" irregulars whose inflected form, but not stem form, ends in d/t. The results imply that (a) stems and inflected forms compete for production and (b) children generalize both product-oriented and source-oriented schemas when learning about inflectional morphology.

  7. The Importance of Measurement Errors for Deriving Accurate Reference Leaf Area Index Maps for Validation of Moderate-Resolution Satellite LAI Products

    NASA Technical Reports Server (NTRS)

    Huang, Dong; Yang, Wenze; Tan, Bin; Rautiainen, Miina; Zhang, Ping; Hu, Jiannan; Shabanov, Nikolay V.; Linder, Sune; Knyazikhin, Yuri; Myneni, Ranga B.

    2006-01-01

    The validation of moderate-resolution satellite leaf area index (LAI) products such as those operationally generated from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor data requires reference LAI maps developed from field LAI measurements and fine-resolution satellite data. Errors in field measurements and satellite data determine the accuracy of the reference LAI maps. This paper describes a method by which reference maps of known accuracy can be generated with knowledge of errors in fine-resolution satellite data. The method is demonstrated with data from an international field campaign in a boreal coniferous forest in northern Sweden, and Enhanced Thematic Mapper Plus images. The reference LAI map thus generated is used to assess modifications to the MODIS LAI/fPAR algorithm recently implemented to derive the next generation of the MODIS LAI/fPAR product for this important biome type.

  8. Derivation and Application of a Global Albedo yielding an Optical Brightness To Physical Size Transformation Free of Systematic Errors

    NASA Technical Reports Server (NTRS)

    Mulrooney, Dr. Mark K.; Matney, Dr. Mark J.

    2007-01-01

    Orbital object data acquired via optical telescopes can play a crucial role in accurately defining the space environment. Radar systems probe the characteristics of small debris by measuring the reflected electromagnetic energy from an object of the same order of size as the wavelength of the radiation. This signal is affected by electrical conductivity of the bulk of the debris object, as well as its shape and orientation. Optical measurements use reflected solar radiation with wavelengths much smaller than the size of the objects. Just as with radar, the shape and orientation of an object are important, but we only need to consider the surface electrical properties of the debris material (i.e., the surface albedo), not the bulk electromagnetic properties. As a result, these two methods are complementary in that they measure somewhat independent physical properties to estimate the same thing, debris size. Short arc optical observations such as are typical of NASA's Liquid Mirror Telescope (LMT) give enough information to estimate an Assumed Circular Orbit (ACO) and an associated range. This information, combined with the apparent magnitude, can be used to estimate an "absolute" brightness (scaled to a fixed range and phase angle). This absolute magnitude is what is used to estimate debris size. However, the shape and surface albedo effects make the size estimates subject to systematic and random errors, such that it is impossible to ascertain the size of an individual object with any certainty. However, as has been shown with radar debris measurements, that does not preclude the ability to estimate the size distribution of a number of objects statistically. After systematic errors have been eliminated (range errors, phase function assumptions, photometry) there remains a random geometric albedo distribution that relates object size to absolute magnitude. Measurements by the LMT of a subset of tracked debris objects with sizes estimated from their radar cross

  9. Design and analysis of tilt integral derivative controller with filter for load frequency control of multi-area interconnected power systems.

    PubMed

    Kumar Sahu, Rabindra; Panda, Sidhartha; Biswal, Ashutosh; Chandra Sekhar, G T

    2016-03-01

    In this paper, a novel Tilt Integral Derivative controller with Filter (TIDF) is proposed for Load Frequency Control (LFC) of multi-area power systems. Initially, a two-area power system is considered and the parameters of the TIDF controller are optimized using the Differential Evolution (DE) algorithm, employing an Integral of Time multiplied Absolute Error (ITAE) criterion. The superiority of the proposed approach is demonstrated by comparing the results with some recently published heuristic approaches such as Firefly Algorithm (FA), Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) optimized PID controllers for the same interconnected power system. Investigations reveal that the proposed TIDF controllers provide better dynamic response than PID controllers in terms of minimum undershoots and settling times of frequency as well as tie-line power deviations following a disturbance. The proposed approach is also extended to two widely used three-area test systems considering nonlinearities such as Generation Rate Constraint (GRC) and Governor Dead Band (GDB). To improve the performance of the system, a Thyristor Controlled Series Compensator (TCSC) is also considered and the performance of the TIDF controller in the presence of TCSC is investigated. It is observed that system performance improves with the inclusion of TCSC. Finally, sensitivity analysis is carried out to test the robustness of the proposed controller by varying the system parameters, operating condition and load pattern. It is observed that the proposed controllers are robust and perform satisfactorily with variations in operating condition, system parameters and load pattern.
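The ITAE criterion used to tune the controller is the integral of time multiplied by absolute error. A minimal sketch over a sampled frequency-deviation signal (the damped oscillation below is synthetic, standing in for a simulated frequency deviation after a load disturbance):

```python
import math

def itae(t, e):
    """Trapezoidal approximation of the integral of t * |e(t)| dt."""
    total = 0.0
    for i in range(1, len(t)):
        f0 = t[i - 1] * abs(e[i - 1])
        f1 = t[i] * abs(e[i])
        total += 0.5 * (f0 + f1) * (t[i] - t[i - 1])
    return total

dt = 0.01
ts = [i * dt for i in range(2000)]                       # 0..20 s
es = [math.exp(-0.5 * x) * math.cos(3 * x) for x in ts]  # damped deviation
print(itae(ts, es))
```

Because the integrand is weighted by time, ITAE penalizes slowly decaying oscillations more than early transients, which is why minimizing it favours short settling times, as reported in the paper.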

  10. TOA/FOA geolocation error analysis.

    SciTech Connect

    Mason, John Jeffrey

    2008-08-01

    This paper describes how confidence intervals can be calculated for radiofrequency emitter position estimates based on time-of-arrival and frequency-of-arrival measurements taken at several satellites. These confidence intervals take the form of 50th and 95th percentile circles and ellipses to convey horizontal error and linear intervals to give vertical error. We consider both cases where an assumed altitude is and is not used. Analysis of velocity errors is also considered. We derive confidence intervals for horizontal velocity magnitude and direction including the case where the emitter velocity is assumed to be purely horizontal, i.e., parallel to the ellipsoid. Additionally, we derive an algorithm that we use to combine multiple position fixes to reduce location error. The algorithm uses all available data, after more than one location estimate for an emitter has been made, in a mathematically optimal way.
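The 50th and 95th percentile horizontal-error circles can be estimated empirically from Monte Carlo position fixes. The sketch below assumes an illustrative anisotropic horizontal error (standard deviations 3 km and 1.5 km), not the paper's satellite geometry:

```python
import math
import random

random.seed(1)
sx, sy, N = 3.0, 1.5, 20_000        # km; anisotropic horizontal error
# radial miss distances of N simulated position fixes, sorted ascending
r = sorted(math.hypot(random.gauss(0, sx), random.gauss(0, sy))
           for _ in range(N))

cep50 = r[int(0.50 * N)]            # 50th percentile circle (CEP)
r95   = r[int(0.95 * N)]            # 95th percentile circle
print(f"CEP50 ~ {cep50:.2f} km, R95 ~ {r95:.2f} km")
```

When the error ellipse is strongly elongated, a single circle understates the directional structure, which is why the paper reports ellipses as well as circles.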

  11. Calculation of Frequency-Dependent Polarizabilities and Hyperpolarizabilities Based on the Quasienergy Derivative Method (Is the Numerical Approach Impossible?)

    SciTech Connect

    Sasagane, Kotoku

    2008-09-17

    The essence of the quasienergy derivative (QED) method and calculations of the frequency-dependent hyperpolarizabilities based on the QED method will be presented in the first half. Our recent development of, and some possibilities concerning, the QED method will be explained in the second half. At the end of the lecture, we investigate whether the extension of the QED method to the numerical approach is possible.

  12. Statistical Analysis of Instantaneous Frequency Scaling Factor as Derived from Optical Disdrometer Measurements at V/W Bands

    NASA Technical Reports Server (NTRS)

    Zemba, Michael; Nessel, James; Tarasenko, Nicholas; Lane, Steven

    2017-01-01

    Since October 2015, NASA Glenn Research Center (GRC) and the Air Force Research Laboratory (AFRL) have collaboratively operated an RF terrestrial link in Albuquerque, New Mexico to characterize atmospheric propagation phenomena at 72 and 84 GHz. The W/V-band Terrestrial Link Experiment (WTLE) consists of coherent transmitters at each frequency on the crest of the Sandia Mountains and a corresponding pair of receivers in south Albuquerque. The beacon receivers provide a direct measurement of the link attenuation, while concurrent weather instrumentation provides a measurement of the atmospheric conditions. Among the available weather instruments is an optical disdrometer which yields an optical measurement of rain rate, as well as droplet size and velocity distributions (DSD, DVD). In particular, the DSD can be used to derive an instantaneous scaling factor (ISF) by which the measured data at one frequency can be scaled to another - for example, scaling the 72 GHz to an expected 84 GHz timeseries. Given the availability of both the DSD prediction and the directly observed 84 GHz attenuation, WTLE is thus uniquely able to assess DSD-derived instantaneous frequency scaling at the V/W-bands. Previous work along these lines has investigated the DSD-derived ISF at Ka and Q-band (20 GHz to 40 GHz) using a satellite beacon receiver experiment in Milan, Italy. This work will expand the investigation to terrestrial links in the V/W-bands, where the frequency scaling factor is lower and where the link is also much more sensitive to attenuation by rain, clouds, and other atmospheric effects.

  13. Applying GOES-derived fog frequency indices to water balance modeling for the Russian River Watershed, California

    NASA Astrophysics Data System (ADS)

    Torregrosa, A.; Flint, L. E.; Flint, A. L.; Peters, J.; Combs, C.

    2014-12-01

    Coastal fog modifies the hydrodynamic and thermodynamic properties of California watersheds, with the greatest impact on ecosystem functioning during arid summer months. Lowered maximum temperatures resulting from inland penetration of marine fog are probably adequate to capture fog effects on thermal land surface characteristics; however, the hydrologic impact of lowered rates of evapotranspiration due to shade, fog drip, increased relative humidity, and other factors associated with fog events is more difficult to gauge. Fog products, such as those derived from National Weather Service Geostationary Operational Environmental Satellite (GOES) imagery, provide high-frequency (up to 15 min) views of fog and low cloud cover and can potentially improve water balance models. Even slight improvements in water balance calculations can benefit urban water managers and agricultural irrigation. The high frequency of GOES output provides the opportunity to explore options for integrating fog frequency data into water balance models. This pilot project compares GOES-derived fog frequency intervals (6, 12 and 24 hour) to determine which is most useful for water balance models and to develop model-relevant relationships between climatic and water balance variables. Seasonal diurnal thermal differences, plant ecophysiological processes, and phenology suggest that a day/night differentiation on a monthly basis may be adequate. To explore this hypothesis, we examined discharge data from stream gages and outputs from the USGS Basin Characterization Model for runoff, recharge, potential evapotranspiration, and actual evapotranspiration for the Russian River Watershed under low, medium, and high fog event conditions derived from hourly GOES imagery (1999-2009). We also differentiated fog events into daytime and nighttime versus a 24-hour compilation on a daily, monthly, and seasonal basis.
Our data suggest that a daily time-step is required to adequately incorporate the hydrologic effect of

  14. Frequency-Dependent Material Damping Using Augmenting Thermodynamic Fields (ATF) with Fractional Time Derivatives

    DTIC Science & Technology

    1990-09-01

    Over a broad frequency range, coupled material constitutive relations are developed using the concept of augmenting thermodynamic fields (ATF) with non-integer (fractional) time derivatives. Material parameters relate the affinity to the augmenting thermodynamic field and provide coupling between two ATFs. For an elastic material, the relation of stress to strain is defined by Hooke's law, σ = Eε, where E is called the modulus of elasticity.

  15. Mapping the montane cloud forest of Taiwan using 12 year MODIS-derived ground fog frequency data

    PubMed Central

    Schulz, Hans Martin; Li, Ching-Feng; Thies, Boris; Chang, Shih-Chieh; Bendix, Jörg

    2017-01-01

    Up until now montane cloud forest (MCF) in Taiwan has only been mapped for selected areas of vegetation plots. This paper presents the first comprehensive map of MCF distribution for the entire island. For its creation, a Random Forest model was trained with vegetation plots from the National Vegetation Database of Taiwan that were classified as “MCF” or “non-MCF”. This model predicted the distribution of MCF from a raster data set of parameters derived from a digital elevation model (DEM), Landsat channels and texture measures derived from them as well as ground fog frequency data derived from the Moderate Resolution Imaging Spectroradiometer. While the DEM parameters and Landsat data predicted much of the cloud forest’s location, local deviations in the altitudinal distribution of MCF linked to the monsoonal influence as well as the Massenerhebung effect (causing MCF in atypically low altitudes) were only captured once fog frequency data was included. Therefore, our study suggests that ground fog data are most useful for accurately mapping MCF. PMID:28245279

  16. Mapping the montane cloud forest of Taiwan using 12 year MODIS-derived ground fog frequency data.

    PubMed

    Schulz, Hans Martin; Li, Ching-Feng; Thies, Boris; Chang, Shih-Chieh; Bendix, Jörg

    2017-01-01

    Up until now montane cloud forest (MCF) in Taiwan has only been mapped for selected areas of vegetation plots. This paper presents the first comprehensive map of MCF distribution for the entire island. For its creation, a Random Forest model was trained with vegetation plots from the National Vegetation Database of Taiwan that were classified as "MCF" or "non-MCF". This model predicted the distribution of MCF from a raster data set of parameters derived from a digital elevation model (DEM), Landsat channels and texture measures derived from them as well as ground fog frequency data derived from the Moderate Resolution Imaging Spectroradiometer. While the DEM parameters and Landsat data predicted much of the cloud forest's location, local deviations in the altitudinal distribution of MCF linked to the monsoonal influence as well as the Massenerhebung effect (causing MCF in atypically low altitudes) were only captured once fog frequency data was included. Therefore, our study suggests that ground fog data are most useful for accurately mapping MCF.
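The classification step described in the abstract can be sketched with scikit-learn's RandomForestClassifier. The predictors and labelling rule below are synthetic stand-ins for the actual DEM/Landsat/MODIS fog-frequency stack, and the example assumes scikit-learn is installed:

```python
import random
from sklearn.ensemble import RandomForestClassifier

random.seed(0)
X, y = [], []
for _ in range(400):
    elev = random.uniform(0, 3500)   # m; stand-in for the DEM predictor
    fog = random.uniform(0, 1)       # stand-in for MODIS ground fog frequency
    # toy labelling rule: cloud forest at mid elevations with frequent fog
    label = 1 if (1500 < elev < 2500 and fog > 0.4) else 0
    X.append([elev, fog])
    y.append(label)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(f"training accuracy: {clf.score(X, y):.2f}")
```

In the real study the model is trained on plots labelled "MCF"/"non-MCF" and then predicts over the full raster stack; the paper's key finding is that adding the fog-frequency predictor captures deviations (monsoon influence, Massenerhebung effect) that DEM and Landsat predictors alone miss.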

  17. CORRELATED ERRORS IN EARTH POINTING MISSIONS

    NASA Technical Reports Server (NTRS)

    Bilanow, Steve; Patt, Frederick S.

    2005-01-01

    Two different Earth-pointing missions dealing with attitude control and dynamics changes illustrate concerns with correlated error sources and coupled effects that can occur. On the OrbView-2 (OV-2) spacecraft, the assumption of a nearly-inertially-fixed momentum axis was called into question when a residual dipole bias apparently changed magnitude. The possibility that alignment adjustments and/or sensor calibration errors may compensate for actual motions of the spacecraft is discussed, and uncertainties in the dynamics are considered. Particular consideration is given to basic orbit frequency and twice orbit frequency effects and their high correlation over the short science observation data span. On the Tropical Rainfall Measuring Mission (TRMM) spacecraft, the switch to a contingency Kalman filter control mode created changes in the pointing error patterns. Results from independent checks on the TRMM attitude using science instrument data are reported, and bias shifts and error correlations are discussed. Various orbit frequency effects are common with the flight geometry for Earth pointing instruments. In both dual-spin momentum stabilized spacecraft (like OV-2) and three axis stabilized spacecraft with gyros (like TRMM under Kalman filter control), changes in the initial attitude state propagate into orbit frequency variations in attitude and some sensor measurements. At the same time, orbit frequency measurement effects can arise from dynamics assumptions, environment variations, attitude sensor calibrations, or ephemeris errors. Also, constant environment torques for dual spin spacecraft have similar effects to gyro biases on three axis stabilized spacecraft, effectively shifting the one-revolution-per-orbit (1-RPO) body rotation axis. Highly correlated effects can create a risk for estimation errors particularly when a mission switches an operating mode or changes its normal flight environment. 
Some error effects will not be obvious from attitude sensor

  18. DERIVATION OF THE MAGNETIC FIELD IN A CORONAL MASS EJECTION CORE VIA MULTI-FREQUENCY RADIO IMAGING

    SciTech Connect

    Tun, Samuel D.; Vourlidas, A.

    2013-04-01

    The magnetic field within the core of a coronal mass ejection (CME) on 2010 August 14 is derived from analysis of multi-wavelength radio imaging data. This CME's core was found to be the source of a moving type IV radio burst, whose emission is here determined to arise from the gyrosynchrotron process. The CME core's true trajectory, electron density, and line-of-sight depth are derived from stereoscopic observations, constraining these parameters in the radio emission models. We find that the CME carries a substantial amount of mildly relativistic electrons (E < 100 keV) in a strong magnetic field (B < 15 G), and that the spectra at lower heights are preferentially suppressed at lower frequencies through absorption from thermal electrons. We discuss the results in light of previous moving type IV burst studies, and outline a plan for the eventual use of radio methods for CME magnetic field diagnostics.

  19. Determination of lateral-stability derivatives and transfer-function coefficients from frequency-response data for lateral motions

    NASA Technical Reports Server (NTRS)

    Donegan, James J.; Robinson, Samuel W., Jr.; Gates, Ordway B., Jr.

    1955-01-01

    A method is presented for determining the lateral-stability derivatives, transfer-function coefficients, and the modes for lateral motion from frequency-response data for a rigid aircraft. The method is based on the application of the vector technique to the equations of lateral motion, so that the three equations of lateral motion can be separated into six equations. The method of least squares is then applied to the data for each of these equations to yield the coefficients of the equations of lateral motion from which the lateral-stability derivatives and lateral transfer-function coefficients are computed. Two numerical examples are given to demonstrate the use of the method.
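The least-squares step the abstract describes can be illustrated with a toy problem: for a hypothetical first-order mode H(s) = K/(s + a), the frequency-response data satisfy K − aH(jω) = jωH(jω), which is linear in the unknown coefficients, so stacking real and imaginary parts gives an ordinary least-squares problem. This is a minimal sketch, not the paper's six-equation lateral-motion formulation; all values are assumed.

```python
import numpy as np

# Hypothetical first-order mode H(s) = K / (s + a) with true K = 2.0, a = 0.5
K_true, a_true = 2.0, 0.5
w = np.linspace(0.1, 10, 50)              # frequencies, rad/s
H = K_true / (1j * w + a_true)            # "measured" frequency response

# Rearranged model: K - a*H(jw) = jw*H(jw), linear in (K, a).
# Stack real and imaginary parts into one real least-squares system.
A = np.vstack([np.column_stack([np.ones_like(w), -H.real]),
               np.column_stack([np.zeros_like(w), -H.imag])])
y = np.concatenate([(1j * w * H).real, (1j * w * H).imag])
K_est, a_est = np.linalg.lstsq(A, y, rcond=None)[0]
```

With noise-free data the recovery is exact; with flight-test data the least-squares residuals indicate how well the assumed model order fits.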

  20. The Use of Multi-Sensor Quantitative Precipitation Estimates for Deriving Extreme Precipitation Frequencies with Application in Louisiana

    NASA Astrophysics Data System (ADS)

    El-Dardiry, Hisham Abd El-Kareem

    The Radar-based Quantitative Precipitation Estimates (QPE) are among the NEXRAD products available at high temporal and spatial resolution compared with gauges. Radar-based QPEs have been widely used in many hydrological and meteorological applications; however, few studies have focused on using radar QPE products in deriving Precipitation Frequency Estimates (PFE). Accurate and regionally specific information on PFE is critically needed for various water resources engineering planning and design purposes. This study focused first on examining the data quality of two main radar products, the near-real-time Stage IV QPE product and the post-real-time RFC/MPE product. Assessment of the Stage IV product showed some alarming data artifacts that contaminate the identification of rainfall maxima. Based on the inter-comparison analysis of the two products, Stage IV and RFC/MPE, the latter was selected for the frequency analysis carried out throughout the study. The precipitation frequency analysis approach used in this study is based on fitting a Generalized Extreme Value (GEV) distribution, as a statistical model for extreme rainfall, to Annual Maximum Series (AMS) extracted from 11 years (2002-2012) over a domain covering Louisiana. The parameters of the GEV model are estimated using the method of L-moments. Two different approaches are suggested for estimating the precipitation frequencies: a pixel-based approach, in which PFEs are estimated at each individual pixel, and a region-based approach, in which a synthetic sample is generated at each pixel by using observations from surrounding pixels. The region-based technique outperforms the pixel-based estimation when compared with results obtained by NOAA Atlas 14; however, the availability of only a short record of observations and the underestimation of radar QPE for some extremes cause considerable reductions in precipitation frequencies in pixel-based and region
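The fitting procedure described (a GEV distribution fitted to an 11-yr annual maximum series, then inverted for a return level) can be sketched as follows. Note the sketch uses scipy's maximum-likelihood fit rather than the study's L-moments, and the AMS values are synthetic, not radar data.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(0)
# Hypothetical annual-maximum series (mm), 11 values as in the 2002-2012 record
ams = genextreme.rvs(c=-0.1, loc=50, scale=12, size=11, random_state=rng)

# Fit the GEV to the AMS (scipy uses maximum likelihood; the study used L-moments)
c, loc, scale = genextreme.fit(ams)

# 100-yr precipitation frequency estimate: the level exceeded with probability 1/100 per year
pfe_100 = genextreme.isf(1.0 / 100, c, loc, scale)
```

With only 11 years of data the fitted parameters carry large sampling uncertainty, which is one motivation for the region-based pooling the abstract describes.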

  1. Technical note: Calculation of standard errors of estimates of genetic parameters with the multiple-trait derivative-free restricted maximum likelihood programs.

    PubMed

    Kachman, S D; Van Vleck, L D

    2007-10-01

    The multiple-trait derivative-free REML set of programs was written to handle partially missing data for multiple-trait analyses as well as single-trait models. Standard errors of genetic parameters were reported for univariate models and for multiple-trait analyses only when all traits were measured on animals with records. In addition to estimating (co)variance components for multiple-trait models with partially missing data, this paper shows how the multiple-trait derivative-free REML set of programs can also estimate SE by augmenting the data file when not all animals have all traits measured. Although the standard practice has been to eliminate records with partially missing data, that practice uses only a subset of the available data. In some situations, the elimination of partial records can result in elimination of all the records, such as one trait measured in one environment and a second trait measured in a different environment. An alternative approach requiring minor modifications of the original data and model was developed that provides estimates of the SE using an augmented data set that gives the same residual log likelihood as the original data for multiple-trait analyses when not all traits are measured. Because the same residual vector is used for the original data and the augmented data, the resulting REML estimators along with their sampling properties are identical for the original and augmented data, so that SE for estimates of genetic parameters can be calculated.

  2. Shallow water sediment properties derived from high-frequency shear and interface waves

    NASA Astrophysics Data System (ADS)

    Ewing, John; Carter, Jerry A.; Sutton, George H.; Barstow, Noel

    1992-04-01

    Low-frequency sound propagation in shallow water environments is not restricted to the water column but also involves the subbottom. Thus, as well as being important for geophysical description of the seabed, subbottom velocity/attenuation structure is essential input for predictive propagation models. To estimate this structure, bottom-mounted sources and receivers were used to make measurements of shear and compressional wave propagation in shallow water sediments of the continental shelf, usually where boreholes and high-resolution reflection profiles give substantial supporting geologic information about the subsurface. This colocation provides an opportunity to compare seismically determined estimates of physical properties of the seabed with the "ground truth" properties. Measurements were made in 1986 with source/detector offsets up to 200 m producing shear wave velocity versus depth profiles of the upper 30-50 m of the seabed (and P wave profiles to lesser depths). Measurements in 1988 were made with smaller source devices designed to emphasize higher frequencies and recorded by an array of 30 sensors spaced at 1-m intervals to improve spatial sampling and resolution of shallow structure. These investigations with shear waves have shown that significant lateral and vertical variations in the physical properties of the shallow seabed are common and are principally created by erosional and depositional processes associated with glacial cycles and sea level oscillations during the Quaternary. When the seabed structure is relatively uniform over the length of the profiles, the shear wave fields are well ordered, and the matching of the data with full waveform synthetics has been successful, producing velocity/attenuation models consistent with the subsurface lithology indicated by coring results. Both body waves and interface waves have been modeled for velocity/attenuation as a function of depth with the aid of synthetic seismograms and other analytical

  3. Deriving Lifetime Maps in the Time/Frequency Domain of Coherent Structures in the Turbulent Boundary Layer

    NASA Technical Reports Server (NTRS)

    Palumbo, Dan

    2008-01-01

    The lifetimes of coherent structures are derived from data correlated over a 3-sensor array sampling streamwise sidewall pressure at high Reynolds number (>10^8). The data were acquired at subsonic, transonic and supersonic speeds aboard a Tupolev Tu-144. The lifetimes are computed from a variant of the correlation length termed the lifelength. Characteristic lifelengths are estimated by fitting a Gaussian distribution to the sensors' cross spectra and are shown to compare favorably with Efimtsov's prediction of correlation space scales. Lifelength distributions are computed in the time/frequency domain using an interval correlation technique on the continuous wavelet transform of the original time data. The median values of the lifelength distributions are found to be very close to the frequency averaged result. The interval correlation technique is shown to allow the retrieval and inspection of the original time data of each event in the lifelength distributions, thus providing a means to locate and study the nature of the coherent structure in the turbulent boundary layer. The lifelength data are converted to lifetimes using the convection velocity. The lifetime of events in the time/frequency domain are displayed in Lifetime Maps. The primary purpose of the paper is to validate these new analysis techniques so that they can be used with confidence to further characterize the behavior of coherent structures in the turbulent boundary layer.
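The Gaussian fit used to estimate characteristic lifelengths, and the conversion to lifetimes via the convection velocity, can be sketched as follows. The coherence values and convection velocity are assumed for illustration, not taken from the Tu-144 data.

```python
import numpy as np

# Coherence magnitude vs sensor separation at one frequency (synthetic values);
# fit gamma(xi) = exp(-(xi/Lc)**2) to estimate the characteristic lifelength Lc.
xi = np.array([0.0, 0.1, 0.2, 0.3])        # streamwise separation, m
gamma = np.exp(-(xi / 0.18) ** 2)          # synthetic data with true Lc = 0.18 m

# Linearize: ln(gamma) = -(xi/Lc)^2, so the slope of ln(gamma) vs xi^2 is -1/Lc^2
slope = np.polyfit(xi ** 2, np.log(gamma), 1)[0]
Lc = np.sqrt(-1.0 / slope)

# Convert lifelength to lifetime with an assumed convection velocity
Uc = 0.7 * 250.0          # ~70% of a 250 m/s flight speed (assumed)
lifetime = Lc / Uc        # seconds
```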

  4. Upper Ocean Salinity Stratification in the Tropics As Derived from the Buoyancy Frequency N2

    NASA Astrophysics Data System (ADS)

    Maes, C.; O'Kane, T.

    2014-12-01

    The idea that salinity contributes to ocean dynamics is simply common sense in physical oceanography. Along with temperature, salinity determines the ocean mass and hence, through geostrophy, influences ocean dynamics and currents. But, in the Tropics, salinity effects have generally been neglected. Nevertheless, observational studies of the western Pacific Ocean have suggested since the mid-1980s that the barrier layer resulting from the ocean salinity stratification within the mixed layer could influence significantly the ocean-atmosphere interactions. The present study aims to isolate the specific role of the salinity stratification in the layers above the main pycnocline by taking into account the respective thermal and saline dependencies in the Brunt-Vaisala frequency, N2. Results will show that the haline stabilizing effect may contribute 40-50% of N2 compared with the thermal stratification and, in some specific regions, exceeds it for a few months of the seasonal cycle. On the contrary, the centers of action of the subtropical gyres are characterized by the permanent absence of such an effect. The relationships between the stabilizing effect and the sea surface fields such as SSS and SST are shown to be well defined and quasilinear in the Tropics, providing some indication that in the future, analyses that consider both satellite surface salinity measurements at the surface and vertical profiles at depth will result in a better determination of the role of the salinity stratification in climate prediction systems.
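The thermal/haline decomposition of the Brunt-Vaisala frequency can be sketched with constant expansion and contraction coefficients, N² = g(α dT/dz − β dS/dz). This is a minimal illustration with assumed profile values; a full treatment would use the seawater equation of state.

```python
import numpy as np

g = 9.81          # gravity, m s^-2
alpha = 2.5e-4    # thermal expansion coefficient, 1/K (assumed constant)
beta = 7.5e-4     # haline contraction coefficient, 1/(g/kg) (assumed constant)

# Hypothetical warm-pool profiles: fresh, warm water overlying saltier water
z = np.array([0.0, -20.0, -40.0, -60.0, -80.0])   # height, m (positive up)
T = np.array([29.0, 28.8, 28.5, 27.0, 25.0])      # deg C
S = np.array([34.2, 34.4, 34.7, 35.0, 35.1])      # g/kg

dTdz = np.gradient(T, z)
dSdz = np.gradient(S, z)

N2_thermal = g * alpha * dTdz      # stabilizing when T decreases with depth
N2_haline = -g * beta * dSdz       # stabilizing when S increases with depth
N2 = N2_thermal + N2_haline
haline_fraction = N2_haline / N2   # share of the stratification due to salinity
```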

  5. Use of radar QPE for the derivation of Intensity-Duration-Frequency curves in a range of climatic regimes

    NASA Astrophysics Data System (ADS)

    Marra, Francesco; Morin, Efrat

    2015-12-01

    Intensity-Duration-Frequency (IDF) curves are widely used in flood risk management because they provide an easy link between the characteristics of a rainfall event and the probability of its occurrence. Weather radars provide distributed rainfall estimates with high spatial and temporal resolutions and overcome the scarce representativeness of point-based rainfall for regions characterized by large gradients in rainfall climatology. This work explores the use of radar quantitative precipitation estimation (QPE) for the identification of IDF curves over a region with steep climatic transitions (Israel) using a unique radar data record (23 yr) and combined physical and empirical adjustment of the radar data. IDF relationships were derived by fitting a generalized extreme value distribution to the annual maximum series for durations of 20 min, 1 h and 4 h. Arid, semi-arid and Mediterranean climates were explored using 14 study cases. IDF curves derived from the study rain gauges were compared to those derived from radar and from nearby rain gauges characterized by similar climatology, taking into account the uncertainty linked with the fitting technique. Radar annual maxima and IDF curves were generally overestimated but in 70% of the cases (60% for a 100 yr return period), they lay within the rain gauge IDF confidence intervals. Overestimation tended to increase with return period, and this effect was enhanced in arid climates. This was mainly associated with radar estimation uncertainty, even if other effects, such as rain gauge temporal resolution, cannot be neglected. Climatological classification remained meaningful for the analysis of rainfall extremes and radar was able to discern climatology from rainfall frequency analysis.
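Once GEV parameters have been fitted per duration, the IDF quantiles follow from the inverse survival function. A sketch with illustrative parameter values (not the study's radar-derived fits):

```python
import numpy as np
from scipy.stats import genextreme

# Hypothetical GEV parameters (shape, location, scale; intensities in mm/h)
# for the three study durations -- illustrative values only
params = {"20min": (-0.1, 30.0, 8.0),
          "1h":    (-0.1, 18.0, 5.0),
          "4h":    (-0.1, 7.0, 2.0)}
return_periods = np.array([2, 10, 25, 50, 100])   # years

# idf[dur][i] is the intensity exceeded on average once every return_periods[i] years
idf = {dur: genextreme.isf(1.0 / return_periods, c, loc, scale)
       for dur, (c, loc, scale) in params.items()}
```

Intensity increases with return period and decreases with duration, reproducing the familiar shape of IDF curves.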

  6. Middle atmosphere slant-path optical turbulence conditions derived from very high-frequency radar observations

    NASA Astrophysics Data System (ADS)

    Eaton, Frank D.; Nastrom, Gregory D.; Hansen, Anthony R.

    1999-02-01

    Slant-path calculations are shown of the transverse coherence length (r0), the isoplanatic angle (θ0), and the Rytov variance (σ²R), using a 6-yr data set of the refractive index structure parameter (Cn²) from 49.25-MHz radar observations at White Sands Missile Range, New Mexico. The calculations are for a spherical wave condition; a wavelength (λ) of electromagnetic radiation of 1 μm; four different elevation angles (3, 10, 30, and 60 deg); two path lengths (50 and 150 km); and a platform, such as an aircraft, at 12.5 km MSL (mean sea level). Over 281,000 radar-derived Cn² profiles sampled at 3-min intervals with 150-m height resolution are used for the calculations. The approach, an 'onion skin' model, assumes horizontal stationarity over each entire propagation path and is consistent with Taylor's hypothesis. The results show that refractivity turbulence effects are greatly reduced for the three propagation parameters (r0, θ0, and σ²R) as the elevation angle increases from 3 to 60 deg. A pronounced seasonal effect is seen in the same parameters, which is consistent with climatological variables and gravity wave activity. Interaction of the enhanced turbulence in the vicinity of the tropopause with the range weighting functions of each propagation parameter is evaluated. Results of a two-region model relating r0, θ0, and σ²R to wind speed at 5.6 km MSL are shown. This statistical model can be understood in terms of upward propagating gravity waves that are launched by strong winds over complex terrain.
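The propagation parameters can be computed from a Cn² profile by standard path integrals. The sketch below uses the plane-wave forms of the Fried parameter r0 and isoplanatic angle θ0 with an assumed Cn² profile; the study itself used spherical-wave weighting, which adds a normalized-range factor inside the integrals.

```python
import numpy as np

lam = 1e-6                   # wavelength, m (1 micrometre, as in the study)
k = 2 * np.pi / lam
zenith = np.deg2rad(60.0)    # 30-deg elevation angle -> 60-deg zenith angle
sec_z = 1.0 / np.cos(zenith)

# Hypothetical Cn^2 profile decaying with altitude (illustrative, not the radar data)
h = np.linspace(100.0, 20e3, 500)            # altitude, m
cn2 = 1e-14 * np.exp(-h / 1500.0) + 1e-17    # m^(-2/3)

def trapz(y, x):
    """Trapezoidal integration (avoids depending on a specific NumPy version)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# Plane-wave Fried parameter (m) and isoplanatic angle (rad)
r0 = (0.423 * k**2 * sec_z * trapz(cn2, h)) ** (-3.0 / 5.0)
theta0 = (2.914 * k**2 * sec_z ** (8.0 / 3.0)
          * trapz(cn2 * h ** (5.0 / 3.0), h)) ** (-3.0 / 5.0)
```

The h^(5/3) weighting in θ0 is what makes the isoplanatic angle especially sensitive to turbulence near the tropopause, as the abstract notes.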

  7. Fabrication and characterization of micromachined high-frequency tonpilz transducers derived by PZT thick films.

    PubMed

    Zhou, Qifa; Cannata, Jonathan M; Meyer, Richard J; van Tol, David J; Tadigadapa, Srinivas; Hughes, W Jack; Shung, K Kirk; Trolier-McKinstry, Susan

    2005-03-01

    Miniaturized tonpilz transducers are potentially useful for ultrasonic imaging in the 10 to 100 MHz frequency range due to their higher efficiency and output capabilities. In this work, 4- to 10-μm-thick piezoelectric films were used as the active element in the construction of miniaturized tonpilz structures. The tonpilz stack consisted of silver/lead zirconate titanate (PZT)/lanthanum nickelate (LaNiO3)/silicon-on-insulator (SOI) substrates. First, conductive LaNiO3 thin films, approximately 300 nm in thickness, were grown on SOI substrates by a metalorganic decomposition (MOD) method. The room-temperature resistivity of the LaNiO3 was 6.5 × 10⁻⁶ Ω·m. Randomly oriented PZT (52/48) films up to 7 μm thick were then deposited using a sol-gel process on the LaNiO3-coated SOI substrates. The PZT films with LaNiO3 bottom electrodes showed good dielectric and ferroelectric properties. The relative dielectric permittivity (at 1 kHz) was about 1030. The remanent polarization of the PZT films was larger than 26 μC/cm². The effective transverse piezoelectric e31,f coefficient of the PZT thick films was about -6.5 C/m² when poled at -75 kV/cm for 15 minutes at room temperature. Enhanced piezoelectric properties were obtained on poling the PZT films at higher temperatures. A silver layer about 40 μm thick was prepared from silver powder dispersed in epoxy and deposited onto the PZT film to form the tail mass of the tonpilz structure. The top layers of this wafer were subsequently diced with a saw, and the structure was bonded to a second wafer. The original silicon carrier wafer was polished and etched using a xenon difluoride (XeF2) etching system. The resulting structures showed good piezoelectric activity. This process flow should enable integration of the piezoelectric elements with drive/receive electronics.

  8. First dynamic model of dissolved organic carbon derived directly from high-frequency observations through contiguous storms.

    PubMed

    Jones, Timothy D; Chappell, Nick A; Tych, Wlodek

    2014-11-18

    The first dynamic model of dissolved organic carbon (DOC) export in streams derived directly from high frequency (subhourly) observations sampled at a regular interval through contiguous storms is presented. The optimal model, identified using the recently developed RIVC algorithm, captured the rapid dynamics of DOC load from 15 min monitored rainfall with high simulation efficiencies and constrained uncertainty with a second-order (two-pathway) structure. Most of the DOC export in the four headwater basins studied was associated with the faster hydrometric pathway (also modeled in parallel), and was soon exhausted in the slower pathway. A delay in the DOC mobilization became apparent as the ambient temperatures increased. These features of the component pathways were quantified in the dynamic response characteristics (DRCs) identified by RIVC. The model and associated DRCs are intended as a foundation for a better understanding of storm-related DOC dynamics and predictability, given the increasing availability of subhourly DOC concentration data.
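The second-order structure identified by RIVC (two parallel first-order pathways) can be illustrated with a discrete-time simulation at the 15-min sampling interval. The parameter values below are invented for illustration; they are not the DRCs identified in the paper.

```python
import numpy as np

# Parallel two-pathway model: DOC load = fast pathway + slow pathway, each a
# first-order discrete transfer function y_i[k] = a_i*y_i[k-1] + b_i*u[k].
a_fast, b_fast = 0.80, 0.50   # short residence time, carries most of the export
a_slow, b_slow = 0.99, 0.02   # long residence time, small contribution

rain = np.zeros(500)          # rainfall input at 15-min steps (mm per step)
rain[50:58] = 4.0             # first storm burst
rain[300:304] = 2.0           # second, smaller storm

y_fast = np.zeros_like(rain)
y_slow = np.zeros_like(rain)
for k in range(1, len(rain)):
    y_fast[k] = a_fast * y_fast[k - 1] + b_fast * rain[k]
    y_slow[k] = a_slow * y_slow[k - 1] + b_slow * rain[k]
doc_load = y_fast + y_slow    # simulated DOC export
```

The simulated load peaks at the end of the first storm burst and the fast pathway dominates the total export, mirroring the behavior reported for the headwater basins.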

  9. Relation of Cloud Occurrence Frequency, Overlap, and Effective Thickness Derived from CALIPSO and CloudSat Merged Cloud Vertical Profiles

    NASA Technical Reports Server (NTRS)

    Kato, Seiji; Sun-Mack, Sunny; Miller, Walter F.; Rose, Fred G.; Chen, Yan; Minnis, Patrick; Wielicki, Bruce A.

    2009-01-01

    A cloud frequency of occurrence matrix is generated using merged cloud vertical profiles derived from the Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) and Cloud Profiling Radar (CPR). The matrix contains vertical profiles of cloud occurrence frequency as a function of the uppermost cloud top. It is shown that the cloud fraction and uppermost cloud top vertical profiles can be related by a set of equations when the correlation distance of cloud occurrence, which is interpreted as an effective cloud thickness, is introduced. The underlying assumption in establishing the above relation is that cloud overlap approaches the random overlap with increasing distance separating cloud layers and that the probability of deviating from the random overlap decreases exponentially with distance. One month of CALIPSO and CloudSat data support these assumptions. However, the correlation distance sometimes becomes large, which might be an indication of precipitation. The cloud correlation distance is equivalent to the de-correlation distance introduced by Hogan and Illingworth [2000] when the cloud fractions of both layers in a two-cloud-layer system are the same.
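The overlap assumption described (random overlap approached exponentially with layer separation) corresponds to blending maximum and random overlap with a weight exp(−Δz/L), in the manner of the Hogan and Illingworth [2000] de-correlation distance; a minimal sketch:

```python
import numpy as np

def combined_cover(c1, c2, dz, L):
    """Total cloud cover of two layers with fractions c1, c2 separated by dz.
    Blends maximum and random overlap with weight exp(-dz/L), where L is the
    de-correlation (effective cloud-thickness) scale, in the same units as dz."""
    c_max = max(c1, c2)              # maximum overlap
    c_ran = c1 + c2 - c1 * c2        # random overlap
    alpha = np.exp(-dz / L)
    return alpha * c_max + (1 - alpha) * c_ran

# Two 40% layers: the combined cover approaches random overlap (0.64)
# as the separation grows relative to L
near = combined_cover(0.4, 0.4, dz=0.5, L=2.0)    # km
far = combined_cover(0.4, 0.4, dz=10.0, L=2.0)
```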

  10. Derivation and Analysis of Viscoelastic Properties in Human Liver: Impact of Frequency on Fibrosis and Steatosis Staging

    PubMed Central

    Nightingale, Kathryn R.; Rouze, Ned C.; Rosenzweig, Stephen J.; Wang, Michael H.; Abdelmalek, Manal F.; Guy, Cynthia D.; Palmeri, Mark L.

    2015-01-01

    Commercially-available shear wave imaging systems measure group shear wave speed (SWS) and often report stiffness parameters applying purely elastic material models. Soft tissues, however, are viscoelastic, and higher-order material models are necessary to characterize the dispersion associated with broadband shearwaves. In this paper, we describe a robust, model-based algorithm and use a linear dispersion model to perform shearwave dispersion analysis in traditionally “difficult-to-image” subjects. In a cohort of 135 Non-Alcoholic Fatty Liver Disease patients, we compare the performance of group SWS with dispersion analysis-derived phase velocity c(200 Hz) and dispersion slope dc/df parameters to stage hepatic fibrosis and steatosis. AUROC analysis demonstrates correlation between all parameters (group SWS, c(200 Hz), and, to a lesser extent dc/df) and fibrosis stage, while no correlation was observed between steatosis stage and any of the material parameters. Interestingly, optimal AUROC threshold SWS values separating advanced liver fibrosis (≥F3) from mild-to-moderate fibrosis (≤F2) were shown to be frequency dependent, and to increase from 1.8 to 3.3 m/s over the 0–400 Hz shearwave frequency range. PMID:25585400
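The linear dispersion model reduces to fitting c(f) = c(200 Hz) + (dc/df)(f − 200) to phase-velocity estimates across the shear-wave band; a sketch with synthetic values, not patient measurements:

```python
import numpy as np

# Synthetic phase-velocity estimates over the shear-wave band (illustrative)
f = np.linspace(100, 400, 13)                         # Hz
rng = np.random.default_rng(1)
c = 1.2 + 0.004 * f + rng.normal(0, 0.02, f.size)     # m/s: linear dispersion + noise

# Fit c(f) = c(200 Hz) + dcdf * (f - 200); centering at 200 Hz makes the
# intercept the reporting parameter c(200 Hz) directly
dcdf, c200 = np.polyfit(f - 200.0, c, 1)
```

The two fitted parameters are exactly the quantities compared against fibrosis stage in the abstract: the phase velocity c(200 Hz) and the dispersion slope dc/df.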

  11. Remote Sensing Derived Fire Frequency, Soil Moisture and Ecosystem Productivity Explain Regional Movements in Emu over Australia

    PubMed Central

    Madani, Nima; Kimball, John S.; Nazeri, Mona; Kumar, Lalit; Affleck, David L. R.

    2016-01-01

    Species distribution modeling has been widely used in studying habitat relationships and for conservation purposes. However, neglecting ecological knowledge about species, e.g. their seasonal movements, and ignoring the proper environmental factors that can explain key elements for species survival (shelter, food and water) increase model uncertainty. This study exemplifies how these ecological gaps in species distribution modeling can be addressed by modeling the distribution of the emu (Dromaius novaehollandiae) in Australia. Emus cover a large area during the austral winter. However, their habitat shrinks during the summer months. We show evidence of emu summer habitat shrinkage due to higher fire frequency, and low water and food availability in northern regions. Our findings indicate that emus prefer areas with higher vegetation productivity and low fire recurrence, while their distribution is linked to an optimal intermediate (~0.12 m3 m-3) soil moisture range. We propose that the application of three geospatial data products derived from satellite remote sensing, namely fire frequency, ecosystem productivity, and soil water content, provides an effective representation of emu general habitat requirements, and substantially improves species distribution modeling and representation of the species’ ecological habitat niche across Australia. PMID:26799732

  12. Remote Sensing Derived Fire Frequency, Soil Moisture and Ecosystem Productivity Explain Regional Movements in Emu over Australia.

    PubMed

    Madani, Nima; Kimball, John S; Nazeri, Mona; Kumar, Lalit; Affleck, David L R

    2016-01-01

    Species distribution modeling has been widely used in studying habitat relationships and for conservation purposes. However, neglecting ecological knowledge about species, e.g. their seasonal movements, and ignoring the proper environmental factors that can explain key elements for species survival (shelter, food and water) increase model uncertainty. This study exemplifies how these ecological gaps in species distribution modeling can be addressed by modeling the distribution of the emu (Dromaius novaehollandiae) in Australia. Emus cover a large area during the austral winter. However, their habitat shrinks during the summer months. We show evidence of emu summer habitat shrinkage due to higher fire frequency, and low water and food availability in northern regions. Our findings indicate that emus prefer areas with higher vegetation productivity and low fire recurrence, while their distribution is linked to an optimal intermediate (~0.12 m3 m(-3)) soil moisture range. We propose that the application of three geospatial data products derived from satellite remote sensing, namely fire frequency, ecosystem productivity, and soil water content, provides an effective representation of emu general habitat requirements, and substantially improves species distribution modeling and representation of the species' ecological habitat niche across Australia.

  13. Extremely low-frequency electromagnetic field influences the survival and proliferation effect of human adipose derived stem cells

    PubMed Central

    Razavi, Shahnaz; Salimi, Marzieh; Shahbazi-Gahrouei, Daryoush; Karbasi, Saeed; Kermani, Saeed

    2014-01-01

    Background: Extremely low-frequency electromagnetic fields (ELF-EMF) can affect biological systems and alter some cell functions, such as the proliferation rate. Therefore, we aimed to evaluate the effect of ELF-EMF on the growth of human adipose-derived stem cells (hADSCs). Materials and Methods: ELF-EMF was generated by a system including an autotransformer, multi-meter, solenoid coils, a teslameter, and its probe. We assessed the effect of ELF-EMF at intensities of 0.5 and 1 mT and power-line frequency of 50 Hz on the survival of hADSCs for 20 and 40 min/day for 7 days by MTT assay. One-way analysis of variance was used to assess the significant differences between groups. Results: ELF-EMF had its maximum effect on the proliferation of hADSCs at an intensity of 1 mT for 20 min/day. The survival and proliferation effect (PE) in all exposure groups were significantly higher than those in sham groups (P < 0.05), except in the 1 mT and 40 min/day group. Conclusion: Our results show that 0.5-1 mT ELF-EMF can enhance the survival and PE of hADSCs, depending on the duration of exposure. PMID:24592372

  14. Estimating Filtering Errors Using the Peano Kernel Theorem

    SciTech Connect

    Jerome Blair

    2009-02-20

    The Peano Kernel Theorem is introduced and a frequency domain derivation is given. It is demonstrated that the application of this theorem yields simple and accurate formulas for estimating the error introduced into a signal by filtering it to reduce noise.

  15. ALTIMETER ERRORS,

    DTIC Science & Technology

    CIVIL AVIATION, *ALTIMETERS, FLIGHT INSTRUMENTS, RELIABILITY, ERRORS, PERFORMANCE(ENGINEERING), BAROMETERS, BAROMETRIC PRESSURE, ATMOSPHERIC TEMPERATURE, ALTITUDE, CORRECTIONS, AVIATION SAFETY, USSR.

  16. Tunable error-free optical frequency conversion of a 4ps optical short pulse over 25 nm by four-wave mixing in a polarisation-maintaining optical fibre

    NASA Astrophysics Data System (ADS)

    Morioka, T.; Kawanishi, S.; Saruwatari, M.

    1994-05-01

    Error-free, tunable optical frequency conversion of a transform-limited 4.0 ps optical pulse signal is demonstrated at 6.3 Gbit/s using four-wave mixing in a polarization-maintaining optical fibre. The process generates 4.0-4.6 ps pulses over a 25 nm range with time-bandwidth products of 0.31-0.43 and conversion power penalties of less than 1.5 dB.

  17. Error Patterns in Problem Solving.

    ERIC Educational Resources Information Center

    Babbitt, Beatrice C.

    Although many common problem-solving errors within the realm of school mathematics have been previously identified, a compilation of such errors is not readily available within learning disabilities textbooks, mathematics education texts, or teacher's manuals for school mathematics texts. Using data on error frequencies drawn from both the Fourth…

  18. Error coding simulations

    NASA Technical Reports Server (NTRS)

    Noble, Viveca K.

    1993-01-01

    There are various elements such as radio frequency interference (RFI) which may induce errors in data being transmitted via a satellite communication link. When a transmission is affected by interference or other error-causing elements, the transmitted data becomes indecipherable. It becomes necessary to implement techniques to recover from these disturbances. The objective of this research is to develop software which simulates error control circuits and evaluate the performance of these modules in various bit error rate environments. The results of the evaluation provide the engineer with information which helps determine the optimal error control scheme. The Consultative Committee for Space Data Systems (CCSDS) recommends the use of Reed-Solomon (RS) and convolutional encoders and Viterbi and RS decoders for error correction. The use of forward error correction techniques greatly reduces the received signal-to-noise ratio needed for a certain desired bit error rate. The use of concatenated coding, e.g. inner convolutional code and outer RS code, provides even greater coding gain. The 16-bit cyclic redundancy check (CRC) code is recommended by CCSDS for error detection.
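The 16-bit CRC mentioned for error detection can be sketched as a bitwise implementation of the CCITT polynomial x^16 + x^12 + x^5 + 1 (a minimal illustration; flight implementations follow the CCSDS recommendation exactly):

```python
def crc16_ccitt(data: bytes, init: int = 0xFFFF) -> int:
    """Bitwise CRC-16 with polynomial 0x1021 (x^16 + x^12 + x^5 + 1),
    MSB-first, initial value 0xFFFF, no bit reflection."""
    crc = init
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

check = crc16_ccitt(b"123456789")   # standard check string for CRC algorithms
```

A receiver recomputes the CRC over the received frame and flags an error on mismatch; detection (unlike the RS/convolutional codes above) triggers retransmission or discard rather than correction.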

  19. Modeling the evolution and distribution of the frequency's second derivative and the braking index of pulsar spin

    NASA Astrophysics Data System (ADS)

    Xie, Yi; Zhang, Shuang-Nan; Liao, Jin-Yuan

    2015-07-01

    We model the evolution of the spin frequency's second derivative ν̈ and the braking index n of radio pulsars with simulations within the phenomenological model of their surface magnetic field evolution, which contains a long-term power-law decay modulated by short-term oscillations. For the pulsar PSR B0329+54, a model with three oscillation components can reproduce its ν̈ variation. We show that the “averaged” n is different from the instantaneous n, and its oscillation magnitude decreases abruptly as the time span increases, due to the “averaging” effect. The simulated timing residuals agree with the main features of the reported data. Our model predicts that the averaged ν̈ of PSR B0329+54 will start to decrease rapidly with newer data beyond those used in Hobbs et al. We further perform Monte Carlo simulations for the distribution of the reported data in |ν̈| and |n| versus characteristic age τC diagrams. It is found that the magnetic field oscillation model with decay index α = 0 can reproduce the distributions quite well. Compared with magnetic field decay due to ambipolar diffusion (α = 0.5) and the Hall cascade (α = 1.0), the model with no long-term decay (α = 0) is clearly preferred for old pulsars by the p-values of the two-dimensional Kolmogorov-Smirnov test. Supported by the National Natural Science Foundation of China.
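The braking index follows from the spin frequency and its first two derivatives as n = ν ν̈ / ν̇²; a sketch with illustrative (roughly Crab-like) values, not PSR B0329+54 measurements:

```python
# Braking index from spin frequency and its derivatives: n = nu * nuddot / nudot**2.
# The values below are illustrative, roughly Crab-like numbers.
nu = 29.9          # spin frequency, Hz
nudot = -3.77e-10  # first derivative, Hz/s (spin-down)
nuddot = 1.18e-20  # second derivative, Hz/s^2

n = nu * nuddot / nudot ** 2   # ~2.5 for these values; pure dipole braking gives n = 3
```

Because ν̈ is tiny, short-term oscillations in the measured ν̈ translate into large swings of the instantaneous n, which is why the abstract distinguishes it from the long-span "averaged" value.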

  20. Financial errors in dementia: testing a neuroeconomic conceptual framework.

    PubMed

    Chiong, Winston; Hsu, Ming; Wudka, Danny; Miller, Bruce L; Rosen, Howard J

    2014-08-01

    Financial errors by patients with dementia can have devastating personal and family consequences. We developed and evaluated a neuroeconomic conceptual framework for understanding financial errors across different dementia syndromes, using a systematic, retrospective, blinded chart review of demographically-balanced cohorts of patients with Alzheimer's disease (AD, n=100) and behavioral variant frontotemporal dementia (bvFTD, n=50). Reviewers recorded specific reports of financial errors according to a conceptual framework identifying patient cognitive and affective characteristics, and contextual influences, conferring susceptibility to each error. Specific financial errors were reported for 49% of AD and 70% of bvFTD patients (p = 0.012). AD patients were more likely than bvFTD patients to make amnestic errors (p < 0.001), while bvFTD patients were more likely to spend excessively (p = 0.004) and to exhibit other behaviors consistent with diminished sensitivity to losses and other negative outcomes (p < 0.001). Exploratory factor analysis identified a social/affective vulnerability factor associated with errors in bvFTD, and a cognitive vulnerability factor associated with errors in AD. Our findings highlight the frequency and functional importance of financial errors as symptoms of AD and bvFTD. A conceptual model derived from neuroeconomic literature identifies factors that influence vulnerability to different types of financial error in different dementia syndromes, with implications for early diagnosis and subsequent risk prevention.

  1. Medication Errors

    MedlinePlus


  2. Language comprehension errors: A further investigation

    NASA Astrophysics Data System (ADS)

    Clarkson, Philip C.

    1991-06-01

    Comprehension errors made when attempting mathematical word problems have been noted as one of the high frequency categories in error analysis. This error category has been assumed to be language based. The study reported here provides some support for the linkage of comprehension errors to measures of language competency. Further, there is evidence that the frequency of such errors is related to competency in both the mother tongue and the language of instruction for bilingual students.

  3. Error Analysis

    NASA Astrophysics Data System (ADS)

    Scherer, Philipp O. J.

Input data as well as the results of elementary operations have to be represented by machine numbers, the subset of real numbers which is used by the arithmetic unit of today's computers. Generally this generates rounding errors. This kind of numerical error can be avoided in principle by using arbitrary-precision arithmetic or symbolic algebra programs, but this is impractical in many cases due to the increase in computing time and memory requirements. Results from more complex operations like square roots or trigonometric functions can have even larger errors, since series expansions have to be truncated and iterations accumulate the errors of the individual steps. In addition, the precision of input data from an experiment is limited. In this chapter we study the influence of numerical errors on the uncertainties of the calculated results and the stability of simple algorithms.
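
The rounding behavior described in this chapter summary can be seen directly in IEEE 754 double precision, which is what Python floats use; a minimal illustration:

```python
import math

# 0.1 has no exact binary representation, so each addition rounds:
naive = sum([0.1] * 10)
print(naive == 1.0)                    # False: naive is 0.9999999999999999

# compensated summation tracks the lost low-order bits exactly:
print(math.fsum([0.1] * 10) == 1.0)   # True
```

`math.fsum` recovers the exact result at extra cost, echoing the trade-off between precision and computing effort noted above.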

  4. Kiwifruit-derived supplements increase stool frequency in healthy adults: a randomized, double-blind, placebo-controlled study.

    PubMed

    Ansell, Juliet; Butts, Christine A; Paturi, Gunaranjan; Eady, Sarah L; Wallace, Alison J; Hedderley, Duncan; Gearry, Richard B

    2015-05-01

    The worldwide growth in the incidence of gastrointestinal disorders has created an immediate need to identify safe and effective interventions. In this randomized, double-blind, placebo-controlled study, we examined the effects of Actazin and Gold, kiwifruit-derived nutritional ingredients, on stool frequency, stool form, and gastrointestinal comfort in healthy and functionally constipated (Rome III criteria for C3 functional constipation) individuals. Using a crossover design, all participants consumed all 4 dietary interventions (Placebo, Actazin low dose [Actazin-L] [600 mg/day], Actazin high dose [Actazin-H] [2400 mg/day], and Gold [2400 mg/day]). Each intervention was taken for 28 days followed by a 14-day washout period between interventions. Participants recorded their daily bowel movements and well-being parameters in daily questionnaires. In the healthy cohort (n = 19), the Actazin-H (P = .014) and Gold (P = .009) interventions significantly increased the mean daily bowel movements compared with the washout. No significant differences were observed in stool form as determined by use of the Bristol stool scale. In a subgroup analysis of responders in the healthy cohort, Actazin-L (P = .005), Actazin-H (P < .001), and Gold (P = .001) consumption significantly increased the number of daily bowel movements by greater than 1 bowel movement per week. In the functionally constipated cohort (n = 9), there were no significant differences between interventions for bowel movements and the Bristol stool scale values or in the subsequent subgroup analysis of responders. This study demonstrated that Actazin and Gold produced clinically meaningful increases in bowel movements in healthy individuals.

  5. Sensitivity of tissue properties derived from MRgFUS temperature data to input errors and data inclusion criteria: ex vivo study in porcine muscle

    NASA Astrophysics Data System (ADS)

    Shi, Y. C.; Parker, D. L.; Dillon, C. R.

    2016-08-01

This study evaluates the sensitivity of two magnetic resonance-guided focused ultrasound (MRgFUS) thermal property estimation methods to errors in required inputs and to different data inclusion criteria. Using ex vivo pork muscle MRgFUS data, sensitivities to required inputs are determined by introducing errors into the ultrasound beam locations (r_error = -2 to 2 mm) and time vectors (t_error = -2.2 to 2.2 s). In addition, the sensitivity to user-defined data inclusion criteria is evaluated by choosing different spatial (r_fit = 1-10 mm) and temporal (t_fit = 8.8-61.6 s) regions for fitting. Beam location errors resulted in up to 50% change in property estimates, with local minima occurring at r_error = 0 and estimate errors less than 10% when r_error < 0.5 mm. Errors in the time vector led to property estimate errors up to 40% with no local minimum, indicating the need to trigger ultrasound sonications with the MR image acquisition. Regarding the selection of data inclusion criteria, property estimates reached stable values (less than 5% change) when r_fit > 2.5 × FWHM, and were most accurate with the least variability for longer t_fit. Guidelines provided by this study highlight the importance of identifying required inputs and choosing appropriate data inclusion criteria for robust and accurate thermal property estimation. Applying these guidelines will prevent the introduction of biases and avoidable errors when utilizing these property estimation techniques for MRgFUS thermal modeling applications.

  6. Thermodynamics of Error Correction

    NASA Astrophysics Data System (ADS)

    Sartori, Pablo; Pigolotti, Simone

    2015-10-01

    Information processing at the molecular scale is limited by thermal fluctuations. This can cause undesired consequences in copying information since thermal noise can lead to errors that can compromise the functionality of the copy. For example, a high error rate during DNA duplication can lead to cell death. Given the importance of accurate copying at the molecular scale, it is fundamental to understand its thermodynamic features. In this paper, we derive a universal expression for the copy error as a function of entropy production and work dissipated by the system during wrong incorporations. Its derivation is based on the second law of thermodynamics; hence, its validity is independent of the details of the molecular machinery, be it any polymerase or artificial copying device. Using this expression, we find that information can be copied in three different regimes. In two of them, work is dissipated to either increase or decrease the error. In the third regime, the protocol extracts work while correcting errors, reminiscent of a Maxwell demon. As a case study, we apply our framework to study a copy protocol assisted by kinetic proofreading, and show that it can operate in any of these three regimes. We finally show that, for any effective proofreading scheme, error reduction is limited by the chemical driving of the proofreading reaction.

  7. Correcting numerical integration errors caused by small aliasing errors

    SciTech Connect

    Smallwood, D.O.

    1997-11-01

    Small sampling errors can have a large effect on numerically integrated waveforms. An example is the integration of acceleration to compute velocity and displacement waveforms. These large integration errors complicate checking the suitability of the acceleration waveform for reproduction on shakers. For waveforms typically used for shaker reproduction, the errors become significant when the frequency content of the waveform spans a large frequency range. It is shown that these errors are essentially independent of the numerical integration method used, and are caused by small aliasing errors from the frequency components near the Nyquist frequency. A method to repair the integrated waveforms is presented. The method involves using a model of the acceleration error, and fitting this model to the acceleration, velocity, and displacement waveforms to force the waveforms to fit the assumed initial and final values. The correction is then subtracted from the acceleration before integration. The method is effective where the errors are isolated to a small section of the time history. It is shown that the common method to repair these errors using a high pass filter is sometimes ineffective for this class of problem.
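
As an illustration of why small acceleration errors matter (a generic sketch, not the paper's error model or repair method): a tiny residual error integrates to a linearly growing velocity error and a quadratically growing displacement error.

```python
# Generic sketch: integrating a small, constant acceleration error.
# Only the error term is integrated here; it adds linearly to velocity
# and quadratically to displacement, swamping the waveform check.
dt = 0.001          # sample interval, s
n = 100_000         # 100 s record
bias = 1e-4         # small acceleration error, m/s^2

vel = 0.0
disp = 0.0
for _ in range(n):
    vel += bias * dt          # velocity error grows ~ bias * T
    disp += vel * dt          # displacement error grows ~ bias * T^2 / 2

print(vel)    # ~0.01 m/s
print(disp)   # ~0.5 m
```

A frequency-localized error near Nyquist behaves analogously after double integration, which is why the paper fits and subtracts an error model rather than relying on high-pass filtering alone.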

  8. Technical Note: Calculation of standard errors of estimates of genetic parameters with the multiple-trait derivative-free restricted maximal likelihood programs

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The MTDFREML (Boldman et al., 1995) set of programs was written to handle partially missing data in an expedient manner. When estimating (co)variance components and genetic parameters for multiple trait models, the programs have not been able to estimate standard errors of those estimates for multi...

  9. Low-frequency, low-magnitude vibrations (LFLM) enhances chondrogenic differentiation potential of human adipose derived mesenchymal stromal stem cells (hASCs)

    PubMed Central

    Lewandowski, Daniel; Tomaszewski, Krzysztof A.; Henry, Brandon M.; Golec, Edward B.; Marędziak, Monika

    2016-01-01

The aim of this study was to evaluate whether low-frequency, low-magnitude vibrations (LFLM) could enhance the chondrogenic differentiation potential of human adipose-derived mesenchymal stem cells (hASCs) with simultaneous inhibition of their adipogenic properties for biomedical purposes. We developed a prototype device that induces low-magnitude (0.3 g) vibrations at the following frequencies: 25, 35 and 45 Hz. Afterwards, we used hASCs to investigate their cellular response to the mechanical signals, and we also evaluated changes in hASC morphology and proliferative activity in response to each frequency. Induction of chondrogenesis in hASCs under the influence of a 35 Hz signal led to the most effective and stable cartilaginous tissue formation, through the highest secretion of Bone Morphogenetic Protein 2 (BMP-2) and Collagen type II with a low concentration of Collagen type I. These results correlated well with the corresponding gene expression levels. Simultaneously, we observed significant up-regulation of α3, α4, β1 and β3 integrins, as well as Sox-9, in chondroblast progenitor cells treated with 35 Hz vibrations. Interestingly, we noticed that application of the 35 Hz frequency significantly inhibited adipogenesis of hASCs. The obtained results suggest that application of LFLM vibrations together with stem cell therapy might be a promising tool in cartilage regeneration. PMID:26966645

  11. Reduction of Surface Errors over a Wide Range of Spatial Frequencies Using a Combination of Electrolytic In-Process Dressing Grinding and Magnetorheological Finishing

    NASA Astrophysics Data System (ADS)

    Kunimura, Shinsuke; Ohmori, Hitoshi

We present a rapid process for producing flat and smooth surfaces. In this technical note, a fabrication result of a carbon mirror is shown. Electrolytic in-process dressing (ELID) grinding with a metal bonded abrasive wheel, then a metal-resin bonded abrasive wheel, followed by a conductive rubber bonded abrasive wheel, and finally magnetorheological finishing (MRF) were performed as the first, second, third, and final steps, respectively in this process. Flatness over the whole surface was improved by performing the first and second steps. After the third step, peak to valley (PV) and root mean square (rms) values in an area of 0.72 × 0.54 mm² on the surface were improved. These values were further improved after the final step, and a PV value of 10 nm and an rms value of 1 nm were obtained. Form errors and small surface irregularities such as surface waviness and micro roughness were efficiently reduced by performing ELID grinding using the above three kinds of abrasive wheels because of the high removal rate of ELID grinding, and residual small irregularities were reduced by short time MRF. This process makes it possible to produce flat and smooth surfaces in several hours.

  12. Automatic Locking of Laser Frequency to an Absorption Peak

    NASA Technical Reports Server (NTRS)

    Koch, Grady J.

    2006-01-01

    An electronic system adjusts the frequency of a tunable laser, eventually locking the frequency to a peak in the optical absorption spectrum of a gas (or of a Fabry-Perot cavity that has an absorption peak like that of a gas). This system was developed to enable precise locking of the frequency of a laser used in differential absorption LIDAR measurements of trace atmospheric gases. This system also has great commercial potential as a prototype of means for precise control of frequencies of lasers in future dense wavelength-division-multiplexing optical communications systems. The operation of this system is completely automatic: Unlike in the operation of some prior laser-frequency-locking systems, there is ordinarily no need for a human operator to adjust the frequency manually to an initial value close enough to the peak to enable automatic locking to take over. Instead, this system also automatically performs the initial adjustment. The system (see Figure 1) is based on a concept of (1) initially modulating the laser frequency to sweep it through a spectral range that includes the desired absorption peak, (2) determining the derivative of the absorption peak with respect to the laser frequency for use as an error signal, (3) identifying the desired frequency [at the very top (which is also the middle) of the peak] as the frequency where the derivative goes to zero, and (4) thereafter keeping the frequency within a locking range and adjusting the frequency as needed to keep the derivative (the error signal) as close as possible to zero. More specifically, the system utilizes the fact that in addition to a zero crossing at the top of the absorption peak, the error signal also closely approximates a straight line in the vicinity of the zero crossing (see Figure 2). This vicinity is the locking range because the linearity of the error signal in this range makes it useful as a source of feedback for a proportional + integral + derivative control scheme that
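
The sweep-then-lock sequence described above can be sketched numerically; the Gaussian line shape, gain, and step sizes below are illustrative assumptions, not values from the article:

```python
from math import exp

F0, SIGMA = 100.0, 5.0                    # assumed line center and width (a.u.)

def absorption(f):
    return exp(-(f - F0) ** 2 / (2 * SIGMA ** 2))

def error_signal(f, df=0.01):
    # derivative of the peak w.r.t. frequency: zero exactly at the peak top
    return (absorption(f + df) - absorption(f - df)) / (2 * df)

# 1) coarse sweep to find where the derivative changes sign
fs = [80 + 0.5 * k for k in range(81)]
f = next(a for a, b in zip(fs, fs[1:])
         if error_signal(a) > 0 >= error_signal(b))

# 2) lock: feedback drives the error signal (the derivative) to zero
for _ in range(200):
    f += 20.0 * error_signal(f)

print(abs(f - F0) < 1e-3)   # locked to the peak center
```

The proportional term alone suffices for this toy model; a real lock would add the integral and derivative terms mentioned above to reject drift and noise within the linear locking range.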

  13. Phase-amplitude cross-frequency coupling in EEG-derived cortical time series upon an auditory perception task.

    PubMed

    Papadaniil, Chrysa D; Kosmidou, Vasiliki E; Tsolaki, Anthoula; Tsolaki, Magda; Kompatsiaris, Ioannis Yiannis; Hadjileontiadis, Leontios J

    2015-01-01

    Recent evidence suggests that cross-frequency coupling (CFC) plays an essential role in multi-scale communication across the brain. The amplitude of the high frequency oscillations, responsible for local activity, is modulated by the phase of the lower frequency activity, in a task and region-relevant way. In this paper, we examine this phase-amplitude coupling in a two-tone oddball paradigm for the low frequency bands (delta, theta, alpha, and beta) and determine the most prominent CFCs. Data consisted of cortical time series, extracted by applying three-dimensional vector field tomography (3D-VFT) to high density (256 channels) electroencephalography (HD-EEG), and CFC analysis was based on the phase-amplitude coupling metric, namely PAC. Our findings suggest CFC spanning across all brain regions and low frequencies. Stronger coupling was observed in the delta band, that is closely linked to sensory processing. However, theta coupling was reinforced in the target tone response, revealing a task-dependent CFC and its role in brain networks communication.
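
As a toy illustration of phase-amplitude coupling, the mean-vector-length metric (one common PAC definition; the paper's exact metric may differ in detail) distinguishes a coupled from an uncoupled signal:

```python
import cmath
import math

N = 10_000
t = [k / N for k in range(N)]                # one second, N samples

def mvl(phase, amp):
    # mean vector length: |<A(t) * exp(i*phi(t))>|
    v = sum(a * cmath.exp(1j * p) for a, p in zip(amp, phase)) / len(amp)
    return abs(v)

phase = [2 * math.pi * 6 * x for x in t]             # 6 Hz (theta) phase
coupled   = [1 + 0.8 * math.cos(p) for p in phase]   # high-freq amplitude follows phase
uncoupled = [1.0] * N                                # constant amplitude

print(mvl(phase, coupled))     # ~0.4: amplitude is modulated by phase
print(mvl(phase, uncoupled))   # ~0.0: no coupling
```

In practice the low-frequency phase and high-frequency amplitude envelope would first be extracted from the EEG-derived time series (e.g., by band-pass filtering and a Hilbert transform) before applying the metric.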

  14. The surveillance error grid.

    PubMed

    Klonoff, David C; Lias, Courtney; Vigersky, Robert; Clarke, William; Parkes, Joan Lee; Sacks, David B; Kirkman, M Sue; Kovatchev, Boris

    2014-07-01

Currently used error grids for assessing clinical accuracy of blood glucose monitors are based on out-of-date medical practices. Error grids have not been widely embraced by regulatory agencies for clearance of monitors, but this type of tool could be useful for surveillance of the performance of cleared products. Diabetes Technology Society together with representatives from the Food and Drug Administration, the American Diabetes Association, the Endocrine Society, and the Association for the Advancement of Medical Instrumentation, and representatives of academia, industry, and government, have developed a new error grid, called the surveillance error grid (SEG) as a tool to assess the degree of clinical risk from inaccurate blood glucose (BG) monitors. A total of 206 diabetes clinicians were surveyed about the clinical risk of errors of measured BG levels by a monitor. The impact of such errors on 4 patient scenarios was surveyed. Each monitor/reference data pair was scored and color-coded on a graph per its average risk rating. Using modeled data representative of the accuracy of contemporary meters, the relationships between clinical risk and monitor error were calculated for the Clarke error grid (CEG), Parkes error grid (PEG), and SEG. SEG action boundaries were consistent across scenarios, regardless of whether the patient was type 1 or type 2 or using insulin or not. No significant differences were noted between responses of adult/pediatric or 4 types of clinicians. Although small specific differences in risk boundaries between US and non-US clinicians were noted, the panel felt they did not justify separate grids for these 2 types of clinicians. The data points of the SEG were classified in 15 zones according to their assigned level of risk, which allowed for comparisons with the classic CEG and PEG. Modeled glucose monitor data with realistic self-monitoring of blood glucose errors derived from meter testing experiments plotted on the SEG when compared to

  15. Atmospheric absorption model for dry air and water vapor at microwave frequencies below 100 GHz derived from spaceborne radiometer observations

    NASA Astrophysics Data System (ADS)

    Wentz, Frank J.; Meissner, Thomas

    2016-05-01

The Liebe and Rosenkranz atmospheric absorption models for dry air and water vapor below 100 GHz are refined based on an analysis of antenna temperature (TA) measurements taken by the Global Precipitation Measurement Microwave Imager (GMI) in the frequency range 10.7 to 89.0 GHz. The GMI TA measurements are compared to the TA predicted by a radiative transfer model (RTM), which incorporates both the atmospheric absorption model and a model for the emission and reflection from a rough-ocean surface. The inputs for the RTM are the geophysical retrievals of wind speed, columnar water vapor, and columnar cloud liquid water obtained from the satellite radiometer WindSat. The Liebe and Rosenkranz absorption models are adjusted to achieve consistency with the RTM. The vapor continuum is decreased by 3% to 10%, depending on vapor. To accomplish this, the foreign-broadening part is increased by 10%, and the self-broadening part is decreased by about 40% at the higher frequencies. In addition, the strength of the water vapor line is increased by 1%, and the shape of the line at low frequencies is modified. The dry air absorption is increased, with the increase reaching a maximum of 20% at 89 GHz, the highest frequency considered here. The nonresonant oxygen absorption is increased by about 6%. In addition to the RTM comparisons, our results are supported by a comparison between columnar water vapor retrievals from 12 satellite microwave radiometers and GPS-retrieved water vapor values.

  16. Error monitoring in musicians

    PubMed Central

    Maidhof, Clemens

    2013-01-01

To err is human, and hence even professional musicians make errors occasionally during their performances. This paper summarizes recent work investigating error monitoring in musicians, i.e., the processes and their neural correlates associated with the monitoring of ongoing actions and the detection of deviations from intended sounds. Electroencephalography (EEG) studies reported an early component of the event-related potential (ERP) occurring before the onsets of pitch errors. This component, which can be altered in musicians with focal dystonia, likely reflects processes of error detection and/or error compensation, i.e., attempts to cancel the undesired sensory consequence (a wrong tone) a musician is about to perceive. Thus, auditory feedback seems not to be a prerequisite for error detection, consistent with previous behavioral results. In contrast, when auditory feedback is externally manipulated and thus unexpected, motor performance can be severely distorted, although not all feedback alterations result in performance impairments. Recent studies investigating the neural correlates of feedback processing showed that unexpected feedback elicits an ERP component after note onsets, which shows larger amplitudes during music performance than during mere perception of the same musical sequences. Hence, these results stress the role of motor actions for the processing of auditory information. Furthermore, recent methodological advances like the combination of 3D motion capture techniques with EEG will be discussed. Such combinations of different measures can potentially help to disentangle the roles of different feedback types such as proprioceptive and auditory feedback, and in general to arrive at a better understanding of the complex interactions between the motor and auditory domain during error monitoring. Finally, outstanding questions and future directions in this context will be discussed. PMID:23898255

  17. Standard Errors of the Kernel Equating Methods under the Common-Item Design.

    ERIC Educational Resources Information Center

    Liou, Michelle; And Others

    This research derives simplified formulas for computing the standard error of the frequency estimation method for equating score distributions that are continuized using a uniform or Gaussian kernel function (P. W. Holland, B. F. King, and D. T. Thayer, 1989; Holland and Thayer, 1987). The simplified formulas are applicable to equating both the…

  18. Automatic oscillator frequency control system

    NASA Technical Reports Server (NTRS)

    Smith, S. F. (Inventor)

    1985-01-01

    A frequency control system makes an initial correction of the frequency of its own timing circuit after comparison against a frequency of known accuracy and then sequentially checks and corrects the frequencies of several voltage controlled local oscillator circuits. The timing circuit initiates the machine cycles of a central processing unit which applies a frequency index to an input register in a modulo-sum frequency divider stage and enables a multiplexer to clock an accumulator register in the divider stage with a cyclical signal derived from the oscillator circuit being checked. Upon expiration of the interval, the processing unit compares the remainder held as the contents of the accumulator against a stored zero error constant and applies an appropriate correction word to a correction stage to shift the frequency of the oscillator being checked. A signal from the accumulator register may be used to drive a phase plane ROM and, with periodic shifts in the applied frequency index, to provide frequency shift keying of the resultant output signal. Interposition of a phase adder between the accumulator register and phase plane ROM permits phase shift keying of the output signal by periodic variation in the value of a phase index applied to one input of the phase adder.
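
A highly simplified software sketch of one check-and-correct cycle (the counter length, scaling, and single-variable VCO model are illustrative assumptions, not the patented circuit):

```python
NOMINAL = 1_000_000.0      # Hz, reference frequency of known accuracy
GATE_CYCLES = 1_000        # timing interval measured in nominal cycles
ZERO_ERROR = 0.0           # stored zero-error constant

def accumulator_remainder(osc_freq):
    # cycles of the checked oscillator accumulated over the gate interval,
    # minus the count a perfectly tuned oscillator would produce
    return GATE_CYCLES * osc_freq / NOMINAL - GATE_CYCLES

osc = 1_020_000.0          # VCO starts 2% off frequency
for _ in range(5):
    remainder = accumulator_remainder(osc)
    correction = (remainder - ZERO_ERROR) * NOMINAL / GATE_CYCLES
    osc -= correction      # apply correction word to shift the VCO

print(osc)   # pulled onto the 1 MHz reference
```

In the described system the same accumulator contents also drive the phase-plane ROM, so periodic changes to the frequency or phase index yield FSK or PSK modulation of the output.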

  19. The differentiation of human adipose-derived stem cells (hASCs) into osteoblasts is promoted by low amplitude, high frequency vibration treatment.

    PubMed

    Prè, D; Ceccarelli, G; Gastaldi, G; Asti, A; Saino, E; Visai, L; Benazzo, F; Cusella De Angelis, M G; Magenes, G

    2011-08-01

Several studies have demonstrated that tissue culture conditions influence the differentiation of human adipose-derived stem cells (hASCs). Recently, studies performed on SAOS-2 and bone marrow stromal cells (BMSCs) have shown the effectiveness of high frequency vibration treatment on cell differentiation to osteoblasts. The aim of this study was to evaluate the effects of low amplitude, high frequency vibrations on the differentiation of hASCs toward bone tissue. In view of this goal, hASCs were cultured in proliferative or osteogenic media and stimulated daily at 30 Hz for 45 min for 28 days. The state of calcification of the extracellular matrix was determined using the alizarin assay, while the expression of extracellular matrix and associated mRNA was determined by ELISA assays and quantitative RT-PCR (qRT-PCR). The results showed the osteogenic effect of high frequency vibration treatment in the early stages of hASC differentiation (after 14 and 21 days). On the contrary, no additional significant differences were observed after 28 days of cell culture. Transmission Electron Microscopy (TEM) images performed on 21-day samples showed evidence of structured collagen fibers in the treated samples. Altogether, these results demonstrate the effectiveness of high frequency vibration treatment on hASC differentiation toward osteoblasts.

  20. Multi-Frequency Synthesis

    NASA Astrophysics Data System (ADS)

    Conway, J. E.; Sault, R. J.

    Introduction; Image Fidelity; Multi-Frequency Synthesis; Spectral Effects; The Spectral Expansion; Spectral Dirty Beams; First Order Spectral Errors; Second Order Spectral Errors; The MFS Deconvolution Problem; Nature of The Problem; Map and Stack; Direct Assault; Data Weighting Methods; Double Deconvolution; The Sault Algorithm; Multi-Frequency Self-Calibration; Practical MFS; Conclusions

  1. Phase Errors and the Capture Effect

    SciTech Connect

    Blair, J., and Machorro, E.

    2011-11-01

This slide show presents an analysis of spectrograms and of the phase error of filtered noise in a signal. When the filtered noise is smaller than the signal amplitude, the phase error can never exceed 90°, so the average phase error over many cycles is zero. This is called the capture effect, because the largest signal captures the phase and frequency determination.
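
The 90° bound has a simple phasor explanation: adding a noise phasor of amplitude r < 1 to a unit signal phasor perturbs the resultant phase by at most arcsin(r). A quick numerical check:

```python
import cmath
import math

r = 0.5          # noise amplitude relative to a unit signal phasor
N = 10_000
errors = [math.degrees(cmath.phase(1 + r * cmath.exp(2j * math.pi * k / N)))
          for k in range(N)]

print(max(abs(e) for e in errors))   # bounded by arcsin(0.5) = 30 deg, never 90
print(sum(errors) / N)               # ~0: the larger signal "captures" the phase
```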

  2. Variational derivation of the dispersion relation of kinetic coherent modes in the acoustic frequency range in tokamaks

    SciTech Connect

    Nguyen, C.; Garbet, X.; Smolyakov, A. I.

    2008-11-15

In the present paper, we compare two modes with frequencies belonging to the acoustic frequency range: the geodesic acoustic mode (GAM) and the Beta Alfvén eigenmode (BAE). For this, a variational gyrokinetic energy principle coupled to a Fourier sidebands expansion is developed. High order finite Larmor radius and finite orbit width effects are kept. Their impact on the mode structures and on the Alfvén spectrum is calculated and discussed. We show that in a local analysis, the degeneracy of the electrostatic GAM and the BAE dispersion relations is verified to a high order and based in particular on a local poloidal symmetry of the two modes. When a more global point of view is taken, and the full radial structures of the modes are computed, differences appear. The BAE structure is shown to have an enforced localization, and to possibly connect to a large magnetohydrodynamic structure. On the contrary, the GAM is seen to have a wavelike, nonlocalized structure, as long as standard slowly varying monotonic profiles are considered.

  3. Radar error statistics for the space shuttle

    NASA Technical Reports Server (NTRS)

    Lear, W. M.

    1979-01-01

Radar error statistics for the C-band and S-band radars recommended for use with the ground-tracking programs that process space shuttle tracking data are presented. The statistics are divided into two parts: bias error statistics, using the subscript B, and high frequency error statistics, using the subscript q. Bias errors may be slowly varying to constant. High frequency random errors (noise) are rapidly varying and may or may not be correlated from sample to sample. Bias errors were mainly due to hardware defects and to errors in correction for atmospheric refraction effects. High frequency noise was mainly due to hardware and to atmospheric scintillation. Three types of atmospheric scintillation were identified: horizontal, vertical, and line of sight. This was the first time that horizontal and line-of-sight scintillations were identified.

  4. Error studies for SNS Linac. Part 1: Transverse errors

    SciTech Connect

    Crandall, K.R.

    1998-12-31

The SNS linac consists of a radio-frequency quadrupole (RFQ), a drift-tube linac (DTL), a coupled-cavity drift-tube linac (CCDTL) and a coupled-cavity linac (CCL). The RFQ and DTL are operated at 402.5 MHz; the CCDTL and CCL are operated at 805 MHz. Between the RFQ and DTL is a medium-energy beam-transport system (MEBT). This error study is concerned with the DTL, CCDTL and CCL, and each will be analyzed separately. In fact, the CCL is divided into two sections, and each of these will be analyzed separately. The types of errors considered here are those that affect the transverse characteristics of the beam. The errors that cause the beam center to be displaced from the linac axis are quad displacements and quad tilts. The errors that cause mismatches are quad gradient errors and quad rotations (roll).

  5. Long-term effects of brain-derived neurotrophic factor on the frequency of inhibitory synaptic events in the rat superficial dorsal horn.

    PubMed

    Lu, V B; Colmers, W F; Smith, P A

    2009-07-21

    Chronic constriction injury (CCI) of rat sciatic nerve produces a specific pattern of electrophysiological changes in the superficial dorsal horn that lead to central sensitization that is associated with neuropathic pain. These changes can be recapitulated in spinal cord organotypic cultures by long term (5-6 days) exposure to brain-derived neurotrophic factor (BDNF) (200 ng/ml). Certain lines of evidence suggest that both CCI and BDNF increase excitatory synaptic drive to putative excitatory neurons while reducing that to putative inhibitory interneurons. Because BDNF slows the rate of discharge of synaptically-driven action potentials in inhibitory neurons, it should also decrease the frequency of spontaneous inhibitory postsynaptic currents (sIPSCs) throughout the superficial dorsal horn. To test this possibility, we characterized superficial dorsal horn neurons in organotypic cultures according to five electrophysiological phenotypes that included tonic, delay and irregular firing neurons. Five to 6 days of treatment with 200 ng/ml BDNF decreased sIPSC frequency in tonic and irregular neurons as might be expected if BDNF selectively decreases excitatory synaptic drive to inhibitory interneurons. The frequency of sIPSCs in delay neurons was however increased. Further analysis of the action of BDNF on tetrodotoxin-resistant miniature inhibitory postsynaptic currents (mIPSC) showed that the frequency was increased in delay neurons, unchanged in tonic neurons and decreased in irregular neurons. BDNF may thus reduce action potential frequency in those inhibitory interneurons that project to tonic and irregular neurons but not in those that project to delay neurons.

  6. Interpolation Errors in Spectrum Analyzers

    NASA Technical Reports Server (NTRS)

    Martin, J. L.

    1996-01-01

    To obtain the proper measurement amplitude with a spectrum analyzer, the correct frequency-dependent transducer factor must be added to the voltage measured by the transducer. This report examines how entering transducer factors into a spectrum analyzer can cause significant errors in field amplitude due to the misunderstanding of the analyzer's interpolation methods. It also discusses how to reduce these errors to obtain a more accurate field amplitude reading.
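The size of the interpolation error the report describes can be illustrated by comparing linear-in-frequency with linear-in-log-frequency interpolation of an entered transducer factor. The factor values and frequencies below are made up for the example; the point is only that the two interpolation conventions disagree between entered points:

```python
import math

# Illustrative only: a transducer (e.g., antenna) factor entered at two
# frequencies can be interpolated by the analyzer linearly in frequency,
# while the user may assume log-frequency interpolation (or vice versa).
def interp_linear_f(f, f1, f2, a1, a2):
    return a1 + (a2 - a1) * (f - f1) / (f2 - f1)

def interp_log_f(f, f1, f2, a1, a2):
    return a1 + (a2 - a1) * math.log10(f / f1) / math.log10(f2 / f1)

# Hypothetical factor: 10 dB at 100 MHz, 30 dB at 1000 MHz.
f1, f2, a1, a2 = 100e6, 1000e6, 10.0, 30.0
f = 300e6  # measurement frequency between the two entered points

lin = interp_linear_f(f, f1, f2, a1, a2)  # ~14.44 dB
log = interp_log_f(f, f1, f2, a1, a2)     # ~19.54 dB
print(round(log - lin, 2), "dB discrepancy at 300 MHz")
```

A discrepancy of several dB in the applied correction translates directly into the same error in the reported field amplitude.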

  7. Visual field test simulation and error in threshold estimation.

    PubMed Central

    Spenceley, S E; Henson, D B

    1996-01-01

    AIM: To establish, via computer simulation, the effects of patient response variability and staircase starting level upon the accuracy and repeatability of static full threshold visual field tests. METHOD: Patient response variability, defined by the standard deviation of the frequency of seeing versus stimulus intensity curve, is varied from 0.5 to 20 dB (in steps of 0.5 dB) with staircase starting levels ranging from 30 dB below to 30 dB above the patient's threshold (in steps of 10 dB). Fifty two threshold estimates are derived for each condition and the error of each estimate calculated (difference between the true threshold and the threshold estimate derived from the staircase procedure). The mean and standard deviation of the errors are then determined for each condition. The results from a simulated quadrantic defect (response variability set to typical values for a patient with glaucoma) are presented using two different algorithms. The first corresponds with that normally used when performing a full threshold examination while the second uses results from an earlier simulated full threshold examination for the staircase starting values. RESULTS: The mean error in threshold estimates was found to be biased towards the staircase starting level. The extent of the bias was dependent upon patient response variability. The standard deviation of the error increased both with response variability and staircase starting level. With the routinely used full threshold strategy the quadrantic defect was found to have a large mean error in estimated threshold values and an increase in the standard deviation of the error along the edge of the defect. When results from an earlier full threshold test are used as staircase starting values this error and increased standard deviation largely disappeared. CONCLUSION: The staircase procedure widely used in threshold perimetry increased the error and the variability of threshold estimates along the edges of defects. Using
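The staircase procedure studied above can be sketched as a toy simulation. The details below (a 4-2 dB staircase stopping after two reversals, a Gaussian frequency-of-seeing curve) are illustrative stand-ins, not the paper's exact simulator:

```python
import math
import random

# Toy re-creation of a 4-2 dB full-threshold staircase run against a
# Gaussian frequency-of-seeing curve whose standard deviation models
# patient response variability. In perimetry a higher dB value means a
# dimmer stimulus, so the probability of seeing falls off as the level
# rises past the patient's threshold.
def seen(level_db, threshold_db, sd_db, rng):
    p = 0.5 * (1.0 - math.erf((level_db - threshold_db) / (sd_db * 2 ** 0.5)))
    return rng.random() < p

def staircase(start_db, threshold_db, sd_db, rng):
    level, step, reversals, prev, estimate = start_db, 4, 0, None, start_db
    while reversals < 2:
        s = seen(level, threshold_db, sd_db, rng)
        if prev is not None and s != prev:
            reversals += 1      # response flipped: a staircase reversal
            step = 2            # refine with the smaller step
        if s:
            estimate = level    # last level at which the target was seen
            level += step       # seen -> dim the stimulus
        else:
            level -= step       # missed -> brighten it
        prev = s
    return estimate

rng = random.Random(1)
# Tiny response variability: the estimate lands near the true 25 dB threshold.
print(staircase(start_db=30, threshold_db=25, sd_db=0.001, rng=rng))  # 24
```

Repeating such runs with larger `sd_db` and starting levels far from threshold reproduces the paper's qualitative finding: the mean estimate is biased toward the staircase starting level.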

  8. Ipilimumab treatment results in an early decrease in the frequency of circulating granulocytic myeloid-derived suppressor cells as well as their Arginase1 production.

    PubMed

    Pico de Coaña, Yago; Poschke, Isabel; Gentilcore, Giusy; Mao, Yumeng; Nyström, Maria; Hansson, Johan; Masucci, Giuseppe V; Kiessling, Rolf

    2013-09-01

    Blocking the immune checkpoint molecule CTL antigen-4 (CTLA-4) with ipilimumab has proven to induce long-lasting clinical responses in patients with metastatic melanoma. To study the early response that takes place after CTLA-4 blockade, peripheral blood immune monitoring was conducted in five patients undergoing ipilimumab treatment at baseline, three and nine weeks after administration of the first dose. Along with T-cell population analysis, this work was primarily focused on an in-depth study of the myeloid-derived suppressor cell (MDSC) populations. Ipilimumab treatment resulted in lower frequencies of regulatory T cells along with reduced expression levels of PD-1 at the nine-week time point. Three weeks after the initial ipilimumab dose, the frequency of granulocytic MDSCs was significantly reduced and was followed by a reduction in the frequency of arginase1-producing CD3(-) cells, indicating an indirect in trans effect that should be taken into account for future evaluations of ipilimumab mechanisms of action.

  9. A theory of human error

    NASA Technical Reports Server (NTRS)

    Mcruer, D. T.; Clement, W. F.; Allen, R. W.

    1981-01-01

    Human errors tend to be treated in terms of clinical and anecdotal descriptions, from which remedial measures are difficult to derive. Correction of the sources of human error requires an attempt to reconstruct underlying and contributing causes of error from the circumstantial causes cited in official investigative reports. A comprehensive analytical theory of the cause-effect relationships governing propagation of human error is indispensable to a reconstruction of the underlying and contributing causes. A validated analytical theory of the input-output behavior of human operators involving manual control, communication, supervisory, and monitoring tasks which are relevant to aviation, maritime, automotive, and process control operations is highlighted. This theory of behavior, both appropriate and inappropriate, provides an insightful basis for investigating, classifying, and quantifying the needed cause-effect relationships governing propagation of human error.

  10. A novel concept to derive iodine status of human populations from frequency distribution properties of a hair iodine concentration.

    PubMed

    Prejac, J; Višnjević, V; Drmić, S; Skalny, A A; Mimica, N; Momčilović, B

    2014-04-01

    Today, human iodine deficiency is, after iron deficiency, the most common nutritional deficiency in both developed European countries and the underdeveloped third world. A current biological indicator of iodine status is urinary iodine, which reflects only very recent iodine exposure, whereas a long-term indicator of iodine status remains to be identified. We analyzed hair iodine in a prospective, observational, cross-sectional, and exploratory study involving 870 apparently healthy Croatians (270 men and 600 women). Hair iodine was analyzed with inductively coupled plasma mass spectrometry (ICP MS). The population (n = 870) hair iodine (IH) median was 0.499 μg g(-1) (0.482 and 0.508 μg g(-1) for men and women, respectively), suggesting no sex-related difference. We studied the hair iodine uptake by the logistic sigmoid saturation curve of the median derivatives to assess iodine deficiency, adequacy and excess. We estimated overt iodine deficiency to occur when the hair iodine concentration is below 0.15 μg g(-1). Then there was a saturation range interval of about 0.15-2.0 μg g(-1) (r(2) = 0.994). Eventually, the sigmoid curve became saturated at about 2.0 μg g(-1) and upward, suggesting excessive iodine exposure. Hair appears to be a valuable and robust long-term biological indicator tissue for assessing iodine body status. We propose adequate iodine status to correspond with a hair iodine (IH) uptake saturation of 0.565-0.739 μg g(-1) (55-65%).

  11. A concatenated coding scheme for error control

    NASA Technical Reports Server (NTRS)

    Lin, S.

    1985-01-01

    A concatenated coding scheme for error control in data communications was analyzed. The inner code is used for both error correction and detection, whereas the outer code is used only for error detection. A retransmission is requested if either the inner code decoder fails to make a successful decoding or the outer code decoder detects the presence of errors after the inner code decoding. The probability of undetected error of the proposed scheme is derived, and an efficient method for computing this probability is presented. The throughput efficiency of the proposed error control scheme, incorporated with a selective-repeat ARQ retransmission strategy, is analyzed.
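For intuition about what an "undetected error probability" computation looks like, here is a much simpler textbook case than the paper's concatenated scheme: a single even-parity check code of length n on a binary symmetric channel, where an error pattern goes undetected exactly when an even, nonzero number of bits flip:

```python
from math import comb

# Illustrative only -- not the paper's concatenated inner/outer scheme.
# For an (n, n-1) even-parity code on a binary symmetric channel with
# bit-error probability p, undetected errors are the even-weight nonzero
# error patterns.
def p_undetected(n, p):
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(2, n + 1, 2))

print(p_undetected(8, 1e-3))  # ~2.8e-5, dominated by double-bit errors
```

The paper's analysis plays the same game with the far richer weight structure of the concatenated inner/outer code pair.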

  12. Evaluation of GPS Standard Point Positioning with Various Ionospheric Error Mitigation Techniques

    NASA Astrophysics Data System (ADS)

    Panda, Sampad K.; Gedam, Shirish S.

    2016-12-01

    The present paper investigates the accuracy of single- and dual-frequency Global Positioning System (GPS) standard point positioning solutions employing different ionosphere error mitigation techniques. The total electron content (TEC) in the ionosphere is the prominent delay error source in GPS positioning, and its elimination is essential for obtaining a relatively precise positioning solution. The estimated delay error from different ionosphere models and maps, such as the Klobuchar model, global ionosphere models, and vertical TEC maps, is compared with the locally derived ionosphere error following the ion density and the frequency dependence of the delay error. Finally, the positional accuracy of the single- and dual-frequency GPS point positioning solutions is probed through different ionospheric mitigation methods, including the exploitation of models, maps, and ionosphere-free linear combinations and the removal of higher-order ionospheric effects. The results suggest the superiority of global ionosphere maps for the single-frequency solution, whereas for the dual-frequency measurement the ionosphere-free linear combination with prior removal of higher-order ionosphere effects from global ionosphere maps and geomagnetic reference fields resulted in improved positioning quality among the chosen mitigation techniques. Conspicuously, the susceptibility of the height component to different ionospheric mitigation methods is demonstrated in this study, which may assist users in selecting an appropriate technique for precise GPS positioning measurements.
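The dual-frequency ionosphere-free linear combination mentioned above has a standard textbook form: because the first-order ionospheric delay scales as 1/f², a weighted difference of L1 and L2 pseudoranges cancels it. The geometry and delay numbers below are invented for illustration:

```python
# Standard first-order ionosphere-free pseudorange combination (textbook
# GPS relation; the range and delay values are illustrative, not data
# from this study).
F1, F2 = 1575.42e6, 1227.60e6  # GPS L1/L2 carrier frequencies, Hz

def iono_free(p1, p2):
    """First-order ionospheric delay scales as 1/f**2, so this linear
    combination of L1/L2 pseudoranges cancels it."""
    g = (F1 / F2) ** 2
    return (g * p1 - p2) / (g - 1)

# Simulated geometric range with a 5 m first-order iono delay on L1;
# the same slant TEC produces a proportionally larger delay on L2.
rho, d1 = 22_000_000.0, 5.0
d2 = d1 * (F1 / F2) ** 2
p1, p2 = rho + d1, rho + d2
print(iono_free(p1, p2) - rho)  # ~0.0: first-order delay removed
```

Higher-order ionospheric terms do not obey the 1/f² law, which is why the study removes them separately using global ionosphere maps and geomagnetic reference fields.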

  13. Operational Interventions to Maintenance Error

    NASA Technical Reports Server (NTRS)

    Kanki, Barbara G.; Walter, Diane; Dulchinos, Vicki

    1997-01-01

    A significant proportion of aviation accidents and incidents are known to be tied to human error. However, research of flight operational errors has shown that so-called pilot error often involves a variety of human factors issues and not a simple lack of individual technical skills. In aircraft maintenance operations, there is similar concern that maintenance errors which may lead to incidents and accidents are related to a large variety of human factors issues. Although maintenance error data and research are limited, industry initiatives involving human factors training in maintenance have become increasingly accepted as one type of maintenance error intervention. Conscientious efforts have been made in re-inventing the "team" concept for maintenance operations and in tailoring programs to fit the needs of technical operations. Nevertheless, there remains a dual challenge: 1) to develop human factors interventions which are directly supported by reliable human error data, and 2) to integrate human factors concepts into the procedures and practices of everyday technical tasks. In this paper, we describe several varieties of human factors interventions and focus on two specific alternatives which target problems related to procedures and practices; namely, 1) structured on-the-job training and 2) procedure re-design. We hope to demonstrate that the key to leveraging the impact of these solutions comes from focused interventions; that is, interventions which are derived from a clear understanding of specific maintenance errors, their operational context and human factors components.

  14. Constraining frequency-magnitude-area relationships for rainfall and flood discharges using radar-derived precipitation estimates: example applications in the Upper and Lower Colorado River basins, USA

    NASA Astrophysics Data System (ADS)

    Orem, Caitlin A.; Pelletier, Jon D.

    2016-11-01

    Flood-envelope curves (FECs) are useful for constraining the upper limit of possible flood discharges within drainage basins in a particular hydroclimatic region. Their usefulness, however, is limited by their lack of a well-defined recurrence interval. In this study we use radar-derived precipitation estimates to develop an alternative to the FEC method, i.e., the frequency-magnitude-area-curve (FMAC) method that incorporates recurrence intervals. The FMAC method is demonstrated in two well-studied US drainage basins, i.e., the Upper and Lower Colorado River basins (UCRB and LCRB, respectively), using Stage III Next-Generation-Radar (NEXRAD) gridded products and the diffusion-wave flow-routing algorithm. The FMAC method can be applied worldwide using any radar-derived precipitation estimates. In the FMAC method, idealized basins of similar contributing area are grouped together for frequency-magnitude analysis of precipitation intensity. These data are then routed through the idealized drainage basins of different contributing areas, using contributing-area-specific estimates for channel slope and channel width. Our results show that FMACs of precipitation discharge are power-law functions of contributing area with an average exponent of 0.82 ± 0.06 for recurrence intervals from 10 to 500 years. We compare our FMACs to published FECs and find that for wet antecedent-moisture conditions, the 500-year FMAC of flood discharge in the UCRB is on par with the US FEC for contributing areas of ˜ 102 to 103 km2. FMACs of flood discharge for the LCRB exceed the published FEC for the LCRB for contributing areas in the range of ˜ 103 to 104 km2. The FMAC method retains the power of the FEC method for constraining flood hazards in basins that are ungauged or have short flood records, yet it has the added advantage that it includes recurrence-interval information necessary for estimating event probabilities.
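The paper's power-law relation between discharge and contributing area can be recovered from data by least squares in log-log space. The sketch below uses synthetic data built with the paper's average exponent of 0.82; the coefficient and areas are invented stand-ins for the NEXRAD-derived curves:

```python
import math

# Sketch of fitting a frequency-magnitude-area power law Q = c * A**b
# by linear least squares on log10-transformed data. The exponent 0.82
# comes from the paper; the coefficient and areas are synthetic.
def fit_power_law_exponent(areas, discharges):
    xs = [math.log10(a) for a in areas]
    ys = [math.log10(q) for q in discharges]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

areas = [1e2, 1e3, 1e4]                    # contributing areas, km^2
discharges = [30.0 * a ** 0.82 for a in areas]
print(round(fit_power_law_exponent(areas, discharges), 2))  # 0.82
```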

  15. Phase velocity limit of high-frequency photon density waves

    NASA Astrophysics Data System (ADS)

    Haskell, Richard C.; Svaasand, Lars O.; Madsen, Sten; Rojas, Fabio E.; Feng, T.-C.; Tromberg, Bruce J.

    1995-05-01

    In frequency-domain photon migration (FDPM), two factors make high modulation frequencies desirable. First, with frequencies as high as a few GHz, the phase lag versus frequency plot has sufficient curvature to yield both the scattering and absorption coefficients of the tissue under examination. Second, because of increased attenuation, high-frequency photon density waves probe smaller volumes, an asset in small-volume in vivo or in vitro studies. This trend toward higher modulation frequencies has led us to re-examine the derivation of the standard diffusion equation (SDE) from the Boltzmann transport equation. We find that a second-order time-derivative term, ordinarily neglected in the derivation, can be significant above 1 GHz for some biological tissue. The revised diffusion equation, including the second-order time derivative, is often termed the P1 equation. We compare the dispersion relation of the P1 equation with that of the SDE. The P1 phase velocity is slower than that predicted by the SDE; in fact, the SDE phase velocity is unbounded with increasing modulation frequency, while the P1 phase velocity approaches c/sqrt(3). This limit, however, is attained only at modulation frequencies with periods shorter than the mean time between scatterings of a photon, a frequency regime that probes the medium beyond the applicability of diffusion theory. Finally we caution that values for optical properties deduced from FDPM data at high frequencies using the SDE can be in error by 30% or more.

  16. Increased frequencies of CD11b(+) CD33(+) CD14(+) HLA-DR(low) myeloid-derived suppressor cells are an early event in melanoma patients.

    PubMed

    Rudolph, Berenice M; Loquai, Carmen; Gerwe, Alexander; Bacher, Nicole; Steinbrink, Kerstin; Grabbe, Stephan; Tuettenberg, Andrea

    2014-03-01

    Myeloid-derived suppressor cells (MDSC) are a heterogeneous cell population characterized by immunosuppressive activity. Elevated levels of MDSC in peripheral blood are found in inflammatory diseases as well as in malignant tumors where they are supposed to be major contributors to mechanisms of tumor-associated tolerance. We investigated the frequency and function of MDSC in peripheral blood of melanoma patients and observed an accumulation of CD11b(+) CD33(+) CD14(+) HLA-DR(low) MDSC in all stages of disease (I-IV), including early stage I patients. Disease progression and enhanced tumor burden did not result in a further increase in frequencies or change in phenotype of MDSC. By investigation of specific MDSC-associated cytokines in patients' sera, we found an accumulation of IL-8 in all stages of disease. T-cell proliferation assays revealed that MDSC critically contribute to suppressed antigen-specific T-cell reactivity and thus might explain the frequently observed transient effects of immunotherapeutic strategies in melanoma patients.

  17. Cisplatin selectively downregulated the frequency and immunoinhibitory function of myeloid-derived suppressor cells in a murine B16 melanoma model.

    PubMed

    Huang, Xiang; Cui, Shiyun; Shu, Yongqian

    2016-02-01

    The objective of this study was to investigate the immunomodulatory effect of cisplatin (DDP) on the frequency, phenotype and function of myeloid-derived suppressor cells (MDSC) in a murine B16 melanoma model. C57BL/6 mice were inoculated with B16 cells to establish the murine melanoma model and randomly received treatment with different doses of DDP. The percentages and phenotype of MDSC after DDP treatment were detected by flow cytometry. The immunoinhibitory function of MDSC was analyzed by assessing the immune responses of cocultured effector cells through CFSE-labeling assay, detection of interferon-γ production and MTT cytotoxic assay, respectively. Tumor growth and mice survival were monitored to evaluate the antitumor effect of combined DDP and adoptive cytokine-induced killer (CIK) cell therapy. DDP treatment selectively decreased the percentages, modulated the surface molecules and attenuated the immunoinhibitory effects of MDSC in murine melanoma model. The combination of DDP treatment and CIK therapy exerted synergistic antitumor effect against B16 melanoma. DDP treatment selectively downregulated the frequency and immunoinhibitory function of MDSC in B16 melanoma model, indicating the potential mechanisms mediating its immunomodulatory effect.

  18. Pre-Vaccination Frequencies of Th17 Cells Correlate with Vaccine-Induced T-Cell Responses to Survivin-Derived Peptide Epitopes.

    PubMed

    Køllgaard, Tania; Ugurel-Becker, Selma; Idorn, Manja; Andersen, Mads Hald; Becker, Jürgen C; Straten, Per Thor

    2015-01-01

    Various subsets of immune regulatory cells are suggested to influence the outcome of therapeutic antigen-specific anti-tumor vaccinations. We performed an exploratory analysis of a possible correlation of pre-vaccination Th17 cells, MDSCs, and Tregs with both vaccination-induced T-cell responses as well as clinical outcome in metastatic melanoma patients vaccinated with survivin-derived peptides. Notably, we observed dysfunctional Th1 and cytotoxic T cells, i.e. down-regulation of the CD3ζ chain (p=0.001) and an impaired IFNγ-production (p=0.001) in patients compared to healthy donors, suggesting an altered activity of immune regulatory cells. Moreover, the frequencies of Th17 cells (p=0.03) and Tregs (p=0.02) were elevated as compared to healthy donors. IL-17-secreting CD4+ T cells displayed an impact on the immunological and clinical effects of vaccination: Patients characterized by high frequencies of Th17 cells at pre-vaccination were more likely to develop survivin-specific T-cell reactivity post-vaccination (p=0.03). Furthermore, the frequency of Th17 (p=0.09) and Th17/IFNγ+ (p=0.19) cells associated with patient survival after vaccination. In summary, our explorative, hypothesis-generating study demonstrated that immune regulatory cells, in particular Th17 cells, play a relevant role for generation of the vaccine-induced anti-tumor immunity in cancer patients, hence warranting further investigation to test for validity as predictive biomarkers.

  19. Perturbative treatment of scalar-relativistic effects in coupled-cluster calculations of equilibrium geometries and harmonic vibrational frequencies using analytic second-derivative techniques

    NASA Astrophysics Data System (ADS)

    Michauk, Christine; Gauss, Jürgen

    2007-07-01

    An analytic scheme for the computation of scalar-relativistic corrections to nuclear forces is presented. Relativistic corrections are included via a perturbative treatment involving the mass-velocity and the one-electron and two-electron Darwin terms. Such a scheme requires mixed second derivatives of the nonrelativistic energy with respect to the relativistic perturbation and the nuclear coordinates and can be implemented using available second-derivative techniques. Our implementation for Hartree-Fock self-consistent field, second-order Møller-Plesset perturbation theory, as well as the coupled-cluster level is used to investigate the relativistic effects on the geometrical parameters and harmonic vibrational frequencies for a set of molecules containing light elements (HX, X =F, Cl, Br; H2X, X =O, S; HXY, X =O, S and Y =F, Cl, Br). The focus of our calculations is the basis-set dependence of the corresponding relativistic effects, additivity of electron correlation and relativistic effects, and the importance of core correlation on relativistic effects.

  20. Perturbative treatment of scalar-relativistic effects in coupled-cluster calculations of equilibrium geometries and harmonic vibrational frequencies using analytic second-derivative techniques.

    PubMed

    Michauk, Christine; Gauss, Jürgen

    2007-07-28

    An analytic scheme for the computation of scalar-relativistic corrections to nuclear forces is presented. Relativistic corrections are included via a perturbative treatment involving the mass-velocity and the one-electron and two-electron Darwin terms. Such a scheme requires mixed second derivatives of the nonrelativistic energy with respect to the relativistic perturbation and the nuclear coordinates and can be implemented using available second-derivative techniques. Our implementation for Hartree-Fock self-consistent field, second-order Møller-Plesset perturbation theory, as well as the coupled-cluster level is used to investigate the relativistic effects on the geometrical parameters and harmonic vibrational frequencies for a set of molecules containing light elements (HX, X=F, Cl, Br; H2X, X=O, S; HXY, X=O, S and Y=F, Cl, Br). The focus of our calculations is the basis-set dependence of the corresponding relativistic effects, additivity of electron correlation and relativistic effects, and the importance of core correlation on relativistic effects.

  1. Intermediate frequency magnetic field at 23 kHz does not modify gene expression in human fetus-derived astroglia cells.

    PubMed

    Sakurai, Tomonori; Narita, Eijiro; Shinohara, Naoki; Miyakoshi, Junji

    2012-12-01

    The increased use of induction heating (IH) cooktops in Japan and Europe has raised public concern about potential health effects of the magnetic fields generated by IH cooktops. In this study, we evaluated the effects of intermediate frequency (IF) magnetic fields generated by IH cooktops on gene expression profiles. Human fetus-derived astroglia cells were exposed to magnetic fields at 23 kHz and 100 µT(rms) for 2, 4, and 6 h, and gene expression profiles in the cells were assessed using cDNA microarray. There were no detectable effects of the IF magnetic fields at 23 kHz on the gene expression profile, whereas the heat treatment at 43 °C for 2 h, as a positive control, affected gene expression, including inducing heat shock proteins. Principal component analysis and hierarchical analysis showed that the gene profiles of the IF-exposed groups were similar to the sham-exposed group and different from the heat treatment group. These results demonstrated that exposure of human fetus-derived astroglia cells to an IF magnetic field at 23 kHz and 100 µT(rms) for up to 6 h did not induce detectable changes in the gene expression profile.

  2. Study of geopotential error models used in orbit determination error analysis

    NASA Technical Reports Server (NTRS)

    Yee, C.; Kelbel, D.; Lee, T.; Samii, M. V.; Mistretta, G. D.; Hart, R. C.

    1991-01-01

    The uncertainty in the geopotential model is currently one of the major error sources in the orbit determination of low-altitude Earth-orbiting spacecraft. The results of an investigation of different geopotential error models and modeling approaches currently used for operational orbit error analysis support at the Goddard Space Flight Center (GSFC) are presented, with emphasis placed on sequential orbit error analysis using a Kalman filtering algorithm. Several geopotential models, known as the Goddard Earth Models (GEMs), were developed and used at GSFC for orbit determination. The errors in the geopotential models arise from the truncation errors that result from the omission of higher order terms (omission errors) and the errors in the spherical harmonic coefficients themselves (commission errors). At GSFC, two error modeling approaches were operationally used to analyze the effects of geopotential uncertainties on the accuracy of spacecraft orbit determination - the lumped error modeling and uncorrelated error modeling. The lumped error modeling approach computes the orbit determination errors on the basis of either the calibrated standard deviations of a geopotential model's coefficients or the weighted difference between two independently derived geopotential models. The uncorrelated error modeling approach treats the errors in the individual spherical harmonic components as uncorrelated error sources and computes the aggregate effect using a combination of individual coefficient effects. This study assesses the reasonableness of the two error modeling approaches in terms of global error distribution characteristics and orbit error analysis results. Specifically, this study presents the global distribution of geopotential acceleration errors for several gravity error models and assesses the orbit determination errors resulting from these error models for three types of spacecraft - the Gamma Ray Observatory, the Ocean Topography Experiment, and the Cosmic

  3. Single antenna phase errors for NAVSPASUR receivers

    NASA Astrophysics Data System (ADS)

    Andrew, M. D.; Wadiak, E. J.

    1988-11-01

    Interferometrics Inc. has investigated the phase errors on single antenna NAVSPASUR data. We find that the single antenna phase errors are well modeled as a function of signal strength only. The phase errors associated with data from the Kickapoo transmitter are larger than the errors from the low-power transmitters (i.e., Gila River and Jordan Lake). Further, the errors in the phase data associated with the Kickapoo transmitter show significant variability among data taken on different days. We have applied a quadratic polynomial fit to the single antenna phases to derive the Doppler shift and chirp, and we have estimated the formal errors associated with these quantities. These formal errors have been parameterized as a function of peak signal strength and number of data frames. We find that for a typical satellite observation the derived Doppler shift has a formal error of approximately 0.2 Hz and the derived chirp has a formal error of less than or approximately 1 Hz/sec. There is a clear systematic bias in the derived chirp for targets illuminated by the Kickapoo transmitter. Near-field effects probably account for the larger phase errors and the chirp bias of the Kickapoo transmitter.
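The quadratic-phase fit described above can be sketched as follows: phase samples are fitted with a second-order polynomial, the Doppler shift is the linear phase rate over 2π, and the chirp is the Doppler rate. The data are synthetic and noiseless; this is not NAVSPASUR processing code, and the target values are made up:

```python
import numpy as np

# Sketch of deriving Doppler shift and chirp from a quadratic fit to
# phase samples: phase(t) = 2*pi*(f_dop*t + 0.5*chirp*t**2).
# Synthetic noiseless data; f_dop and chirp values are illustrative.
f_dop, chirp = 120.0, -0.8          # Hz, Hz/s
t = np.linspace(0.0, 2.0, 101)      # observation times, s
phase = 2 * np.pi * (f_dop * t + 0.5 * chirp * t**2)

c2, c1, c0 = np.polyfit(t, phase, 2)
doppler_est = c1 / (2 * np.pi)      # linear phase rate -> Doppler, Hz
chirp_est = 2 * c2 / (2 * np.pi)    # quadratic term -> chirp, Hz/s
print(doppler_est, chirp_est)
```

With noisy phase samples, the covariance of the same least-squares fit yields the formal errors on Doppler and chirp that the report parameterizes against signal strength and frame count.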

  4. Assessment of Intensity-Duration-Frequency curves for the Eastern Mediterranean region derived from high-resolution satellite and radar rainfall estimates

    NASA Astrophysics Data System (ADS)

    Marra, Francesco; Morin, Efrat; Peleg, Nadav; Mei, Yiwen; Anagnostou, Emmanouil N.

    2016-04-01

    Intensity-duration-frequency (IDF) curves are used in flood risk management and hydrological design studies to relate the characteristics of a rainfall event to the probability of its occurrence. The usual approach relies on long records of raingauge data providing accurate estimates of the IDF curves for a specific location, but whose representativeness decreases with distance. Radar rainfall estimates have recently been tested over the Eastern Mediterranean area, characterized by steep climatological gradients, showing that radar IDF curves generally lay within the raingauge confidence interval and that radar is able to identify the climatology of extremes. Recent availability of relatively long records (>15 years) of high resolution satellite rainfall information allows to explore the spatial distribution of extreme rainfall with increased detail over wide areas, thus providing new perspectives for the study of precipitation regimes and promising both practical and theoretical implications. This study aims to (i) identify IDF curves obtained from radar rainfall estimates and (ii) identify and assess IDF curves obtained from two high resolution satellite retrieval algorithms (CMORPH and PERSIANN) over the Eastern Mediterranean region. To do so, we derive IDF curves fitting a GEV distribution to the annual maxima series from 23 years (1990-2013) of carefully corrected data from a C-Band radar located in Israel (covering Mediterranean to arid climates) as well as from 15 years (1998-2014) of gauge-adjusted high-resolution CMORPH and 10 years (2003-2013) of gauge-adjusted high-resolution PERSIANN data. We present the obtained IDF curves and we compare the curves obtained from the satellite algorithms to the ones obtained from the radar during overlapping periods; this analysis will draw conclusions on the reliability of the two satellite datasets for deriving rainfall frequency analysis over the region and provide IDF corrections. We then compare the curves obtained

  5. Computing Instantaneous Frequency by normalizing Hilbert Transform

    DOEpatents

    Huang, Norden E.

    2005-05-31

    This invention presents the Normalized Amplitude Hilbert Transform (NAHT) and the Normalized Hilbert Transform (NHT), both of which are new methods for computing Instantaneous Frequency. The method is designed specifically to circumvent the limitations set by the Bedrosian and Nuttall theorems, and to provide a sharp local measure of error when the quadrature and the Hilbert Transform do not agree. The motivation for this method is that straightforward application of the Hilbert Transform, followed by taking the derivative of the phase angle as the Instantaneous Frequency (IF), leads to a common mistake that persists to this date. In order to make the Hilbert Transform method work, the data have to obey certain restrictions.
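The "straightforward application" whose pitfalls motivate the normalized variants can be sketched as follows: form the analytic signal, then take the derivative of the unwrapped phase as the instantaneous frequency. This is the plain (un-normalized) approach, shown here on a pure tone where it behaves well; the signal parameters are illustrative:

```python
import numpy as np

# Plain Hilbert-transform instantaneous frequency: analytic signal built
# via the FFT (zeroing negative frequencies), then the derivative of the
# unwrapped phase. Works for a pure tone; the patent's point is that it
# fails when the data violate the Bedrosian/Nuttall conditions.
fs, f0 = 1000.0, 50.0
t = np.arange(0, 1.0, 1 / fs)
x = np.cos(2 * np.pi * f0 * t)

X = np.fft.fft(x)
n = len(x)
h = np.zeros(n)
h[0] = 1.0
h[1:n // 2] = 2.0        # double positive frequencies
h[n // 2] = 1.0          # keep the Nyquist bin
analytic = np.fft.ifft(X * h)            # x + i * Hilbert(x)

phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(phase) * fs / (2 * np.pi)
print(round(float(np.median(inst_freq)), 1))  # ~50.0 Hz for this tone
```

For amplitude-modulated or wide-band data the phase derivative of this plain analytic signal can go negative or oscillate spuriously, which is exactly the failure mode the NAHT/NHT normalization is designed to expose and avoid.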

  6. Computing Instantaneous Frequency by normalizing Hilbert Transform

    NASA Technical Reports Server (NTRS)

    Huang, Norden E. (Inventor)

    2005-01-01

    This invention presents the Normalized Amplitude Hilbert Transform (NAHT) and the Normalized Hilbert Transform (NHT), both of which are new methods for computing Instantaneous Frequency. The method is designed specifically to circumvent the limitations set by the Bedrosian and Nuttall theorems, and to provide a sharp local measure of error when the quadrature and the Hilbert Transform do not agree. The motivation for this method is that straightforward application of the Hilbert Transform, followed by taking the derivative of the phase angle as the Instantaneous Frequency (IF), leads to a common mistake that persists to this date. In order to make the Hilbert Transform method work, the data have to obey certain restrictions.

  7. Errors in general practice: development of an error classification and pilot study of a method for detecting errors

    PubMed Central

    Rubin, G; George, A; Chinn, D; Richardson, C

    2003-01-01

    Objective: To describe a classification of errors and to assess the feasibility and acceptability of a method for recording staff reported errors in general practice. Design: An iterative process in a pilot practice was used to develop a classification of errors. This was incorporated in an anonymous self-report form which was then used to collect information on errors during June 2002. The acceptability of the reporting process was assessed using a self-completion questionnaire. Setting: UK general practice. Participants: Ten general practices in the North East of England. Main outcome measures: Classification of errors, frequency of errors, error rates per 1000 appointments, acceptability of the process to participants. Results: 101 events were used to create an initial error classification. This contained six categories: prescriptions, communication, appointments, equipment, clinical care, and "other" errors. Subsequently, 940 errors were recorded in a single 2 week period from 10 practices, providing additional information. 42% (397/940) were related to prescriptions, although only 6% (22/397) of these were medication errors. Communication errors accounted for 30% (282/940) of errors and clinical errors 3% (24/940). The overall error rate was 75.6/1000 appointments (95% CI 71 to 80). The method of error reporting was found to be acceptable by 68% (36/53) of respondents with only 8% (4/53) finding the process threatening. Conclusion: We have developed a classification of errors and described a practical and acceptable method for reporting them that can be used as part of the process of risk management. Errors are common and, although all have the potential to lead to an adverse event, most are administrative. PMID:14645760

  8. Errors, error detection, error correction and hippocampal-region damage: data and theories.

    PubMed

    MacKay, Donald G; Johnson, Laura W

    2013-11-01

    This review and perspective article outlines 15 observational constraints on theories of errors, error detection, and error correction, and their relation to hippocampal-region (HR) damage. The core observations come from 10 studies with H.M., an amnesic with cerebellar and HR damage but virtually no neocortical damage. Three studies examined the detection of errors planted in visual scenes (e.g., a bird flying in a fish bowl in a school classroom) and sentences (e.g., I helped themselves to the birthday cake). In all three experiments, H.M. detected reliably fewer errors than carefully matched memory-normal controls. Other studies examined the detection and correction of self-produced errors, with controls for comprehension of the instructions, impaired visual acuity, temporal factors, motoric slowing, forgetting, excessive memory load, lack of motivation, and deficits in visual scanning or attention. In these studies, H.M. corrected reliably fewer errors than memory-normal and cerebellar controls, and his uncorrected errors in speech, object naming, and reading aloud exhibited two consistent features: omission and anomaly. For example, in sentence production tasks, H.M. omitted one or more words in uncorrected encoding errors that rendered his sentences anomalous (incoherent, incomplete, or ungrammatical) reliably more often than controls. Besides explaining these core findings, the theoretical principles discussed here explain H.M.'s retrograde amnesia for once familiar episodic and semantic information; his anterograde amnesia for novel information; his deficits in visual cognition, sentence comprehension, sentence production, sentence reading, and object naming; and effects of aging on his ability to read isolated low frequency words aloud. These theoretical principles also explain a wide range of other data on error detection and correction and generate new predictions for future tests.

  9. Error field penetration and locking to the backward propagating wave

    SciTech Connect

    Finn, John M.; Cole, Andrew J.; Brennan, Dylan P.

    2015-12-30

    In this letter we investigate error field penetration, or locking, behavior in plasmas having stable tearing modes with finite real frequencies ω_r in the plasma frame. In particular, we address the fact that locking can drive a significant equilibrium flow. We show that this occurs at a velocity slightly above v = ω_r/k, corresponding to the interaction with a backward propagating tearing mode in the plasma frame. Results are discussed for a few typical tearing mode regimes, including a new derivation showing that real frequencies occur for viscoresistive tearing modes, in an analysis including the effects of pressure gradient, curvature, and parallel dynamics. The general result of locking to a finite velocity flow is applicable to a wide range of tearing mode regimes, indeed any regime where real frequencies occur.

  10. Error field penetration and locking to the backward propagating wave

    DOE PAGES

    Finn, John M.; Cole, Andrew J.; Brennan, Dylan P.

    2015-12-30

    In this letter we investigate error field penetration, or locking, behavior in plasmas having stable tearing modes with finite real frequencies ω_r in the plasma frame. In particular, we address the fact that locking can drive a significant equilibrium flow. We show that this occurs at a velocity slightly above v = ω_r/k, corresponding to the interaction with a backward propagating tearing mode in the plasma frame. Results are discussed for a few typical tearing mode regimes, including a new derivation showing that real frequencies occur for viscoresistive tearing modes, in an analysis including the effects of pressure gradient, curvature, and parallel dynamics. The general result of locking to a finite velocity flow is applicable to a wide range of tearing mode regimes, indeed any regime where real frequencies occur.

  11. A concatenated coding scheme for error control

    NASA Technical Reports Server (NTRS)

    Kasami, T.; Fujiwara, T.; Lin, S.

    1986-01-01

    In this paper, a concatenated coding scheme for error control in data communications is presented and analyzed. In this scheme, the inner code is used for both error correction and detection; however, the outer code is used only for error detection. A retransmission is requested if either the inner code decoder fails to make a successful decoding or the outer code decoder detects the presence of errors after the inner code decoding. Probability of undetected error (or decoding error) of the proposed scheme is derived. An efficient method for computing this probability is presented. Throughput efficiency of the proposed error control scheme incorporated with a selective-repeat ARQ retransmission strategy is also analyzed. Three specific examples are presented. One of the examples is proposed for error control in the NASA Telecommand System.
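
    The control flow of such a scheme (inner code corrects and detects, outer code only detects, retransmit on detection) can be sketched with stand-in codes; the repetition-3 inner code and CRC-32 outer code below are illustrative choices, not the codes analyzed in the paper:

```python
import random
import zlib

def bsc(bits, p, rng):
    """Binary symmetric channel: flip each bit with probability p."""
    return [b ^ (rng.random() < p) for b in bits]

def inner_encode(bits):      # rate-1/3 repetition; corrects one flip per triple
    return [b for b in bits for _ in range(3)]

def inner_decode(bits):      # majority vote over each triple
    return [int(sum(bits[i:i + 3]) >= 2) for i in range(0, len(bits), 3)]

def outer_encode(payload):   # CRC-32 appended, used for detection only
    crc = zlib.crc32(bytes(payload))
    return payload + [(crc >> i) & 1 for i in range(32)]

def outer_check(bits):
    payload, crc_bits = bits[:-32], bits[-32:]
    crc = zlib.crc32(bytes(payload))
    return all(((crc >> i) & 1) == crc_bits[i] for i in range(32)), payload

rng = random.Random(1)
message = [rng.randint(0, 1) for _ in range(64)]
attempts = 0
while True:                  # resend until the outer code detects no errors
    attempts += 1
    received = inner_decode(bsc(inner_encode(outer_encode(message)), 0.02, rng))
    ok, payload = outer_check(received)
    if ok:
        break
print(attempts, payload == message)
```

    With a crossover probability of 0.02 most frames succeed on the first attempt; residual errors that slip past the inner decoder are caught by the outer CRC and trigger retransmission, which is the behavior whose undetected-error probability the paper derives.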

  12. Motion measurement errors and autofocus in bistatic SAR.

    PubMed

    Rigling, Brian D; Moses, Randolph L

    2006-04-01

    This paper discusses the effect of motion measurement errors (MMEs) on measured bistatic synthetic aperture radar (SAR) phase history data that has been motion compensated to the scene origin. We characterize the effect of low-frequency MMEs on bistatic SAR images, and, based on this characterization, we derive limits on the allowable MMEs to be used as system specifications. Finally, we demonstrate that proper orientation of a bistatic SAR image during the image formation process allows application of monostatic SAR autofocus algorithms in postprocessing to mitigate image defocus.

  13. Error analysis of quartz crystal resonator applications

    SciTech Connect

    Lucklum, R.; Behling, C.; Hauptmann, P.; Cernosek, R.W.; Martin, S.J.

    1996-12-31

    Quartz crystal resonators in chemical sensing applications are usually configured as the frequency determining element of an electrical oscillator. By contrast, the shear modulus determination of a polymer coating needs a complete impedance analysis. The first part of this contribution reports the error made if common approximations are used to relate the frequency shift to the sorbed mass. In the second part the authors discuss different error sources in the procedure to determine shear parameters.

  14. A Review of Errors in the Journal Abstract

    ERIC Educational Resources Information Center

    Lee, Eunpyo; Kim, Eun-Kyung

    2013-01-01

    (percentage) of abstracts that involved errors, the most erroneous part of the abstract, and the types and frequency of errors. Also the purpose expanded to compare the results with those of the previous…

  15. Errors associated with outpatient computerized prescribing systems

    PubMed Central

    Rothschild, Jeffrey M; Salzberg, Claudia; Keohane, Carol A; Zigmont, Katherine; Devita, Jim; Gandhi, Tejal K; Dalal, Anuj K; Bates, David W; Poon, Eric G

    2011-01-01

    Objective To report the frequency, types, and causes of errors associated with outpatient computer-generated prescriptions, and to develop a framework to classify these errors to determine which strategies have greatest potential for preventing them. Materials and methods This is a retrospective cohort study of 3850 computer-generated prescriptions received by a commercial outpatient pharmacy chain across three states over 4 weeks in 2008. A clinician panel reviewed the prescriptions using a previously described method to identify and classify medication errors. Primary outcomes were the incidence of medication errors; potential adverse drug events, defined as errors with potential for harm; and rate of prescribing errors by error type and by prescribing system. Results Of 3850 prescriptions, 452 (11.7%) contained 466 total errors, of which 163 (35.0%) were considered potential adverse drug events. Error rates varied by computerized prescribing system, from 5.1% to 37.5%. The most common error was omitted information (60.7% of all errors). Discussion About one in 10 computer-generated prescriptions included at least one error, of which a third had potential for harm. This is consistent with the literature on manual handwritten prescription error rates. The number, type, and severity of errors varied by computerized prescribing system, suggesting that some systems may be better at preventing errors than others. Conclusions Implementing a computerized prescribing system without comprehensive functionality and processes in place to ensure meaningful system use does not decrease medication errors. The authors offer targeted recommendations on improving computerized prescribing systems to prevent errors. PMID:21715428

  16. Quantitative prediction of radio frequency induced local heating derived from measured magnetic field maps in magnetic resonance imaging: A phantom validation at 7 T

    SciTech Connect

    Zhang, Xiaotong; Liu, Jiaen; Van de Moortele, Pierre-Francois; Schmitter, Sebastian; He, Bin

    2014-12-15

    The Electrical Properties Tomography (EPT) technique utilizes measurable radio frequency (RF) coil induced magnetic fields (B1 fields) in a Magnetic Resonance Imaging (MRI) system to quantitatively reconstruct the local electrical properties (EP) of biological tissues. Information derived from the same data set, e.g., complex numbers of B1 distribution towards electric field calculation, can be used to estimate, on a subject-specific basis, local Specific Absorption Rate (SAR). SAR plays a significant role in RF pulse design for high-field MRI applications, where maximum local tissue heating remains one of the most constraining limits. The purpose of the present work is to investigate the feasibility of such B1-based local SAR estimation, expanding on previously proposed EPT approaches. To this end, B1 calibration was obtained in a gelatin phantom at 7 T with a multi-channel transmit coil, under a particular multi-channel B1-shim setting (B1-shim I). Using this unique set of B1 calibration, local SAR distribution was subsequently predicted for B1-shim I, as well as for another B1-shim setting (B1-shim II), considering a specific set of parameters for a heating MRI protocol consisting of RF pulses played at 1% duty cycle. Local SAR results, which could not be directly measured with MRI, were subsequently converted into temperature changes, which in turn were validated against temperature changes measured by MRI Thermometry based on the proton chemical shift.
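
    For a sense of scale, a local SAR value converts to short-term heating via the adiabatic approximation ΔT ≈ SAR·Δt/c_p, which neglects perfusion and conduction; every number below is an assumed illustration, not a value from the phantom study:

```python
# Adiabatic estimate of RF-induced heating from a local SAR value.
sar = 2.0      # W/kg, assumed local SAR during the RF pulse
duty = 0.01    # 1% duty cycle, as in the kind of protocol described
c_p = 3500.0   # J/(kg*K), assumed tissue/gel specific heat
t = 600.0      # s of scanning, assumed

dT = sar * duty * t / c_p   # temperature rise in kelvin
print(round(dT, 4))         # → 0.0034
```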

  17. Impact of Measurement Error on Synchrophasor Applications

    SciTech Connect

    Liu, Yilu; Gracia, Jose R.; Ewing, Paul D.; Zhao, Jiecheng; Tan, Jin; Wu, Ling; Zhan, Lingwei

    2015-07-01

    Phasor measurement units (PMUs), a type of synchrophasor, are powerful diagnostic tools that can help avert catastrophic failures in the power grid. Because of this, PMU measurement errors are particularly worrisome. This report examines the internal and external factors contributing to PMU phase angle and frequency measurement errors and gives a reasonable explanation for them. It also analyzes the impact of those measurement errors on several synchrophasor applications: event location detection, oscillation detection, islanding detection, and dynamic line rating. The primary finding is that dynamic line rating is more likely to be influenced by measurement error. Other findings include the possibility of reporting nonoscillatory activity as an oscillation as the result of error, failing to detect oscillations submerged by error, and the unlikely impact of error on event location and islanding detection.

  18. GP-B error modeling and analysis

    NASA Technical Reports Server (NTRS)

    1984-01-01

    The analysis and modeling for the Gravity Probe B (GP-B) experiment is reported. The analysis of finite-wordlength induced errors in the Kalman filtering computation was refined. Errors in the crude result were corrected, improved derivation steps were taken, and better justifications are given. The errors associated with the suppression of the 1/f noise were analyzed by rolling the spacecraft and then performing a derolling operation by computation.

  19. M-ary frequency shift keying with differential phase detector in satellite mobile channel with narrowband receiver filter

    NASA Astrophysics Data System (ADS)

    Korn, I.; Namet, M.

    1990-02-01

    An expression is derived for the error probability of M-ary frequency-shift keying with differential phase detector and narrow-band receiver filter in the satellite mobile (Rician) channel, which includes as special cases the Gaussian and land mobile (Rayleigh) channels. The error probability is computed as a function of various system parameters for M = 2, 4, and 8 symbols and the third-order Butterworth receiver filter. The error probability increases with Doppler frequency and with the shift of the channel from Gaussian through Rician to Rayleigh. The optimum normalized bandwidth per bit is in the vicinity of one, and the optimum modulation index for binary symbols is about 0.6. The threshold for quaternary symbols can be optimized to about 0.9 of the modulation index. For Rician and Rayleigh channels with nonzero Doppler frequency, there is an error floor; therefore, diversity or coding may be required to achieve a desired error probability.

  20. Soft-decision decoding techniques for linear block codes and their error performance analysis

    NASA Technical Reports Server (NTRS)

    Lin, Shu

    1996-01-01

    The first paper presents a new minimum-weight trellis-based soft-decision iterative decoding algorithm for binary linear block codes. The second paper derives an upper bound on the probability of block error for multilevel concatenated codes (MLCC). The bound evaluates the difference in performance for different decompositions of some codes. The third paper investigates the bit error probability for maximum likelihood decoding of binary linear codes. The fourth and final paper included in this report concerns itself with the construction of multilevel concatenated block modulation codes, using a multilevel concatenation scheme, for the frequency non-selective Rayleigh fading channel.

  1. Adjoint Error Estimation for Linear Advection

    SciTech Connect

    Connors, J M; Banks, J W; Hittinger, J A; Woodward, C S

    2011-03-30

    An a posteriori error formula is described when a statistical measurement of the solution to a hyperbolic conservation law in 1D is estimated by finite volume approximations. This is accomplished using adjoint error estimation. In contrast to previously studied methods, the adjoint problem is divorced from the finite volume method used to approximate the forward solution variables. An exact error formula and computable error estimate are derived based on an abstractly defined approximation of the adjoint solution. This framework allows the error to be computed to an arbitrary accuracy given a sufficiently well resolved approximation of the adjoint solution. The accuracy of the computable error estimate provably satisfies an a priori error bound for sufficiently smooth solutions of the forward and adjoint problems. The theory does not currently account for discontinuities. Computational examples are provided that show support of the theory for smooth solutions. The application to problems with discontinuities is also investigated computationally.

  2. AUTOMATIC FREQUENCY CONTROL SYSTEM

    DOEpatents

    Hansen, C.F.; Salisbury, J.D.

    1961-01-10

    A control is described for automatically matching the frequency of a resonant cavity to that of a driving oscillator. The driving oscillator is disconnected from the cavity and a secondary oscillator is actuated in which the cavity is the frequency determining element. A low frequency is mixed with the output of the driving oscillator and the resultant lower and upper sidebands are separately derived. The frequencies of the sidebands are compared with the secondary oscillator frequency, deriving a servo control signal to adjust a tuning element in the cavity and match the cavity frequency to that of the driving oscillator. The driving oscillator may then be connected to the cavity.
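
    The mixing step is ordinary amplitude mixing: multiplying the driving-oscillator output by a low frequency produces sidebands at the sum and difference frequencies. A sketch with assumed frequencies shows both sidebands in the spectrum:

```python
import numpy as np

fs = 10_000.0                      # sampling rate, Hz (illustrative)
t = np.arange(0, 1.0, 1.0 / fs)
f_drive, f_low = 1000.0, 60.0      # assumed driving and mixing frequencies

# Product of two cosines = half-amplitude tones at the difference and sum.
mixed = np.cos(2 * np.pi * f_drive * t) * np.cos(2 * np.pi * f_low * t)
spectrum = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(len(t), 1.0 / fs)

peaks = freqs[np.argsort(spectrum)[-2:]]   # two strongest spectral lines
print(sorted(float(p) for p in peaks))     # → [940.0, 1060.0]
```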

  3. Performance Metrics, Error Modeling, and Uncertainty Quantification

    NASA Technical Reports Server (NTRS)

    Tian, Yudong; Nearing, Grey S.; Peters-Lidard, Christa D.; Harrison, Kenneth W.; Tang, Ling

    2016-01-01

    A common set of statistical metrics has been used to summarize the performance of models or measurements: the most widely used ones being bias, mean square error, and linear correlation coefficient. They assume linear, additive, Gaussian errors, and they are interdependent, incomplete, and incapable of directly quantifying uncertainty. The authors demonstrate that these metrics can be directly derived from the parameters of the simple linear error model. Since a correct error model captures the full error information, it is argued that the specification of a parametric error model should be an alternative to the metrics-based approach. The error-modeling methodology is applicable to both linear and nonlinear errors, while the metrics are only meaningful for linear errors. In addition, the error model expresses the error structure more naturally, and directly quantifies uncertainty. This argument is further explained by highlighting the intrinsic connections between the performance metrics, the error model, and the joint distribution between the data and the reference.
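
    The central claim, that bias, mean square error, and correlation all follow from the parameters of a simple linear error model y = a + b·x + ε, can be checked numerically; with the model fitted by least squares the identities hold exactly in-sample (the data below are synthetic):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(10.0, 3.0, 100_000)                 # "truth"
y = 1.5 + 0.8 * x + rng.normal(0, 1.0, x.size)     # simulated measurement

# Fit the linear error model y = a + b*x + eps by least squares.
b = np.cov(x, y, ddof=0)[0, 1] / np.var(x)
a = y.mean() - b * x.mean()
s2 = np.var(y - (a + b * x))                       # residual variance

# Metrics derived purely from the model parameters (a, b, s2) and stats of x.
mu, vx = x.mean(), np.var(x)
bias_model = a + (b - 1) * mu
mse_model = bias_model**2 + (b - 1)**2 * vx + s2
corr_model = b * np.sqrt(vx) / np.sqrt(b**2 * vx + s2)

# They coincide with the directly computed metrics.
print(np.allclose([bias_model, mse_model, corr_model],
                  [(y - x).mean(), ((y - x)**2).mean(),
                   np.corrcoef(x, y)[0, 1]]))      # → True
```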

  4. Low-Speed Investigation of the Effects of Frequency and Amplitude of Oscillation in Sideslip on the Lateral Stability Derivatives of a 60 deg Delta Wing, a 45 deg Sweptback Wing and an Unswept Wing

    NASA Technical Reports Server (NTRS)

    Lichtenstein, Jacob H.; Williams, James L.

    1961-01-01

    A low-speed investigation has been conducted in the Langley stability tunnel to study the effects of frequency and amplitude of sideslipping motion on the lateral stability derivatives of a 60 deg. delta wing, a 45 deg. sweptback wing, and an unswept wing. The investigation was made for values of the reduced-frequency parameter of 0.066 and 0.218 and for a range of amplitudes from +/- 2 to +/- 6 deg. The results of the investigation indicated that increasing the frequency of the oscillation generally produced an appreciable change in magnitude of the lateral oscillatory stability derivatives in the higher angle-of-attack range. This effect was greatest for the 60 deg. delta wing and smallest for the unswept wing and generally resulted in a more linear variation of these derivatives with angle of attack. For the relatively high frequency at which the amplitude was varied, there appeared to be little effect on the measured derivatives as a result of the change in amplitude of the oscillation.

  5. A Frequency and Error Analysis of the Use of Determiners, the Relationships between Noun Phrases, and the Structure of Discourse in English Essays by Native English Writers and Native Chinese, Taiwanese, and Korean Learners of English as a Second Language

    ERIC Educational Resources Information Center

    Gressang, Jane E.

    2010-01-01

    Second language (L2) learners notoriously have trouble using articles in their target languages (e.g., "a", "an", "the" in English). However, researchers disagree about the patterns and causes of these errors. Past studies have found that L2 English learners: (1) Predominantly omit articles (White 2003, Robertson 2000), (2) Overuse "the" (Huebner…

  6. Field error lottery

    SciTech Connect

    Elliott, C.J.; McVey, B.; Quimby, D.C.

    1990-01-01

    The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 μm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.

  7. Prescribing Errors Involving Medication Dosage Forms

    PubMed Central

    Lesar, Timothy S

    2002-01-01

    CONTEXT Prescribing errors involving medication dose formulations have been reported to occur frequently in hospitals. No systematic evaluations of the characteristics of errors related to medication dosage formulation have been performed. OBJECTIVE To quantify the characteristics, frequency, and potential adverse patient effects of prescribing errors involving medication dosage forms . DESIGN Evaluation of all detected medication prescribing errors involving or related to medication dosage forms in a 631-bed tertiary care teaching hospital. MAIN OUTCOME MEASURES Type, frequency, and potential for adverse effects of prescribing errors involving or related to medication dosage forms. RESULTS A total of 1,115 clinically significant prescribing errors involving medication dosage forms were detected during the 60-month study period. The annual number of detected errors increased throughout the study period. Detailed analysis of the 402 errors detected during the last 16 months of the study demonstrated the most common errors to be: failure to specify controlled release formulation (total of 280 cases; 69.7%) both when prescribing using the brand name (148 cases; 36.8%) and when prescribing using the generic name (132 cases; 32.8%); and prescribing controlled delivery formulations to be administered per tube (48 cases; 11.9%). The potential for adverse patient outcome was rated as potentially “fatal or severe” in 3 cases (0.7%), and “serious” in 49 cases (12.2%). Errors most commonly involved cardiovascular agents (208 cases; 51.7%). CONCLUSIONS Hospitalized patients are at risk for adverse outcomes due to prescribing errors related to inappropriate use of medication dosage forms. This information should be considered in the development of strategies to prevent adverse patient outcomes resulting from such errors. PMID:12213138

  8. Inborn errors of metabolism

    MedlinePlus

    Metabolism - inborn errors of ... Bodamer OA. Approach to inborn errors of metabolism. In: Goldman L, Schafer AI, eds. Goldman's Cecil Medicine . 25th ed. Philadelphia, PA: Elsevier Saunders; 2015:chap 205. Rezvani I, Rezvani G. An ...

  9. Design and Quasi-Equilibrium Analysis of a Distributed Frequency-Restoration Controller for Inverter-Based Microgrids

    SciTech Connect

    Ainsworth, Nathan G; Grijalva, Prof. Santiago

    2013-01-01

    This paper discusses a proposed frequency restoration controller which operates as an outer loop to frequency droop for voltage-source inverters. By quasi-equilibrium analysis, we show that the proposed controller is able to provide arbitrarily small steady-state frequency error while maintaining power sharing between inverters without the need for communication or centralized control. We derive the rate of convergence, discuss design considerations (including a fundamental trade-off that must be made in design), present a design procedure to meet a maximum frequency error requirement, and show simulation results verifying our analysis and design method. The proposed controller will allow flexible plug-and-play inverter-based networks to meet a specified maximum frequency error requirement.
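
    The idea can be caricatured in a quasi-equilibrium toy model: droop sets the power sharing, and a distributed integral term slowly shifts each inverter's frequency reference until the steady-state frequency error vanishes. The gains, load, and single-bus network below are assumed for illustration and are not the authors' design:

```python
f0, f_ref = 60.0, 60.0   # nominal and reference frequency, Hz
m1 = m2 = 0.05           # droop gains, Hz per kW (assumed)
P_load = 8.0             # shared load, kW (assumed)
k, dt = 2.0, 0.01        # restoration integral gain and time step (assumed)
u1 = u2 = 0.0            # each inverter's local restoration term

for _ in range(5000):
    # Quasi-static droop equilibrium: the common frequency that balances
    # the load given the current restoration terms (equal droop gains).
    f = f0 + (u1 + u2) / 2 - (m1 * P_load) / 2
    P1 = (f0 - f + u1) / m1
    P2 = (f0 - f + u2) / m2
    # Distributed integral action on the locally measured frequency error.
    u1 += -k * (f - f_ref) * dt
    u2 += -k * (f - f_ref) * dt

print(round(f, 4), round(P1, 3), round(P2, 3))   # → 60.0 4.0 4.0
```

    The frequency settles at the 60 Hz reference while the two inverters keep sharing the load equally, which is the qualitative behavior the quasi-equilibrium analysis establishes.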

  10. Medication errors: definitions and classification.

    PubMed

    Aronson, Jeffrey K

    2009-06-01

    1. To understand medication errors and to identify preventive strategies, we need to classify them and define the terms that describe them. 2. The four main approaches to defining technical terms consider etymology, usage, previous definitions, and the Ramsey-Lewis method (based on an understanding of theory and practice). 3. A medication error is 'a failure in the treatment process that leads to, or has the potential to lead to, harm to the patient'. 4. Prescribing faults, a subset of medication errors, should be distinguished from prescription errors. A prescribing fault is 'a failure in the prescribing [decision-making] process that leads to, or has the potential to lead to, harm to the patient'. The converse of this, 'balanced prescribing' is 'the use of a medicine that is appropriate to the patient's condition and, within the limits created by the uncertainty that attends therapeutic decisions, in a dosage regimen that optimizes the balance of benefit to harm'. This excludes all forms of prescribing faults, such as irrational, inappropriate, and ineffective prescribing, underprescribing and overprescribing. 5. A prescription error is 'a failure in the prescription writing process that results in a wrong instruction about one or more of the normal features of a prescription'. The 'normal features' include the identity of the recipient, the identity of the drug, the formulation, dose, route, timing, frequency, and duration of administration. 6. Medication errors can be classified, invoking psychological theory, as knowledge-based mistakes, rule-based mistakes, action-based slips, and memory-based lapses. This classification informs preventive strategies.

  11. The Nature of Error in Adolescent Student Writing

    ERIC Educational Resources Information Center

    Wilcox, Kristen Campbell; Yagelski, Robert; Yu, Fang

    2014-01-01

    This study examined the nature and frequency of error in high school native English speaker (L1) and English learner (L2) writing. Four main research questions were addressed: Are there significant differences in students' error rates in English language arts (ELA) and social studies? Do the most common errors made by students differ in ELA…

  12. Performance Errors in Weight Training and Their Correction.

    ERIC Educational Resources Information Center

    Downing, John H.; Lander, Jeffrey E.

    2002-01-01

    Addresses general performance errors in weight training, also discussing each category of error separately. The paper focuses on frequency and intensity, incorrect training velocities, full range of motion, and symmetrical training. It also examines specific errors related to the bench press, squat, military press, and bent- over and seated row…

  13. Grammatical Errors Produced by English Majors: The Translation Task

    ERIC Educational Resources Information Center

    Mohaghegh, Hamid; Zarandi, Fatemeh Mahmoudi; Shariati, Mohammad

    2011-01-01

    This study investigated the frequency of the grammatical errors related to the four categories of preposition, relative pronoun, article, and tense using the translation task. In addition, the frequencies of these grammatical errors in different categories and in each category were examined. The quantitative component of the study further looked…

  14. Empathy and error processing.

    PubMed

    Larson, Michael J; Fair, Joseph E; Good, Daniel A; Baldwin, Scott A

    2010-05-01

    Recent research suggests a relationship between empathy and error processing. Error processing is an evaluative control function that can be measured using post-error response time slowing and the error-related negativity (ERN) and post-error positivity (Pe) components of the event-related potential (ERP). Thirty healthy participants completed two measures of empathy, the Interpersonal Reactivity Index (IRI) and the Empathy Quotient (EQ), and a modified Stroop task. Post-error slowing was associated with increased empathic personal distress on the IRI. ERN amplitude was related to overall empathy score on the EQ and the fantasy subscale of the IRI. The Pe and measures of empathy were not related. Results remained consistent when negative affect was controlled via partial correlation, with an additional relationship between ERN amplitude and empathic concern on the IRI. Findings support a connection between empathy and error processing mechanisms.

  15. Error analysis of compensation cutting technique for wavefront error of KH2PO4 crystal.

    PubMed

    Tie, Guipeng; Dai, Yifan; Guan, Chaoliang; Zhu, Dengchao; Song, Bing

    2013-09-20

    Because the wavefront error of a KH(2)PO(4) (KDP) crystal is difficult to control through the face fly cutting process owing to surface shape deformation during vacuum suction, an error compensation technique based on a spiral turning method is put forward. An in situ measurement device is applied to measure the deformed surface shape after vacuum suction, and the initial surface figure error, which is obtained off-line, is added to the in situ surface shape to obtain the final surface figure to be compensated. Then a three-axis servo technique is utilized to cut the final surface shape. In traditional cutting processes, in addition to common error sources such as the error in the straightness of guide ways, spindle rotation error, and error caused by ambient environment variance, three other errors, the in situ measurement error, position deviation error, and servo-following error, are the main sources affecting compensation accuracy. This paper discusses the effect of these three errors on compensation accuracy and provides strategies to improve the final surface quality. Experimental verification was carried out on one piece of KDP crystal with the size of Φ270 mm×11 mm. After one compensation process, the peak-to-valley value of the transmitted wavefront error dropped from 1.9λ (λ=632.8 nm) to approximately 1/3λ, and the mid-spatial-frequency error does not become worse when the frequency of the cutting tool trajectory is controlled by use of a low-pass filter.

  16. Perceptual Bias in Speech Error Data Collection: Insights from Spanish Speech Errors

    ERIC Educational Resources Information Center

    Perez, Elvira; Santiago, Julio; Palma, Alfonso; O'Seaghdha, Padraig G.

    2007-01-01

    This paper studies the reliability and validity of naturalistic speech errors as a tool for language production research. Possible biases when collecting naturalistic speech errors are identified and specific predictions derived. These patterns are then contrasted with published reports from Germanic languages (English, German and Dutch) and one…

  17. Aircraft system modeling error and control error

    NASA Technical Reports Server (NTRS)

    Kulkarni, Nilesh V. (Inventor); Kaneshige, John T. (Inventor); Krishnakumar, Kalmanje S. (Inventor); Burken, John J. (Inventor)

    2012-01-01

    A method for modeling error-driven adaptive control of an aircraft. Normal aircraft plant dynamics is modeled, using an original plant description in which a controller responds to a tracking error e(k) to drive the component to a normal reference value according to an asymptote curve. Where the system senses that (1) at least one aircraft plant component is experiencing an excursion and (2) the return of this component value toward its reference value is not proceeding according to the expected controller characteristics, neural network (NN) modeling of aircraft plant operation may be changed. However, if (1) is satisfied but the error component is returning toward its reference value according to expected controller characteristics, the NN will continue to model operation of the aircraft plant according to an original description.

  18. Diagnostic errors in interactive telepathology.

    PubMed

    Stauch, G; Schweppe, K W; Kayser, K

    2000-01-01

    Telepathology (TP), a service providing pathology at a distance, is now widely used and integrated into the daily workflow of numerous pathologists. Meanwhile, in Germany 15 departments of pathology are using the telepathology technique for frozen section service; however, a commonly recognised quality standard in diagnostic accuracy is still missing. In a first step, the working group Aurich uses a TP system for frozen section service in order to analyse the frequency and sources of errors in TP frozen section diagnoses, evaluating the quality of frozen section slides, the important components of image quality, and their influence on diagnostic accuracy. The authors point to the necessity of an optimal training program for all participants in this service in order to reduce the risk of diagnostic errors. In addition, there is need for optimal cooperation of all partners involved in the TP service.

  19. Errors in finite-difference computations on curvilinear coordinate systems

    NASA Technical Reports Server (NTRS)

    Mastin, C. W.; Thompson, J. F.

    1980-01-01

    Curvilinear coordinate systems were used extensively to solve partial differential equations on arbitrary regions. An analysis of truncation error in the computation of derivatives revealed why numerical results may be erroneous. A more accurate method of computing derivatives is presented.

  20. Image defects from surface and alignment errors in grazing incidence telescopes

    NASA Technical Reports Server (NTRS)

    Saha, Timo T.

    1989-01-01

    The rigid body motions and low frequency surface errors of grazing incidence Wolter telescopes are studied. The analysis is based on surface error descriptors proposed by Paul Glenn. In his analysis, the alignment and surface errors are expressed in terms of Legendre-Fourier polynomials. Individual terms in the expression correspond to rigid body motions (decenter and tilt) and low spatial frequency surface errors of mirrors. With the help of the Legendre-Fourier polynomials and the geometry of grazing incidence telescopes, exact and approximated first order equations are derived in this paper for the components of the ray intercepts at the image plane. These equations are then used to calculate the sensitivities of Wolter type I and II telescopes for the rigid body motions and surface deformations. The rms spot diameters calculated from this theory and OSAC ray tracing code agree very well. This theory also provides a tool to predict how rigid body motions and surface errors of the mirrors compensate each other.

  1. Estimating diversity via frequency ratios.

    PubMed

    Willis, Amy; Bunge, John

    2015-12-01

    We wish to estimate the total number of classes in a population based on sample counts, especially in the presence of high latent diversity. Drawing on probability theory that characterizes distributions on the integers by ratios of consecutive probabilities, we construct a nonlinear regression model for the ratios of consecutive frequency counts. This allows us to predict the unobserved count and hence estimate the total diversity. We believe that this is the first approach to depart from the classical mixed Poisson model in this problem. Our method is geometrically intuitive and yields good fits to data with reasonable standard errors. It is especially well-suited to analyzing high diversity datasets derived from next-generation sequencing in microbial ecology. We demonstrate the method's performance in this context and via simulation, and we present a dataset for which our method outperforms all competitors.
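
The ratio idea above can be sketched numerically. The paper fits a nonlinear regression model to the frequency ratios; the toy below uses a straight-line fit instead, purely to illustrate how extrapolating the ratio sequence to index zero predicts the unobserved count.

```python
import numpy as np

def estimate_unseen(freq_counts):
    """Estimate the unobserved class count f0 from frequency counts.

    freq_counts[j] is the number of classes observed exactly j+1 times.
    Fits a line to the ratios r_j = f_{j+1}/f_j (the paper uses a
    nonlinear model) and extrapolates to j=0 to predict f1/f0.
    """
    f = np.asarray(freq_counts, dtype=float)
    j = np.arange(1, len(f))
    r = f[1:] / f[:-1]                  # consecutive frequency ratios
    b, a = np.polyfit(j, r, 1)          # r_j ≈ a + b*j
    r0 = a                              # extrapolated ratio f1/f0
    return f[0] / r0 if r0 > 0 else float("inf")
```

With geometrically decaying counts such as f = [50, 25, 12.5, 6.25, 3.125], every ratio is 0.5, so the extrapolation predicts f0 = 100 unseen classes.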

  2. Error estimates of numerical solutions for a cyclic plasticity problem

    NASA Astrophysics Data System (ADS)

    Han, W.

    A cyclic plasticity problem is numerically analyzed in [13], where a sub-optimal order error estimate is shown for a spatially discrete scheme. In this note, we prove an optimal order error estimate for the spatially discrete scheme under the same solution regularity condition. We also derive an error estimate for a fully discrete scheme for solving the plasticity problem.

  3. Statistics of the residual refraction errors in laser ranging data

    NASA Technical Reports Server (NTRS)

    Gardner, C. S.

    1977-01-01

    A theoretical model for the range error covariance was derived by assuming that the residual refraction errors are due entirely to errors in the meteorological data which are used to calculate the atmospheric correction. The properties of the covariance function are illustrated by evaluating the theoretical model for the special case of a dense network of weather stations uniformly distributed within a circle.

  4. Error detection method

    DOEpatents

    Olson, Eric J.

    2013-06-11

    An apparatus, program product, and method that run an algorithm on a hardware based processor, generate a hardware error as a result of running the algorithm, generate an algorithm output for the algorithm, compare the algorithm output to another output for the algorithm, and detect the hardware error from the comparison. The algorithm is designed to cause the hardware based processor to heat to a degree that increases the likelihood of hardware errors to manifest, and the hardware error is observable in the algorithm output. As such, electronic components may be sufficiently heated and/or sufficiently stressed to create better conditions for generating hardware errors, and the output of the algorithm may be compared at the end of the run to detect a hardware error that occurred anywhere during the run that may otherwise not be detected by traditional methodologies (e.g., due to cooling, insufficient heat and/or stress, etc.).
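
The run-and-compare structure described in the patent abstract can be sketched as follows. This is only a structural illustration with hypothetical names: the stand-in workload below is a simple hashed linear congruential generator, not the patented heat-inducing algorithm, and real use would require a workload designed to stress the processor.

```python
import hashlib

def stress_workload(seed, iters=10000):
    """Deterministic, arithmetic-heavy workload; on healthy hardware the
    digest depends only on the seed (stand-in for the patented algorithm)."""
    h = hashlib.sha256()
    x = seed
    for _ in range(iters):
        # 64-bit LCG step keeps the ALU busy with a reproducible sequence
        x = (x * 6364136223846793005 + 1442695040888963407) % (1 << 64)
        h.update(x.to_bytes(8, "little"))
    return h.hexdigest()

def hardware_error_detected(seed):
    """Run the same deterministic workload twice and compare outputs; any
    mismatch indicates a transient hardware fault during one of the runs."""
    return stress_workload(seed) != stress_workload(seed)
```

Because the comparison is made on the final output, a fault occurring anywhere during the run corrupts the digest and is caught at the end.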

  5. Software error detection

    NASA Technical Reports Server (NTRS)

    Buechler, W.; Tucker, A. G.

    1981-01-01

    Several methods were employed to detect both the occurrence and source of errors in the operational software of the AN/SLQ-32, a large embedded real-time electronic warfare command and control system for the ROLM 1606 computer. The ROLM computer provides information about invalid addressing, improper use of privileged instructions, stack overflows, and unimplemented instructions. Additionally, software techniques were developed to detect invalid jumps, indices out of range, infinite loops, stack underflows, and field size errors. Finally, data are saved to provide information about the status of the system when an error is detected. This information includes I/O buffers, interrupt counts, stack contents, and recently passed locations. The various errors detected, techniques to assist in debugging problems, and segment simulation on a nontarget computer are discussed. These error detection techniques were a major factor in the success of finding the primary cause of error in 98% of over 500 system dumps.

  6. An analytic technique for statistically modeling random atomic clock errors in estimation

    NASA Technical Reports Server (NTRS)

    Fell, P. J.

    1981-01-01

    Minimum variance estimation requires that the statistics of random observation errors be modeled properly. If measurements are derived through the use of atomic frequency standards, then one source of error affecting the observable is random fluctuation in frequency. This is the case, for example, with range and integrated Doppler measurements from satellites of the Global Positioning System used for baseline determination in geodynamic applications. An analytic method is presented which approximates the statistics of this random process. The procedure starts with a model of the Allan variance for a particular oscillator and develops the statistics of range and integrated Doppler measurements. A series of five first-order Markov processes is used to approximate the power spectral density obtained from the Allan variance.
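
The sum-of-Markov-processes construction can be sketched as below. In the paper the five time constants and amplitudes are fitted to a particular oscillator's Allan variance; the values used here are illustrative placeholders only.

```python
import numpy as np

def gauss_markov(n, dt, tau, sigma, rng):
    """Discrete first-order Gauss-Markov process with correlation time tau
    and steady-state standard deviation sigma."""
    phi = np.exp(-dt / tau)
    q = sigma * np.sqrt(1.0 - phi**2)      # keeps the variance stationary
    x = np.empty(n)
    x[0] = sigma * rng.standard_normal()
    for k in range(1, n):
        x[k] = phi * x[k - 1] + q * rng.standard_normal()
    return x

def clock_error(n, dt, taus, sigmas, seed=0):
    """Sum of first-order Markov processes approximating an oscillator's
    random frequency error (the paper fits five such processes to the
    Allan variance; taus/sigmas here are placeholders)."""
    rng = np.random.default_rng(seed)
    return sum(gauss_markov(n, dt, t, s, rng) for t, s in zip(taus, sigmas))
```

Each component contributes a Lorentzian term to the power spectral density, so a small set of well-chosen time constants can approximate the spectrum implied by the Allan variance over several decades of averaging time.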

  7. A Review of the Literature on Computational Errors With Whole Numbers. Mathematics Education Diagnostic and Instructional Centre (MEDIC).

    ERIC Educational Resources Information Center

    Burrows, J. K.

    Research on error patterns associated with whole number computation is reviewed. Details of the results of some of the individual studies cited are given in the appendices. In Appendix A, 33 addition errors, 27 subtraction errors, 41 multiplication errors, and 41 division errors are identified, and the frequency of these errors made by 352…

  8. Model Error Budgets

    NASA Technical Reports Server (NTRS)

    Briggs, Hugh C.

    2008-01-01

    An error budget is a commonly used tool in the design of complex aerospace systems. It represents system performance requirements in terms of allowable errors and flows these down through a hierarchical structure to lower assemblies and components. The requirements may simply be 'allocated' based upon heuristics or experience, or they may be designed through use of physics-based models. This paper presents a basis for developing an error budget for models of the system, as opposed to the system itself. The need for model error budgets arises when system models are a principal design agent, as is increasingly common for poorly testable high performance space systems.

  9. Error and adjustment of reflecting prisms

    NASA Astrophysics Data System (ADS)

    Mao, Wenwei

    1997-12-01

    A manufacturing error in the orientation of the working planes of a reflecting prism, such as an angle error or an edge error, will cause the optical axis to deviate and the image to lean; an adjustment (position) error of a reflecting prism has the same effect. A universal method for calculating the optical axis deviation and the image lean caused by the manufacturing error of a reflecting prism is presented. It is suited to all types of reflecting prisms. A means of offsetting the position error against the manufacturing error of a reflecting prism and the resulting changes of image orientation is discussed. To make the calculation feasible, a surface named the 'separating surface' is introduced just in front of the real exit face of a real prism. It is the image of the entrance face formed by all reflecting surfaces of the real prism, and it can be used to separate the image orientation change caused by the error of the prism's reflecting surfaces from the image orientation change caused by the error of the prism's refracting surface. Based on ray tracing, a set of simple and explicit formulas for the optical axis deviation and the image lean of a general optical wedge is derived.

  10. Characterisation of residual ionospheric errors in bending angles using GNSS RO end-to-end simulations

    NASA Astrophysics Data System (ADS)

    Liu, C. L.; Kirchengast, G.; Zhang, K. F.; Norman, R.; Li, Y.; Zhang, S. C.; Carter, B.; Fritzer, J.; Schwaerz, M.; Choy, S. L.; Wu, S. Q.; Tan, Z. X.

    2013-09-01

    Global Navigation Satellite System (GNSS) radio occultation (RO) is an innovative meteorological remote sensing technique for measuring atmospheric parameters such as refractivity, temperature, water vapour and pressure for the improvement of numerical weather prediction (NWP) and global climate monitoring (GCM). GNSS RO has many unique characteristics including global coverage, long-term stability of observations, as well as high accuracy and high vertical resolution of the derived atmospheric profiles. One of the main error sources in GNSS RO observations that significantly affect the accuracy of the derived atmospheric parameters in the stratosphere is the ionospheric error. In order to mitigate the effect of this error, the linear ionospheric correction approach for dual-frequency GNSS RO observations is commonly used. However, the residual ionospheric errors (RIEs) can be still significant, especially when large ionospheric disturbances occur and prevail such as during the periods of active space weather. In this study, the RIEs were investigated under different local time, propagation direction and solar activity conditions and their effects on RO bending angles are characterised using end-to-end simulations. A three-step simulation study was designed to investigate the characteristics of the RIEs through comparing the bending angles with and without the effects of the RIEs. This research forms an important step forward in improving the accuracy of the atmospheric profiles derived from the GNSS RO technique.

  11. Single Antenna Phase Errors for NAVSPASUR Receivers

    DTIC Science & Technology

    1988-11-30

    with data from the Kickapoo transmitter are larger than the errors from the low-power transmitters (i.e., Gila River and Jordan Lake). Further, the ... errors in the phase data associated with the Kickapoo transmitter show significant variability among data taken on different days. We have applied a ... a clear systematic bias in the derived chirp for targets illuminated by the Kickapoo transmitter. Near-field effects probably account for the larger

  12. Auditory nerve spatial encoding of high-frequency pure tones: population response profiles derived from d' measure associated with nearby places along the cochlea.

    PubMed

    Kim, D O; Parham, K

    1991-03-01

    We examined a measure of discriminability in auditory nerve (AN) population responses that may underlie behavioral frequency discrimination of high-frequency pure tones in the cat. Population responses of high- (≥15 spikes/s) and low- (<15 spikes/s) spontaneous rate (SR) AN fibers in unanesthetized decerebrate cats to 5 kHz pure tones were measured in the form of mean, μ, and standard deviation, σ, of spike counts for 0.2 s tone bursts. The AN responses were analyzed in terms of a d'e(x, Δx) associated with adjoining cochlear places as defined in the manner of signal detection theory. We also examined Σd'e(x, Δx), a spatial summation of the discriminability measure. The major findings are: (1) the d'e(x, Δx) function conveys information about 5 kHz pure tone frequency over a region of ±0.5 to 1.0 octave, or ±1.67 to 3.33 mm, around the characteristic place (CP), with the region being narrower at lower stimulus levels; (2) at 30 dB SPL, the integrated d'e(x, Δx) discriminability scores are similar for the apical and basal regions surrounding the CP whereas, at 70 dB SPL, the scores are higher for the apical region than for the basal region; and (3) at 50 and 70 dB SPL, the integrated d'e(x, Δx) discriminability scores of low-SR fibers were higher than those of high-SR fibers although, at 30 dB SPL, the latter were higher than the former. By using the cat cochlear frequency-place relationship and the inner hair cell (IHC) spacing, we interpret that the cat's frequency difference limen, Δf/f = 0.0088 at 4 kHz [Elliott et al., 1960, J. Acoust. Soc. Am. 32, 380-384], corresponds to a shift of cochlear excitation profile by 4.5 IHCs. From the present analysis of AN responses, we conclude that, for high-frequency pure tones, the d'e(x, Δx) code, an example of a rate-place code, of frequency provides sufficient information to support the cat's behavioral frequency discrimination.

  13. A spectral filter for ESMR's sidelobe errors

    NASA Technical Reports Server (NTRS)

    Chesters, D.

    1979-01-01

    Fourier analysis was used to remove periodic errors from a series of NIMBUS-5 electronically scanned microwave radiometer brightness temperatures. The observations were all taken from the midnight orbits over fixed sites in the Australian grasslands. The angular dependence of the data indicates calibration errors consisted of broad sidelobes and some miscalibration as a function of beam position. Even though an angular recalibration curve cannot be derived from the available data, the systematic errors can be removed with a spectral filter. The 7-day cycle in the drift of the orbit of NIMBUS-5, coupled to the look-angle biases, produces an error pattern with peaks in its power spectrum at the weekly harmonics. About ±4 K of error is removed by simply blocking the variations near two and three cycles per week.
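
The blocking-filter idea can be sketched as below: a minimal illustration assuming an evenly sampled brightness-temperature series, zeroing the spectral bins at the stated weekly harmonics (the paper blocks variations near two and three cycles per week).

```python
import numpy as np

def block_weekly_harmonics(series, samples_per_week, harmonics=(2, 3)):
    """Remove periodic error by zeroing the spectral bins at the given
    harmonics, expressed in cycles per week."""
    n = len(series)
    spec = np.fft.rfft(series)
    cpw = np.fft.rfftfreq(n, d=1.0 / samples_per_week)  # cycles per week
    for h in harmonics:
        spec[np.isclose(cpw, h)] = 0.0
    return np.fft.irfft(spec, n=n)
```

For daily samples over a whole number of weeks, the weekly harmonics land exactly on FFT bins, so the periodic error is removed while the rest of the signal passes through untouched.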

  14. Relationships between GPS-signal propagation errors and EISCAT observations

    NASA Astrophysics Data System (ADS)

    Jakowski, N.; Sardon, E.; Engler, E.; Jungstand, A.; Klähn, D.

    1996-12-01

    When travelling through the ionosphere, the signals of space-based radio navigation systems such as the Global Positioning System (GPS) are subject to modifications in amplitude, phase and polarization. In particular, phase changes due to refraction lead to propagation errors of up to 50 m for single-frequency GPS users. If both the L1 and the L2 frequencies transmitted by the GPS satellites are measured, first-order range error contributions of the ionosphere can be determined and removed by difference methods. The ionospheric contribution is proportional to the total electron content (TEC) along the ray path between satellite and receiver. Using about ten European GPS receiving stations of the International GPS Service for Geodynamics (IGS), the TEC over Europe is estimated within the geographic ranges −20° ≤ λ ≤ 40°E and 32.5° ≤ φ ≤ 70°N in longitude and latitude, respectively. The derived TEC maps over Europe contribute to the study of horizontal coupling and transport processes during significant ionospheric events. Due to their comprehensive information about the high-latitude ionosphere, EISCAT observations may help to study the influence of ionospheric phenomena upon propagation errors in GPS navigation systems. Since there are still some accuracy-limiting problems to be solved in TEC determination using GPS, comparison of TEC data with vertical electron density profiles derived from EISCAT observations is valuable to enhance the accuracy of propagation-error estimations. This is evident both for absolute TEC calibration as well as for the conversion of ray-path-related observations to vertical TEC. The combination of EISCAT data and GPS-derived TEC data enables a better understanding of large-scale ionospheric processes. Acknowledgements. This work has been supported by the UK Particle-Physics and Astronomy Research Council. The assistance of the director and staff of the EISCAT Scientific Association, the staff of the Norsk Polarinstitutt
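
The first-order difference method mentioned above follows from the standard dispersion relation: the ionospheric group delay on a carrier of frequency f is 40.3·TEC/f² metres, so two pseudoranges on L1 and L2 determine both the ionosphere-free range and the slant TEC. A minimal sketch:

```python
# GPS L1/L2 carrier frequencies in Hz
F1, F2 = 1575.42e6, 1227.60e6

def iono_free_range(p1, p2, f1=F1, f2=F2):
    """First-order ionosphere-free combination of dual-frequency
    pseudoranges p1, p2 (metres)."""
    return (f1**2 * p1 - f2**2 * p2) / (f1**2 - f2**2)

def tec_from_ranges(p1, p2, f1=F1, f2=F2):
    """Slant TEC (electrons/m^2) implied by the differential group delay,
    using delay = 40.3 * TEC / f^2 on each carrier."""
    return (p2 - p1) * f1**2 * f2**2 / (40.3 * (f1**2 - f2**2))
```

Given synthetic pseudoranges built from a known geometric range and TEC, both quantities are recovered exactly to floating-point precision, which is what "first-order removal" means here: higher-order ionospheric terms remain as residual error.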

  15. Fisher classifier and its probability of error estimation

    NASA Technical Reports Server (NTRS)

    Chittineni, C. B.

    1979-01-01

    Computationally efficient expressions are derived for estimating the probability of error using the leave-one-out method. The optimal threshold for the classification of patterns projected onto Fisher's direction is derived. A simple generalization of the Fisher classifier to multiple classes is presented. Computational expressions are developed for estimating the probability of error of the multiclass Fisher classifier.
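
What the leave-one-out method estimates can be sketched as below. Note the paper's contribution is precisely the closed-form shortcut that avoids refitting per sample, and it derives an optimal threshold; this brute-force sketch refits the classifier for each held-out sample and uses a simple midpoint threshold instead.

```python
import numpy as np

def fisher_direction(X0, X1):
    """Fisher discriminant direction w = Sw^{-1} (m1 - m0)."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # Pooled within-class scatter matrix
    Sw = np.cov(X0.T, bias=True) * len(X0) + np.cov(X1.T, bias=True) * len(X1)
    return np.linalg.solve(Sw, m1 - m0)

def loo_error(X0, X1):
    """Leave-one-out error estimate for the two-class Fisher classifier,
    refit from scratch for every held-out sample."""
    data = [(x, 0) for x in X0] + [(x, 1) for x in X1]
    errors = 0
    for i, (x, y) in enumerate(data):
        rest0 = np.array([v for j, (v, c) in enumerate(data) if j != i and c == 0])
        rest1 = np.array([v for j, (v, c) in enumerate(data) if j != i and c == 1])
        w = fisher_direction(rest0, rest1)
        # Midpoint threshold on the projected class means
        thr = 0.5 * (rest0.mean(axis=0) + rest1.mean(axis=0)) @ w
        errors += int((x @ w > thr) != y)
    return errors / len(data)
```

On well-separated classes the leave-one-out error is near zero; the value of the paper's closed-form expressions is that they produce the same estimate without the n refits done here.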

  16. Approximate Minimum Bit Error Rate Equalization for Fading Channels

    NASA Astrophysics Data System (ADS)

    Kovacs, Lorant; Levendovszky, Janos; Olah, Andras; Treplan, Gergely

    2010-12-01

    A novel channel equalizer algorithm is introduced for wireless communication systems to combat channel distortions resulting from multipath propagation. The novel algorithm is based on minimizing the bit error rate (BER) using a fast approximation of its gradient with respect to the equalizer coefficients. This approximation is obtained by estimating the exponential summation in the gradient with only some carefully chosen dominant terms. The paper derives an algorithm to calculate these dominant terms in real-time. Summing only these dominant terms provides a highly accurate approximation of the true gradient. Combined with a fast adaptive channel state estimator, the new equalization algorithm yields better performance than the traditional zero forcing (ZF) or minimum mean square error (MMSE) equalizers. The performance of the new method is tested by simulations performed on standard wireless channels. From the performance analysis one can infer that the new equalizer is capable of efficient channel equalization and maintaining a relatively low bit error probability in the case of channels corrupted by frequency selectivity. Hence, the new algorithm can contribute to ensuring QoS communication over highly distorted channels.

  17. Twenty Questions about Student Errors.

    ERIC Educational Resources Information Center

    Fisher, Kathleen M.; Lipson, Joseph Isaac

    1986-01-01

    Discusses the value of studying errors made by students in the process of learning science. Addresses 20 research questions dealing with student learning errors. Attempts to characterize errors made by students and clarify some terms used in error research. (TW)

  18. Action errors, error management, and learning in organizations.

    PubMed

    Frese, Michael; Keith, Nina

    2015-01-03

    Every organization is confronted with errors. Most errors are corrected easily, but some may lead to negative consequences. Organizations often focus on error prevention as a single strategy for dealing with errors. Our review suggests that error prevention needs to be supplemented by error management--an approach directed at effectively dealing with errors after they have occurred, with the goal of minimizing negative and maximizing positive error consequences (examples of the latter are learning and innovations). After defining errors and related concepts, we review research on error-related processes affected by error management (error detection, damage control). Empirical evidence on positive effects of error management in individuals and organizations is then discussed, along with emotional, motivational, cognitive, and behavioral pathways of these effects. Learning from errors is central, but like other positive consequences, learning occurs under certain circumstances--one being the development of a mind-set of acceptance of human error.

  19. Refractive error blindness.

    PubMed Central

    Dandona, R.; Dandona, L.

    2001-01-01

    Recent data suggest that a large number of people are blind in different parts of the world due to high refractive error because they are not using appropriate refractive correction. Refractive error as a cause of blindness has been recognized only recently with the increasing use of presenting visual acuity for defining blindness. In addition to blindness due to naturally occurring high refractive error, inadequate refractive correction of aphakia after cataract surgery is also a significant cause of blindness in developing countries. Blindness due to refractive error in any population suggests that eye care services in general in that population are inadequate since treatment of refractive error is perhaps the simplest and most effective form of eye care. Strategies such as vision screening programmes need to be implemented on a large scale to detect individuals suffering from refractive error blindness. Sufficient numbers of personnel to perform reasonable quality refraction need to be trained in developing countries. Also adequate infrastructure has to be developed in underserved areas of the world to facilitate the logistics of providing affordable reasonable-quality spectacles to individuals suffering from refractive error blindness. Long-term success in reducing refractive error blindness worldwide will require attention to these issues within the context of comprehensive approaches to reduce all causes of avoidable blindness. PMID:11285669

  20. Teacher-Induced Errors.

    ERIC Educational Resources Information Center

    Richmond, Kent C.

    Students of English as a second language (ESL) often come to the classroom with little or no experience in writing in any language and with inaccurate assumptions about writing. Rather than correct these assumptions, teachers often seem to unwittingly reinforce them, actually inducing errors into their students' work. Teacher-induced errors occur…

  1. Two-fold transmission reach enhancement enabled by transmitter-side digital backpropagation and optical frequency comb-derived information carriers.

    PubMed

    Temprana, E; Myslivets, E; Liu, L; Ataie, V; Wiberg, A; Kuo, B P P; Alic, N; Radic, S

    2015-08-10

    We demonstrate a two-fold reach extension of a 16 GBaud 16-quadrature amplitude modulation (16-QAM) wavelength division multiplexed (WDM) system over an erbium-doped fiber amplifier (EDFA)-only amplified, standard single-mode fiber-based link. The result is enabled by transmitter-side digital backpropagation and frequency-referenced carriers drawn from a parametric comb.

  2. Supernova frequency estimates

    SciTech Connect

    Tsvetkov, D.Y.

    1983-01-01

    Estimates of the frequency of type I and II supernovae occurring in galaxies of different types are derived from observational material acquired by the supernova patrol of the Shternberg Astronomical Institute.

  3. Low-dimensional Representation of Error Covariance

    NASA Technical Reports Server (NTRS)

    Tippett, Michael K.; Cohn, Stephen E.; Todling, Ricardo; Marchesin, Dan

    2000-01-01

    Ensemble and reduced-rank approaches to prediction and assimilation rely on low-dimensional approximations of the estimation error covariances. Here stability properties of the forecast/analysis cycle for linear, time-independent systems are used to identify factors that cause the steady-state analysis error covariance to admit a low-dimensional representation. A useful measure of forecast/analysis cycle stability is the bound matrix, a function of the dynamics, observation operator and assimilation method. Upper and lower estimates for the steady-state analysis error covariance matrix eigenvalues are derived from the bound matrix. The estimates generalize to time-dependent systems. If much of the steady-state analysis error variance is due to a few dominant modes, the leading eigenvectors of the bound matrix approximate those of the steady-state analysis error covariance matrix. The analytical results are illustrated in two numerical examples where the Kalman filter is carried to steady state. The first example uses the dynamics of a generalized advection equation exhibiting nonmodal transient growth. Failure to observe growing modes leads to increased steady-state analysis error variances. Leading eigenvectors of the steady-state analysis error covariance matrix are well approximated by leading eigenvectors of the bound matrix. The second example uses the dynamics of a damped baroclinic wave model. The leading eigenvectors of a lowest-order approximation of the bound matrix are shown to approximate well the leading eigenvectors of the steady-state analysis error covariance matrix.
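
The low-dimensional representation the abstract analyzes amounts to keeping the leading eigenpairs of the error covariance matrix. A minimal sketch of that truncation (illustrative only; the paper's contribution is characterizing when the truncation is accurate via the bound matrix):

```python
import numpy as np

def low_rank_approx(P, k):
    """Approximate a symmetric covariance matrix by its k leading
    eigenpairs: P ≈ V_k Λ_k V_k^T."""
    vals, vecs = np.linalg.eigh(P)            # eigenvalues in ascending order
    lead = np.argsort(vals)[::-1][:k]         # indices of the k largest
    return (vecs[:, lead] * vals[lead]) @ vecs[:, lead].T
```

When most of the variance sits in a few dominant modes, the truncation error (the root-sum-square of the discarded eigenvalues, in the Frobenius norm) is small, which is exactly the situation ensemble and reduced-rank methods rely on.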

  4. Reduction of Maintenance Error Through Focused Interventions

    NASA Technical Reports Server (NTRS)

    Kanki, Barbara G.; Walter, Diane; Rosekind, Mark R. (Technical Monitor)

    1997-01-01

    It is well known that a significant proportion of aviation accidents and incidents are tied to human error. In flight operations, research of operational errors has shown that so-called "pilot error" often involves a variety of human factors issues and not a simple lack of individual technical skills. In aircraft maintenance operations, there is similar concern that maintenance errors which may lead to incidents and accidents are related to a large variety of human factors issues. Although maintenance error data and research are limited, industry initiatives involving human factors training in maintenance have become increasingly accepted as one type of maintenance error intervention. Conscientious efforts have been made in re-inventing the "team" concept for maintenance operations and in tailoring programs to fit the needs of technical operations. Nevertheless, there remains a dual challenge: to develop human factors interventions which are directly supported by reliable human error data, and to integrate human factors concepts into the procedures and practices of everyday technical tasks. In this paper, we describe several varieties of human factors interventions and focus on two specific alternatives which target problems related to procedures and practices; namely, 1) structured on-the-job training and 2) procedure re-design. We hope to demonstrate that the key to leveraging the impact of these solutions comes from focused interventions; that is, interventions which are derived from a clear understanding of specific maintenance errors, their operational context and human factors components.

  5. Uncorrected refractive errors.

    PubMed

    Naidoo, Kovin S; Jaggernath, Jyoti

    2012-01-01

    Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error, of whom 670 million are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology referred to as the Refractive Error Study in Children (RESC) were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low- and middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting the educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.

  6. Association Between Grading of Oral Submucous Fibrosis With Frequency and Consumption of Areca Nut and Its Derivatives in a Wide Age Group: A Multi-centric Cross Sectional Study From Karachi, Pakistan

    PubMed Central

    Hosein, Mervyn; Mohiuddin, Sidra; Fatima, Nazish

    2015-01-01

    Background: Oral submucous fibrosis (OSMF) is a chronic, premalignant condition of the oral mucosa and one of the commonest potentially malignant disorders amongst the Asian population. The objective of this study was to investigate the association of etiologic factors with: age, frequency, duration of consumption of areca nut and its derivatives, and the severity of clinical manifestations. Methods: A cross-sectional, multi-centric study was conducted over 8 years on clinically diagnosed OSMF cases (n = 765) from both public and private tertiary care centers. Sample size was determined by the World Health Organization sample size calculator. Consumption of areca nut in different forms, frequency of daily usage, years of chewing, degree of mouth opening and duration of the condition were recorded. Level of significance was kept at P ≤ 0.05. Results: A total of 765 patients of OSMF were examined, of whom 396 (51.8%) were male and 369 (48.2%) female with a mean age of 29.17 years. Mild OSMF was seen in 61 cases (8.0%), moderate OSMF in 353 (46.1%) and severe OSMF in 417 (54.5%) subjects. Areca nut and other derivatives were the most frequently consumed, and their use showed a significant association with the severity of OSMF (P ≤ 0.0001). Age of the sample and duration of chewing years were also significant (P = 0.012). Conclusions: The relative risk of OSMF increased with duration and frequency of areca nut consumption, especially with an early age of onset. PMID:26473161

  7. Circulating brain derived neurotrophic factor (BDNF) and frequency of BDNF positive T cells in peripheral blood in human ischemic stroke: Effect on outcome.

    PubMed

    Chan, Adeline; Yan, Jun; Csurhes, Peter; Greer, Judith; McCombe, Pamela

    2015-09-15

    The aim of this study was to measure the levels of circulating BDNF and the frequency of BDNF-producing T cells after acute ischaemic stroke. Serum BDNF levels were measured by ELISA. Flow cytometry was used to enumerate peripheral blood leukocytes that were labelled with antibodies against markers of T cells, T regulatory cells (Tregs), and intracellular BDNF. There was a slight increase in serum BDNF levels after stroke. There was no overall difference between stroke patients and controls in the frequency of CD4(+) and CD8(+) BDNF(+) cells, although a subgroup of stroke patients showed high frequencies of these cells. However, there was an increase in the percentage of BDNF(+) Treg cells in the CD4(+) population in stroke patients compared to controls. Patients with high percentages of CD4(+) BDNF(+) Treg cells had a better outcome at 6 months than those with lower levels. These groups did not differ in age, gender or initial stroke severity. Enhancement of BDNF production after stroke could be a useful means of improving neuroprotection and recovery after stroke.

  8. Verb-Form Errors in EAP Writing

    ERIC Educational Resources Information Center

    Wee, Roselind; Sim, Jacqueline; Jusoff, Kamaruzaman

    2010-01-01

    This study was conducted to identify and describe the written verb-form errors found in the EAP writing of 39 second year learners pursuing a three-year Diploma Programme from a public university in Malaysia. Data for this study, which were collected from a written 350-word discursive essay, were analyzed to determine the types and frequency of…

  9. Error Detection Processes during Observational Learning

    ERIC Educational Resources Information Center

    Badets, Arnaud; Blandin, Yannick; Wright, David L.; Shea, Charles H.

    2006-01-01

    The purpose of this experiment was to determine whether a faded knowledge of results (KR) frequency during observation of a model's performance enhanced error detection capabilities. During the observation phase, participants observed a model performing a timing task and received KR about the model's performance on each trial or on one of two…

  10. The relationship between rate of venous sampling and visible frequency of hormone pulses.

    PubMed

    De Nicolao, G; Guardabasso, V; Rocchetti, M

    1990-11-01

    In this paper, a stochastic model of episodic hormone secretion is used to quantify the effect of the sampling rate on the frequency of pulses that can be detected by objective computer methods in time series of plasma hormone concentrations. Occurrence times of secretion pulses are modeled as recurrent events, with interpulse intervals described by Erlang distributions. In this way, a variety of secretion patterns, ranging from Poisson events to periodic pulses, can be studied. The notion of visible and invisible pulses is introduced, and the relationship between the true pulse frequency and the mean visible pulse frequency is analytically derived. It is shown that a given visible pulse frequency can correspond to two distinct true frequencies. In order to compensate for the 'invisibility error', an algorithm based on the analysis of the original series and its undersampled subsets is proposed, and the derived computer program is tested on simulated and clinical data.
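
    The pulse-visibility idea above can be illustrated with a small Monte Carlo sketch (hypothetical, not the paper's program): interpulse intervals are drawn from an Erlang (gamma with integer shape) distribution, and a pulse counts as visible only if no other pulse shares its sampling bin, so coarser sampling merges pulses and lowers the apparent frequency.

```python
import numpy as np

rng = np.random.default_rng(0)

def visible_pulse_fraction(mean_interval, erlang_k, sample_dt, t_total=1440.0):
    """Fraction of simulated secretion pulses that remain visible at a
    given sampling interval (all times in minutes).  Two pulses merge
    (one becomes invisible) when no sample falls between them, i.e. when
    they land in the same sampling bin."""
    scale = mean_interval / erlang_k              # Erlang = gamma with integer shape
    intervals = rng.gamma(erlang_k, scale, size=2000)
    times = np.cumsum(intervals)
    times = times[times < t_total]                # keep a 24 h record
    bins = np.floor(times / sample_dt)            # sampling bin of each pulse
    return len(np.unique(bins)) / len(times)
```

    With 1-minute sampling essentially every pulse is resolved, while 2-hour sampling merges neighbouring pulses and the visible frequency drops, mirroring the 'invisibility error' the paper compensates for.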

  11. Error Prevention Aid

    NASA Technical Reports Server (NTRS)

    1987-01-01

    In a complex computer environment there is ample opportunity for error, a mistake by a programmer, or a software-induced undesirable side effect. In insurance, errors can cost a company heavily, so protection against inadvertent change is a must for the efficient firm. The data processing center at Transport Life Insurance Company has taken a step to guard against accidental changes by adopting a software package called EQNINT (Equations Interpreter Program). EQNINT cross checks the basic formulas in a program against the formulas that make up the major production system. EQNINT assures that formulas are coded correctly and helps catch errors before they affect the customer service or its profitability.

  12. Simulation of probability distributions commonly used in hydrological frequency analysis

    NASA Astrophysics Data System (ADS)

    Cheng, Ke-Sheng; Chiang, Jie-Lun; Hsu, Chieh-Wei

    2007-01-01

    Random variable simulation has been applied to many applications in hydrological modelling, flood risk analysis, environmental impact assessment, etc. However, computer codes for simulation of distributions commonly used in hydrological frequency analysis are not available in most software libraries. This paper presents a frequency-factor-based method for random number generation of five distributions (normal, log-normal, extreme-value type I, Pearson type III and log-Pearson type III) commonly used in hydrological frequency analysis. The proposed method is shown to produce random numbers of desired distributions through three means of validation: (1) graphical comparison of cumulative distribution functions (CDFs) and empirical CDFs derived from generated data; (2) properties of estimated parameters; (3) type I error of goodness-of-fit test. An advantage of the method is that it does not require CDF inversion, and the frequency factors of the five commonly used distributions involve only the standard normal deviate.
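
    As an illustration of the frequency-factor idea (a sketch under assumed parameter values, not the paper's code), an extreme-value type I (Gumbel) variate can be generated as X = mean + K*std, where the frequency factor K standardizes the Gumbel reduced variate. Here the non-exceedance probability is drawn uniformly, which is statistically equivalent to mapping a standard normal deviate through its CDF.

```python
import numpy as np

EULER_GAMMA = 0.5772156649

def gumbel_via_frequency_factor(mean, std, n, rng):
    """Generate EV1 (Gumbel) variates as X = mean + K*std, where the
    frequency factor K standardizes the Gumbel reduced variate
    y = -ln(-ln(p)) using its moments E[y] = Euler's gamma and
    std[y] = pi/sqrt(6)."""
    p = rng.uniform(size=n)                          # non-exceedance probabilities
    y = -np.log(-np.log(p))                          # Gumbel reduced variates
    K = (y - EULER_GAMMA) / (np.pi / np.sqrt(6.0))   # frequency factors
    return mean + K * std

rng = np.random.default_rng(1)
x = gumbel_via_frequency_factor(100.0, 20.0, 100_000, rng)
```

    Because K has zero mean and unit standard deviation by construction, the generated sample reproduces the target mean and standard deviation without any CDF inversion, which is the advantage the paper emphasizes.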

  13. Accuracy of Haplotype Frequency Estimation for Biallelic Loci, via the Expectation-Maximization Algorithm for Unphased Diploid Genotype Data

    PubMed Central

    Fallin, Daniele; Schork, Nicholas J.

    2000-01-01

    Haplotype analyses have become increasingly common in genetic studies of human disease because of their ability to identify unique chromosomal segments likely to harbor disease-predisposing genes. The study of haplotypes is also used to investigate many population processes, such as migration and immigration rates, linkage-disequilibrium strength, and the relatedness of populations. Unfortunately, many haplotype-analysis methods require phase information that can be difficult to obtain from samples of nonhaploid species. There are, however, strategies for estimating haplotype frequencies from unphased diploid genotype data collected on a sample of individuals that make use of the expectation-maximization (EM) algorithm to overcome the missing phase information. The accuracy of such strategies, compared with other phase-determination methods, must be assessed before their use can be advocated. In this study, we consider and explore sources of error between EM-derived haplotype frequency estimates and their population parameters, noting that much of this error is due to sampling error, which is inherent in all studies, even when phase can be determined. In light of this, we focus on the additional error between haplotype frequencies within a sample data set and EM-derived haplotype frequency estimates incurred by the estimation procedure. We assess the accuracy of haplotype frequency estimation as a function of a number of factors, including sample size, number of loci studied, allele frequencies, and locus-specific allelic departures from Hardy-Weinberg and linkage equilibrium. We point out the relative impacts of sampling error and estimation error, calling attention to the pronounced accuracy of EM estimates once sampling error has been accounted for. We also suggest that many factors that may influence accuracy can be assessed empirically within a data set—a fact that can be used to create “diagnostics” that a user can turn to for assessing potential
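
    The EM strategy the paper evaluates can be sketched for the simplest case of two biallelic loci, where only double heterozygotes are phase-ambiguous (a minimal illustration; the function name and toy data are hypothetical):

```python
import numpy as np

def em_haplotypes(genotypes, n_iter=200):
    """EM haplotype-frequency estimation for two biallelic loci.
    genotypes: list of (g1, g2) pairs giving the count (0, 1 or 2) of
    allele A at locus 1 and allele B at locus 2 for each individual.
    Returns frequencies in the order [AB, Ab, aB, ab]."""
    def hap_index(a, b):
        # (1,1)->AB=0, (1,0)->Ab=1, (0,1)->aB=2, (0,0)->ab=3
        return (1 - a) * 2 + (1 - b)

    base = np.zeros(4)   # haplotype counts from phase-unambiguous individuals
    n_dh = 0             # number of double heterozygotes (phase unknown)
    for g1, g2 in genotypes:
        if g1 == 1 and g2 == 1:
            n_dh += 1
            continue
        a = [1] * g1 + [0] * (2 - g1)   # locus-1 alleles of the two haplotypes
        b = [1] * g2 + [0] * (2 - g2)   # locus-2 alleles (pairing unambiguous here)
        base[hap_index(a[0], b[0])] += 1
        base[hap_index(a[1], b[1])] += 1

    p = np.full(4, 0.25)                # initial haplotype frequencies
    total = 2.0 * len(genotypes)
    for _ in range(n_iter):
        # E-step: split double heterozygotes between the two possible phases
        cis, trans = p[0] * p[3], p[1] * p[2]     # P(AB/ab) vs P(Ab/aB)
        w = cis / (cis + trans) if cis + trans > 0 else 0.5
        counts = base.copy()
        counts[[0, 3]] += n_dh * w
        counts[[1, 2]] += n_dh * (1 - w)
        # M-step: re-estimate frequencies from expected counts
        p = counts / total
    return p
```

    On data in strong linkage disequilibrium the ambiguous individuals are pulled almost entirely onto the cis phase, so the estimation error beyond sampling error is small, which is the distinction the paper draws.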

  14. Estimating Bias Error Distributions

    NASA Technical Reports Server (NTRS)

    Liu, Tian-Shu; Finley, Tom D.

    2001-01-01

    This paper formulates the general methodology for estimating the bias error distribution of a device in a measuring domain from less accurate measurements when a minimal number of standard values (typically two values) are available. A new perspective is that the bias error distribution can be found as a solution of an intrinsic functional equation in a domain. Based on this theory, the scaling- and translation-based methods for determining the bias error distribution are developed. These methods are applicable to virtually any device as long as the bias error distribution of the device can be sufficiently described by a power series (a polynomial) or a Fourier series in a domain. These methods have been validated through computational simulations and laboratory calibration experiments for a number of different devices.

  15. Quantum Error Correction

    NASA Astrophysics Data System (ADS)

    Lidar, Daniel A.; Brun, Todd A.

    2013-09-01

    Prologue; Preface; Part I. Background: 1. Introduction to decoherence and noise in open quantum systems Daniel Lidar and Todd Brun; 2. Introduction to quantum error correction Dave Bacon; 3. Introduction to decoherence-free subspaces and noiseless subsystems Daniel Lidar; 4. Introduction to quantum dynamical decoupling Lorenza Viola; 5. Introduction to quantum fault tolerance Panos Aliferis; Part II. Generalized Approaches to Quantum Error Correction: 6. Operator quantum error correction David Kribs and David Poulin; 7. Entanglement-assisted quantum error-correcting codes Todd Brun and Min-Hsiu Hsieh; 8. Continuous-time quantum error correction Ognyan Oreshkov; Part III. Advanced Quantum Codes: 9. Quantum convolutional codes Mark Wilde; 10. Non-additive quantum codes Markus Grassl and Martin Rötteler; 11. Iterative quantum coding systems David Poulin; 12. Algebraic quantum coding theory Andreas Klappenecker; 13. Optimization-based quantum error correction Andrew Fletcher; Part IV. Advanced Dynamical Decoupling: 14. High order dynamical decoupling Zhen-Yu Wang and Ren-Bao Liu; 15. Combinatorial approaches to dynamical decoupling Martin Rötteler and Pawel Wocjan; Part V. Alternative Quantum Computation Approaches: 16. Holonomic quantum computation Paolo Zanardi; 17. Fault tolerance for holonomic quantum computation Ognyan Oreshkov, Todd Brun and Daniel Lidar; 18. Fault tolerant measurement-based quantum computing Debbie Leung; Part VI. Topological Methods: 19. Topological codes Héctor Bombín; 20. Fault tolerant topological cluster state quantum computing Austin Fowler and Kovid Goyal; Part VII. Applications and Implementations: 21. Experimental quantum error correction Dave Bacon; 22. Experimental dynamical decoupling Lorenza Viola; 23. Architectures Jacob Taylor; 24. Error correction in quantum communication Mark Wilde; Part VIII. Critical Evaluation of Fault Tolerance: 25. Hamiltonian methods in QEC and fault tolerance Eduardo Novais, Eduardo Mucciolo and

  16. More systematic errors in the measurement of power spectral density

    NASA Astrophysics Data System (ADS)

    Mack, Chris A.

    2015-07-01

    Power spectral density (PSD) analysis is an important part of understanding line-edge and linewidth roughness in lithography. But uncertainty in the measured PSD, both random and systematic, complicates interpretation. It is essential to understand and quantify the sources of the measured PSD's uncertainty and to develop mitigation strategies. Both analytical derivations and simulations of rough features are used to evaluate data window functions for reducing spectral leakage and to understand the impact of data detrending on biases in PSD, autocovariance function (ACF), and height-to-height covariance function measurement. A generalized Welch window was found to be best among the windows tested. Linear detrending for line-edge roughness measurement results in underestimation of the low-frequency PSD and errors in the ACF and height-to-height covariance function. Measuring multiple edges per scanning electron microscope image reduces this detrending bias.
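
    A windowed periodogram of the kind analyzed above can be sketched as follows (illustrative only: the paper's generalized Welch window is replaced by any user-supplied window array, and simple mean removal stands in for full linear detrending):

```python
import numpy as np

def edge_psd(z, dx, window=None):
    """One-sided periodogram PSD of an edge profile z sampled at spacing dx.
    An optional data window reduces spectral leakage; normalization by the
    window power keeps the PSD of white noise unbiased."""
    n = len(z)
    z = z - np.mean(z)                        # zeroth-order detrend
    w = np.ones(n) if window is None else np.asarray(window)
    spec = np.abs(np.fft.rfft(z * w)) ** 2
    psd = 2.0 * dx * spec / np.sum(w ** 2)    # one-sided, window-power normalized
    psd[0] /= 2.0                             # DC and Nyquist bins are not doubled
    if n % 2 == 0:
        psd[-1] /= 2.0
    return psd
```

    Comparing the PSD of a profile with and without a window, and with different detrending choices, exposes exactly the leakage and low-frequency biases the paper quantifies.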

  17. Smoothing error pitfalls

    NASA Astrophysics Data System (ADS)

    von Clarmann, T.

    2014-04-01

    The difference due to the content of a priori information between a constrained retrieval and the true atmospheric state is usually represented by the so-called smoothing error. In this paper it is shown that the concept of the smoothing error is questionable because it is not compliant with Gaussian error propagation. The reason for this is that the smoothing error does not represent the expected deviation of the retrieval from the true state but the expected deviation of the retrieval from the atmospheric state sampled on an arbitrary grid, which is itself a smoothed representation of the true state. The idea of a sufficiently fine sampling of this reference atmospheric state is untenable because atmospheric variability occurs on all scales, implying that there is no limit beyond which the sampling is fine enough. Even the idealization of infinitesimally fine sampling of the reference state does not help because the smoothing error is applied to quantities which are only defined in a statistical sense, which implies that a finite volume of sufficient spatial extent is needed to meaningfully talk about temperature or concentration. Smoothing differences, however, which play a role when measurements are compared, are still a useful quantity if the involved a priori covariance matrix has been evaluated on the comparison grid rather than resulting from interpolation. This is because the undefined component of the smoothing error, which is the effect of smoothing implied by the finite grid on which the measurements are compared, cancels out when the difference is calculated.

  18. Surface errors in the course of machining precision optics

    NASA Astrophysics Data System (ADS)

    Biskup, H.; Haberl, A.; Rascher, R.

    2015-08-01

    Precision optical components are usually machined by grinding and polishing in several steps of increasing accuracy. Spherical surfaces are finished in a last step with large tools to smooth the surface. The requested surface accuracy of non-spherical surfaces can only be achieved with tools in point contact with the surface. So-called mid-spatial-frequency errors (MSFE) can accumulate in such zonal processes. This work addresses the formation of surface errors from grinding to polishing by analyzing the surfaces at each machining step with non-contact interferometric methods. The errors on the surface can be classified as described in DIN 4760, whereby 2nd- to 3rd-order errors are the so-called MSFE. By appropriate filtering of the measured data, error frequencies can be suppressed so that only defined spatial frequencies are shown in the surface plot. It can be observed that some frequencies may already be formed in the early machining steps such as grinding and main polishing. Additionally, it is known that MSFE can be produced by the process itself and by other side effects. Besides a description of surface errors based on the limits of measurement technologies, different formation mechanisms for selected spatial frequencies are presented. A correction may only be possible with tools whose lateral size is below the wavelength of the error structure. The presented considerations may be used to develop proposals for handling surface errors.

  19. Spatial sampling errors for a satellite-borne scanning radiometer

    NASA Technical Reports Server (NTRS)

    Manalo, Natividad D.; Smith, G. L.

    1991-01-01

    The Clouds and Earth's Radiant Energy System (CERES) scanning radiometer is planned as the Earth radiation budget instrument for the Earth Observation System, to be flown in the late 1990's. In order to minimize the spatial sampling errors of the measurements, it is necessary to select design parameters for the instrument such that the resulting point spread function will minimize spatial sampling errors. These errors are described as aliasing and blurring errors. Aliasing errors are due to presence in the measurements of spatial frequencies beyond the Nyquist frequency, and blurring errors are due to attenuation of frequencies below the Nyquist frequency. The design parameters include pixel shape and dimensions, sampling rate, scan period, and time constants of the measurements. For a satellite-borne scanning radiometer, the pixel footprint grows quickly at large nadir angles. The aliasing errors thus decrease with increasing scan angle, but the blurring errors grow quickly. The best design minimizes the sum of these two errors over a range of scan angles. Results of a parameter study are presented, showing effects of data rates, pixel dimensions, spacecraft altitude, and distance from the spacecraft track.

  20. Gear Transmission Error Measurement System Made Operational

    NASA Technical Reports Server (NTRS)

    Oswald, Fred B.

    2002-01-01

    A system for directly measuring the transmission error between meshing spur or helical gears was installed at the NASA Glenn Research Center and made operational in August 2001. This system employs light beams directed by lenses and prisms through gratings mounted on the two gear shafts. The amount of light that passes through both gratings is directly proportional to the transmission error of the gears. The device is capable of resolution better than 0.1 μm (about one-thousandth the thickness of a human hair). The measured transmission error can be displayed as a "map" that shows how the transmission error varies with gear rotation, or it can be converted to spectra to show the components at the meshing frequencies. Accurate transmission error data will help researchers better understand the mechanisms that cause gear noise and vibration and will lead to quieter, more reliable transmissions. The Design Unit at the University of Newcastle in England designed the new system specifically for NASA. It is the only device in the United States that can measure dynamic transmission error at high rotational speeds. The new system will be used to develop new techniques to reduce dynamic transmission error along with the resulting noise and vibration of aeronautical transmissions.

  1. Statistical analysis of modeling error in structural dynamic systems

    NASA Technical Reports Server (NTRS)

    Hasselman, T. K.; Chrostowski, J. D.

    1990-01-01

    The paper presents a generic statistical model of the (total) modeling error for conventional space structures in their launch configuration. Modeling error is defined as the difference between analytical prediction and experimental measurement. It is represented by the differences between predicted and measured real eigenvalues and eigenvectors. Comparisons are made between pre-test and post-test models. Total modeling error is then subdivided into measurement error, experimental error and 'pure' modeling error, and comparisons made between measurement error and total modeling error. The generic statistical model presented in this paper is based on the first four global (primary structure) modes of four different structures belonging to the generic category of Conventional Space Structures (specifically excluding large truss-type space structures). As such, it may be used to evaluate the uncertainty of predicted mode shapes and frequencies, sinusoidal response, or the transient response of other structures belonging to the same generic category.

  2. Rotation of Magnetization Derived from Brownian Relaxation in Magnetic Fluids of Different Viscosity Evaluated by Dynamic Hysteresis Measurements over a Wide Frequency Range.

    PubMed

    Ota, Satoshi; Kitaguchi, Ryoichi; Takeda, Ryoji; Yamada, Tsutomu; Takemura, Yasushi

    2016-09-10

    The dependence of magnetic relaxation on particle parameters, such as the size and anisotropy, has been conventionally discussed. In addition, the influences of external conditions, such as the intensity and frequency of the applied field, the surrounding viscosity, and the temperature on the magnetic relaxation have been researched. According to one of the basic theories regarding magnetic relaxation, the faster type of relaxation dominates the process. However, in this study, we reveal that Brownian and Néel relaxations coexist and that Brownian relaxation can occur after Néel relaxation despite having a longer relaxation time. To understand the mechanisms of Brownian rotation, alternating current (AC) hysteresis loops were measured in magnetic fluids of different viscosities. These loops conveyed the amplitude and phase delay of the magnetization. In addition, the intrinsic loss power (ILP) was calculated using the area of the AC hysteresis loops. The ILP also showed the magnetization response regarding the magnetic relaxation over a wide frequency range. To develop biomedical applications of magnetic nanoparticles, such as hyperthermia and magnetic particle imaging, it is necessary to understand the mechanisms of magnetic relaxation.

  3. Rotation of Magnetization Derived from Brownian Relaxation in Magnetic Fluids of Different Viscosity Evaluated by Dynamic Hysteresis Measurements over a Wide Frequency Range

    PubMed Central

    Ota, Satoshi; Kitaguchi, Ryoichi; Takeda, Ryoji; Yamada, Tsutomu; Takemura, Yasushi

    2016-01-01

    The dependence of magnetic relaxation on particle parameters, such as the size and anisotropy, has been conventionally discussed. In addition, the influences of external conditions, such as the intensity and frequency of the applied field, the surrounding viscosity, and the temperature on the magnetic relaxation have been researched. According to one of the basic theories regarding magnetic relaxation, the faster type of relaxation dominates the process. However, in this study, we reveal that Brownian and Néel relaxations coexist and that Brownian relaxation can occur after Néel relaxation despite having a longer relaxation time. To understand the mechanisms of Brownian rotation, alternating current (AC) hysteresis loops were measured in magnetic fluids of different viscosities. These loops conveyed the amplitude and phase delay of the magnetization. In addition, the intrinsic loss power (ILP) was calculated using the area of the AC hysteresis loops. The ILP also showed the magnetization response regarding the magnetic relaxation over a wide frequency range. To develop biomedical applications of magnetic nanoparticles, such as hyperthermia and magnetic particle imaging, it is necessary to understand the mechanisms of magnetic relaxation. PMID:28335297

  4. Clock error, jitter, phase error, and differential time of arrival in satellite communications

    NASA Astrophysics Data System (ADS)

    Sorace, Ron

    The maintenance of synchronization in satellite communication systems is critical in contemporary systems, since many signal processing and detection algorithms depend on ascertaining time references. Unfortunately, proper synchronism becomes more difficult to maintain at higher frequencies. Factors such as clock error or jitter, noise, and phase error at a coherent receiver may corrupt a transmitted signal and degrade synchronism at the terminations of a communication link. Further, in some systems an estimate of propagation delay is necessary, but this delay may vary stochastically with the range of the link. This paper presents a model of the components of synchronization error including a simple description of clock error and examination of recursive estimation of the propagation delay time for messages between elements in a satellite communication system. Attention is devoted to jitter, the sources of which are considered to be phase error in coherent reception and jitter in the clock itself.

  5. Interpolation Errors in Thermistor Calibration Equations

    NASA Astrophysics Data System (ADS)

    White, D. R.

    2017-04-01

    Thermistors are widely used temperature sensors capable of measurement uncertainties approaching those of standard platinum resistance thermometers. However, the extreme nonlinearity of thermistors means that complicated calibration equations are required to minimize the effects of interpolation errors and achieve low uncertainties. This study investigates the magnitude of interpolation errors as a function of temperature range and the number of terms in the calibration equation. Approximation theory is used to derive an expression for the interpolation error and indicates that the temperature range and the number of terms in the calibration equation are the key influence variables. Numerical experiments based on published resistance-temperature data confirm these conclusions and additionally give guidelines on the maximum and minimum interpolation error likely to occur for a given temperature range and number of terms in the calibration equation.
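
    The calibration equation studied here is a polynomial in ln R for 1/T (the Steinhart-Hart form). A minimal least-squares sketch, using a hypothetical beta-model thermistor (R0 = 10 kΩ, B = 3950 K) to synthesize calibration data:

```python
import numpy as np

def fit_thermistor(R, T_kelvin, order=3):
    """Least-squares fit of the calibration equation
    1/T = a0 + a1*ln(R) + ... + a_order*(ln R)^order.
    Returns coefficients, highest power first (numpy.polyfit convention)."""
    return np.polyfit(np.log(R), 1.0 / T_kelvin, order)

def thermistor_T(coeffs, R):
    """Temperature in kelvin from resistance via the fitted equation."""
    return 1.0 / np.polyval(coeffs, np.log(R))

# Synthetic calibration points from a hypothetical beta-model thermistor.
T = np.linspace(273.15, 373.15, 21)                        # calibration temperatures, K
R = 10000.0 * np.exp(3950.0 * (1.0 / T - 1.0 / 298.15))    # resistances, ohms
coeffs = fit_thermistor(R, T, order=3)
```

    Evaluating the fit at temperatures between the calibration points and comparing against reference data gives the interpolation error; widening the temperature range or dropping terms makes it grow, which is the dependence the study quantifies.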

  6. Errata: Papers in Error Analysis.

    ERIC Educational Resources Information Center

    Svartvik, Jan, Ed.

    Papers presented at the symposium of error analysis in Lund, Sweden, in September 1972, approach error analysis specifically in its relation to foreign language teaching and second language learning. Error analysis is defined as having three major aspects: (1) the description of the errors, (2) the explanation of errors by means of contrastive…

  7. A nonmystical treatment of tape speed compensation for frequency modulated signals

    NASA Astrophysics Data System (ADS)

    Solomon, O. M., Jr.

    After a brief review of frequency modulation and demodulation, tape speed variation is modeled as a distortion of the independent variable of a frequency-modulated signal. This distortion gives rise to an additive amplitude error in the demodulated message, which comprises two terms. Both terms depend on the derivative of the time base error, that is, the flutter of the analog tape machine. The first term depends on the channel's center frequency and frequency deviation constant as well as on the flutter, while the second depends solely on the message and the flutter. The relationship between the additive amplitude error and the manufacturer's flutter specification is described. For the case of a constant message, relative errors and signal-to-noise ratios are discussed to provide insight into when the variation in tape speed will cause significant errors. An algorithm is then developed which theoretically achieves full compensation of tape speed variation. After being confirmed via spectral computations on laboratory data, the algorithm is applied to field data.

  8. Some Surprising Errors in Numerical Differentiation

    ERIC Educational Resources Information Center

    Gordon, Sheldon P.

    2012-01-01

    Data analysis methods, both numerical and visual, are used to discover a variety of surprising patterns in the errors associated with successive approximations to the derivatives of sinusoidal and exponential functions based on the Newton difference-quotient. L'Hopital's rule and Taylor polynomial approximations are then used to explain why these…
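
    The error pattern described here is easy to reproduce: for the Newton difference quotient the leading error term is (h/2)f''(x), so halving the step roughly halves the error (a quick sketch, not taken from the article):

```python
import math

def forward_diff(f, x, h):
    """Newton difference quotient approximation to f'(x)."""
    return (f(x + h) - f(x)) / h

# Error of the forward difference for sin at x = 1; the Taylor expansion
# gives error ~ (h/2)|sin(1)|, i.e. first-order accuracy in h.
x = 1.0
errs = [abs(forward_diff(math.sin, x, h) - math.cos(x)) for h in (1e-2, 5e-3, 2.5e-3)]
ratios = [errs[i] / errs[i + 1] for i in range(2)]   # should be close to 2
```

    Tabulating these errors for sinusoidal and exponential functions, as the article does, exposes the systematic patterns that L'Hopital's rule and Taylor polynomials then explain.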

  9. Truncation and Accumulated Errors in Wave Propagation

    NASA Astrophysics Data System (ADS)

    Chiang, Yi-Ling F.

    1988-12-01

    The approximation of the truncation and accumulated errors in the numerical solution of a linear initial-value partial differential equation can be established by using a semidiscretized scheme. This error approximation is observed as a lower bound to the errors of a finite difference scheme. By introducing a modified von Neumann solution, this error approximation is applicable to problems with variable coefficients. To seek an in-depth understanding of this newly established error approximation, numerical experiments were performed to solve the hyperbolic equation ∂U/∂t = -C1(x)C2(t) ∂U/∂x, with both continuous and discontinuous initial conditions. We studied three cases: (1) C1(x) = C0 and C2(t) = 1; (2) C1(x) = C0 and C2(t) = t; and (3) C1(x) = 1 + (x/a)² and C2(t) = C0. Our results show that the errors are problem dependent and are functions of the propagating wave speed. This suggests a need to derive problem-oriented schemes rather than the equation-oriented schemes, as is commonly done. Furthermore, in a wave-propagation problem, measurement of the error by the maximum norm is not particularly informative when the wave speed is incorrect.
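
    For the constant-coefficient case (C1(x) = C0, C2(t) = 1), the behaviour of a simple finite difference scheme can be sketched with a first-order upwind discretization on a periodic grid (an illustrative stand-in, not the authors' semidiscretized scheme):

```python
import numpy as np

def upwind_advect(u0, c, dx, dt, nsteps):
    """First-order upwind scheme for du/dt = -c du/dx with c > 0 on a
    periodic grid: the constant-coefficient case C1(x) = C0, C2(t) = 1."""
    u = np.asarray(u0, dtype=float).copy()
    lam = c * dt / dx                 # Courant number; stable for lam <= 1
    for _ in range(nsteps):
        u = u - lam * (u - np.roll(u, 1))
    return u
```

    At Courant number 1 the scheme propagates the wave exactly; at smaller Courant numbers the accumulated truncation error appears as numerical damping and dispersion, consistent with the observation that the errors are functions of the propagating wave speed.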

  10. Filter induced errors in laser anemometer measurements using counter processors

    NASA Technical Reports Server (NTRS)

    Oberle, L. G.; Seasholtz, R. G.

    1985-01-01

    Simulations of laser Doppler anemometer (LDA) systems have focused primarily on noise studies or biasing errors. Another possible source of error is the choice of filter types and filter cutoff frequencies. Before it is applied to the counter portion of the signal processor, a Doppler burst is filtered to remove the pedestal and to reduce noise in the frequency bands outside the region in which the signal occurs. Filtering, however, introduces errors into the measurement of the frequency of the input signal which leads to inaccurate results. Errors caused by signal filtering in an LDA counter-processor data acquisition system are evaluated and filters for a specific application which will reduce these errors are chosen.

  11. RbTiOPO4 cascaded Raman operation with multiple Raman frequency shifts derived by Q-switched Nd:YAlO3 laser

    PubMed Central

    Duan, Yanmin; Zhu, Haiyong; Zhang, Yaoju; Zhang, Ge; Zhang, Jian; Tang, Dingyuan; Kaminskii, A. A.

    2016-01-01

    An intra-cavity RbTiOPO4 (RTP) cascaded Raman laser was demonstrated for efficient multi-order Stokes emission. An acousto-optic Q-switched Nd:YAlO3 laser at 1.08 μm was used as the pump source, and a 20-mm-long x-cut RTP crystal was used as the Raman medium to meet the X(Z,Z)X Raman configuration. Multi-order Stokes emission with multiple Raman shifts (~271, ~559 and ~687 cm−1) was achieved in the output. Under an incident pump power of 9.5 W, a total average output power of 580 mW at a pulse repetition frequency of 10 kHz was obtained. The optical conversion efficiency was 6.1%. The results show that the RTP crystal can enrich laser spectral lines and generate high-order Stokes light. PMID:27666829

  12. Estimation of critical frequency and height maximum for path middle point on evidence derived from experimental oblique sounding data: comparison of calculated values with experimental and IRI values

    NASA Astrophysics Data System (ADS)

    Kim, Anton G.; Kotovich, Galina V.

    2006-11-01

    This work is devoted to an experimental check of a technique for estimating f0F2 and hmF2 values at the midpoint of a propagation path from oblique sounding (OS) data. Data obtained by the Irkutsk chirp sounder on the Norilsk-Irkutsk path were used, together with data from the Podkamennaya Tunguska ionospheric station, located near the midpoint of the path. In the calculation, the experimental distance-frequency characteristics (DFC) of the path are recalculated into height-frequency characteristics (HFC) at the path midpoint by means of the Smith method, which allows the f0F2 value at the path midpoint to be determined. To obtain hmF2, an N(h) profile is used, derived by recalculating the HFC with the Guliaeva technique. A fast method of recalculation using two DFC points was also tested. The calculated f0F2 values were compared with experimental f0F2 values obtained by the Podkamennaya Tunguska ionospheric station, and the estimated hmF2 values were compared with values calculated by the Dudeney method from the experimental f0E, f0F2 and M(3000)F2 values at Podkamennaya Tunguska. In addition, the estimated values were compared with values given by the IRI model, and the capability of adapting the IRI model using f0F2 and hmF2 values was investigated. This should help in diagnostics, in the development of regional ionospheric models and in the adaptation of various ionospheric models to real conditions.

  13. Smoothing error pitfalls

    NASA Astrophysics Data System (ADS)

    von Clarmann, T.

    2014-09-01

    The difference due to the content of a priori information between a constrained retrieval and the true atmospheric state is usually represented by a diagnostic quantity called smoothing error. In this paper it is shown that, regardless of the usefulness of the smoothing error as a diagnostic tool in its own right, the concept of the smoothing error as a component of the retrieval error budget is questionable because it is not compliant with Gaussian error propagation. The reason for this is that the smoothing error does not represent the expected deviation of the retrieval from the true state but the expected deviation of the retrieval from the atmospheric state sampled on an arbitrary grid, which is itself a smoothed representation of the true state; in other words, to characterize the full loss of information with respect to the true atmosphere, the effect of the representation of the atmospheric state on a finite grid also needs to be considered. The idea of a sufficiently fine sampling of this reference atmospheric state is problematic because atmospheric variability occurs on all scales, implying that there is no limit beyond which the sampling is fine enough. Even the idealization of infinitesimally fine sampling of the reference state does not help, because the smoothing error is applied to quantities which are only defined in a statistical sense, which implies that a finite volume of sufficient spatial extent is needed to meaningfully discuss temperature or concentration. Smoothing differences, however, which play a role when measurements are compared, are still a useful quantity if the covariance matrix involved has been evaluated on the comparison grid rather than resulting from interpolation and if the averaging kernel matrices have been evaluated on a grid fine enough to capture all atmospheric variations that the instruments are sensitive to. This is, under the assumptions stated, because the undefined component of the smoothing error, which is the
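
    For reference, the "smoothing error" under discussion is conventionally written as follows (Rodgers-style retrieval theory, with averaging-kernel matrix A, identity I, true state x_true, a priori state x_a, and a priori covariance S_a):

```latex
\boldsymbol{\epsilon}_s = (\mathbf{A}-\mathbf{I})\,(\mathbf{x}_{\mathrm{true}}-\mathbf{x}_a),
\qquad
\mathbf{S}_s = (\mathbf{A}-\mathbf{I})\,\mathbf{S}_a\,(\mathbf{A}-\mathbf{I})^{\mathsf{T}}
```

    The paper's point is that both expressions presuppose a representation of x_true on a finite grid, which is precisely the step it calls into question.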

  14. Atmospheric refraction effects on baseline error in satellite laser ranging systems

    NASA Technical Reports Server (NTRS)

    Im, K. E.; Gardner, C. S.

    1982-01-01

    Because of the mathematical complexities involved in exact analyses of baseline errors, it is not easy to isolate atmospheric refraction effects; however, by making certain simplifying assumptions about the ranging system geometry, relatively simple expressions can be derived which relate the baseline errors directly to the refraction errors. The results indicate that even in the absence of other errors, the baseline error for intercontinental baselines can be more than an order of magnitude larger than the refraction error.

  15. Efficient Error Calculation for Multiresolution Texture-Based Volume Visualization

    SciTech Connect

    LaMar, E; Hamann, B; Joy, K I

    2001-10-16

    Multiresolution texture-based volume visualization is an excellent technique to enable interactive rendering of massive data sets. Interactive manipulation of a transfer function is necessary for proper exploration of a data set. However, multiresolution techniques require assessing the accuracy of the resulting images, and re-computing the error after each change in a transfer function is very expensive. They extend their existing multiresolution volume visualization method by introducing a method for accelerating error calculations for multiresolution volume approximations. Computing the error for an approximation requires adding individual error terms. One error value must be computed once for each original voxel and its corresponding approximating voxel. For byte data, i.e., data sets where integer function values between 0 and 255 are given, they observe that the set of error pairs can be quite large, yet the set of unique error pairs is small. Instead of evaluating the error function for each original voxel, they construct a table of the unique combinations and the number of their occurrences. To evaluate the error, they add the products of the error function for each unique error pair and the frequency of each error pair. This approach dramatically reduces the amount of computation time involved and allows them to re-compute the error associated with a new transfer function quickly.
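
    The counting scheme described above can be sketched in a few lines of Python (a hedged illustration; the function and variable names are hypothetical, not from the paper):

```python
from collections import Counter

def approximation_error(original, approx, err_fn):
    """Sum err_fn over all voxel pairs, evaluating it only once per
    unique (original, approximation) pair, as the abstract describes."""
    table = Counter(zip(original, approx))  # unique pair -> occurrence count
    return sum(err_fn(o, a) * n for (o, a), n in table.items())

# Byte data: many voxels, few unique (original, approx) combinations.
orig = [10, 10, 10, 200, 200]
appr = [12, 12, 12, 190, 190]
print(approximation_error(orig, appr, lambda o, a: abs(o - a)))  # 3*2 + 2*10 = 26
```

    Because the table depends only on the data and its approximation, changing the transfer function only changes `err_fn`, so the re-evaluation cost scales with the number of unique pairs rather than the number of voxels.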

  16. Frequency retrace of quartz oscillators

    NASA Astrophysics Data System (ADS)

    Euler, F.; Yannoni, N. F.

    Frequency retrace measurements are reported on oven-controlled quartz oscillators utilizing AT and SC cut plated and BVA resonators. Prior to full aging, the retrace error is added to the aging effect. With well-aged resonators, after one or several on-off cycles, the frequency settles at a new level characteristic of intermittent operation. Severe frequency shifts have sometimes been found after the first restart following prolonged continuous operation. SC cut resonators appear to show distinctly smaller retrace errors than AT cut.

  17. Error Sensitivity Model.

    DTIC Science & Technology

    1980-04-01

    Philosophy: The Positioning/Error Model has been defined in three distinct phases: I - Error Sensitivity Model; II - Operational Positioning Model; III - ...

  18. Error Free Software

    NASA Technical Reports Server (NTRS)

    1985-01-01

    A mathematical theory for development of "higher order" software to catch computer mistakes resulted from a Johnson Space Center contract for Apollo spacecraft navigation. Two women who were involved in the project formed Higher Order Software, Inc. to develop and market the system of error analysis and correction. They designed software which is logically error-free, which, in one instance, was found to increase productivity by 600%. USE.IT defines its objectives using AXES -- a user can write in English and the system converts to computer languages. It is employed by several large corporations.

  19. Measuring Cyclic Error in Laser Heterodyne Interferometers

    NASA Technical Reports Server (NTRS)

    Ryan, Daniel; Abramovici, Alexander; Zhao, Feng; Dekens, Frank; An, Xin; Azizi, Alireza; Chapsky, Jacob; Halverson, Peter

    2010-01-01

    An improved method and apparatus have been devised for measuring cyclic errors in the readouts of laser heterodyne interferometers that are configured and operated as displacement gauges. The cyclic errors arise as a consequence of mixing of spurious optical and electrical signals in beam launchers that are subsystems of such interferometers. The conventional approach to measurement of cyclic error involves phase measurements and yields values precise to within about 10 pm over air optical paths at laser wavelengths in the visible and near infrared. The present approach, which involves amplitude measurements instead of phase measurements, yields values precise to about 0.1 pm, about 100 times the precision of the conventional approach. In a displacement gauge of the type of interest here, the laser heterodyne interferometer is used to measure any change in distance along an optical axis between two corner-cube retroreflectors. One of the corner-cube retroreflectors is mounted on a piezoelectric transducer (see figure), which is used to introduce a low-frequency periodic displacement that can be measured by the gauges. The transducer is excited at a frequency of 9 Hz by a triangular waveform to generate a 9-Hz triangular-wave displacement having an amplitude of 25 microns. The displacement gives rise to both amplitude and phase modulation of the heterodyne signals in the gauges. The modulation includes cyclic error components, and the magnitude of the cyclic-error component of the phase modulation is what one needs to measure in order to determine the magnitude of the cyclic displacement error. The precision attainable in the conventional (phase measurement) approach to measuring cyclic error is limited because the phase measurements are af-
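
    The measurement geometry can be illustrated with a toy simulation. Only the 9 Hz triangular excitation and the 25 micron amplitude come from the abstract; the wavelength, the first-order sinusoidal cyclic-error model, and all other numbers are illustrative assumptions:

```python
import math

WAVELENGTH = 1.55e-6     # assumed laser wavelength, m (not from the abstract)
CYCLIC_AMPLITUDE = 0.05  # assumed cyclic-error amplitude, rad

def triangle(t, freq, amp):
    """Triangular-wave displacement, as produced by the piezo transducer."""
    phase = (t * freq) % 1.0
    return amp * (4 * phase - 1) if phase < 0.5 else amp * (3 - 4 * phase)

# Ideal heterodyne phase is 4*pi*x/lambda; a first-order cyclic error adds a
# sinusoidal ripple at that same phase (a simplified model of the mixing).
ts = [i / 10000.0 for i in range(10000)]            # 1 s sampled at 10 kHz
xs = [triangle(t, 9.0, 25e-6) for t in ts]          # 9 Hz, 25 um amplitude
ideal = [4 * math.pi * x / WAVELENGTH for x in xs]
measured = [p + CYCLIC_AMPLITUDE * math.sin(p) for p in ideal]
worst = max(abs(m - p) for m, p in zip(measured, ideal))
print(worst <= CYCLIC_AMPLITUDE)  # True: the ripple is bounded by the cyclic amplitude
```

    Because the 25 micron sweep spans many interferometric fringes, the ripple exercises every cyclic-error phase, which is what makes the modulation amplitude a usable estimate of the cyclic-error magnitude.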

  20. Optical linear algebra processors - Noise and error-source modeling

    NASA Technical Reports Server (NTRS)

    Casasent, D.; Ghosh, A.

    1985-01-01

    The modeling of system and component noise and error sources in optical linear algebra processors (OLAPs) are considered, with attention to the frequency-multiplexed OLAP. General expressions are obtained for the output produced as a function of various component errors and noise. A digital simulator for this model is discussed.

  1. Optical linear algebra processors: noise and error-source modeling.

    PubMed

    Casasent, D; Ghosh, A

    1985-06-01

    The modeling of system and component noise and error sources in optical linear algebra processors (OLAP's) are considered, with attention to the frequency-multiplexed OLAP. General expressions are obtained for the output produced as a function of various component errors and noise. A digital simulator for this model is discussed.

  2. Parental Reports of Children's Scale Errors in Everyday Life

    ERIC Educational Resources Information Center

    Rosengren, Karl S.; Gutierrez, Isabel T.; Anderson, Kathy N.; Schein, Stevie S.

    2009-01-01

    Scale errors refer to behaviors where young children attempt to perform an action on an object that is too small to effectively accommodate the behavior. The goal of this study was to examine the frequency and characteristics of scale errors in everyday life. To do so, the researchers collected parental reports of children's (age range = 13-21…

  3. Speech Errors in Progressive Non-Fluent Aphasia

    ERIC Educational Resources Information Center

    Ash, Sharon; McMillan, Corey; Gunawardena, Delani; Avants, Brian; Morgan, Brianna; Khan, Alea; Moore, Peachie; Gee, James; Grossman, Murray

    2010-01-01

    The nature and frequency of speech production errors in neurodegenerative disease have not previously been precisely quantified. In the present study, 16 patients with a progressive form of non-fluent aphasia (PNFA) were asked to tell a story from a wordless children's picture book. Errors in production were classified as either phonemic,…

  4. Comparison of voice relative fundamental frequency estimates derived from an accelerometer signal and low-pass filtered and unprocessed microphone signals.

    PubMed

    Lien, Yu-An S; Stepp, Cara E

    2014-05-01

    The relative fundamental frequency (RFF) surrounding the production of a voiceless consonant has previously been estimated using unprocessed and low-pass filtered microphone signals, but it can also be estimated using a neck-placed accelerometer signal that is less affected by vocal tract formants. Determining the effects of signal type on RFF will allow for comparisons across studies and aid in establishing a standard protocol with minimal within-speaker variability. Here RFF was estimated in 12 speakers with healthy voices using unprocessed microphone, low-pass filtered microphone, and unprocessed accelerometer signals. Unprocessed microphone and accelerometer signals were recorded simultaneously using a microphone and neck-placed accelerometer. The unprocessed microphone signal was filtered at 350 Hz to construct the low-pass filtered microphone signal. Analyses of variance showed that signal type and the interaction of vocal cycle × signal type had significant effects on both RFF means and standard deviations, but with small effect sizes. The overall RFF trend was preserved regardless of signal type and the intra-speaker variability of RFF was similar among the signal types. Thus, RFF can be estimated using either a microphone or an accelerometer signal in individuals with healthy voices. Future work extending these findings to individuals with disordered voices is warranted.
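
    One common convention in the RFF literature expresses each vocal cycle's instantaneous f0 in semitones relative to the steady-state reference f0; a minimal sketch, assuming that convention (the function name is hypothetical):

```python
import math

def rff_semitones(cycle_f0, reference_f0):
    """Relative fundamental frequency of one vocal cycle, in semitones
    relative to a steady-state reference f0 (one common convention)."""
    return 12.0 * math.log2(cycle_f0 / reference_f0)

print(rff_semitones(440.0, 220.0))  # 12.0: an octave is twelve semitones
```

    The signal-type comparison in the study then reduces to computing this quantity from cycle periods extracted from the microphone, filtered microphone, or accelerometer waveform.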

  5. Cellular change and callose accumulation in zygotic embryos of Eleutherococcus senticosus caused by plasmolyzing pretreatment result in high frequency of single-cell-derived somatic embryogenesis.

    PubMed

    Ling You, Xiang; Seon Yi, Jae; Eui Choi, Yong

    2006-05-01

    Eleutherococcus senticosus zygotic embryos were pretreated with 1.0 M mannitol or sucrose for 3-24 h. This pretreatment resulted in a high frequency of somatic-embryo formation on hormone-free medium. All the somatic embryos developed directly and independently from single epidermal cells on the surface of zygotic embryos after plasmolyzing pretreatment. Scanning electron microscopic observation revealed that the epidermal cells of hypocotyls rapidly became irregular and showed a random orientation before somatic-embryo development commenced. At the same time, the epidermal cells in the untreated control remained regular. Callose concentration determined by fluorometric analysis increased sharply in E. senticosus zygotic embryos after plasmolyzing pretreatment but remained low in the untreated control. Aniline blue fluorescent staining of callose showed that the plasmolyzing pretreatment of zygotic embryos resulted in heavy accumulation of callose between the plasma membrane and cell walls. On the basis of these results, we suggest that plasmolyzing pretreatment of zygotic embryos induces the accumulation of callose, and the interruption of cell-to-cell communication imposed by this might stimulate the reprogramming of epidermal cells into embryogenically competent cells and finally induce somatic-embryo development from single cells.

  6. The notion of error in Langevin dynamics. I. Linear analysis

    NASA Astrophysics Data System (ADS)

    Mishra, Bimal; Schlick, Tamar

    1996-07-01

    The notion of error in practical molecular and Langevin dynamics simulations of large biomolecules is far from understood because of the relatively large value of the timestep used, the short simulation length, and the low-order methods employed. We begin to examine this issue with respect to equilibrium and dynamic time-correlation functions by analyzing the behavior of selected implicit and explicit finite-difference algorithms for the Langevin equation. We derive: local stability criteria for these integrators; analytical expressions for the averages of the potential, kinetic, and total energy; and various limiting cases (e.g., timestep and damping constant approaching zero), for a system of coupled harmonic oscillators. These results are then compared to the corresponding exact solutions for the continuous problem, and their implications to molecular dynamics simulations are discussed. New concepts of practical and theoretical importance are introduced: scheme-dependent perturbative damping and perturbative frequency functions. Interesting differences in the asymptotic behavior among the algorithms become apparent through this analysis, and two symplectic algorithms, ``LIM2'' (implicit) and ``BBK'' (explicit), appear most promising on theoretical grounds. One result of theoretical interest is that for the Langevin/implicit-Euler algorithm (``LI'') there exist timesteps for which there is neither numerical damping nor shift in frequency for a harmonic oscillator. However, this idea is not practical for more complex systems because these special timesteps can account only for one frequency of the system, and a large damping constant is required. We therefore devise a more practical, delay-function approach to remove the artificial damping and frequency perturbation from LI. Indeed, a simple MD implementation for a system of coupled harmonic oscillators demonstrates very satisfactory results in comparison with the velocity-Verlet scheme. We also define a
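
    As a concrete point of reference, the explicit "BBK" (Brünger-Brooks-Karplus) discretization of the Langevin equation analyzed above can be sketched as follows (a standard textbook form; the harmonic-oscillator test and all parameter values are illustrative assumptions, not the paper's):

```python
import math, random

def bbk_step(x, v, dt, gamma, m, k, kT, rng):
    """One BBK step for a harmonic oscillator with force F = -k*x,
    friction coefficient gamma, and thermal random force."""
    sigma = math.sqrt(2.0 * gamma * m * kT / dt)   # random-force std dev
    r0 = sigma * rng.gauss(0.0, 1.0)
    v_half = (1.0 - 0.5 * gamma * dt) * v + 0.5 * dt * (-k * x + r0) / m
    x_new = x + dt * v_half
    r1 = sigma * rng.gauss(0.0, 1.0)
    v_new = (v_half + 0.5 * dt * (-k * x_new + r1) / m) / (1.0 + 0.5 * gamma * dt)
    return x_new, v_new

# Zero-temperature check: with kT = 0 the damped oscillator must lose energy.
rng = random.Random(0)
x, v = 1.0, 0.0
for _ in range(1000):
    x, v = bbk_step(x, v, dt=0.01, gamma=0.5, m=1.0, k=1.0, kT=0.0, rng=rng)
energy = 0.5 * v * v + 0.5 * x * x
print(energy < 0.5)  # True: well below the initial energy of 0.5
```

    The paper's "perturbative damping" and "perturbative frequency" describe exactly how the decay rate and oscillation frequency of such a discrete trajectory deviate from the continuous Langevin solution as dt grows.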

  7. Parallel systems of error processing in the brain.

    PubMed

    Yordanova, Juliana; Falkenstein, Michael; Hohnsbein, Joachim; Kolev, Vasil

    2004-06-01

    Major neurophysiological principles of performance monitoring are not precisely known. It is a current debate in cognitive neuroscience if an error-detection neural system is involved in behavioral control and adaptation. Such a system should generate error-specific signals, but their existence is questioned by observations that correct and incorrect reactions may elicit similar neuroelectric potentials. A new approach based on a time-frequency decomposition of event-related brain potentials was applied to extract covert sub-components from the classical error-related negativity (Ne) and correct-response-related negativity (Nc) in humans. A unique error-specific sub-component from the delta (1.5-3.5 Hz) frequency band was revealed only for Ne, which was associated with error detection at the level of overall performance monitoring. A sub-component from the theta frequency band (4-8 Hz) was associated with motor response execution, but this sub-component also differentiated error from correct reactions indicating error detection at the level of movement monitoring. It is demonstrated that error-specific signals do exist in the brain. More importantly, error detection may occur in multiple functional systems operating in parallel at different levels of behavioral control.
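
    The band separation underlying the analysis above can be sketched with a direct DFT: isolate the power of a signal in the delta (1.5-3.5 Hz) and theta (4-8 Hz) bands. This is a generic illustration, not the authors' time-frequency decomposition:

```python
import cmath, math

def band_power(signal, fs, lo, hi):
    """Power of `signal` within the [lo, hi] Hz band, via a direct DFT."""
    n = len(signal)
    power = 0.0
    for k in range(n // 2 + 1):
        f = k * fs / n
        if lo <= f <= hi:
            coef = sum(s * cmath.exp(-2j * math.pi * k * i / n)
                       for i, s in enumerate(signal))
            power += abs(coef) ** 2
    return power

# A 2.5 Hz component should dominate the delta band, a weaker 6 Hz one the theta band.
fs, n = 250.0, 500                                  # 2 s at 250 Hz
sig = [math.sin(2 * math.pi * 2.5 * i / fs) +
       0.3 * math.sin(2 * math.pi * 6.0 * i / fs) for i in range(n)]
delta = band_power(sig, fs, 1.5, 3.5)
theta = band_power(sig, fs, 4.0, 8.0)
print(delta > theta)  # True
```

    In the study, the error-specific component appears only in the delta-band portion of the Ne, while the theta-band portion differentiates errors at the level of movement monitoring.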

  8. Power Measurement Errors on a Utility Aircraft

    NASA Technical Reports Server (NTRS)

    Bousman, William G.

    2002-01-01

    Extensive flight test data obtained from two recent performance tests of a UH 60A aircraft are reviewed. A power difference is calculated from the power balance equation and is used to examine power measurement errors. It is shown that the baseline measurement errors are highly non-Gaussian in their frequency distribution and are therefore influenced by additional, unquantified variables. Linear regression is used to examine the influence of other variables and it is shown that a substantial portion of the variance depends upon measurements of atmospheric parameters. Correcting for temperature dependence, although reducing the variance in the measurement errors, still leaves unquantified effects. Examination of the power difference over individual test runs indicates significant errors from drift, although it is unclear how these may be corrected. In an idealized case, where the drift is correctable, it is shown that the power measurement errors are significantly reduced and the error distribution is Gaussian. A new flight test program is recommended that will quantify the thermal environment for all torque measurements on the UH 60. Subsequently, the torque measurement systems will be recalibrated based on the measured thermal environment and a new power measurement assessment performed.
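
    The regression step described above, removing a linear dependence on an atmospheric variable from the power-difference residuals, can be sketched as follows (the numbers are hypothetical, not flight-test data):

```python
def linear_detrend(x, y):
    """Least-squares line fit of y on x, returning the detrended residuals."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y)) /
             sum((a - mx) ** 2 for a in x))
    intercept = my - slope * mx
    return [b - (slope * a + intercept) for a, b in zip(x, y)]

def variance(v):
    m = sum(v) / len(v)
    return sum((u - m) ** 2 for u in v) / len(v)

# Hypothetical power differences with a strong linear temperature trend.
temp = [10.0, 15.0, 20.0, 25.0, 30.0]        # deg C
diff = [20.1, 29.8, 40.0, 50.2, 59.9]
residual = linear_detrend(temp, diff)
print(variance(residual) < variance(diff))   # True: correction shrinks the variance
```

    As the abstract notes, however, a variance reduction after detrending does not by itself account for run-to-run drift, which requires a separate correction.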

  9. Application of low-frequency alternating current electric fields via interdigitated electrodes: effects on cellular viability, cytoplasmic calcium, and osteogenic differentiation of human adipose-derived stem cells.

    PubMed

    McCullen, Seth D; McQuilling, John P; Grossfeld, Robert M; Lubischer, Jane L; Clarke, Laura I; Loboa, Elizabeth G

    2010-12-01

    Electric stimulation is known to initiate signaling pathways and provides a technique to enhance osteogenic differentiation of stem and/or progenitor cells. There are a variety of in vitro stimulation devices to apply electric fields to such cells. Herein, we describe and highlight the use of interdigitated electrodes to characterize signaling pathways and the effect of electric fields on the proliferation and osteogenic differentiation of human adipose-derived stem cells (hASCs). The advantage of the interdigitated electrode configuration is that cells can be easily imaged during short-term (acute) stimulation, and this identical configuration can be utilized for long-term (chronic) studies. Acute exposure of hASCs to alternating current (AC) sinusoidal electric fields of 1 Hz induced a dose-dependent increase in cytoplasmic calcium in response to electric field magnitude, as observed by fluorescence microscopy. hASCs that were chronically exposed to AC electric field treatment of 1 V/cm (4 h/day for 14 days, cultured in the osteogenic differentiation medium containing dexamethasone, ascorbic acid, and β-glycerol phosphate) displayed a significant increase in mineral deposition relative to unstimulated controls. This is the first study to evaluate the effects of sinusoidal AC electric fields on hASCs and to demonstrate that acute and chronic electric field exposure can significantly increase intracellular calcium signaling and the deposition of accreted calcium under osteogenic stimulation, respectively.

  10. Synthesis, growth, optical and DFT calculation of 2-naphthol derived Mannich base organic non linear optical single crystal for frequency conversion applications

    NASA Astrophysics Data System (ADS)

    Raj, A. Dennis; Jeeva, M.; Shankar, M.; Purusothaman, R.; Prabhu, G. Venkatesa; Potheher, I. Vetha

    2016-11-01

    2-naphthol derived Mannich base 1-((4-methylpiperazin-1-yl) (phenyl) methyl) naphthalen-2-ol (MPN) - a nonlinear optical single crystal was synthesized and successfully grown by slow evaporation technique at room temperature. The molecular structure was confirmed by single crystal XRD, FT-IR, 1H NMR and 13C NMR spectral studies. The single crystal X-ray diffraction analysis reveals that the crystal belongs to orthorhombic crystal system with non-centrosymmetric space group Pna21. The chemical shift of 5.34 ppm (singlet methine CH proton) in 1H NMR and signal for the CH carbon around δ70.169 ppm in 13C NMR confirms the formation of the title compound. The crystal growth pattern and dislocations of crystal are analyzed using chemical etching technique. UV cut off wavelength of the material was found to be 212 nm. The second harmonic generation (SHG) of MPN was determined from Kurtz Perry powder technique and the efficiency is almost equal to that of standard KDP crystal. The laser damage threshold was measured by passing Nd: YAG laser beam through the sample and it was found to be 1.1974 GW/cm2. The material was thermally stable up to 142 °C. The relationship between the molecular structure and the optical properties was also studied from quantum chemical calculations using Density Functional Theory (DFT) and reported for the first time.

  11. Effect of 50 Hz Extremely Low-Frequency Electromagnetic Fields on the DNA Methylation and DNA Methyltransferases in Mouse Spermatocyte-Derived Cell Line GC-2.

    PubMed

    Liu, Yong; Liu, Wen-bin; Liu, Kai-jun; Ao, Lin; Zhong, Julia Li; Cao, Jia; Liu, Jin-yi

    2015-01-01

    Previous studies have shown that the male reproductive system is one of the most sensitive organs to electromagnetic radiation. However, the biological effects and molecular mechanism are largely unclear. Our study was designed to elucidate the epigenetic effects of 50 Hz ELF-EMF in vitro. Mouse spermatocyte-derived GC-2 cell line was exposed to 50 Hz ELF-EMF (5 min on and 10 min off) at magnetic field intensity of 1 mT, 2 mT, and 3 mT with an intermittent exposure for 72 h. We found that 50 Hz ELF-EMF exposure decreased genome-wide methylation at 1 mT, but global methylation was higher at 3 mT compared with the controls. The expression of DNMT1 and DNMT3b was decreased at 1 mT, and 50 Hz ELF-EMF can increase the expression of DNMT1 and DNMT3b of GC-2 cells at 3 mT. However, 50 Hz ELF-EMF had little influence on the expression of DNMT3a. Then, we established DNA methylation and gene expression profiling and validated some genes with aberrant DNA methylation and expression at different intensity of 50 Hz ELF-EMF. These results suggest that the alterations of genome-wide methylation and DNMTs expression may play an important role in the biological effects of 50 Hz ELF-EMF exposure.

  12. TH-9 (a theophylline derivative) induces long-lasting enhancement in excitatory synaptic transmission in the rat hippocampus that is occluded by frequency-dependent plasticity in vitro.

    PubMed

    Nashawi, H; Bartl, T; Bartl, P; Novotny, L; Oriowo, M A; Kombian, S B

    2012-09-18

    Dementia, especially Alzheimer's disease, is a rapidly increasing medical condition that presents with enormous challenge for treatment. It is characterized by impairment in memory and cognitive function often accompanied by changes in synaptic transmission and plasticity in relevant brain regions such as the hippocampus. We recently synthesized TH-9, a conjugate racetam-methylxanthine compound and tested if it had potential for enhancing synaptic function and possibly, plasticity, by examining its effect on hippocampal fast excitatory synaptic transmission and plasticity. Field excitatory postsynaptic potentials (fEPSPs) were recorded in the CA1 hippocampal area of naïve juvenile male Sprague-Dawley rats using conventional electrophysiological recording techniques. TH-9 caused a concentration-dependent, long-lasting enhancement in fEPSPs. This effect was blocked by adenosine A1, acetylcholine (muscarinic and nicotinic) and glutamate (N-methyl-d-aspartate) receptor antagonists but not by a γ-aminobutyric acid receptor type B (GABA(B)) receptor antagonist. The TH-9 effect was also blocked by enhancing intracellular cyclic adenosine monophosphate and inhibiting protein kinase A. Pretreatment with TH-9 did not prevent the induction of long-term potentiation (LTP) or long-term depression (LTD). Conversely, induction of LTP or LTD completely occluded the ability of TH-9 to enhance fEPSPs. Thus, TH-9 utilizes cholinergic and adenosinergic mechanisms to cause long-lasting enhancement in fEPSPs which were occluded by LTP and LTD. TH-9 may therefore employ similar or convergent mechanisms with frequency-dependent synaptic plasticities to produce the observed long-lasting enhancement in synaptic transmission and may thus, have potential for use in improving memory.

  13. Orwell's Instructive Errors

    ERIC Educational Resources Information Center

    Julian, Liam

    2009-01-01

    In this article, the author talks about George Orwell, his instructive errors, and the manner in which Orwell pierced worthless theory, faced facts and defended decency (with fluctuating success), and largely ignored the tradition of accumulated wisdom that has rendered him a timeless teacher--one whose inadvertent lessons, while infrequently…

  14. A analytical method to low-low satellite-to-satellite tracking (ll-SST) error analysis

    NASA Astrophysics Data System (ADS)

    Cai, Lin; Zhou, Zebing; Bai, Yanzheng

    2014-05-01

    The conventional methods of error analysis for low-low satellite-to-satellite tracking (ll-SST) missions are mainly based on the least-squares (LS) method, which addresses the whole effect of measurement errors and estimates the resolution of gravity field models mainly from a numerical point of view. Here a direct analytical expression between the power spectral density of the ll-SST measurements and the spherical harmonic coefficients of the Earth's gravity model is derived, based on the relationship between temporal frequencies and spherical harmonics. In this study much effort has been put into establishing the observation equation, which is derived from linear perturbation theory and control theory, and into computing the average power of the acceleration in the north direction with respect to a local north-oriented frame, which relates to the orthonormalization of the derivatives of the Legendre functions. This method provides physical insight into the relation between mission parameters, instrument parameters and gravity field parameters, whereas the least-squares method rests mainly on a mathematical viewpoint. The resulting expression makes the relationship explicit, which enables the parameters of ll-SST missions to be estimated quantitatively and directly, and is especially useful for analyzing the frequency characteristics of measurement noise. By taking advantage of the analytical expression, we discuss the effects of range, range-rate and non-conservative force measurement errors on gravity field recovery.

  15. Analysis of PolSK based FSO system using wavelength and time diversity over strong atmospheric turbulence with pointing errors

    NASA Astrophysics Data System (ADS)

    Prabu, K.; Cheepalli, Shashidhar; Kumar, D. Sriram

    2014-08-01

    Free space optics (FSO), or wireless optical communication, is an evolving alternative to current radio frequency (RF) links due to its high and secure data rates, large license-free bandwidth, ease of installation, and lower cost over shorter distances. Because transmission is wireless, these systems are strongly influenced by atmospheric conditions, and the requirement of line-of-sight (LOS) propagation may lead to alignment problems and, in turn, pointing errors. In this paper, we consider atmospheric turbulence and pointing errors as the major limitations. We address these difficulties by considering a polarization shift keying (PolSK) modulated FSO communication system with wavelength and time diversity. We derive closed-form expressions for the average bit error rate (BER) and outage probability, which are vital system performance metrics. Analytical results are presented for different practical cases.
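
    The effect of turbulence on the average BER can be illustrated with a small Monte Carlo sketch. Log-normal fading is used here as a simplified stand-in for the strong-turbulence model of the paper, and all parameters are assumed for illustration:

```python
import math, random

def avg_ber_fading(snr0, sigma_ln, n=20000, seed=1):
    """Monte-Carlo average BER of a coherent binary link over log-normal
    irradiance fading (a simplified stand-in for gamma-gamma turbulence)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        # Log-normal irradiance normalized to unit mean.
        irr = math.exp(rng.gauss(-0.5 * sigma_ln ** 2, sigma_ln))
        total += 0.5 * math.erfc(math.sqrt(snr0 * irr / 2.0))
    return total / n

# Stronger turbulence (larger log-amplitude sigma) degrades the average BER.
print(avg_ber_fading(10.0, 0.1) < avg_ber_fading(10.0, 1.0))  # True
```

    Diversity schemes such as the wavelength and time diversity analyzed in the paper work by averaging over several less-correlated fading realizations, which suppresses exactly the deep-fade tail that dominates this average.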

  16. Frequency-wavenumber spectrum for GATE phase I rainfields

    NASA Technical Reports Server (NTRS)

    Nakamoto, Shoichiro; Valdes, Juan B.; North, Gerald R.

    1990-01-01

    The oceanic rainfall frequency-wavenumber spectrum and its associated space-time correlation have been evaluated from subsets of GATE phase I data. The records, four days in duration, were sampled at 15-minute intervals in 4 x 4 km grid boxes over a 400-km-diameter hexagon. In the low-frequency, low-wavenumber region the results coincide with those obtained using the stochastic model proposed by North and Nakamoto (1989). From the derived spectrum, the inherent time and space scales of the stochastic model were determined to be approximately 13 hours and 36 km. The formalism proposed by North and Nakamoto was combined with the derived spectrum to compute the mean-square sampling error due to intermittent visits of a spaceborne sensor.

  17. (Errors in statistical tests)3.

    PubMed

    Phillips, Carl V; MacLehose, Richard F; Kaufman, Jay S

    2008-07-14

    departure from uniformity, not just its test statistics. We found variation in digit frequencies in the additional data and describe the distinctive pattern of these results. Furthermore, we found that the combined data diverge unambiguously from a uniform distribution. The explanation for this divergence seems unlikely to be that suggested by the previous authors: errors in calculations and transcription.
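
    For digit-frequency analyses of this kind, the chi-square goodness-of-fit statistic is the standard measure of departure from uniformity. The following is a generic sketch of that statistic, not the authors' exact procedure:

```python
from collections import Counter

def chi_square_uniform(digits):
    """Chi-square statistic for uniformity of decimal digit frequencies."""
    counts = Counter(digits)
    n = len(digits)
    expected = n / 10.0
    return sum((counts.get(d, 0) - expected) ** 2 / expected for d in range(10))

# Perfectly uniform digits give a statistic of exactly zero.
print(chi_square_uniform([d for d in range(10)] * 5))  # 0.0
```

    Large values of the statistic relative to a chi-square distribution with nine degrees of freedom indicate the kind of unambiguous divergence from uniformity the authors report.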

  18. Effect of 50 Hz Extremely Low-Frequency Electromagnetic Fields on the DNA Methylation and DNA Methyltransferases in Mouse Spermatocyte-Derived Cell Line GC-2

    PubMed Central

    Liu, Yong; Liu, Wen-bin; Liu, Kai-jun; Ao, Lin; Zhong, Julia Li; Cao, Jia; Liu, Jin-yi

    2015-01-01

    Previous studies have shown that the male reproductive system is one of the most sensitive organs to electromagnetic radiation. However, the biological effects and molecular mechanism are largely unclear. Our study was designed to elucidate the epigenetic effects of 50 Hz ELF-EMF in vitro. Mouse spermatocyte-derived GC-2 cell line was exposed to 50 Hz ELF-EMF (5 min on and 10 min off) at magnetic field intensity of 1 mT, 2 mT, and 3 mT with an intermittent exposure for 72 h. We found that 50 Hz ELF-EMF exposure decreased genome-wide methylation at 1 mT, but global methylation was higher at 3 mT compared with the controls. The expression of DNMT1 and DNMT3b was decreased at 1 mT, and 50 Hz ELF-EMF can increase the expression of DNMT1 and DNMT3b of GC-2 cells at 3 mT. However, 50 Hz ELF-EMF had little influence on the expression of DNMT3a. Then, we established DNA methylation and gene expression profiling and validated some genes with aberrant DNA methylation and expression at different intensity of 50 Hz ELF-EMF. These results suggest that the alterations of genome-wide methylation and DNMTs expression may play an important role in the biological effects of 50 Hz ELF-EMF exposure. PMID:26339596

  19. Perceptual bias in speech error data collection: insights from Spanish speech errors.

    PubMed

    Pérez, Elvira; Santiago, Julio; Palma, Alfonso; O'Seaghdha, Padraig G

    2007-05-01

    This paper studies the reliability and validity of naturalistic speech errors as a tool for language production research. Possible biases in collecting naturalistic speech errors are identified and specific predictions derived. These patterns are then contrasted with published reports from Germanic languages (English, German, and Dutch) and one Romance language (Spanish). Unlike findings in the Germanic languages, Spanish speech errors show many patterns which run contrary to those expected from bias: (1) more phonological errors occur between words than within words; (2) word-initial consonants are less likely to participate in errors than word-medial consonants; (3) errors are equally likely in stressed and unstressed syllables; (4) perseverations are more frequent than anticipations; and (5) there is no trace of a lexical bias. We present a new corpus of Spanish speech errors collected by many theoretically naïve observers (the only corpus available so far was collected by two highly trained, theoretically informed observers), give a general overview of it, and use it to replicate previous reports. In spite of the different susceptibility of these methods to bias, results were remarkably similar in both corpora and again contrary to predictions from bias. Collecting speech errors "in the wild" therefore seems reasonably free of bias even when a multiple-collector method is used. The observed contrasting patterns between Spanish and Germanic languages arise as true cross-linguistic differences.

  20. Report of the Subpanel on Error Characterization and Error Budgets

    NASA Technical Reports Server (NTRS)

    1982-01-01

    The state of knowledge of both user positioning requirements and error models of current and proposed satellite systems is reviewed. In particular the error analysis models for LANDSAT D are described. Recommendations are given concerning the geometric error model for the thematic mapper; interactive user involvement in system error budgeting and modeling and verification on real data sets; and the identification of a strawman mission for modeling key error sources.

  1. Moments and Root-Mean-Square Error of the Bayesian MMSE Estimator of Classification Error in the Gaussian Model.

    PubMed

    Zollanvari, Amin; Dougherty, Edward R

    2014-06-01

    The most important aspect of any classifier is its error rate, because this quantifies its predictive capacity. Thus, the accuracy of error estimation is critical. Error estimation is problematic in small-sample classifier design because the error must be estimated using the same data from which the classifier has been designed. Use of prior knowledge, in the form of a prior distribution on an uncertainty class of feature-label distributions to which the true, but unknown, feature-label distribution belongs, can facilitate accurate error estimation (in the mean-square sense) in circumstances where accurate completely model-free error estimation is impossible. This paper provides analytic asymptotically exact finite-sample approximations for various performance metrics of the resulting Bayesian Minimum Mean-Square-Error (MMSE) error estimator in the case of linear discriminant analysis (LDA) in the multivariate Gaussian model. These performance metrics include the first, second, and cross moments of the Bayesian MMSE error estimator with the true error of LDA, and therefore, the Root-Mean-Square (RMS) error of the estimator. We lay down the theoretical groundwork for Kolmogorov double-asymptotics in a Bayesian setting, which enables us to derive asymptotic expressions of the desired performance metrics. From these we produce analytic finite-sample approximations and demonstrate their accuracy via numerical examples. Various examples illustrate the behavior of these approximations and their use in determining the necessary sample size to achieve a desired RMS. The Supplementary Material contains derivations for some equations and added figures.

  2. Errors from Rayleigh-Jeans approximation in satellite microwave radiometer calibration systems.

    PubMed

    Weng, Fuzhong; Zou, Xiaolei

    2013-01-20

    The advanced technology microwave sounder (ATMS) onboard the Suomi National Polar-orbiting Partnership (SNPP) satellite is a total power radiometer and scans across the track within a range of ±52.77° from nadir. It has 22 channels and measures the microwave radiation at either quasi-vertical or quasi-horizontal polarization from the Earth's atmosphere. The ATMS sensor data record algorithm employed a commonly used two-point calibration equation that derives the earth-view brightness temperature directly from the counts and temperatures of warm target and cold space, and the earth-scene count. This equation is only valid under Rayleigh-Jeans (RJ) approximation. Impacts of RJ approximation on ATMS calibration biases are evaluated in this study. It is shown that the RJ approximation used in ATMS radiometric calibration results in errors on the order of 1-2 K. The error is also scene count dependent and increases with frequency.
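
    The two-point calibration bias described above can be illustrated numerically. The sketch below is our own illustration, not the authors' code: the channel frequency and load temperatures are assumed values, and the radiometer is modeled as responding linearly to Planck radiance while the calibration is linear in temperature (the Rayleigh-Jeans assumption).

```python
import math

H = 6.62607015e-34  # Planck constant (J s)
K = 1.380649e-23    # Boltzmann constant (J/K)

def planck_radiance(freq_hz, temp_k):
    """Spectral radiance up to a constant factor; the constant cancels
    in a linear two-point calibration."""
    return 1.0 / math.expm1(H * freq_hz / (K * temp_k))

def rj_calibrated_tb(freq_hz, t_scene, t_warm=300.0, t_cold=2.73):
    """Brightness temperature recovered by a two-point calibration that is
    linear in temperature, applied to a sensor that actually measures
    Planck radiance."""
    r_c = planck_radiance(freq_hz, t_cold)
    r_w = planck_radiance(freq_hz, t_warm)
    r_s = planck_radiance(freq_hz, t_scene)
    return t_cold + (t_warm - t_cold) * (r_s - r_c) / (r_w - r_c)

freq = 183.31e9  # a high-frequency sounding channel (assumed for illustration)
for t_scene in (80.0, 250.0):
    bias = rj_calibrated_tb(freq, t_scene) - t_scene
    print(f"scene {t_scene:5.1f} K -> calibration bias {bias:+.2f} K")
```

    The bias is scene-dependent and grows toward cold scenes and higher frequencies, consistent with the K-level errors reported in the abstract.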

  3. Sensitivity analysis of FBMC-based multi-cellular networks to synchronization errors and HPA nonlinearities

    NASA Astrophysics Data System (ADS)

    Elmaroud, Brahim; Faqihi, Ahmed; Aboutajdine, Driss

    2017-01-01

    In this paper, we study the performance of asynchronous and nonlinear FBMC-based multi-cellular networks. The considered system includes a reference mobile perfectly synchronized with its reference base station (BS) and K interfering BSs. Both synchronization errors and high-power amplifier (HPA) distortions are considered and a theoretical analysis of the interference signal is conducted. On the basis of this analysis, we derive an accurate expression of the signal-to-noise-plus-interference ratio (SINR) and bit error rate (BER) in the presence of a frequency-selective channel. In order to reduce the computational complexity of the BER expression, we apply a lemma based on the moment generating function of the interference power. Finally, the proposed model is evaluated through computer simulations, which show a high sensitivity of the asynchronous FBMC-based multi-cellular network to HPA nonlinear distortions.

  4. Delay compensation - Its effect in reducing sampling errors in Fourier spectroscopy

    NASA Technical Reports Server (NTRS)

    Zachor, A. S.; Aaronson, S. M.

    1979-01-01

    An approximate formula is derived for the spectrum ghosts caused by periodic drive speed variations in a Michelson interferometer. The solution represents the case of fringe-controlled sampling and is applicable when the reference fringes are delayed to compensate for the delay introduced by the electrical filter in the signal channel. Numerical results are worked out for several common low-pass filters. It is shown that the maximum relative ghost amplitude over the range of frequencies corresponding to the lower half of the filter band is typically 20 times smaller than the relative zero-to-peak velocity error, when delayed sampling is used. In the lowest quarter of the filter band it is more than 100 times smaller than the relative velocity error. These values are ten and forty times smaller, respectively, than they would be without delay compensation if the filter is a 6-pole Butterworth.

  5. Stabilized fiber-optic frequency distribution system

    NASA Technical Reports Server (NTRS)

    Primas, L. E.; Lutes, G. F.; Sydnor, R. L.

    1989-01-01

    A technique for stabilizing reference frequencies transmitted over fiber-optic cable in a frequency distribution system is discussed. The distribution system utilizes fiber-optic cable as the transmission medium to distribute precise reference signals from a frequency standard to remote users. The stability goal of the distribution system is to transmit a 100-MHz signal over a 22-km fiber-optic cable and maintain a stability of 1 part in 10^17 for 1000-second averaging times. Active stabilization of the link is required to reduce phase variations produced by environmental effects, and is achieved by transmitting the reference signal from the frequency standard to the remote unit and then reflecting it back to the reference unit over the same optical fiber. By comparing the phase of the transmitted and reflected signals at the reference unit, phase variations of the remote signal can be measured. An error voltage derived from the phase difference between the two signals is used to apply a correction phase.

  6. Automatic Error Analysis Using Intervals

    ERIC Educational Resources Information Center

    Rothwell, E. J.; Cloud, M. J.

    2012-01-01

    A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
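
    The interval idea can be sketched in a few lines. This is a minimal illustration of interval propagation in general, not the INTLAB toolbox itself (which is a MATLAB package): each measured quantity is replaced by a [lo, hi] interval, and every arithmetic operation returns an interval guaranteed to enclose all possible results.

```python
class Interval:
    """Closed interval [lo, hi]; arithmetic returns enclosing intervals."""
    def __init__(self, lo, hi):
        self.lo, self.hi = min(lo, hi), max(lo, hi)

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

    def __truediv__(self, other):
        if other.lo <= 0.0 <= other.hi:
            raise ZeroDivisionError("divisor interval contains zero")
        return self * Interval(1.0 / other.hi, 1.0 / other.lo)

    def width(self):
        return self.hi - self.lo

    def __repr__(self):
        return f"[{self.lo:.4f}, {self.hi:.4f}]"

# Propagate measurement uncertainty through R = V / I:
V = Interval(10.0 - 0.1, 10.0 + 0.1)   # volts, +/- 0.1 V
I = Interval(2.0 - 0.05, 2.0 + 0.05)   # amperes, +/- 0.05 A
R = V / I
print("R =", R, "width =", round(R.width(), 4))
```

    Unlike first-order error propagation, the interval bound is rigorous regardless of how complicated the formula is, at the cost of some overestimation when a variable appears more than once.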

  7. Control by model error estimation

    NASA Technical Reports Server (NTRS)

    Likins, P. W.; Skelton, R. E.

    1976-01-01

    Modern control theory relies upon the fidelity of the mathematical model of the system. Truncated modes, external disturbances, and parameter errors in linear system models are corrected by augmenting to the original system of equations an 'error system' which is designed to approximate the effects of such model errors. A Chebyshev error system is developed for application to the Large Space Telescope (LST).

  8. Imagery of Errors in Typing

    ERIC Educational Resources Information Center

    Rieger, Martina; Martinez, Fanny; Wenke, Dorit

    2011-01-01

    Using a typing task we investigated whether insufficient imagination of errors and error corrections is related to duration differences between execution and imagination. In Experiment 1 spontaneous error imagination was investigated, whereas in Experiment 2 participants were specifically instructed to imagine errors. Further, in Experiment 2 we…

  9. Speech Errors across the Lifespan

    ERIC Educational Resources Information Center

    Vousden, Janet I.; Maylor, Elizabeth A.

    2006-01-01

    Dell, Burger, and Svec (1997) proposed that the proportion of speech errors classified as anticipations (e.g., "moot and mouth") can be predicted solely from the overall error rate, such that the greater the error rate, the lower the anticipatory proportion (AP) of errors. We report a study examining whether this effect applies to changes in error…

  10. Combining temporally-integrated heat stress duration and frequency with multi-dimensional vulnerability characteristics to derive local-level risk patterns

    NASA Astrophysics Data System (ADS)

    Özceylan, Dilek; Aubrecht, Christoph

    2013-04-01

    The observed change in the nature of climate-related events and the increase in the number and severity of extreme weather events have been altering risk patterns and putting more people at risk. In recent years extreme heat events caused excess mortality and public concern in many regions of the world (e.g., the 2003 and 2006 Western European heat waves, the 2007 and 2010 Asian heat waves, and the 2006 and 2010-2012 North American heat waves). In the United States extreme heat events have been consistently reported as the leading cause of weather-related mortality and have attracted the attention of the international scientific community regarding the critical importance of risk assessment and decoding its components for risk reduction. To understand impact potentials and analyze risk in its individual components, both the spatially and temporally varying patterns of heat stress and the multidimensional characteristics of vulnerability have to be considered. In this study we present a composite risk index aggregating these factors and implement it for the U.S. National Capital Region on a high level of spatial detail. The applied measure of heat stress hazard is a novel approach that integrates magnitude, duration, and frequency over time, as opposed to the study of single extreme events and the analysis of mere absolute numbers of heat waves independent of the length of the respective events. On the basis of heat-related vulnerability conceptualization, we select various population and land cover characteristics in our study area and define a composite vulnerability index based on aggregation of three groups of indicators related to demographic, socio-economic, and environmental factors. The study reveals how risk patterns seem to be driven by the vulnerability distribution, generally showing a clear difference between high-risk urban areas and wide areas of low risk in the suburban and rural environments. This is
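
    The aggregation logic behind such a composite index can be sketched generically. The indicator names, numbers, and equal weighting below are invented for illustration and are not the study's actual data: each indicator is min-max normalized, the three indicator groups are averaged into a vulnerability index, and risk is the product of hazard and vulnerability.

```python
def minmax(values):
    """Normalize a list of indicator values to [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

# Hypothetical indicators for four districts (illustrative numbers only).
heat_hazard    = [0.9, 0.7, 0.4, 0.2]   # temporally integrated heat stress
pct_elderly    = [18, 12, 9, 7]         # demographic indicator (%)
pct_low_income = [25, 15, 10, 8]        # socio-economic indicator (%)
pct_impervious = [85, 60, 35, 20]       # environmental indicator (%)

groups = [minmax(pct_elderly), minmax(pct_low_income), minmax(pct_impervious)]
vulnerability = [sum(g[i] for g in groups) / len(groups) for i in range(4)]
risk = [h * v for h, v in zip(minmax(heat_hazard), vulnerability)]
print("composite risk:", [round(r, 3) for r in risk])
```

    With hazard and vulnerability both highest in the dense urban district, the product sharpens the urban/rural contrast that the abstract describes.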

  11. Performance analysis of free-space optical systems employing binary polarization shift keying signaling over gamma-gamma channel with pointing errors

    NASA Astrophysics Data System (ADS)

    Krishnan, Prabu; Kumar, D. Sriram

    2014-07-01

    Free-space optics (FSO) has become one of the prominent solutions to the bandwidth limitation of radio frequency links. However, FSO system performance is strongly dependent on atmospheric conditions. Atmospheric turbulence, scintillation, and pointing errors are the major impairments in FSO communications. Multiple input multiple output (MIMO) or spatial diversity structures can improve the performance of the FSO system. The bit error rate (BER) performance of the MIMO FSO system employing binary polarization shift keying with optimal combining is analyzed. We derive a closed-form expression for the BER of the system, and the results are compared against the single input single output system.

  12. Implications of Error Analysis Studies for Academic Interventions

    ERIC Educational Resources Information Center

    Mather, Nancy; Wendling, Barbara J.

    2017-01-01

    We reviewed 13 studies that focused on analyzing student errors on achievement tests from the Kaufman Test of Educational Achievement-Third edition (KTEA-3). The intent was to determine what instructional implications could be derived from in-depth error analysis. As we reviewed these studies, several themes emerged. We explain how a careful…

  13. Modeling of Present-Day Atmosphere and Ocean Non-Tidal De-Aliasing Errors for Future Gravity Mission Simulations

    NASA Astrophysics Data System (ADS)

    Bergmann-Wolf, I.; Dobslaw, H.; Mayer-Gürr, T.

    2015-12-01

    A realistically perturbed synthetic de-aliasing model consistent with the updated Earth System Model of the European Space Agency (Dobslaw et al., 2015) is now available for the years 1995-2006. The data-set contains realizations of (i) errors at large spatial scales assessed individually for periods between 10-30, 3-10, and 1-3 days, the S1 atmospheric tide, and sub-diurnal periods; (ii) errors at small spatial scales typically not covered by global models of atmosphere and ocean variability; and (iii) errors due to physical processes not represented in currently available de-aliasing products. The error magnitudes for each of the different frequency bands are derived from a small ensemble of four atmospheric and oceanic models. In order to demonstrate the plausibility of the error magnitudes chosen, we perform a variance component estimation based on daily GRACE normal equations from the ITSG-Grace2014 global gravity field series recently published by the University of Graz. All 12 years of the error model are used to calculate empirical error variance-covariance matrices describing the systematic dependencies of the errors both in time and in space individually for five continental and four oceanic regions, and daily GRACE normal equations are subsequently employed to obtain pre-factors for each of those matrices. For the largest spatial scales up to d/o = 40 and periods longer than 24 h, errors prepared for the updated ESM are found to be largely consistent with noise of a similar stochastic character contained in present-day GRACE solutions. Differences and similarities identified for all of the nine regions considered will be discussed in detail during the presentation. Reference: Dobslaw, H., I. Bergmann-Wolf, R. Dill, E. Forootan, V. Klemann, J. Kusche, and I. Sasgen (2015), The updated ESA Earth System Model for future gravity mission simulation studies, J. Geod., doi:10.1007/s00190-014-0787-8.

  14. Pulse Shaping Entangling Gates and Error Suppression

    NASA Astrophysics Data System (ADS)

    Hucul, D.; Hayes, D.; Clark, S. M.; Debnath, S.; Quraishi, Q.; Monroe, C.

    2011-05-01

    Control of spin dependent forces is important for generating entanglement and realizing quantum simulations in trapped ion systems. Here we propose and implement a composite pulse sequence based on the Molmer-Sorensen gate to decrease gate infidelity due to frequency and timing errors. The composite pulse sequence uses an optical frequency comb to drive Raman transitions simultaneously detuned from trapped ion transverse motional red and blue sideband frequencies. The spin dependent force displaces the ions in phase space, and the resulting spin-dependent geometric phase depends on the detuning. Voltage noise on the rf electrodes changes the detuning between the trapped ions' motional frequency and the laser, decreasing the fidelity of the gate. The composite pulse sequence consists of successive pulse trains from counter-propagating frequency combs with phase control of the microwave beatnote of the lasers to passively suppress detuning errors. We present the theory and experimental data with one and two ions where a gate is performed with a composite pulse sequence. This work supported by the U.S. ARO, IARPA, the DARPA OLE program, the MURI program; the NSF PIF Program; the NSF Physics Frontier Center at JQI; the European Commission AQUTE program; and the IC postdoc program administered by the NGA.

  15. Existence of solutions for electron balance problem in the stationary radio-frequency induction discharges

    NASA Astrophysics Data System (ADS)

    Zheltukhin, V. S.; Solovyev, S. I.; Solovyev, P. S.; Chebakova, V. Yu

    2016-11-01

    A sufficient condition for the existence of a minimal eigenvalue corresponding to a positive eigenfunction of an eigenvalue problem with nonlinear dependence on the parameter for a second order ordinary differential equation is established. The initial problem is approximated by the finite element method. Error estimates for the approximate minimal eigenvalue and corresponding positive eigenfunction are derived. Problems of this form arise in modelling the plasma of a radio-frequency discharge at reduced pressure.

  16. A rapid method for obtaining frequency-response functions for multiple input photogrammetric data

    NASA Technical Reports Server (NTRS)

    Kroen, M. L.; Tripp, J. S.

    1984-01-01

    A two-digital-camera photogrammetric technique for measuring the motion of a vibrating spacecraft structure or wing surface and an applicable data-reduction algorithm are presented. The 3D frequency-response functions are obtained by coordinate transformation from averaged cross and autopower spectra derived from the 4D camera coordinates by Fourier transformation. Error sources are investigated analytically, and sample results are shown in graphs.
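
    The frequency-response functions from averaged cross- and autopower spectra described above correspond to the standard H1 estimator. The sketch below is a minimal single-input illustration with synthetic data, not the photogrammetric pipeline itself: segment the records, accumulate cross- and autopower spectra, and divide.

```python
import numpy as np

rng = np.random.default_rng(0)

def h1_frf(x, y, nseg, nfft):
    """H1 frequency-response estimate: averaged cross-power spectrum
    divided by averaged input autopower spectrum."""
    sxy = np.zeros(nfft // 2 + 1, dtype=complex)
    sxx = np.zeros(nfft // 2 + 1)
    for k in range(nseg):
        X = np.fft.rfft(x[k * nfft:(k + 1) * nfft])
        Y = np.fft.rfft(y[k * nfft:(k + 1) * nfft])
        sxy += np.conj(X) * Y
        sxx += np.abs(X) ** 2
    return sxy / sxx

# Synthetic test: the "structure" simply scales the input by 0.5,
# with a little additive measurement noise on the response.
nseg, nfft = 64, 256
x = rng.standard_normal(nseg * nfft)
y = 0.5 * x + 0.01 * rng.standard_normal(nseg * nfft)
H = h1_frf(x, y, nseg, nfft)
print("mean |H| =", np.abs(H).mean())  # close to the true gain of 0.5
```

    Averaging over segments suppresses the uncorrelated measurement noise, which is why the estimate converges to the true gain rather than being biased by it.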

  17. Phase errors due to speckles in laser fringe projection.

    PubMed

    Rosendahl, Sara; Hällstig, Emil; Gren, Per; Sjödahl, Mikael

    2010-04-10

    When measuring a three-dimensional shape with triangulation and projected interference fringes it is of interest to reduce speckle contrast without destroying the coherence of the projected light. A moving aperture is used to suppress the speckles and thereby reduce the phase error in the fringe image. It is shown that the phase error depends linearly on the ratio between the speckle contrast and the modulation of the fringes. In this investigation the spatial carrier method was used to extract the phase, where the phase error also depends on filtering the Fourier spectrum. An analytical expression for the phase error is derived. Both the speckle reduction and the theoretical expressions for the phase error are verified by simulations and experiments. It was concluded that a movement of the aperture by three aperture diameters during exposure of the image reduces the speckle contrast and hence the phase error by 60%. In the experiments, a phase error of 0.2 rad was obtained.

  18. A posteriori pointwise error estimates for the boundary element method

    SciTech Connect

    Paulino, G.H.; Gray, L.J.; Zarikian, V.

    1995-01-01

    This report presents a new approach for a posteriori pointwise error estimation in the boundary element method. The estimator relies upon the evaluation of hypersingular integral equations, and is therefore intrinsic to the boundary integral equation approach. This property allows some theoretical justification by mathematically correlating the exact and estimated errors. A methodology is developed for approximating the error on the boundary as well as in the interior of the domain. In the interior, error estimates for both the function and its derivatives (e.g. potential and interior gradients for potential problems, displacements and stresses for elasticity problems) are presented. Extensive computational experiments have been performed for the two dimensional Laplace equation on interior domains, employing Dirichlet and mixed boundary conditions. The results indicate that the error estimates successfully track the form of the exact error curve. Moreover, a reasonable estimate of the magnitude of the actual error is also obtained.

  19. Hyponatremia: management errors.

    PubMed

    Seo, Jang Won; Park, Tae Jin

    2006-11-01

    Rapid correction of hyponatremia is frequently associated with increased morbidity and mortality. Therefore, it is important to estimate the proper volume and type of infusate required to increase the serum sodium concentration predictably. The major common management errors during the treatment of hyponatremia are inadequate investigation, treatment with fluid restriction for diuretic-induced hyponatremia and treatment with fluid restriction plus intravenous isotonic saline simultaneously. We present two cases of management errors. One is about the problem of rapid correction of hyponatremia in a patient with sepsis and acute renal failure during continuous renal replacement therapy in the intensive care unit. The other is the case of hypothyroidism in which hyponatremia was aggravated by intravenous infusion of dextrose water and isotonic saline infusion was erroneously used to increase serum sodium concentration.

  20. Error-Free Software

    NASA Technical Reports Server (NTRS)

    1989-01-01

    001 is an integrated tool suite for automatically developing ultra-reliable models, simulations and software systems. Developed and marketed by Hamilton Technologies, Inc. (HTI), it has been applied in engineering, manufacturing, banking and software tools development. The software provides the ability to simplify the complex. A system developed with 001 can be a prototype or fully developed with production quality code. It is free of interface errors, consistent, logically complete and has no data or control flow errors. Systems can be designed, developed and maintained with maximum productivity. Margaret Hamilton, President of Hamilton Technologies, also directed the research and development of USE.IT, an earlier product which was the first computer aided software engineering product in the industry to concentrate on automatically supporting the development of an ultrareliable system throughout its life cycle. Both products originated in NASA technology developed under a Johnson Space Center contract.

  1. Modular error embedding

    DOEpatents

    Sandford, II, Maxwell T.; Handel, Theodore G.; Ettinger, J. Mark

    1999-01-01

    A method of embedding auxiliary information into the digital representation of host data containing noise in the low-order bits. The method applies to digital data representing analog signals, for example digital images. The method reduces the error introduced by other methods that replace the low-order bits with auxiliary information. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user through use of a digital key. The modular error embedding method includes a process to permute the order in which the host data values are processed. The method doubles the amount of auxiliary information that can be added to host data values, in comparison with bit-replacement methods for high bit-rate coding. The invention preserves human perception of the meaning and content of the host data, permitting the addition of auxiliary data in the amount of 50% or greater of the original host data.
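
    The error-reduction idea can be sketched as follows. This is our own simplified reconstruction from the abstract, not the patented algorithm (key-based permutation is omitted): instead of overwriting the k low-order bits of a sample, choose the carrier value congruent to the payload modulo 2^k that is closest to the original sample, which roughly halves the worst-case distortion away from the clip limits.

```python
def embed(sample, payload, k):
    """Embed a k-bit payload into an 8-bit sample: pick the value that is
    congruent to payload (mod 2**k) and closest to the original sample."""
    m = 1 << k
    base = sample - (sample % m) + payload
    candidates = [c for c in (base - m, base, base + m) if 0 <= c <= 255]
    return min(candidates, key=lambda c: abs(c - sample))

def extract(sample, k):
    """Recover the embedded payload from the low-order residue."""
    return sample % (1 << k)

k = 2
errors = [abs(embed(s, p, k) - s) for s in range(256) for p in range(1 << k)]
assert all(extract(embed(s, p, k), k) == p
           for s in range(256) for p in range(1 << k))
print("worst-case distortion:", max(errors))  # prints 3; only 2 away from clip limits
```

    Plain replacement of the k low-order bits can perturb a sample by up to 2^k - 1 anywhere; picking the nearest congruent value caps the interior distortion at 2^(k-1).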

  2. Estimation of Aerodynamic Stability Derivatives for Space Launch System and Impact on Stability Margins

    NASA Technical Reports Server (NTRS)

    Pei, Jing; Wall, John

    2013-01-01

    This paper describes the techniques involved in determining the aerodynamic stability derivatives for the frequency domain analysis of the Space Launch System (SLS) vehicle. Generally for launch vehicles, determination of the derivatives is fairly straightforward since the aerodynamic data is usually linear through a moderate range of angle of attack. However, if the wind tunnel data lacks proper corrections, then nonlinearities and asymmetric behavior may appear in the aerodynamic database coefficients. In this case, computing the derivatives becomes a non-trivial task. Errors in computing the nominal derivatives could lead to improper interpretation regarding the natural stability of the system and tuning of the controller parameters, which would impact both stability and performance. The aerodynamic derivatives are also provided at off-nominal operating conditions used for dispersed frequency domain Monte Carlo analysis. Finally, results are shown to illustrate that the effects of aerodynamic cross axis coupling can be neglected for the SLS configuration studied.

  3. Dose error analysis for a scanned proton beam delivery system

    NASA Astrophysics Data System (ADS)

    Coutrakon, G.; Wang, N.; Miller, D. W.; Yang, Y.

    2010-12-01

    All particle beam scanning systems are subject to dose delivery errors due to errors in position, energy and intensity of the delivered beam. In addition, finite scan speeds, beam spill non-uniformities, and delays in detector, detector electronics and magnet responses will all contribute errors in delivery. In this paper, we present dose errors for an 8 × 10 × 8 cm3 target of uniform water equivalent density with 8 cm spread out Bragg peak and a prescribed dose of 2 Gy. Lower doses are also analyzed and presented later in the paper. Beam energy errors and errors due to limitations of scanning system hardware have been included in the analysis. By using Gaussian shaped pencil beams derived from measurements in the research room of the James M Slater Proton Treatment and Research Center at Loma Linda, CA and executing treatment simulations multiple times, statistical dose errors have been calculated in each 2.5 mm cubic voxel in the target. These errors were calculated by delivering multiple treatments to the same volume and calculating the rms variation in delivered dose at each voxel in the target. The variations in dose were the result of random beam delivery errors such as proton energy, spot position and intensity fluctuations. The results show that with reasonable assumptions of random beam delivery errors, the spot scanning technique yielded an rms dose error in each voxel less than 2% or 3% of the 2 Gy prescribed dose. These calculated errors are within acceptable clinical limits for radiation therapy.
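
    The repeated-delivery statistic described above can be sketched in one dimension. This is a toy model with invented beam parameters, not the clinical simulation: a row of Gaussian pencil-beam spots is delivered many times with random spot-position and intensity errors, and the per-voxel rms of the delivered dose is computed across deliveries.

```python
import numpy as np

rng = np.random.default_rng(1)
voxels = np.arange(0.0, 80.0, 2.5)   # 1-D voxel centers (mm)
spots = np.arange(5.0, 80.0, 5.0)    # nominal spot positions (mm)
sigma = 6.0                          # pencil-beam width (mm), assumed

def deliver(pos_err_mm=0.5, intensity_err=0.01):
    """One delivery with random spot-position and intensity errors."""
    pos = spots + rng.normal(0.0, pos_err_mm, spots.size)
    w = 1.0 + rng.normal(0.0, intensity_err, spots.size)
    d = np.exp(-((voxels[:, None] - pos[None, :]) ** 2) / (2 * sigma ** 2))
    return (d * w).sum(axis=1)

doses = np.array([deliver() for _ in range(200)])
rms_pct = 100.0 * doses.std(axis=0) / doses.mean(axis=0).max()
print(f"max per-voxel rms dose error: {rms_pct.max():.2f} % of peak dose")
```

    With sub-millimeter position errors and percent-level intensity errors, the per-voxel rms stays at the few-percent level, which is the kind of result the paper reports for its full 3-D simulation.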

  4. Error-correction coding

    NASA Technical Reports Server (NTRS)

    Hinds, Erold W. (Principal Investigator)

    1996-01-01

    This report describes the progress made towards the completion of a specific task on error-correcting coding. The proposed research consisted of investigating the use of modulation block codes as the inner code of a concatenated coding system in order to improve the overall space link communications performance. The study proposed to identify and analyze candidate codes that will complement the performance of the overall coding system which uses the interleaved RS (255,223) code as the outer code.

  5. Surface temperature measurement errors

    SciTech Connect

    Keltner, N.R.; Beck, J.V.

    1983-05-01

    Mathematical models are developed for the response of surface mounted thermocouples on a thick wall. These models account for the significant causes of errors in both the transient and steady-state response to changes in the wall temperature. In many cases, closed form analytical expressions are given for the response. The cases for which analytical expressions are not obtained can be easily evaluated on a programmable calculator or a small computer.

  6. Bayesian Error Estimation Functionals

    NASA Astrophysics Data System (ADS)

    Jacobsen, Karsten W.

    The challenge of approximating the exchange-correlation functional in Density Functional Theory (DFT) has led to the development of numerous different approximations of varying accuracy on different calculated properties. There is therefore a need for reliable estimation of prediction errors within the different approximation schemes to DFT. The Bayesian Error Estimation Functionals (BEEF) have been developed with this in mind. The functionals are constructed by fitting to experimental and high-quality computational databases for molecules and solids including chemisorption and van der Waals systems. This leads to reasonably accurate general-purpose functionals with particular focus on surface science. The fitting procedure involves considerations on how to combine different types of data, and applies Tikhonov regularization and bootstrap cross validation. The methodology has been applied to construct GGA and metaGGA functionals with and without inclusion of long-ranged van der Waals contributions. The error estimation is made possible by the generation of not only a single functional but through the construction of a probability distribution of functionals represented by a functional ensemble. The use of the functional ensemble is illustrated on compound heats of formation and by investigations of the reliability of calculated catalytic ammonia synthesis rates.

  7. Influence of modulation frequency in rubidium cell frequency standards

    NASA Technical Reports Server (NTRS)

    Audoin, C.; Viennet, J.; Cyr, N.; Vanier, J.

    1983-01-01

    The error signal which is used to control the frequency of the quartz crystal oscillator of a passive rubidium cell frequency standard is considered. The value of the slope of this signal, for an interrogation frequency close to the atomic transition frequency is calculated and measured for various phase (or frequency) modulation waveforms, and for several values of the modulation frequency. A theoretical analysis is made using a model which applies to a system in which the optical pumping rate, the relaxation rates and the RF field are homogeneous. Results are given for sine-wave phase modulation, square-wave frequency modulation and square-wave phase modulation. The influence of the modulation frequency on the slope of the error signal is specified. It is shown that the modulation frequency can be chosen as large as twice the non-saturated full-width at half-maximum without a drastic loss of the sensitivity to an offset of the interrogation frequency from center line, provided that the power saturation factor and the amplitude of modulation are properly adjusted.
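
    The slope of such an error signal can be illustrated for square-wave frequency modulation of a Lorentzian resonance. This is a generic discriminator model, not the paper's full treatment of pumping and relaxation rates: the error signal is the difference between the line profile sampled on the two modulation half-cycles, and its slope at line center sets the sensitivity to a frequency offset.

```python
def lorentzian(detuning, fwhm):
    """Normalized Lorentzian line profile."""
    return 1.0 / (1.0 + (2.0 * detuning / fwhm) ** 2)

def error_signal(detuning, mod_depth, fwhm):
    """Square-wave FM discriminator: profile difference between the two
    half-cycle interrogation frequencies."""
    return (lorentzian(detuning + mod_depth, fwhm)
            - lorentzian(detuning - mod_depth, fwhm))

def slope_at_center(mod_depth, fwhm, eps=1e-6):
    """Numerical slope of the error signal at zero detuning."""
    return (error_signal(eps, mod_depth, fwhm)
            - error_signal(-eps, mod_depth, fwhm)) / (2 * eps)

fwhm = 1.0
for depth in (0.1, 0.29, 0.5, 1.0):
    print(f"mod depth {depth:4.2f} FWHM -> slope {slope_at_center(depth, fwhm):+.3f}")
```

    For this simple model the slope magnitude peaks when the modulation depth is about 0.29 FWHM and degrades only gradually for larger excursions, illustrating why a properly adjusted modulation amplitude preserves sensitivity over a wide range of settings.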

  8. Structural Damage Detection Using Frequency Domain Error Localization.

    DTIC Science & Technology

    1994-12-01

    Fragmentary OCR excerpt: Appendix D (FE Model / Computer Codes) gives a brief description of the MATLAB routines employed in this thesis. Cited references include Craig, R.R., Structural Dynamics: An Introduction to Computer Methods, pp. 383-387, John Wiley and Sons, Inc., 1981, and Guyan, R.J., "Reduction of Stiffness and Mass Matrices." The remainder is standard report-form boilerplate (distribution/availability statement).

  9. Low Frequency Predictive Skill Despite Structural Instability and Model Error

    DTIC Science & Technology

    2014-09-30

    Fragmentary excerpt: introduces and studies a suite of general Dynamic Stochastic Superresolution (DSS) algorithms for sparsely observed turbulent systems (M. Branicki, postdoc, with A. Majda, based on earlier theoretical work), and shows that resolving subgridscale turbulence through Dynamic Stochastic Superresolution utilizing aliased grids is a potential breakthrough for practical online…

  10. Mid-Range Spatial Frequency Errors in Optical Components.

    DTIC Science & Technology

    1983-01-01

    Fragmentary excerpt: Malacara (1978, pp. 356-359) describes the diffraction intensity distribution on either side of the focal plane and presents a diagram of the pattern. Cited references include Kintner, Eric C., and Richard M. Sillitto, "A New Analytic Method for Computing the Optical Transfer Function," OPTICA …, 1976; Malacara, Daniel (ed.), Optical Shop Testing, New York: John Wiley and Sons, 1978; and Reticon Corporation, Reticon G Series Data Sheet, Sunnyvale, CA: Reticon, 1976.

  11. Estimating standard errors in feature network models.

    PubMed

    Frank, Laurence E; Heiser, Willem J

    2007-05-01

    Feature network models are graphical structures that represent proximity data in a discrete space while using the same formalism that is the basis of least squares methods employed in multidimensional scaling. Existing methods to derive a network model from empirical data give only the best-fitting network and yield no standard errors for the parameter estimates. The additivity properties of networks make it possible to treat the model as a univariate (multiple) linear regression problem with positivity restrictions on the parameters. In the present study, both theoretical and empirical standard errors are obtained for the constrained regression parameters of a network model with known features. The performance of both types of standard error is evaluated using Monte Carlo techniques.
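
    A hypothetical illustration of the setup the abstract describes: a positivity-constrained linear regression fitted with nonnegative least squares, with empirical standard errors obtained by bootstrapping. The feature codings, weights, sample sizes, and noise level below are invented, and this sketch does not reproduce the paper's theoretical standard errors.

```python
import numpy as np
from scipy.optimize import nnls

# Treat the network model as d ~ X @ w with w >= 0 (X: known binary feature
# codings, d: observed dissimilarities), then bootstrap the observations to
# get empirical standard errors for the constrained estimates.
rng = np.random.default_rng(0)
n_obs, n_feat = 120, 4
X = rng.integers(0, 2, size=(n_obs, n_feat)).astype(float)  # binary codings
w_true = np.array([1.0, 0.5, 2.0, 0.0])                     # nonnegative weights
d = X @ w_true + rng.normal(scale=0.2, size=n_obs)          # noisy dissimilarities

w_hat, _ = nnls(X, d)                                       # constrained fit

# Bootstrap: refit on resampled observations, take the SD of the estimates.
boot = np.empty((500, n_feat))
for b in range(boot.shape[0]):
    idx = rng.integers(0, n_obs, size=n_obs)
    boot[b], _ = nnls(X[idx], d[idx])
se = boot.std(axis=0, ddof=1)
print("estimates:    ", np.round(w_hat, 3))
print("bootstrap SEs:", np.round(se, 3))
```

    Note how the positivity constraint matters for the last parameter, whose true value sits on the boundary at zero; its bootstrap distribution is clipped there, which is exactly the situation where naive unconstrained standard errors mislead.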

  12. Asteroid orbital error analysis: Theory and application

    NASA Technical Reports Server (NTRS)

    Muinonen, K.; Bowell, Edward

    1992-01-01

    We present a rigorous Bayesian theory for asteroid orbital error estimation in which the probability density of the orbital elements is derived from the noise statistics of the observations. For Gaussian noise in a linearized approximation the probability density is also Gaussian, and the errors of the orbital elements at a given epoch are fully described by the covariance matrix. The law of error propagation can then be applied to calculate past and future positional uncertainty ellipsoids (Cappellari et al. 1976, Yeomans et al. 1987, Whipple et al. 1991). To our knowledge, this is the first time a Bayesian approach has been formulated for orbital element estimation. In contrast to the classical Fisherian school of statistics, the Bayesian school allows a priori information to be formally present in the final estimation. However, Bayesian estimation does give the same results as Fisherian estimation when no a priori information is assumed (Lehtinen 1988, and references therein).
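
    The linearized law of error propagation the abstract invokes can be sketched in a few lines: if the elements x have covariance C, a derived quantity y = f(x) has, to first order, covariance C_y = J C J^T, with J the Jacobian of f at x. The function and numbers below are a toy two-dimensional stand-in, not asteroid data.

```python
import numpy as np

def propagate(f, x, C, h=1e-6):
    """Propagate covariance C of x through f via a finite-difference Jacobian."""
    x = np.asarray(x, dtype=float)
    y0 = np.atleast_1d(f(x))
    J = np.empty((y0.size, x.size))
    for j in range(x.size):
        dx = np.zeros_like(x)
        dx[j] = h
        J[:, j] = (np.atleast_1d(f(x + dx)) - np.atleast_1d(f(x - dx))) / (2 * h)
    return J @ C @ J.T  # first-order covariance of y = f(x)

# Toy example: Cartesian position from polar "elements" (r, t).
f = lambda x: np.array([x[0] * np.cos(x[1]), x[0] * np.sin(x[1])])
C = np.diag([0.01**2, 0.02**2])     # uncorrelated 1-sigma errors in r and t
Cy = propagate(f, [1.0, 0.3], C)
print(np.sqrt(np.diag(Cy)))          # 1-sigma positional uncertainties
```

    The eigenvectors and eigenvalues of the propagated covariance give the axes of the uncertainty ellipsoid at the new epoch, which is the quantity the cited works compute for past and future asteroid positions.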

  13. New Gear Transmission Error Measurement System Designed

    NASA Technical Reports Server (NTRS)

    Oswald, Fred B.

    2001-01-01

    The prime source of vibration and noise in a gear system is the transmission error between the meshing gears. Transmission error is caused by manufacturing inaccuracy, mounting errors, and elastic deflections under load. Gear designers often attempt to compensate for transmission error by modifying gear teeth. This is done traditionally by a rough "rule of thumb" or more recently under the guidance of an analytical code. In order for a designer to have confidence in a code, the code must be validated through experiment. NASA Glenn Research Center contracted with the Design Unit of the University of Newcastle in England for a system to measure the transmission error of spur and helical test gears in the NASA Gear Noise Rig. The new system measures transmission error optically by means of light beams directed by lenses and prisms through gratings mounted on the gear shafts. The amount of light that passes through both gratings is directly proportional to the transmission error of the gears. A photodetector circuit converts the light to an analog electrical signal. To increase accuracy and reduce "noise" due to transverse vibration, there are parallel light paths at the top and bottom of the gears. The two signals are subtracted via differential amplifiers in the electronics package. The output of the system is 40 mV/µm, giving a resolution in the time domain of better than 0.1 µm, and discrimination in the frequency domain of better than 0.01 µm. The new system will be used to validate gear analytical codes and to investigate mechanisms that produce vibration and noise in parallel axis gears.

  14. Estimating errors in least-squares fitting

    NASA Technical Reports Server (NTRS)

    Richter, P. H.

    1995-01-01

    While least-squares fitting procedures are commonly used in data analysis and are extensively discussed in the literature devoted to this subject, the proper assessment of errors resulting from such fits has received relatively little attention. The present work considers statistical errors in the fitted parameters, as well as in the values of the fitted function itself, resulting from random errors in the data. Expressions are derived for the standard error of the fit, as a function of the independent variable, for the general nonlinear and linear fitting problems. Additionally, closed-form expressions are derived for some examples commonly encountered in the scientific and engineering fields, namely ordinary polynomial and Gaussian fitting functions. These results have direct application to the assessment of the antenna gain and system temperature characteristics, in addition to a broad range of problems in data analysis. The effects of the nature of the data and the choice of fitting function on the ability to accurately model the system under study are discussed, and some general rules are deduced to assist workers intent on maximizing the amount of information obtained from a given set of measurements.
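
    For the linear case, the standard formulas behind the abstract can be sketched directly: with y = X @ beta + noise (independent, variance s2), the parameter covariance is C = s2 * inv(X^T X), and the standard error of the fitted function at a point x0 is sqrt(x0^T C x0). The quadratic model and noise level below are illustrative, not from the report.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 50)
X = np.vander(t, 3, increasing=True)       # design matrix: columns 1, t, t^2
y = 1.0 + 2.0 * t - 3.0 * t**2 + rng.normal(scale=0.1, size=t.size)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
dof = X.shape[0] - X.shape[1]
s2 = np.sum((y - X @ beta) ** 2) / dof     # unbiased noise-variance estimate
C = s2 * np.linalg.inv(X.T @ X)            # covariance of the fitted parameters

x0 = np.array([1.0, 0.5, 0.25])            # evaluate the fit at t = 0.5
se_fit = float(np.sqrt(x0 @ C @ x0))       # standard error of the fitted value
print("fitted params:", np.round(beta, 3), "| se of fit at t=0.5:", round(se_fit, 4))
```

    Evaluating se_fit over the whole range of t gives the "standard error of the fit as a function of the independent variable" that the abstract refers to; it is smallest near the center of the data and grows toward the ends.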

  15. Wavefront error sensing

    NASA Technical Reports Server (NTRS)

    Tubbs, Eldred F.

    1986-01-01

    A two-step approach to wavefront sensing for the Large Deployable Reflector (LDR) was examined as part of an effort to define wavefront-sensing requirements and to determine particular areas for more detailed study. A Hartmann test for coarse alignment, particularly segment tilt, seems feasible if LDR can operate at 5 microns or less. The direct measurement of the point spread function in the diffraction limited region may be a way to determine piston error, but this can only be answered by a detailed software model of the optical system. The question of suitable astronomical sources for either test must also be addressed.

  16. Detecting Errors in Programs

    DTIC Science & Technology

    1979-02-01

    Fragmentary excerpt: Fosdick, Lloyd D., "Detecting Errors in Programs." Recoverable text: "…from a finite set of tests [35,36]. Recently Howden [37] presented a result showing that for a particular class of Lindenmayer grammars it was possible…" The cited reference is Howden, W.E., "Lindenmayer grammars and symbolic testing," Information Processing Letters 7, 1 (Jan. 1978), 36-39. The remainder is standard report-form (SF 298) boilerplate.

  17. Model reference adaptive control with an augmented error signal

    NASA Technical Reports Server (NTRS)

    Monopoli, R. V.

    1974-01-01

    It is shown how globally stable model reference adaptive control systems may be designed when one has access to only the plant's input and output signals. Controllers for single input-single output, nonlinear, nonautonomous plants are developed based on Lyapunov's direct method and the Meyer-Kalman-Yacubovich lemma. Derivatives of the plant output are not required, but are replaced by filtered derivative signals. An augmented error signal replaces the error normally used, which is defined as the difference between the model and plant outputs. However, global stability is assured in the sense that the normally used error signal approaches zero asymptotically.
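
    A minimal simulation in the spirit of the abstract, with loudly flagged simplifications: a first-order plant with unknown parameters, a reference model, and Lyapunov-rule parameter updates driven by the output error. Because the plant is first order, this toy does not need the augmented error signal or the filtered derivative signals that Monopoli's design introduces for the general case; all gains and parameters are invented.

```python
import numpy as np

dt, T, gamma = 1e-3, 20.0, 2.0
a, b = -1.0, 2.0                    # "unknown" plant: y' = a*y + b*u
am, bm = 4.0, 4.0                   # reference model: ym' = -am*ym + bm*r
y = ym = th1 = th2 = 0.0
errs = []
for k in range(int(T / dt)):
    t = k * dt
    r = 1.0 if (t % 4.0) < 2.0 else -1.0   # square-wave reference input
    u = th1 * r + th2 * y                  # adaptive control law
    e = y - ym                             # model-following error
    # Lyapunov-rule (gradient) parameter adaptation
    th1 -= gamma * e * r * dt
    th2 -= gamma * e * y * dt
    # Euler integration of plant and reference model
    y += (a * y + b * u) * dt
    ym += (-am * ym + bm * r) * dt
    errs.append(abs(e))
print("mean |e|, first 10% of run:", round(float(np.mean(errs[:2000])), 4))
print("mean |e|, last 10% of run: ", round(float(np.mean(errs[-2000:])), 4))
```

    The error between plant and model output shrinks as the two adjustable gains approach their matching values, illustrating the asymptotic behavior the abstract claims for the (more general) augmented-error scheme.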

  18. More On The Decoder-Error Probability Of Reed-Solomon Codes

    NASA Technical Reports Server (NTRS)

    Cheung, Kar-Ming

    1989-01-01

    Paper extends theory of decoder-error probability for linear maximum-distance separable (MDS) codes. General class of error-correcting codes includes Reed-Solomon codes, important in communications with distant spacecraft, military communications, and compact-disk recording industry. Advancing beyond previous theoretical developments that placed upper bounds on decoder-error probabilities, author derives an exact formula for probability PE(u) that decoder will make error when u code symbols in error.
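
    The exact decoder-error formula builds on a classical exact ingredient: the weight distribution of MDS codes. For an [n, k, d] MDS code over GF(q) (d = n - k + 1, as for Reed-Solomon codes), the number of codewords of weight w is A_w = C(n,w)(q-1) * sum_{j=0}^{w-d} (-1)^j C(w-1,j) q^(w-d-j). The sketch below computes that distribution and sanity-checks it; it does not reproduce the paper's derivation of PE(u) itself.

```python
from math import comb

def mds_weight_distribution(n, k, q):
    """Exact weight distribution A_0..A_n of an [n, k] MDS code over GF(q)."""
    d = n - k + 1                   # MDS (Singleton-achieving) minimum distance
    A = [0] * (n + 1)
    A[0] = 1                        # the all-zero codeword
    for w in range(d, n + 1):
        A[w] = comb(n, w) * (q - 1) * sum(
            (-1) ** j * comb(w - 1, j) * q ** (w - d - j)
            for j in range(w - d + 1)
        )
    return A

# A [7, 3, 5] Reed-Solomon code over GF(8): the weights must account for all
# q^k codewords.
A = mds_weight_distribution(7, 3, 8)
print(A, "total codewords:", sum(A))
```

    Summing the distribution recovers q^k = 512 codewords, a quick consistency check; decoder-error analyses like the paper's count how many of these codewords land within decoding distance of a corrupted word.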

  19. Reducing medication errors in critical care: a multimodal approach

    PubMed Central

    Kruer, Rachel M; Jarrell, Andrew S; Latif, Asad

    2014-01-01

    The Institute of Medicine has reported that medication errors are the single most common type of error in health care, representing 19% of all adverse events and accounting for over 7,000 deaths annually. The frequency of medication errors in adult intensive care units can be as high as 947 per 1,000 patient-days, with a median of 105.9 per 1,000 patient-days. The formulation of drugs is a potential contributor to medication errors. Challenges related to drug formulation are specific to the various routes of medication administration, though errors associated with medication appearance and labeling occur across all drug formulations and routes of administration. Addressing these multifaceted challenges requires a multimodal approach. Changes in technology, training, systems, and safety culture are all strategies to potentially reduce medication errors related to drug formulation in the intensive care unit. PMID:25210478

  20. Main error factors, affecting inversion of EM data

    NASA Astrophysics Data System (ADS)

    Zuev, M. A.; Magomedov, M.; Korneev, V. A.; Goloshubin, G.; Zuev, J.; Brovman, Y.

    2013-12-01

    Inversions of EM data are complicated by a number of factors that must be taken into account. These factors can contribute tens of percent to the data values, concealing responses from target objects, which usually contribute at the level of a few percent only. We developed exact analytical solutions of the EM wave equations that properly incorporate the contributions of the following effects: 1) Finite source size, where the conventional dipole (zero-size) approximation introduces 10-40% error compared to a real-size source, which is needed to provide an adequate signal-to-noise ratio. 2) Complex topography. A three-parameter approach keeps the data misfits within a 0.5% corridor, while the topography effect itself can be up to 40%. 3) The grounding shadow effect, caused by return ground currents when the Tx-line vicinity is horizontally non-uniform. By keeping the survey setup within reasonable geometrical ratios, the shadow effect reduces to a single frequency-independent coefficient, which can be excluded from processing by using logarithmic derivatives. 4) The layers' wide spectral range. This leads to multi-layer spectral overlap, so each frequency is affected by many layers; that requires wide-spectral-range processing, making the typical few-frequency data acquisition unreliable. 5) Horizontal sensitivity. The usual picture of a target signal reflected from a Tx-Rx mid-point is valid only in a ray approximation, reliable in the far-field zone. Real EM surveys, however, usually operate in the near-field zone, so the Tx-Rx mid-point does not represent the layer, and a sensitivity distribution function must be computed for each layer for the subsequent 3D unification process. 6) A wide range of Rx directions from the mid-line Tx. Survey terrain often prevents placing Rx perpendicular to the Tx-line, and even small deviations without proper corrections cause significant inaccuracy. A radical simplification of the effect's description becomes possible after applying a

  1. Acoustic properties of pistonphones at low frequencies in the presence of pressure leakage and heat conduction

    NASA Astrophysics Data System (ADS)

    Zhang, Fan; He, Wen; He, Longbiao; Rong, Zuochao

    2015-12-01

    The wide interest in absolute pressure calibration of acoustic transducers at low frequencies has prompted the development of the pistonphone method. At low frequencies, the acoustic properties of pistonphones are governed by the pressure leakage and heat conduction effects. However, the traditional theory treats these two effects as a linear superposition of two independent correction models, which differs somewhat from their coupled effect at low frequencies. In this paper, the acoustic properties of pistonphones at low frequencies have been quantitatively studied in full consideration of the pressure leakage and heat conduction effects, and the explicit expression for the generated sound pressure has been derived. Of more practical significance, a coupled correction expression for these two effects has been derived. For two typical pistonphones, the NPL pistonphone and our developed infrasonic pistonphone, the coupled correction expression was compared with the traditional one. The results reveal that the traditional expression under-corrects by at most about 0.1 dB above the lower limiting frequencies of the two pistonphones, while at lower frequencies it over-corrects, with an explicit limit of about 3 dB. The coupled correction expression should therefore be adopted in the absolute pressure calibration of acoustic transducers at low frequencies. Furthermore, it is found that as frequency decreases the heat conduction effect approaches a limiting deviation of about 3 dB in pressure amplitude with a small phase difference, while the pressure leakage effect strongly attenuates the pressure amplitude and drives the phase difference toward 90°.
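
    The leakage behavior described in the last sentence can be illustrated with a simple first-order high-pass model: with a leak time constant tau, the generated pressure follows p/p_ideal = jωτ/(1 + jωτ), so the amplitude attenuates and the phase tends to 90° as frequency decreases. The time constant is invented, and this is only an illustration of the limiting behavior, not the paper's coupled leakage-plus-heat-conduction expression.

```python
import numpy as np

tau = 1.0                                  # leak time constant in seconds (illustrative)
for f_hz in [100.0, 1.0, 0.01]:
    w = 2 * np.pi * f_hz
    H = 1j * w * tau / (1 + 1j * w * tau)  # first-order high-pass leakage response
    print(f"{f_hz:7.2f} Hz: |H| = {abs(H):.4f}, "
          f"phase = {np.degrees(np.angle(H)):5.1f} deg")
```

    Well above the corner frequency 1/(2*pi*tau) the pistonphone behaves ideally (|H| near 1, phase near 0°), while far below it the amplitude rolls off and the phase approaches 90°, matching the limits quoted in the abstract.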

  2. Speech Errors, Error Correction, and the Construction of Discourse.

    ERIC Educational Resources Information Center

    Linde, Charlotte

    Speech errors have been used in the construction of production models of the phonological and semantic components of language, and for a model of interactional processes. Errors also provide insight into how speakers plan discourse and syntactic structure. Different types of discourse exhibit different types of error. The present data are taken…

  3. Frequency Combs

    NASA Astrophysics Data System (ADS)

    Hänsch, Theodor W.; Picqué, Nathalie

    Much of modern research in the field of atomic, molecular, and optical science relies on lasers, which were invented some 50 years ago and perfected in five decades of intense research and development. Today, lasers and photonic technologies impact most fields of science and they have become indispensable in our daily lives. Laser frequency combs were conceived a decade ago as tools for the precision spectroscopy of atomic hydrogen. Through the development of optical frequency comb techniques, a setup of size 1 × 1 m², good for precision measurements of any frequency, and even commercially available, has replaced the elaborate previous frequency-chain schemes for optical frequency measurements, which only worked for selected frequencies. A true revolution in optical frequency measurements has occurred, paving the way for the creation of all-optical clocks with a precision that might approach 10^-18. A decade later, frequency combs are now common equipment in all frequency-metrology-oriented laboratories. They are also becoming enabling tools for an increasing number of applications, from the calibration of astronomical spectrographs to molecular spectroscopy. This chapter first describes the principle of an optical frequency comb synthesizer. Some of the key technologies to generate such a frequency comb are then presented. Finally, a non-exhaustive overview of the growing applications is given.
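
    The reason a comb replaces an entire frequency chain fits in one equation: every comb line sits at f_n = f_ceo + n * f_rep, so two measurable microwave frequencies (the repetition rate and the carrier-envelope offset) plus a mode number tie any optical frequency to a radio-frequency clock. The numbers below are illustrative round values, not a real comb's parameters.

```python
f_rep = 250e6        # repetition rate: 250 MHz (microwave, directly countable)
f_ceo = 20e6         # carrier-envelope offset frequency: 20 MHz
n = 1_140_000        # mode number; comb lines are counted in the millions

f_n = f_ceo + n * f_rep
print(f"mode {n}: {f_n / 1e12:.6f} THz")   # an optical frequency near 285 THz
```

    In practice an unknown laser is measured by beating it against the nearest comb line, so the optical frequency is f_ceo + n * f_rep plus or minus the counted beat note; determining n needs only a rough (wavemeter-level) prior estimate, since adjacent modes are f_rep apart.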

  4. Errors in CT colonography.

    PubMed

    Trilisky, Igor; Ward, Emily; Dachman, Abraham H

    2015-10-01

    CT colonography (CTC) is a colorectal cancer screening modality which is becoming more widely implemented and has shown polyp detection rates comparable to those of optical colonoscopy. CTC has the potential to improve population screening rates due to its minimal invasiveness, no sedation requirement, potential for reduced cathartic examination, faster patient throughput, and cost-effectiveness. Proper implementation of a CTC screening program requires careful attention to numerous factors, including patient preparation prior to the examination, the technical aspects of image acquisition, and post-processing of the acquired data. A CTC workstation with dedicated software is required with integrated CTC-specific display features. Many workstations include computer-aided detection software which is designed to decrease errors of detection by detecting and displaying polyp-candidates to the reader for evaluation. There are several pitfalls which may result in false-negative and false-positive reader interpretation. We present an overview of the potential errors in CTC and a systematic approach to avoid them.

  5. Inborn Errors in Immunity

    PubMed Central

    Lionakis, M.S.; Hajishengallis, G.

    2015-01-01

    In recent years, the study of genetic defects arising from inborn errors in immunity has resulted in the discovery of new genes involved in the function of the immune system and in the elucidation of the roles of known genes whose importance was previously unappreciated. With the recent explosion in the field of genomics and the increasing number of genetic defects identified, the study of naturally occurring mutations has become a powerful tool for gaining mechanistic insight into the functions of the human immune system. In this concise perspective, we discuss emerging evidence that inborn errors in immunity constitute real-life models that are indispensable both for the in-depth understanding of human biology and for obtaining critical insights into common diseases, such as those affecting oral health. In the field of oral mucosal immunity, through the study of patients with select gene disruptions, the interleukin-17 (IL-17) pathway has emerged as a critical element in oral immune surveillance and susceptibility to inflammatory disease, with disruptions in the IL-17 axis now strongly linked to mucosal fungal susceptibility, whereas overactivation of the same pathways is linked to inflammatory periodontitis. PMID:25900229

  6. Error Analysis in Mathematics Education.

    ERIC Educational Resources Information Center

    Rittner, Max

    1982-01-01

    The article reviews the development of mathematics error analysis as a means of diagnosing students' cognitive reasoning. Errors specific to addition, subtraction, multiplication, and division are described, and suggestions for remediation are provided. (CL)

  7. Skylab water balance error analysis

    NASA Technical Reports Server (NTRS)

    Leonard, J. I.

    1977-01-01

    Estimates of the precision of the net water balance were obtained for the entire Skylab preflight and inflight phases as well as for the first two weeks of flight. Quantitative estimates of both total sampling errors and instrumentation errors were obtained. It was shown that measurement error is minimal in comparison to biological variability and little can be gained from improvement in analytical accuracy. In addition, a propagation of error analysis demonstrated that total water balance error could be accounted for almost entirely by the errors associated with body mass changes. Errors due to interaction between terms in the water balance equation (covariances) represented less than 10% of the total error. Overall, the analysis provides evidence that daily measurements of body water changes obtained from the indirect balance technique are reasonable, precise, and reliable. The method is not biased toward net retention or loss.

  8. Prospective issues for error detection.

    PubMed

    Blavier, Adélaïde; Rouy, Emmanuelle; Nyssen, Anne-Sophie; de Keyser, Véronique

    2005-06-10

    From the literature on error detection, the authors select several concepts relating error detection mechanisms and prospective memory features. They emphasize the central role of intention in the classification of the errors into slips/lapses/mistakes, in the error handling process and in the usual distinction between action-based and outcome-based detection. Intention is again a core concept in their investigation of prospective memory theory, where they point out the contribution of intention retrievals, intention persistence and output monitoring in the individual's possibilities for detecting their errors. The involvement of the frontal lobes in prospective memory and in error detection is also analysed. From the chronology of a prospective memory task, the authors finally suggest a model for error detection also accounting for neural mechanisms highlighted by studies on error-related brain activity.

  9. Error Sources in Asteroid Astrometry

    NASA Technical Reports Server (NTRS)

    Owen, William M., Jr.

    2000-01-01

    Asteroid astrometry, like any other scientific measurement process, is subject to both random and systematic errors, not all of which are under the observer's control. To design an astrometric observing program or to improve an existing one requires knowledge of the various sources of error, how different errors affect one's results, and how various errors may be minimized by careful observation or data reduction techniques.

  10. Error-related electrocorticographic activity in humans during continuous movements

    NASA Astrophysics Data System (ADS)

    Milekovic, Tomislav; Ball, Tonio; Schulze-Bonhage, Andreas; Aertsen, Ad; Mehring, Carsten

    2012-04-01

    Brain-machine interface (BMI) devices make errors in decoding. Detecting these errors online from neuronal activity can improve BMI performance by modifying the decoding algorithm and by correcting the errors made. Here, we study the neuronal correlates of two different types of errors which can both be employed in BMI: (i) the execution error, due to inaccurate decoding of the subjects’ movement intention; (ii) the outcome error, due to not achieving the goal of the movement. We demonstrate that, in electrocorticographic (ECoG) recordings from the surface of the human brain, strong error-related neural responses (ERNRs) for both types of errors can be observed. ERNRs were present in the low and high frequency components of the ECoG signals, with both signal components carrying partially independent information. Moreover, the observed ERNRs can be used to discriminate between error types, with high accuracy (≥83%) obtained already from single electrode signals. We found ERNRs in multiple cortical areas, including motor and somatosensory cortex. As the motor cortex is the primary target area for recording control signals for a BMI, an adaptive motor BMI utilizing these error signals may not require additional electrode implants in other brain areas.

  11. Medical error and related factors during internship and residency.

    PubMed

    Ahmadipour, Habibeh; Nahid, Mortazavi

    2015-01-01

    It is difficult to determine the real incidence of medical errors due to the lack of a precise definition of errors, as well as the failure to report them under certain circumstances. We carried out a cross-sectional study in Kerman University of Medical Sciences, Iran in 2013. The participants were selected through the census method. The data were collected using a self-administered questionnaire, which consisted of questions on the participants' demographic data and questions on the medical errors committed. The data were analysed by SPSS 19. It was found that 270 participants had committed medical errors. There was no significant difference in the frequency of errors committed by interns and residents. In the case of residents, the most common error was misdiagnosis and in that of interns, errors related to history-taking and physical examination. Considering that medical errors are common in the clinical setting, the education system should train interns and residents to prevent the occurrence of errors. In addition, the system should develop a positive attitude among them so that they can deal better with medical errors.

  12. Measurement Error. For Good Measure....

    ERIC Educational Resources Information Center

    Johnson, Stephen; Dulaney, Chuck; Banks, Karen

    No test, however well designed, can measure a student's true achievement because numerous factors interfere with the ability to measure achievement. These factors are sources of measurement error, and the goal in creating tests is to have as little measurement error as possible. Error can result from the test design, factors related to individual…

  13. Uncertainty quantification and error analysis

    SciTech Connect

    Higdon, Dave M; Anderson, Mark C; Habib, Salman; Klein, Richard; Berliner, Mark; Covey, Curt; Ghattas, Omar; Graziani, Carlo; Seager, Mark; Sefcik, Joseph; Stark, Philip

    2010-01-01

    UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.

  14. Feature Referenced Error Correction Apparatus.

    DTIC Science & Technology

    A feature referenced error correction apparatus utilizing the multiple images of the interstage level image format to compensate for positional...images and by the generation of an error correction signal in response to the sub-frame registration errors. (Author)

  15. Systematic errors for a Mueller matrix dual rotating compensator ellipsometer.

    PubMed

    Broch, Laurent; En Naciri, Aotmane; Johann, Luc

    2008-06-09

    The characterization of anisotropic materials and complex systems by ellipsometry has pushed the design of instruments to require the measurement of the full reflection Mueller matrix of the sample with great precision. Mueller matrix ellipsometers have therefore emerged over the past twenty years. The values of some coefficients of the matrix can be very small, and errors due to noise or systematic errors can distort the analysis. We present a detailed characterization of the systematic errors for a Mueller matrix ellipsometer in the dual-rotating compensator configuration. Starting from a general formalism, we derive explicit first-order expressions for the errors on all the coefficients of the Mueller matrix of the sample. The errors caused by inaccuracy of the azimuthal arrangement of the optical components and residual ellipticity introduced by imperfect optical elements are shown. A new method based on a four-zone averaging measurement is proposed to cancel the systematic errors.

  16. Attitude control with realization of linear error dynamics

    NASA Technical Reports Server (NTRS)

    Paielli, Russell A.; Bach, Ralph E.

    1993-01-01

    An attitude control law is derived to realize linear unforced error dynamics with the attitude error defined in terms of rotation group algebra (rather than vector algebra). Euler parameters are used in the rotational dynamics model because they are globally nonsingular, but only the minimal three Euler parameters are used in the error dynamics model because they have no nonlinear mathematical constraints to prevent the realization of linear error dynamics. The control law is singular only when the attitude error angle is exactly pi rad about any eigenaxis, and a simple intuitive modification at the singularity allows the control law to be used globally. The forced error dynamics are nonlinear but stable. Numerical simulation tests show that the control law performs robustly for both initial attitude acquisition and attitude control.
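
    A small sketch of the key idea above, with the caveat that it uses the quaternion's vector part rather than the paper's minimal three-Euler-parameter representation, and that all gains and attitudes are illustrative: the attitude error is formed by rotation-group composition, q_err = conj(q_ref) * q, not by subtracting attitude vectors, and a PD-style law acts on that error.

```python
import numpy as np

def qmul(a, b):
    """Hamilton product of scalar-first quaternions."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def qconj(q):
    """Quaternion conjugate (inverse for unit quaternions)."""
    return q * np.array([1.0, -1.0, -1.0, -1.0])

def control_torque(q, q_ref, omega, Kp=2.0, Kd=3.0):
    """PD-style attitude law on the vector part of the error quaternion."""
    qe = qmul(qconj(q_ref), q)       # group-algebra attitude error
    if qe[0] < 0:                    # resolve the double cover: shortest rotation
        qe = -qe
    return -Kp * qe[1:] - Kd * omega

# Zero attitude error and zero body rate command zero torque.
q = q_ref = np.array([1.0, 0.0, 0.0, 0.0])
print(control_torque(q, q_ref, np.zeros(3)))
```

    The sign flip on the error quaternion mirrors the singularity discussed in the abstract: the law is ill-defined only at an attitude error of exactly pi radians, where both hemispheres of the double cover are equally distant.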

  17. On typographical errors.

    PubMed

    Hamilton, J W

    1993-09-01

    In his overall assessment of parapraxes in 1901, Freud included typographical mistakes but did not elaborate on or study this subject nor did he have anything to say about it in his later writings. This paper lists textual errors from a variety of current literary sources and explores the dynamic importance of their execution and the failure to make necessary corrections during the editorial process. While there has been a deemphasis of the role of unconscious determinants in the genesis of all slips as a result of recent findings in cognitive psychology, the examples offered suggest that, with respect to motivation, lapses in compulsivity contribute to their original commission while thematic compliance and voyeuristic issues are important in their not being discovered prior to publication.

  18. Method and apparatus for reducing quantization error in laser gyro test data through high speed filtering

    SciTech Connect

    Mark, J.G.; Brown, A.K.; Matthews, A.

    1987-01-06

    A method is described for processing ring laser gyroscope test data comprising the steps of: (a) accumulating the data over a preselected sample period; and (b) filtering the data at a predetermined frequency so that non-time dependent errors are reduced by a substantially greater amount than are time dependent errors; then (c) analyzing the random walk error of the filtered data.
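
    An illustrative numerical sketch of the claim in steps (b)-(c), with made-up noise levels and block size: averaging accumulated samples over a period shrinks white, non-time-dependent error (such as quantization) far more than it shrinks time-correlated random-walk error, which is what lets the random walk be analyzed from the filtered data.

```python
import numpy as np

rng = np.random.default_rng(42)
N, block = 100_000, 100
quant = rng.uniform(-0.5, 0.5, N)            # white, quantization-like error
walk = np.cumsum(rng.normal(0, 0.01, N))     # slowly varying random-walk error

def block_average(x, m):
    """Average consecutive blocks of m samples (a simple low-pass filter)."""
    return x[: len(x) // m * m].reshape(-1, m).mean(axis=1)

for name, x in [("quantization", quant), ("random walk", walk)]:
    raw, filt = x.std(), block_average(x, block).std()
    print(f"{name:12s} std: raw {raw:.4f} -> averaged {filt:.4f}")
```

    Averaging m white-noise samples divides their standard deviation by sqrt(m) (a factor of 10 here), while the block means of a random walk retain essentially the full spread of the walk, so the filtered record isolates the time-dependent error.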

  19. Article Errors in the English Writing of Saudi EFL Preparatory Year Students

    ERIC Educational Resources Information Center

    Alhaisoni, Eid; Gaudel, Daya Ram; Al-Zuoud, Khalid M.

    2017-01-01

    This study aims at providing a comprehensive account of the types of errors produced by Saudi EFL students enrolled in the preparatory year programe in their use of articles, based on the Surface Structure Taxonomies (SST) of errors. The study describes the types, frequency and sources of the definite and indefinite article errors in writing…

  20. A comprehensive analysis of translational missense errors in the yeast Saccharomyces cerevisiae.

    PubMed

    Kramer, Emily B; Vallabhaneni, Haritha; Mayer, Lauren M; Farabaugh, Philip J

    2010-09-01

    The process of protein synthesis must be sufficiently rapid and sufficiently accurate to support continued cellular growth. Failure in speed or accuracy can have dire consequences, including disease in humans. Most estimates of the accuracy come from studies of bacterial systems, principally Escherichia coli, and have involved incomplete analysis of possible errors. We recently used a highly quantitative system to measure the frequency of all types of misreading errors by a single tRNA in E. coli. That study found a wide variation in error frequencies among codons; a major factor causing that variation is competition between the correct (cognate) and incorrect (near-cognate) aminoacyl-tRNAs for the mutant codon. Here we extend that analysis to measure the frequency of missense errors by two tRNAs in a eukaryote, the yeast Saccharomyces cerevisiae. The data show that in yeast errors vary by codon from a low of 4 x 10(-5) to a high of 6.9 x 10(-4) per codon and that error frequency is in general about threefold lower than in E. coli, which may suggest that yeast has additional mechanisms that reduce missense errors. Error rate again is strongly influenced by tRNA competition. Surprisingly, missense errors involving wobble position mispairing were much less frequent in S. cerevisiae than in E. coli. Furthermore, the error-inducing aminoglycoside antibiotic, paromomycin, which stimulates errors on all error-prone codons in E. coli, has a more codon-specific effect in yeast.

  1. Phonologic Error Distributions in the Iowa-Nebraska Articulation Norms Project: Word-Initial Consonant Clusters.

    ERIC Educational Resources Information Center

    Smit, Ann Bosma

    1993-01-01

    The errors on word-initial consonant clusters made by children (ages 2-9) in the Iowa-Nebraska Articulation Norms Project were tabulated by age range and frequency. Error data showed support for previous research in the acquisition of clusters. Cluster errors are discussed in terms of theories of phonologic development. (Author/JDD)

  2. Rapid mapping of volumetric errors

    SciTech Connect

    Krulewich, D.; Hale, L.; Yordy, D.

    1995-09-13

    This paper describes a relatively inexpensive, fast, and easy to execute approach to mapping the volumetric errors of a machine tool, coordinate measuring machine, or robot. An error map is used to characterize a machine or to improve its accuracy by compensating for the systematic errors. The method consists of three steps: (1) modeling the relationship between the volumetric error and the current state of the machine; (2) acquiring error data based on length measurements throughout the work volume; and (3) optimizing the model to the particular machine.

  3. Register file soft error recovery

    DOEpatents

    Fleischer, Bruce M.; Fox, Thomas W.; Wait, Charles D.; Muff, Adam J.; Watson, III, Alfred T.

    2013-10-15

    Register file soft error recovery including a system that includes a first register file and a second register file that mirrors the first register file. The system also includes an arithmetic pipeline for receiving data read from the first register file, and error detection circuitry to detect whether the data read from the first register file includes corrupted data. The system further includes error recovery circuitry to insert an error recovery instruction into the arithmetic pipeline in response to detecting the corrupted data. The inserted error recovery instruction replaces the corrupted data in the first register file with a copy of the data from the second register file.

  4. New time-domain three-point error separation methods for measurement roundness and spindle error motion

    NASA Astrophysics Data System (ADS)

    Liu, Wenwen; Tao, Tingting; Zeng, Hao

    2016-10-01

    Error separation is a key technology for online measurement of spindle radial error motion or artifact form error, such as roundness and cylindricity. Three time-domain three-point error separation methods are proposed based on solving the minimum-norm solution of a set of linear equations. Three laser displacement sensors collect a set of discrete measurements, from which a group of linear measurement equations is derived according to the criterion of prior separation of form (PSF), prior separation of spindle error motion (PSM), or synchronous separation of both form and spindle error motion (SSFM). The work discusses the correlations between the angles of the three sensors in the measuring system, the rank of the coefficient matrix in the measurement equations, and harmonic distortions in the separation results; reveals the regularities of the first-order harmonic distortion; and recommends the applicable situation of each method. Theoretical research and large-scale simulations show that SSFM is the most precise method because of its lower distortion.
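
    The minimum-norm solution mentioned in the abstract can be obtained with a Moore-Penrose pseudoinverse. The following is a generic sketch (my own illustration, not the authors' method in detail) of separating an underdetermined set of linear measurement equations this way; the matrix and right-hand side are hypothetical stand-ins for the equations assembled from the three sensor readings.

```python
import numpy as np

# Hypothetical underdetermined system A x = b (fewer equations than
# unknowns), standing in for the linear measurement equations assembled
# from the three displacement-sensor readings.  The minimum-norm solution
# is x = A^+ b, with A^+ the Moore-Penrose pseudoinverse.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 6))   # 4 equations, 6 unknowns
b = rng.standard_normal(4)

x_min_norm = np.linalg.pinv(A) @ b
residual = np.linalg.norm(A @ x_min_norm - b)   # ~0: equations satisfied

# Any other solution differs by a null-space vector of A and can only
# have a larger norm, since x_min_norm lies in the row space of A.
null_step = np.linalg.svd(A)[2][-1]             # one null-space direction
```

    Because the pseudoinverse solution is orthogonal to the null space of A, perturbing it along any null-space direction leaves the equations satisfied but strictly increases the solution norm.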

  5. Error analysis in the measurement of average power with application to switching controllers

    NASA Technical Reports Server (NTRS)

    Maisel, J. E.

    1979-01-01

    The behavior of the power measurement error due to the frequency responses of first order transfer functions between the input sinusoidal voltage, input sinusoidal current and the signal multiplier was studied. It was concluded that this measurement error can be minimized if the frequency responses of the first order transfer functions are identical.
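
    The paper's conclusion can be illustrated numerically. In the sketch below (my own example, not the paper's derivation), an in-phase sinusoidal voltage/current pair each passes through a first-order transfer function H(jω) = 1/(1 + jωτ) before the multiplier: identical time constants leave only a gain error that could be calibrated out, while mismatched time constants add a phase-difference error on top.

```python
import cmath
import math

def measured_over_true(w, tau_v, tau_i):
    """Ratio of measured to true average power for an in-phase v, i pair.

    Each channel sees a first-order transfer function H = 1/(1 + j*w*tau).
    The multiplier's averaged output is 0.5*|Hv|*|Hi|*V*I*cos(th_v - th_i),
    versus a true average power of 0.5*V*I, so the ratio below is the
    measured-to-true power ratio.
    """
    hv = 1.0 / (1.0 + 1j * w * tau_v)
    hi = 1.0 / (1.0 + 1j * w * tau_i)
    return abs(hv) * abs(hi) * math.cos(cmath.phase(hv) - cmath.phase(hi))

# Identical time constants: the phase shifts cancel, leaving only a
# (calibratable) gain error of 1 - |H|^2.
matched_err = 1.0 - measured_over_true(1.0, 0.5, 0.5)
# Mismatched time constants: an extra cos(phase difference) error appears.
mismatched_err = 1.0 - measured_over_true(1.0, 0.5, 1.0)
```

    With ωτ_v = 0.5 and ωτ_i = 1.0 the measured power falls to 0.6 of the true value, versus 0.8 for the matched case, consistent with the abstract's conclusion that identical frequency responses minimize the error.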

  6. MIMO free-space optical communication employing coherent BPOLSK modulation in atmospheric optical turbulence channel with pointing errors

    NASA Astrophysics Data System (ADS)

    Prabu, K.; Kumar, D. Sriram

    2015-05-01

    An optical wireless communication system is an alternative to radio frequency communication, but atmospheric-turbulence-induced fading and misalignment fading are the main impairments affecting an optical signal propagating through the turbulence channel. Misalignment fading gives rise to pointing errors, which degrade the bit error rate (BER) performance of the free-space optics (FSO) system. In this paper, we study the BER performance of the multiple-input multiple-output (MIMO) FSO system employing coherent binary polarization shift keying (BPOLSK) in a gamma-gamma (G-G) channel with pointing errors. The BER performance of the BPOLSK-based MIMO FSO system is compared with that of the single-input single-output (SISO) system. The average BER performance of the systems is also analyzed and compared with and without pointing errors. Novel closed-form expressions of BER are derived for the MIMO FSO system with maximal ratio combining (MRC) and equal gain combining (EGC) diversity techniques. The analytical results show that pointing errors can severely degrade the performance of the system.

  7. Towards error-free interaction.

    PubMed

    Tsoneva, Tsvetomira; Bieger, Jordi; Garcia-Molina, Gary

    2010-01-01

    Human-machine interaction (HMI) relies on pattern recognition algorithms that are not perfect. To improve the performance and usability of these systems we can utilize the neural mechanisms in the human brain dealing with error awareness. This study aims to design a practical error detection algorithm using electroencephalogram signals that can be integrated in an HMI system. Thus, real-time operation, customization, and operation convenience are important. We address these requirements in an experimental framework simulating machine errors. Our results confirm the presence of brain potentials related to processing of machine errors. These are used to implement an error detection algorithm emphasizing the differences in error processing on a per-subject basis. The proposed algorithm uses the individual best bipolar combination of electrode sites and requires short calibration. The single-trial error detection performance on six subjects, characterized by the area under the ROC curve, ranges from 0.75 to 0.98.
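
    The area under the ROC curve used here as the performance measure equals the probability that a randomly chosen error trial scores higher than a randomly chosen correct trial (the Mann-Whitney formulation). A small self-contained sketch with made-up detector scores (the paper's actual scores are not given):

```python
def roc_auc(error_scores, correct_scores):
    """AUC = P(score on an error trial > score on a correct trial),
    counting ties as one half (Mann-Whitney U / (n1*n2))."""
    wins = 0.0
    for e in error_scores:
        for c in correct_scores:
            if e > c:
                wins += 1.0
            elif e == c:
                wins += 0.5
    return wins / (len(error_scores) * len(correct_scores))

# Hypothetical single-trial detector outputs for one subject.
auc = roc_auc([0.9, 0.8, 0.6, 0.7], [0.3, 0.5, 0.65, 0.2])
```

    A perfectly separating detector gives an AUC of 1.0; chance performance gives 0.5, so the reported 0.75-0.98 range indicates above-chance detection for every subject.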

  8. Frequency spectrum analyzer with phase-lock

    DOEpatents

    Boland, Thomas J.

    1984-01-01

    A frequency-spectrum analyzer with phase-lock for analyzing the frequency and amplitude of an input signal is comprised of a voltage controlled oscillator (VCO) which is driven by a ramp generator, and a phase error detector circuit. The phase error detector circuit measures the difference in phase between the VCO and the input signal, and drives the VCO locking it in phase momentarily with the input signal. The input signal and the output of the VCO are fed into a correlator which transfers the input signal to a frequency domain, while providing an accurate absolute amplitude measurement of each frequency component of the input signal.

  9. Developing an Error Model for Ionospheric Phase Distortions in L-Band SAR and InSAR Data

    NASA Astrophysics Data System (ADS)

    Meyer, F. J.; Agram, P. S.

    2014-12-01

    Many of the recent and upcoming spaceborne SAR systems operate in the L-band frequency range. The choice of L-band has a number of advantages, especially for InSAR applications: deeper penetration into vegetation, higher coherence, and higher sensitivity to soil moisture. While L-band SARs are undoubtedly beneficial for a number of earth science disciplines, their signals are susceptible to path delay effects in the ionosphere. Many recent publications indicate that the ionosphere can have detrimental effects on InSAR coherence and phase. It has also been shown that the magnitude of these effects strongly depends on the time of day and geographic location of the image acquisition as well as on the coincident solar activity. Hence, in order to provide realistic error estimates for geodetic measurements derived from L-band InSAR, an error model needs to be developed that is capable of describing ionospheric noise. With this paper, we present a global ionospheric error model that is currently being developed in support of NASA's future L-band SAR mission NISAR. The system is based on a combination of empirical data analysis and modeling input from the ionospheric model WBMOD, and is capable of predicting ionosphere-induced phase noise as a function of space and time. The error model parameterizes ionospheric noise using a power spectrum model and provides the parameters of this model in a global 1x1 degree raster. From the power law model, ionospheric errors in deformation estimates can be calculated. In polar regions, our error model relies on a statistical analysis of ionospheric phase noise in a large number of SAR data from previous L-band SAR missions such as ALOS PALSAR and JERS-1. The focus on empirical analyses is due to limitations of WBMOD in high-latitude areas. Outside of the polar regions, the ionospheric model WBMOD is used to derive ionospheric structure parameters as a function of solar activity. The structure parameters are
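
    To make the power-spectrum parameterization concrete, here is a toy sketch of how such a model is typically used (my own simplified parameterization, not the NISAR error model itself): take a one-dimensional power-law spectrum P(k) = C·k^(-p) and integrate it over a wavenumber band to get the phase variance that band contributes.

```python
# Toy power-law phase-noise model: P(k) = C * k**(-p).  The parameters
# (C, p) play the role of the per-cell spectrum parameters the abstract
# describes; C acts as the spectral strength (e.g., rising with solar
# activity) and p as the spectral slope.
def band_variance(C, p, k0, k1):
    """Phase variance from wavenumbers [k0, k1]: integral of C*k^-p, p != 1."""
    return C * (k1 ** (1.0 - p) - k0 ** (1.0 - p)) / (1.0 - p)

# Doubling the spectral strength doubles the predicted phase variance
# in every band, which is how solar-activity dependence enters the model.
quiet = band_variance(1.0, 2.5, 0.1, 1.0)
active = band_variance(2.0, 2.5, 0.1, 1.0)
```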

  10. Sub-nanometer periodic nonlinearity error in absolute distance interferometers.

    PubMed

    Yang, Hongxing; Huang, Kaiqi; Hu, Pengcheng; Zhu, Pengfei; Tan, Jiubin; Fan, Zhigang

    2015-05-01

    Periodic nonlinearity, which can result in errors at the nanometer scale, has become a main problem limiting absolute distance measurement accuracy. In order to eliminate this error, a new integrated interferometer with a non-polarizing beam splitter is developed, which eliminates frequency and/or polarization mixing and greatly relaxes the strict requirement on the laser source polarization. By combining a retro-reflector and an angle prism, the reference and measuring beams can be spatially separated so that their optical paths do not overlap. Thus, the main causes of the periodic nonlinearity error, i.e., frequency and/or polarization mixing and leakage of the beam, are eliminated. Experimental results indicate that the periodic phase error is kept within 0.0018°.

  11. Computerised physician order entry-related medication errors: analysis of reported errors and vulnerability testing of current systems

    PubMed Central

    Schiff, G D; Amato, M G; Eguale, T; Boehne, J J; Wright, A; Koppel, R; Rashidee, A H; Elson, R B; Whitney, D L; Thach, T-T; Bates, D W; Seger, A C

    2015-01-01

    Importance Medication computerised provider order entry (CPOE) has been shown to decrease errors and is being widely adopted. However, CPOE also has the potential for introducing or contributing to errors. Objectives The objectives of this study are to (a) analyse medication error reports where CPOE was reported as a ‘contributing cause’ and (b) develop ‘use cases’ based on these reports to test the vulnerability of current CPOE systems to these errors. Methods A review of medication errors reported to the United States Pharmacopeia MEDMARX reporting system was made, and a taxonomy was developed for CPOE-related errors. For each error we evaluated what went wrong and why, and identified potential prevention strategies and recurring error scenarios. These scenarios were then used to test the vulnerability of leading CPOE systems, asking typical users to enter these erroneous orders to assess the degree to which these problematic orders could be entered. Results Between 2003 and 2010, 1.04 million medication errors were reported to MEDMARX, of which 63 040 were reported as CPOE related. A review of 10 060 CPOE-related cases was used to derive 101 codes describing what went wrong, 67 codes describing reasons why errors occurred, 73 codes describing potential prevention strategies and 21 codes describing recurring error scenarios. The ability to enter these erroneous order scenarios was tested on 13 CPOE systems at 16 sites. Overall, 298 (79.5%) of the erroneous orders could be entered, including 100 (28.0%) ‘easily’ placed and another 101 (28.3%) placed with only minor workarounds and no warnings. Conclusions and relevance Medication error reports provide valuable information for understanding CPOE-related errors. Reports were useful for developing the taxonomy and identifying recurring errors to which current CPOE systems are vulnerable. Enhanced monitoring, reporting and testing of CPOE systems are important to improve CPOE safety. PMID:25595599

  12. Improved Error Thresholds for Measurement-Free Error Correction

    NASA Astrophysics Data System (ADS)

    Crow, Daniel; Joynt, Robert; Saffman, M.

    2016-09-01

    Motivated by limitations and capabilities of neutral atom qubits, we examine whether measurement-free error correction can produce practical error thresholds. We show that this can be achieved by extracting redundant syndrome information, giving our procedure extra fault tolerance and eliminating the need for ancilla verification. The procedure is particularly favorable when multiqubit gates are available for the correction step. Simulations of the bit-flip, Bacon-Shor, and Steane codes indicate that coherent error correction can produce threshold error rates that are on the order of 10^-3 to 10^-4, comparable with or better than measurement-based values, and much better than previous results for other coherent error correction schemes. This indicates that coherent error correction is worthy of serious consideration for achieving protected logical qubits.

  13. Improved Error Thresholds for Measurement-Free Error Correction.

    PubMed

    Crow, Daniel; Joynt, Robert; Saffman, M

    2016-09-23

    Motivated by limitations and capabilities of neutral atom qubits, we examine whether measurement-free error correction can produce practical error thresholds. We show that this can be achieved by extracting redundant syndrome information, giving our procedure extra fault tolerance and eliminating the need for ancilla verification. The procedure is particularly favorable when multiqubit gates are available for the correction step. Simulations of the bit-flip, Bacon-Shor, and Steane codes indicate that coherent error correction can produce threshold error rates that are on the order of 10^{-3} to 10^{-4}, comparable with or better than measurement-based values, and much better than previous results for other coherent error correction schemes. This indicates that coherent error correction is worthy of serious consideration for achieving protected logical qubits.

  14. Dissociation of inflectional and derivational morphology.

    PubMed

    Miceli, G; Caramazza, A

    1988-09-01

    A patient is described who makes morphological errors in spontaneous sentence production and in repetition of single words. The great majority of these errors were substitutions of inflectional affixes. The patient did make some derivational errors in repeating derived words but almost never made such errors for nonderived words. The inflectional errors for adjectives and nouns occurred mostly on the plural forms for nouns and adjectives and on the feminine form for adjectives. For verbs, inflectional errors were produced for all tense, aspect, and mood forms. There were no indications that these latter verb features constrained the form of inflectional errors produced. The results are interpreted as support for the thesis that morphological processes are located in the lexicon but that inflectional and derivational processes constitute autonomous subcomponents of the lexicon.

  15. Contour Error Map Algorithm

    NASA Technical Reports Server (NTRS)

    Merceret, Francis; Lane, John; Immer, Christopher; Case, Jonathan; Manobianco, John

    2005-01-01

    The contour error map (CEM) algorithm and the software that implements the algorithm are means of quantifying correlations between sets of time-varying data that are binarized and registered on spatial grids. The present version of the software is intended for use in evaluating numerical weather forecasts against observational sea-breeze data. In cases in which observational data come from off-grid stations, it is necessary to preprocess the observational data to transform them into gridded data. First, the wind direction is gridded and binarized so that D(i,j;n) is the input to CEM based on forecast data and d(i,j;n) is the input to CEM based on gridded observational data. Here, i and j are spatial indices representing 1.25-km intervals along the west-to-east and south-to-north directions, respectively; and n is a time index representing 5-minute intervals. A binary value of D or d = 0 corresponds to an offshore wind, whereas a value of D or d = 1 corresponds to an onshore wind. CEM includes two notable subalgorithms: One identifies and verifies sea-breeze boundaries; the other, which can be invoked optionally, performs an image-erosion function for the purpose of attempting to eliminate river-breeze contributions in the wind fields.

  16. Frequency-Wavenumber Spectrum for GATE, Phase I Rainfields.

    NASA Astrophysics Data System (ADS)

    Nakamoto, Shoichiro; Valdés, Juan B.; North, Gerald R.

    1990-09-01

    The oceanic rainfall frequency-wavenumber spectrum and its associated space-time correlation have been evaluated from subsets of GATE Phase I data. The records, of 4 days' duration, were sampled at 15-minute intervals in 4 × 4 km grid boxes over a 400 km diameter hexagon. In the low-frequency, low-wavenumber region the results coincide with those obtained using the stochastic model proposed by North and Nakamoto. From the derived spectrum, the inherent time and space scales of the stochastic model were determined to be approximately 13 hours and 36 km. The space-time correlation function evaluated from the frequency-wavenumber spectrum agreed with that obtained directly from the GATE Phase I records. The formalism proposed by North and Nakamoto was taken together with the derived spectrum to compute the mean square sampling error due to intermittent visits of a spaceborne sensor. The sampling error was estimated to be on the order of 10% for monthly mean rainfall averaged over 500 × 500 km boxes, which meets the scientific requirements of the TRMM mission. This result is consistent with those previously reported in the literature.

  17. Correction of single frequency altimeter measurements for ionosphere delay

    SciTech Connect

    Schreiner, W.S.; Markin, R.E.; Born, G.H.

    1997-03-01

    Satellite altimetry has become a very powerful tool for the study of ocean circulation and variability and provides data for understanding important issues related to climate and global change. This study is a preliminary analysis of the accuracy of various ionosphere models to correct single frequency altimeter height measurements for ionospheric path delay. In particular, research focused on adjusting empirical and parameterized ionosphere models in the parameterized real-time ionospheric specification model (PRISM) 1.2 using total electron content (TEC) data from the global positioning system (GPS). The types of GPS data used to adjust PRISM included GPS line-of-sight (LOS) TEC data mapped to the vertical, and a grid of GPS-derived TEC data in a sun-fixed longitude frame. The adjusted PRISM TEC values, as well as predictions by IRI-90, a climatological model, were compared to TOPEX/Poseidon (T/P) TEC measurements from the dual-frequency altimeter for a number of T/P tracks. When adjusted with GPS LOS data, the PRISM empirical model predicted TEC over 24 1 h data sets for a given local time to within a global error of 8.60 TECU rms during a midnight centered ionosphere and 9.74 TECU rms during a noon centered ionosphere. Using GPS derived sun-fixed TEC data, the PRISM parameterized model predicted TEC within an error of 8.47 TECU rms centered at midnight and 12.83 TECU rms centered at noon.
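
    For context, the first-order ionospheric range delay underlying these corrections is Δρ = 40.3·TEC/f² (meters, with TEC in electrons/m² and f in Hz). A minimal sketch converting the TECU errors quoted above into range error; the Ku-band frequency of the TOPEX altimeter is my own addition, not stated in the abstract:

```python
def iono_delay_m(tec_tecu, freq_hz):
    """First-order ionospheric group delay in meters.

    1 TECU = 1e16 electrons/m^2; delay = 40.3 * TEC / f**2.
    """
    return 40.3 * tec_tecu * 1e16 / freq_hz ** 2

KU_BAND = 13.575e9  # TOPEX Ku-band altimeter frequency, Hz (assumed here)

# 1 TECU at Ku band is roughly 2.2 mm of range delay, so the ~8.6 TECU rms
# model error quoted above corresponds to roughly 2 cm of height error.
per_tecu_mm = iono_delay_m(1.0, KU_BAND) * 1e3
midnight_err_cm = iono_delay_m(8.60, KU_BAND) * 1e2
```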

  18. Comparison of analytical error and sampling error for contaminated soil.

    PubMed

    Gustavsson, Björn; Luthbom, Karin; Lagerkvist, Anders

    2006-11-16

    Investigation of soil from contaminated sites requires several sample handling steps that, most likely, will induce uncertainties in the sample. The theory of sampling describes seven sampling errors that can be calculated, estimated or discussed in order to get an idea of the size of the sampling uncertainties. With the aim of comparing the size of the analytical error to the total sampling error, these seven errors were applied, estimated and discussed in a case study of a contaminated site. The manageable errors were summarized, showing a range of three orders of magnitude between the examples. The comparisons show that the quotient between the total sampling error and the analytical error is larger than 20 in most calculation examples. Exceptions were samples taken in hot spots, where some components of the total sampling error get small and the analytical error gets large in comparison. Low concentration of contaminant, small extracted sample size and large particles in the sample contribute to the extent of uncertainty.

  19. Effect of counting errors on immunoassay precision

    SciTech Connect

    Klee, G.G.; Post, G. )

    1989-07-01

    Using mathematical analysis and computer simulation, we studied the effect of gamma scintillation counting error on two radioimmunoassays (RIAs) and an immunoradiometric assay (IRMA). To analyze the propagation of the counting errors into the estimation of analyte concentration, we empirically derived parameters for logit-log data-reduction models for assays of digoxin and triiodothyronine (RIAs) and ferritin (IRMA). The component of the analytical error attributable to counting variability, when expressed as a CV of the analyte concentration, decreased approximately linearly with the inverse of the square root of the maximum counts bound. Larger counting-error CVs were found at lower concentrations for both RIAs and the IRMA. Substantially smaller overall assay CVs were found when the maximum counts bound progressively increased from 500 to 10,000 counts, but further increases in maximum bound counts resulted in little decrease in overall assay CV except when very low concentrations of analyte were being measured. Therefore, RIA and IRMA systems based on duplicate determinations having at least 10,000 maximum counts bound should have adequate precision, except possibly at very low concentrations.
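
    The inverse-square-root dependence described above is Poisson counting statistics: for a count N, the standard deviation is √N, so the CV is 1/√N. A small sketch (my own, not from the paper) of why the 500-to-10,000-count improvement matters and further increases help little:

```python
import math

def counting_cv(max_counts_bound):
    """CV of a Poisson count: sqrt(N)/N = 1/sqrt(N), as a fraction."""
    return 1.0 / math.sqrt(max_counts_bound)

# Raising maximum bound counts from 500 to 10,000 cuts the counting CV
# by sqrt(20) ~ 4.5x; going beyond 10,000 yields diminishing returns
# because the CV is already down to 1%.
cv_500 = counting_cv(500)
cv_10k = counting_cv(10_000)
```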

  20. Skills, rules and knowledge in aircraft maintenance: errors in context

    NASA Technical Reports Server (NTRS)

    Hobbs, Alan; Williamson, Ann

    2002-01-01

    Automatic or skill-based behaviour is generally considered to be less prone to error than behaviour directed by conscious control. However, researchers who have applied Rasmussen's skill-rule-knowledge human error framework to accidents and incidents have sometimes found that skill-based errors appear in significant numbers. It is proposed that this is largely a reflection of the opportunities for error which workplaces present and does not indicate that skill-based behaviour is intrinsically unreliable. In the current study, 99 errors reported by 72 aircraft mechanics were examined in the light of a task analysis based on observations of the work of 25 aircraft mechanics. The task analysis identified the opportunities for error presented at various stages of maintenance work packages and by the job as a whole. Once the frequency of each error type was normalized in terms of the opportunities for error, it became apparent that skill-based performance is more reliable than rule-based performance, which is in turn more reliable than knowledge-based performance. The results reinforce the belief that industrial safety interventions designed to reduce errors would best be directed at those aspects of jobs that involve rule- and knowledge-based performance.
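
    The normalization step is the crux of the argument: raw error counts mislead until divided by the opportunities for error. A toy illustration with made-up numbers (the paper reports 99 errors but not this per-category breakdown):

```python
# Hypothetical counts: skill-based behaviour produces the most raw errors
# simply because it is exercised far more often, yet has the lowest
# error rate once normalized by opportunities.
observed = {
    "skill":     {"errors": 50, "opportunities": 10_000},
    "rule":      {"errors": 30, "opportunities": 2_000},
    "knowledge": {"errors": 19, "opportunities": 500},
}

rates = {k: v["errors"] / v["opportunities"] for k, v in observed.items()}
# skill-based has the most errors in absolute terms but the lowest rate.
```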

  1. Error correction algorithm for high accuracy bio-impedance measurement in wearable healthcare applications.

    PubMed

    Kubendran, Rajkumar; Lee, Seulki; Mitra, Srinjoy; Yazicioglu, Refet Firat

    2014-04-01

    Implantable and ambulatory measurement of physiological signals such as bio-impedance using miniature biomedical devices needs a careful tradeoff between limited power budget, measurement accuracy and complexity of implementation. This paper addresses this tradeoff through an extensive analysis of different stimulation and demodulation techniques for accurate bio-impedance measurement. Three cases are considered for rigorous analysis of a generic impedance model, with multiple poles, which is stimulated using a square/sinusoidal current and demodulated using a square/sinusoidal clock. For each case, the error in determining the pole parameters (resistance and capacitance) is derived and compared. An error correction algorithm is proposed for square wave demodulation which reduces the peak estimation error from 9.3% to 1.3% for a simple tissue model. Simulation results in Matlab using ideal RC values show an average accuracy of for single pole and for two pole RC networks. Measurements using ideal components for a single pole model give an overall and readings from saline phantom solution (primarily resistive) give an . A Figure of Merit is derived based on the ability to accurately resolve multiple poles in an unknown impedance with minimal measurement points per decade, for a given frequency range and supply current budget. This analysis is used to arrive at an optimal tradeoff between accuracy and power. Results indicate that the algorithm is generic and can be used for any application that involves resolving poles of an unknown impedance. It can be implemented as a post-processing technique for error correction or even incorporated into wearable signal monitoring ICs.

  2. Handling the satellite inter-frequency biases in triple-frequency observations

    NASA Astrophysics Data System (ADS)

    Zhao, Lewen; Ye, Shirong; Song, Jia

    2017-04-01

    The new generation of GNSS satellites, including BDS, Galileo, modernized GPS, and GLONASS, transmit navigation data on more frequencies. Multi-frequency signals open new prospects for precise positioning, but the satellite code and phase inter-frequency biases (IFB) induced by the third frequency need to be handled. Satellite code IFB can be corrected using products estimated by different strategies, but the theoretical and numerical compatibility of these methods needs to be proved. Furthermore, a new type of phase IFB, which changes with the relative sun-spacecraft-earth geometry, has been observed. It is necessary to investigate the cause and possible impacts of this time-variant phase IFB (TIFB). Therefore, we present a systematic analysis illustrating the relationship between satellite clocks and phase TIFB, and compare strategies for handling the code and phase IFB in triple-frequency positioning. First, the un-differenced L1/L2 satellite clock corrections considering the hardware delays are derived, and the IFB introduced by dual-frequency satellite clocks into the triple-frequency PPP model is detailed. The analysis shows that estimated satellite clocks actually contain the time-variant phase hardware delays, which are compensated in L1/L2 ionosphere-free combinations but lead to TIFB if the third frequency is used. Then, the methods used to correct the code and phase IFB are discussed. Standard point positioning (SPP) and precise point positioning (PPP) using BDS observations are carried out to validate the improvement of the different IFB correction strategies. Experiments show that code IFB derived from DCB or from the geometry-free and ionosphere-free combination agree to within 0.3 ns for all satellites. Positioning results and error distributions with the two code IFB correction strategies show a similar tendency, demonstrating their interchangeability. The original and wavelet-filtered phase TIFB long-term series show significant

  3. Implications of Three Causal Models for the Measurement of Halo Error.

    ERIC Educational Resources Information Center

    Fisicaro, Sebastiano A.; Lance, Charles E.

    1990-01-01

    Three conceptual definitions of halo error are reviewed in the context of causal models of halo error. A corrected correlational measurement of halo error is derived, and the traditional and corrected measures are compared empirically for a 1986 study of 52 undergraduate students' ratings of a lecturer's performance. (SLD)

  4. Error assessments of widely-used orbit error approximations in satellite altimetry

    NASA Technical Reports Server (NTRS)

    Tai, Chang-Kou

    1988-01-01

    From simulations, the orbit error can be assumed to be a slowly varying sine wave with a predominant wavelength comparable to the Earth's circumference. Thus, one can derive analytically the error committed in representing the orbit error along a segment of the satellite ground track by a bias; by a bias and tilt (linear approximation); or by a bias, tilt, and curvature (quadratic approximation). The result clearly agrees with what is obvious intuitively, i.e., (1) the fit is better with more parameters, and (2) as the length of the segment increases, the approximation gets worse. But more importantly, it provides a quantitative basis to evaluate the accuracy of past results and, in the future, to select the best approximation according to the required precision and the efficiency of various approximations.
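
    The qualitative conclusions can be reproduced numerically: model the orbit error as a sine wave whose wavelength is comparable to the Earth's circumference, fit a bias (degree 0), bias and tilt (degree 1), and quadratic (degree 2) over a ground-track segment, and compare residuals. A sketch under those assumptions (my own numerical check, not the paper's analytic derivation):

```python
import numpy as np

def fit_rms(segment_km, degree, wavelength_km=40_000.0, n=200):
    """RMS residual of a degree-`degree` polynomial fit to a sine-wave
    'orbit error' over a ground-track segment of the given length."""
    x = np.linspace(0.0, segment_km, n)
    orbit_error = np.sin(2.0 * np.pi * x / wavelength_km)
    coeffs = np.polyfit(x, orbit_error, degree)
    resid = orbit_error - np.polyval(coeffs, x)
    return float(np.sqrt(np.mean(resid ** 2)))

# (1) More parameters give a better fit over the same segment;
# (2) the same approximation worsens as the segment lengthens.
bias_rms, tilt_rms, quad_rms = (fit_rms(3_000.0, d) for d in (0, 1, 2))
```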

  5. Influence of satellite geometry, range, clock, and altimeter errors on two-satellite GPS navigation

    NASA Astrophysics Data System (ADS)

    Bridges, Philip D.

    Flight tests were conducted at Yuma Proving Grounds, Yuma, AZ, to determine the performance of a navigation system capable of using only two GPS satellites. The effect of satellite geometry, range error, and altimeter error on the horizontal position solution was analyzed for time- and altitude-aided GPS navigation (two satellites + altimeter + clock). The east and north position errors were expressed as a function of satellite range error, altimeter error, and the east and north Dilution of Precision. The equations for the Dilution of Precision were derived as a function of satellite azimuth and elevation angles for the two-satellite case. The expressions for the position error were then used to analyze the flight test data. The results showed the correlation between satellite geometry and position error, the increase in range error due to clock drift, and the impact of range and altimeter errors on the east and north position error.
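
    The Dilution of Precision quantities referred to above come from the measurement geometry matrix. A simplified sketch for the fully time- and altitude-aided case, where only east and north remain as unknowns (my own illustration of the geometry dependence, not the flight-test equations themselves):

```python
import math
import numpy as np

def horizontal_dops(az_el_deg):
    """East/North DOP for a clock- and altitude-aided two-state solution.

    Each row of H is the horizontal part of the unit line-of-sight vector
    to one satellite: [cos(el)*sin(az), cos(el)*cos(az)] in east-north axes.
    EDOP and NDOP are the square roots of the diagonal of (H^T H)^-1.
    """
    H = np.array([
        [math.cos(math.radians(el)) * math.sin(math.radians(az)),
         math.cos(math.radians(el)) * math.cos(math.radians(az))]
        for az, el in az_el_deg
    ])
    cov = np.linalg.inv(H.T @ H)
    return math.sqrt(cov[0, 0]), math.sqrt(cov[1, 1])  # (EDOP, NDOP)

# Orthogonal low-elevation satellites give the ideal geometry (DOP = 1);
# squeezing their azimuths together inflates both DOPs.
good = horizontal_dops([(90.0, 0.0), (0.0, 0.0)])
poor = horizontal_dops([(60.0, 0.0), (30.0, 0.0)])
```

    The position error is then roughly the DOP times the range error, which is why the abstract expresses east and north errors as products of range/altimeter error and the corresponding DOP.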

  6. Dopamine reward prediction error coding.

    PubMed

    Schultz, Wolfram

    2016-03-01

    Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards-an evolutionary beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less reward than predicted (negative prediction error). The dopamine signal increases nonlinearly with reward value and codes formal economic utility. Drugs of addiction generate, hijack, and amplify the dopamine reward signal and induce exaggerated, uncontrolled dopamine effects on neuronal plasticity. The striatum, amygdala, and frontal cortex also show reward prediction error coding, but only in subpopulations of neurons. Thus, the important concept of reward prediction errors is implemented in neuronal hardware.

  7. Dopamine reward prediction error coding

    PubMed Central

    Schultz, Wolfram

    2016-01-01

    Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards—an evolutionary beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less reward than predicted (negative prediction error). The dopamine signal increases nonlinearly with reward value and codes formal economic utility. Drugs of addiction generate, hijack, and amplify the dopamine reward signal and induce exaggerated, uncontrolled dopamine effects on neuronal plasticity. The striatum, amygdala, and frontal cortex also show reward prediction error coding, but only in subpopulations of neurons. Thus, the important concept of reward prediction errors is implemented in neuronal hardware. PMID:27069377

  8. Effect of Image-Guidance Frequency on Geometric Accuracy and Setup Margins in Radiotherapy for Locally Advanced Lung Cancer

    SciTech Connect

    Higgins, Jane; Bezjak, Andrea; Hope, Andrew; Panzarella, Tony; Li, Winnie; Cho, John B.C.; Craig, Tim; Brade, Anthony; Sun, Alexander; Bissonnette, Jean-Pierre

    2011-08-01

    Purpose: To assess the relative effectiveness of five image-guidance (IG) frequencies on reducing patient positioning inaccuracies and setup margins for locally advanced lung cancer patients. Methods and Materials: Daily cone-beam computed tomography data for 100 patients (4,237 scans) were analyzed. Subsequently, four less-than-daily IG protocols were simulated using these data (no IG, first 5-day IG, weekly IG, and alternate-day IG). The frequency and magnitude of residual setup error were determined. The less-than-daily IG protocols were compared against daily IG, the assumed reference standard. Finally, the population-based setup margins were calculated. Results: With the less-than-daily IG protocols, 20-43% of fractions incurred residual setup errors ≥5 mm; daily IG reduced this to 6%. With the exception of the first 5-day IG, reductions in systematic error (Σ) occurred as the imaging frequency increased, and only daily IG provided notable reductions in random error (σ): Σ = 1.5-2.2 mm, σ = 2.5-3.7 mm for no IG; Σ = 1.8-2.6 mm, σ = 2.5-3.7 mm for first 5-day IG; and Σ = 0.7-1.0 mm, σ = 1.7-2.0 mm for daily IG. An overall significant difference in the mean setup error was present between the first 5-day IG and daily IG (p < .0001). The derived setup margins were 5-9 mm for less-than-daily IG and 3-4 mm with daily IG. Conclusion: Daily cone-beam computed tomography substantially reduced the setup error and could permit setup margin reduction and lead to a reduction in normal tissue toxicity for patients undergoing conventionally fractionated lung radiotherapy. Using first 5-day cone-beam computed tomography was suboptimal for lung patients, given the inability to reduce the random error and the potential for the systematic error to increase throughout the treatment course.
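
    The margin recipe behind the reported figures is not stated in this record; a common population-based choice is the van Herk recipe, sketched here under that assumption:

```python
def setup_margin(sigma_sys, sigma_rand):
    """Population CTV-to-PTV setup margin in mm (van Herk recipe):
    2.5 * systematic SD (Sigma) + 0.7 * random SD (sigma)."""
    return 2.5 * sigma_sys + 0.7 * sigma_rand
```

    For the daily-IG values (Σ ≈ 1.0 mm, σ ≈ 2.0 mm) this gives about 3.9 mm, consistent with the 3-4 mm daily-IG margins reported above.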

  9. Phase and frequency coherency in fading environment in tracking and data relay satellites

    NASA Astrophysics Data System (ADS)

    Varanasi, S.; Gupta, S. C.

    Communication systems employing relay satellites require strict maintenance of frequency and phase coherence. In this paper, the tracking performance of a Digital Fading Phase-Locked Loop (DFPLL) is presented, considering that the transmitted signal passes through a Rician fading communication channel as well as an Additive White Gaussian Noise (AWGN) channel. The stochastic difference equations governing the loop operation for both phase-step and frequency-step inputs to a first-order DFPLL are derived, along with a mathematical model of the Rician fading communication channel. Approximate analytic expressions for the steady-state phase error probability density function (pdf) and phase error variance, which characterize the tracking performance of the loop, are obtained by solving the corresponding Chapman-Kolmogorov (C-K) equations for both types of inputs. Numerical and simulation results are provided that confirm the analytical results for various signal-to-noise ratios and various values of the fading parameter.

  10. Medication Errors in Cardiopulmonary Arrest and Code-Related Situations.

    PubMed

    Flannery, Alexander H; Parli, Sara E

    2016-01-01

    PubMed/MEDLINE (1966-November 2014) was searched to identify relevant published studies on the overall frequency, types, and examples of medication errors during medical emergencies involving cardiopulmonary resuscitation and related situations, and the breakdown by type of error. The overall frequency of medication errors during medical emergencies, specifically situations related to resuscitation, is highly variable. Medication errors during such emergencies, particularly cardiopulmonary resuscitation and surrounding events, are not well characterized in the literature but may be more frequent than previously thought. Depending on whether research methods included database mining, simulation, or prospective observation of clinical practice, reported occurrence of medication errors during cardiopulmonary resuscitation and surrounding events has ranged from less than 1% to 50%. Because of the chaos of the resuscitation environment, errors in prescribing, dosing, preparing, labeling, and administering drugs are prone to occur. System-based strategies, such as infusion pump policies and code cart management, as well as personal strategies exist to minimize medication errors during emergency situations.

  11. Processor register error correction management

    SciTech Connect

    Bose, Pradip; Cher, Chen-Yong; Gupta, Meeta S.

    2016-12-27

    Processor register protection management is disclosed. In embodiments, a method of processor register protection management can include determining a sensitive logical register for executable code generated by a compiler, generating an error-correction table identifying the sensitive logical register, and storing the error-correction table in a memory accessible by a processor. The processor can be configured to generate a duplicate register of the sensitive logical register identified by the error-correction table.

  12. ISA accelerometer onboard the Mercury Planetary Orbiter: error budget

    NASA Astrophysics Data System (ADS)

    Iafolla, Valerio; Lucchesi, David M.; Nozzoli, Sergio; Santoli, Francesco

    2007-03-01

    We have estimated a preliminary error budget for the Italian Spring Accelerometer (ISA) that will be allocated onboard the Mercury Planetary Orbiter (MPO) of the European Space Agency (ESA) space mission to Mercury named BepiColombo. The role of the accelerometer is to remove from the list of unknowns the non-gravitational accelerations that perturb the gravitational trajectory followed by the MPO in the strong radiation environment that characterises the orbit of Mercury around the Sun. Such a role is of fundamental importance in the context of the very ambitious goals of the Radio Science Experiments (RSE) of the BepiColombo mission. We have subdivided the errors on the accelerometer measurements into two main families: (i) the pseudo-sinusoidal errors and (ii) the random errors. The former are characterised by a periodic behaviour with the frequency of the satellite mean anomaly and its higher order harmonic components, i.e., they are deterministic errors. The latter are characterised by an unknown frequency distribution and we assumed for them a noise-like spectrum, i.e., they are stochastic errors. Among the pseudo-sinusoidal errors, the main contribution is due to the effects of the gravity gradients and the inertial forces, while among the random-like errors the main disturbing effect is due to the MPO centre-of-mass displacements produced by the onboard High Gain Antenna (HGA) movements and by the fuel consumption and sloshing. Subtler still are the random errors produced by the MPO attitude corrections necessary to guarantee the nadir pointing of the spacecraft. We have therefore formulated the ISA error budget and the requirements for the satellite in order to guarantee an orbit reconstruction for the MPO spacecraft with an along-track accuracy of about 1 m over the orbital period of the satellite around Mercury, in such a way as to satisfy the RSE requirements.

  13. Impact of transport and modelling errors on the estimation of methane sources and sinks by inverse modelling

    NASA Astrophysics Data System (ADS)

    Locatelli, Robin; Bousquet, Philippe; Chevallier, Frédéric

    2013-04-01

    Since the nineties, inverse modelling by assimilating atmospheric measurements into a chemical transport model (CTM) has been used to derive sources and sinks of atmospheric trace gases. More recently, the high global warming potential of methane (CH4) and unexplained variations of its atmospheric mixing ratio caught the attention of several research groups. Indeed, the diversity and the variability of methane sources induce high uncertainty on the present and future evolution of the CH4 budget. With the increase in available measurement data to constrain inversions (satellite data, high-frequency surface and tall-tower observations, FTIR spectrometry,...), the main limiting factor is about to become the representation of atmospheric transport in CTMs. Indeed, errors in transport modelling directly convert into flux errors when perfect transport is assumed in atmospheric inversions. Hence, we propose an inter-model comparison in order to quantify the impact of transport and modelling errors on the CH4 fluxes estimated within a variational inversion framework. Several inversion experiments are conducted using the same set-up (prior emissions, measurement and prior errors, OH field, initial conditions) of the variational system PYVAR, developed at LSCE (Laboratoire des Sciences du Climat et de l'Environnement, France). Nine different models (ACTM, IFS, IMPACT, IMPACT1x1, MOZART, PCTM, TM5, TM51x1 and TOMCAT) used in the TRANSCOM-CH4 experiment (Patra et al., 2011) provide synthetic measurement data at up to 280 surface sites to constrain the inversions performed using the PYVAR system. Only the CTMs (and the meteorological drivers which drive them) used to create the pseudo-observations vary among inversions. Consequently, the comparisons of the nine inverted methane fluxes obtained for 2005 give a good indication of the order of magnitude of the impact of transport and modelling errors on the estimated fluxes with current and future networks. 
It is shown that transport and modelling errors

  14. Evaluating Spectral Signals to Identify Spectral Error

    PubMed Central

    Bazar, George; Kovacs, Zoltan; Tsenkova, Roumiana

    2016-01-01

    Since the precision and accuracy level of a chemometric model is highly influenced by the quality of the raw spectral data, it is very important to evaluate the recorded spectra and describe the erroneous regions before qualitative and quantitative analyses or detailed band assignment. This paper provides a collection of basic spectral analytical procedures and demonstrates their applicability in detecting errors in near-infrared data. Evaluation methods based on standard deviation, coefficient of variation, mean centering and smoothing techniques are presented. Applications of derivatives with various gap sizes, even below the bandpass of the spectrometer, are shown to evaluate the level of spectral errors and find their origin. The possibility for prudent measurement of the third overtone region of water is also highlighted by evaluation of complex data recorded with various spectrometers. PMID:26731541
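
    A gap derivative of the kind applied here can be sketched in a few lines (a plain first difference across a variable gap; production NIR toolchains typically use gap-segment or Savitzky-Golay variants):

```python
def gap_derivative(spectrum, gap=1):
    """First-order 'gap' derivative: difference across `gap` points.
    Small gaps, even below the spectrometer bandpass, emphasize
    high-frequency noise, so the spread of the derivative indicates
    the level of spectral error."""
    return [spectrum[i + gap] - spectrum[i] for i in range(len(spectrum) - gap)]
```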

  15. Error Analysis of Stochastic Gradient Descent Ranking.

    PubMed

    Chen, Hong; Tang, Yi; Li, Luoqing; Yuan, Yuan; Li, Xuelong; Tang, Yuanyan

    2012-12-31

    Ranking is always an important task in machine learning and information retrieval, e.g., collaborative filtering, recommender systems, drug discovery, etc. A kernel-based stochastic gradient descent algorithm with the least squares loss is proposed for ranking in this paper. The implementation of this algorithm is simple, and an expression of the solution is derived via a sampling operator and an integral operator. An explicit convergence rate for learning a ranking function is given in terms of suitable choices of the step size and the regularization parameter. The analysis technique used here is capacity independent and is novel in error analysis of ranking learning. Experimental results on real-world data have shown the effectiveness of the proposed algorithm in ranking tasks, which verifies the theoretical analysis in ranking error.
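
    The algorithm as described (a kernel expansion grown by sampling, a least-squares ranking loss, and decaying step sizes) can be sketched as follows; the Gaussian kernel and the specific schedules are assumptions for illustration, not the paper's choices:

```python
import numpy as np

def kernel_sgd_rank(X, y, T=300, step0=0.5, lam=1e-3, gamma=1.0, seed=0):
    """Pairwise least-squares ranking via kernel SGD: keep f as a kernel
    expansion, sample one pair per step, and step along the regularized
    least-squares gradient."""
    rng = np.random.default_rng(seed)
    kern = lambda a, b: np.exp(-gamma * np.sum((a - b) ** 2))
    centers, coefs = [], []

    def f(x):
        return sum(c * kern(x, z) for c, z in zip(coefs, centers))

    n = len(X)
    for t in range(1, T + 1):
        i, j = rng.integers(n), rng.integers(n)
        eta = step0 / np.sqrt(t)                        # decaying step size
        resid = (f(X[i]) - f(X[j])) - (y[i] - y[j])     # pairwise residual
        coefs = [(1.0 - eta * lam) * c for c in coefs]  # regularization shrink
        centers += [X[i], X[j]]
        coefs += [-eta * resid, eta * resid]
    return f
```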

  16. Error analysis of stochastic gradient descent ranking.

    PubMed

    Chen, Hong; Tang, Yi; Li, Luoqing; Yuan, Yuan; Li, Xuelong; Tang, Yuanyan

    2013-06-01

    Ranking is always an important task in machine learning and information retrieval, e.g., collaborative filtering, recommender systems, drug discovery, etc. A kernel-based stochastic gradient descent algorithm with the least squares loss is proposed for ranking in this paper. The implementation of this algorithm is simple, and an expression of the solution is derived via a sampling operator and an integral operator. An explicit convergence rate for learning a ranking function is given in terms of suitable choices of the step size and the regularization parameter. The analysis technique used here is capacity independent and is novel in error analysis of ranking learning. Experimental results on real-world data have shown the effectiveness of the proposed algorithm in ranking tasks, which verifies the theoretical analysis in ranking error.

  17. Compensating For GPS Ephemeris Error

    NASA Technical Reports Server (NTRS)

    Wu, Jiun-Tsong

    1992-01-01

    Method of computing position of user station receiving signals from Global Positioning System (GPS) of navigational satellites compensates for most of GPS ephemeris error. Present method enables user station to reduce error in its computed position substantially. User station must have access to two or more reference stations at precisely known positions several hundred kilometers apart and must be in neighborhood of reference stations. Based on fact that when GPS data used to compute baseline between reference station and user station, vector error in computed baseline is proportional to ephemeris error and length of baseline.
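
    A minimal differential-correction sketch in the spirit of the method (here the position errors observed at the reference stations are simply averaged and subtracted at the user; the baseline-proportional formulation in this record is more refined):

```python
def corrected_position(user_raw, refs_raw, refs_true):
    """Estimate the common position bias from reference stations at
    precisely known positions, then remove it from the user solution."""
    n = len(refs_raw)
    bias = [sum(raw[k] - true[k] for raw, true in zip(refs_raw, refs_true)) / n
            for k in range(3)]
    return [u - b for u, b in zip(user_raw, bias)]
```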

  18. A theory of human error

    NASA Technical Reports Server (NTRS)

    Mcruer, D. T.; Clement, W. F.; Allen, R. W.

    1980-01-01

    Human error, a significant contributing factor in a very high proportion of civil transport, general aviation, and rotorcraft accidents, is investigated. Correction of the sources of human error requires that one attempt to reconstruct underlying and contributing causes of error from the circumstantial causes cited in official investigative reports. A validated analytical theory of the input-output behavior of human operators involving manual control, communication, supervisory, and monitoring tasks which are relevant to aviation operations is presented. This theory of behavior, both appropriate and inappropriate, provides an insightful basis for investigating, classifying, and quantifying the needed cause-effect relationships governing propagation of human error.

  19. Confidence limits and their errors

    SciTech Connect

    Rajendran Raja

    2002-03-22

    Confidence limits are commonplace in physics analysis. Great care must be taken in their calculation and use, especially in cases of limited statistics. We introduce the concept of statistical errors of confidence limits and argue that not only should limits be calculated but also their errors, in order to represent the results of the analysis to the fullest. We show that comparison of two different limits from two different experiments becomes easier when their errors are also quoted. Use of errors of confidence limits will lead to abatement of the debate on which method is best suited to calculate confidence limits.
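
    As a concrete example of a limit whose statistical error one might then quote, a classical Poisson upper limit can be computed by bisection (a standard construction, used here only for illustration):

```python
import math

def poisson_upper_limit(n_obs, cl=0.95, hi=50.0):
    """Classical upper limit on a Poisson mean: the smallest mu with
    P(N <= n_obs | mu) = 1 - cl, found by bisection."""
    def p_le(mu):
        return sum(math.exp(-mu) * mu ** k / math.factorial(k)
                   for k in range(n_obs + 1))
    lo, hi_ = 0.0, hi
    for _ in range(60):
        mid = 0.5 * (lo + hi_)
        if p_le(mid) > 1.0 - cl:
            lo = mid           # mu still too small
        else:
            hi_ = mid
    return 0.5 * (lo + hi_)
```

    For zero observed events the 95% limit is -ln(0.05) ≈ 3.0; the statistical error of such a limit could then be estimated from its spread over toy experiments.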

  20. Error-disturbance uncertainty relations studied in neutron optics

    NASA Astrophysics Data System (ADS)

    Sponar, Stephan; Sulyok, Georg; Demirel, Bulent; Hasegawa, Yuji

    2016-09-01

    Heisenberg's uncertainty principle is probably the most famous statement of quantum physics, and its essential aspects are well described by a formulation in terms of standard deviations. However, a naive Heisenberg-type error-disturbance relation is not valid. An alternative, universally valid relation was derived by Ozawa in 2003. Though universally valid, Ozawa's relation is not optimal. Recently, Branciard derived a tight error-disturbance uncertainty relation (EDUR), describing the optimal trade-off between error and disturbance. Here, we report a neutron-optical experiment that records the error of a spin-component measurement, as well as the disturbance caused on another spin-component, to test EDURs. We demonstrate that Heisenberg's original EDUR is violated, while Ozawa's and Branciard's EDURs are valid, in a wide range of experimental parameters, applying a new measurement procedure referred to as the two-state method.

  1. Measurement Error and Equating Error in Power Analysis

    ERIC Educational Resources Information Center

    Phillips, Gary W.; Jiang, Tao

    2016-01-01

    Power analysis is a fundamental prerequisite for conducting scientific research. Without power analysis the researcher has no way of knowing whether the sample size is large enough to detect the effect he or she is looking for. This paper demonstrates how psychometric factors such as measurement error and equating error affect the power of…

  2. Anxiety and Error Monitoring: Increased Error Sensitivity or Altered Expectations?

    ERIC Educational Resources Information Center

    Compton, Rebecca J.; Carp, Joshua; Chaddock, Laura; Fineman, Stephanie L.; Quandt, Lorna C.; Ratliff, Jeffrey B.

    2007-01-01

    This study tested the prediction that the error-related negativity (ERN), a physiological measure of error monitoring, would be enhanced in anxious individuals, particularly in conditions with threatening cues. Participants made gender judgments about faces whose expressions were either happy, angry, or neutral. Replicating prior studies, midline…

  3. Computational Thermochemistry: Scale Factor Databases and Scale Factors for Vibrational Frequencies Obtained from Electronic Model Chemistries.

    PubMed

    Alecu, I M; Zheng, Jingjing; Zhao, Yan; Truhlar, Donald G

    2010-09-14

    Optimized scale factors for calculating vibrational harmonic and fundamental frequencies and zero-point energies have been determined for 145 electronic model chemistries, including 119 based on approximate functionals depending on occupied orbitals, 19 based on single-level wave function theory, three based on the neglect-of-diatomic-differential-overlap, two based on doubly hybrid density functional theory, and two based on multicoefficient correlation methods. Forty of the scale factors are obtained from large databases, which are also used to derive two universal scale factor ratios that can be used to interconvert between scale factors optimized for various properties, enabling the derivation of three key scale factors at the effort of optimizing only one of them. A reduced scale factor optimization model is formulated in order to further reduce the cost of optimizing scale factors, and the reduced model is illustrated by using it to obtain 105 additional scale factors. Using root-mean-square errors from the values in the large databases, we find that scaling reduces errors in zero-point energies by a factor of 2.3 and errors in fundamental vibrational frequencies by a factor of 3.0, but it reduces errors in harmonic vibrational frequencies by only a factor of 1.3. It is shown that, upon scaling, the balanced multicoefficient correlation method based on coupled cluster theory with single and double excitations (BMC-CCSD) can lead to very accurate predictions of vibrational frequencies. With a polarized, minimally augmented basis set, the density functionals with zero-point energy scale factors closest to unity are MPWLYP1M (1.009), τHCTHhyb (0.989), BB95 (1.012), BLYP (1.013), BP86 (1.014), B3LYP (0.986), MPW3LYP (0.986), and VSXC (0.986).
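
    In the usual least-squares formulation, each such scale factor is a one-parameter fit with a closed form (a sketch; the paper's databases and weighting may differ):

```python
def optimal_scale_factor(calc, ref):
    """Scale factor lam minimizing sum((lam * calc_i - ref_i)^2) between
    calculated harmonic frequencies and reference values:
    lam = sum(calc * ref) / sum(calc^2)."""
    num = sum(c * r for c, r in zip(calc, ref))
    den = sum(c * c for c in calc)
    return num / den
```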

  4. Error Analysis and Propagation in Metabolomics Data Analysis.

    PubMed

    Moseley, Hunter N B

    2013-01-01

    Error analysis plays a fundamental role in describing the uncertainty in experimental results. It has several fundamental uses in metabolomics including experimental design, quality control of experiments, the selection of appropriate statistical methods, and the determination of uncertainty in results. Furthermore, the importance of error analysis has grown with the increasing number, complexity, and heterogeneity of measurements characteristic of 'omics research. The increase in data complexity is particularly problematic for metabolomics, which has more heterogeneity than other omics technologies due to the much wider range of molecular entities detected and measured. This review introduces the fundamental concepts of error analysis as they apply to a wide range of metabolomics experimental designs and it discusses current methodologies for determining the propagation of uncertainty in appropriate metabolomics data analysis. These methodologies include analytical derivation and approximation techniques, Monte Carlo error analysis, and error analysis in metabolic inverse problems. Current limitations of each methodology with respect to metabolomics data analysis are also discussed.
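
    Monte Carlo error analysis of the kind mentioned can be sketched generically (normally distributed input errors and a derived quantity such as a metabolite ratio are assumptions for illustration):

```python
import random
import statistics

def mc_propagate(f, means, sds, n=20000, seed=42):
    """Monte Carlo uncertainty propagation: sample each input from a
    normal distribution and report mean and SD of the derived quantity."""
    rng = random.Random(seed)
    outs = [f(*[rng.gauss(m, s) for m, s in zip(means, sds)])
            for _ in range(n)]
    return statistics.mean(outs), statistics.stdev(outs)
```

    For a ratio 10/5 with input SDs of 0.1 each, the propagated SD comes out near 2 * sqrt(0.01^2 + 0.02^2) ≈ 0.045, matching the first-order analytic result.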

  5. Overcoming time-integration errors in SINDA's FWDBCK solution routine

    NASA Technical Reports Server (NTRS)

    Skladany, J. T.; Costello, F. A.

    1984-01-01

    The FWDBCK time step, which is usually chosen intuitively to achieve adequate accuracy at reasonable computational costs, can in fact lead to large errors. NASA observed such errors in solving cryogenic problems on the COBE spacecraft, but a similar error is also demonstrated for a single node radiating to space. An algorithm has been developed for selecting the time step during the course of the simulation. The error incurred when the time derivative is replaced by the FWDBCK time difference can be estimated from the Taylor-Series expression for the temperature. The algorithm selects the time step to keep this error small. The efficacy of the method is demonstrated on the COBE and single-node problems.
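
    The step-selection idea described, bounding the Taylor-series truncation error by a tolerance, can be sketched for the single-node radiator example (constants and tolerance are illustrative):

```python
def integrate_radiator(T0, k, t_end, tol=1e-4):
    """Forward-difference integration of dT/dt = -k*T^4 (single node
    radiating to space), with each step chosen so the Taylor-series
    truncation term (dt^2 / 2) * |d2T/dt2| stays below `tol`."""
    T, t = T0, 0.0
    while t < t_end:
        dT = -k * T ** 4                   # first derivative
        d2T = -4.0 * k * T ** 3 * dT       # second derivative (chain rule)
        dt = (2.0 * tol / abs(d2T)) ** 0.5
        dt = min(dt, t_end - t)            # do not overshoot the end time
        T += dt * dT
        t += dt
    return T
```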

  6. Error Estimation for Reduced Order Models of Dynamical Systems

    SciTech Connect

    Homescu, C; Petzold, L; Serban, R

    2004-01-22

    The use of reduced order models to describe a dynamical system is pervasive in science and engineering. Often these models are used without an estimate of their error or range of validity. In this paper we consider dynamical systems and reduced models built using proper orthogonal decomposition. We show how to compute estimates and bounds for these errors, by a combination of small sample statistical condition estimation and error estimation using the adjoint method. Most importantly, the proposed approach allows the assessment of regions of validity for reduced models, i.e., ranges of perturbations in the original system over which the reduced model is still appropriate. Numerical examples validate our approach: the error norm estimates approximate well the forward error while the derived bounds are within an order of magnitude.
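
    The reduced models considered are built from a POD basis; a minimal sketch of extracting one (the adjoint-based error estimation machinery of the paper is not reproduced):

```python
import numpy as np

def pod_basis(snapshots, r):
    """Proper orthogonal decomposition: the leading r left singular
    vectors of the snapshot matrix, plus the fraction of energy
    (squared singular values) they capture."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = float((s[:r] ** 2).sum() / (s ** 2).sum())
    return U[:, :r], energy
```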

  7. Error begat error: design error analysis and prevention in social infrastructure projects.

    PubMed

    Love, Peter E D; Lopez, Robert; Edwards, David J; Goh, Yang M

    2012-09-01

    Design errors contribute significantly to cost and schedule growth in social infrastructure projects and to engineering failures, which can result in accidents and loss of life. Despite considerable research that has addressed their error causation in construction projects they still remain prevalent. This paper identifies the underlying conditions that contribute to design errors in social infrastructure projects (e.g. hospitals, education, law and order type buildings). A systemic model of error causation is propagated and subsequently used to develop a learning framework for design error prevention. The research suggests that a multitude of strategies should be adopted in congruence to prevent design errors from occurring and so ensure that safety and project performance are ameliorated.

  8. Spectral analysis of oscillation instabilities in frequency standards

    NASA Technical Reports Server (NTRS)

    Lippincott, S.

    1970-01-01

    Phase and frequency fluctuations, inherent in oscillators used as frequency standards, are measured over spectral frequency range of 1 Hz to 5 kHz. Basic measurement system consists of electromechanical phase-locked loop that extracts phase and frequency fluctuations and error multiplier that extends threshold sensitivity.

  9. Influence of indexing errors on dynamic response of spur gear pairs

    NASA Astrophysics Data System (ADS)

    Inalpolat, M.; Handschuh, M.; Kahraman, A.

    2015-08-01

    In this study, a dynamic model of a spur gear pair is employed to investigate the influence of gear tooth indexing errors on the dynamic response. This transverse-torsional dynamic model includes periodically-time varying gear mesh stiffness and nonlinearities caused by tooth separations in resonance regions. With quasi-static transmission error time traces as the primary excitation, the model predicts frequency-domain dynamic mesh force and dynamic transmission error spectra. These long-period quasi-static transmission error time traces are measured using unity-ratio spur gear pairs having certain intentional indexing errors. A special test setup with dedicated instrumentation for the measurement of quasi-static transmission error is employed to perform a number of experiments with gears having deterministic spacing errors at one or two teeth of the test gear only and random spacing errors where all of the test gear teeth have a random distribution of errors as in a typical production gear.

  10. Error propagation in first-principles kinetic Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Döpking, Sandra; Matera, Sebastian

    2017-04-01

    First-principles kinetic Monte Carlo models allow for the modeling of catalytic surfaces with predictive quality. This comes at the price of non-negligible errors induced by the underlying approximate density functional calculation. Using the example of CO oxidation on RuO2(110), we demonstrate a novel, efficient approach to global sensitivity analysis, with which we address the error propagation in these multiscale models. We find that we can still identify the most important atomistic factors for reactivity, even though the errors in the simulation results are sizable. The presented approach might also be applied in hierarchical model construction or computational catalyst screening.

  11. Low-magnitude, high-frequency vibration promotes the adhesion and the osteogenic differentiation of bone marrow-derived mesenchymal stem cells cultured on a hydroxyapatite-coated surface: The direct role of Wnt/β-catenin signaling pathway activation.

    PubMed

    Chen, Bailing; Lin, Tao; Yang, Xiaoxi; Li, Yiqiang; Xie, Denghui; Zheng, Wenhui; Cui, Haowen; Deng, Weimin; Tan, Xin

    2016-11-01

    The positive effect of low-magnitude, high-frequency (LMHF) vibration on implant osseointegration has been demonstrated; however, the underlying cellular and molecular mechanisms remain unknown. The aim of this study was to explore the effect of LMHF vibration on the adhesion and the osteogenic differentiation of bone marrow-derived mesenchymal stem cells (BMSCs) cultured on hydroxyapatite (HA)-coated surfaces in an in vitro model as well as to elucidate the molecular mechanism responsible for the effects of LMHF vibration on osteogenesis. LMHF vibration resulted in the increased expression of fibronectin, which was measured by immunostaining and RT-qPCR. Stimulation of BMSCs by LMHF vibration resulted in the rearrangement of the actin cytoskeleton with more prominent F-actin. Moreover, the expression of β1 integrin, vinculin and paxillin was notably increased following LMHF stimulation. Scanning electron microscope observations revealed that there were higher cell numbers and more extracellular matrix attached to the HA-coated surface in the LMHF group. Alkaline phosphatase activity as well as the expression of osteogenic-specific genes, namely Runx2, osterix, collagen I and osteocalcin, were significantly elevated in the LMHF group. In addition, the protein expression of Wnt10B, β-catenin, Runx2 and osterix was increased following exposure to LMHF vibration. Taken together, the findings of this study indicate that LMHF vibration promotes the adhesion and the osteogenic differentiation of BMSCs on HA-coated surfaces in vitro, and LMHF vibration may directly induce osteogenesis by activating the Wnt/β-catenin signaling pathway. These data suggest that LMHF vibration enhances the osseointegration of bone to a HA-coated implant, and provide a scientific foundation for improving bone-implant osseointegration through the application of LMHF vibration.

  12. Higher frequencies of GARP(+)CTLA-4(+)Foxp3(+) T regulatory cells and myeloid-derived suppressor cells in hepatocellular carcinoma patients are associated with impaired T-cell functionality.

    PubMed

    Kalathil, Suresh; Lugade, Amit A; Miller, Austin; Iyer, Renuka; Thanavala, Yasmin

    2013-04-15

    The extent to which T-cell-mediated immune surveillance is impaired in human cancer remains a question of major importance, given its potential impact on the development of generalized treatments of advanced disease where the highest degree of heterogeneity exists. Here, we report the first global analysis of immune dysfunction in patients with advanced hepatocellular carcinoma (HCC). Using multi-parameter fluorescence-activated cell sorting analysis, we quantified the cumulative frequency of regulatory T cells (Treg), exhausted CD4(+) helper T cells, and myeloid-derived suppressor cells (MDSC) to gain concurrent views on the overall level of immune dysfunction in these inoperable patients. We documented augmented numbers of Tregs, MDSC, PD-1(+)-exhausted T cells, and increased levels of immunosuppressive cytokines in patients with HCC, compared with normal controls, revealing a network of potential mechanisms of immune dysregulation in patients with HCC. In dampening T-cell-mediated antitumor immunity, we hypothesized that these processes may facilitate HCC progression and thwart the efficacy of immunotherapeutic interventions. In testing this hypothesis, we showed that combined regimens to deplete Tregs, MDSC, and PD-1(+) T cells in patients with advanced HCC restored production of granzyme B by CD8(+) T cells, reaching levels observed in normal controls and also modestly increased the number of IFN-γ producing CD4(+) T cells. These clinical findings encourage efforts to restore T-cell function in patients with advanced stage disease by highlighting combined approaches to deplete endogenous suppressor cell populations that can also expand effector T-cell populations.

  13. On the Orientation Error of IMU: Investigating Static and Dynamic Accuracy Targeting Human Motion

    PubMed Central

    Ricci, Luca; Taffoni, Fabrizio

    2016-01-01

    The accuracy in orientation tracking attainable by using inertial measurement units (IMU) when measuring human motion is still an open issue. This study presents a systematic quantification of the accuracy under static conditions and typical human dynamics, simulated by means of a robotic arm. Two sensor fusion algorithms, selected from the classes of the stochastic and complementary methods, are considered. The proposed protocol implements controlled and repeatable experimental conditions and validates accuracy for an extensive set of dynamic movements, that differ in frequency and amplitude of the movement. We found that dynamic performance of the tracking is only slightly dependent on the sensor fusion algorithm. Instead, it is dependent on the amplitude and frequency of the movement and a major contribution to the error derives from the orientation of the rotation axis w.r.t. the gravity vector. Absolute and relative errors upper bounds are found respectively in the range [0.7° ÷ 8.2°] and [1.0° ÷ 10.3°]. Alongside dynamic, static accuracy is thoroughly investigated, also with an emphasis on convergence behavior of the different algorithms. Reported results emphasize critical issues associated with the use of this technology and provide a baseline level of performance for the human motion related application. PMID:27612100

  14. On the Orientation Error of IMU: Investigating Static and Dynamic Accuracy Targeting Human Motion.

    PubMed

    Ricci, Luca; Taffoni, Fabrizio; Formica, Domenico

    2016-01-01

    The accuracy in orientation tracking attainable by using inertial measurement units (IMU) when measuring human motion is still an open issue. This study presents a systematic quantification of the accuracy under static conditions and typical human dynamics, simulated by means of a robotic arm. Two sensor fusion algorithms, selected from the classes of the stochastic and complementary methods, are considered. The proposed protocol implements controlled and repeatable experimental conditions and validates accuracy for an extensive set of dynamic movements, that differ in frequency and amplitude of the movement. We found that dynamic performance of the tracking is only slightly dependent on the sensor fusion algorithm. Instead, it is dependent on the amplitude and frequency of the movement and a major contribution to the error derives from the orientation of the rotation axis w.r.t. the gravity vector. Absolute and relative errors upper bounds are found respectively in the range [0.7° ÷ 8.2°] and [1.0° ÷ 10.3°]. Alongside dynamic, static accuracy is thoroughly investigated, also with an emphasis on convergence behavior of the different algorithms. Reported results emphasize critical issues associated with the use of this technology and provide a baseline level of performance for the human motion related application.
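The complementary class of sensor fusion algorithms mentioned in this record can be illustrated with a minimal one-axis tilt filter: the gyro is integrated (accurate short-term, drifts long-term) and blended with the noisy but drift-free accelerometer tilt. The gain `alpha`, rates, and 1-D geometry below are hypothetical illustrations, not the authors' implementation.

```python
def complementary_filter(gyro_rates, accel_angles, dt, alpha=0.98):
    """Fuse integrated gyro rates (low short-term noise, long-term drift)
    with accelerometer tilt angles (noisy, drift-free) into one estimate."""
    angle = accel_angles[0]          # initialize from the accelerometer
    estimates = []
    for omega, accel_angle in zip(gyro_rates, accel_angles):
        # high-pass the integrated gyro, low-pass the accelerometer
        angle = alpha * (angle + omega * dt) + (1.0 - alpha) * accel_angle
        estimates.append(angle)
    return estimates

# Static case: zero rotation rate, accelerometer reads a constant 10 deg tilt.
est = complementary_filter([0.0] * 500, [10.0] * 500, dt=0.01)
```

In the static case the estimate should hold the accelerometer angle exactly; the interesting dynamic behavior studied in the paper comes from moving this trade-off under real motion.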

  15. Children's Scale Errors with Tools

    ERIC Educational Resources Information Center

    Casler, Krista; Eshleman, Angelica; Greene, Kimberly; Terziyan, Treysi

    2011-01-01

    Children sometimes make "scale errors," attempting to interact with tiny object replicas as though they were full size. Here, we demonstrate that instrumental tools provide special insight into the origins of scale errors and, moreover, into the broader nature of children's purpose-guided reasoning and behavior with objects. In Study 1, 1.5- to…

  16. Dual Processing and Diagnostic Errors

    ERIC Educational Resources Information Center

    Norman, Geoff

    2009-01-01

    In this paper, I review evidence from two theories in psychology relevant to diagnosis and diagnostic errors. "Dual Process" theories of thinking, frequently mentioned with respect to diagnostic error, propose that categorization decisions can be made with either a fast, unconscious, contextual process called System 1 or a slow, analytical,…

  17. Explaining Errors in Children's Questions

    ERIC Educational Resources Information Center

    Rowland, Caroline F.

    2007-01-01

    The ability to explain the occurrence of errors in children's speech is an essential component of successful theories of language acquisition. The present study tested some generativist and constructivist predictions about error on the questions produced by ten English-learning children between 2 and 5 years of age. The analyses demonstrated that,…

  18. Error Estimates for Mixed Methods.

    DTIC Science & Technology

    1979-03-01

    This paper presents abstract error estimates for mixed methods for the approximate solution of elliptic boundary value problems. These estimates are...then applied to obtain quasi-optimal error estimates in the usual Sobolev norms for four examples: three mixed methods for the biharmonic problem and a mixed method for 2nd order elliptic problems. (Author)

  19. Error Correction, Revision, and Learning

    ERIC Educational Resources Information Center

    Truscott, John; Hsu, Angela Yi-ping

    2008-01-01

    Previous research has shown that corrective feedback on an assignment helps learners reduce their errors on that assignment during the revision process. Does this finding constitute evidence that learning resulted from the feedback? Differing answers play an important role in the ongoing debate over the effectiveness of error correction,…

  20. Human Error: A Concept Analysis

    NASA Technical Reports Server (NTRS)

    Hansen, Frederick D.

    2007-01-01

    Human error is the subject of research in almost every industry and profession of our times. This term is part of our daily language and intuitively understood by most people; however, it would be premature to assume that everyone's understanding of human error is the same. For example, human error is used to describe the outcome or consequence of human action, the causal factor of an accident, deliberate violations, and the actual action taken by a human being. As a result, researchers rarely agree on either a specific definition or how to prevent human error. The purpose of this article is to explore the specific concept of human error using Concept Analysis as described by Walker and Avant (1995). The concept of human error is examined as currently used in the literature of a variety of industries and professions. Defining attributes and examples of model, borderline, and contrary cases are described. The antecedents and consequences of human error are also discussed and a definition of human error is offered.

  1. Twenty questions about student errors

    NASA Astrophysics Data System (ADS)

    Fisher, Kathleen M.; Lipson, Joseph Isaac

    Errors in science learning (errors in expression of organized, purposeful thought within the domain of science) provide a window through which glimpses of mental functioning can be obtained. Errors are valuable and normal occurrences in the process of learning science. A student can use his/her errors to develop a deeper understanding of a concept as long as the error can be recognized and appropriate, informative feedback can be obtained. A safe, non-threatening, and nonpunitive environment which encourages dialogue helps students to express their conceptions and to risk making errors. Pedagogical methods that systematically address common student errors produce significant gains in student learning. Just as the nature-nurture interaction is integral to the development of living things, so the individual-environment interaction is basic to thought processes. At a minimum, four systems interact: (1) the individual problem solver (who has a worldview, relatively stable cognitive characteristics, relatively malleable mental states and conditions, and aims or intentions), (2) the task to be performed (including the relative importance and nature of the task), (3) the knowledge domain in which the task is contained, and (4) the environment (including orienting conditions and the social and physical context). Several basic assumptions underlie research on errors and alternative conceptions. Among these are: knowledge and thought involve active, constructive processes; there are many ways to acquire, organize, store, retrieve, and think about a given concept or event; and understanding is achieved by successive approximations. Application of these ideas will require a fundamental change in how science is taught.

  2. Sources of error in the retracted scientific literature.

    PubMed

    Casadevall, Arturo; Steen, R Grant; Fang, Ferric C

    2014-09-01

    Retraction of flawed articles is an important mechanism for correction of the scientific literature. We recently reported that the majority of retractions are associated with scientific misconduct. In the current study, we focused on the subset of retractions for which no misconduct was identified, in order to identify the major causes of error. Analysis of the retraction notices for 423 articles indexed in PubMed revealed that the most common causes of error-related retraction are laboratory errors, analytical errors, and irreproducible results. The most common laboratory errors are contamination and problems relating to molecular biology procedures (e.g., sequencing, cloning). Retractions due to contamination were more common in the past, whereas analytical errors are now increasing in frequency. A number of publications that have not been retracted despite being shown to contain significant errors suggest that barriers to retraction may impede correction of the literature. In particular, few cases of retraction due to cell line contamination were found despite recognition that this problem has affected numerous publications. An understanding of the errors leading to retraction can guide practices to improve laboratory research and the integrity of the scientific literature. Perhaps most important, our analysis has identified major problems in the mechanisms used to rectify the scientific literature and suggests a need for action by the scientific community to adopt protocols that ensure the integrity of the publication process.

  3. Detection and frequency tracking of chirping signals

    SciTech Connect

    Elliott, G.R.; Stearns, S.D.

    1990-08-01

    This paper discusses several methods to detect the presence of and track the frequency of a chirping signal in broadband noise. The dynamic behavior of each of the methods is described and tracking error bounds are investigated in terms of the chirp rate. Frequency tracking and behavior in the presence of varying levels of noise are illustrated in examples. 11 refs., 29 figs.
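A crude frequency tracker in the spirit of this record (but not the authors' methods) can be sketched by following the short-time FFT magnitude peak of a synthetic linear chirp in broadband noise; all parameters below are hypothetical.

```python
import numpy as np

fs = 8000.0                          # sample rate, Hz (illustrative)
t = np.arange(0, 1.0, 1.0 / fs)
f0, rate = 500.0, 1000.0             # start frequency (Hz) and chirp rate (Hz/s)
x = np.sin(2 * np.pi * (f0 * t + 0.5 * rate * t**2))
x += 0.3 * np.random.default_rng(0).normal(size=t.size)   # broadband noise

# Track the frequency as the magnitude-spectrum peak in short windows.
win = 256
tracked = []
for start in range(0, x.size - win + 1, win):
    seg = x[start:start + win] * np.hanning(win)
    spec = np.abs(np.fft.rfft(seg))
    tracked.append(np.argmax(spec) * fs / win)

# The true instantaneous frequency is f0 + rate * t, so the track
# should climb from about 500 Hz toward 1500 Hz.
```

The tracking error of this naive estimator is limited by the FFT bin width (fs/win) and by chirp smearing within each window, which is exactly the kind of chirp-rate-dependent bound the paper investigates.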

  4. The CO2 laser frequency stability measurements

    NASA Technical Reports Server (NTRS)

    Johnson, E. H., Jr.

    1973-01-01

    Carbon dioxide laser frequency stability data are considered for a receiver design that relates to maximum Doppler frequency and its rate of change. Results show that an adequate margin exists in terms of data acquisition, Doppler tracking, and bit error rate as they relate to laser stability and transmitter power.

  5. Onorbit IMU alignment error budget

    NASA Technical Reports Server (NTRS)

    Corson, R. W.

    1980-01-01

    The Star Tracker, Crew Optical Alignment Sight (COAS), and Inertial Measurement Unit (IMU) from a complex navigation system with a multitude of error sources were combined. A complete list of the system errors is presented. The errors were combined in a rational way to yield an estimate of the IMU alignment accuracy for STS-1. The expected standard deviation in the IMU alignment error for STS-1 type alignments was determined to be 72 arc seconds per axis for star tracker alignments and 188 arc seconds per axis for COAS alignments. These estimates are based on current knowledge of the star tracker, COAS, IMU, and navigation base error specifications, and were partially verified by preliminary Monte Carlo analysis.
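A rational combination of independent error sources like the one described typically reduces to a root-sum-square of the per-source standard deviations. The contributor values below are hypothetical placeholders, not the STS-1 budget entries.

```python
import math

def rss(sigmas):
    """Root-sum-square combination of independent 1-sigma error sources."""
    return math.sqrt(sum(s * s for s in sigmas))

# Hypothetical per-axis contributors, in arc seconds: sensor noise,
# mounting misalignment, and catalog/target uncertainty.
per_axis = rss([40.0, 30.0, 20.0])
```

Correlated sources cannot be combined this way; they require the full covariance treatment (or Monte Carlo verification, as in the abstract).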

  6. Angle interferometer cross axis errors

    SciTech Connect

    Bryan, J.B.; Carter, D.L.; Thompson, S.L.

    1994-01-01

    Angle interferometers are commonly used to measure surface plate flatness. An error can exist when the centerline of the double corner cube mirror assembly is not square to the surface plate and the guide bar for the mirror sled is curved. Typical errors can be one to two microns per meter. A similar error can exist in the calibration of rotary tables when the centerline of the double corner cube mirror assembly is not square to the axes of rotation of the angle calibrator and the calibrator axis is not parallel to the rotary table axis. Commercial double corner cube assemblies typically have non-parallelism errors of ten milli-radians between their centerlines and their sides and similar values for non-squareness between their centerlines and end surfaces. The authors have developed a simple method for measuring these errors and correcting them by remachining the reference surfaces.

  7. Angle interferometer cross axis errors

    NASA Astrophysics Data System (ADS)

    Bryan, J. B.; Carter, D. L.; Thompson, S. L.

    1994-01-01

    Angle interferometers are commonly used to measure surface plate flatness. An error can exist when the centerline of the double corner cube mirror assembly is not square to the surface plate and the guide bar for the mirror sled is curved. Typical errors can be one to two microns per meter. A similar error can exist in the calibration of rotary tables when the centerline of the double corner cube mirror assembly is not square to the axes of rotation of the angle calibrator and the calibrator axis is not parallel to the rotary table axis. Commercial double corner cube assemblies typically have non-parallelism errors of ten milli-radians between their centerlines and their sides and similar values for non-squareness between their centerlines and end surfaces. The authors have developed a simple method for measuring these errors and correcting them.

  8. Discretization errors in particle tracking

    NASA Astrophysics Data System (ADS)

    Carmon, G.; Mamman, N.; Feingold, M.

    2007-03-01

    High precision video tracking of microscopic particles is limited by systematic and random errors. Systematic errors are partly due to the discretization process both in position and in intensity. We study the behavior of such errors in a simple tracking algorithm designed for the case of symmetric particles. This symmetry algorithm uses interpolation to estimate the value of the intensity at arbitrary points in the image plane. We show that the discretization error is composed of two parts: (1) the error due to the discretization of the intensity, bD and (2) that due to interpolation, bI. While bD behaves asymptotically like N-1 where N is the number of intensity gray levels, bI is small when using cubic spline interpolation.
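The roughly N⁻¹ behavior of the intensity-discretization error bD can be reproduced with a toy 1-D version of the problem: quantize a Gaussian spot to N gray levels and measure the centroid shift. Spot shape, grid size, and gray levels below are hypothetical, and interpolation error bI is ignored.

```python
import numpy as np

def centroid_error(true_center, n_levels, n_pix=33, sigma=3.0):
    """Centroid error of a 1-D Gaussian spot after quantizing its
    intensity to n_levels gray levels (no interpolation step)."""
    x = np.arange(n_pix)
    spot = np.exp(-0.5 * ((x - true_center) / sigma) ** 2)
    q = np.round(spot * (n_levels - 1)) / (n_levels - 1)
    return abs(np.sum(x * q) / np.sum(q) - true_center)

# Average over sub-pixel positions: the error shrinks roughly like 1/N.
centers = 16 + np.linspace(0.0, 1.0, 50, endpoint=False)
err_16 = np.mean([centroid_error(c, 16) for c in centers])
err_256 = np.mean([centroid_error(c, 256) for c in centers])
```

Averaging over sub-pixel positions matters: for a single position the quantization error can accidentally be small, but the mean error falls with the number of gray levels.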

  9. Errors Associated with IV Infusions in Critical Care

    PubMed Central

    Summa-Sorgini, Claudia; Fernandes, Virginia; Lubchansky, Stephanie; Mehta, Sangeeta; Hallett, David; Bailie, Toni; Lapinsky, Stephen E; Burry, Lisa

    2012-01-01

    Background All medication errors are serious, but those associated with the IV route of administration often result in the most severe outcomes. According to the literature, IV medications are associated with 54% of potential adverse events, and 56% of medication errors. Objectives To determine the type and frequency of errors associated with prescribing, documenting, and administering IV infusions, and to also determine if a correlation exists between the incidence of errors and either the time of day (day versus night) or the day of the week (weekday versus weekend) in an academic medicosurgical intensive care unit without computerized order entry or documentation. Methods As part of a quality improvement initiative, a prospective, observational audit was conducted for all IV infusions administered to critically ill patients during 40 randomly selected shifts over a 7-month period in 2007. For each IV infusion, data were collected from 3 sources: direct observation of administration of the medication to the patient, the medication administration record, and the patient’s medical chart. The primary outcome was the occurrence of any infusion-related errors, defined as any errors of omission or commission in the context of IV medication therapy that harmed or could have harmed the patient. Results It was determined that up to 21 separate errors might occur in association with a single dose of an IV medication. In total, 1882 IV infusions were evaluated, and 5641 errors were identified. Omissions or discrepancies related to documentation accounted for 92.7% of all errors. The most common errors identified via each of the 3 data sources were incomplete labelling of IV tubing (1779 or 31.5% of all errors), omission of infusion diluent from the medication administration record (474 or 8.4% of all errors), and discrepancy between the medication order as recorded in the patient’s chart and the IV medication that was being infused (105 or 1.9% of all errors

  10. Empirical Error Analysis of GPS RO Atmospheric Profiles

    NASA Astrophysics Data System (ADS)

    Scherllin-Pirscher, B.; Steiner, A. K.; Foelsche, U.; Kirchengast, G.; Kuo, Y.

    2010-12-01

    In the upper troposphere and lower stratosphere (UTLS) region the radio occultation (RO) technique provides accurate profiles of atmospheric parameters. These profiles can be used in operational meteorology (i.e., numerical weather prediction), atmospheric and climate research. We present results of an empirical error analysis of GPS RO data retrieved at UCAR and at WEGC and compare data characteristics of CHAMP, GRACE-A, and Formosat-3/COSMIC. Retrieved atmospheric profiles of bending angle, refractivity, dry pressure, dry geopotential height, and dry temperature are compared to reference profiles extracted from ECMWF analysis fields. This statistical error characterization yields a combined (RO observational plus ECMWF model) error. We restrict our analysis to the years 2007 to 2009 due to known ECMWF deficiencies prior to 2007 (e.g., deficiencies in the representation of the austral polar vortex or the weak representation of tropopause height variability). The GPS RO observational error is determined by subtracting the estimated ECMWF error from the combined error in terms of variances. Our results indicate that the estimated ECMWF error and the GPS RO observational error are approximately of the same order of magnitude. Differences between different satellites are small below 35 km. The GPS RO observational error features latitudinal and seasonal variations, which are most pronounced at stratospheric altitudes at high latitudes. We present simplified models for the observational error, which depend on a few parameters only (Steiner and Kirchengast, JGR 110, D15307, 2005). These global error models are derived from fitting simple analytical functions to the GPS RO observational error. From the lower troposphere up to the tropopause, the model error decreases closely proportional to an inverse height law. Within a core "tropopause region" of the upper troposphere/lower stratosphere the model error is constant and above this region it increases exponentially with
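The simplified observational-error model described here, with inverse-height growth below the tropopause region, a constant core, and exponential growth above, can be written down directly. The breakpoint heights and scale values below are placeholders, not the fitted parameters of Steiner and Kirchengast (2005).

```python
import math

def ro_obs_error(z_km, q0=0.5, z_trop=12.0, z_strat=20.0, scale_h=15.0):
    """Piecewise GPS RO observational error vs. altitude.
    All parameter values are illustrative placeholders."""
    if z_km < z_trop:
        return q0 * z_trop / z_km                      # ~1/z growth downward
    if z_km <= z_strat:
        return q0                                      # constant core region
    return q0 * math.exp((z_km - z_strat) / scale_h)   # exponential above
```

Fitting only q0, the breakpoints, and the scale height to the empirical error profiles is what makes these global models depend "on a few parameters only".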

  11. Design of two-channel filter bank using nature inspired optimization based fractional derivative constraints.

    PubMed

    Kuldeep, B; Singh, V K; Kumar, A; Singh, G K

    2015-01-01

    In this article, a novel approach for 2-channel linear phase quadrature mirror filter (QMF) bank design based on a hybrid of gradient based optimization and optimization of fractional derivative constraints is introduced. For the purpose of this work, recently proposed nature inspired optimization techniques such as cuckoo search (CS), modified cuckoo search (MCS) and wind driven optimization (WDO) are explored for the design of QMF bank. 2-Channel QMF is also designed with particle swarm optimization (PSO) and artificial bee colony (ABC) nature inspired optimization techniques. The design problem is formulated in frequency domain as sum of L2 norm of error in passband, stopband and transition band at quadrature frequency. The contribution of this work is the novel hybrid combination of gradient based optimization (Lagrange multiplier method) and nature inspired optimization (CS, MCS, WDO, PSO and ABC) and its usage for optimizing the design problem. Performance of the proposed method is evaluated by passband error (ϕp), stopband error (ϕs), transition band error (ϕt), peak reconstruction error (PRE), stopband attenuation (As) and computational time. The design examples illustrate the ingenuity of the proposed method. Results are also compared with the other existing algorithms, and it was found that the proposed method gives best result in terms of peak reconstruction error and transition band error while it is comparable in terms of passband and stopband error. Results show that the proposed method is successful for both lower and higher order 2-channel QMF bank design. A comparative study of various nature inspired optimization techniques is also presented, and the study singles out CS as a best QMF optimization technique.
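One figure of merit in this design problem, the flatness of the overall distortion transfer function of the 2-channel bank, is easy to evaluate for a candidate lowpass prototype h0 with the QMF highpass h1[n] = (−1)ⁿ h0[n]. This is a generic check, not the paper's optimization routine.

```python
import numpy as np

def qmf_distortion(h0, n_freq=1024):
    """Peak deviation of |H0(w)|^2 + |H1(w)|^2 from its mean over [0, pi),
    where h1[n] = (-1)^n h0[n]; aliasing cancels structurally in a QMF
    bank, so this residual amplitude distortion is the reconstruction error."""
    h0 = np.asarray(h0, dtype=float)
    h1 = h0 * (-1.0) ** np.arange(h0.size)
    w = np.linspace(0.0, np.pi, n_freq, endpoint=False)
    e = np.exp(-1j * np.outer(w, np.arange(h0.size)))   # DTFT matrix
    T = np.abs(e @ h0) ** 2 + np.abs(e @ h1) ** 2
    return np.max(np.abs(T - np.mean(T)))

# The 2-tap Haar pair reconstructs perfectly: distortion is (numerically) zero.
haar = [1 / 2 ** 0.5, 1 / 2 ** 0.5]
```

For longer linear-phase prototypes the distortion is small but nonzero, and driving it down jointly with the passband, stopband, and transition-band errors is precisely the optimization task the article addresses.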

  12. Laser tracker error determination using a network measurement

    NASA Astrophysics Data System (ADS)

    Hughes, Ben; Forbes, Alistair; Lewis, Andrew; Sun, Wenjuan; Veal, Dan; Nasr, Karim

    2011-04-01

    We report on a fast, easily implemented method to determine all the geometrical alignment errors of a laser tracker, to high precision. The technique requires no specialist equipment and can be performed in less than an hour. The technique is based on the determination of parameters of a geometric model of the laser tracker, using measurements of a set of fixed target locations, from multiple locations of the tracker. After fitting of the model parameters to the observed data, the model can be used to perform error correction of the raw laser tracker data or to derive correction parameters in the format of the tracker manufacturer's internal error map. In addition to determination of the model parameters, the method also determines the uncertainties and correlations associated with the parameters. We have tested the technique on a commercial laser tracker in the following way. We disabled the tracker's internal error compensation, and used a five-position, fifteen-target network to estimate all the geometric errors of the instrument. Using the error map generated from this network test, the tracker was able to pass a full performance validation test, conducted according to a recognized specification standard (ASME B89.4.19-2006). We conclude that the error correction determined from the network test is as effective as the manufacturer's own error correction methodologies.

  13. Error prediction for probes guided by means of fixtures

    NASA Astrophysics Data System (ADS)

    Fitzpatrick, J. Michael

    2012-02-01

    Probe guides are surgical fixtures that are rigidly attached to bone anchors in order to place a probe at a target with high accuracy (RMS error < 1 mm). Applications include needle biopsy, the placement of electrodes for deep-brain stimulation (DBS), spine surgery, and cochlear implant surgery. Targeting is based on pre-operative images, but targeting errors can arise from three sources: (1) anchor localization error, (2) guide fabrication error, and (3) external forces and torques. A well-established theory exists for the statistical prediction of target registration error (TRE) when targeting is accomplished by means of tracked probes, but no such TRE theory is available for fixtured probe guides. This paper provides that theory and shows that all three error sources can be accommodated in a remarkably simple extension of existing theory. Both the guide and the bone with attached anchors are modeled as objects with rigid sections and elastic sections, the latter of which are described by stiffness matrices. By relating minimization of elastic energy for guide attachment to minimization of fiducial registration error for point registration, it is shown that the expression for targeting error for the guide is identical to that for weighted rigid point registration if the weighting matrices are properly derived from stiffness matrices and the covariance matrices for fiducial localization are augmented with offsets in the anchor positions. An example of the application of the theory is provided for ear surgery.

  14. Standard Errors of Equating for the Percentile Rank-Based Equipercentile Equating with Log-Linear Presmoothing

    ERIC Educational Resources Information Center

    Wang, Tianyou

    2009-01-01

    Holland and colleagues derived a formula for analytical standard error of equating using the delta-method for the kernel equating method. Extending their derivation, this article derives an analytical standard error of equating procedure for the conventional percentile rank-based equipercentile equating with log-linear smoothing. This procedure is…

  15. An estimation error bound for pixelated sensing

    NASA Astrophysics Data System (ADS)

    Kreucher, Chris; Bell, Kristine

    2016-05-01

    This paper considers the ubiquitous problem of estimating the state (e.g., position) of an object based on a series of noisy measurements. The standard approach is to formulate this problem as one of measuring the state (or a function of the state) corrupted by additive Gaussian noise. This model assumes both (i) the sensor provides a measurement of the true target (or, alternatively, a separate signal processing step has eliminated false alarms), and (ii) the error source in the measurement is accurately described by a Gaussian model. In reality, however, sensor measurements are often formed on a grid of pixels - e.g., Ground Moving Target Indication (GMTI) measurements are formed for a discrete set of (angle, range, velocity) voxels, and EO imagery is made on (x, y) grids. When a target is present in a pixel, therefore, the uncertainty is not Gaussian (instead it is a boxcar function) and unbiased estimation is not generally possible, as the location of the target within the pixel defines the bias of the estimator. It turns out that this small modification to the measurement model makes traditional bounding approaches inapplicable. This paper discusses pixelated sensing in more detail and derives the minimum mean squared error (MMSE) bound for estimation in the pixelated scenario. We then use this error calculation to investigate the utility of using non-thresholded measurements.
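The boxcar nature of the within-pixel uncertainty is easy to confirm numerically: a sensor that reports only pixel centers produces errors uniform over the pixel, whose RMS is w/√12 for pixel width w. The grid and sample counts below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
w = 2.0                                   # pixel width (arbitrary units)
true_pos = rng.uniform(0.0, 100.0, size=200_000)

# A pixelated sensor reports only the center of the pixel containing
# the target, so the within-pixel error is uniform (boxcar), not Gaussian.
measured = (np.floor(true_pos / w) + 0.5) * w
rmse = np.sqrt(np.mean((measured - true_pos) ** 2))

# For a uniform error over a width-w pixel, RMSE -> w / sqrt(12).
```

This w/√12 figure is the floor that any estimator faces from pixelation alone, before sensor noise is added, which is why the Gaussian-error bounds need replacing in this regime.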

  16. Errors Associated With Measurements from Imaging Probes

    NASA Astrophysics Data System (ADS)

    Heymsfield, A.; Bansemer, A.

    2015-12-01

    Imaging probes, which collect data on particles from about 20 or 50 microns to several centimeters, have been the instruments collecting data on droplet and ice microphysics for more than 40 years. During that period, a number of problems associated with the measurements have been identified, including questions about the depth of field of particles within the probes' sample volume and ice shattering, among others. Many different software packages have been developed to process and interpret the data, leading to differences in the particle size distributions and estimates of the extinction, ice water content and radar reflectivity obtained from the same data. Given the numerous complications associated with imaging probe data, we have developed an optical array probe simulation package to explore the errors that can be expected with actual data. We simulate full particle size distributions with known properties, and then process the data with the same software that is used to process real-life data. We show that there are significant errors in the retrieved particle size distributions as well as derived parameters such as liquid/ice water content and total number concentration. Furthermore, the nature of these errors changes as a function of the shape of the simulated size distribution and the physical and electronic characteristics of the instrument. We will introduce some methods to improve the retrieval of particle size distributions from real-life data.

  17. Error propagation in energetic carrying capacity models

    USGS Publications Warehouse

    Pearse, Aaron T.; Stafford, Joshua D.

    2014-01-01

    Conservation objectives derived from carrying capacity models have been used to inform management of landscapes for wildlife populations. Energetic carrying capacity models are particularly useful in conservation planning for wildlife; these models use estimates of food abundance and energetic requirements of wildlife to target conservation actions. We provide a general method for incorporating a foraging threshold (i.e., density of food at which foraging becomes unprofitable) when estimating food availability with energetic carrying capacity models. We use a hypothetical example to describe how past methods for adjustment of foraging thresholds biased results of energetic carrying capacity models in certain instances. Adjusting foraging thresholds at the patch level of the species of interest provides results consistent with ecological foraging theory. Presentation of two case studies suggests variation in bias which, in certain instances, created large errors in conservation objectives and may have led to inefficient allocation of limited resources. Our results also illustrate how small errors or biases in application of input parameters, when extrapolated to large spatial extents, propagate errors in conservation planning and can have negative implications for target populations.
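A patch-level foraging threshold of the kind advocated here amounts to excluding, patch by patch, the food density that is unprofitable to exploit. The patch values, threshold, and energy conversion below are hypothetical numbers for illustration, not from the study.

```python
def available_energy(patches, threshold, energy_per_unit):
    """Energetic carrying capacity with a patch-level foraging threshold:
    food density below the threshold is unprofitable and excluded.
    patches: iterable of (food density, area) pairs."""
    total = 0.0
    for density, area in patches:
        usable = max(0.0, density - threshold) * area
        total += usable * energy_per_unit
    return total

# Hypothetical patches as (food density kg/ha, area ha); threshold 10 kg/ha.
patches = [(50.0, 10.0), (5.0, 100.0), (120.0, 2.0)]
energy = available_energy(patches, threshold=10.0, energy_per_unit=3.0)
```

Note the second patch contributes nothing: subtracting the threshold at the patch level, rather than from the landscape total, is what keeps the estimate consistent with foraging theory.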

  18. Shape error analysis for reflective nano focusing optics

    SciTech Connect

    Modi, Mohammed H.; Idir, Mourad

    2010-06-23

    Focusing performance of reflective x-ray optics is determined by surface figure accuracy. Any surface imperfection present on such optics introduces a phase error in the outgoing wave fields. Therefore the converging beam at the focal spot will differ from the desired performance. The effect of these errors on focusing performance can be calculated by a wave optical approach considering a coherent wave field illumination of the optical elements. We have developed a wave optics simulator using the Fresnel-Kirchhoff diffraction integral to calculate the mirror pupil function. Both analytically calculated and measured surface topography data can be taken as an aberration source to outgoing wave fields. Simulations are performed to study the effect of surface height fluctuations on focusing performance over a wide frequency range in the high, mid and low frequency bands. The results using a real shape profile measured with a long trace profilometer (LTP) suggest that a shape error of λ/4 PV (peak to valley) is tolerable to achieve diffraction limited performance. It is desirable to remove shape error of very low frequency, such as 0.1 mm⁻¹, which otherwise will generate beam waist or satellite peaks. All other frequencies above this limit will not affect the focused beam profile but only cause a loss in intensity.
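At grazing incidence the link between a surface height error h and the phase error imposed on the reflected wavefront is Δφ = 4π·h·sin(θ)/λ, which is what connects a λ/4 PV wavefront tolerance (Δφ = π/2, the Rayleigh quarter-wave criterion) to a height tolerance. The wavelength and grazing angle below are illustrative, not from the paper.

```python
import math

def wavefront_phase_error(height_err, grazing_deg, wavelength):
    """Phase error (rad) put on the reflected wave by a surface height
    error h at grazing angle theta: dphi = 4*pi*h*sin(theta)/lambda."""
    return (4 * math.pi * height_err
            * math.sin(math.radians(grazing_deg)) / wavelength)

# Height error that reaches the quarter-wave limit (dphi = pi/2).
wavelength = 0.1e-9        # 0.1 nm hard x-rays (illustrative)
theta = 0.2                # grazing angle in degrees (illustrative)
h_limit = wavelength / (8 * math.sin(math.radians(theta)))
```

The sin(θ) factor is why grazing-incidence x-ray mirrors tolerate height errors far larger than the sub-angstrom wavelength would naively suggest.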

  19. Study of the Effect of Temporal Sampling Frequency on DSCOVR Observations Using the GEOS-5 Nature Run Results (Part I): Earth's Radiation Budget

    NASA Technical Reports Server (NTRS)

    Holdaway, Daniel; Yang, Yuekui

    2016-01-01

    Satellites always sample the Earth-atmosphere system at a finite temporal resolution. This study investigates the effect of sampling frequency on the satellite-derived Earth radiation budget, with the Deep Space Climate Observatory (DSCOVR) as an example. The output from NASA's Goddard Earth Observing System Version 5 (GEOS-5) Nature Run is used as the truth. The Nature Run is a high spatial and temporal resolution atmospheric simulation spanning a two-year period. The effect of temporal resolution on potential DSCOVR observations is assessed by sampling the full Nature Run data with 1-h to 24-h frequencies. The uncertainty associated with a given sampling frequency is measured by computing means over daily, monthly, seasonal and annual intervals and determining the spread across different possible starting points. The skill with which a particular sampling frequency captures the structure of the full time series is measured using correlations and normalized errors. Results show that a higher sampling frequency gives more information and less uncertainty in the derived radiation budget. A sampling frequency coarser than every 4 h results in significant error. Correlations between true and sampled time series also decrease more rapidly for sampling frequencies coarser than every 4 h.
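The start-point-spread diagnostic described here can be sketched on a synthetic hourly series: subsample at a given interval from every possible starting hour and measure the spread of the resulting annual means. The series below (mean level, diurnal cycle, seasonal drift) is a hypothetical stand-in for the Nature Run output.

```python
import numpy as np

hours = np.arange(24 * 365)
# Synthetic hourly flux: mean level, diurnal cycle, slow seasonal drift
# (amplitudes are hypothetical, nominally W/m^2).
truth = (240.0 + 40.0 * np.sin(2 * np.pi * hours / 24)
         + 10.0 * np.sin(2 * np.pi * hours / (24.0 * 365.0)))

def annual_mean_spread(series, every_h):
    """Spread of the annual mean across all possible sampling start hours."""
    means = [series[s::every_h].mean() for s in range(every_h)]
    return max(means) - min(means)

spread_4h = annual_mean_spread(truth, 4)     # resolves the diurnal cycle
spread_24h = annual_mean_spread(truth, 24)   # aliases it completely
```

Every-4-h sampling still averages out the diurnal cycle, while once-daily sampling locks onto a single local hour and its annual mean inherits the full diurnal amplitude, illustrating why uncertainty grows sharply for sampling coarser than a few hours.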

  20. Mars gravitational field estimation error

    NASA Technical Reports Server (NTRS)

    Compton, H. R.; Daniels, E. F.

    1972-01-01

    The error covariance matrices associated with a weighted least-squares differential correction process have been analyzed for accuracy in determining the gravitational coefficients through degree and order five in the Mars gravitational potential function. The results are presented in terms of standard deviations for the assumed estimated parameters. The covariance matrices were calculated by assuming Doppler tracking data from a Mars orbiter, a priori statistics for the estimated parameters, and model error uncertainties for tracking-station locations, the Mars ephemeris, the astronomical unit, the Mars gravitational constant (G sub M), and the gravitational coefficients of degrees six and seven. Model errors were treated by using the concept of consider parameters.

  1. Stochastic Models of Human Errors

    NASA Technical Reports Server (NTRS)

    Elshamy, Maged; Elliott, Dawn M. (Technical Monitor)

    2002-01-01

    Humans play an important role in the overall reliability of engineering systems. More often than not, accidents and system failures are traced to human errors. Therefore, in order to have a meaningful system risk analysis, the reliability of the human element must be taken into consideration. Describing the human error process by mathematical models is key to analyzing its contributing factors. The objective of this research effort is therefore to establish stochastic models, substantiated by a sound theoretical foundation, to address the occurrence of human errors in the processing of the Space Shuttle.

  2. Error bounds in cascading regressions

    USGS Publications Warehouse

    Karlinger, M.R.; Troutman, B.M.

    1985-01-01

    Cascading regressions is a technique for predicting a value of a dependent variable when no paired measurements exist to perform a standard regression analysis. Biases in coefficients of a cascaded-regression line, as well as the error variance of points about the line, are functions of the correlation coefficient between the dependent and independent variables. Although this correlation cannot be computed because of the lack of paired data, bounds can be placed on errors through the required properties of the correlation coefficient. The potential mean-squared error of a cascaded-regression prediction can be large, as illustrated through an example using geomorphologic data. © 1985 Plenum Publishing Corporation.
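
    The bounding device is generic enough to sketch: by positive semidefiniteness of the 3x3 correlation matrix of (x, y, z), the two observable correlations confine the unobservable one to an interval (a textbook constraint, not the paper's exact derivation):

```python
import math

def r_xz_bounds(r_xy, r_yz):
    """Feasible range of the unobservable correlation r_xz given the
    two observed correlations, from positive semidefiniteness of the
    3x3 correlation matrix of (x, y, z)."""
    slack = math.sqrt((1.0 - r_xy ** 2) * (1.0 - r_yz ** 2))
    return r_xy * r_yz - slack, r_xy * r_yz + slack

# Example: even strong x-y and y-z links leave a wide interval for
# the x-z correlation, which drives the cascaded-regression error bound.
lo, hi = r_xz_bounds(0.9, 0.8)
```

    Here r_xz can lie anywhere in roughly [0.46, 0.98], so a prediction of z from x through y inherits a correspondingly wide error bound.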

  3. Derivation and generalization of the dispersion relation of rising-sun magnetron with sectorial and rectangular cavities

    SciTech Connect

    Shi, Di-Fu; Qian, Bao-Liang; Wang, Hong-Gang; Li, Wei

    2013-12-15

    The field analysis method is used to derive the dispersion relation of a rising-sun magnetron with sectorial and rectangular cavities. This dispersion relation is then extended to the general case, in which the rising-sun magnetron may have multiple groups of cavities of different shapes and sizes, and from which the dispersion relations of the conventional magnetron, rising-sun magnetron, and magnetron-like devices can be obtained directly. The results show that the relative errors between the theoretical and simulated values of the dispersion relation are less than 3%, and the relative errors between the theoretical and simulated values of the π-mode cutoff frequencies are less than 2%. In addition, the influences of each structural parameter of the magnetron on the π-mode cutoff frequency and on the mode separation are investigated qualitatively and quantitatively, which may be of great interest for the design of frequency-tuning magnetrons.

  4. Resonance frequency and mass identification of zeptogram-scale nanosensor based on the nonlocal beam theory.

    PubMed

    Li, Xian-Fang; Tang, Guo-Jin; Shen, Zhi-Bin; Lee, Kang Yong

    2015-01-01

    Free vibration and mass detection of carbon nanotube-based sensors are studied in this paper. Since the mechanical properties of carbon nanotubes exhibit a size effect, the nonlocal beam model is used to characterize flexural vibration of nanosensors carrying a concentrated nanoparticle, where the size effect is reflected by a nonlocal parameter. For cantilevered or bridged sensors, frequency equations are derived when a nanoparticle is carried at the free end or the middle, respectively. Exact resonance frequencies are numerically determined for clamped-free, simply-supported, and clamped-clamped resonators. Alternative closed-form approximations of the fundamental frequency are given, with relative errors of less than 0.4%, 0.6%, and 1.4% for cantilever, simply-supported, and bridged sensors, respectively. Mass identification formulae are derived in terms of the frequency shift. Masses identified via the present approach coincide with those from the molecular mechanics approach and reach as low as 10^-24 kg. The obtained results indicate that the nonlocal effect decreases the resonance frequency, except for the fundamental frequency of the cantilever sensor. These results are helpful for the design of micro/nanomechanical zeptogram-scale biosensors.
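
    The mass-identification idea reduces, in the simplest single-mode picture, to solving f = sqrt(k/m)/(2π) for the mass added between the unloaded and loaded states. The sketch below uses assumed nanotube-scale numbers (stiffness, bare frequency, particle mass), not the paper's nonlocal formulae:

```python
import math

def added_mass(k_eff, f0, f_loaded):
    """Attached mass inferred from the resonance-frequency shift of a
    single-mode resonator: f = sqrt(k/m)/(2*pi), solved for the mass
    difference between the loaded and unloaded states."""
    return k_eff / (4.0 * math.pi ** 2) * (1.0 / f_loaded ** 2 - 1.0 / f0 ** 2)

# Round trip with assumed numbers: effective stiffness 1e-3 N/m,
# bare resonance 1 GHz, attached particle of 1e-24 kg (1 zeptogram).
k = 1.0e-3                                     # N/m
f0 = 1.0e9                                     # Hz
m_eff = k / (4.0 * math.pi ** 2 * f0 ** 2)     # effective modal mass, kg
dm = 1.0e-24                                   # kg
f_loaded = math.sqrt(k / (m_eff + dm)) / (2.0 * math.pi)
recovered = added_mass(k, f0, f_loaded)
```

    The round trip recovers the attached 10^-24 kg exactly (up to floating-point error), which is the order of mass the abstract reports.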

  5. WE-G-BRA-04: Common Errors and Deficiencies in Radiation Oncology Practice

    SciTech Connect

    Kry, S; Dromgoole, L; Alvarez, P; Lowenstein, J; Molineu, A; Taylor, P; Followill, D

    2015-06-15

    Purpose: Dosimetric errors in radiotherapy dose delivery lead to suboptimal treatments and outcomes. This work reviews the frequency and severity of dosimetric and programmatic errors identified by on-site audits performed by the IROC Houston QA center. Methods: IROC Houston on-site audits evaluate absolute beam calibration, relative dosimetry data compared to the treatment planning system data, and processes such as machine QA. Audits conducted from 2000-present were abstracted for recommendations, including type of recommendation and magnitude of error when applicable. Dosimetric recommendations corresponded to absolute dose errors >3% and relative dosimetry errors >2%. On-site audits of 1020 accelerators at 409 institutions were reviewed. Results: A total of 1280 recommendations were made (average 3.1/institution). The most common recommendation was for inadequate QA procedures per TG-40 and/or TG-142 (82% of institutions) with the most commonly noted deficiency being x-ray and electron off-axis constancy versus gantry angle. Dosimetrically, the most common errors in relative dosimetry were in small-field output factors (59% of institutions), wedge factors (33% of institutions), off-axis factors (21% of institutions), and photon PDD (18% of institutions). Errors in calibration were also problematic: 20% of institutions had an error in electron beam calibration, 8% had an error in photon beam calibration, and 7% had an error in brachytherapy source calibration. Almost all types of data reviewed included errors up to 7% although 20 institutions had errors in excess of 10%, and 5 had errors in excess of 20%. The frequency of electron calibration errors decreased significantly with time, but all other errors show non-significant changes. Conclusion: There are many common and often serious errors made during the establishment and maintenance of a radiotherapy program that can be identified through independent peer review. Physicists should be cautious, particularly

  6. Food frequency questionnaires.

    PubMed

    Pérez Rodrigo, Carmen; Aranceta, Javier; Salvador, Gemma; Varela-Moreiras, Gregorio

    2015-02-26

    Food Frequency Questionnaires are dietary assessment tools widely used in epidemiological studies investigating the relationship between dietary intake and disease or risk factors since the early '90s. The three main components of these questionnaires are the list of foods, the frequency of consumption, and the portion size consumed. The food list should reflect the food habits of the study population at the time the data are collected. The frequency of consumption may be asked by open-ended questions or by presenting frequency categories. Qualitative Food Frequency Questionnaires do not ask about the consumed portions; semi-quantitative versions include standard portions, and quantitative questionnaires ask respondents to estimate the portion size consumed either in household measures or grams. The latter implies a greater participant burden. Some versions include only close-ended questions in a standardized format, while others add an open section with questions about specific food habits and practices and admit additions to the food list for foods and beverages consumed that are not included. The method can be self-administered, on paper or web-based, or interview-administered either face-to-face or by telephone. Due to the standard format, especially in closed-ended versions, and the method of administration, FFQs are highly cost-effective, which encourages their widespread use in large-scale epidemiological cohort studies and also in other study designs. Coding and processing the data collected is also less costly and requires less nutrition expertise compared to other dietary intake assessment methods. However, the main limitations are systematic errors and biases in estimates. Important efforts are being made to improve the quality of the information. The use of FFQs in combination with other methods has been recommended, enabling the required adjustments.
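
    The coding-and-processing step mentioned above usually reduces to frequency (converted to occasions per day) times standard portion times nutrient density. A minimal scoring sketch with invented frequency categories, portions, and densities:

```python
# Hypothetical frequency categories converted to eating occasions per day.
FREQ_PER_DAY = {
    "never": 0.0,
    "1-3/month": 0.07,
    "1/week": 0.14,
    "2-4/week": 0.43,
    "1/day": 1.0,
    "2+/day": 2.5,
}

def daily_nutrient_intake(responses, portion_g, nutrient_per_g):
    """Estimated daily intake of one nutrient from a semi-quantitative
    FFQ: frequency/day x standard portion (g) x nutrient density (per g)."""
    return sum(
        FREQ_PER_DAY[category] * portion_g[food] * nutrient_per_g[food]
        for food, category in responses.items()
    )

# Example respondent (foods, portions, and densities are all assumed).
protein_g = daily_nutrient_intake(
    {"milk": "1/day", "bread": "2-4/week"},
    {"milk": 200.0, "bread": 50.0},          # standard portions, g
    {"milk": 0.033, "bread": 0.08},          # protein per g of food
)
```

    Systematic error enters exactly here: every category-to-frequency conversion and every standard portion is an approximation applied identically to all respondents.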

  7. Analysis of Omni-directivity Error of Electromagnetic Field Probe using Isotropic Antenna

    NASA Astrophysics Data System (ADS)

    Hartansky, Rene

    2016-12-01

    This manuscript analyzes the omni-directivity error of an electromagnetic field (EM) probe and its dependence on frequency. The global directional characteristic of the whole EM probe consists of three independent directional characteristics of EM sensors, one for each coordinate axis. The shape of each directional characteristic is frequency dependent, and so is the shape of the whole EM probe's global directional characteristic. This induces a systematic error in the measurement of EM fields. The manuscript also contains a quantitative formulation of the errors caused by the shape change of the directional characteristics for different types of sensors, depending on frequency and their mutual arrangement.
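
    The effect can be sketched numerically: three ideal orthogonal sensors reconstruct the field magnitude exactly in every direction, while any distortion of the individual patterns (modelled here by an assumed exponent applied to an ideal |cos| response) produces a direction-dependent, i.e. systematic, error:

```python
import numpy as np

def isotropy_error_db(delta):
    """Peak-to-peak deviation (dB) of the combined response of three
    orthogonal sensors whose individual |cos| patterns are distorted
    by a hypothetical frequency-dependent exponent (1 + delta)."""
    theta = np.linspace(0.01, np.pi - 0.01, 181)
    phi = np.linspace(0.0, 2.0 * np.pi, 361)
    T, P = np.meshgrid(theta, phi, indexing="ij")
    # direction cosines of the incident field w.r.t. the sensor axes
    c = [np.sin(T) * np.cos(P), np.sin(T) * np.sin(P), np.cos(T)]
    # ideal: resp = sqrt(sum cos^2) = 1 everywhere; distorted: not so
    resp = np.sqrt(sum(np.abs(ci) ** (2.0 * (1.0 + delta)) for ci in c))
    return 20.0 * np.log10(resp.max() / resp.min())

err_ideal = isotropy_error_db(0.0)   # ideal sensors: perfectly isotropic
err_real = isotropy_error_db(0.2)    # distorted patterns: systematic error
```

    For the assumed 20% pattern distortion the combined response varies by about 1 dB over direction, while the ideal probe shows no variation at all.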

  8. A family-based likelihood ratio test for general pedigree structures that allows for genotyping error and missing data.

    PubMed

    Yang, Yang; Wise, Carol A; Gordon, Derek; Finch, Stephen J

    2008-01-01

    The purpose of this work is the development of a family-based association test that allows for random genotyping errors and missing data and makes use of information on affected and unaffected pedigree members. We derive the conditional likelihood functions of the general nuclear family for the following scenarios: complete parental genotype data and no genotyping errors; only one genotyped parent and no genotyping errors; no parental genotype data and no genotyping errors; and no parental genotype data with genotyping errors. We find maximum likelihood estimates of the marker locus parameters, including the penetrances and population genotype frequencies, under the null hypothesis that all penetrance values are equal and under the alternative hypothesis. We then compute the likelihood ratio test. We perform simulations to assess the adequacy of the central chi-square distribution approximation when the null hypothesis is true. We also perform simulations to compare the power of the TDT and this likelihood-based method. Finally, we apply our method to 23 SNPs genotyped in nuclear families from a recently published study of idiopathic scoliosis (IS). Our simulations suggest that this likelihood ratio test statistic follows a central chi-square distribution with 1 degree of freedom under the null hypothesis, even in the presence of missing data and genotyping errors. The power comparison shows that this likelihood ratio test is more powerful than the original TDT for the simulations considered. For the IS data, the marker rs7843033 shows the most significant evidence with our method (p = 0.0003), consistent with a previous report that found rs7843033 to have the 2nd most significant TDTae p value among a set of 23 SNPs.
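
    The test statistic itself is standard: twice the log-likelihood difference between the alternative and null fits, referred to a central chi-square. A stdlib-only sketch for the 1-degree-of-freedom case reported above (the erfc identity used here holds only for df = 1):

```python
import math

def lrt_pvalue_1df(loglik_null, loglik_alt):
    """Likelihood ratio test p-value for one degree of freedom.
    The chi-square(1) survival function equals erfc(sqrt(stat / 2))."""
    stat = 2.0 * (loglik_alt - loglik_null)
    return math.erfc(math.sqrt(max(stat, 0.0) / 2.0))

# Sanity check: the 5% critical value of chi-square(1) is about 3.841,
# so a log-likelihood improvement of 3.841 / 2 should give p close to 0.05.
p = lrt_pvalue_1df(-1000.0, -1000.0 + 3.841 / 2.0)
```

    In practice the two log-likelihoods come from maximizing the family likelihood with penetrances free versus constrained equal, exactly as the abstract describes.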

  9. Addressee Errors in ATC Communications: The Call Sign Problem

    NASA Technical Reports Server (NTRS)

    Monan, W. P.

    1983-01-01

    Communication errors involving aircraft call signs were portrayed in reports of 462 hazardous incidents voluntarily submitted to the ASRS during an approximately four-year period. These errors resulted in confusion, disorder, and uncoordinated traffic conditions and produced the following types of operational anomalies: altitude deviations, wrong-way headings, aborted takeoffs, go-arounds, runway incursions, missed crossing-altitude restrictions, descents toward high terrain, and traffic conflicts in flight and on the ground. Analysis of the report set resulted in identification of five categories of errors involving call signs: (1) faulty radio usage techniques, (2) call sign loss or smearing due to frequency congestion, (3) confusion resulting from similar-sounding call signs, (4) airmen misses of call signs leading to failures to acknowledge or read back, and (5) controller failures regarding confirmation of acknowledgements or readbacks. These error categories are described in detail and several associated hazard-mitigating measures that might be taken are considered.

  10. Aging transition by random errors

    PubMed Central

    Sun, Zhongkui; Ma, Ning; Xu, Wei

    2017-01-01

    In this paper, the effects of random errors on oscillating behaviors have been studied theoretically and numerically in a prototypical coupled nonlinear oscillator. Two kinds of noise have been employed to represent the measurement errors accompanying the parameter specifying the distance from a Hopf bifurcation in the Stuart-Landau model. It has been demonstrated that when the random errors are uniform noise, increasing the noise intensity can effectively increase the robustness of the system. When the random errors are normally distributed noise, increasing the variance can also enhance the robustness of the system, provided the probability that aging transition occurs reaches a certain threshold; the opposite conclusion is obtained when the probability is below the threshold. These findings provide an alternative candidate for controlling the critical value of the aging transition in coupled oscillator systems, which in practice are composed of active and inactive oscillators. PMID:28198430
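
    A minimal simulation of this setting (globally coupled Stuart-Landau oscillators, with uniform measurement errors added to the bifurcation parameter; all coupling, noise, and integration values assumed) looks like:

```python
import numpy as np

def order_parameter(alphas, K=1.0, omega=3.0, steps=4000, dt=0.005, seed=1):
    """Euler-integrate globally coupled Stuart-Landau oscillators
    dz_j/dt = (alpha_j + i*omega - |z_j|^2) z_j + K (<z> - z_j)
    and return the final order parameter |<z>|."""
    rng = np.random.default_rng(seed)
    n = len(alphas)
    z = 0.1 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    for _ in range(steps):
        z = z + dt * ((alphas + 1j * omega - np.abs(z) ** 2) * z
                      + K * (z.mean() - z))
    return abs(z.mean())

n = 20
# uniform random errors on the Hopf bifurcation parameter (assumed width)
noise = np.random.default_rng(0).uniform(-0.5, 0.5, n)
active = order_parameter(np.full(n, 2.0) + noise)   # all oscillators active
aged = order_parameter(np.full(n, -2.0) + noise)    # all oscillators inactive
```

    With positive bifurcation parameters the ensemble settles onto a finite-amplitude synchronized state; flipping every parameter negative drives the order parameter to zero, the aged state whose onset the paper's noise analysis concerns.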

  11. Robust characterization of leakage errors

    NASA Astrophysics Data System (ADS)

    Wallman, Joel J.; Barnhill, Marie; Emerson, Joseph

    2016-04-01

    Leakage errors arise when the quantum state leaks out of some subspace of interest, for example, the two-level subspace of a multi-level system defining a computational ‘qubit’, the logical code space of a quantum error-correcting code, or a decoherence-free subspace. Leakage errors pose a distinct challenge to quantum control relative to the more well-studied decoherence errors and can be a limiting factor to achieving fault-tolerant quantum computation. Here we present a scalable and robust randomized benchmarking protocol for quickly estimating the leakage rate due to an arbitrary Markovian noise process on a larger system. We illustrate the reliability of the protocol through numerical simulations.

  12. Static Detection of Disassembly Errors

    SciTech Connect

    Krishnamoorthy, Nithya; Debray, Saumya; Fligg, Alan K

    2009-10-13

    Static disassembly is a crucial first step in reverse engineering executable files, and there is a considerable body of work in reverse-engineering of binaries, as well as areas such as semantics-based security analysis, that assumes that the input executable has been correctly disassembled. However, disassembly errors, e.g., arising from binary obfuscations, can render this assumption invalid. This work describes a machine-learning-based approach, using decision trees, for statically identifying possible errors in a static disassembly; such potential errors may then be examined more closely, e.g., using dynamic analyses. Experimental results using a variety of input executables indicate that our approach performs well, correctly identifying most disassembly errors with relatively few false positives.

  13. Prospective errors determine motor learning

    PubMed Central

    Takiyama, Ken; Hirashima, Masaya; Nozaki, Daichi

    2015-01-01

    Diverse features of motor learning have been reported by numerous studies, but no single theoretical framework concurrently accounts for these features. Here, we propose a model for motor learning to explain these features in a unified way by extending a motor primitive framework. The model assumes that the recruitment pattern of motor primitives is determined by the predicted movement error of an upcoming movement (prospective error). To validate this idea, we perform a behavioural experiment to examine the model’s novel prediction: after experiencing an environment in which the movement error is more easily predictable, subsequent motor learning should become faster. The experimental results support our prediction, suggesting that the prospective error might be encoded in the motor primitives. Furthermore, we demonstrate that this model has a strong explanatory power to reproduce a wide variety of motor-learning-related phenomena that have been separately explained by different computational models. PMID:25635628

  14. Aging transition by random errors

    NASA Astrophysics Data System (ADS)

    Sun, Zhongkui; Ma, Ning; Xu, Wei

    2017-02-01

    In this paper, the effects of random errors on oscillating behaviors have been studied theoretically and numerically in a prototypical coupled nonlinear oscillator. Two kinds of noise have been employed to represent the measurement errors accompanying the parameter specifying the distance from a Hopf bifurcation in the Stuart-Landau model. It has been demonstrated that when the random errors are uniform noise, increasing the noise intensity can effectively increase the robustness of the system. When the random errors are normally distributed noise, increasing the variance can also enhance the robustness of the system, provided the probability that aging transition occurs reaches a certain threshold; the opposite conclusion is obtained when the probability is below the threshold. These findings provide an alternative candidate for controlling the critical value of the aging transition in coupled oscillator systems, which in practice are composed of active and inactive oscillators.

  15. Quantum error correction for beginners.

    PubMed

    Devitt, Simon J; Munro, William J; Nemoto, Kae

    2013-07-01

    Quantum error correction (QEC) and fault-tolerant quantum computation represent one of the most vital theoretical aspects of quantum information processing. It was well known from the early developments of this exciting field that the fragility of coherent quantum systems would be a catastrophic obstacle to the development of large-scale quantum computers. The introduction of quantum error correction in 1995 showed that active techniques could be employed to mitigate this fatal problem. However, quantum error correction and fault-tolerant computation are now much larger fields, and many new codes, techniques, and methodologies have been developed to implement error correction for large-scale quantum algorithms. In response, we have attempted to summarize the basic aspects of quantum error correction and fault-tolerance, not as a detailed guide, but rather as a basic introduction. The development in this area has been so pronounced that many in the field of quantum information, specifically researchers who are new to quantum information or people focused on the many other important issues in quantum computation, have found it difficult to keep up with the general formalisms and methodologies employed in this area. Rather than introducing these concepts from a rigorous mathematical and computer science framework, we instead examine error correction and fault-tolerance largely through detailed examples, which are more relevant to experimentalists today and in the near future.
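
    For a flavour of the mechanism, the classical scaffold behind the three-qubit bit-flip code is the three-bit repetition code: majority voting turns a physical error rate p into a logical rate of roughly 3p², an improvement whenever p < 1/2. A Monte Carlo sketch (classical bits only, not a quantum simulation):

```python
import random

# Classical analogue of the three-qubit bit-flip code: a three-bit
# repetition code with majority-vote decoding (illustrative values).
random.seed(0)
p = 0.1            # physical bit-flip probability per channel use
trials = 20000

def flip(bit):
    """Send one bit through a binary symmetric channel with error p."""
    return bit ^ (random.random() < p)

logical_errors = 0
for _ in range(trials):
    received = [flip(0), flip(0), flip(0)]     # encode 0 as 000, add noise
    decoded = 1 if sum(received) >= 2 else 0   # majority-vote correction
    logical_errors += decoded != 0

# Theory: logical rate = 3p^2(1-p) + p^3 = 0.028 < p, so encoding helps.
logical_rate = logical_errors / trials
```

    Real quantum codes must additionally detect errors without measuring (and thereby destroying) the encoded state, which is where syndrome extraction and the full QEC machinery surveyed in this article come in.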

  16. Error image aware content restoration

    NASA Astrophysics Data System (ADS)

    Choi, Sungwoo; Lee, Moonsik; Jung, Byunghee

    2015-12-01

    As the resolution of TV significantly increased, content consumers have become increasingly sensitive to the subtlest defect in TV contents. This rising standard in quality demanded by consumers has posed a new challenge in today's context where the tape-based process has transitioned to the file-based process: the transition necessitated digitalizing old archives, a process which inevitably produces errors such as disordered pixel blocks, scattered white noise, or totally missing pixels. Unsurprisingly, detecting and fixing such errors require a substantial amount of time and human labor to meet the standard demanded by today's consumers. In this paper, we introduce a novel, automated error restoration algorithm which can be applied to different types of classic errors by utilizing adjacent images while preserving the undamaged parts of an error image as much as possible. We tested our method to error images detected from our quality check system in KBS(Korean Broadcasting System) video archive. We are also implementing the algorithm as a plugin of well-known NLE(Non-linear editing system), which is a familiar tool for quality control agent.

  17. Dominant modes via model error

    NASA Technical Reports Server (NTRS)

    Yousuff, A.; Breida, M.

    1992-01-01

    Obtaining a reduced model of a stable mechanical system with proportional damping is considered. Such systems can be conveniently represented in modal coordinates. Two popular schemes, the modal cost analysis and the balancing method, offer simple means of identifying dominant modes for retention in the reduced model. The dominance is measured via the modal costs in the case of modal cost analysis and via the singular values of the Gramian-product in the case of balancing. Though these measures do not exactly reflect the more appropriate model error, which is the H2 norm of the output-error between the full and the reduced models, they do lead to simple computations. Normally, the model error is computed after the reduced model is obtained, since it is believed that, in general, the model error cannot be easily computed a priori. The authors point out that the model error can also be calculated a priori, just as easily as the above measures. Hence, the model error itself can be used to determine the dominant modes. Moreover, the simplicity of the computations does not presume any special properties of the system, such as small damping, orthogonal symmetry, etc.

  18. Practical Aspects of the Equation-Error Method for Aircraft Parameter Estimation

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    2006-01-01

    Various practical aspects of the equation-error approach to aircraft parameter estimation were examined. The analysis was based on simulated flight data from an F-16 nonlinear simulation, with realistic noise sequences added to the computed aircraft responses. This approach exposes issues related to the parameter estimation techniques and results, because the true parameter values are known for simulation data. The issues studied include differentiating noisy time series, maximum likelihood parameter estimation, biases in equation-error parameter estimates, accurate computation of estimated parameter error bounds, comparisons of equation-error parameter estimates with output-error parameter estimates, analyzing data from multiple maneuvers, data collinearity, and frequency-domain methods.
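
    The core of the equation-error approach can be sketched on a toy first-order system rather than the F-16 model: numerically differentiate the noisy measured state, then regress the derivative on states and inputs by ordinary least squares (all system and noise values assumed):

```python
import numpy as np

# Toy first-order system xdot = a*x + b*u with known truth, so the
# estimates can be checked against it (values assumed for illustration).
a_true, b_true = -2.0, 3.0
dt, n = 0.01, 2000
t = np.arange(n) * dt
u = np.sin(0.5 * t) + 0.5 * np.sin(2.3 * t)   # persistently exciting input

x = np.zeros(n)
for k in range(n - 1):                        # Euler integration
    x[k + 1] = x[k] + dt * (a_true * x[k] + b_true * u[k])

# Measured state with noise; the derivative is reconstructed by central
# finite differences -- the step that amplifies noise and is a known
# practical issue for equation-error estimates.
rng = np.random.default_rng(2)
xm = x + rng.normal(0.0, 0.005, n)
xdot = np.gradient(xm, dt)

# Equation-error estimation: ordinary least squares of xdot on [x, u].
A = np.column_stack([xm, u])
(a_hat, b_hat), *_ = np.linalg.lstsq(A, xdot, rcond=None)
```

    Because the true parameters are known, the simulated-data setup exposes estimator properties (noise sensitivity of the differentiation, regressor collinearity) the same way the paper's F-16 study does.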

  19. Error-tradeoff and error-disturbance relations for incompatible quantum measurements.

    PubMed

    Branciard, Cyril

    2013-04-23

    Heisenberg's uncertainty principle is one of the main tenets of quantum theory. Nevertheless, and despite its fundamental importance for our understanding of quantum foundations, there has been some confusion in its interpretation: Although Heisenberg's first argument was that the measurement of one observable on a quantum state necessarily disturbs another incompatible observable, standard uncertainty relations typically bound the indeterminacy of the outcomes when either one or the other observable is measured. In this paper, we quantify precisely Heisenberg's intuition. Even if two incompatible observables cannot be measured together, one can still approximate their joint measurement, at the price of introducing some errors with respect to the ideal measurement of each of them. We present a tight relation characterizing the optimal tradeoff between the error on one observable vs. the error on the other. As a particular case, our approach allows us to characterize the disturbance of an observable induced by the approximate measurement of another one; we also derive a stronger error-disturbance relation for this scenario.

  20. ORTHO: Elimination of Tracking System Clock Errors

    NASA Technical Reports Server (NTRS)

    Wu, J. T.

    1994-01-01

    ORTHO is part of the Global Positioning System (GPS) being developed by the U.S. Air Force, a navigational system that will use 18 NAVSTAR satellites to broadcast navigation messages and achieve worldwide coverage. The normal positioning technique uses one receiver which receives signals from at least four GPS satellites. For higher-accuracy work it is often necessary to use a differential technique in which more than one receiver is used. A geodetic measurement has all receivers on the ground and allows the determination of the relative locations of the ground sites. The main application of the ORTHO program is the elimination of clock errors in a GPS-based tracking system. The measured distance (pseudo-range) from a GPS receiver contains errors due to differences between the receiver and satellite clocks. The conventional way of eliminating clock errors is to difference pseudo-ranges between different GPS satellites and receivers. The Householder transformation used in this program performs a function similar to conventional single or double differencing. This method avoids the problems of redundancy and correlation encountered in a differencing scheme, and it keeps all information contained in the measurements within the scope of a least-squares estimation. For a multiple-transmitter, multiple-receiver GPS tracking network, this method is in general more accurate than the differencing technique. The program assumes that the non-clock measurement partial derivatives for the particular application have been computed earlier by another program. With the partial derivatives and information identifying the transmitters and receivers as input, the program performs the Householder transformation on the partial derivatives. The transformed partials are output by the program and may be used as input to the filter program in the subsequent estimation process. Clock partial derivatives are generated internally and are not part of the input to the program.
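
    The Householder step described above can be sketched with a QR factorization: orthogonally transform the stacked partials so the clock columns are zeroed below a block of rows, then keep the clock-free rows (a generic sketch of the technique, not the ORTHO code):

```python
import numpy as np

# Toy tracking system: 8 pseudo-range measurements, 2 clock-error
# parameters to eliminate, 3 geometric parameters to keep (all assumed).
rng = np.random.default_rng(3)
m, n_clock, n_geo = 8, 2, 3
H_clock = rng.standard_normal((m, n_clock))   # clock-error partials
H_geo = rng.standard_normal((m, n_geo))       # geometric partials
y = rng.standard_normal(m)                    # prefit residuals

# Householder orthogonalization of the clock partials: Q^T zeroes
# H_clock below row n_clock, so those rows of the transformed system
# no longer depend on the clock parameters at all.
Q, _ = np.linalg.qr(H_clock, mode="complete")
H_geo_t = Q.T @ H_geo
y_t = Q.T @ y
H_free, y_free = H_geo_t[n_clock:], y_t[n_clock:]   # clock-free equations
```

    Because the transformation is orthogonal, the clock-free equations retain all measurement information relevant to the remaining parameters, which is the advantage over forming explicit single or double differences.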