Science.gov

Sample records for frequency derived error

  1. Spatial frequency domain error budget

    SciTech Connect

    Hauschildt, H; Krulewich, D

    1998-08-27

    The aim of this paper is to describe a methodology for designing and characterizing machines used to manufacture or inspect parts with spatial-frequency-based specifications. At Lawrence Livermore National Laboratory, one of our responsibilities is to design or select the appropriate machine tools to produce advanced optical and weapons systems. Recently, many of the component tolerances for these systems have been specified in terms of the spatial frequency content of residual errors on the surface. We typically use an error budget as a sensitivity analysis tool to ensure that the parts manufactured by a machine will meet the specified component tolerances. Error budgets provide the formalism whereby we account for all sources of uncertainty in a process, and sum them to arrive at a net prediction of how "precisely" a manufactured component can meet a target specification. Using the error budget, we are able to minimize risk during initial stages by ensuring that the machine will produce components that meet specifications before the machine is actually built or purchased. However, the current error budgeting procedure provides no formal mechanism for designing machines that can produce parts with spatial-frequency-based specifications. The output from the current error budgeting procedure is a single number estimating the net worst case or RMS error on the work piece. This procedure has limited ability to differentiate between low spatial frequency form errors versus high frequency surface finish errors. Therefore the current error budgeting procedure can lead us to reject a machine that is adequate or accept a machine that is inadequate. This paper will describe a new error budgeting methodology to aid in the design and characterization of machines used to manufacture or inspect parts with spatial-frequency-based specifications. The output from this new procedure is the continuous spatial frequency content of errors that result on a machined part. If the machine
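
    The following is a minimal numerical sketch of the kind of spatial-frequency-domain budget described above: each independent error source is represented by a power spectral density (PSD) over spatial frequency, the PSDs are summed to a net PSD, and band-limited RMS values are read off for comparison against form, mid-frequency, and finish tolerances. The source PSDs, units, and band edges are illustrative assumptions, not values from the paper.

    # Minimal sketch of a spatial-frequency-domain error budget: independent error
    # sources are described by one-dimensional PSDs over spatial frequency
    # (cycles per unit length), summed to a net PSD, and integrated to RMS.
    # The example PSDs below are hypothetical placeholders.
    import numpy as np

    f = np.logspace(-3, 1, 500)          # spatial frequency, cycles/mm (assumed range)

    # Hypothetical source PSDs in nm^2 / (cycles/mm)
    psd_thermal   = 1e4 / (1.0 + (f / 1e-2) ** 2)                   # low-frequency drift / form error
    psd_vibration = 50.0 * np.exp(-0.5 * ((f - 0.5) / 0.05) ** 2)   # mid-frequency tool marks
    psd_noise     = np.full_like(f, 0.5)                            # broadband surface-finish error

    psd_total = psd_thermal + psd_vibration + psd_noise  # independent sources add in power

    rms_total = np.sqrt(np.trapz(psd_total, f))           # net RMS over the whole band

    def band_rms(psd, f, lo, hi):
        """Band-limited RMS, for checking against band-specific tolerances."""
        m = (f >= lo) & (f < hi)
        return np.sqrt(np.trapz(psd[m], f[m]))

    print(f"total RMS: {rms_total:.1f} nm")
    print(f"form   (<0.1 c/mm):  {band_rms(psd_total, f, 0.0, 0.1):.1f} nm")
    print(f"mid    (0.1-1 c/mm): {band_rms(psd_total, f, 0.1, 1.0):.1f} nm")
    print(f"finish (>1 c/mm):    {band_rms(psd_total, f, 1.0, np.inf):.1f} nm")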

  2. Frequency-Tracking-Error Detector

    NASA Technical Reports Server (NTRS)

    Randall, Richard L.

    1990-01-01

    Frequency-tracking-error detector compares average period of output signal from band-pass tracking filter with average period of signal of frequency 100 f(sub 0) that controls center frequency f(sub 0) of tracking filter. Measures difference between f(sub 0) and frequency of one of periodic components in output of bearing sensor. Bearing sensor is accelerometer, strain gauge, or deflectometer mounted on bearing housing. Detector part of system of electronic equipment used to measure vibrations in bearings in rotating machinery.

  3. Derivational Morphophonology: Exploring Errors in Third Graders' Productions

    ERIC Educational Resources Information Center

    Jarmulowicz, Linda; Hay, Sarah E.

    2009-01-01

    Purpose: This study describes a post hoc analysis of segmental, stress, and syllabification errors in third graders' productions of derived English words with the stress-changing suffixes "-ity" and "-ic." We investigated whether (a) derived word frequency influences error patterns, (b) stress and syllabification errors always co-occur, and (c)…

  4. Evaluation and control of spatial frequency errors in reflective telescopes

    NASA Astrophysics Data System (ADS)

    Zhang, Xuejun; Zeng, Xuefeng; Hu, Haixiang; Zheng, Ligong

    2015-08-01

    In this paper, the influence of manufacturing residual errors on image quality was studied. By analyzing the statistical distribution characteristics of the residual errors and their effects on the PSF and MTF, we divided those errors into low-, middle- and high-frequency domains using the unit "cycles per aperture". Two types of mid-frequency errors, algorithm-intrinsic and tool-path-induced, were analyzed. Control methods used in current deterministic polishing processes, such as MRF and IBF, were presented.
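
    As a hedged illustration of the band decomposition described above, the sketch below splits a synthetic residual surface-error map into low-, mid-, and high-spatial-frequency components expressed in cycles per aperture by masking its 2-D Fourier transform. The band edges of 5 and 30 cycles per aperture are placeholders rather than the paper's values.

    # Split a surface-error map into spatial-frequency bands (cycles per aperture)
    # by masking annuli of its 2-D FFT. The surface itself is a random stand-in.
    import numpy as np

    N = 256                                   # samples across the (square) aperture
    rng = np.random.default_rng(0)
    surface = rng.standard_normal((N, N))     # stand-in residual error map

    # Radial spatial frequency in cycles per aperture (aperture width = 1 unit)
    fx = np.fft.fftfreq(N, d=1.0 / N)
    fr = np.hypot(*np.meshgrid(fx, fx))

    S = np.fft.fft2(surface)

    def band(S, fr, lo, hi):
        """Inverse-transform only the annulus lo <= fr < hi."""
        mask = (fr >= lo) & (fr < hi)
        return np.real(np.fft.ifft2(S * mask))

    low  = band(S, fr, 0.0, 5.0)      # form error
    mid  = band(S, fr, 5.0, 30.0)     # mid-spatial-frequency (e.g. tool-path) error
    high = band(S, fr, 30.0, np.inf)  # surface finish

    for name, z in [("low", low), ("mid", mid), ("high", high)]:
        print(f"{name:4s} band RMS: {z.std():.3f}")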

  5. Compensation Low-Frequency Errors in TH-1 Satellite

    NASA Astrophysics Data System (ADS)

    Wang, Jianrong; Wang, Renxiang; Hu, Xin

    2016-06-01

    Topographic mapping products at 1:50,000 scale can be produced by satellite photogrammetry without ground control points (GCPs), which requires highly accurate exterior orientation elements. Usually, the attitude components of the exterior orientation elements are obtained from the attitude determination system on the satellite. Theoretical analysis and practice show that the attitude determination system exhibits not only high-frequency errors but also low-frequency errors related to the latitude of the satellite orbit and to time. The low-frequency errors degrade the location accuracy without GCPs, especially the horizontal accuracy. For the SPOT5 satellite, a latitudinal model was proposed to correct attitudes using data from approximately 20 calibration sites, and the location accuracy was improved. Low-frequency errors are also found in the Tian Hui 1 (TH-1) satellite. A method of compensating low-frequency errors is therefore proposed for the ground image processing of TH-1, which can detect and compensate the low-frequency errors automatically without using GCPs. This paper deals with the low-frequency errors in TH-1 as follows. First, the low-frequency errors of the attitude determination system are analyzed. Second, compensation models are proposed within the bundle adjustment. Finally, the method is verified using TH-1 data. The test results show that the low-frequency errors of the attitude determination system can be compensated during bundle adjustment, which improves the location accuracy without GCPs and plays an important role in the consistency of global location accuracy.

  6. Antenna pointing systematic error model derivations

    NASA Technical Reports Server (NTRS)

    Guiar, C. N.; Lansing, F. L.; Riggs, R.

    1987-01-01

    The pointing model used to represent and correct systematic errors for the Deep Space Network (DSN) antennas is presented. Analytical expressions are given in both azimuth-elevation (az-el) and hour angle-declination (ha-dec) mounts for RF axis collimation error, encoder offset, nonorthogonality of axes, axis plane tilt, and structural flexure due to gravity loading. While the residual pointing errors (rms) after correction appear to be within the ten percent of the half-power beamwidth criterion commonly set for good pointing accuracy, the DSN has embarked on an extensive pointing improvement and modeling program aiming toward an order of magnitude higher pointing precision.

  7. Frequency of Consonant Articulation Errors in Dysarthric Speech

    ERIC Educational Resources Information Center

    Kim, Heejin; Martin, Katie; Hasegawa-Johnson, Mark; Perlman, Adrienne

    2010-01-01

    This paper analyses consonant articulation errors in dysarthric speech produced by seven American-English native speakers with cerebral palsy. Twenty-three consonant phonemes were transcribed with diacritics as necessary in order to represent non-phoneme misarticulations. Error frequencies were examined with respect to six variables: articulatory…

  8. Error control coding for multi-frequency modulation

    NASA Astrophysics Data System (ADS)

    Ives, Robert W.

    1990-06-01

    Multi-frequency modulation (MFM) has been developed at NPS using both quadrature-phase-shift-keyed (QPSK) and quadrature-amplitude-modulated (QAM) signals with good bit error performance at reasonable signal-to-noise ratios. Improved performance can be achieved by the introduction of error control coding. This report documents a FORTRAN simulation of the implementation of error control coding into an MFM communication link with additive white Gaussian noise. Four Reed-Solomon codes were incorporated, two for 16-QAM and two for 32-QAM modulation schemes. The error control codes used were modified from the conventional Reed-Solomon codes in that one information symbol was sacrificed to parity in order to use a simplified decoding algorithm which requires no iteration and enhances error detection capability. Bit error rates as a function of SNR and E(sub b)/N(sub 0) were analyzed, and bit error performance was weighed against reduction in information rate to determine the value of the codes.
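
    The trade weighed in the report, error-correction capability versus information rate, can be sketched with the conventional bounded-distance estimate of Reed-Solomon block error probability; the modified codes in the report (one information symbol sacrificed to parity) are not reproduced here, and the (n, k) parameters and symbol error probabilities below are purely illustrative.

    # Block (codeword) error probability of an (n, k) Reed-Solomon code under
    # bounded-distance decoding: decoding fails when more than t = (n - k) // 2
    # symbols are in error, assuming independent symbol errors with probability p.
    from math import comb

    def rs_block_error_prob(n: int, k: int, p: float) -> float:
        """Probability that an (n, k) RS codeword is not decoded correctly."""
        t = (n - k) // 2
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(t + 1, n + 1))

    # Illustrative example: RS(15, 9) over GF(16) (4-bit symbols), rate k/n = 0.6
    n, k = 15, 9
    for p in (1e-1, 1e-2, 1e-3):
        print(f"symbol error prob {p:.0e}: block error prob {rs_block_error_prob(n, k, p):.2e}")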

  9. High Frequency of Imprinted Methylation Errors in Human Preimplantation Embryos

    PubMed Central

    White, Carlee R.; Denomme, Michelle M.; Tekpetey, Francis R.; Feyles, Valter; Power, Stephen G. A.; Mann, Mellissa R. W.

    2015-01-01

    Assisted reproductive technologies (ARTs) represent the best chance for infertile couples to conceive, although increased risks for morbidities exist, including imprinting disorders. This increased risk could arise from ARTs disrupting genomic imprints during gametogenesis or preimplantation. The few studies examining ART effects on genomic imprinting primarily assessed poor quality human embryos. Here, we examined day 3 and blastocyst stage, good to high quality, donated human embryos for imprinted SNRPN, KCNQ1OT1 and H19 methylation. Seventy-six percent of day 3 embryos and 50% of blastocysts exhibited perturbed imprinted methylation, demonstrating that extended culture did not pose greater risk for imprinting errors than short culture. Comparison of embryos with normal and abnormal methylation did not reveal any confounding factors. Notably, two embryos from male factor infertility patients using donor sperm harboured aberrant methylation, suggesting errors in these embryos cannot be explained by infertility alone. Overall, these results indicate that ART human preimplantation embryos possess a high frequency of imprinted methylation errors. PMID:26626153

  10. Error enhancement in geomagnetic models derived from scalar data

    NASA Technical Reports Server (NTRS)

    Stern, D. P.; Bredekamp, J. H.

    1974-01-01

    Models of the main geomagnetic field are generally represented by a scalar potential gamma expanded in a finite number of spherical harmonics. Very accurate observations of F were used, but indications exist that the accuracy of models derived from them is considerably lower. One problem is that F does not always characterize gamma uniquely. It is not clear whether such ambiguity can be encountered in deriving gamma from F in geomagnetic surveys, but there exists a connection, due to the fact that the counterexamples of Backus are related to the dipole field, while the geomagnetic field is dominated by its dipole component. If the models are recovered with a finite error (i.e. they cannot completely fit the data and consequently have a small spurious component), this connection allows the error in certain sequences of harmonic terms in gamma to be enhanced without unduly large effects on the fit of F to the model.

  11. Frequency analysis of photoplethysmogram and its derivatives.

    PubMed

    Elgendi, Mohamed; Fletcher, Richard R; Norton, Ian; Brearley, Matt; Abbott, Derek; Lovell, Nigel H; Schuurmans, Dale

    2015-12-01

    There are a limited number of studies on heat stress dynamics during exercise using the photoplethysmogram (PPG). We investigate the PPG signal and its derivatives for heat stress assessment using Welch (non-parametric) and autoregressive (parametric) spectral estimation methods. The preliminary results of this study indicate that taking the first and second derivatives of PPG waveforms is useful for determining heat stress level using 20-s recordings. Interestingly, Welch's and Yule-Walker's methods are in agreement that the second derivative is an improved detector for heat stress. In fact, both spectral estimation methods showed a clear separation in the frequency domain between measurements before and after simulated heat-stress induction when the second derivative is applied. Moreover, the results demonstrate superior performance of Welch's method over the Yule-Walker method in separating the measurements before and after the three simulated heat-stress inductions. PMID:26498064
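
    A small sketch of the two spectral estimators named above, applied to the second derivative of a synthetic PPG-like signal: Welch's non-parametric estimate via SciPy and a parametric AR spectrum whose coefficients come from the Yule-Walker equations (via statsmodels). The sampling rate, AR order, and the synthetic waveform are assumptions for illustration only.

    import numpy as np
    from scipy.signal import welch
    from statsmodels.regression.linear_model import yule_walker

    fs = 100.0                                   # Hz (assumed)
    t = np.arange(0, 20, 1 / fs)                 # 20-s recording
    rng = np.random.default_rng(1)
    ppg = (np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.sin(2 * np.pi * 2.4 * t)
           + 0.1 * rng.standard_normal(t.size))  # crude PPG stand-in (~72 bpm)

    d2 = np.gradient(np.gradient(ppg, t), t)     # second derivative of the PPG

    # Non-parametric estimate: Welch periodogram
    f_w, p_w = welch(d2, fs=fs, nperseg=512)

    # Parametric estimate: Yule-Walker AR model, order chosen ad hoc
    order = 12
    rho, sigma = yule_walker(d2, order=order, method="mle")
    f_ar = np.linspace(0, fs / 2, 512)
    z = np.exp(-2j * np.pi * f_ar / fs)
    denom = 1 - sum(rho[k] * z ** (k + 1) for k in range(order))
    p_ar = sigma**2 / np.abs(denom) ** 2         # AR power spectrum (unnormalised)

    print("Welch peak at %.2f Hz" % f_w[np.argmax(p_w)])
    print("Yule-Walker AR peak at %.2f Hz" % f_ar[np.argmax(p_ar)])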

  12. Effect of photogrammetric reading error on slope-frequency distributions. [obtained from Apollo 17 mission

    NASA Technical Reports Server (NTRS)

    Moore, H. J.; Wu, S. C.

    1973-01-01

    The effect of reading error on two hypothetical slope frequency distributions and two slope frequency distributions from actual lunar data is examined in order to ensure that these errors do not cause excessive overestimates of algebraic standard deviations for the slope frequency distributions. The errors introduced are insignificant when the reading error is small and the slope length is large. A method for correcting the errors in slope frequency distributions is presented and applied to 11 distributions obtained from Apollo 15, 16, and 17 panoramic camera photographs and Apollo 16 metric camera photographs.

  13. Compensation of body shake errors in terahertz beam scanning single frequency holography for standoff personnel screening

    NASA Astrophysics Data System (ADS)

    Liu, Wei; Li, Chao; Sun, Zhao-Yang; Zhao, Yu; Wu, Shi-You; Fang, Guang-You

    2016-08-01

    In the terahertz (THz) band, the inherent shake of the human body may strongly impair the image quality of a beam scanning single frequency holography system for personnel screening. To realize accurate shake compensation in image processing, it is necessary to develop a high-precision measurement system. However, in many cases, different parts of a human body shake to different extents, which greatly increases the difficulty of obtaining a reasonable measurement of body shake errors for image reconstruction. In this paper, a body shake error compensation algorithm based on the raw data is proposed. To analyze the effect of body shake on the raw data, a model of the echoed signal is rebuilt that takes into account both the beam scanning mode and the body shake. From the rebuilt signal model, we derive a body shake error estimation method to compensate for the phase error. Simulations of the reconstruction of point targets with shake errors and proof-of-principle experiments on the human body in the 0.2-THz band are both performed to confirm the effectiveness of the proposed body shake compensation algorithm. Project supported by the Knowledge Innovation Program of the Chinese Academy of Sciences (Grant No. YYYJ-1123).

  14. Jason-2 systematic error analysis in the GPS derived orbits

    NASA Astrophysics Data System (ADS)

    Melachroinos, S.; Lemoine, F. G.; Zelensky, N. P.; Rowlands, D. D.; Luthcke, S. B.; Chinn, D. S.

    2011-12-01

    Several results related to global or regional sea level changes still too often rely on the assumption that orbit errors coming from station coordinates adoption can be neglected in the total error budget (Cerri et al. 2010). In particular, instantaneous crust-fixed coordinates are obtained by adding to the linear ITRF model the geophysical high-frequency variations. In principle, geocenter motion should also be included in this computation, in order to reference these coordinates to the center of mass of the whole Earth. This correction is currently not applied when computing GDR orbits. Cerri et al. (2010) performed an analysis of systematic errors common to all coordinates along the North/South direction, as this type of bias, also known as Z-shift, has a clear impact on MSL estimates due to the unequal distribution of continental surface in the northern and southern hemispheres. The goal of this paper is to specifically study the main source of errors which comes from the current imprecision in the Z-axis realization of the frame. We focus here on the time variability of this Z-shift, which we can decompose in a drift and a periodic component due to the presumably omitted geocenter motion. A series of Jason-2 GPS-only orbits have been computed at NASA GSFC, using both IGS05 and IGS08. These orbits have been shown to agree radially at less than 1 cm RMS vs our SLR/DORIS std0905 and std1007 reduced-dynamic orbits and in comparison with orbits produced by other analysis centers (Melachroinos et al. 2011). Our GPS-only JASON-2 orbit accuracy is assessed using a number of tests including analysis of independent SLR and altimeter crossover residuals, orbit overlap differences, and direct comparison to orbits generated at GSFC using SLR and DORIS tracking, and to orbits generated externally at other centers. Tests based on SLR-crossover residuals provide the best performance indicator for independent validation of the NASA/GSFC GPS-only reduced dynamic orbits. Reduced

  15. The testing of the aspheric mirror high-frequency band error

    NASA Astrophysics Data System (ADS)

    Wan, JinLong; Li, Bo; Li, XinNan

    2015-08-01

    In recent years, high-frequency errors of mirror surfaces have gradually received serious attention, and the manufacturing specifications of advanced telescopes now include explicit indicators for high-frequency errors. However, the off-axis aspheric sub-mirrors used in such telescopes are large; measuring the full-aperture surface shape with an interferometer would require a complex optical compensation device. Therefore, we propose a subaperture stitching method for testing the high-frequency errors of aspheric mirrors. This method requires no compensation components and only measures the surface shape of each subaperture. By analyzing the Zernike polynomial coefficients corresponding to the frequency errors, removing the first 15 Zernike polynomial terms from each subaperture map, and then stitching the surface shapes together, the high-frequency errors of the tested mirror can be obtained over the full aperture. A 330 mm off-axis aspheric hexagonal mirror was measured with this method; a complete map of the high-frequency surface errors was obtained, demonstrating the feasibility of the approach.
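
    The low-order removal step can be sketched as follows. Over the unit disk, the first 15 Zernike polynomials span exactly the bivariate polynomials of total degree at most 4, so subtracting a least-squares fit in a plain monomial basis of that degree removes the same low-order content as subtracting the first 15 Zernike terms. The surface below is synthetic, and the stitching of subapertures is not shown.

    import numpy as np

    def remove_low_order(x, y, z, max_degree=4):
        """Subtract the least-squares polynomial of total degree <= max_degree
        (15 terms for degree 4), leaving the mid/high-frequency residual."""
        cols = [x**i * y**j for i in range(max_degree + 1)
                            for j in range(max_degree + 1 - i)]
        A = np.column_stack(cols)
        coef, *_ = np.linalg.lstsq(A, z, rcond=None)
        return z - A @ coef

    # Synthetic subaperture map: smooth low-order shape plus high-frequency ripple
    rng = np.random.default_rng(2)
    n = 4000
    x, y = rng.uniform(-1, 1, n), rng.uniform(-1, 1, n)
    inside = x**2 + y**2 <= 1.0
    x, y = x[inside], y[inside]
    z = 0.5 * (2 * (x**2 + y**2) - 1) + 0.02 * np.sin(40 * x)   # defocus + ripple

    residual = remove_low_order(x, y, z)
    print(f"RMS before removal: {z.std():.4f}, after: {residual.std():.4f}")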

  16. EEG error potentials detection and classification using time-frequency features for robot reinforcement learning.

    PubMed

    Boubchir, Larbi; Touati, Youcef; Daachi, Boubaker; Chérif, Arab Ali

    2015-08-01

    In thought-based steering of robots, error potentials (ErrP) can appear when the action resulting from the brain-machine interface (BMI) classifier/controller does not correspond to the user's thought. Using the Steady State Visual Evoked Potentials (SSVEP) techniques, ErrP, which appear when a classification error occurs, are not easily recognizable by only examining the temporal or frequency characteristics of EEG signals. A supplementary classification process is therefore needed to identify them in order to stop the course of the action and back up to a recovery state. This paper presents a set of time-frequency (t-f) features for the detection and classification of EEG ErrP in extra-brain activities due to misclassification observed by a user exploiting non-invasive BMI and robot control in the task space. The proposed features are able to characterize and detect ErrP activities in the t-f domain. These features are derived from the information embedded in the t-f representation of EEG signals, and include the Instantaneous Frequency (IF), t-f information complexity, SVD information, energy concentration and sub-bands' energies. The experiment results on real EEG data show that the use of the proposed t-f features for detecting and classifying EEG ErrP achieved an overall classification accuracy up to 97% for 50 EEG segments using 2-class SVM classifier. PMID:26736619
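
    A schematic of the final classification stage only, under stated assumptions: an SVM separating ErrP from non-ErrP segments given one feature vector per EEG segment. The feature values are random placeholders standing in for the proposed t-f features (IF, t-f complexity, SVD information, energy concentration, sub-band energies); no real EEG data are used.

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(3)
    n_segments, n_features = 50, 8           # e.g. 50 EEG segments, 8 t-f features
    X_errp  = rng.normal(loc=1.0, size=(n_segments // 2, n_features))   # ErrP class
    X_noerr = rng.normal(loc=0.0, size=(n_segments // 2, n_features))   # non-ErrP class
    X = np.vstack([X_errp, X_noerr])
    y = np.r_[np.ones(n_segments // 2), np.zeros(n_segments // 2)]

    # 2-class SVM with feature standardisation, scored by 5-fold cross-validation
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"cross-validated accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")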

  17. Nature and frequency of medication errors in a geriatric ward: an Indonesian experience

    PubMed Central

    Ernawati, Desak Ketut; Lee, Ya Ping; Hughes, Jeffery David

    2014-01-01

    Purpose To determine the nature and frequency of medication errors during medication delivery processes in a public teaching hospital geriatric ward in Bali, Indonesia. Methods A 20-week prospective study on medication errors occurring during the medication delivery process was conducted in a geriatric ward in a public teaching hospital in Bali, Indonesia. Participants selected were inpatients aged more than 60 years. Patients were excluded if they had a malignancy, were undergoing surgery, or receiving chemotherapy treatment. The occurrence of medication errors in prescribing, transcribing, dispensing, and administration were detected by the investigator providing in-hospital clinical pharmacy services. Results Seven hundred and seventy drug orders and 7,662 drug doses were reviewed as part of the study. There were 1,563 medication errors detected among the 7,662 drug doses reviewed, representing an error rate of 20.4%. Administration errors were the most frequent medication errors identified (59%), followed by transcription errors (15%), dispensing errors (14%), and prescribing errors (7%). Errors in documentation were the most common form of administration errors. Of these errors, 2.4% were classified as potentially serious and 10.3% as potentially significant. Conclusion Medication errors occurred in every stage of the medication delivery process, with administration errors being the most frequent. The majority of errors identified in the administration stage were related to documentation. Provision of in-hospital clinical pharmacy services could potentially play a significant role in detecting and preventing medication errors. PMID:24940067

  18. Endodontic Procedural Errors: Frequency, Type of Error, and the Most Frequently Treated Tooth

    PubMed Central

    Yousuf, Waqas; Khan, Moiz; Mehdi, Hasan

    2015-01-01

    Introduction. The aim of this study is to determine the most common endodontically treated tooth and the most common error produced during treatment and to note the association of particular errors with particular teeth. Material and Methods. Periapical radiographs were taken of all the included teeth and were stored and assessed using DIGORA Optime. Teeth in each group were evaluated for presence or absence of procedural errors (i.e., overfill, underfill, ledge formation, perforations, apical transportation, and/or instrument separation) and the most frequent tooth to undergo endodontic treatment was also noted. Results. A total of 1748 root canal treated teeth were assessed, out of which 574 (32.8%) contained a procedural error. Out of these 397 (22.7%) were overfilled, 155 (8.9%) were underfilled, 16 (0.9%) had instrument separation, and 7 (0.4%) had apical transportation. The most frequently treated tooth was right permanent mandibular first molar (11.3%). The least commonly treated teeth were the permanent mandibular third molars (0.1%). Conclusion. Practitioners should show greater care to maintain accuracy of the working length throughout the procedure, as errors in length accounted for the vast majority of errors and special care should be taken when working on molars. PMID:26347779

  19. Endodontic Procedural Errors: Frequency, Type of Error, and the Most Frequently Treated Tooth.

    PubMed

    Yousuf, Waqas; Khan, Moiz; Mehdi, Hasan

    2015-01-01

    Introduction. The aim of this study is to determine the most common endodontically treated tooth and the most common error produced during treatment and to note the association of particular errors with particular teeth. Material and Methods. Periapical radiographs were taken of all the included teeth and were stored and assessed using DIGORA Optime. Teeth in each group were evaluated for presence or absence of procedural errors (i.e., overfill, underfill, ledge formation, perforations, apical transportation, and/or instrument separation) and the most frequent tooth to undergo endodontic treatment was also noted. Results. A total of 1748 root canal treated teeth were assessed, out of which 574 (32.8%) contained a procedural error. Out of these 397 (22.7%) were overfilled, 155 (8.9%) were underfilled, 16 (0.9%) had instrument separation, and 7 (0.4%) had apical transportation. The most frequently treated tooth was right permanent mandibular first molar (11.3%). The least commonly treated teeth were the permanent mandibular third molars (0.1%). Conclusion. Practitioners should show greater care to maintain accuracy of the working length throughout the procedure, as errors in length accounted for the vast majority of errors and special care should be taken when working on molars. PMID:26347779

  20. Bounding higher-order ionosphere errors for the dual-frequency GPS user

    NASA Astrophysics Data System (ADS)

    Datta-Barua, S.; Walter, T.; Blanch, J.; Enge, P.

    2008-10-01

    Civil signals at L2 and L5 frequencies herald a new phase of Global Positioning System (GPS) performance. Dual-frequency users typically assume a first-order approximation of the ionosphere index of refraction, combining the GPS observables to eliminate most of the ranging delay, on the order of meters, introduced into the pseudoranges. This paper estimates the higher-order group and phase errors that occur from assuming the ordinary first-order dual-frequency ionosphere model using data from the Federal Aviation Administration's Wide Area Augmentation System (WAAS) network on a solar maximum quiet day and an extremely stormy day postsolar maximum. We find that during active periods, when ionospheric storms may introduce slant range delays at L1 as high as 100 m, the higher-order group errors in the L1-L2 or L1-L5 dual-frequency combination can be tens of centimeters. The group and phase errors are no longer equal and opposite, so these errors accumulate in carrier smoothing of the dual-frequency code observable. We show the errors in the carrier-smoothed code are due to higher-order group errors and, to a lesser extent, to higher-order phase rate errors. For many applications, this residual error is sufficiently small as to be neglected. However, such errors can impact geodetic applications as well as the error budgets of GPS Augmentation Systems providing Category III precision approach.
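
    For orientation, the first-order model that the paper goes beyond can be written down in a few lines: the slant group delay at frequency f is approximately 40.3 STEC / f^2 metres (STEC in electrons per square metre), and the standard dual-frequency ionosphere-free combination cancels exactly this term, leaving only the higher-order residuals discussed above. The numbers below are illustrative.

    # First-order ionospheric delay and the ionosphere-free combination that
    # removes it exactly. The residual printed at the end is numerically ~0,
    # i.e. only higher-order terms (not modelled here) would remain in practice.
    F_L1, F_L2 = 1575.42e6, 1227.60e6          # GPS L1/L2 carrier frequencies, Hz

    def first_order_delay(stec: float, f: float) -> float:
        """First-order ionospheric group delay in metres (STEC in el/m^2)."""
        return 40.3 * stec / f**2

    def iono_free(p1: float, p2: float, f1: float = F_L1, f2: float = F_L2) -> float:
        """Ionosphere-free pseudorange combination of P1 and P2 (metres)."""
        return (f1**2 * p1 - f2**2 * p2) / (f1**2 - f2**2)

    # Storm-time example: ~100 m of L1 slant delay implies a very large STEC
    rho = 22_000_000.0                          # geometric range, m (illustrative)
    stec = 100.0 * F_L1**2 / 40.3               # STEC giving 100 m delay at L1
    p1 = rho + first_order_delay(stec, F_L1)
    p2 = rho + first_order_delay(stec, F_L2)
    print(f"L1 delay: {p1 - rho:8.3f} m, L2 delay: {p2 - rho:8.3f} m")
    print(f"iono-free combination error vs true range: {iono_free(p1, p2) - rho:.6f} m")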

  1. To Err is Normable: The Computation of Frequency-Domain Error Bounds from Time-Domain Data

    NASA Technical Reports Server (NTRS)

    Hartley, Tom T.; Veillette, Robert J.; DeAbreuGarcia, J. Alexis; Chicatelli, Amy; Hartmann, Richard

    1998-01-01

    This paper exploits the relationships among the time-domain and frequency-domain system norms to derive information useful for modeling and control design, given only the system step response data. A discussion of system and signal norms is included. The proposed procedures involve only simple numerical operations, such as the discrete approximation of derivatives and integrals, and the calculation of matrix singular values. The resulting frequency-domain and Hankel-operator norm approximations may be used to evaluate the accuracy of a given model, and to determine model corrections to decrease the modeling errors.
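
    A rough sketch, under stated assumptions, of the kind of computation the paper describes: starting only from step-response samples, approximate the impulse response by discrete differentiation, assemble a Hankel matrix of sampled Markov parameters, and take its singular values as estimates related to the Hankel norm and the approximate system order. The second-order test system exists only to manufacture the step data.

    import numpy as np
    from scipy.signal import step, TransferFunction

    # Assumed "unknown" system used only to generate step-response data
    sys = TransferFunction([4.0], [1.0, 0.8, 4.0])   # lightly damped 2nd-order plant
    t, y_step = step(sys, N=400)
    dt = t[1] - t[0]

    # Impulse response ~ discrete derivative of the step response
    h = np.gradient(y_step, dt)

    # Hankel matrix of sampled Markov parameters h[1]*dt, h[2]*dt, ...
    m = 150
    H = np.array([[h[i + j + 1] * dt for j in range(m)] for i in range(m)])

    sv = np.linalg.svd(H, compute_uv=False)
    print("leading Hankel singular value estimates:", np.round(sv[:5], 4))
    print("estimated Hankel norm:", round(sv[0], 4))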

  2. "Coded and Uncoded Error Feedback: Effects on Error Frequencies in Adult Colombian EFL Learners' Writing"

    ERIC Educational Resources Information Center

    Sampson, Andrew

    2012-01-01

    This paper reports on a small-scale study into the effects of uncoded correction (writing the correct forms above each error) and coded annotations (writing symbols that encourage learners to self-correct) on Colombian university-level EFL learners' written work. The study finds that while both coded annotations and uncoded correction appear to…

  3. Low Frequency Error Analysis and Calibration for High-Resolution Optical Satellite's Uncontrolled Geometric Positioning

    NASA Astrophysics Data System (ADS)

    Wang, Mi; Fang, Chengcheng; Yang, Bo; Cheng, Yufeng

    2016-06-01

    The low-frequency error is a key factor affecting the uncontrolled geometric processing accuracy of high-resolution optical imagery. To guarantee the geometric quality of the imagery, this paper presents an on-orbit calibration method for the low-frequency error based on a geometric calibration field. First, we introduce the overall flow of on-orbit low-frequency error analysis and calibration, which includes detection of optical axis angle variation of the star sensors, relative calibration among star sensors, multi-star-sensor information fusion, and low-frequency error model construction and verification. Second, we use the optical axis angle change detection method to analyze how the low-frequency error varies. Third, we use relative calibration and information fusion among the star sensors to unify the datum and obtain high-precision attitude output. Finally, we construct the low-frequency error model and optimally estimate its parameters based on the DEM/DOM of the geometric calibration field. To evaluate the performance of the proposed calibration method, real data from a certain type of satellite are used. Test results demonstrate that the calibration model in this paper describes the low-frequency error variation well. The uncontrolled geometric positioning accuracy of the high-resolution optical imagery in the WGS-84 coordinate system is clearly improved after the step-wise calibration.

  4. Cocaine Dependence Treatment Data: Methods for Measurement Error Problems With Predictors Derived From Stationary Stochastic Processes

    PubMed Central

    Guan, Yongtao; Li, Yehua; Sinha, Rajita

    2011-01-01

    In a cocaine dependence treatment study, we use linear and nonlinear regression models to model posttreatment cocaine craving scores and first cocaine relapse time. A subset of the covariates are summary statistics derived from baseline daily cocaine use trajectories, such as baseline cocaine use frequency and average daily use amount. These summary statistics are subject to estimation error and can therefore cause biased estimators for the regression coefficients. Unlike classical measurement error problems, the error we encounter here is heteroscedastic with an unknown distribution, and there are no replicates for the error-prone variables or instrumental variables. We propose two robust methods to correct for the bias: a computationally efficient method-of-moments-based method for linear regression models and a subsampling extrapolation method that is generally applicable to both linear and nonlinear regression models. Simulations and an application to the cocaine dependence treatment data are used to illustrate the efficacy of the proposed methods. Asymptotic theory and variance estimation for the proposed subsampling extrapolation method and some additional simulation results are described in the online supplementary material. PMID:21984854

  5. Sampling Errors in Satellite-derived Infrared Sea Surface Temperatures

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Minnett, P. J.

    2014-12-01

    Sea Surface Temperature (SST) measured from satellites has been playing a crucial role in understanding geophysical phenomena. Generating SST Climate Data Records (CDRs) is considered to be the application that imposes the most stringent requirements on data accuracy. For infrared SSTs, sampling uncertainties caused by cloud presence and persistence generate errors. In addition, for sensors with narrow swaths, the swath gap will act as another sampling error source. This study is concerned with quantifying and understanding such sampling errors, which are important for SST CDR generation and for a wide range of satellite SST users. In order to quantify these errors, a reference Level 4 SST field (Multi-scale Ultra-high Resolution SST) is sampled by using realistic swath and cloud masks of Moderate Resolution Imaging Spectroradiometer (MODIS) and Advanced Along Track Scanning Radiometer (AATSR). Global and regional SST uncertainties are studied by assessing the sampling error at different temporal and spatial resolutions (7 spatial resolutions from 4 kilometers to 5.0° at the equator and 5 temporal resolutions from daily to monthly). Global annual and seasonal mean sampling errors are large in the high latitude regions, especially the Arctic, and have geographical distributions that are most likely related to stratus clouds occurrence and persistence. The region between 30°N and 30°S has smaller errors compared to higher latitudes, except for the Tropical Instability Wave area, where persistent negative errors are found. Important differences in sampling errors are also found between the broad and narrow swath scan patterns and between day and night fields. This is the first time that realistic magnitudes of the sampling errors are quantified. Future improvement in the accuracy of SST products will benefit from this quantification.

  6. Frequency of medication errors in an emergency department of a large teaching hospital in southern Iran.

    PubMed

    Vazin, Afsaneh; Zamani, Zahra; Hatam, Nahid

    2014-01-01

    This study was conducted with the purpose of determining the frequency of medication errors (MEs) occurring in tertiary care emergency department (ED) of a large academic hospital in Iran. The incidence of MEs was determined through the disguised direct observation method conducted by a trained observer. A total of 1,031 medication doses administered to 202 patients admitted to the tertiary care ED were observed over a course of 54 6-hour shifts. Following collection of the data and analysis of the errors with the assistance of a clinical pharmacist, frequency of errors in the different stages was reported and analyzed in SPSS-21 software. For the 202 patients and the 1,031 medication doses evaluated in the present study, 707 (68.5%) MEs were recorded in total. In other words, 3.5 errors per patient and almost 0.69 errors per medication are reported to have occurred, with the highest frequency of errors pertaining to cardiovascular (27.2%) and antimicrobial (23.6%) medications. The highest rate of errors occurred during the administration phase of the medication use process with a share of 37.6%, followed by errors of prescription and transcription with a share of 21.1% and 10% of errors, respectively. Omission (7.6%) and wrong time error (4.4%) were the most frequent administration errors. The less-experienced nurses (P=0.04), higher patient-to-nurse ratio (P=0.017), and the morning shifts (P=0.035) were positively related to administration errors. Administration errors marked the highest share of MEs occurring in the different medication use processes. Increasing the number of nurses and employing the more experienced of them in EDs can help reduce nursing errors. Addressing the shortcomings with further research should result in reduction of MEs in EDs. PMID:25525391

  7. Frequency of medication errors in an emergency department of a large teaching hospital in southern Iran

    PubMed Central

    Vazin, Afsaneh; Zamani, Zahra; Hatam, Nahid

    2014-01-01

    This study was conducted with the purpose of determining the frequency of medication errors (MEs) occurring in tertiary care emergency department (ED) of a large academic hospital in Iran. The incidence of MEs was determined through the disguised direct observation method conducted by a trained observer. A total of 1,031 medication doses administered to 202 patients admitted to the tertiary care ED were observed over a course of 54 6-hour shifts. Following collection of the data and analysis of the errors with the assistance of a clinical pharmacist, frequency of errors in the different stages was reported and analyzed in SPSS-21 software. For the 202 patients and the 1,031 medication doses evaluated in the present study, 707 (68.5%) MEs were recorded in total. In other words, 3.5 errors per patient and almost 0.69 errors per medication are reported to have occurred, with the highest frequency of errors pertaining to cardiovascular (27.2%) and antimicrobial (23.6%) medications. The highest rate of errors occurred during the administration phase of the medication use process with a share of 37.6%, followed by errors of prescription and transcription with a share of 21.1% and 10% of errors, respectively. Omission (7.6%) and wrong time error (4.4%) were the most frequent administration errors. The less-experienced nurses (P=0.04), higher patient-to-nurse ratio (P=0.017), and the morning shifts (P=0.035) were positively related to administration errors. Administration errors marked the highest share of MEs occurring in the different medication use processes. Increasing the number of nurses and employing the more experienced of them in EDs can help reduce nursing errors. Addressing the shortcomings with further research should result in reduction of MEs in EDs. PMID:25525391

  8. Packet error probabilities in frequency-hopped spread spectrum packet radio networks. Markov frequency hopping patterns considered

    NASA Astrophysics Data System (ADS)

    Georgiopoulos, M.; Kazakos, P.

    1987-09-01

    We compute the packet error probability induced in a frequency-hopped spread spectrum packet radio network, which utilizes first order Markov frequency hopping patterns. The frequency spectrum is divided into q frequency bins and the packets are divided into M bytes each. Every user in the network sends each of the M bytes of his packet at a frequency bin, which is different from the frequency bin used by the previous byte, but equally likely to be any one of the remaining q-1 frequency bins (Markov frequency hopping patterns). Furthermore, different users in the network utilize statistically independent frequency hopping patterns. Provided that, K users have simultaneously transmitted their packets on the channel, and a receiver has locked on to one of these K packets, we present a method for the computation of P sub e (K) (i.e. the probability that this packet is incorrectly decoded). Furthermore, we present numerical results (i.e. P sub e (K) versus K) for various values of the multiple access interference K, when Reed Solomon (RS) codes are used for the encoding of packets. Finally, some useful comparisons, with the packet error probability induced, if we assume that the byte errors are independent, are made; based on these comparisons, we can easily evaluate the performance of our spread spectrum system.
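
    A Monte Carlo sketch of the quantity P sub e (K) described above, under simplifying assumptions: K interferers hop over q bins with first-order Markov patterns (each bin uniform over the other q-1 bins), a byte of the reference packet is counted as hit whenever any interferer lands in the reference user's current bin, and the packet is lost when the Reed-Solomon code cannot correct the hit bytes. Treating every hit byte as a byte error is a pessimistic simplification made here for brevity.

    import numpy as np

    def markov_hops(n_bytes: int, q: int, rng) -> np.ndarray:
        """One Markov hopping pattern: each bin differs from the previous one."""
        bins = np.empty(n_bytes, dtype=int)
        bins[0] = rng.integers(q)
        for i in range(1, n_bytes):
            step = rng.integers(1, q)                 # uniform over the other q-1 bins
            bins[i] = (bins[i - 1] + step) % q
        return bins

    def packet_error_prob(K: int, q: int = 100, M: int = 32, t: int = 4,
                          trials: int = 2000, seed: int = 0) -> float:
        """Estimate of P_e(K): probability that the reference packet (M bytes,
        RS code correcting t byte errors) is lost given K interferers."""
        rng = np.random.default_rng(seed)
        errors = 0
        for _ in range(trials):
            ref = markov_hops(M, q, rng)
            hits = np.zeros(M, dtype=bool)
            for _ in range(K):
                hits |= markov_hops(M, q, rng) == ref  # any interferer in our bin
            errors += hits.sum() > t
        return errors / trials

    for K in (1, 5, 10, 20):
        print(f"K={K:2d}: P_e ~ {packet_error_prob(K):.3f}")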

  9. Disentangling the impacts of outcome valence and outcome frequency on the post-error slowing

    PubMed Central

    Wang, Lijun; Tang, Dandan; Zhao, Yuanfang; Hitchman, Glenn; Wu, Shanshan; Tan, Jinfeng; Chen, Antao

    2015-01-01

    Post-error slowing (PES) reflects efficient outcome monitoring, manifested as slower reaction time after errors. Cognitive control account assumes that PES depends on error information, whereas orienting account posits that it depends on error frequency. This raises the question how the outcome valence and outcome frequency separably influence the generation of PES. To address this issue, we varied the probability of observation errors (50/50 and 20/80, correct/error) the “partner” committed by employing an observation-execution task and investigated the corresponding behavioral and neural effects. On each trial, participants first viewed the outcome of a flanker-run that was supposedly performed by a ‘partner’, and then performed a flanker-run themselves afterwards. We observed PES in the two error rate conditions. However, electroencephalographic data suggested error-related potentials (oERN and oPe) and rhythmic oscillation associated with attentional process (alpha band) were respectively sensitive to outcome valence and outcome frequency. Importantly, oERN amplitude was positively correlated with PES. Taken together, these findings support the assumption of the cognitive control account, suggesting that outcome valence and outcome frequency are both involved in PES. Moreover, the generation of PES is indexed by oERN, whereas the modulation of PES size could be reflected on the alpha band. PMID:25732237

  10. Error Bounds for Quadrature Methods Involving Lower Order Derivatives

    ERIC Educational Resources Information Center

    Engelbrecht, Johann; Fedotov, Igor; Fedotova, Tanya; Harding, Ansie

    2003-01-01

    Quadrature methods for approximating the definite integral of a function f(t) over an interval [a,b] are in common use. Examples of such methods are the Newton-Cotes formulas (midpoint, trapezoidal and Simpson methods etc.) and the Gauss-Legendre quadrature rules, to name two types of quadrature. Error bounds for these approximations involve…

  11. On low-frequency errors of uniformly modulated filtered white-noise models for ground motions

    USGS Publications Warehouse

    Safak, Erdal; Boore, David M.

    1988-01-01

    Low-frequency errors of a commonly used non-stationary stochastic model (uniformly modulated filtered white-noise model) for earthquake ground motions are investigated. It is shown both analytically and by numerical simulation that uniformly modulated filtered white-noise-type models systematically overestimate the spectral response for periods longer than the effective duration of the earthquake, because of the built-in low-frequency errors in the model. The errors, which are significant for low-magnitude short-duration earthquakes, can be eliminated by using the filtered shot-noise-type models (i.e., white noise, modulated by the envelope first, and then filtered).
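
    The difference between the two model types can be seen numerically with the sketch below: white noise that is filtered first and then multiplied by an envelope (the uniformly modulated filtered white-noise construction) retains low-frequency energy introduced by the modulation, whereas noise that is modulated by the envelope first and then filtered does not. The boxcar envelope, filter band, and low-frequency cutoff used here are crude stand-ins chosen only to make the contrast visible.

    import numpy as np
    from scipy.signal import butter, sosfilt

    fs = 50.0
    t = np.arange(0, 60, 1 / fs)
    rng = np.random.default_rng(4)

    envelope = ((t > 5.0) & (t < 10.0)).astype(float)          # crude 5-s effective duration
    sos_bp = butter(4, [0.5, 10.0], btype="band", fs=fs, output="sos")  # ground-motion-like band
    sos_lp = butter(4, 0.2, btype="low", fs=fs, output="sos")           # isolates f < 0.2 Hz

    n_avg = 200
    e_filter_mod = e_mod_filter = 0.0
    for _ in range(n_avg):
        w = rng.standard_normal(t.size)
        x_filter_then_mod = envelope * sosfilt(sos_bp, w)      # uniformly modulated filtered noise
        x_mod_then_filter = sosfilt(sos_bp, envelope * w)      # filtered "shot-noise" type
        e_filter_mod += np.mean(sosfilt(sos_lp, x_filter_then_mod) ** 2) / n_avg
        e_mod_filter += np.mean(sosfilt(sos_lp, x_mod_then_filter) ** 2) / n_avg

    print("mean low-frequency power, filter-then-modulate:", e_filter_mod)
    print("mean low-frequency power, modulate-then-filter:", e_mod_filter)
    print("ratio:", e_filter_mod / e_mod_filter)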

  12. An Empirical Point Error Model for TLS Derived Point Clouds

    NASA Astrophysics Data System (ADS)

    Ozendi, Mustafa; Akca, Devrim; Topan, Hüseyin

    2016-06-01

    The random error pattern of point clouds has a significant effect on the quality of the final 3D model. The magnitude and distribution of random errors should be modelled numerically. This work aims at developing such an anisotropic point error model, specifically for terrestrial laser scanner (TLS) acquired 3D point clouds. A priori precisions of the basic TLS observations, which are the range, horizontal angle and vertical angle, are determined by predefined and practical measurement configurations performed in real-world test environments. The a priori precisions of the horizontal (σθ) and vertical (σα) angles are constant for each point of a data set, and can be determined directly through repetitive scanning of the same environment. In our practical tests, the precisions of the horizontal and vertical angles were found to be σθ = ±36.6 cc and σα = ±17.8 cc, respectively. On the other hand, the a priori precision of the range observation (σρ) is assumed to be a function of range, incidence angle of the incoming laser ray, and reflectivity of the object surface. Hence, it is a variable, and is computed for each point individually by employing an empirically developed formula, varying between σρ = ±2 mm and ±12 mm for a FARO Focus X330 laser scanner. This procedure was followed by the computation of error ellipsoids for each point using the law of variance-covariance propagation. The direction and size of the error ellipsoids were computed by the principal components transformation. The usability and feasibility of the model were investigated in real-world scenarios. These investigations validated the suitability and practicality of the proposed method.
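
    The variance-covariance propagation step can be sketched as follows: spherical observations (range, horizontal angle, vertical angle) with a priori precisions are mapped to a Cartesian covariance matrix through the Jacobian of the spherical-to-Cartesian transform, and the 1-sigma error ellipsoid follows from its eigen-decomposition. The angle convention and the fixed range precision below are assumptions; in the paper the range precision would come from the empirical formula instead.

    import numpy as np

    CC_TO_RAD = np.pi / 200.0 / 10000.0     # 1 cc = 1e-4 gon

    def error_ellipsoid(rho, theta, alpha, s_rho, s_theta_cc, s_alpha_cc):
        """Return (semi-axes, axis directions) of the 1-sigma error ellipsoid for a
        point at range rho [m], horizontal angle theta and vertical angle alpha
        [rad], assuming x = rho cos(alpha) cos(theta), y = rho cos(alpha) sin(theta),
        z = rho sin(alpha)."""
        s_theta = s_theta_cc * CC_TO_RAD
        s_alpha = s_alpha_cc * CC_TO_RAD
        ca, sa, ct, st = np.cos(alpha), np.sin(alpha), np.cos(theta), np.sin(theta)
        # Jacobian of (x, y, z) with respect to (rho, theta, alpha)
        J = np.array([[ca * ct, -rho * ca * st, -rho * sa * ct],
                      [ca * st,  rho * ca * ct, -rho * sa * st],
                      [sa,       0.0,            rho * ca]])
        C_obs = np.diag([s_rho**2, s_theta**2, s_alpha**2])
        C_xyz = J @ C_obs @ J.T                 # law of variance-covariance propagation
        eigval, eigvec = np.linalg.eigh(C_xyz)  # principal components of the covariance
        return np.sqrt(eigval), eigvec

    axes, _ = error_ellipsoid(rho=30.0, theta=np.radians(40), alpha=np.radians(10),
                              s_rho=0.004, s_theta_cc=36.6, s_alpha_cc=17.8)
    print("1-sigma ellipsoid semi-axes [mm]:", np.round(axes * 1000, 2))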

  13. Analysis of measured data of human body based on error correcting frequency

    NASA Astrophysics Data System (ADS)

    Jin, Aiyan; Peipei, Gao; Shang, Xiaomei

    2014-04-01

    Anthropometry is the measurement of all parts of the human body surface, and the measured data form the basis for analysis and study of the human body, for the establishment and modification of garment sizes, and for the design and operation of online clothing stores. In this paper, several groups of measured data are collected, and the data errors are analyzed by examining the error frequency and applying the analysis-of-variance method from mathematical statistics. The paper also addresses the accuracy of the measured data and the difficulty of measuring particular parts of the human body, further studies the causes of data errors, and summarizes the key points for minimizing errors. By analyzing the measured data on the basis of error frequency, the paper provides reference material to support the development of the garment industry.

  14. Frequency-domain correction of sensor dynamic error for step response.

    PubMed

    Yang, Shuang-Long; Xu, Ke-Jun

    2012-11-01

    To obtain accurate results in dynamic measurements, the sensors are required to have good dynamic performance. In practice, sensors have non-ideal dynamic characteristics due to their small damping ratios and low natural frequencies. In this case, dynamic error correction methods can be applied to the sensor responses to eliminate the effect of these characteristics. Frequency-domain correction of sensor dynamic error is a common method. With the existing calculation method, however, the correct frequency-domain correction function (FCF) cannot be obtained from step-response calibration data, because of the leakage error and invalid FCF values caused by the periodic extension of the finite-length intercepted step input-output data. To solve these problems, data-splicing preprocessing and FCF interpolation are put forward, and the FCF calculation steps as well as the sensor dynamic error correction procedure using the calculated FCF are presented in this paper. The proposed solution is applied to the dynamic error correction of a bar-shaped wind tunnel strain gauge balance to verify its effectiveness. The results show that the settling time of the balance step response is shortened to 10 ms (less than 1/30 of its value before correction) after frequency-domain correction, and the overshoot falls within 5% (less than 1/10 of its value before correction). The dynamic measurement accuracy of the balance is improved significantly. PMID:23206091

  15. Frequency-domain correction of sensor dynamic error for step response

    NASA Astrophysics Data System (ADS)

    Yang, Shuang-Long; Xu, Ke-Jun

    2012-11-01

    To obtain accurate results in dynamic measurements, the sensors are required to have good dynamic performance. In practice, sensors have non-ideal dynamic characteristics due to their small damping ratios and low natural frequencies. In this case, dynamic error correction methods can be applied to the sensor responses to eliminate the effect of these characteristics. Frequency-domain correction of sensor dynamic error is a common method. With the existing calculation method, however, the correct frequency-domain correction function (FCF) cannot be obtained from step-response calibration data, because of the leakage error and invalid FCF values caused by the periodic extension of the finite-length intercepted step input-output data. To solve these problems, data-splicing preprocessing and FCF interpolation are put forward, and the FCF calculation steps as well as the sensor dynamic error correction procedure using the calculated FCF are presented in this paper. The proposed solution is applied to the dynamic error correction of a bar-shaped wind tunnel strain gauge balance to verify its effectiveness. The results show that the settling time of the balance step response is shortened to 10 ms (less than 1/30 of its value before correction) after frequency-domain correction, and the overshoot falls within 5% (less than 1/10 of its value before correction). The dynamic measurement accuracy of the balance is improved significantly.

  16. Performance analysis for time-frequency MUSIC algorithm in presence of both additive noise and array calibration errors

    NASA Astrophysics Data System (ADS)

    Khodja, Mohamed; Belouchrani, Adel; Abed-Meraim, Karim

    2012-12-01

    This article deals with the application of the Spatial Time-Frequency Distribution (STFD) to the direction finding problem using the Multiple Signal Classification (MUSIC) algorithm. A comparative performance analysis is performed for the method under consideration with respect to the method using the data covariance matrix, when the received array signals are subject to calibration errors in a non-stationary environment. A unified analytical expression of the Direction Of Arrival (DOA) estimation error is derived for both methods. Numerical results show the effect of the parameters appearing in the derived expression on the algorithm performance. It is particularly observed that for low Signal to Noise Ratio (SNR) and high Signal to sensor Perturbation Ratio (SPR) the STFD method gives better performance, while for high SNR and the same SPR both methods give similar performance.
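
    For context, the sketch below runs the classic narrowband MUSIC estimator on a simulated uniform linear array with small random gain/phase calibration errors; in the STFD-based variant discussed above, a spatial time-frequency distribution matrix would simply take the place of the sample covariance matrix. Array geometry, SNR, and the calibration-error model are illustrative assumptions.

    import numpy as np
    from scipy.signal import find_peaks

    rng = np.random.default_rng(5)
    M, N, d = 8, 400, 0.5                       # sensors, snapshots, spacing (wavelengths)
    true_doas = np.radians([-20.0, 25.0])

    def steering(theta):
        """Nominal ULA steering vectors for angles theta (radians)."""
        return np.exp(-2j * np.pi * d * np.arange(M)[:, None] * np.sin(np.atleast_1d(theta)))

    # Snapshots with small random gain/phase calibration errors on the array
    gains = (1 + 0.05 * rng.standard_normal(M)) * np.exp(1j * 0.05 * rng.standard_normal(M))
    S = (rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))) / np.sqrt(2)
    noise = 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
    X = gains[:, None] * (steering(true_doas) @ S) + noise

    R = X @ X.conj().T / N                      # sample covariance (an STFD matrix could replace this)
    eigval, eigvec = np.linalg.eigh(R)
    En = eigvec[:, :M - 2]                      # noise subspace (number of sources assumed known)

    scan = np.radians(np.linspace(-90, 90, 1441))
    proj = En.conj().T @ steering(scan)         # projection of nominal steering vectors on noise subspace
    p_music = 1.0 / np.sum(np.abs(proj) ** 2, axis=0)

    peaks, _ = find_peaks(p_music)
    best = peaks[np.argsort(p_music[peaks])[-2:]]
    print("estimated DOAs [deg]:", np.round(np.sort(np.degrees(scan[best])), 2))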

  17. Superconvergence of the derivative patch recovery technique and a posteriori error estimation

    SciTech Connect

    Zhang, Z.; Zhu, J.Z.

    1995-12-31

    The derivative patch recovery technique developed by Zienkiewicz and Zhu for the finite element method is analyzed. It is shown that, for one dimensional problems and two dimensional problems using tensor product elements, the patch recovery technique yields superconvergence recovery for the derivatives. Consequently, the error estimator based on the recovered derivative is asymptotically exact.

  18. Online public reactions to frequency of diagnostic errors in US outpatient care

    PubMed Central

    Giardina, Traber Davis; Sarkar, Urmimala; Gourley, Gato; Modi, Varsha; Meyer, Ashley N.D.; Singh, Hardeep

    2016-01-01

    Background Diagnostic errors pose a significant threat to patient safety but little is known about public perceptions of diagnostic errors. A study published in BMJ Quality & Safety in 2014 estimated that diagnostic errors affect at least 5% of US adults (or 12 million) per year. We sought to explore online public reactions to media reports on the reported frequency of diagnostic errors in the US adult population. Methods We searched the World Wide Web for any news article reporting findings from the study. We then gathered all the online comments made in response to the news articles to evaluate public reaction to the newly reported diagnostic error frequency (n=241). Two coders conducted content analyses of the comments and an experienced qualitative researcher resolved differences. Results Overall, there were few comments made regarding the frequency of diagnostic errors. However, in response to the media coverage, 44 commenters shared personal experiences of diagnostic errors. Additionally, commentary centered on diagnosis-related quality of care as affected by two emergent categories: (1) US health care providers (n=79; 63 commenters) and (2) US health care reform-related policies, most commonly the Affordable Care Act (ACA) and insurance/reimbursement issues (n=62; 47 commenters). Conclusion The public appears to have substantial concerns about the impact of the ACA and other reform initiatives on the diagnosis-related quality of care. However, policy discussions on diagnostic errors are largely absent from the current national conversation on improving quality and safety. Because outpatient diagnostic errors have emerged as a major safety concern, researchers and policymakers should consider evaluating the effects of policy and practice changes on diagnostic accuracy. PMID:27347474

  19. Bit error rate performance of pi/4-DQPSK in a frequency-selective fast Rayleigh fading channel

    NASA Technical Reports Server (NTRS)

    Liu, Chia-Liang; Feher, Kamilo

    1991-01-01

    The bit error rate (BER) performance of pi/4-differential quadrature phase shift keying (DQPSK) modems in cellular mobile communication systems is derived and analyzed. The system is modeled as a frequency-selective fast Rayleigh fading channel corrupted by additive white Gaussian noise (AWGN) and co-channel interference (CCI). The probability density function of the phase difference between two consecutive symbols of M-ary differential phase shift keying (DPSK) signals is first derived. In M-ary DPSK systems, the information is completely contained in this phase difference. For pi/4-DQPSK, the BER is derived in a closed form and calculated directly. Numerical results show that for the 24 kBd (48 kb/s) pi/4-DQPSK operated at a carrier frequency of 850 MHz and C/I less than 20 dB, the BER will be dominated by CCI if the vehicular speed is below 100 mi/h. In this derivation, frequency-selective fading is modeled by two independent Rayleigh signal paths. Only one co-channel is assumed in this derivation. The results obtained are also shown to be valid for discriminator detection of M-ary DPSK signals.

  20. Computational procedures for evaluating the sensitivity derivatives of vibration frequencies and Eigenmodes of framed structures

    NASA Technical Reports Server (NTRS)

    Fetterman, Timothy L.; Noor, Ahmed K.

    1987-01-01

    Computational procedures are presented for evaluating the sensitivity derivatives of the vibration frequencies and eigenmodes of framed structures. Both a displacement and a mixed formulation are used. The two key elements of the computational procedure are: (a) Use of dynamic reduction techniques to substantially reduce the number of degrees of freedom; and (b) Application of iterative techniques to improve the accuracy of the derivatives of the eigenmodes. The two reduction techniques considered are the static condensation and a generalized dynamic reduction technique. Error norms are introduced to assess the accuracy of the eigenvalue and eigenvector derivatives obtained by the reduction techniques. The effectiveness of the methods presented is demonstrated by three numerical examples.
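
    The starting point for such sensitivity computations is the standard eigenvalue-derivative formula for distinct modes, d(lambda)/db = phi^T (dK/db - lambda dM/db) phi with mass-normalised phi, which the sketch below checks numerically on a toy 3-DOF spring-mass chain (an assumption, not one of the paper's examples); the reduction techniques and the iterative refinement of eigenvector derivatives are not shown.

    import numpy as np
    from scipy.linalg import eigh

    def springs_K(k):
        """Stiffness matrix of a 3-mass chain with spring stiffnesses k[0..3]."""
        return np.array([[k[0] + k[1], -k[1],        0.0],
                         [-k[1],       k[1] + k[2], -k[2]],
                         [0.0,        -k[2],        k[2] + k[3]]])

    M = np.diag([2.0, 1.0, 3.0])
    k = np.array([100.0, 80.0, 120.0, 90.0])
    b_index = 1                                   # design variable: stiffness k[1]

    lam, Phi = eigh(springs_K(k), M)              # modes are mass-normalised (phi^T M phi = 1)

    dK = np.array([[1.0, -1.0, 0.0],              # analytic dK/dk[1]
                   [-1.0, 1.0, 0.0],
                   [0.0,  0.0, 0.0]])
    dM = np.zeros((3, 3))                         # mass does not depend on k[1]

    d_lam_analytic = np.array([Phi[:, i] @ (dK - lam[i] * dM) @ Phi[:, i] for i in range(3)])

    # Finite-difference check
    h = 1e-5
    kp = k.copy(); kp[b_index] += h
    lam_p, _ = eigh(springs_K(kp), M)
    d_lam_fd = (lam_p - lam) / h

    # Frequency sensitivities would follow from d(omega)/db = d(lambda)/db / (2 omega)
    print("d(lambda)/dk1 analytic :", np.round(d_lam_analytic, 6))
    print("d(lambda)/dk1 fin.diff.:", np.round(d_lam_fd, 6))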

  1. Efficient simulation for fixed-receiver bistatic SAR with time and frequency synchronization errors

    NASA Astrophysics Data System (ADS)

    Yan, Feifei; Chang, Wenge; Li, Xiangyang

    2015-12-01

    Raw signal simulation is a useful tool for synthetic aperture radar (SAR) system design, mission planning, processing algorithm testing, and inversion algorithm design. Time and frequency synchronization is the key technique of bistatic SAR (BiSAR) system, and raw data simulation is an effective tool for verifying the time and frequency synchronization techniques. According to the two-dimensional (2-D) frequency spectrum of fixed-receiver BiSAR, a rapid raw data simulation approach with time and frequency synchronization errors is proposed in this paper. Through 2-D inverse Stolt transform in 2-D frequency domain and phase compensation in range-Doppler frequency domain, this method can significantly improve the efficiency of scene raw data simulation. Simulation results of point targets and extended scene are presented to validate the feasibility and efficiency of the proposed simulation approach.

  2. Sparsity-based moving target localization using multiple dual-frequency radars under phase errors

    NASA Astrophysics Data System (ADS)

    Al Kadry, Khodour; Ahmad, Fauzia; Amin, Moeness G.

    2015-05-01

    In this paper, we consider moving target localization in urban environments using a multiplicity of dual-frequency radars. Dual-frequency radars offer the benefit of reduced complexity and fast computation time, thereby permitting real-time indoor target localization and tracking. The multiple radar units are deployed in a distributed system configuration, which provides robustness against target obscuration. We develop the dual-frequency signal model for the distributed radar system under phase errors and employ a joint sparse scene reconstruction and phase error correction technique to provide accurate target location and velocity estimates. Simulation results are provided that validate the performance of the proposed scheme under both full and reduced data volumes.

  3. A Derivation of the Unbiased Standard Error of Estimate: The General Case.

    ERIC Educational Resources Information Center

    O'Brien, Francis J., Jr.

    This paper is part of a series of applied statistics monographs intended to provide supplementary reading for applied statistics students. In the present paper, derivations of the unbiased standard error of estimate for both the raw score and standard score linear models are presented. The derivations for raw score linear models are presented in…
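
    For reference, the standard result in question (stated here from general statistics rather than reproduced from the monograph) follows from dividing the residual sum of squares by its degrees of freedom, n - 2; in LaTeX form,

        s_{y\cdot x} \;=\; \sqrt{\frac{\sum_{i=1}^{n}\left(y_i-\hat{y}_i\right)^{2}}{n-2}}
                     \;=\; s_y \sqrt{\left(1-r_{xy}^{2}\right)\frac{n-1}{n-2}},

    and for standard scores (s_y = 1) the same expression without the leading s_y.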

  4. Error analysis for semi-analytic displacement derivatives with respect to shape and sizing variables

    NASA Technical Reports Server (NTRS)

    Fenyes, Peter A.; Lust, Robert V.

    1989-01-01

    Sensitivity analysis is fundamental to the solution of structural optimization problems. Consequently, much research has focused on the efficient computation of static displacement derivatives. As originally developed, these methods relied on analytical representations for the derivatives of the structural stiffness matrix (K) with respect to the design variables (b sub i). To extend these methods for use with complex finite element formulations and facilitate their implementation into structural optimization programs using general finite element method analysis codes, the semi-analytic method was developed. In this method the matrix dK/db(sub i) is approximated by finite differences. Although it is well known that the accuracy of the semi-analytic method is dependent on the finite difference parameter, recent work has suggested that more fundamental inaccuracies exist in the method when used for shape optimization. Another study has argued qualitatively that these errors are related to nonuniform errors in the stiffness matrix derivatives. The accuracy of the semi-analytic method is investigated. A general framework was developed for the error analysis and then it is shown analytically that the errors in the method are entirely accounted for by errors in dK/db(sub i). Furthermore, it is demonstrated that acceptable accuracy in the derivatives can be obtained through careful selection of the finite difference parameter.
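
    A stripped-down illustration of the semi-analytic idea discussed above: the displacement sensitivity is obtained from K du/db = dF/db - (dK/db) u, with dK/db replaced by a forward finite difference of the assembled stiffness matrix. A two-degree-of-freedom spring model stands in for a real finite element model (an assumption), and the sweep over step sizes hints at the accuracy question the paper analyses.

    import numpy as np

    def assemble_K(b):
        """Toy 'finite element' stiffness matrix; b is the design variable."""
        k1, k2 = 100.0 * b, 50.0 * b**2          # element stiffnesses depend on b
        return np.array([[k1 + k2, -k2],
                         [-k2,      k2]])

    F = np.array([0.0, 10.0])                     # load vector (independent of b)
    b = 2.0
    K = assemble_K(b)
    u = np.linalg.solve(K, F)

    # Reference: analytic dK/db for this toy model
    dK_exact = np.array([[100.0 + 100.0 * b, -100.0 * b],
                         [-100.0 * b,         100.0 * b]])
    du_exact = np.linalg.solve(K, -dK_exact @ u)  # dF/db = 0 here

    for h in (1e-2, 1e-4, 1e-6, 1e-8):
        dK_fd = (assemble_K(b + h) - K) / h       # semi-analytic ingredient: finite-differenced dK/db
        du_sa = np.linalg.solve(K, -dK_fd @ u)
        err = np.linalg.norm(du_sa - du_exact) / np.linalg.norm(du_exact)
        print(f"step {h:.0e}: relative error in du/db = {err:.2e}")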

  5. Impact of radar systematic error on the orthogonal frequency division multiplexing chirp waveform orthogonality

    NASA Astrophysics Data System (ADS)

    Wang, Jie; Liang, Xingdong; Chen, Longyong; Ding, Chibiao

    2015-01-01

    Orthogonal frequency division multiplexing (OFDM) chirp waveform, which is composed of two successive identical linear frequency modulated subpulses, is a newly proposed orthogonal waveform scheme for multiinput multioutput synthetic aperture radar (SAR) systems. However, according to the waveform model, radar systematic error, which introduces phase or amplitude difference between the subpulses of the OFDM waveform, significantly degrades the orthogonality. The impact of radar systematic error on the waveform orthogonality is mainly caused by the systematic nonlinearity rather than the thermal noise or the frequency-dependent systematic error. Due to the influence of the causal filters, the first subpulse leaks into the second one. The leaked signal interacts with the second subpulse in the nonlinear components of the transmitter. This interaction renders a dramatic phase distortion in the beginning of the second subpulse. The resultant distortion, which leads to a phase difference between the subpulses, seriously damages the waveform's orthogonality. The impact of radar systematic error on the waveform orthogonality is addressed. Moreover, the impact of the systematic nonlinearity on the waveform is avoided by adding a standby between the subpulses. Theoretical analysis is validated by practical experiments based on a C-band SAR system.

  6. Error Estimates Derived from the Data for Least-Squares Spline Fitting

    SciTech Connect

    Jerome Blair

    2007-06-25

    The use of least-squares fitting by cubic splines for the purpose of noise reduction in measured data is studied. Splines with variable mesh size are considered. The error, the difference between the input signal and its estimate, is divided into two sources: the R-error, which depends only on the noise and increases with decreasing mesh size, and the F-error, which depends only on the signal and decreases with decreasing mesh size. The estimation of both errors as a function of time is demonstrated. The R-error estimation requires knowledge of the statistics of the noise and uses well-known methods. The primary contribution of the paper is a method for estimating the F-error that requires no prior knowledge of the signal except that it has four derivatives. It is calculated from the difference between two different spline fits to the data and is illustrated with Monte Carlo simulations and with an example.
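    A minimal scipy sketch of the general idea of using the difference between two least-squares spline fits with different mesh sizes as a signal-dependent error indicator is shown below; the synthetic test signal, noise level, and knot counts are assumptions, and the sketch does not reproduce the paper's actual estimator.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 500)
signal = np.sin(2 * np.pi * 3 * t)
data = signal + 0.05 * rng.standard_normal(t.size)

def spline_fit(x, y, n_knots):
    """Least-squares cubic spline with n_knots equally spaced interior knots."""
    knots = np.linspace(x[0], x[-1], n_knots + 2)[1:-1]
    return LSQUnivariateSpline(x, y, knots, k=3)(x)

coarse = spline_fit(t, data, 8)    # larger mesh size -> larger signal-dependent error
fine = spline_fit(t, data, 16)     # smaller mesh size -> smaller signal-dependent error

# The difference between the two fits tracks the signal-dependent (F-type) error of
# the coarse fit; estimating the noise-dependent (R-type) error needs the noise statistics.
f_error_estimate = coarse - fine
print(np.max(np.abs(f_error_estimate)), np.max(np.abs(coarse - signal)))
```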

  7. Error detection and correction for a multiple frequency quaternary phase shift keyed signal

    NASA Astrophysics Data System (ADS)

    Hopkins, Kevin S.

    1989-06-01

    A multiple frequency quaternary phase shift keyed (MFQPSK) signaling system was developed and experimentally tested in a controlled environment. To ensure that the quality of the received signal is such that information recovery is possible, error detection/correction (EDC) must be used. The available EDC coding schemes are reviewed and their application to the MFQPSK signaling system is analyzed. Hamming, Golay, Bose-Chaudhuri-Hocquenghem (BCH), and Reed-Solomon (R-S) block codes as well as convolutional codes are presented and analyzed in the context of specific MFQPSK system parameters. A computer program was developed to compute bit error probabilities as a function of signal-to-noise ratio. Results demonstrate that various EDC schemes are suitable for the MFQPSK signal structure, and that significant performance improvements are possible with the use of certain error correction codes.
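    As a hedged sketch of the kind of computation the abstract mentions (bit error probability versus signal-to-noise ratio), the snippet below evaluates the standard Gray-coded QPSK bit error formula Pb = Q(sqrt(2·Eb/N0)) and, for comparison, a textbook hard-decision block error approximation for a (7,4) Hamming code; it is not the thesis' program, and the code-rate energy penalty handling is an assumption.

```python
import numpy as np
from scipy.special import erfc
from scipy.stats import binom

def q_func(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * erfc(x / np.sqrt(2.0))

def uncoded_ber(ebn0_db):
    """Gray-coded coherent QPSK bit error probability, Pb = Q(sqrt(2 Eb/N0))."""
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    return q_func(np.sqrt(2.0 * ebn0))

def hamming_7_4_block_error(ebn0_db):
    """Hard-decision (7,4) Hamming code: a block is decoded incorrectly when two
    or more of its seven channel bits are in error (textbook approximation)."""
    rate = 4.0 / 7.0
    p = uncoded_ber(ebn0_db + 10.0 * np.log10(rate))  # code-rate energy penalty (assumption)
    return 1.0 - binom.cdf(1, 7, p)

for snr_db in range(0, 11, 2):
    print(f"Eb/N0 = {snr_db:2d} dB  uncoded BER = {uncoded_ber(snr_db):.2e}  "
          f"Hamming(7,4) block error = {hamming_7_4_block_error(snr_db):.2e}")
```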

  8. Robust nonstationary jammer mitigation for GPS receivers with instantaneous frequency error tolerance

    NASA Astrophysics Data System (ADS)

    Wang, Ben; Zhang, Yimin D.; Qin, Si; Amin, Moeness G.

    2016-05-01

    In this paper, we propose a nonstationary jammer suppression method for GPS receivers when the signals are sparsely sampled. Missing data samples induce noise-like artifacts in the time-frequency (TF) distribution and ambiguity function of the received signals, which lead to reduced capability and degraded performance in jammer signature estimation and excision. In the proposed method, a data-dependent TF kernel is utilized to mitigate the artifacts, and sparse reconstruction methods are then applied to obtain instantaneous frequency (IF) estimates of the jammers. In addition, an error tolerance on the IF estimate is applied to achieve robust jammer suppression performance in the presence of IF estimation inaccuracy.

  9. Minimizing high spatial frequency residual error in active space telescope mirrors

    NASA Astrophysics Data System (ADS)

    Gray, Thomas L.; Smith, Matthew W.; Cohan, Lucy E.; Miller, David W.

    2009-08-01

    The trend in future space telescopes is towards larger apertures, which provide increased sensitivity and improved angular resolution. Lightweight, segmented, rib-stiffened, actively controlled primary mirrors are an enabling technology, permitting large aperture telescopes to meet the mass and volume restrictions imposed by launch vehicles. Such mirrors, however, are limited in the extent to which their discrete surface-parallel electrostrictive actuators can command global prescription changes. Inevitably some amount of high spatial frequency residual error is added to the wavefront due to the discrete nature of the actuators. A parameterized finite element mirror model is used to simulate this phenomenon and determine designs that mitigate high spatial frequency residual errors in the mirror surface figure. Two predominant residual components are considered: dimpling induced by embedded actuators and print-through induced by facesheet polishing. A gradient descent algorithm is combined with the parameterized mirror model to allow rapid trade space navigation and optimization of the mirror design, yielding advanced design heuristics formulated in terms of minimum machinable rib thickness. These relationships produce mirrors that satisfy manufacturing constraints and minimize uncorrectable high spatial frequency error.

  10. A frequency-domain derivation of shot-noise

    NASA Astrophysics Data System (ADS)

    Rice, Frank

    2016-01-01

    A formula for shot-noise is derived in the frequency-domain. The derivation is complete and reasonably rigorous while being appropriate for undergraduate students; it models a sequence of random pulses using Fourier sine and cosine series, and requires some basic statistical concepts. The text here may serve as a pedagogic introduction to the spectral analysis of random processes and may prove useful to introduce students to the logic behind stochastic problems. The concepts of noise power spectral density and equivalent noise bandwidth are introduced.
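    For orientation only, the standard frequency-domain result that such derivations arrive at (the Schottky formula, stated here from textbook knowledge rather than quoted from the paper) is that for a DC current I_0 carried by independent, randomly arriving charges q, the current-fluctuation spectrum is white:

```latex
% One-sided power spectral density of shot noise (Schottky formula):
S_I(f) = 2 q I_0 \qquad \left[\mathrm{A^2\,Hz^{-1}}\right],
% so the mean-square noise current in an equivalent noise bandwidth \Delta f is
\overline{i_n^{\,2}} = 2 q I_0 \,\Delta f .
```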

  11. The Use of Neural Networks in Identifying Error Sources in Satellite-Derived Tropical SST Estimates

    PubMed Central

    Lee, Yung-Hsiang; Ho, Chung-Ru; Su, Feng-Chun; Kuo, Nan-Jung; Cheng, Yu-Hsin

    2011-01-01

    A neural network model of data mining is used to identify error sources in satellite-derived tropical sea surface temperature (SST) estimates from thermal infrared sensors onboard the Geostationary Operational Environmental Satellite (GOES). Using the Back Propagation Network (BPN) algorithm, it is found that air temperature, relative humidity, and wind speed variation are the major factors causing the errors of GOES SST products in the tropical Pacific. The accuracy of SST estimates is also improved by the model. The root mean square error (RMSE) for the daily SST estimate is reduced from 0.58 K to 0.38 K and the mean absolute percentage error (MAPE) is 1.03%. For the hourly mean SST estimate, the RMSE is also reduced from 0.66 K to 0.44 K and the MAPE is 1.3%. PMID:22164030
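    The record names a back-propagation network with air temperature, relative humidity, and wind speed as the dominant error drivers; below is a heavily hedged, modern stand-in using scikit-learn's MLPRegressor on purely synthetic data. The variables, error model, and network size are illustrative assumptions, not the authors' model or data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2000
air_temp = rng.uniform(22.0, 32.0, n)     # surface air temperature, deg C (synthetic)
rel_hum = rng.uniform(40.0, 95.0, n)      # relative humidity, % (synthetic)
wind = rng.uniform(0.0, 12.0, n)          # wind speed, m/s (synthetic)
true_sst = 0.8 * air_temp + 6.0           # made-up "in situ" SST
# Synthetic satellite SST whose error depends on the three predictors:
sat_sst = (true_sst + 0.10 * (air_temp - 27.0) + 0.04 * (rel_hum - 60.0) / 10.0
           - 0.05 * wind + 0.3 * rng.standard_normal(n))

X = np.column_stack([sat_sst, air_temp, rel_hum, wind])
X_tr, X_te, y_tr, y_te = train_test_split(X, true_sst, random_state=0)

# Small feed-forward network trained by back-propagation, standing in for the BPN.
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0))
model.fit(X_tr, y_tr)

rmse = lambda a, b: float(np.sqrt(np.mean((a - b) ** 2)))
print(f"RMSE before: {rmse(X_te[:, 0], y_te):.2f} K, "
      f"after correction: {rmse(model.predict(X_te), y_te):.2f} K")
```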

  12. The use of neural networks in identifying error sources in satellite-derived tropical SST estimates.

    PubMed

    Lee, Yung-Hsiang; Ho, Chung-Ru; Su, Feng-Chun; Kuo, Nan-Jung; Cheng, Yu-Hsin

    2011-01-01

    A neural network model of data mining is used to identify error sources in satellite-derived tropical sea surface temperature (SST) estimates from thermal infrared sensors onboard the Geostationary Operational Environmental Satellite (GOES). Using the Back Propagation Network (BPN) algorithm, it is found that air temperature, relative humidity, and wind speed variation are the major factors causing the errors of GOES SST products in the tropical Pacific. The accuracy of SST estimates is also improved by the model. The root mean square error (RMSE) for the daily SST estimate is reduced from 0.58 K to 0.38 K and the mean absolute percentage error (MAPE) is 1.03%. For the hourly mean SST estimate, the RMSE is also reduced from 0.66 K to 0.44 K and the MAPE is 1.3%. PMID:22164030

  13. Correction of phase-error for phase-resolved k-clocked optical frequency domain imaging

    NASA Astrophysics Data System (ADS)

    Mo, Jianhua; Li, Jianan; de Boer, Johannes F.

    2012-01-01

    Phase-resolved optical frequency domain imaging (OFDI) has emerged as a promising technique for blood flow measurement in human tissues. Phase stability is essential for this technique to achieve high accuracy in flow velocity measurement. In OFDI systems that use k-clocking for the data acquisition, phase error occurs due to jitter in the data acquisition electronics. We present a statistical analysis of jitter represented as point shifts of the k-clocked spectrum and demonstrate a real-time phase-error correction algorithm for phase-resolved OFDI. A 50 kHz wavelength-swept laser (Axsun Technologies) based balanced-detection OFDI system was developed, centered at 1310 nm. To evaluate the performance of the algorithm, a stationary gold mirror was employed as a sample for phase analysis. Furthermore, we implemented the algorithm for imaging of human skin. Good-quality structural and Doppler images of skin can be observed in real time after phase-error correction. The results show that the algorithm can effectively correct the jitter-induced phase error in the OFDI system.

  14. Sinusoidal Siemens star spatial frequency response measurement errors due to misidentified target centers

    SciTech Connect

    Birch, Gabriel Carisle; Griffin, John Clark

    2015-07-23

    Numerous methods are available to measure the spatial frequency response (SFR) of an optical system. A recent change to the ISO 12233 photography resolution standard includes a sinusoidal Siemens star test target. We take the sinusoidal Siemens star proposed by the ISO 12233 standard, measure system SFR, and perform an analysis of errors induced by incorrectly identifying the center of a test target. We show a closed-form solution for the radial profile intensity measurement given an incorrectly determined center and describe how this error reduces the measured SFR of the system. As a result, using the closed-form solution, we propose a two-step process by which test target centers are corrected and the measured SFR is restored to the nominal, correctly centered values.

  15. Sinusoidal Siemens star spatial frequency response measurement errors due to misidentified target centers

    DOE PAGESBeta

    Birch, Gabriel Carisle; Griffin, John Clark

    2015-07-23

    Numerous methods are available to measure the spatial frequency response (SFR) of an optical system. A recent change to the ISO 12233 photography resolution standard includes a sinusoidal Siemens star test target. We take the sinusoidal Siemens star proposed by the ISO 12233 standard, measure system SFR, and perform an analysis of errors induced by incorrectly identifying the center of a test target. We show a closed-form solution for the radial profile intensity measurement given an incorrectly determined center and describe how this error reduces the measured SFR of the system. As a result, using the closed-form solution, we propose a two-step process by which test target centers are corrected and the measured SFR is restored to the nominal, correctly centered values.

  16. Topological derivatives for fundamental frequencies of elastic bodies

    NASA Astrophysics Data System (ADS)

    Kobelev, Vladimir

    2016-01-01

    In this article a new method for topological optimization of fundamental frequencies of elastic bodies, which could be considered as an improvement on the bubble method, is introduced. The method is based on generalized topological derivatives. For a body with different types of inclusion the vector genus is introduced. The dimension of the genus is the number of different elastic properties of the inclusions being introduced. The disturbances of stress and strain fields in an elastic matrix due to a newly inserted elastic inhomogeneity are given explicitly in terms of the stresses and strains in the initial body. The iterative positioning of inclusions is carried out by determination of the preferable position of the new inhomogeneity at the extreme points of the characteristic function. The characteristic function was derived using Eshelby's method. The expressions for optimal ratios of the semi-axes of the ellipse and angular orientation of newly inserted infinitesimally small inclusions of elliptical form are derived in closed analytical form.

  17. An Empirically Derived Taxonomy of Factors Affecting Physicians' Willingness to Disclose Medical Errors

    PubMed Central

    Kaldjian, Lauris C; Jones, Elizabeth W; Rosenthal, Gary E; Tripp-Reimer, Toni; Hillis, Stephen L

    2006-01-01

    BACKGROUND Physician disclosure of medical errors to institutions, patients, and colleagues is important for patient safety, patient care, and professional education. However, the variables that may facilitate or impede disclosure are diverse and lack conceptual organization. OBJECTIVE To develop an empirically derived, comprehensive taxonomy of factors that affect voluntary disclosure of errors by physicians. DESIGN A mixed-methods study using qualitative data collection (structured literature search and exploratory focus groups), quantitative data transformation (sorting and hierarchical cluster analysis), and validation procedures (confirmatory focus groups and expert review). RESULTS Full-text review of 316 articles identified 91 impeding or facilitating factors affecting physicians' willingness to disclose errors. Exploratory focus groups identified an additional 27 factors. Sorting and hierarchical cluster analysis organized factors into 8 domains. Confirmatory focus groups and expert review relocated 6 factors, removed 2 factors, and modified 4 domain names. The final taxonomy contained 4 domains of facilitating factors (responsibility to patient, responsibility to self, responsibility to profession, responsibility to community), and 4 domains of impeding factors (attitudinal barriers, uncertainties, helplessness, fears and anxieties). CONCLUSIONS A taxonomy of facilitating and impeding factors provides a conceptual framework for a complex field of variables that affects physicians' willingness to disclose errors to institutions, patients, and colleagues. This taxonomy can be used to guide the design of studies to measure the impact of different factors on disclosure, to assist in the design of error-reporting systems, and to inform educational interventions to promote the disclosure of errors to patients. PMID:16918739

  18. Minimizing systematic errors in phytoplankton pigment concentration derived from satellite ocean color measurements

    SciTech Connect

    Martin, D.L.

    1992-01-01

    Water-leaving radiances and phytoplankton pigment concentrations are calculated from Coastal Zone Color Scanner (CZCS) total radiance measurements by separating atmospheric Rayleigh and aerosol radiances from the total radiance signal measured at the satellite. Multiple scattering interactions between Rayleigh and aerosol components together with other meteorologically-moderated radiances cause systematic errors in calculated water-leaving radiances and produce errors in retrieved phytoplankton pigment concentrations. This thesis developed techniques which minimize the effects of these systematic errors in Level IIA CZCS imagery. Results of previous radiative transfer modeling by Gordon and Castano are extended to predict the pixel-specific magnitude of systematic errors caused by Rayleigh-aerosol multiple scattering interactions. CZCS orbital passes in which the ocean is viewed through a modeled, physically realistic atmosphere are simulated mathematically and radiance-retrieval errors are calculated for a range of aerosol optical depths. Pixels which exceed an error threshold in the simulated CZCS image are rejected in a corresponding actual image. Meteorological phenomena also cause artifactual errors in CZCS-derived phytoplankton pigment concentration imagery. Unless data contaminated with these effects are masked and excluded from analysis, they will be interpreted as containing valid biological information and will contribute significantly to erroneous estimates of phytoplankton temporal and spatial variability. A method is developed which minimizes these errors through a sequence of quality-control procedures including the calculation of variable cloud-threshold radiances, the computation of the extent of electronic overshoot from bright reflectors, and the imposition of a buffer zone around clouds to exclude contaminated data.

  19. Autocorrelation and Error Structure of Rainfall Derived from NEXRAD in Central and South Florida

    NASA Astrophysics Data System (ADS)

    Pathak, C. S.; Vieux, B. E.

    2007-12-01

    Motivation for this study comes from the South Florida Water Management District (District), which is responsible for managing water resources in 16 counties over a 46,439-square kilometer (17,930 square-mile) area. Near-real-time rainfall data are used in operation of approximately 3,000 kilometers (~1,800 miles) of canals, 22 major pump stations and 200 water control structures. The spatial extent of the District extends from Orlando to Key West and from the Gulf Coast to the Atlantic Ocean and contains major water features including Lake Okeechobee and the Everglades wetlands. Rainfall is a key factor in the water management decisions made by the District in real-time and through studies that rely on archival rainfall data derived from radar and rain gauge observations. Rainfall measurements are obtained from a combination of four NEXRAD radars and a rain gauge network that comprises 280 active rain gauge stations located in the more populated areas. Four NEXRAD (Next Generation Weather Radar) sites operated by the National Weather Service cover the region. Rain gauges are used for frequency analysis and for adjustment of the radar rainfall products. An optimization study of the rain gauge network is accomplished by removing gauges in areas of excess coverage, and by adding or moving rain gauges to gain a more even spatial distribution over the District. Rainfall fields measured at daily and hourly timesteps exhibit autocorrelation, which can affect the network design subject to optimality constraints. This presentation will describe the autocorrelation and error structure found in rainfall measurements derived from rain gauge and NEXRAD data. The data used in the analysis include rain gauge data and the NEXRAD rainfall data collected during 1995-2005 at 2 x 2 km resolution. A set of clusters of rain gauges and a regular array of analysis blocks that were 20 x 20 km in size for the NEXRAD data were used to account for variability of the rainfall processes

  20. Wind Power Forecasting Error Frequency Analyses for Operational Power System Studies: Preprint

    SciTech Connect

    Florita, A.; Hodge, B. M.; Milligan, M.

    2012-08-01

    The examination of wind power forecasting errors is crucial for optimal unit commitment and economic dispatch of power systems with significant wind power penetrations. This scheduling process includes both renewable and nonrenewable generators, and the incorporation of wind power forecasts will become increasingly important as wind fleets constitute a larger portion of generation portfolios. This research considers the Western Wind and Solar Integration Study database of wind power forecasts and numerical actualizations. This database comprises more than 30,000 locations spread over the western United States, with a total wind power capacity of 960 GW. Error analyses for individual sites and for specific balancing areas are performed using the database, quantifying the fit to theoretical distributions through goodness-of-fit metrics. Insights into wind-power forecasting error distributions are established for various levels of temporal and spatial resolution, contrasts made among the frequency distribution alternatives, and recommendations put forth for harnessing the results. Empirical data are used to produce more realistic site-level forecasts than previously employed, such that higher resolution operational studies are possible. This research feeds into a larger work of renewable integration through the links wind power forecasting has with various operational issues, such as stochastic unit commitment and flexible reserve level determination.
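    A minimal scipy sketch of the kind of frequency analysis described above, fitting candidate distributions to (here synthetic) wind power forecast errors and scoring each fit with a Kolmogorov-Smirnov statistic, follows; the candidate distributions and the synthetic error sample are assumptions for illustration, not the study's data or chosen models.

```python
import numpy as np
from scipy import stats

# Synthetic stand-in for normalized site-level wind power forecast errors;
# real forecast errors are typically heavier-tailed than a Gaussian.
errors = stats.laplace.rvs(loc=0.0, scale=0.05, size=10_000,
                           random_state=np.random.default_rng(42))

for name, dist in [("normal", stats.norm), ("laplace", stats.laplace), ("cauchy", stats.cauchy)]:
    params = dist.fit(errors)                          # maximum-likelihood fit
    ks = stats.kstest(errors, dist.cdf, args=params)   # goodness-of-fit metric
    print(f"{name:8s}  KS statistic = {ks.statistic:.4f}")
```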

  1. Wrongful Conviction: Perceptions of Criminal Justice Professionals Regarding the Frequency of Wrongful Conviction and the Extent of System Errors

    ERIC Educational Resources Information Center

    Ramsey, Robert J.; Frank, James

    2007-01-01

    Drawing on a sample of 798 Ohio criminal justice professionals (police, prosecutors, defense attorneys, judges), the authors examine respondents' perceptions regarding the frequency of system errors (i.e., professional error and misconduct suggested by previous research to be associated with wrongful conviction), and wrongful felony conviction.…

  2. Influence of nonhomogeneous earth on the rms phase error and beam-pointing errors of large, sparse high-frequency receiving arrays

    NASA Astrophysics Data System (ADS)

    Weiner, M. M.

    1994-01-01

    The performance of ground-based high-frequency (HF) receiving arrays is reduced when the array elements have electrically small ground planes. The array rms phase error and beam-pointing errors, caused by multipath rays reflected from a nonhomogeneous Earth, are determined for a sparse array of elements that are modeled as Hertzian dipoles in close proximity to Earth with no ground planes. Numerical results are presented for cases of randomly distributed and systematically distributed Earth nonhomogeneities where one-half of vertically polarized array elements are located in proximity to one type of Earth and the remaining half are located in proximity to a second type of Earth. The maximum rms phase errors, for the cases examined, are 18 deg and 9 deg for randomly distributed and systematically distributed nonhomogeneities, respectively. The maximum beam-pointing errors are 0 and 0.3 beam widths for randomly distributed and systematically distributed nonhomogeneities, respectively.

  3. Estimates of ocean forecast error covariance derived from Hessian Singular Vectors

    NASA Astrophysics Data System (ADS)

    Smith, Kevin D.; Moore, Andrew M.; Arango, Hernan G.

    2015-05-01

    Experience in numerical weather prediction suggests that singular value decomposition (SVD) of a forecast can yield useful a priori information about the growth of forecast errors. It has been shown formally that SVD using the inverse of the expected analysis error covariance matrix to define the norm at initial time yields the Empirical Orthogonal Functions (EOFs) of the forecast error covariance matrix at the final time. Because of their connection to the 2nd derivative of the cost function in 4-dimensional variational (4D-Var) data assimilation, the initial time singular vectors defined in this way are often referred to as the Hessian Singular Vectors (HSVs). In the present study, estimates of ocean forecast errors and forecast error covariance were computed using SVD applied to a baroclinically unstable temperature front in a re-entrant channel using the Regional Ocean Modeling System (ROMS). An identical twin approach was used in which a truth run of the model was sampled to generate synthetic hydrographic observations that were then assimilated into the same model started from an incorrect initial condition using 4D-Var. The 4D-Var system was run sequentially, and forecasts were initialized from each ocean analysis. SVD was performed on the resulting forecasts to compute the HSVs and corresponding EOFs of the expected forecast error covariance matrix. In this study, a reduced rank approximation of the inverse expected analysis error covariance matrix was used to compute the HSVs and EOFs based on the Lanczos vectors computed during the 4D-Var minimization of the cost function. This has the advantage that the entire spectrum of HSVs and EOFs in the reduced space can be computed. The associated singular value spectrum is found to yield consistent and reliable estimates of forecast error variance in the space spanned by the EOFs. In addition, at long forecast lead times the resulting HSVs and companion EOFs are able to capture many features of the actual

  4. Estimation of Error in Western Pacific Geoid Heights Derived from Gravity Data Only

    NASA Astrophysics Data System (ADS)

    Peters, M. F.; Brozena, J. M.

    2012-12-01

    The goal of the Western Pacific Geoid estimation project was to generate geoid height models for regions in the Western Pacific Ocean, and formal error estimates for those geoid heights, using all available gravity data and statistical parameters of the quality of the gravity data. Geoid heights were to be determined solely from gravity measurements, as a gravimetric geoid model and error estimates for that model would have applications in oceanography and satellite altimetry. The general method was to remove the gravity field associated with a "lower" order spherical harmonic global gravity model from the regional gravity set; to fit a covariance model to the residual gravity, and then calculate the (residual) geoid heights and error estimates by least-squares collocation fit with residual gravity, available statistical estimates of the gravity and the covariance model. The geoid heights corresponding to the lower order spherical harmonic model can be added back to the heights from the residual gravity to produce a complete geoid height model. As input we requested from NGA all unclassified available gravity data in the western Pacific between 15° to 45° N and 105° to 141°W. The total data set that was used to model and estimate errors in gravimetric geoid comprised an unclassified, open file data set (540,012 stations), a proprietary airborne survey of Taiwan (19,234 stations), and unclassified NAVO SSP survey data (95,111 stations), for official use only. Various programs were adapted to the problem including N.K. Pavlis' HSYNTH program and the covariance fit program GPFIT and least-squares collocation program GPCOL from the GRAVSOFT package (Forsberg and Schering, 2008 version) which were modified to handle larger data sets, but in some regions data were still too numerous. Formulas were derived that could be used to block-mean the data in a statistically optimal sense and still retain the error estimates required for the collocation algorithm. Running the

  5. Demonstration of the frequency offset errors introduced by an incorrect setting of the Zeeman/magnetic field adjustment on the cesium beam frequency standard

    NASA Technical Reports Server (NTRS)

    Kaufmann, D. C.

    1976-01-01

    The fine frequency setting of a cesium beam frequency standard is accomplished by adjusting the C field control with the appropriate Zeeman frequency applied to the harmonic generator. A novice operator in the field, even when using the correct Zeeman frequency input, may mistakenly set the C field to any one of seven major Beam I peaks (fingers) represented by the Ramsey curve. This can result in frequency offset errors of as much as 2.5 parts in 10^10. The effects of maladjustment are demonstrated and suggestions are discussed on how to avoid the subtle traps associated with C field adjustments.

  6. Modeling work zone crash frequency by quantifying measurement errors in work zone length.

    PubMed

    Yang, Hong; Ozbay, Kaan; Ozturk, Ozgur; Yildirimoglu, Mehmet

    2013-06-01

    Work zones are temporary traffic control zones that can potentially cause safety problems. Maintaining safety, while implementing necessary changes on roadways, is an important challenge traffic engineers and researchers have to confront. In this study, the risk factors in work zone safety evaluation were identified through the estimation of a crash frequency (CF) model. Measurement errors in explanatory variables of a CF model can lead to unreliable estimates of certain parameters. Among these, work zone length raises a major concern in this analysis because it may change as the construction schedule progresses generally without being properly documented. This paper proposes an improved modeling and estimation approach that involves the use of a measurement error (ME) model integrated with the traditional negative binomial (NB) model. The proposed approach was compared with the traditional NB approach. Both models were estimated using a large dataset that consists of 60 work zones in New Jersey. Results showed that the proposed improved approach outperformed the traditional approach in terms of goodness-of-fit statistics. Moreover it is shown that the use of the traditional NB approach in this context can lead to the overestimation of the effect of work zone length on the crash occurrence. PMID:23563145
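    The record's measurement-error formulation is not reproduced here; as a point of reference only, the snippet below sketches the baseline negative binomial crash-frequency regression that such a model is compared against, using statsmodels on synthetic work-zone data. The predictor names (length, log AADT, duration), the coefficients, and the fixed dispersion parameter are all illustrative assumptions.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 60                                             # number of work zones (synthetic)
length_km = rng.uniform(0.5, 15.0, n)              # reported work zone length (may be mismeasured)
log_aadt = rng.normal(9.5, 0.5, n)                 # log annual average daily traffic
duration = rng.uniform(30.0, 400.0, n)             # work zone duration, days

# Synthetic crash counts drawn from an assumed log-linear mean
mu = np.exp(-6.0 + 0.8 * np.log(length_km) + 0.6 * log_aadt + 0.002 * duration)
crashes = rng.poisson(mu)

X = sm.add_constant(np.column_stack([np.log(length_km), log_aadt, duration]))
nb = sm.GLM(crashes, X, family=sm.families.NegativeBinomial(alpha=1.0)).fit()
print(nb.params)   # second coefficient ~ elasticity of crash frequency w.r.t. length
```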

  7. Analysis of 454 sequencing error rate, error sources, and artifact recombination for detection of Low-frequency drug resistance mutations in HIV-1 DNA

    PubMed Central

    2013-01-01

    Background 454 sequencing technology is a promising approach for characterizing HIV-1 populations and for identifying low frequency mutations. The utility of 454 technology for determining allele frequencies and linkage associations in HIV infected individuals has not been extensively investigated. We evaluated the performance of 454 sequencing for characterizing HIV populations with defined allele frequencies. Results We constructed two HIV-1 RT clones. Clone A was a wild type sequence. Clone B was identical to clone A except it contained 13 introduced drug resistant mutations. The clones were mixed at ratios ranging from 1% to 50% and were amplified by standard PCR conditions and by PCR conditions aimed at reducing PCR-based recombination. The products were sequenced using 454 pyrosequencing. Sequence analysis from standard PCR amplification revealed that 14% of all sequencing reads from a sample with a 50:50 mixture of wild type and mutant DNA were recombinants. The majority of the recombinants were the result of a single crossover event which can happen during PCR when the DNA polymerase terminates synthesis prematurely. The incompletely extended template then competes for primer sites in subsequent rounds of PCR. Although less often, a spectrum of other distinct crossover patterns was also detected. In addition, we observed point mutation errors ranging from 0.01% to 1.0% per base as well as indel (insertion and deletion) errors ranging from 0.02% to nearly 50%. The point errors (single nucleotide substitution errors) were mainly introduced during PCR while indels were the result of pyrosequencing. We then used new PCR conditions designed to reduce PCR-based recombination. Using these new conditions, the frequency of recombination was reduced 27-fold. The new conditions had no effect on point mutation errors. We found that 454 pyrosequencing was capable of identifying minority HIV-1 mutations at frequencies down to 0.1% at some nucleotide positions. Conclusion

  8. Effect of mid- and high-spatial frequencies on optical performance. [surface error effects on reflecting telescopes

    NASA Technical Reports Server (NTRS)

    Noll, R. J.

    1979-01-01

    In many of today's telescopes the effects of surface errors on image quality and scattered light are very important. The influence of optical fabrication surface errors on the performance of an optical system is discussed. The methods developed by Hopkins (1957) for aberration tolerancing and Barakat (1972) for random wavefront errors are extended to the examination of mid- and high-spatial frequency surface errors. The discussion covers a review of the basic concepts of image quality, an examination of manufacturing errors as a function of image quality performance, a demonstration of mirror scattering effects in relation to surface errors, and some comments on the nature of the correlation functions. Illustrative examples are included.

  9. Lexical Frequency and Third-Graders' Stress Accuracy in Derived English Word Production

    ERIC Educational Resources Information Center

    Jarmulowicz, Linda; Taran, Valentina L.; Hay, Sarah E.

    2008-01-01

    This study examined the effects of lexical frequency on children's production of accurate primary stress in words derived with nonneutral English suffixes. Forty-four third-grade children participated in an elicited derived word task in which they produced high-frequency, low-frequency, and nonsense-derived words with stress-changing suffixes…

  10. Systematic vertical error in UAV-derived topographic models: Origins and solutions

    NASA Astrophysics Data System (ADS)

    James, Mike R.; Robson, Stuart

    2014-05-01

    Unmanned aerial vehicles (UAVs) equipped with consumer cameras are increasingly being used to produce high resolution digital elevation models (DEMs). However, although such DEMs may achieve centimetric detail, they can also display broad-scale systematic deformation (usually a vertical 'doming') that restricts their wider use. This effect can be particularly apparent in DEMs derived by structure-from-motion (SfM) processing, especially when control point data have not been incorporated in the bundle adjustment process. We illustrate that doming error results from a combination of inaccurate description of radial lens distortion and the use of imagery captured in near-parallel viewing directions. With such imagery, enabling camera self-calibration within the processing inherently leads to erroneous radial distortion values and associated DEM error. Using a simulation approach, we illustrate how existing understanding of systematic DEM error in stereo-pairs (from unaccounted radial distortion) up-scales in typical multiple-image blocks of UAV surveys. For image sets with dominantly parallel viewing directions, self-calibrating bundle adjustment (as normally used with images taken using consumer cameras) will not be able to derive radial lens distortion accurately, and will give associated systematic 'doming' DEM deformation. In the presence of image measurement noise (at levels characteristic of SfM software), and in the absence of control measurements, our simulations display domed deformation with amplitude of ~2 m over horizontal distances of ~100 m. We illustrate the sensitivity of this effect to variations in camera angle and flight height. Deformation will be reduced if suitable control points can be included within the bundle adjustment, but residual systematic vertical error may remain, accommodated by the estimated precision of the control measurements. Doming bias can be minimised by the inclusion of inclined images within the image set, for example

  11. On the uncertainty of stream networks derived from elevation data: the error propagation approach

    NASA Astrophysics Data System (ADS)

    Hengl, T.; Heuvelink, G. B. M.; van Loon, E. E.

    2010-07-01

    DEM error propagation methodology is extended to the derivation of vector-based objects (stream networks) using geostatistical simulations. First, point-sampled elevations are used to fit a variogram model. Next, 100 DEM realizations are generated using conditional sequential Gaussian simulation; the stream network map is extracted for each of these realizations, and the collection of stream networks is analyzed to quantify the error propagation. At each grid cell, the probability of the occurrence of a stream and the propagated error are estimated. The method is illustrated using two small data sets: Baranja Hill (30 m grid cell size; 16 512 pixels; 6367 sampled elevations) and Zlatibor (30 m grid cell size; 15 000 pixels; 2051 sampled elevations). All computations are run in the open source software for statistical computing R: package geoR is used to fit the variogram, package gstat is used to run sequential Gaussian simulation, and streams are extracted using the open source GIS SAGA via the RSAGA library. The resulting stream error map (information entropy of a Bernoulli trial) clearly depicts areas where the extracted stream network is least precise - usually areas of low local relief and slightly convex (0-10 difference from the mean value). In both cases, significant parts of the study area (17.3% for Baranja Hill; 6.2% for Zlatibor) show a high error (H>0.5) in locating streams. By correlating the propagated uncertainty of the derived stream network with various land surface parameters, the sampling of height measurements can be optimized so that delineated streams satisfy the required accuracy level. Such an error propagation tool should become a standard functionality in any modern GIS. A remaining issue to be tackled is the computational burden of geostatistical simulations: this framework is at the moment limited to small data sets with several hundreds of points. Scripts and data sets used in this article are available on-line via the
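    The article itself works in R with geoR, gstat and RSAGA; as a compact, language-agnostic illustration of the propagation idea, the numpy-only sketch below perturbs a synthetic reference DEM many times, extracts a "stream" indicator per realization with a deliberately crude proxy rule, and maps the per-cell stream probability and its Bernoulli information entropy. The uncorrelated noise and the local-minimum stream proxy are simplifying assumptions standing in for conditional sequential Gaussian simulation and real flow routing.

```python
import numpy as np

rng = np.random.default_rng(3)
ny, nx, n_sim = 80, 80, 100

# Reference DEM: a tilted plane with a shallow central valley (synthetic)
y, x = np.mgrid[0:ny, 0:nx]
dem = 0.05 * y + 0.002 * (x - nx / 2) ** 2

stream_hits = np.zeros((ny, nx))
for _ in range(n_sim):
    realization = dem + rng.normal(0.0, 0.5, dem.shape)   # crude stand-in for geostatistical simulation
    # Crude stream proxy: cells that are local minima along the x-direction
    left = np.roll(realization, 1, axis=1)
    right = np.roll(realization, -1, axis=1)
    stream_hits += (realization < left) & (realization < right)

p = stream_hits / n_sim                                    # per-cell probability of a stream
with np.errstate(divide="ignore", invalid="ignore"):
    entropy = -(p * np.log2(p) + (1 - p) * np.log2(1 - p)) # Bernoulli information entropy
entropy = np.nan_to_num(entropy)                           # cells with p = 0 or 1 carry no uncertainty
print("fraction of cells with H > 0.5:", np.mean(entropy > 0.5))
```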

  12. Assessment of errors in Precipitable Water data derived from Global Navigation Satellite System observations

    NASA Astrophysics Data System (ADS)

    Hordyniec, Pawel; Bosy, Jaroslaw; Rohm, Witold

    2015-07-01

    Among the new remote sensing techniques, one of the most promising is GNSS meteorology, which provides continuous remote monitoring of tropospheric water vapor in all weather conditions with high temporal and spatial resolution. The Continuously Operating Reference Station (CORS) network together with the available meteorological instrumentation and models (our analysis is based on the ASG-EUPOS network in Poland) was scrutinized as a troposphere water vapor retrieval system. This paper gives a rigorous mathematical derivation of Precipitable Water (PW) errors based on the uncertainty propagation method, using all available data-source quality measures (meteorological sensor and model precisions, ZTD estimation error, interpolation discrepancies, and ZWD-to-PW conversion inaccuracies). We analyze both random and systematic errors introduced by indirect measurements and interpolation procedures, and hence estimate the integrity capabilities of the PW system. The results for PW show that the systematic errors can stay below the half-millimeter level as long as pressure and temperature are measured at the observation site. Otherwise, i.e., with no direct observations, numerical weather model fields (in this study, the Coupled Ocean/Atmosphere Mesoscale Prediction System) serve as the most accurate source of data. The investigated empirical pressure and temperature models, such as GPT2, GPT, UNB3m and Berg, when introduced into the WV retrieval system, produced combined bias and random errors exceeding the PW standard level of accuracy (3 mm according to the E-GVAP report). We also found that the pressure interpolation procedure introduces an over 0.5 hPa bias and a 1 hPa standard deviation into the system (important in the Zenith Total Delay reduction) and hence has a negative impact on the WV estimation quality.
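    As a hedged illustration of the kind of first-order propagation the abstract refers to: writing PW = Π(T_m)·ZWD with ZWD = ZTD − ZHD(p), where Π is the wet-delay conversion factor depending on the weighted mean temperature T_m and ZHD is the hydrostatic delay computed from surface pressure p, uncorrelated input errors propagate roughly as below (the exact partial derivatives depend on the conversion and hydrostatic models actually used):

```latex
% First-order error propagation for PW = \Pi(T_m)\,[\,\mathrm{ZTD} - \mathrm{ZHD}(p)\,]:
\sigma_{\mathrm{PW}}^2 \approx
  \Pi^2 \,\sigma_{\mathrm{ZTD}}^2
+ \Pi^2 \left(\frac{\partial \mathrm{ZHD}}{\partial p}\right)^{\!2}\sigma_{p}^2
+ \mathrm{ZWD}^2 \left(\frac{\mathrm{d}\Pi}{\mathrm{d}T_m}\right)^{\!2}\sigma_{T_m}^2 .
```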

  13. Frequency, Types, and Potential Clinical Significance of Medication-Dispensing Errors

    PubMed Central

    Bohand, Xavier; Simon, Laurent; Perrier, Eric; Mullot, Hélène; Lefeuvre, Leslie; Plotton, Christian

    2009-01-01

    INTRODUCTION AND OBJECTIVES: Many dispensing errors occur in the hospital, and these can endanger patients. The purpose of this study was to assess the rate of dispensing errors by a unit dose drug dispensing system, to categorize the most frequent types of errors, and to evaluate their potential clinical significance. METHODS: A prospective study using a direct observation method to detect medication-dispensing errors was used. From March 2007 to April 2007, “errors detected by pharmacists” and “errors detected by nurses” were recorded under six categories: unauthorized drug, incorrect form of drug, improper dose, omission, incorrect time, and deteriorated drug errors. The potential clinical significance of the “errors detected by nurses” was evaluated. RESULTS: Among the 734 filled medication cassettes, 179 errors were detected corresponding to a total of 7249 correctly fulfilled and omitted unit doses. An overall error rate of 2.5% was found. Errors detected by pharmacists and nurses represented 155 (86.6%) and 24 (13.4%) of the 179 errors, respectively. The most frequent types of errors were improper dose (n = 57, 31.8%) and omission (n = 54, 30.2%). Nearly 45% of the 24 errors detected by nurses had the potential to cause a significant (n = 7, 29.2%) or serious (n = 4, 16.6%) adverse drug event. CONCLUSIONS: Even if none of the errors reached the patients in this study, a 2.5% error rate indicates the need for improving the unit dose drug-dispensing system. Furthermore, it is almost certain that this study failed to detect some medication errors, further arguing for strategies to prevent their recurrence. PMID:19142545

  14. Effects of flight instrumentation errors on the estimation of aircraft stability and control derivatives. [including Monte Carlo analysis

    NASA Technical Reports Server (NTRS)

    Bryant, W. H.; Hodge, W. F.

    1974-01-01

    An error analysis program based on an output error estimation method was used to evaluate the effects of sensor and instrumentation errors on the estimation of aircraft stability and control derivatives. A Monte Carlo analysis was performed using simulated flight data for a high performance military aircraft, a large commercial transport, and a small general aviation aircraft for typical cruise flight conditions. The effects of varying the input sequence and combinations of the sensor and instrumentation errors were investigated. The results indicate that both the parameter accuracy and the corresponding measurement trajectory fit error can be significantly affected. Of the error sources considered, instrumentation lags and control measurement errors were found to be most significant.

  15. Investigating the error budget of tropical rainfall accumulations derived from combined passive microwave and infrared satellite measurements

    NASA Astrophysics Data System (ADS)

    Roca, R.; Chambon, P.; jobard, I.; Viltard, N.

    2012-04-01

    Measuring rainfall requires a high density of observations, which, over the whole tropical belt, can only be provided from space. For several decades, the availability of satellite observations has greatly increased; thanks to newly implemented missions like the Megha-Tropiques mission and the forthcoming GPM constellation, measurements from space are becoming available from a whole set of observing systems. In this work, we focus on rainfall error estimation at the 1°/1-day accumulated scale, a key scale for meteorological and hydrological studies. A novel methodology for quantitative precipitation estimation is introduced; named TAPEER (Tropical Amount of Precipitation with an Estimate of ERrors), it aims to provide 1°/1-day rain accumulations and associated errors over the whole tropical belt. This approach is based on a combination of infrared imagery from a fleet of geostationary satellites and passive microwave derived rain rates from a constellation of low Earth orbiting satellites. A three-stage disaggregation of the error into sampling, algorithmic, and calibration errors is performed; the magnitudes of the three terms are then estimated separately. A dedicated error model is used to evaluate sampling errors and a forward error propagation approach is used for the estimation of algorithmic and calibration errors. One of the main findings of this study is the large contribution of the sampling errors and of the algorithmic errors of BRAIN at medium rain rates (2 mm h-1 to 10 mm h-1) to the total error budget.

  16. Control of mid-spatial frequency errors considering the pad groove feature in smoothing polishing process.

    PubMed

    Nie, Xuqing; Li, Shengyi; Hu, Hao; Li, Qi

    2014-10-01

    Mid-spatial frequency error (MSFR) should be strictly controlled in modern optical systems. As an effective approach to suppress MSFR, the smoothing polishing (SP) process is not easy to handle because it can be affected by many factors. This paper mainly focuses on the influence of the pad groove, which has not been researched yet. The SP process is introduced, and the important role of the pad groove is explained in detail. The relationship between the contact pressure distribution and the groove feature including groove section type, groove width, and groove depth is established, and the optimized result is achieved with the finite element method. The different kinds of groove patterns are compared utilizing the numerical superposition method established scrupulously. The optimal groove is applied in the verification experiment conducted on a self-developed SP machine. The root mean square value of the MSFR after the SP process is diminished from 2.38 to 0.68 nm, which reveals that the selected pad can smooth out the MSFR to a great extent with proper SP parameters, while the newly generated MSFR due to the groove can be suppressed to a very low magnitude. PMID:25322215

  17. Error analysis for intrinsic quality factor measurement in superconducting radio frequency resonators

    NASA Astrophysics Data System (ADS)

    Melnychuk, O.; Grassellino, A.; Romanenko, A.

    2014-12-01

    In this paper, we discuss error analysis for intrinsic quality factor (Q0) and accelerating gradient (Eacc) measurements in superconducting radio frequency (SRF) resonators. The analysis is applicable for cavity performance tests that are routinely performed at SRF facilities worldwide. We review the sources of uncertainties along with the assumptions on their correlations and present uncertainty calculations with a more complete procedure for treatment of correlations than in previous publications [T. Powers, in Proceedings of the 12th Workshop on RF Superconductivity, SuP02 (Elsevier, 2005), pp. 24-27]. Applying this approach to cavity data collected at Vertical Test Stand facility at Fermilab, we estimated total uncertainty for both Q0 and Eacc to be at the level of approximately 4% for input coupler coupling parameter β1 in the [0.5, 2.5] range. Above 2.5 (below 0.5) Q0 uncertainty increases (decreases) with β1 whereas Eacc uncertainty, in contrast with results in Powers [in Proceedings of the 12th Workshop on RF Superconductivity, SuP02 (Elsevier, 2005), pp. 24-27], is independent of β1. Overall, our estimated Q0 uncertainty is approximately half as large as that in Powers [in Proceedings of the 12th Workshop on RF Superconductivity, SuP02 (Elsevier, 2005), pp. 24-27].

  18. Derivative-based scale invariant image feature detector with error resilience.

    PubMed

    Mainali, Pradip; Lafruit, Gauthier; Tack, Klaas; Van Gool, Luc; Lauwereins, Rudy

    2014-05-01

    We present a novel scale-invariant image feature detection algorithm (D-SIFER) using a newly proposed scale-space optimal 10th-order Gaussian derivative (GDO-10) filter, which reaches the jointly optimal Heisenberg's uncertainty of its impulse response in scale and space simultaneously (i.e., we minimize the maximum of the two moments). The D-SIFER algorithm using this filter leads to an outstanding quality of image feature detection, with a factor of three quality improvement over state-of-the-art scale-invariant feature transform (SIFT) and speeded up robust features (SURF) methods that use the second-order Gaussian derivative filters. To reach low computational complexity, we also present a technique approximating the GDO-10 filters with a fixed-length implementation, which is independent of the scale. The final approximation error remains far below the noise margin, providing constant time, low cost, but nevertheless high-quality feature detection and registration capabilities. D-SIFER is validated on a real-life hyperspectral image registration application, precisely aligning up to hundreds of successive narrowband color images, despite their strong artifacts (blurring, low-light noise) typically occurring in such delicate optical system setups. PMID:24723627
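    The exact GDO-10 construction and its fixed-length approximation are specific to the paper and are not reproduced here; as an orientation-only sketch, a 10th-order Gaussian derivative response on a 1-D signal can be obtained with scipy's gaussian_filter1d, whose order argument convolves with the corresponding derivative of a Gaussian. The synthetic signal, sigma, and the crude extremum-based "feature" rule are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(0)
# Synthetic 1-D "image" row: a few step edges plus noise
signal = np.repeat([0.0, 1.0, 0.2, 0.9], 256) + 0.02 * rng.standard_normal(1024)

sigma = 8.0
second_order = gaussian_filter1d(signal, sigma, order=2)    # SIFT/SURF-style second-derivative response
tenth_order = gaussian_filter1d(signal, sigma, order=10)    # higher-order Gaussian derivative response

def local_extrema(r):
    """Indices where the absolute response exceeds both neighbors (crude feature rule)."""
    return np.flatnonzero((np.abs(r[1:-1]) > np.abs(r[:-2])) &
                          (np.abs(r[1:-1]) > np.abs(r[2:]))) + 1

print(len(local_extrema(second_order)), len(local_extrema(tenth_order)))
```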

  19. Deriving Animal Behaviour from High-Frequency GPS: Tracking Cows in Open and Forested Habitat.

    PubMed

    de Weerd, Nelleke; van Langevelde, Frank; van Oeveren, Herman; Nolet, Bart A; Kölzsch, Andrea; Prins, Herbert H T; de Boer, W Fred

    2015-01-01

    The increasing spatiotemporal accuracy of Global Navigation Satellite Systems (GNSS) tracking systems opens the possibility to infer animal behaviour from tracking data. We studied the relationship between high-frequency GNSS data and behaviour, aimed at developing an easily interpretable classification method to infer behaviour from location data. Behavioural observations were carried out during tracking of cows (Bos Taurus) fitted with high-frequency GPS (Global Positioning System) receivers. Data were obtained in an open field and forested area, and movement metrics were calculated for 1 min, 12 s and 2 s intervals. We observed four behaviour types (Foraging, Lying, Standing and Walking). We subsequently used Classification and Regression Trees to classify the simultaneously obtained GPS data as these behaviour types, based on distances and turning angles between fixes. GPS data with a 1 min interval from the open field was classified correctly for more than 70% of the samples. Data from the 12 s and 2 s interval could not be classified successfully, emphasizing that the interval should be long enough for the behaviour to be defined by its characteristic movement metrics. Data obtained in the forested area were classified with a lower accuracy (57%) than the data from the open field, due to a larger positional error of GPS locations and differences in behavioural performance influenced by the habitat type. This demonstrates the importance of understanding the relationship between behaviour and movement metrics, derived from GNSS fixes at different frequencies and in different habitats, in order to successfully infer behaviour. When spatially accurate location data can be obtained, behaviour can be inferred from high-frequency GNSS fixes by calculating simple movement metrics and using easily interpretable decision trees. This allows for the combined study of animal behaviour and habitat use based on location data, and might make it possible to detect deviations

  20. Deriving Animal Behaviour from High-Frequency GPS: Tracking Cows in Open and Forested Habitat

    PubMed Central

    de Weerd, Nelleke; van Langevelde, Frank; van Oeveren, Herman; Nolet, Bart A.; Kölzsch, Andrea; Prins, Herbert H. T.; de Boer, W. Fred

    2015-01-01

    The increasing spatiotemporal accuracy of Global Navigation Satellite Systems (GNSS) tracking systems opens the possibility to infer animal behaviour from tracking data. We studied the relationship between high-frequency GNSS data and behaviour, aimed at developing an easily interpretable classification method to infer behaviour from location data. Behavioural observations were carried out during tracking of cows (Bos Taurus) fitted with high-frequency GPS (Global Positioning System) receivers. Data were obtained in an open field and forested area, and movement metrics were calculated for 1 min, 12 s and 2 s intervals. We observed four behaviour types (Foraging, Lying, Standing and Walking). We subsequently used Classification and Regression Trees to classify the simultaneously obtained GPS data as these behaviour types, based on distances and turning angles between fixes. GPS data with a 1 min interval from the open field was classified correctly for more than 70% of the samples. Data from the 12 s and 2 s interval could not be classified successfully, emphasizing that the interval should be long enough for the behaviour to be defined by its characteristic movement metrics. Data obtained in the forested area were classified with a lower accuracy (57%) than the data from the open field, due to a larger positional error of GPS locations and differences in behavioural performance influenced by the habitat type. This demonstrates the importance of understanding the relationship between behaviour and movement metrics, derived from GNSS fixes at different frequencies and in different habitats, in order to successfully infer behaviour. When spatially accurate location data can be obtained, behaviour can be inferred from high-frequency GNSS fixes by calculating simple movement metrics and using easily interpretable decision trees. This allows for the combined study of animal behaviour and habitat use based on location data, and might make it possible to detect deviations
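    A minimal scikit-learn sketch of the approach described in the two records above (classifying behaviour from per-interval movement metrics with a CART-style decision tree) follows; the synthetic step-distance and turning-angle distributions for each behaviour class are assumptions for illustration, whereas the study derives these metrics from real GPS fixes at 1-min, 12-s and 2-s intervals.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)

def synth(behaviour, n, dist_mean, dist_sd, angle_kappa):
    """Synthetic per-interval movement metrics for one behaviour class."""
    dist = np.abs(rng.normal(dist_mean, dist_sd, n))     # step distance (m per interval)
    angle = np.abs(rng.vonmises(0.0, angle_kappa, n))    # |turning angle| (rad)
    return np.column_stack([dist, angle]), np.full(n, behaviour)

parts = [
    synth("Lying",    500, 0.3, 0.2, 0.1),   # tiny steps, random "turns" from GPS jitter
    synth("Standing", 500, 0.8, 0.4, 0.1),
    synth("Foraging", 500, 8.0, 3.0, 0.5),   # short steps, frequent turns
    synth("Walking",  500, 35.0, 8.0, 4.0),  # long steps, straight paths
]
X = np.vstack([p[0] for p in parts])
y = np.concatenate([p[1] for p in parts])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
print("accuracy:", tree.score(X_te, y_te))
print(export_text(tree, feature_names=["distance", "turning_angle"]))
```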

  1. Spectral approximation methods and error estimates for Caputo fractional derivative with applications to initial-value problems

    NASA Astrophysics Data System (ADS)

    Duan, Beiping; Zheng, Zhoushun; Cao, Wen

    2016-08-01

    In this paper, we revisit two spectral approximations, including truncated approximation and interpolation for Caputo fractional derivative. The two approaches have been studied to approximate Riemann-Liouville (R-L) fractional derivative by Chen et al. and Zayernouri et al. respectively in their most recent work. For truncated approximation the reconsideration partly arises from the difference between fractional derivative in R-L sense and Caputo sense: Caputo fractional derivative requires higher regularity of the unknown than R-L version. Another reason for the reconsideration is that we distinguish the differential order of the unknown with the index of Jacobi polynomials, which is not presented in the previous work. Also we provide a way to choose the index when facing multi-order problems. By using generalized Hardy's inequality, the gap between the weighted Sobolev space involving Caputo fractional derivative and the classical weighted space is bridged, then the optimal projection error is derived in the non-uniformly Jacobi-weighted Sobolev space and the maximum absolute error is presented as well. For the interpolation, analysis of interpolation error was not given in their work. In this paper we build the interpolation error in non-uniformly Jacobi-weighted Sobolev space by constructing fractional inverse inequality. With combining collocation method, the approximation technique is applied to solve fractional initial-value problems (FIVPs). Numerical examples are also provided to illustrate the effectiveness of this algorithm.
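    For reference, the Caputo fractional derivative of order α (with n−1 < α < n) that the paper approximates is stated below alongside the Riemann-Liouville form; the contrast makes explicit why the Caputo version demands more regularity of the unknown, since differentiation is applied to u before the singular integral:

```latex
% Caputo fractional derivative of order \alpha, with n-1 < \alpha < n:
{}^{C}D_{0,t}^{\alpha} u(t)
  = \frac{1}{\Gamma(n-\alpha)} \int_{0}^{t} \frac{u^{(n)}(s)}{(t-s)^{\alpha-n+1}} \, \mathrm{d}s ,
% whereas the Riemann-Liouville form differentiates the integral instead:
{}^{RL}D_{0,t}^{\alpha} u(t)
  = \frac{1}{\Gamma(n-\alpha)} \frac{\mathrm{d}^{n}}{\mathrm{d}t^{n}}
    \int_{0}^{t} \frac{u(s)}{(t-s)^{\alpha-n+1}} \, \mathrm{d}s .
```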

  2. Radio Frequency Identification (RFID) in medical environment: Gaussian Derivative Frequency Modulation (GDFM) as a novel modulation technique with minimal interference properties.

    PubMed

    Rieche, Marie; Komenský, Tomás; Husar, Peter

    2011-01-01

    Radio Frequency Identification (RFID) systems in healthcare facilitate contact-free identification and tracking of patients, medical equipment, and medication. Thereby, patient safety will be improved and costs as well as medication errors will be reduced considerably. However, the application of RFID and other wireless communication systems has the potential to cause harmful electromagnetic disturbances in sensitive medical devices. This risk mainly depends on the transmission power and the method of data communication. In this contribution we point out the reasons for such incidents and give proposals to overcome these problems. To this end, a novel modulation and transmission technique called Gaussian Derivative Frequency Modulation (GDFM) is developed. Moreover, we carry out measurements to show the interference properties of different modulation schemes in comparison to our GDFM. PMID:22254771

  3. The Vertical Error Characteristics of GOES-derived Winds: Description and Impact on Numerical Weather Prediction

    NASA Technical Reports Server (NTRS)

    Rao, P. Anil; Velden, Christopher S.; Braun, Scott A.; Einaudi, Franco (Technical Monitor)

    2001-01-01

    Errors in the height assignment of some satellite-derived winds exist because the satellites sense radiation emitted from a finite layer of the atmosphere rather than a specific level. Potential problems in data assimilation may arise because the motion of a measured layer is often represented by a single-level value. In this research, cloud and water vapor motion winds that are derived from the Geostationary Operational Environmental Satellites (GOES winds) are compared to collocated rawinsonde observations (RAOBs). An important aspect of this work is that in addition to comparisons at each assigned height, the GOES winds are compared to the entire profile of the collocated RAOB data to determine the vertical error characteristics of the GOES winds. The impact of these results on numerical weather prediction is then investigated. The comparisons at individual vector height assignments indicate that the errors of the GOES winds range from approximately 3 to 10 m/s and generally increase with height. However, if taken as a percentage of the total wind speed, accuracy is better at upper levels. As expected, comparisons with the entire profile of the collocated RAOBs indicate that clear-air water vapor winds represent deeper layers than do either infrared or water vapor cloud-tracked winds. This is because in cloud-free regions the signal from water vapor features may result from emittance over a thicker layer. To further investigate characteristics of the clear-air water vapor winds, they are stratified into two categories that are dependent on the depth of the layer represented by the vector. It is found that if the vertical gradient of moisture is smooth and uniform from near the height assignment upwards, the clear-air water vapor wind tends to represent a relatively deep layer. The information from the comparisons is then used in numerical model simulations of two separate events to determine the forecast impacts. Four simulations are performed for each case: 1) A

  4. STATISTICAL DISTRIBUTIONS OF PARTICULATE MATTER AND THE ERROR ASSOCIATED WITH SAMPLING FREQUENCY. (R828678C010)

    EPA Science Inventory

    The distribution of particulate matter (PM) concentrations has an impact on human health effects and the setting of PM regulations. Since PM is commonly sampled on less than daily schedules, the magnitude of sampling errors needs to be determined. Daily PM data from Spokane, W...

  5. MODIS Cloud Optical Property Retrieval Uncertainties Derived from Pixel-Level Radiometric Error Estimates

    NASA Technical Reports Server (NTRS)

    Platnick, Steven; Wind, Galina; Xiong, Xiaoxiong

    2011-01-01

    MODIS retrievals of cloud optical thickness and effective particle radius employ a well-known VNIR/SWIR solar reflectance technique. For this type of algorithm, we evaluate the uncertainty in simultaneous retrievals of these two parameters arising from pixel-level (scene-dependent) radiometric error estimates as well as other tractable error sources.

  6. Statistical Analysis of Instantaneous Frequency Scaling Factor as Derived From Optical Disdrometer Measurements At KQ Bands

    NASA Technical Reports Server (NTRS)

    Zemba, Michael; Nessel, James; Houts, Jacquelynne; Luini, Lorenzo; Riva, Carlo

    2016-01-01

    The rain rate data and statistics of a location are often used in conjunction with models to predict rain attenuation. However, the true attenuation is a function not only of rain rate, but also of the drop size distribution (DSD). Generally, models utilize an average drop size distribution (Laws and Parsons or Marshall and Palmer [1]). However, individual rain events may deviate from these models significantly if their DSD is not well approximated by the average. Therefore, characterizing the relationship between the DSD and attenuation is valuable in improving modeled predictions of rain attenuation statistics. The DSD may also be used to derive the instantaneous frequency scaling factor and thus validate frequency scaling models. Since June of 2014, NASA Glenn Research Center (GRC) and the Politecnico di Milano (POLIMI) have jointly conducted a propagation study in Milan, Italy utilizing the 20 and 40 GHz beacon signals of the Alphasat TDP#5 Aldo Paraboni payload. The Ka- and Q-band beacon receivers provide a direct measurement of the signal attenuation while concurrent weather instrumentation provides measurements of the atmospheric conditions at the receiver. Among these instruments is a Thies Clima Laser Precipitation Monitor (optical disdrometer) which yields droplet size distributions (DSD); this DSD information can be used to derive a scaling factor that scales the measured 20 GHz data to expected 40 GHz attenuation. Given the capability to both predict and directly observe 40 GHz attenuation, this site is uniquely situated to assess and characterize such predictions. Previous work using this data has examined the relationship between the measured drop-size distribution and the measured attenuation of the link [2]. The focus of this paper now turns to a deeper analysis of the scaling factor, including the prediction error as a function of attenuation level, correlation between the scaling factor and the rain rate, and the temporal variability of the drop
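
Schematically, the derivation described above rests on the usual relations between the drop size distribution, specific attenuation and the frequency scaling factor; the notation below is illustrative and not quoted from the paper.

```latex
% Specific attenuation (dB/km) at frequency f from the measured DSD N(D)
% (sigma_ext in m^2, N(D) in m^-3 mm^-1, D in mm):
\gamma(f) = 4.343 \times 10^{3} \int_{0}^{\infty} \sigma_{\mathrm{ext}}(D,f)\, N(D)\, \mathrm{d}D ,
% Instantaneous frequency scaling factor from the 20 GHz to the 40 GHz beacon:
r = \frac{A_{40}}{A_{20}} \approx \frac{\gamma(40\,\mathrm{GHz})}{\gamma(20\,\mathrm{GHz})} .
```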

  7. Spatial accounting for errors in LiDAR-derived products: Snow volume and snow water equivalent estimation

    NASA Astrophysics Data System (ADS)

    Tinkham, W. T.; Hoffman, C. M.; Falkowski, M. J.; Smith, A. M.; Link, T. E.; Marshall, H.

    2011-12-01

    Light Detection and Ranging (LiDAR) has become one of the most effective and reliable means of characterizing surface topography and vegetation structure. Most LiDAR-derived estimates such as vegetation height, snow depth, and floodplain boundaries rely on the accurate creation of digital terrain models (DTM). As a result of the importance of an accurate DTM in using LiDAR data to estimate snow depth, it is necessary to understand the variables that influence the DTM accuracy in order to assess snow depth error. A series of 4 x 4 m plots that were surveyed at 0.5 m spacing in a semi-arid catchment were used for training the Random Forests algorithm along with a series of 35 variables in order to spatially predict vertical error within a LiDAR derived DTM. The final model was utilized to predict the combined error resulting from snow volume and snow water equivalent estimates derived from a snow-free LiDAR DTM and a snow-on LiDAR acquisition of the same site. The methodology allows for a statistical quantification of the spatially-distributed error patterns that are incorporated into the estimation of snow volume and snow water equivalents from LiDAR.
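
A minimal sketch of the error-modelling idea described above: a Random Forest regressor is trained on hypothetical terrain and vegetation predictors against surveyed vertical errors, then used to adjust a LiDAR-differenced snow depth. The predictor set, the sign convention of the correction and the scikit-learn implementation are assumptions for illustration, not the authors' 35-variable model.

```python
# Sketch: predict spatially distributed DTM vertical error with Random Forests,
# then propagate it into a snow-depth (snow-on minus snow-free DTM) estimate.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical predictors per cell: slope (deg), canopy cover (%), return density (pts/m^2)
X_train = rng.uniform([0, 0, 0.5], [40, 100, 12], size=(200, 3))
# Hypothetical surveyed vertical DTM error (m) at the training plots
y_train = 0.02 * X_train[:, 0] / 40 + 0.1 * X_train[:, 1] / 100 + rng.normal(0, 0.03, 200)

rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_train, y_train)

# Apply to the full grid; the subtraction below is an illustrative sign convention.
X_grid = rng.uniform([0, 0, 0.5], [40, 100, 12], size=(1000, 3))
predicted_dtm_error = rf.predict(X_grid)          # metres
raw_snow_depth = rng.uniform(0.2, 2.0, 1000)      # metres, hypothetical LiDAR difference
corrected_depth = raw_snow_depth - predicted_dtm_error
print(f"mean DTM-error correction: {predicted_dtm_error.mean():.3f} m")
```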

  8. Frequency and Distribution of Refractive Error in Adult Life: Methodology and Findings of the UK Biobank Study

    PubMed Central

    Cumberland, Phillippa M.; Bao, Yanchun; Hysi, Pirro G.; Foster, Paul J.; Hammond, Christopher J.; Rahi, Jugnoo S.

    2015-01-01

    Purpose To report the methodology and findings of a large-scale investigation of burden and distribution of refractive error, from a contemporary and ethnically diverse study of health and disease in adults, in the UK. Methods UK Biobank, a unique contemporary resource for the study of health and disease, recruited more than half a million people aged 40–69 years. A subsample of 107,452 subjects undertook an enhanced ophthalmic examination which provided autorefraction data (a measure of refractive error). Refractive error status was categorised using the mean spherical equivalent refraction measure. Information on socio-demographic factors (age, gender, ethnicity, educational qualifications and accommodation tenure) was reported at the time of recruitment by questionnaire and face-to-face interview. Results Fifty-four percent of participants aged 40–69 years had refractive error. Specifically, 27% had myopia (4% high myopia), which was more common amongst younger people, those of higher socio-economic status, higher educational attainment, or of White or Chinese ethnicity. The frequency of hypermetropia increased with age (7% at 40–44 years increasing to 46% at 65–69 years), was higher in women and its severity was associated with ethnicity (moderate or high hypermetropia at least 30% less likely in non-White ethnic groups compared to White). Conclusions Refractive error is a significant public health issue for the UK and this study provides contemporary data on adults for planning services, health economic modelling and monitoring of secular trends. Further investigation of risk factors is necessary to inform strategies for prevention. There is scope to do this through the planned longitudinal extension of the UK Biobank study. PMID:26430771
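
For context, autorefraction readings are commonly reduced to a mean spherical equivalent (sphere plus half the cylinder) before categorisation; the sketch below uses illustrative cut-offs that may differ from the definitions applied in the UK Biobank analysis.

```python
# Sketch: spherical equivalent refraction (SER) and a simple categorisation.
def spherical_equivalent(sphere_d, cylinder_d):
    """SER (dioptres) = sphere + cylinder / 2."""
    return sphere_d + cylinder_d / 2.0

def categorise(ser):
    # Illustrative cut-offs; the study's own definitions may differ.
    if ser <= -6.0:
        return "high myopia"
    if ser <= -0.5:
        return "myopia"
    if ser >= 2.0:
        return "moderate/high hypermetropia"
    if ser >= 0.5:
        return "hypermetropia"
    return "emmetropia"

for sph, cyl in [(-7.25, -0.50), (-1.00, -0.25), (0.25, 0.00), (3.50, -1.00)]:
    ser = spherical_equivalent(sph, cyl)
    print(f"sphere {sph:+.2f} D, cyl {cyl:+.2f} D -> SER {ser:+.2f} D: {categorise(ser)}")
```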

  9. Error correction coding for frequency-hopping multiple-access spread spectrum communication systems

    NASA Technical Reports Server (NTRS)

    Healy, T. J.

    1982-01-01

    A communication system which would effect channel coding for frequency-hopped multiple-access is described. It is shown that in theory coding can increase the spectrum utilization efficiency of a system with mutual interference to 100 percent. Various coding strategies are discussed and some initial comparisons are given. Some of the problems associated with implementing the type of system described here are discussed.

  10. An analysis of perceptual errors in reading mammograms using quasi-local spatial frequency spectra.

    PubMed

    Mello-Thoms, C; Dunn, S M; Nodine, C F; Kundel, H L

    2001-09-01

    In this pilot study the authors examined areas on a mammogram that attracted the visual attention of experienced mammographers and mammography fellows, as well as areas that were reported to contain a malignant lesion, and, based on their spatial frequency spectrum, they characterized these areas by the type of decision outcome that they yielded: true-positives (TP), false-positives (FP), true-negatives (TN), and false-negatives (FN). Five 2-view (craniocaudal and medial-lateral oblique) mammogram cases were examined by 8 experienced observers, and the eye position of the observers was tracked. The observers were asked to report the location and nature of any malignant lesions present in the case. The authors analyzed each area in which either the observer made a decision or in which the observer had prolonged (>1,000 ms) visual dwell using wavelet packets, and characterized these areas in terms of the energy contents of each spatial frequency band. It was shown that each decision outcome is characterized by a specific profile in the spatial frequency domain, and that these profiles are significantly different from one another. As a consequence of these differences, the profiles can be used to determine which type of decision a given observer will make when examining the area. Computer-assisted perception correctly predicted up to 64% of the TPs made by the observers, 77% of the FPs, and 70% of the TNs. PMID:11720333

  11. Ocean data assimilation with background error covariance derived from OGCM outputs

    NASA Astrophysics Data System (ADS)

    Fu, Weiwei; Zhou, Guangqing; Wang, Huijun

    2004-04-01

    The background error covariance plays an important role in modern data assimilation and analysis systems by determining the spatial spreading of information in the data. A novel method based on model output is proposed to estimate background error covariance for use in Optimum Interpolation. At every model level, anisotropic correlation scales are obtained that give a more detailed description of the spatial correlation structure. Furthermore, the impact of the background field itself is included in the background error covariance. The methodology of the estimation is presented and the structure of the covariance is examined. The results of 20-year assimilation experiments are compared with observations from TOGA-TAO (The Tropical Ocean-Global Atmosphere-Tropical Atmosphere Ocean) array and other analysis data.
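
As background, the role of the background error covariance B in Optimum Interpolation can be summarised by the standard analysis update (a textbook form, not an equation from the paper):

```latex
% Optimum Interpolation (BLUE) analysis update:
\mathbf{x}_{a} = \mathbf{x}_{b} + \mathbf{B}\mathbf{H}^{\mathsf{T}}
    \left(\mathbf{H}\mathbf{B}\mathbf{H}^{\mathsf{T}} + \mathbf{R}\right)^{-1}
    \left(\mathbf{y} - \mathbf{H}\mathbf{x}_{b}\right) ,
% B: background error covariance (here estimated from OGCM output),
% R: observation error covariance, H: observation operator.
% B controls how far, and in which directions, each innovation spreads.
```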

  12. Analysis of frequency-hopped packet radio networks with random signal levels. Part 1: Error-only decoding

    NASA Astrophysics Data System (ADS)

    Mohamed, Khairi Ashour; Pap, Laszlo

    1994-05-01

    This paper is concerned with the performance analysis of frequency-hopped packet radio networks with random signal levels. We assume that a hit from an interfering packet causes a symbol error if and only if it carries enough energy to exceed the energy received from the wanted signal. The interdependence between symbol errors of an arbitrary packet is taken into consideration through the joint probability generating function of the so-called effective multiple access interference. Slotted networks, with both random and deterministic hopping patterns, are considered in the case of both synchronous and asynchronous hopping. A general closed-form expression is given for the packet capture probability in the case of Reed-Solomon error-only decoding. After introducing a general description method, the following examples are worked out in detail: (1) networks with random spatial distribution of stations (a model for mobile packet radio networks); (2) networks operating in slow fading channels; (3) networks with different power levels which are chosen randomly according to either a discrete or a continuous probability distribution (created captures).

  13. Stable radio frequency phase delivery by rapid and endless post error cancellation.

    PubMed

    Wu, Zhongle; Dai, Yitang; Yin, Feifei; Xu, Kun; Li, Jianqiang; Lin, Jintong

    2013-04-01

    We propose and demonstrate a phase stabilization method for transferring and downconverting a radio frequency (RF) signal from a remote antenna to the center station via a radio-over-fiber (ROF) link. Different from previous phase-locking-loop-based schemes, we post-correct any phase fluctuation by mixing during the downconversion process at the center station. A rapid and endless operation is predicted. The ROF technique transfers the received RF signal directly, which reduces the electronic complexity at the antenna end. The proposed scheme is experimentally demonstrated, with a phase fluctuation compression factor of about 200. The theory and performance are also discussed. PMID:23546256

  14. Systematic Error in UAV-derived Topographic Models: The Importance of Control

    NASA Astrophysics Data System (ADS)

    James, M. R.; Robson, S.; d'Oleire-Oltmanns, S.

    2014-12-01

    UAVs equipped with consumer cameras are increasingly being used to produce high resolution digital elevation models (DEMs) for a wide variety of geoscience applications. Image processing and DEM-generation is being facilitated by parallel increases in the use of software based on 'structure from motion' algorithms. However, recent work [1] has demonstrated that image networks from UAVs, for which camera pointing directions are generally near-parallel, are susceptible to producing systematic error in the resulting topographic surfaces (a vertical 'doming'). This issue primarily reflects error in the camera lens distortion model, which is dominated by the radial K1 term. Common data processing scenarios, in which self-calibration is used to refine the camera model within the bundle adjustment, can inherently result in such systematic error via poor K1 estimates. Incorporating oblique imagery into such data sets can mitigate error by enabling more accurate calculation of camera parameters [1]. Here, using a combination of simulated image networks and real imagery collected from a fixed wing UAV, we explore the additional roles of external ground control and the precision of image measurements. We illustrate similarities and differences between a variety of structure from motion software, and underscore the importance of well distributed and suitably accurate control for projects where a demonstrated high accuracy is required. [1] James & Robson (2014) Earth Surf. Proc. Landforms, 39, 1413-1420, doi: 10.1002/esp.3609
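
The radial distortion referred to above is usually parameterised with the Brown model, in which the K1 term dominates the doming behaviour; the standard form (not quoted from the paper) is:

```latex
% Brown radial distortion model, with r^2 = x^2 + y^2 measured from the principal point:
x_{d} = x\left(1 + K_{1}r^{2} + K_{2}r^{4} + K_{3}r^{6}\right), \qquad
y_{d} = y\left(1 + K_{1}r^{2} + K_{2}r^{4} + K_{3}r^{6}\right).
% A biased K_1 estimate from near-parallel image networks produces the broad
% vertical 'doming' of the reconstructed surface discussed in the abstract.
```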

  15. Magnitude error bounds for sampled-data frequency response obtained from the truncation of an infinite series, and compensator improvement program

    NASA Technical Reports Server (NTRS)

    Mitchell, J. R.

    1972-01-01

    The frequency response method of analyzing control system performance is discussed, and the difficulty of obtaining the sampled frequency response of the continuous system is considered. An upper bound magnitude error equation is obtained which yields reasonable estimates of the actual error. Finalization of the compensator improvement program is also reported, and the program was used to design compensators for Saturn 5/S1-C dry workshop and Saturn 5/S1-C Skylab.

  16. Error analysis in the digital elevation model of Kuwait desert derived from repeat pass synthetic aperture radar interferometry

    NASA Astrophysics Data System (ADS)

    Rao, Kota S.; Al Jassar, Hala K.

    2010-09-01

    The aim of this paper is to analyze the errors in Digital Elevation Models (DEMs) derived through repeat-pass SAR interferometry (InSAR). Out of the 29 ASAR images available to us, 8 are selected for this study; they form a unique data set of 7 InSAR pairs with a single master image. The perpendicular component of the baseline (B⊥) varies between 200 and 400 m to generate good-quality DEMs. The temporal baseline (T) varies from 35 days to 525 days to examine the effect of temporal decorrelation. It is expected that all the DEMs should be spatially similar to each other within the noise limits. However, they differ considerably from one another. The 7 DEMs are compared with the SRTM DEM for the estimation of errors. The spatial and temporal distribution of errors in the DEMs is analyzed by considering several case studies. Spatial and temporal variability of precipitable water vapour is analysed. Precipitable water vapour (PWV) corrections to the DEMs are implemented and found to have no significant effect; the reasons are explained. Temporal decorrelation of phases and soil moisture variations seem to influence the accuracy of the derived DEMs. It is suggested that installing a number of corner reflectors (CRs) and using the Permanent Scatterer approach may improve the accuracy of the results in desert test sites.

  17. Improvement of Bit Error Rate in Holographic Data Storage Using the Extended High-Frequency Enhancement Filter

    NASA Astrophysics Data System (ADS)

    Kim, Do-Hyung; Cho, Janghyun; Moon, Hyungbae; Jeon, Sungbin; Park, No-Cheol; Yang, Hyunseok; Park, Kyoung-Su; Park, Young-Pil

    2013-09-01

    Optimized image restoration is suggested in angular-multiplexing-page-based holographic data storage. To improve the bit error rate (BER), an extended high-frequency enhancement filter is recalculated from the point spread function (PSF) and a Gaussian mask as the image restoration filter. Using the extended image restoration filter, the proposed system reduces the number of processing steps compared with the image upscaling method and provides better performance in BER and SNR. Numerical simulations and experiments were performed to verify the proposed method. The proposed system exhibited a marked improvement in BER from 0.02 to 0.002 for a Nyquist factor of 1.1, and from 0.006 to 0 for a Nyquist factor of 1.2. Moreover, more than 3 times faster performance in calculation time was achieved compared with image restoration with PSF upscaling, owing to the reductions in the number of system processes and in the calculation load.

  18. Error assessment of satellite-derived lead fraction in the Arctic

    NASA Astrophysics Data System (ADS)

    Ivanova, Natalia; Rampal, Pierre; Bouillon, Sylvain

    2016-03-01

    Leads within consolidated sea ice control heat exchange between the ocean and the atmosphere during winter, thus constituting an important climate parameter. These narrow elongated features occur when sea ice is fracturing under the action of wind and currents, reducing the local mechanical strength of the ice cover, which in turn impacts the sea ice drift pattern. This creates a high demand for a high-quality lead fraction (LF) data set for sea ice model evaluation and initialization, and for the assimilation of such data in regional models. In this context, an available LF data set retrieved from satellite passive microwave observations (Advanced Microwave Scanning Radiometer - Earth Observing System, AMSR-E), which has provided pan-Arctic light- and cloud-independent daily coverage since 2002, is of great value. In this study errors in this data set are quantified using accurate LF estimates retrieved from Synthetic Aperture Radar (SAR) images employing a threshold technique. A consistent overestimation of LF by a factor of 2-4 is found in the AMSR-E LF product. It is shown that a simple adjustment of the upper tie point used in the method to estimate the LF can reduce the pixel-wise error by a factor of 2 on average. Applying such an adjustment to the full data set may thus significantly increase the quality and value of the original data set.

  19. The use of ionospheric tomography and elevation masks to reduce the overall error in single-frequency GPS timing applications

    NASA Astrophysics Data System (ADS)

    Rose, Julian A. R.; Tong, Jenna R.; Allain, Damien J.; Mitchell, Cathryn N.

    2011-01-01

    Signals from Global Positioning System (GPS) satellites at the horizon or at low elevations are often excluded from a GPS solution because they experience considerable ionospheric delays and multipath effects. Their exclusion can degrade the overall satellite geometry for the calculations, resulting in greater errors; an effect known as the Dilution of Precision (DOP). In contrast, signals from high elevation satellites experience less ionospheric delays and multipath effects. The aim is to find a balance in the choice of elevation mask, to reduce the propagation delays and multipath whilst maintaining good satellite geometry, and to use tomography to correct for the ionosphere and thus improve single-frequency GPS timing accuracy. GPS data, collected from a global network of dual-frequency GPS receivers, have been used to produce four GPS timing solutions, each with a different ionospheric compensation technique. One solution uses a 4D tomographic algorithm, Multi-Instrument Data Analysis System (MIDAS), to compensate for the ionospheric delay. Maps of ionospheric electron density are produced and used to correct the single-frequency pseudorange observations. This method is compared to a dual-frequency solution and two other single-frequency solutions: one does not include any ionospheric compensation and the other uses the broadcast Klobuchar model. Data from the solar maximum year 2002 and October 2003 have been investigated to display results when the ionospheric delays are large and variable. The study focuses on Europe and results are produced for the chosen test site, VILL (Villafranca, Spain). The effects of excluding all of the GPS satellites below various elevation masks, ranging from 5° to 40°, on timing solutions for fixed (static) and mobile (moving) situations are presented. The greatest timing accuracies when using the fixed GPS receiver technique are obtained by using a 40° mask, rather than a 5° mask. The mobile GPS timing solutions are most

  20. Derivatives of buckling loads and vibration frequencies with respect to stiffness and initial strain parameters

    NASA Technical Reports Server (NTRS)

    Haftka, Raphael T.; Cohen, Gerald A.; Mroz, Zenon

    1990-01-01

    A uniform variational approach to sensitivity analysis of vibration frequencies and bifurcation loads of nonlinear structures is developed. Two methods of calculating the sensitivities of bifurcation buckling loads and vibration frequencies of nonlinear structures, with respect to stiffness and initial strain parameters, are presented. A direct method requires calculation of derivatives of the prebuckling state with respect to these parameters. An adjoint method bypasses the need for these derivatives by using instead the strain field associated with the second-order postbuckling state. An operator notation is used and the derivation is based on the principle of virtual work. The derivative computations are easily implemented in structural analysis programs. This is demonstrated by examples using a general purpose, finite element program and a shell-of-revolution program.
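
As a point of reference, for the linear vibration eigenproblem the classical sensitivity of an eigenvalue to a stiffness or mass parameter p takes the well-known form below; the paper's variational treatment generalises such results to nonlinear prebuckled states (for buckling, a geometric stiffness matrix plays the role of M).

```latex
% Linear eigenproblem (K - \lambda M)\phi = 0, mass-normalised so \phi^{\mathsf{T}} M \phi = 1,
% with \lambda = \omega^{2} for vibration frequencies:
\frac{\partial \lambda}{\partial p}
  = \phi^{\mathsf{T}} \left( \frac{\partial K}{\partial p}
      - \lambda\, \frac{\partial M}{\partial p} \right) \phi .
```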

  1. Assessment of Error in Synoptic-Scale Diagnostics Derived from Wind Profiler and Radiosonde Network Data

    NASA Technical Reports Server (NTRS)

    Mace, Gerald G.; Ackerman, Thomas P.

    1996-01-01

    A topic of current practical interest is the accurate characterization of the synoptic-scale atmospheric state from wind profiler and radiosonde network observations. We have examined several related and commonly applied objective analysis techniques for performing this characterization and considered their associated level of uncertainty both from a theoretical and a practical standpoint. A case study is presented where two wind profiler triangles with nearly identical centroids and no common vertices produced strikingly different results during a 43-h period. We conclude that the uncertainty in objectively analyzed quantities can easily be as large as the expected synoptic-scale signal. In order to quantify the statistical precision of the algorithms, we conducted a realistic observing system simulation experiment using output from a mesoscale model. A simple parameterization for estimating the uncertainty in horizontal gradient quantities in terms of known errors in the objectively analyzed wind components and temperature is developed from these results.

  2. Austin Chalk fracture mapping using frequency data derived from seismic data

    NASA Astrophysics Data System (ADS)

    Najmuddin, Ilyas Juzer

    Frequency amplitude spectra derived from P-wave seismic data can be used to construct a fracture indicator. This fracture indicator can be used to delineate fracture zones in subsurface layers. Mapping fractures that have no vertical offset is difficult on seismic sections. Fracturing changes the rock properties and therefore the attributes of the seismic data reflecting off the fractured interface and of the data passing through the fractured layers. Fractures have a scattering effect on seismic energy reflected from the fractured layer. Fractures attenuate the amplitudes of higher frequencies in seismic data more strongly than those of lower frequencies. The amplitude spectrum of the frequencies in the seismic data shifts towards lower frequencies when a spectrum from a time window above the fractured layer is compared with one below the fractured layer. This shift in the amplitudes of the frequency spectra can be derived from seismic data and used to indicate fracturing. A method is developed to calculate a parameter t* to measure this change in the frequency spectra for small time windows (100 ms) above and below the fractured layer. The Austin Chalk in South Central Texas is a fractured layer, and it produces hydrocarbons from fracture zones within the layer (Sweet Spots). 2D and 3D P-wave seismic data from Burleson and Austin Counties in Texas are used to derive the t* parameter. Case studies are presented for 2D data from Burleson County and 3D data from Austin County. The t* parameter mapped on the 3D data shows a predominant fracture trend parallel to strike. The fracture zones have a good correlation with the faults interpreted on the top of the Austin Chalk reflector. Production data in Burleson County (Giddings Field) are a proxy for fracturing. Values of t* mapped on the 2D data have a good correlation with the cumulative production map presented in this study.
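
One common way to turn the spectral shift described above into a differential attenuation parameter is the log spectral ratio method, sketched below; the dissertation's exact definition of t* may differ, so this is an illustration only, with synthetic windows standing in for real traces.

```python
# Sketch: estimate a differential attenuation parameter (delta t*) from the
# log spectral ratio of 100 ms windows above and below a reflector.
import numpy as np

def amplitude_spectrum(trace, dt):
    freqs = np.fft.rfftfreq(len(trace), d=dt)
    spec = np.abs(np.fft.rfft(trace))
    return freqs, spec

def delta_t_star(win_above, win_below, dt, fmin=10.0, fmax=60.0):
    """Slope of ln(A_below / A_above) versus frequency equals -pi * delta t*."""
    f, a_above = amplitude_spectrum(win_above, dt)
    _, a_below = amplitude_spectrum(win_below, dt)
    band = (f >= fmin) & (f <= fmax)
    ratio = np.log(a_below[band] / a_above[band])
    slope, _ = np.polyfit(f[band], ratio, 1)
    return -slope / np.pi

# Hypothetical 100 ms windows sampled at 2 ms (synthetic noise, for illustration only).
dt = 0.002
rng = np.random.default_rng(1)
above = rng.normal(size=50)
below = rng.normal(size=50)
print(f"delta t* ~ {delta_t_star(above, below, dt):.4f} s")
```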

  3. Thermoadaptation-Directed Enzyme Evolution in an Error-Prone Thermophile Derived from Geobacillus kaustophilus HTA426

    PubMed Central

    Kobayashi, Jyumpei; Wada, Keisuke; Furukawa, Megumi; Doi, Katsumi

    2014-01-01

    Thermostability is an important property of enzymes utilized for practical applications because it allows long-term storage and use as catalysts. In this study, we constructed an error-prone strain of the thermophile Geobacillus kaustophilus HTA426 and investigated thermoadaptation-directed enzyme evolution using the strain. A mutation frequency assay using the antibiotics rifampin and streptomycin revealed that G. kaustophilus had substantially higher mutability than Escherichia coli and Bacillus subtilis. The predominant mutations in G. kaustophilus were A · T→G · C and C · G→T · A transitions, implying that the high mutability of G. kaustophilus was attributable in part to high-temperature-associated DNA damage during growth. Among the genes that may be involved in DNA repair in G. kaustophilus, deletions of the mutSL, mutY, ung, and mfd genes markedly enhanced mutability. These genes were subsequently deleted to construct an error-prone thermophile that showed much higher (700- to 9,000-fold) mutability than the parent strain. The error-prone strain was auxotrophic for uracil owing to the fact that the strain was deficient in the intrinsic pyrF gene. Although the strain harboring Bacillus subtilis pyrF was also essentially auxotrophic, cells became prototrophic after 2 days of culture under uracil starvation, generating B. subtilis PyrF variants with an enhanced half-denaturation temperature of >10°C. These data suggest that this error-prone strain is a promising host for thermoadaptation-directed evolution to generate thermostable variants from thermolabile enzymes. PMID:25326311

  4. Thermoadaptation-directed enzyme evolution in an error-prone thermophile derived from Geobacillus kaustophilus HTA426.

    PubMed

    Suzuki, Hirokazu; Kobayashi, Jyumpei; Wada, Keisuke; Furukawa, Megumi; Doi, Katsumi

    2015-01-01

    Thermostability is an important property of enzymes utilized for practical applications because it allows long-term storage and use as catalysts. In this study, we constructed an error-prone strain of the thermophile Geobacillus kaustophilus HTA426 and investigated thermoadaptation-directed enzyme evolution using the strain. A mutation frequency assay using the antibiotics rifampin and streptomycin revealed that G. kaustophilus had substantially higher mutability than Escherichia coli and Bacillus subtilis. The predominant mutations in G. kaustophilus were A · T→G · C and C · G→T · A transitions, implying that the high mutability of G. kaustophilus was attributable in part to high-temperature-associated DNA damage during growth. Among the genes that may be involved in DNA repair in G. kaustophilus, deletions of the mutSL, mutY, ung, and mfd genes markedly enhanced mutability. These genes were subsequently deleted to construct an error-prone thermophile that showed much higher (700- to 9,000-fold) mutability than the parent strain. The error-prone strain was auxotrophic for uracil owing to the fact that the strain was deficient in the intrinsic pyrF gene. Although the strain harboring Bacillus subtilis pyrF was also essentially auxotrophic, cells became prototrophic after 2 days of culture under uracil starvation, generating B. subtilis PyrF variants with an enhanced half-denaturation temperature of >10°C. These data suggest that this error-prone strain is a promising host for thermoadaptation-directed evolution to generate thermostable variants from thermolabile enzymes. PMID:25326311

  5. Derivative of the light frequency shift as a measure of spacetime curvature for gravitational wave detection

    NASA Astrophysics Data System (ADS)

    Congedo, Giuseppe

    2015-04-01

    The measurement of frequency shifts for light beams exchanged between two test masses nearly in free fall is at the heart of gravitational-wave detection. It is envisaged that the derivative of the frequency shift is in fact limited by differential forces acting on those test masses. We calculate the derivative of the frequency shift with a fully covariant, gauge-independent and coordinate-free method. This method is general and does not require a congruence of nearby beams' null geodesics as done in previous work. We show that the derivative of the parallel transport is the only means by which gravitational effects show up in the frequency shift. This contribution is given as an integral of the Riemann tensor, the only physical observable of curvature, along the beam's geodesic. The remaining contributions are the difference of velocities, the difference of nongravitational forces, and finally fictitious forces, either locally at the test masses or nonlocally integrated along the beam's geodesic. As an application relevant to gravitational-wave detection, we work out the frequency shift in the local Lorentz frame of nearby geodesics.

  6. GOME Total Ozone and Calibration Error Derived Using Version 8 TOMS Algorithm

    NASA Technical Reports Server (NTRS)

    Gleason, J.; Wellemeyer, C.; Qin, W.; Ahn, C.; Gopalan, A.; Bhartia, P.

    2003-01-01

    The Global Ozone Monitoring Experiment (GOME) is a hyper-spectral satellite instrument measuring the ultraviolet backscatter at relatively high spectral resolution. GOME radiances have been slit averaged to emulate measurements of the Total Ozone Mapping Spectrometer (TOMS) made at discrete wavelengths and processed using the new TOMS Version 8 Ozone Algorithm. Compared to Differential Optical Absorption Spectroscopy (DOAS) techniques based on local structure in the Huggins Bands, the TOMS algorithm uses differential absorption between a pair of wavelengths including the local structure as well as the background continuum. This makes the TOMS algorithm more sensitive to ozone, but it also makes the algorithm more sensitive to instrument calibration errors. While calibration adjustments are not needed for fitting techniques like the DOAS employed in GOME algorithms, some adjustment is necessary when applying the TOMS algorithm to GOME. Using spectral discrimination at near-ultraviolet wavelength channels unabsorbed by ozone, the GOME wavelength-dependent calibration drift is estimated and then checked using pair justification. In addition, the day-one calibration offset is estimated based on the residuals of the Version 8 TOMS Algorithm. The estimated drift in the 2b detector of GOME is small through the first four years and then increases rapidly to +5% in normalized radiance at 331 nm relative to 385 nm by mid 2000. The 1b detector appears to be quite well behaved throughout this time period.

  7. Assessment of error in synoptic-scale diagnostics derived from wind profiler and radiosonde network data

    SciTech Connect

    Mace, G.G.; Ackerman, T.P.

    1996-07-01

    A topic of current practical interest is the accurate characterization of the synoptic-scale atmospheric state from wind profiler and radiosonde network observations. The authors have examined several related and commonly applied objective analysis techniques for performing this characterization and considered their associated level of uncertainty both from a theoretical and a practical standpoint. A case study is presented where two wind profiler triangles with nearly identical centroids and no common vertices produced strikingly different results during a 43-h period. It is concluded that the uncertainty in objectively analyzed quantities can easily be as large as the expected synoptic-scale signal. In order to quantify the statistical precision of the algorithms, the authors conducted a realistic observing system simulation experiment using output from a mesoscale model. A simple parameterization for estimating the uncertainty in horizontal gradient quantities in terms of known errors in the objectively analyzed wind components and temperature is developed from these results. 18 refs., 9 figs., 6 tabs.

  8. Analysis of total copper, cadmium and lead in refuse-derived fuels (RDF): study on analytical errors using synthetic samples.

    PubMed

    Skutan, Stefan; Aschenbrenner, Philipp

    2012-12-01

    Components with extraordinarily high analyte contents, for example copper metal from wires or plastics stabilized with heavy metal compounds, are presumed to be a crucial source of errors in refuse-derived fuel (RDF) analysis. In order to study the error generation of those 'analyte carrier components', synthetic samples spiked with defined amounts of carrier materials were mixed, milled in a high speed rotor mill to particle sizes <1 mm, <0.5 mm and <0.2 mm, respectively, and analyzed repeatedly. Copper (Cu) metal and brass were used as Cu carriers, three kinds of polyvinylchloride (PVC) materials as lead (Pb) and cadmium (Cd) carriers, and paper and polyethylene as bulk components. In most cases, samples <0.2 mm delivered good recovery rates (rec), and low or moderate relative standard deviations (rsd), i.e. metallic Cu 87-91% rec, 14-35% rsd, Cd from flexible PVC yellow 90-92% rec, 8-10% rsd and Pb from rigid PVC 92-96% rec, 3-4% rsd. Cu from brass was overestimated (138-150% rec, 13-42% rsd), Cd from flexible PVC grey underestimated (72-75% rec, 4-7% rsd) in <0.2 mm samples. Samples <0.5 mm and <1 mm spiked with Cu or brass produced errors of up to 220% rsd (<0.5 mm) and 370% rsd (<1 mm). In the case of Pb from rigid PVC, poor recoveries (54-75%) were observed in spite of moderate variations (rsd 11-29%). In conclusion, time-consuming milling to <0.2 mm can reduce variation to acceptable levels, even given the presence of analyte carrier materials. Yet, the sources of systematic errors observed (likely segregation effects) remain uncertain. PMID:23027034

  9. Real-time soil flux measurements and calculations with CRDS + Soil Flux Processor: comparison among flux algorithms and derivation of whole system error

    NASA Astrophysics Data System (ADS)

    Alstad, K. P.; Venterea, R. T.; Tan, S. M.; Saad, N.

    2015-12-01

    Understanding chamber-based soil flux model fitting and measurement error is key to scaling soil GHG emissions and resolving the primary uncertainties in climate and management feedbacks at regional scales. One key challenge is the selection of the correct empirical model applied to soil flux rate analysis in chamber-based experiments. Another challenge is the characterization of error in the chamber measurement. Traditionally, most chamber-based N2O and CH4 measurements and model derivations have used discrete sampling for GC analysis, and have been conducted using extended chamber deployment periods (DPs), which are expected to result in substantial alteration of the pre-deployment flux. The development of high-precision, high-frequency CRDS analyzers has advanced the science of soil flux analysis by facilitating much shorter DPs and, in theory, less chamber-induced suppression of the soil-atmosphere diffusion gradient. As well, a new software tool developed by Picarro (the "Soil Flux Processor" or "SFP") links the power of Cavity Ring-Down Spectroscopy (CRDS) technology with an easy-to-use interface that features flexible sample-ID and run schemes, and provides real-time monitoring of chamber accumulations and environmental conditions. The SFP also includes a sophisticated flux analysis interface which offers user-defined model selection, including three predominant fit algorithms as defaults, and an open-code interface for user-composed algorithms. The SFP is designed to couple with the Picarro G2508 system, an analyzer which simplifies soil flux studies by simultaneously measuring the primary GHG species -- N2O, CH4, CO2 and H2O. In this study, Picarro partners with the ARS USDA Soil & Water Management Research Unit (R. Venterea, St. Paul) to examine the degree to which the high-precision, high-frequency Picarro analyzer allows for much shorter DPs in chamber-based flux analysis, and, in theory, less chamber-induced suppression of the soil
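
For orientation, the simplest of the flux-fitting algorithms alluded to above is a linear fit of the chamber concentration time series; the sketch below uses illustrative chamber dimensions and an ideal-gas conversion and is not the Soil Flux Processor's internal code.

```python
# Sketch: linear-fit soil flux from a chamber concentration time series.
import numpy as np

R = 8.314  # J mol^-1 K^-1

def linear_flux(t_s, conc_ppm, volume_m3, area_m2, pressure_pa=101325.0, temp_k=293.15):
    """Flux (umol m^-2 s^-1) from dC/dt (ppm s^-1) using an ideal-gas molar density."""
    dcdt, _ = np.polyfit(t_s, conc_ppm, 1)       # ppm s^-1 = umol mol^-1 s^-1
    molar_density = pressure_pa / (R * temp_k)   # mol of air per m^3
    return dcdt * molar_density * volume_m3 / area_m2

# Hypothetical 3-minute deployment with 1 Hz N2O readings (ppm).
t = np.arange(0, 180, 1.0)
conc = 0.330 + 2.0e-5 * t + np.random.default_rng(2).normal(0, 1e-4, t.size)
print(f"N2O flux ~ {linear_flux(t, conc, volume_m3=0.01, area_m2=0.05):.3e} umol m^-2 s^-1")
```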

  10. The Relative Importance of Random Error and Observation Frequency in Detecting Trends in Upper Tropospheric Water Vapor

    NASA Technical Reports Server (NTRS)

    Whiteman, David N.; Vermeesch, Kevin C.; Oman, Luke D.; Weatherhead, Elizabeth C.

    2011-01-01

    Recent published work assessed the amount of time to detect trends in atmospheric water vapor over the coming century. We address the same question and conclude that under the most optimistic scenarios and assuming perfect data (i.e., observations with no measurement uncertainty) the time to detect trends will be at least 12 years at approximately 200 hPa in the upper troposphere. Our times to detect trends are therefore shorter than those recently reported and this difference is affected by data sources used, method of processing the data, geographic location and pressure level in the atmosphere where the analyses were performed. We then consider the question of how instrumental uncertainty plays into the assessment of time to detect trends. We conclude that due to the high natural variability in atmospheric water vapor, the amount of time to detect trends in the upper troposphere is relatively insensitive to instrumental random uncertainty and that it is much more important to increase the frequency of measurement than to decrease the random error in the measurement. This is put in the context of international networks such as the Global Climate Observing System (GCOS) Reference Upper-Air Network (GRUAN) and the Network for the Detection of Atmospheric Composition Change (NDACC) that are tasked with developing time series of climate quality water vapor data.
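
The trend-detection times discussed above are typically computed with the expression of Weatherhead et al. (1998) for the number of years required to detect a trend; it is quoted here in its standard form rather than as the authors' exact implementation.

```latex
% Approximate number of years n* needed to detect a linear trend \omega_0 (units per year)
% in a monthly series with noise standard deviation \sigma_N and lag-1
% autocorrelation \phi (Weatherhead et al., 1998):
n^{*} \approx \left[ \frac{3.3\,\sigma_{N}}{|\omega_{0}|}
       \sqrt{\frac{1+\phi}{1-\phi}} \right]^{2/3} .
```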

  11. The use of high-resolution atmospheric simulations over mountainous terrain for deriving error correction functions of satellite precipitation products

    NASA Astrophysics Data System (ADS)

    Bartsotas, Nikolaos S.; Nikolopoulos, Efthymios I.; Anagnostou, Emmanouil N.; Kallos, George

    2015-04-01

    Mountainous regions account for a significant part of the Earth's surface. Such areas are persistently affected by heavy precipitation episodes, which induce flash floods and landslides. Because adequate in-situ observations are lacking, remote sensing rainfall estimates are central to the analysis of these events, as in many mountainous regions worldwide they serve as the only available data source. However, well-known issues of remote sensing techniques over mountainous areas, such as the strong underestimation of precipitation associated with low-level orographic enhancement, limit the way these estimates can accommodate operational needs. Even locations that fall within the range of weather radars suffer from strong biases in precipitation estimates due to terrain blockage and vertical rainfall profile issues. A novel approach towards the reduction of error in quantitative precipitation estimates lies in the utilization of high-resolution numerical simulations to derive error correction functions for corresponding satellite precipitation data. The correction functions examined consist of 1) mean field bias adjustment and 2) pdf matching, two procedures that are simple and have been widely used in gauge-based adjustment techniques. For the needs of this study, more than 15 selected storms over the mountainous Upper Adige region of Northern Italy were simulated at 1-km resolution with a state-of-the-art atmospheric model (RAMS/ICLAMS), benefiting from the explicit cloud microphysics scheme, the prognostic treatment of natural pollutants such as dust and sea salt, and the detailed SRTM90 topography implemented in the model. The proposed error correction approach is applied to three quasi-global and widely used satellite precipitation datasets (CMORPH, TRMM 3B42 V7 and PERSIANN), and the evaluation of the error model is based on independent in situ precipitation measurements from a dense rain gauge network (1 gauge / 70 km2
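
The two correction functions named above can be sketched as follows for hypothetical satellite and reference rain-rate samples; the quantile-mapping variant of pdf matching shown here is one common choice and not necessarily the authors' formulation.

```python
# Sketch: (1) mean field bias adjustment and (2) pdf (quantile) matching of a
# satellite rainfall sample against a high-resolution reference sample.
import numpy as np

def mean_field_bias(sat, ref):
    """Multiplicative bias factor applied uniformly to the satellite values."""
    return sat * (ref.mean() / sat.mean())

def pdf_matching(sat, ref, n_quantiles=100):
    """Map each satellite value onto the reference distribution (quantile mapping)."""
    q = np.linspace(0, 100, n_quantiles)
    sat_q = np.percentile(sat, q)
    ref_q = np.percentile(ref, q)
    return np.interp(sat, sat_q, ref_q)

rng = np.random.default_rng(3)
reference = rng.gamma(shape=2.0, scale=3.0, size=5000)        # hypothetical model rainfall (mm)
satellite = 0.6 * rng.gamma(shape=2.0, scale=3.0, size=5000)  # underestimating product

print("raw bias ratio:     ", satellite.mean() / reference.mean())
print("MFB-adjusted ratio: ", mean_field_bias(satellite, reference).mean() / reference.mean())
print("pdf-matched ratio:  ", pdf_matching(satellite, reference).mean() / reference.mean())
```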

  12. In Vitro Culture Increases the Frequency of Stochastic Epigenetic Errors at Imprinted Genes in Placental Tissues from Mouse Concepti Produced Through Assisted Reproductive Technologies

    PubMed Central

    de Waal, Eric; Mak, Winifred; Calhoun, Sondra; Stein, Paula; Ord, Teri; Krapp, Christopher; Coutifaris, Christos; Schultz, Richard M.; Bartolomei, Marisa S.

    2014-01-01

    ABSTRACT Assisted reproductive technologies (ART) have enabled millions of couples with compromised fertility to conceive children. Nevertheless, there is a growing concern regarding the safety of these procedures due to an increased incidence of imprinting disorders, premature birth, and low birth weight in ART-conceived offspring. An integral aspect of ART is the oxygen concentration used during in vitro development of mammalian embryos, which is typically either atmospheric (∼20%) or reduced (5%). Both oxygen tension levels have been widely used, but 5% oxygen improves preimplantation development in several mammalian species, including that of humans. To determine whether a high oxygen tension increases the frequency of epigenetic abnormalities in mouse embryos subjected to ART, we measured DNA methylation and expression of several imprinted genes in both embryonic and placental tissues from concepti generated by in vitro fertilization (IVF) and exposed to 5% or 20% oxygen during culture. We found that placentae from IVF embryos exhibit an increased frequency of abnormal methylation and expression profiles of several imprinted genes, compared to embryonic tissues. Moreover, IVF-derived placentae exhibit a variety of epigenetic profiles at the assayed imprinted genes, suggesting that these epigenetic defects arise by a stochastic process. Although culturing embryos in both of the oxygen concentrations resulted in a significant increase of epigenetic defects in placental tissues compared to naturally conceived controls, we did not detect significant differences between embryos cultured in 5% and those cultured in 20% oxygen. Thus, further optimization of ART should be considered to minimize the occurrence of epigenetic errors in the placenta. PMID:24337315

  13. Low frequency vibrational modes of oxygenated myoglobin, hemoglobins, and modified derivatives.

    PubMed

    Jeyarajah, S; Proniewicz, L M; Bronder, H; Kincaid, J R

    1994-12-01

    The low frequency resonance Raman spectra of the dioxygen adducts of myoglobin, hemoglobin, its isolated subunits, mesoheme-substituted hemoglobin, and several deuteriated heme derivatives are reported. The observed oxygen isotopic shifts are used to assign the iron-oxygen stretching (approximately 570 cm⁻¹) and the heretofore unobserved δ(Fe-O-O) bending (approximately 420 cm⁻¹) modes. Although the δ(Fe-O-O) is not enhanced in the case of oxymyoglobin, it is observed for all the hemoglobin derivatives, its exact frequency being relatively invariable among the derivatives. The lack of sensitivity to H2O/D2O buffer exchange is consistent with our previous interpretation of H2O/D2O-induced shifts of ν(O-O) in the resonance Raman spectra of dioxygen adducts of cobalt-substituted heme proteins; namely, that those shifts are associated with alterations in vibrational coupling of ν(O-O) with internal modes of proximal histidyl imidazole rather than to steric or electronic effects of H/D exchange at the active site. No evidence is obtained for enhancement of the ν(Fe-N) stretching frequency of the linkage between the heme iron and the imidazole group of the proximal histidine. PMID:7983043

  14. [Frequency and contribution of specific genetic loci transferred from wheat cultivar Mianmai 37 to its derivatives].

    PubMed

    Ren, Yong; Li, Shengrong; Luo, Jianming; He, Zhonghu; Du, Xiaoying; Zhou, Qiang; He, Yuanjiang; Wei, Yuming; Zheng, Youliang

    2014-02-01

    The development and utilization of outstanding germplasm in breeding programs can expedite the breeding process. The high-yielding variety Mianmai 37, grown widely in southwestern China, has been used widely in breeding programs. Comparisons between Mianmai 37 and its derivatives for yield and yield components were conducted. Simple sequence repeat (SSR) markers were used to test the frequency of specific alleles transferred from Mianmai 37 to its derivative cultivar Mianmai 367. The results indicated that the yield of the derivative cultivars was significantly higher than that of Mianmai 37, due to an increased grain number per spike. Favorable traits from Mianmai 37, such as resistance to stripe rust, were transferred to its derivatives. At the molecular level, 78.9% of loci in Mianmai 367 were derived from Mianmai 37, with 75.0, 83.6 and 74.2% from the A, B and D genomes, respectively. Mianmai 367 shared common loci with its parent Mianmai 37, such as the regions Xgwm374-Xbarc167-Xbarc128-Xgwm129-Xgwm388-Xbarc101 on chromosome 2B and Xwmc446-Xwmc366-Xwmc533-Xbarc164-Xwmc418 on chromosome 3B; these regions were associated with grain number, 1000-kernel weight and resistance. The preferential transmission of alleles from Mianmai 37 to its derivatives can probably be explained by the strong selection pressure exerted because of its favorable agronomic traits and disease resistance. PMID:24846943

  15. Soil Moisture derivation from the multi-frequency sensor AMSR-2

    NASA Astrophysics Data System (ADS)

    Parinussa, Robert; de Nijs, Anne; de Jeu, Richard; Holmes, Thomas; Dorigo, Wouter; Wanders, Niko; Schellekens, Jaap

    2015-04-01

    We present a method to derive soil moisture from the multi-frequency sensor Advanced Microwave Scanning Radiometer 2 (AMSR-2). Its predecessor, the Advanced Microwave Scanning Radiometer - Earth Observing System (AMSR-E), has already provided Earth scientists with a consistent and continuous global soil moisture dataset. However, the AMSR-2 sensor has one big advantage over the AMSR-E sensor: it has an additional channel in the C-band frequency range (7.3 GHz). This channel creates the opportunity for better screening of Radio Frequency Interference (RFI) and could eventually lead to improved soil moisture retrievals. The soil moisture retrievals from AMSR-2 presented here use the Land Parameter Retrieval Model (LPRM) in combination with a new radio frequency interference masking method. We used observations of the multi-frequency microwave radiometer onboard the Tropical Rainfall Measuring Mission (TRMM) satellite to intercalibrate the brightness temperatures in order to improve consistency between AMSR-E and AMSR-2. Several scenarios to accomplish synergy between the AMSR-E and AMSR-2 soil moisture products were evaluated. A global comparison of soil moisture retrievals against ERA-Interim re-analysis soil moisture demonstrates the need for an intercalibration procedure. Several different scenarios based on filtering were tested, and the impact on the soil moisture retrievals was evaluated against two independent reference soil moisture datasets (reanalysis and in situ soil moisture) that cover the observation periods of the AMSR-E and AMSR-2 sensors. The results show a high degree of consistency between the two satellite soil moisture products and the two independent reference products. In addition, the added value of an additional frequency for RFI detection is demonstrated within this study, with a reduction of the total number of contaminated pixels in the 6.9 GHz channel of 66% for horizontal observations and even 85% for vertical observations when 7.3 and 10

  16. System for adjusting frequency of electrical output pulses derived from an oscillator

    DOEpatents

    Bartholomew, David B.

    2006-11-14

    A system for setting and adjusting a frequency of electrical output pulses derived from an oscillator in a network is disclosed. The system comprises an accumulator module configured to receive pulses from an oscillator and to output an accumulated value. An adjustor module is configured to store an adjustor value used to correct local oscillator drift. A digital adder adds values from the accumulator module to values stored in the adjustor module and outputs their sums to the accumulator module, where they are stored. The digital adder also outputs an electrical pulse to a logic module. The logic module is in electrical communication with the adjustor module and the network. The logic module may change the value stored in the adjustor module to compensate for local oscillator drift or change the frequency of output pulses. The logic module may also keep time and calculate drift.
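
Read as a digital design, the accumulator/adder/adjustor arrangement described in this patent abstract resembles a numerically controlled phase accumulator; the software model below is an interpretive sketch with an assumed 32-bit register width, not the patented circuit.

```python
# Sketch: software model of a phase accumulator whose output-pulse frequency is
# tuned by an adjustor value added on every oscillator tick (32-bit register assumed).
class PulseDeriver:
    WIDTH = 32
    MASK = (1 << WIDTH) - 1

    def __init__(self, adjustor_value):
        self.accumulator = 0
        self.adjustor = adjustor_value  # a logic module could rewrite this to trim drift

    def tick(self):
        """One oscillator pulse; returns True when an output pulse is emitted (overflow)."""
        total = self.accumulator + self.adjustor
        self.accumulator = total & self.MASK
        return total > self.MASK

# Output frequency ~= f_osc * adjustor / 2**WIDTH; nudging adjustor compensates drift.
deriver = PulseDeriver(adjustor_value=429_497)        # ~1 kHz from a 10 MHz oscillator
pulses = sum(deriver.tick() for _ in range(1_000_000))  # simulate 0.1 s of oscillator ticks
print(f"output pulses in 0.1 s: {pulses}")              # expect roughly 100
```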

  17. An efficient hybrid causative event-based approach for deriving the annual flood frequency distribution

    NASA Astrophysics Data System (ADS)

    Thyer, Mark; Li, Jing; Lambert, Martin; Kuczera, George; Metcalfe, Andrew

    2015-04-01

    Flood extremes are driven by highly variable and complex climatic and hydrological processes. Derived flood frequency methods are often used to predict the flood frequency distribution (FFD) because they can provide predictions in ungauged catchments and evaluate the impact of land-use or climate change. This study presents recent work on the development of a new derived flood frequency method called the hybrid causative events (HCE) approach. The advantage of the HCE approach is that it combines the accuracy of the continuous simulation approach with the computational efficiency of event-based approaches. Derived flood frequency methods can be divided into two classes. Event-based approaches provide fast estimation, but can also lead to prediction bias due to limitations of the inherent assumptions required for obtaining input information (rainfall and catchment wetness) for events that cause large floods. Continuous simulation produces more accurate predictions, however, at the cost of massive computational time. The HCE method uses a short continuous simulation to provide inputs for a rainfall-runoff model running in an event-based fashion. A proof-of-concept pilot study showed that the HCE produces estimates of the flood frequency distribution with similar accuracy to the continuous simulation, but with dramatically reduced computation time. Recent work incorporated seasonality into the HCE approach and evaluated it with a more realistic set of eight sites from a wide range of climate zones, typical of Australia, using a virtual catchment approach. The seasonal hybrid-CE provided accurate predictions of the FFD for all sites. Comparison with the existing non-seasonal hybrid-CE showed that for some sites the non-seasonal hybrid-CE significantly over-predicted the FFD. Analysis of the underlying cause of whether a site had a high, low or no need to use seasonality found that it was based on a combination of reasons that were difficult to predict a priori. Hence it is recommended

  18. Frequency and origins of hemoglobin S mutation in African-derived Brazilian populations.

    PubMed

    De Mello Auricchio, Maria Teresa Balester; Vicente, João Pedro; Meyer, Diogo; Mingroni-Netto, Regina Célia

    2007-12-01

    Africans arrived in Brazil as slaves in great numbers, mainly after 1550. Before the abolition of slavery in Brazil in 1888, many communities, called quilombos, were formed by runaway or abandoned African slaves. These communities are presently referred to as remnants of quilombos, and many are still partially genetically isolated. These remnants can be regarded as relicts of the original African genetic contribution to the Brazilian population. In this study we assessed frequencies and probable geographic origins of hemoglobin S (HBB*S) mutations in remnants of quilombo populations in the Ribeira River valley, São Paulo, Brazil, to reconstruct the history of African-derived populations in the region. We screened for HBB*S mutations in 11 quilombo populations (1,058 samples) and found HBB*S carrier frequencies that ranged from 0% to 14%. We analyzed beta-globin gene cluster haplotypes linked to the HBB*S mutation in 86 chromosomes and found the four known African haplotypes: 70 (81.4%) Bantu (Central Africa Republic), 7 (8.1%) Benin, 7 (8.1%) Senegal, and 2 (2.3%) Cameroon haplotypes. One sickle cell homozygote was Bantu/Bantu and two homozygotes had Bantu/Benin combinations. The high frequency of the sickle cell trait and the diversity of HBB*S linked haplotypes indicate that Brazilian remnants of quilombos are interesting repositories of genetic diversity present in the ancestral African populations. PMID:18494376

  19. Topographic gravitational potential up to second-order derivatives: an examination of approximation errors caused by rock-equivalent topography (RET)

    NASA Astrophysics Data System (ADS)

    Kuhn, Michael; Hirt, Christian

    2016-05-01

    In gravity forward modelling, the concept of Rock-Equivalent Topography (RET) is often used to simplify the computation of gravity implied by rock, water, ice and other topographic masses. In the RET concept, topographic masses are compressed (approximated) into equivalent rock, allowing the use of a single constant mass-density value. Many studies acknowledge the approximate character of the RET, but few have yet attempted to quantify and analyse the approximation errors in detail for various gravity field functionals and heights of computation points. Here, we provide an in-depth examination of approximation errors associated with the RET compression for the topographic gravitational potential and its first- and second-order derivatives. Using the Earth2014 layered topography suite we apply Newtonian integration in the spatial domain in the variants (a) rigorous forward modelling of all mass bodies, (b) approximative modelling using RET. The differences among both variants, which reflect the RET approximation error, are formed and studied for an ensemble of 10 different gravity field functionals at three levels of altitude (on and 3 km above the Earth's surface and at 250 km satellite height). The approximation errors are found to be largest at the Earth's surface over RET compression areas (oceans, ice shields) and to increase for the first- and second-order derivatives. Relative errors, computed here as the ratio of the range of differences between both variants to the range of the signal, are at the level of 0.06-0.08 % for the potential, ~3-7 % for the first-order derivatives at the Earth's surface (~0.1 % at satellite altitude). For the second-order derivatives, relative errors are below 1 % at satellite altitude, at the 10-20 % level at 3 km and reach maximum values as large as ~20 to 110 % near the surface. As such, the RET approximation errors may be acceptable for functionals computed far away from the Earth's surface or studies focussing on

  20. Large-scale derived flood frequency analysis based on continuous simulation

    NASA Astrophysics Data System (ADS)

    Dung Nguyen, Viet; Hundecha, Yeshewatesfa; Guse, Björn; Vorogushyn, Sergiy; Merz, Bruno

    2016-04-01

    There is an increasing need for spatially consistent flood risk assessments at the regional scale (several 100,000 km2), in particular in the insurance industry and for national risk reduction strategies. However, most large-scale flood risk assessments are composed of smaller-scale assessments and show spatial inconsistencies. To overcome this deficit, a large-scale flood model composed of a weather generator and catchment models was developed, reflecting the spatially inherent heterogeneity. The weather generator is a multisite and multivariate stochastic model capable of generating synthetic meteorological fields (precipitation, temperature, etc.) at daily resolution for the regional scale. These fields respect the observed autocorrelation, spatial correlation and co-variance between the variables. They are used as input into catchment models. A long-term simulation of this combined system enables the derivation of very long discharge series at many catchment locations, serving as a basis for spatially consistent flood risk estimates at the regional scale. This combined model was set up and validated for major river catchments in Germany. The weather generator was trained on 53 years of observation data at 528 stations, covering not only all of Germany but also parts of France, Switzerland, the Czech Republic and Austria, with an aggregated spatial extent of 443,931 km2. 10,000 years of daily meteorological fields for the study area were generated. Likewise, rainfall-runoff simulations with SWIM were performed for the entire Elbe, Rhine, Weser, Donau and Ems catchments. The validation results illustrate a good performance of the combined system, as the simulated flood magnitudes and frequencies agree well with the observed flood data. Based on continuous simulation, this model chain is then used to estimate flood quantiles for the whole of Germany, including upstream headwater catchments in neighbouring countries. This continuous large-scale approach overcomes the several
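
    A minimal sketch of the final step described above, turning a very long simulated discharge series into flood quantiles, is given below. It uses empirical quantiles of annual maxima on synthetic placeholder data; the function name and the toy gamma-distributed "discharge" are illustrative assumptions, not part of the study's model chain.

    ```python
    import numpy as np

    def flood_quantiles(daily_q, days_per_year=365, return_periods=(10, 100, 1000)):
        """Empirical flood quantiles from a long synthetic daily discharge series:
        split the series into years, take annual maxima, and read the discharge
        exceeded on average once every T years as the (1 - 1/T) quantile."""
        daily_q = np.asarray(daily_q, dtype=float)
        n_years = len(daily_q) // days_per_year
        annual_max = daily_q[:n_years * days_per_year].reshape(n_years, days_per_year).max(axis=1)
        return {T: float(np.quantile(annual_max, 1.0 - 1.0 / T)) for T in return_periods}

    # toy stand-in for 10,000 years of simulated daily discharge (m3/s)
    rng = np.random.default_rng(0)
    q = rng.gamma(shape=2.0, scale=50.0, size=10_000 * 365)
    print(flood_quantiles(q))
    ```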

  1. Induction of chondrogenic differentiation of human adipose-derived stem cells by low frequency electric field

    PubMed Central

    Mardani, Mohammad; Roshankhah, Shiva; Hashemibeni, Batool; Salahshoor, Mohammadreza; Naghsh, Erfan; Esfandiari, Ebrahim

    2016-01-01

    Background: Damaged cartilage (e.g., in osteoarthritis) cannot be repaired by the body, so its reconstruction requires cell therapy. Adipose-derived stem cells (ADSCs) are one of the best cell sources for this purpose because tissue engineering techniques can differentiate them into chondrocytes. Chemical and physical inducers are required to differentiate stem cells into chondrocytes. We set out to define the role of the electric field (EF) in inducing the chondrogenesis process. Materials and Methods: A low-frequency EF was applied to ADSCs as a physical inducer of chondrogenesis in a 3D micromass culture system; the ADSCs were extracted from subcutaneous abdominal adipose tissue. Enzyme-linked immunosorbent assay, methyl thiazolyl tetrazolium, real-time polymerase chain reaction and flow cytometry techniques were also used in this study. Results: We found that a 20-minute application of a 1 kHz, 20 mV/cm EF leads to chondrogenesis in ADSCs. Our results further suggest that simultaneous application of the physical (EF) and chemical (transforming growth factor-β3) inducers gives the best results for expression of the collagen type II and SOX9 genes. The EF also significantly decreased expression of the collagen type I and X genes. Conclusion: A low-frequency EF can be a good stimulus to promote chondrogenic differentiation of human ADSCs. PMID:27308269

  2. Multi-frequency acoustic derivation of particle size using 'off-the-shelf' ADCPs.

    NASA Astrophysics Data System (ADS)

    Haught, D. R.; Wright, S. A.; Venditti, J. G.; Church, M. A.

    2015-12-01

    Suspended sediment particle size in rivers is of great interest due to its influence on riverine and coastal morphology, socio-economic viability, and ecological health and restoration. Prediction of suspended sediment transport from hydraulics remains a stubbornly difficult problem, particularly for the washload component, which is controlled by sediment supply from the drainage basin. This has led to a number of methods for continuously monitoring suspended sediment concentration and mean particle size, the most popular currently being hydroacoustic methods. Here, we explore the possibility of using theoretical inversion of the sonar equation to derive an estimate of mean particle size and standard deviation of the grain size distribution (GSD) using three 'off-the-shelf' acoustic Doppler current profilers (ADCPs) with frequencies of 300, 600 and 1200 kHz. The instruments were deployed in the sand-bedded reach of the Fraser River, British Columbia. We use bottle samples collected in the acoustic beams to test acoustic signal inversion methods. Concentrations range from 15 to 300 mg/L and the suspended load at the site is ~25% sand, ~75% silt/clay. Measured mean particle radius from samples ranged from 10 to 40 microns with relative standard deviations ranging from 0.75 to 2.5. Initial results indicate the acoustically derived mean particle radius compares well with measured particle radius, using a theoretical inversion method adapted to the Fraser River sediment.

  3. Suppressing gate errors through extra ions coupled to a cavity in frequency-domain quantum computation using rare-earth-ion-doped crystal

    NASA Astrophysics Data System (ADS)

    Nakamura, Satoshi; Goto, Hayato; Kujiraoka, Mamiko; Ichimura, Kouichi; Quantum Computer Team

    Rare-earth-ion-doped crystals, such as Pr3+: Y2SiO5, are promising materials for scalable quantum computers, because the crystals contain a large number of ions with long coherence times. Frequency-domain quantum computation (FDQC) enables individual ions coupled to a common cavity mode to be employed as qubits by identifying them through their transition frequencies. In the FDQC, detuned operation light also interacts with transitions that are not intended to be operated on, because the ions are irradiated regardless of their positions. This crosstalk causes serious errors in the quantum gates of the FDQC. When ``resonance conditions'' between eigenenergies of the whole system and transition-frequency differences among ions are satisfied, the gate errors increase. For high-fidelity gates, the ions used as qubits must have transitions that avoid these conditions. However, when a large number of ions are employed as qubits, it is difficult to avoid the conditions because of the many combinations of eigenenergies and transitions. We propose a new implementation that uses extra ions to control the resonance conditions, and show the effect of the extra ions in a numerical simulation. Our implementation is useful for realizing a scalable quantum computer based on the FDQC in a rare-earth-ion-doped crystal.

  4. Mean Expected Error in Prediction of Total Body Water: A True Accuracy Comparison between Bioimpedance Spectroscopy and Single Frequency Regression Equations

    PubMed Central

    Abtahi, Shirin; Abtahi, Farhad; Ellegård, Lars; Johannsson, Gudmundur; Bosaeus, Ingvar

    2015-01-01

    For several decades electrical bioimpedance (EBI) has been used to assess body fluid distribution and body composition. Despite the development of several different approaches for assessing total body water (TBW), it remains uncertain whether bioimpedance spectroscopic (BIS) approaches are more accurate than single frequency regression equations. The main objective of this study was to answer this question by calculating the expected accuracy of a single measurement for different EBI methods. The results of this study showed that all methods produced similarly high correlation and concordance coefficients, indicating good accuracy as a method. Even the limits of agreement produced from the Bland-Altman analysis indicated that the performance of single frequency, Sun's prediction equations, at population level was close to the performance of both BIS methods; however, when comparing the Mean Absolute Percentage Error value between the single frequency prediction equations and the BIS methods, a significant difference was obtained, indicating slightly better accuracy for the BIS methods. Despite the higher accuracy of BIS methods over 50 kHz prediction equations at both population and individual level, the magnitude of the improvement was small. Such slight improvement in accuracy of BIS methods is suggested insufficient to warrant their clinical use where the most accurate predictions of TBW are required, for example, when assessing over-fluidic status on dialysis. To reach expected errors below 4-5%, novel and individualized approaches must be developed to improve the accuracy of bioimpedance-based methods for the advent of innovative personalized health monitoring applications. PMID:26137489

  5. Mean Expected Error in Prediction of Total Body Water: A True Accuracy Comparison between Bioimpedance Spectroscopy and Single Frequency Regression Equations.

    PubMed

    Seoane, Fernando; Abtahi, Shirin; Abtahi, Farhad; Ellegård, Lars; Johannsson, Gudmundur; Bosaeus, Ingvar; Ward, Leigh C

    2015-01-01

    For several decades electrical bioimpedance (EBI) has been used to assess body fluid distribution and body composition. Despite the development of several different approaches for assessing total body water (TBW), it remains uncertain whether bioimpedance spectroscopic (BIS) approaches are more accurate than single frequency regression equations. The main objective of this study was to answer this question by calculating the expected accuracy of a single measurement for different EBI methods. The results of this study showed that all methods produced similarly high correlation and concordance coefficients, indicating good accuracy as a method. Even the limits of agreement produced from the Bland-Altman analysis indicated that the performance of single frequency, Sun's prediction equations, at population level was close to the performance of both BIS methods; however, when comparing the Mean Absolute Percentage Error value between the single frequency prediction equations and the BIS methods, a significant difference was obtained, indicating slightly better accuracy for the BIS methods. Despite the higher accuracy of BIS methods over 50 kHz prediction equations at both population and individual level, the magnitude of the improvement was small. Such slight improvement in accuracy of BIS methods is suggested insufficient to warrant their clinical use where the most accurate predictions of TBW are required, for example, when assessing over-fluidic status on dialysis. To reach expected errors below 4-5%, novel and individualized approaches must be developed to improve the accuracy of bioimpedance-based methods for the advent of innovative personalized health monitoring applications. PMID:26137489

  3. Classification of radiological errors in chest radiographs, using support vector machine on the spatial frequency features of false-negative and false-positive regions

    NASA Astrophysics Data System (ADS)

    Pietrzyk, Mariusz W.; Donovan, Tim; Brennan, Patrick C.; Dix, Alan; Manning, David J.

    2011-03-01

    Aim: To optimize automated classification of radiological errors during lung nodule detection from chest radiographs (CxR) using a support vector machine (SVM) run on the spatial frequency features extracted from the local background of selected regions. Background: The majority of the unreported pulmonary nodules are visually detected but not recognized, as shown by the prolonged dwell time values at false-negative regions. Similarly, overestimated nodule locations also capture substantial amounts of foveal attention. Spatial frequency properties of selected local backgrounds are correlated with human observer responses either in terms of accuracy in indicating abnormality position or in the precision of visually sampling the medical images. Methods: Seven radiologists participated in the eye tracking experiments conducted under conditions of pulmonary nodule detection from a set of 20 postero-anterior CxR. The locations with the longest dwell times were identified and subjected to spatial frequency (SF) analysis. The image-based features of selected ROI were extracted with the un-decimated Wavelet Packet Transform. An analysis of variance was run to select SF features and an SVM schema was implemented to classify False-Negative and False-Positive from all ROI. Results: A relatively high overall accuracy was obtained for each individually developed Wavelet-SVM algorithm, with over 90% average correct ratio for error recognition from all prolonged dwell locations. Conclusion: The preliminary results show that combined eye-tracking and image-based features can be used for automated detection of radiological error with SVM. The work is still in progress and not all analytical procedures have been completed, which might have an effect on the specificity of the algorithm.
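
    As a rough illustration of the classification pipeline described above (spatial-frequency features of local backgrounds fed to an SVM), the sketch below uses a decimated wavelet decomposition from PyWavelets in place of the study's un-decimated wavelet packet transform, and random arrays in place of real ROIs; all names and data are hypothetical.

    ```python
    import numpy as np
    import pywt
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    def sf_features(roi, wavelet="db2", level=2):
        """Spatial-frequency features of one ROI: log-energy of each wavelet
        subband (a simplified stand-in for the un-decimated wavelet packet features)."""
        coeffs = pywt.wavedec2(roi, wavelet=wavelet, level=level)
        feats = [np.log1p(np.sum(np.square(coeffs[0])))]          # approximation energy
        for detail in coeffs[1:]:                                  # (cH, cV, cD) per level
            feats.extend(np.log1p(np.sum(np.square(d))) for d in detail)
        return np.array(feats)

    # hypothetical training data: ROIs around prolonged-dwell locations,
    # labelled 1 for false-negative, 0 for false-positive
    rng = np.random.default_rng(1)
    rois = rng.normal(size=(40, 64, 64))
    labels = rng.integers(0, 2, size=40)

    X = np.vstack([sf_features(r) for r in rois])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    clf.fit(X, labels)
    print("training accuracy:", clf.score(X, labels))
    ```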

  7. Error Analysis of Clay-Rock Water Content Estimation with Broadband High-Frequency Electromagnetic Sensors—Air Gap Effect

    PubMed Central

    Bore, Thierry; Wagner, Norman; Delepine Lesoille, Sylvie; Taillade, Frederic; Six, Gonzague; Daout, Franck; Placko, Dominique

    2016-01-01

    Broadband electromagnetic frequency or time domain sensor techniques present high potential for quantitative water content monitoring in porous media. Prior to in situ application, knowledge of how the relationship between the broadband electromagnetic properties of the porous material (clay-rock) and the water content affects the frequency or time domain sensor response is required. For this purpose, dielectric properties of intact clay rock samples, experimentally determined in the frequency range from 1 MHz to 10 GHz, were used as input data in 3-D numerical frequency domain finite element field calculations to model the one port broadband frequency or time domain transfer function for a three-rod sensor embedded in the clay-rock. The sensor response in terms of the reflection factor was analyzed in time domain with classical travel time analysis in combination with an empirical model according to the Topp equation, as well as the theoretical Lichtenecker and Rother model (LRM), to estimate the volumetric water content. The mixture equation considering the appropriate porosity of the investigated material provides a practical and efficient approach for water content estimation based on classical travel time analysis with the onset-method. The inflection method is not recommended for water content estimation in electrically dispersive and absorptive material. Moreover, the results clearly indicate that effects due to coupling of the sensor to the material cannot be neglected. Coupling problems caused by an air gap lead to dramatic effects on water content estimation, even for submillimeter gaps. Thus, the quantitative determination of the in situ water content requires careful sensor installation in order to reach perfect probe-clay-rock coupling. PMID:27096865
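
    A minimal sketch of the classical travel-time route to water content mentioned above (two-way travel time along the sensor rods to apparent permittivity, then the empirical Topp equation) is shown below. The rod length and travel time are hypothetical values, and the study's porosity-aware mixture equation is not reproduced here.

    ```python
    C0 = 299_792_458.0  # speed of light in vacuum, m/s

    def apparent_permittivity(two_way_travel_time_s, rod_length_m):
        """Apparent relative permittivity from the two-way travel time along the rods."""
        return (C0 * two_way_travel_time_s / (2.0 * rod_length_m)) ** 2

    def topp_water_content(eps_a):
        """Empirical Topp et al. (1980) relation: volumetric water content (m3/m3)."""
        return -5.3e-2 + 2.92e-2 * eps_a - 5.5e-4 * eps_a**2 + 4.3e-6 * eps_a**3

    # hypothetical numbers: 0.2 m rods, 4 ns two-way travel time (onset method)
    eps_a = apparent_permittivity(4.0e-9, 0.2)
    print(f"eps_a = {eps_a:.1f}, theta_v = {topp_water_content(eps_a):.3f}")
    ```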

  8. Error Analysis of Clay-Rock Water Content Estimation with Broadband High-Frequency Electromagnetic Sensors--Air Gap Effect.

    PubMed

    Bore, Thierry; Wagner, Norman; Lesoille, Sylvie Delepine; Taillade, Frederic; Six, Gonzague; Daout, Franck; Placko, Dominique

    2016-01-01

    Broadband electromagnetic frequency or time domain sensor techniques present high potential for quantitative water content monitoring in porous media. Prior to in situ application, knowledge of how the relationship between the broadband electromagnetic properties of the porous material (clay-rock) and the water content affects the frequency or time domain sensor response is required. For this purpose, dielectric properties of intact clay rock samples, experimentally determined in the frequency range from 1 MHz to 10 GHz, were used as input data in 3-D numerical frequency domain finite element field calculations to model the one port broadband frequency or time domain transfer function for a three-rod sensor embedded in the clay-rock. The sensor response in terms of the reflection factor was analyzed in time domain with classical travel time analysis in combination with an empirical model according to the Topp equation, as well as the theoretical Lichtenecker and Rother model (LRM), to estimate the volumetric water content. The mixture equation considering the appropriate porosity of the investigated material provides a practical and efficient approach for water content estimation based on classical travel time analysis with the onset-method. The inflection method is not recommended for water content estimation in electrically dispersive and absorptive material. Moreover, the results clearly indicate that effects due to coupling of the sensor to the material cannot be neglected. Coupling problems caused by an air gap lead to dramatic effects on water content estimation, even for submillimeter gaps. Thus, the quantitative determination of the in situ water content requires careful sensor installation in order to reach perfect probe-clay-rock coupling. PMID:27096865

  9. Implications of the shape of design hyetograph in the derived flood frequency distribution

    NASA Astrophysics Data System (ADS)

    Sordo-Ward, A.; Bianucci, P.; Garrote, L.

    2012-04-01

    Hydrometeorological methods for rainfall-runoff transformation are frequently used when the hydrological design of hydraulic infrastructures is considered. These methods require determining the design storm, which is usually characterised by the return period of its total depth of precipitation. On the other hand, the shape of the hyetograph, i.e. the temporal pattern of the storm, has a relevant influence on the resulting hydrograph. In this work we analysed the influence that the within-storm rainfall intensity distribution has on the derived flood frequency (DFF) law. This was addressed by comparing the DFFs obtained from two different ensembles of hyetographs with the same total depth frequency distribution and constant total duration. One ensemble of hyetographs (BA) was determined using the alternating blocks method, which is usually assumed to provide a more adverse hydrological load. The second ensemble (MC) was obtained using a stochastic storm generator developed in a Monte Carlo framework. The ratios between corresponding maximum flows were calculated for selected return periods (RP) as a measure of the difference between both DFFs. The variation of this quotient was analysed with respect to the return period and basin configuration. We considered three different discretization scales for the 1241-km2 Manzanares River basin with outlet near Rivas-Vaciamadrid, in the Region of Madrid (Spain). The three levels correspond to high resolution (HR, basin divided into 62 sub-catchments), medium resolution (MR, 33 sub-catchments), and low resolution (LR, 14 sub-catchments). For the case studied, and for the three configurations considered, the DFF obtained from the alternating blocks hyetograph was not as adverse as expected. The flow peak ratio remained practically constant across the RP range. While the BA-quantiles for each subbasin's DFF were higher than the MC-quantiles by 10% to 40%, the peak flow ratios at the catchment outlet took values close to one
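
    For reference, the alternating-block construction used for the BA ensemble can be sketched as follows; the incremental depths in the example are hypothetical and would in practice come from a depth-duration relation for the chosen return period.

    ```python
    import numpy as np

    def alternating_blocks(incremental_depths):
        """Arrange incremental rainfall depths (one value per time block) into an
        alternating-block design hyetograph: largest block at the centre, the
        following ones placed alternately to its right and left."""
        depths = np.sort(np.asarray(incremental_depths, dtype=float))[::-1]  # descending
        n = len(depths)
        centre = (n - 1) // 2
        hyetograph = np.empty(n)
        for k, d in enumerate(depths):
            offset = (k + 1) // 2
            hyetograph[centre + offset if k % 2 else centre - offset] = d
        return hyetograph

    # hypothetical incremental depths (mm) for six equal time blocks
    print(alternating_blocks([8.0, 12.0, 30.0, 18.0, 6.0, 4.0]))
    # -> [ 6. 12. 30. 18.  8.  4.]
    ```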

  10. The effect of verb semantic class and verb frequency (entrenchment) on children's and adults' graded judgements of argument-structure overgeneralization errors.

    PubMed

    Ambridge, Ben; Pine, Julian M; Rowland, Caroline F; Young, Chris R

    2008-01-01

    Participants (aged 5-6 yrs, 9-10 yrs and adults) rated (using a five-point scale) grammatical (intransitive) and overgeneralized (transitive causative)(1) uses of a high frequency, low frequency and novel intransitive verb from each of three semantic classes [Pinker, S. (1989a). Learnability and cognition: The acquisition of argument structure. Cambridge, MA: MIT Press]: "directed motion" (fall, tumble), "going out of existence" (disappear, vanish) and "semivoluntary expression of emotion" (laugh, giggle). In support of Pinker's semantic verb class hypothesis, participants' preference for grammatical over overgeneralized uses of novel (and English) verbs increased between 5-6 yrs and 9-10 yrs, and was greatest for the latter class, which is associated with the lowest degree of direct external causation (the prototypical meaning of the transitive causative construction). In support of Braine and Brooks's [Braine, M.D.S., & Brooks, P.J. (1995). Verb argument structure and the problem of avoiding an overgeneral grammar. In M. Tomasello & W. E. Merriman (Eds.), Beyond names for things: Young children's acquisition of verbs (pp. 352-376). Hillsdale, NJ: Erlbaum] entrenchment hypothesis, all participants showed the greatest preference for grammatical over ungrammatical uses of high frequency verbs, with this preference smaller for low frequency verbs, and smaller again for novel verbs. We conclude that both the formation of semantic verb classes and entrenchment play a role in children's retreat from argument-structure overgeneralization errors. PMID:17316595

  11. Sampling errors for satellite-derived tropical rainfall - Monte Carlo study using a space-time stochastic model

    NASA Technical Reports Server (NTRS)

    Bell, Thomas L.; Abdullah, A.; Martin, Russell L.; North, Gerald R.

    1990-01-01

    Estimates of monthly average rainfall based on satellite observations from a low earth orbit will differ from the true monthly average because the satellite observes a given area only intermittently. This sampling error inherent in satellite monitoring of rainfall would occur even if the satellite instruments could measure rainfall perfectly. The size of this error is estimated for a satellite system being studied at NASA, the Tropical Rainfall Measuring Mission (TRMM). First, the statistical description of rainfall on scales from 1 to 1000 km is examined in detail, based on rainfall data from the Global Atmospheric Research Project Atlantic Tropical Experiment (GATE). A TRMM-like satellite is flown over a two-dimensional time-evolving simulation of rainfall using a stochastic model with statistics tuned to agree with GATE statistics. The distribution of sampling errors found from many months of simulated observations is found to be nearly normal, even though the distribution of area-averaged rainfall is far from normal. For a range of orbits likely to be employed in TRMM, sampling error is found to be less than 10 percent of the mean for rainfall averaged over a 500 x 500 sq km area.
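
    The idea of estimating the sampling error by "flying" an intermittent observer over a time-evolving rain field can be illustrated with a toy Monte Carlo experiment; the truncated AR(1) series below merely stands in for the GATE-tuned space-time stochastic model, and the revisit intervals are arbitrary.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def monthly_sampling_error(revisit_hours, n_months=200, hours=720, rho=0.95):
        """Toy Monte Carlo estimate of the relative sampling error made when a
        monthly mean rain rate is estimated from visits every `revisit_hours` hours."""
        rel_errors = []
        for _ in range(n_months):
            x = np.empty(hours)                       # toy area-averaged rain rate (mm/h)
            x[0] = rng.exponential(1.0)
            for t in range(1, hours):
                x[t] = max(0.0, rho * x[t - 1] + rng.normal(0.0, 0.5))
            rel_errors.append(x[::revisit_hours].mean() / x.mean() - 1.0)
        return float(np.std(rel_errors))

    for h in (3, 6, 12):
        print(f"revisit every {h:2d} h -> relative sampling error ~ {monthly_sampling_error(h):.1%}")
    ```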

  12. Sampling errors for satellite-derived tropical rainfall: Monte Carlo study using a space-time stochastic model

    SciTech Connect

    Bell, T.L.; Abdullah, A.; Martin, R.L.; North, G.R.

    1990-02-28

    Estimates of monthly average rainfall based on satellite observations from a low Earth orbit will differ from the true monthly average because the satellite observes a given area only intermittently. This sampling error inherent in satellite monitoring of rainfall would occur even if the satellite instruments could measure rainfall perfectly. The authors estimate the size of this error for a satellite system being studied at NASA, the Tropical Rainfall Measuring Mission (TRMM). They first examine in detail the statistical description of rainfall on scales from 1 to 1000 km, based on rainfall data from the Global Atmospheric Research Project Atlantic Tropical Experiment (GATE). A TRMM-like satellite is flown over a two-dimensional time-evolving simulation of rainfall using a stochastic model with statistics tuned to agree with GATE statistics. The distribution of sampling errors found from many months of simulated observations is found to be nearly normal, even though the distribution of area-averaged rainfall is far from normal. For a range of orbits likely to be employed in TRMM, sampling error is found to be less than 10% of the mean for rainfall averaged over a 500 × 500 km2 area.

  13. An adaptive strategy on the error of the objective functions for uncertainty-based derivative-free optimization

    NASA Astrophysics Data System (ADS)

    Fusi, F.; Congedo, P. M.

    2016-03-01

    In this work, a strategy is developed to deal with the error affecting the objective functions in uncertainty-based optimization. We refer to the problems where the objective functions are the statistics of a quantity of interest computed by an uncertainty quantification technique that propagates some uncertainties of the input variables through the system under consideration. In real problems, the statistics are computed by a numerical method and therefore they are affected by a certain level of error, depending on the chosen accuracy. The errors on the objective function can be interpreted with the abstraction of a bounding box around the nominal estimation in the objective functions space. In addition, in some cases the uncertainty quantification methods providing the objective functions also supply the possibility of adaptive refinement to reduce the error bounding box. The novel method relies on the exchange of information between the outer loop based on the optimization algorithm and the inner uncertainty quantification loop. In particular, in the inner uncertainty quantification loop, a control is performed to decide whether a refinement of the bounding box for the current design is appropriate or not. In single-objective problems, the current bounding box is compared to the current optimal design. In multi-objective problems, the decision is based on the comparison of the error bounding box of the current design and the current Pareto front. With this strategy, fewer computations are made for clearly dominated solutions and an accurate estimate of the objective function is provided for the interesting, non-dominated solutions. The results presented in this work prove that the proposed method improves the efficiency of the global loop, while preserving the accuracy of the final Pareto front.
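
    A possible form of the multi-objective refinement test described above (refine the error bounding box only for designs that might still be non-dominated) is sketched below; the dominance test and the numerical values are illustrative assumptions, not the authors' exact criterion.

    ```python
    import numpy as np

    def clearly_dominated(box_lo, pareto_front):
        """True if some current Pareto point dominates even the best-case corner
        (lower bound, all objectives minimised) of the design's error box."""
        for p in pareto_front:
            if np.all(p <= box_lo) and np.any(p < box_lo):
                return True
        return False

    def needs_refinement(obj_estimate, half_width, pareto_front):
        """Refine (tighten the UQ error box) only if the design might still be
        non-dominated; skip the extra UQ effort for clearly dominated designs."""
        box_lo = np.asarray(obj_estimate, dtype=float) - np.asarray(half_width, dtype=float)
        return not clearly_dominated(box_lo, pareto_front)

    # hypothetical current Pareto front and two candidate designs (2 objectives)
    front = np.array([[1.0, 4.0], [2.0, 2.0], [4.0, 1.0]])
    print(needs_refinement([2.2, 2.2], [0.3, 0.3], front))   # True  -> worth refining
    print(needs_refinement([6.0, 6.0], [0.3, 0.3], front))   # False -> clearly dominated
    ```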

  14. Fluctuating neutron star magnetosphere: braking indices of eight pulsars, frequency second derivatives of 222 pulsars and 15 magnetars

    NASA Astrophysics Data System (ADS)

    Ou, Z. W.; Tong, H.; Kou, F. F.; Ding, G. Q.

    2016-04-01

    Eight pulsars have low braking indices, which challenge the magnetic dipole braking of pulsars. 222 pulsars and 15 magnetars have an abnormal distribution of frequency second derivatives, which also contradicts the classical understanding. How neutron star magnetospheric activities affect these two phenomena is investigated using the wind braking model of pulsars. It is based on the observational evidence that pulsar timing is correlated with emission and that both aspects reflect the magnetospheric activities. Fluctuations are unavoidable for a physical neutron star magnetosphere. Young pulsars have meaningful braking indices, while for old pulsars and magnetars the fluctuation term dominates the frequency second derivative. This can explain both the braking indices and the frequency second derivatives of pulsars in a uniform way. The braking indices of the eight pulsars are the combined effect of magnetic dipole radiation and the particle wind. During the lifetime of a pulsar, its braking index will evolve from three to one. Pulsars with low braking indices may put strong constraints on the particle acceleration process in the neutron star magnetosphere. The effect of pulsar death should be considered during the long-term rotational evolution of pulsars. An equation like the Langevin equation for Brownian motion was derived for pulsar spin-down. The fluctuation in the neutron star magnetosphere can be either periodic or random; both cases result in anomalous frequency second derivatives and give similar results. The magnetospheric activities of magnetars are always stronger than those of normal pulsars.
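
    An illustrative Langevin-type spin-down equation of the kind referred to above might be written as follows (generic symbols, not the paper's exact expression): ν is the spin frequency, n the braking index, K the braking-torque coefficient, and ξ(t) a fluctuating magnetospheric torque term.

    ```latex
    % Generic Langevin-type spin-down equation (illustrative form only):
    % secular braking plus a zero-mean fluctuating magnetospheric torque.
    \frac{d\nu}{dt} = -K\,\nu^{\,n} + \xi(t),
    \qquad \langle \xi(t) \rangle = 0,
    \qquad \langle \xi(t)\,\xi(t') \rangle = \sigma^{2}\,\delta(t - t').
    ```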

  15. Low-Frequency Tropical Pacific Sea-Surface Temperature over the Past Millennium: Reconstruction and Error Estimates

    NASA Astrophysics Data System (ADS)

    Emile-Geay, J.; Cobb, K.; Mann, M. E.; Rutherford, S. D.; Wittenberg, A. T.

    2009-12-01

    Since surface conditions over the tropical Pacific can organize climate variability at near-global scales, and since there is wide disagreement over their projected course under greenhouse forcing, it is of considerable interest to reconstruct their low-frequency evolution over the past millennium. To this end, we make use of the hybrid RegEM climate reconstruction technique (Mann et al. 2008; Schneider 2001), which aims to reconstruct decadal and longer-scale variations of sea-surface temperature (SST) from an array of climate proxies. We first assemble a database of published and new, high-resolution proxy data from ENSO-sensitive regions, screened for significant correlation with a common ENSO metric (NINO3 index). Proxy observations come primarily from coral, speleothem, marine and lake sediment, and ice core sources, as well as long tree-ring chronologies. The hybrid RegEM methodology is then validated within a pseudoproxy context using two coupled general circulation model simulations of the past millennium’s climate; one using the NCAR CSM1.4, the other the GFDL CM2.1, models (Ammann et al. 2007; Wittenberg 2009). Validation results are found to be sensitive to the ratio of interannual to lower-frequency variability, with poor reconstruction skill for CM2.1 but good skill for CSM1.4. The latter features prominent changes in NINO3 at decadal-to-centennial timescales, which the network and method detect relatively easily. In contrast, the unforced CM2.1 NINO3 is dominated by interannual variations, and its long-term oscillations are more difficult to reconstruct. These two limit cases bracket the observed NINO3 behavior over the historical period. We then apply the method to the proxy observations and extend the decadal-scale history of tropical Pacific SSTs over the past millennium, analyzing the sensitivity of such reconstruction to the inclusion of various key proxy timeseries and details of the statistical analysis, emphasizing metrics of uncertainty

  16. Large Scale Parameter Estimation Problems in Frequency-Domain Elastodynamics Using an Error in Constitutive Equation Functional

    PubMed Central

    Banerjee, Biswanath; Walsh, Timothy F.; Aquino, Wilkins; Bonnet, Marc

    2012-01-01

    This paper presents the formulation and implementation of an Error in Constitutive Equations (ECE) method suitable for large-scale inverse identification of linear elastic material properties in the context of steady-state elastodynamics. In ECE-based methods, the inverse problem is postulated as an optimization problem in which the cost functional measures the discrepancy in the constitutive equations that connect kinematically admissible strains and dynamically admissible stresses. Furthermore, in a more recent modality of this methodology introduced by Feissel and Allix (2007), referred to as the Modified ECE (MECE), the measured data is incorporated into the formulation as a quadratic penalty term. We show that a simple and efficient continuation scheme for the penalty term, suggested by the theory of quadratic penalty methods, can significantly accelerate the convergence of the MECE algorithm. Furthermore, a (block) successive over-relaxation (SOR) technique is introduced, enabling the use of existing parallel finite element codes with minimal modification to solve the coupled system of equations that arises from the optimality conditions in MECE methods. Our numerical results demonstrate that the proposed methodology can successfully reconstruct the spatial distribution of elastic material parameters from partial and noisy measurements in as few as ten iterations in a 2D example and fifty in a 3D example. We show (through numerical experiments) that the proposed continuation scheme can improve the rate of convergence of MECE methods by at least an order of magnitude versus the alternative of using a fixed penalty parameter. Furthermore, the proposed block SOR strategy coupled with existing parallel solvers produces a computationally efficient MECE method that can be used for large scale materials identification problems, as demonstrated on a 3D example involving about 400,000 unknown moduli. Finally, our numerical results suggest that the proposed MECE

  17. The effect of spatial truncation error on variance of gravity anomalies derived from inversion of satellite orbital and gradiometric data

    NASA Astrophysics Data System (ADS)

    Eshagh, Mehdi; Ghorbannia, Morteza

    2014-07-01

    The spatial truncation error (STE) is a significant systematic error in the integral inversion of satellite gradiometric and orbital data to gravity anomalies at sea level. In order to reduce the effect of STE, a larger area than the desired one is considered in the inversion process, but the anomalies located in its central part are selected as the final results. The STE influences the variance of the results as well because the residual vector, which is contaminated with STE, is used for its estimation. The situation is even more complicated in variance component estimation because of its iterative nature. In this paper, we present a strategy to reduce the effect of STE on the a posteriori variance factor and the variance components for inversion of satellite orbital and gradiometric data to gravity anomalies at sea level. The idea is to define two windowing matrices for reducing this error from the estimated residuals and anomalies. Our simulation studies over Fennoscandia show that the differences between the 0.5°×0.5° gravity anomalies obtained from orbital data and an existing gravity model have standard deviation (STD) and root mean squared error (RMSE) of 10.9 and 12.1 mGal, respectively, and those obtained from gradiometric data have 7.9 and 10.1 in the same units. In the case that they are combined using windowed variance components the STD and RMSE become 6.1 and 8.4 mGal. Also, the mean value of the estimated RMSE after using the windowed variances is in agreement with the RMSE of the differences between the estimated anomalies and those obtained from the gravity model.

  18. Error in Radar-Derived Soil Moisture due to Roughness Parameterization: An Analysis Based on Synthetical Surface Profiles

    PubMed Central

    Lievens, Hans; Vernieuwe, Hilde; Álvarez-Mozos, Jesús; De Baets, Bernard; Verhoest, Niko E.C.

    2009-01-01

    In the past decades, many studies on soil moisture retrieval from SAR demonstrated a poor correlation between the top layer soil moisture content and observed backscatter coefficients, which mainly has been attributed to difficulties involved in the parameterization of surface roughness. The present paper describes a theoretical study, performed on synthetical surface profiles, which investigates how errors on roughness parameters are introduced by standard measurement techniques, and how they will propagate through the commonly used Integral Equation Model (IEM) into a corresponding soil moisture retrieval error for some of the currently most used SAR configurations. Key aspects influencing the error on the roughness parameterization and consequently on soil moisture retrieval are: the length of the surface profile, the number of profile measurements, the horizontal and vertical accuracy of profile measurements and the removal of trends along profiles. Moreover, it is found that soil moisture retrieval with C-band configuration generally is less sensitive to inaccuracies in roughness parameterization than retrieval with L-band configuration. PMID:22399956

  19. A Derivation of the Long-Term Degradation of a Pulsed Atomic Frequency Standard from a Control-Loop Model

    NASA Technical Reports Server (NTRS)

    Greenhall, C. A.

    1996-01-01

    The phase of a frequency standard that uses periodic interrogation and control of a local oscillator (LO) is degraded by a long-term random-walk component induced by downconversion of LO noise into the loop passband. The Dick formula for the noise level of this degradation is derived from an explicit solution of an LO control-loop model.

  20. Programming Errors in APL.

    ERIC Educational Resources Information Center

    Kearsley, Greg P.

    This paper discusses and provides some preliminary data on errors in APL programming. Data were obtained by analyzing listings of 148 complete and partial APL sessions collected from student terminal rooms at the University of Alberta. Frequencies of errors for the various error messages are tabulated. The data, however, are limited because they…

  1. The Importance of Measurement Errors for Deriving Accurate Reference Leaf Area Index Maps for Validation of Moderate-Resolution Satellite LAI Products

    NASA Technical Reports Server (NTRS)

    Huang, Dong; Yang, Wenze; Tan, Bin; Rautiainen, Miina; Zhang, Ping; Hu, Jiannan; Shabanov, Nikolay V.; Linder, Sune; Knyazikhin, Yuri; Myneni, Ranga B.

    2006-01-01

    The validation of moderate-resolution satellite leaf area index (LAI) products such as those operationally generated from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor data requires reference LAI maps developed from field LAI measurements and fine-resolution satellite data. Errors in field measurements and satellite data determine the accuracy of the reference LAI maps. This paper describes a method by which reference maps of known accuracy can be generated with knowledge of errors in fine-resolution satellite data. The method is demonstrated with data from an international field campaign in a boreal coniferous forest in northern Sweden, and Enhanced Thematic Mapper Plus images. The reference LAI map thus generated is used to assess modifications to the MODIS LAI/fPAR algorithm recently implemented to derive the next generation of the MODIS LAI/fPAR product for this important biome type.

  2. SAR image quality effects of damped phase and amplitude errors

    NASA Astrophysics Data System (ADS)

    Zelenka, Jerry S.; Falk, Thomas

    The effects of damped multiplicative (amplitude or phase) errors on the image quality of synthetic-aperture radar systems are considered. These types of errors can result from aircraft maneuvers or the mechanical steering of an antenna. The proper treatment of damped multiplicative errors can lead to related design specifications and possibly an enhanced collection capability. Only small, high-frequency errors are considered. Expressions for the average intensity and energy associated with a damped multiplicative error are presented and used to derive graphic results. A typical example is used to show how to apply the results of this effort.

  3. Reducing epistemic errors in water quality modelling through high-frequency data and stakeholder collaboration: the case of an industrial spill

    NASA Astrophysics Data System (ADS)

    Krueger, Tobias; Inman, Alex; Paling, Nick

    2014-05-01

    Catchment management, as driven by legislation such as the EU WFD or grassroots initiatives, requires the apportionment of in-stream pollution to point and diffuse sources so that mitigation measures can be targeted and costs and benefits shared. Source apportionment is typically done via modelling. Given model imperfections and input data errors, it has become state-of-the-art to employ an uncertainty framework. However, what is not easily incorporated in such a framework, and currently much discussed in hydrology, are epistemic uncertainties, i.e. those uncertainties that relate to lack of knowledge about processes and data. For example, what if an otherwise negligible source suddenly matters because of an accidental pollution incident? In this paper we present such a case of epistemic error, an industrial spill ignored in a water quality model, demonstrate the bias of the resulting model simulations, and show how the error was discovered somewhat incidentally by auxiliary high-frequency data and finally corrected through the collective intelligence of a stakeholder network. We suggest that accidental pollution incidents like this are a widespread, though largely ignored, problem. Hence our discussion will reflect on the practice of catchment monitoring, modelling and management in general. The case itself occurred as part of ongoing modelling support in the Tamar catchment, one of the priority catchments of the UK government's new, more decentralised and collaborative approach to managing water resources. An Extended Export Coefficient Model (ECM+) had been developed with stakeholders to simulate transfers of nutrients (N & P), sediment and Faecal Coliforms from land to water and down the river network as a function of sewage treatment options, land use, livestock densities and farm management practices. In the process of updating the model for the hydrological years 2008-2012 an over-prediction of the annual average P concentration by the model was found at

  4. Derivation and Application of a Global Albedo yielding an Optical Brightness To Physical Size Transformation Free of Systematic Errors

    NASA Technical Reports Server (NTRS)

    Mulrooney, Dr. Mark K.; Matney, Dr. Mark J.

    2007-01-01

    Orbital object data acquired via optical telescopes can play a crucial role in accurately defining the space environment. Radar systems probe the characteristics of small debris by measuring the reflected electromagnetic energy from an object of the same order of size as the wavelength of the radiation. This signal is affected by electrical conductivity of the bulk of the debris object, as well as its shape and orientation. Optical measurements use reflected solar radiation with wavelengths much smaller than the size of the objects. Just as with radar, the shape and orientation of an object are important, but we only need to consider the surface electrical properties of the debris material (i.e., the surface albedo), not the bulk electromagnetic properties. As a result, these two methods are complementary in that they measure somewhat independent physical properties to estimate the same thing, debris size. Short arc optical observations such as are typical of NASA's Liquid Mirror Telescope (LMT) give enough information to estimate an Assumed Circular Orbit (ACO) and an associated range. This information, combined with the apparent magnitude, can be used to estimate an "absolute" brightness (scaled to a fixed range and phase angle). This absolute magnitude is what is used to estimate debris size. However, the shape and surface albedo effects make the size estimates subject to systematic and random errors, such that it is impossible to ascertain the size of an individual object with any certainty. However, as has been shown with radar debris measurements, that does not preclude the ability to estimate the size distribution of a number of objects statistically. After systematic errors have been eliminated (range errors, phase function assumptions, photometry) there remains a random geometric albedo distribution that relates object size to absolute magnitude. Measurements by the LMT of a subset of tracked debris objects with sizes estimated from their radar cross

  5. An analysis of the effects of secondary reflections on dual-frequency reflectometers

    NASA Technical Reports Server (NTRS)

    Hearn, C. P.; Cockrell, C. R.; Harrah, S. D.

    1990-01-01

    The error-producing mechanism involving secondary reflections in a dual-frequency, distance measuring reflectometer is examined analytically. Equations defining the phase, and hence distance, error are derived. The error-reducing potential of frequency-sweeping is demonstrated. It is shown that a single spurious return can be completely nullified by optimizing the sweep width.
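
    A toy numerical illustration of the error mechanism follows: for a single target, the distance is recovered from the phase difference between the two frequencies, d = c·Δφ/(4π·Δf); adding a weaker secondary reflection perturbs the measured phases and biases the recovered distance. The frequencies, amplitudes and distances below are hypothetical, and the sketch does not reproduce the paper's frequency-sweeping analysis.

    ```python
    import numpy as np

    C0 = 299_792_458.0  # speed of light, m/s

    def measured_phase(freq_hz, d_main, d_spur=None, a_spur=0.0):
        """Wrapped phase of the total return at one frequency: a primary target at
        d_main plus an optional weaker secondary reflection at d_spur."""
        s = np.exp(-1j * 4 * np.pi * freq_hz * d_main / C0)
        if d_spur is not None:
            s = s + a_spur * np.exp(-1j * 4 * np.pi * freq_hz * d_spur / C0)
        return np.angle(s)

    def dual_freq_distance(f1, f2, **target):
        """Distance from the phase difference between two frequencies,
        d = c * dphi / (4*pi*df), valid within the unambiguous range c/(2*df)."""
        dphi = (measured_phase(f1, **target) - measured_phase(f2, **target)) % (2 * np.pi)
        return C0 * dphi / (4 * np.pi * (f2 - f1))

    f1, f2 = 10.00e9, 10.05e9   # 50 MHz spacing -> ~3 m unambiguous range
    print(dual_freq_distance(f1, f2, d_main=1.20))                           # ideal: 1.20 m
    print(dual_freq_distance(f1, f2, d_main=1.20, d_spur=2.00, a_spur=0.3))  # biased by the spur
    ```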

  6. Standard Errors for Matrix Correlations.

    ERIC Educational Resources Information Center

    Ogasawara, Haruhiko

    1999-01-01

    Derives the asymptotic standard errors and intercorrelations for several matrix correlations assuming multivariate normality for manifest variables and derives the asymptotic standard errors of the matrix correlations for two factor-loading matrices. (SLD)

  7. TOA/FOA geolocation error analysis.

    SciTech Connect

    Mason, John Jeffrey

    2008-08-01

    This paper describes how confidence intervals can be calculated for radiofrequency emitter position estimates based on time-of-arrival and frequency-of-arrival measurements taken at several satellites. These confidence intervals take the form of 50th and 95th percentile circles and ellipses to convey horizontal error and linear intervals to give vertical error. We consider both cases where an assumed altitude is and is not used. Analysis of velocity errors is also considered. We derive confidence intervals for horizontal velocity magnitude and direction including the case where the emitter velocity is assumed to be purely horizontal, i.e., parallel to the ellipsoid. Additionally, we derive an algorithm that we use to combine multiple position fixes to reduce location error. The algorithm uses all available data, after more than one location estimate for an emitter has been made, in a mathematically optimal way.
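
    The abstract does not spell out the combination algorithm; a standard "mathematically optimal" choice for fusing independent position fixes is inverse-covariance weighting, sketched below with hypothetical 2-D fixes. This is an assumption about the general idea, not the paper's exact algorithm.

    ```python
    import numpy as np

    def combine_fixes(estimates, covariances):
        """Fuse independent position estimates x_i with covariances P_i using
        inverse-covariance weighting: P = (sum_i P_i^-1)^-1, x = P * sum_i P_i^-1 x_i."""
        dim = len(estimates[0])
        info = np.zeros((dim, dim))
        info_state = np.zeros(dim)
        for x, P in zip(estimates, covariances):
            W = np.linalg.inv(P)
            info += W
            info_state += W @ x
        P_combined = np.linalg.inv(info)
        return P_combined @ info_state, P_combined

    # hypothetical 2-D (east, north) fixes in km
    fixes = [np.array([10.2, 4.9]), np.array([9.8, 5.3])]
    covs = [np.diag([0.5**2, 0.8**2]), np.diag([0.9**2, 0.4**2])]
    x_hat, P_hat = combine_fixes(fixes, covs)
    print("combined fix:", x_hat)
    print("1-sigma:", np.sqrt(np.diag(P_hat)))
    ```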

  8. Design and analysis of vector color error diffusion halftoning systems.

    PubMed

    Damera-Venkata, N; Evans, B L

    2001-01-01

    Traditional error diffusion halftoning is a high quality method for producing binary images from digital grayscale images. Error diffusion shapes the quantization noise power into the high frequency regions where the human eye is the least sensitive. Error diffusion may be extended to color images by using error filters with matrix-valued coefficients to take into account the correlation among color planes. For vector color error diffusion, we propose three contributions. First, we analyze vector color error diffusion based on a new matrix gain model for the quantizer, which linearizes vector error diffusion. The model predicts the key characteristics of color error diffusion, esp. image sharpening and noise shaping. The proposed model includes linear gain models for the quantizer by Ardalan and Paulos (1987) and by Kite et al. (1997) as special cases. Second, based on our model, we optimize the noise shaping behavior of color error diffusion by designing error filters that are optimum with respect to any given linear spatially-invariant model of the human visual system. Our approach allows the error filter to have matrix-valued coefficients and diffuse quantization error across color channels in an opponent color representation. Thus, the noise is shaped into frequency regions of reduced human color sensitivity. To obtain the optimal filter, we derive a matrix version of the Yule-Walker equations which we solve by using a gradient descent algorithm. Finally, we show that the vector error filter has a parallel implementation as a polyphase filterbank. PMID:18255498
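
    For readers unfamiliar with error diffusion, the scalar (grayscale) special case with Floyd-Steinberg weights is sketched below; the vector color scheme in the paper generalises the scalar error and filter taps to vectors and matrix-valued coefficients, which this toy example does not implement.

    ```python
    import numpy as np

    def floyd_steinberg(gray):
        """Scalar error diffusion on a grayscale image in [0, 1]: quantize each
        pixel and diffuse the quantization error to unprocessed neighbours."""
        img = gray.astype(float).copy()
        h, w = img.shape
        out = np.zeros_like(img)
        for y in range(h):
            for x in range(w):
                out[y, x] = 1.0 if img[y, x] >= 0.5 else 0.0
                err = img[y, x] - out[y, x]
                if x + 1 < w:               img[y,     x + 1] += err * 7 / 16
                if y + 1 < h and x > 0:     img[y + 1, x - 1] += err * 3 / 16
                if y + 1 < h:               img[y + 1, x    ] += err * 5 / 16
                if y + 1 < h and x + 1 < w: img[y + 1, x + 1] += err * 1 / 16
        return out

    ramp = np.tile(np.linspace(0, 1, 64), (16, 1))   # toy grayscale ramp
    print(floyd_steinberg(ramp).mean())              # halftone roughly preserves the mean
    ```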

  9. Error prone translesion synthesis past gamma-hydroxypropano deoxyguanosine, the primary acrolein-derived adduct in mammalian cells.

    PubMed

    Kanuri, Manorama; Minko, Irina G; Nechev, Lubomir V; Harris, Thomas M; Harris, Constance M; Lloyd, R Stephen

    2002-05-24

    8-Hydroxy-5,6,7,8-tetrahydropyrimido[1,2-a]purin-10(3H)-one,3-(2'-deoxyriboside) (1,N(2)-gamma-hydroxypropano deoxyguanosine, gamma-HOPdG) is a major DNA adduct that forms as a result of exposure to acrolein, an environmental pollutant and a product of endogenous lipid peroxidation. gamma-HOPdG has been shown previously not to be a miscoding lesion when replicated in Escherichia coli. In contrast to those prokaryotic studies, in vivo replication and mutagenesis assays in COS-7 cells using single stranded DNA containing a specific gamma-HOPdG adduct revealed that the gamma-HOPdG adduct was significantly mutagenic. Analyses revealed both transversion and transition types of mutations at an overall mutagenic frequency of 7.4 x 10(-2)/translesion synthesis. In vitro, gamma-HOPdG strongly blocks DNA synthesis by two major polymerases, pol delta and pol epsilon. Replicative blockage of pol delta by gamma-HOPdG could be diminished by the addition of proliferating cell nuclear antigen, leading to highly mutagenic translesion bypass across this adduct. The differential functioning and processing capacities of the mammalian polymerases may be responsible for the higher mutation frequencies observed in this study when compared with the accurate and efficient nonmutagenic bypass observed in the bacterial system. PMID:11889127

  10. Design and analysis of tilt integral derivative controller with filter for load frequency control of multi-area interconnected power systems.

    PubMed

    Kumar Sahu, Rabindra; Panda, Sidhartha; Biswal, Ashutosh; Chandra Sekhar, G T

    2016-03-01

    In this paper, a novel Tilt Integral Derivative controller with Filter (TIDF) is proposed for Load Frequency Control (LFC) of multi-area power systems. Initially, a two-area power system is considered and the parameters of the TIDF controller are optimized using Differential Evolution (DE) algorithm employing an Integral of Time multiplied Absolute Error (ITAE) criterion. The superiority of the proposed approach is demonstrated by comparing the results with some recently published heuristic approaches such as Firefly Algorithm (FA), Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) optimized PID controllers for the same interconnected power system. Investigations reveal that proposed TIDF controllers provide better dynamic response compared to PID controller in terms of minimum undershoots and settling times of frequency as well as tie-line power deviations following a disturbance. The proposed approach is also extended to two widely used three area test systems considering nonlinearities such as Generation Rate Constraint (GRC) and Governor Dead Band (GDB). To improve the performance of the system, a Thyristor Controlled Series Compensator (TCSC) is also considered and the performance of TIDF controller in presence of TCSC is investigated. It is observed that system performance improves with the inclusion of TCSC. Finally, sensitivity analysis is carried out to test the robustness of the proposed controller by varying the system parameters, operating condition and load pattern. It is observed that the proposed controllers are robust and perform satisfactorily with variations in operating condition, system parameters and load pattern. PMID:26712682
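
    The ITAE criterion used to tune the controller is simply J = ∫ t·|e(t)| dt evaluated over the simulated response; a minimal sketch is given below, with a hypothetical frequency-deviation trace standing in for the multi-area error signal (which in the paper combines frequency and tie-line power deviations).

    ```python
    import numpy as np

    def itae(t, error):
        """Integral of Time multiplied Absolute Error, J = integral of t*|e(t)| dt,
        evaluated with the trapezoidal rule from a sampled response."""
        integrand = t * np.abs(error)
        return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t)))

    # hypothetical frequency-deviation response of one area after a load step (Hz)
    t = np.linspace(0.0, 20.0, 2001)
    delta_f = 0.02 * np.exp(-0.4 * t) * np.sin(2.0 * t)
    print(f"ITAE = {itae(t, delta_f):.4f}")
    ```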

  11. Low frequency internal vibrations of norbornane and its derivatives studied by IINS and quantum chemistry calculations

    SciTech Connect

    Holderna-Natkaniec, K.; Natkaniec, I.; Khavryutchenko, V. D.

    1999-06-15

    The observed and calculated INS vibrational densities of states for globular molecules of norbornane, norborneol and borneol are compared in the frequency range up to 600 cm-1. Inelastic incoherent neutron scattering (IINS) spectra were measured at ca. 20 K on the high resolution NERA spectrometer at the IBR-2 pulsed reactor. The IINS intensities were calculated by a semi-empirical quantum chemistry method and assignments of the low-frequency internal modes were proposed.

  12. Anomalous Appearance of ν[C(2)=C(3)] Frequencies in IR Spectra of 1,4-Naphthoquinone Hydroxy Derivatives

    NASA Astrophysics Data System (ADS)

    Glazunov, V. P.; Berdyshev, D. V.

    2014-09-01

    Absorption bands in the carbonyl range 1750-1500 cm-1 of the IR spectrum of 2,3-dihydroxy-1,4-naphthoquinone and some of its derivatives were assigned based on calculations of normal mode frequencies using the B3LYP/cc-pVTZ method for isolated molecules and the polarized continuum model taking into account the influence of weakly and moderately polar solvents (CCl4, CDCl3, and CH2Cl2). It was shown that the frequency of the quinone C(2)=C(3) stretching vibration for 2,3-OH- and 2,5,8-OH-1,4-naphthoquinones (2-OH-naphthazarins) was 50-60 cm-1 higher than that of the carbonyl stretching vibration. The frequency difference reached 100 cm-1 for 2,3,5,8-OH-1,4-naphthoquinones (2,3-OH-naphthazarins).

  13. Applying GOES-derived fog frequency indices to water balance modeling for the Russian River Watershed, California

    NASA Astrophysics Data System (ADS)

    Torregrosa, A.; Flint, L. E.; Flint, A. L.; Peters, J.; Combs, C.

    2014-12-01

    Coastal fog modifies the hydrodynamic and thermodynamic properties of California watersheds, with the greatest impact on ecosystem functioning during arid summer months. Lowered maximum temperatures resulting from inland penetration of marine fog are probably adequate to capture fog effects on thermal land surface characteristics; however, the hydrologic impact from lowered rates of evapotranspiration due to shade, fog drip, increased relative humidity, and other factors associated with fog events is more difficult to gauge. Fog products, such as those derived from National Weather Service Geostationary Operational Environmental Satellite (GOES) imagery, provide high frequency (up to 15 min) views of fog and low cloud cover and can potentially improve water balance models. Even slight improvements in water balance calculations can benefit urban water managers and agricultural irrigation. The high frequency of GOES output provides the opportunity to explore options for integrating fog frequency data into water balance models. This pilot project compares GOES-derived fog frequency intervals (6, 12 and 24 hour) to explore which is most useful for water balance models and to develop model-relevant relationships between climatic and water balance variables. Seasonal diurnal thermal differences, plant ecophysiological processes, and phenology suggest that a day/night differentiation on a monthly basis may be adequate. To explore this hypothesis, we examined discharge data from stream gages and outputs from the USGS Basin Characterization Model for runoff, recharge, potential evapotranspiration, and actual evapotranspiration for the Russian River Watershed under low, medium, and high fog event conditions derived from hourly GOES imagery (1999-2009). We also differentiated fog events into daytime and nighttime versus a 24-hour compilation on a daily, monthly, and seasonal basis. Our data suggest that a daily time-step is required to adequately incorporate the hydrologic effect of

  14. Retrieval error estimation of surface albedo derived from geostationary large band satellite observations: Application to Meteosat-2 and Meteosat-7 data

    NASA Astrophysics Data System (ADS)

    Govaerts, Y. M.; Lattanzio, A.

    2007-03-01

    The extraction of critical geophysical variables from multidecade archived satellite observations, such as those acquired by the European Meteosat First Generation satellite series, for the generation of climate data records is recognized as a pressing challenge by international environmental organizations. This paper presents a statistical method for the estimation of the surface albedo retrieval error that explicitly accounts for the measurement uncertainties and differences in the Meteosat radiometer characteristics. The benefit of this approach is illustrated with a simple case study consisting of a meaningful comparison of surface albedo derived from observations acquired at a 20 year interval by sensors with different radiometric performances. In particular, it is shown how it is possible to assess the magnitude of minimum detectable significant surface albedo change.

  15. Combining corpus-derived sense profiles with estimated frequency information to disambiguate clinical abbreviations.

    PubMed

    Xu, Hua; Stetson, Peter D; Friedman, Carol

    2012-01-01

    Abbreviations are widely used in clinical notes and are often ambiguous. Word sense disambiguation (WSD) for clinical abbreviations therefore is a critical task for many clinical natural language processing (NLP) systems. Supervised machine learning based WSD methods are known for their high performance. However, it is time consuming and costly to construct annotated samples for supervised WSD approaches and sense frequency information is often ignored by these methods. In this study, we proposed a profile-based method that used dictated discharge summaries as an external source to automatically build sense profiles and applied them to disambiguate abbreviations in hospital admission notes via the vector space model. Our evaluation using a test set containing 2,386 annotated instances from 13 ambiguous abbreviations in admission notes showed that the profile-based method performed better than two baseline methods and achieved a best average precision of 0.792. Furthermore, we developed a strategy to combine sense frequency information estimated from a clustering analysis with the profile-based method. Our results showed that the combined approach largely improved the performance and achieved a highest precision of 0.875 on the same test set, indicating that integrating sense frequency information with local context is effective for clinical abbreviation disambiguation. PMID:23304376
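
    The core of the profile-based step described above can be sketched as a cosine-similarity match between a context vector and per-sense profile vectors; the abbreviation, senses and texts below are entirely hypothetical, whereas the real profiles in the study are built automatically from dictated discharge summaries.

    ```python
    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # hypothetical sense profiles for the abbreviation "RA"
    sense_profiles = {
        "rheumatoid arthritis": "joint pain swelling methotrexate rheumatology synovitis",
        "right atrium":         "cardiac echocardiogram atrial chamber catheter pressure",
    }

    def disambiguate(context, profiles):
        """Pick the sense whose profile is most similar (cosine over tf-idf vectors)
        to the local context surrounding the abbreviation."""
        senses = list(profiles)
        vec = TfidfVectorizer()
        M = vec.fit_transform([profiles[s] for s in senses] + [context])
        sims = cosine_similarity(M[-1], M[:-1]).ravel()
        return senses[int(np.argmax(sims))], sims

    context = "patient admitted with swelling of the joints, started on methotrexate"
    print(disambiguate(context, sense_profiles))
    ```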

  16. Protein interaction hotspot identification using sequence-based frequency-derived features.

    PubMed

    Nguyen, Quang-Thang; Fablet, Ronan; Pastor, Dominique

    2013-11-01

    Finding good descriptors, capable of discriminating hotspot residues from others, is still a challenge in many attempts to understand protein interaction. In this paper, descriptors issued from the analysis of amino acid sequences using digital signal processing (DSP) techniques are shown to be as good as those derived from protein tertiary structure and/or information on the complex. The simulation results show that our descriptors can be used separately to predict hotspots, via a random forest classifier, with an accuracy of 79% and a precision of 75%. They can also be used jointly with features derived from tertiary structures to boost the performance up to an accuracy of 82% and a precision of 80%. PMID:21742567
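
    A brief sketch of the classification stage named above (a random forest trained on sequence-derived descriptors); the feature matrix, labels and hyperparameters are placeholders, since the paper's actual DSP-derived features are not reproduced here.

```python
# Illustrative sketch only: train a random forest on (placeholder) frequency-
# derived residue features to flag hotspots and report accuracy and precision.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))       # placeholder frequency-derived descriptors
y = rng.integers(0, 2, size=500)     # placeholder hotspot / non-hotspot labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("accuracy:", accuracy_score(y_te, pred))
print("precision:", precision_score(y_te, pred, zero_division=0))
```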

  17. CORRELATED ERRORS IN EARTH POINTING MISSIONS

    NASA Technical Reports Server (NTRS)

    Bilanow, Steve; Patt, Frederick S.

    2005-01-01

    Two different Earth-pointing missions dealing with attitude control and dynamics changes illustrate concerns with correlated error sources and coupled effects that can occur. On the OrbView-2 (OV-2) spacecraft, the assumption of a nearly-inertially-fixed momentum axis was called into question when a residual dipole bias apparently changed magnitude. The possibility that alignment adjustments and/or sensor calibration errors may compensate for actual motions of the spacecraft is discussed, and uncertainties in the dynamics are considered. Particular consideration is given to basic orbit frequency and twice orbit frequency effects and their high correlation over the short science observation data span. On the Tropical Rainfall Measuring Mission (TRMM) spacecraft, the switch to a contingency Kalman filter control mode created changes in the pointing error patterns. Results from independent checks on the TRMM attitude using science instrument data are reported, and bias shifts and error correlations are discussed. Various orbit frequency effects are common with the flight geometry for Earth pointing instruments. In both dual-spin momentum stabilized spacecraft (like OV-2) and three axis stabilized spacecraft with gyros (like TRMM under Kalman filter control), changes in the initial attitude state propagate into orbit frequency variations in attitude and some sensor measurements. At the same time, orbit frequency measurement effects can arise from dynamics assumptions, environment variations, attitude sensor calibrations, or ephemeris errors. Also, constant environment torques for dual spin spacecraft have similar effects to gyro biases on three axis stabilized spacecraft, effectively shifting the one-revolution-per-orbit (1-RPO) body rotation axis. Highly correlated effects can create a risk for estimation errors particularly when a mission switches an operating mode or changes its normal flight environment. Some error effects will not be obvious from attitude sensor

  18. Electromagnetic scattering by a triaxial homogeneous penetrable ellipsoid: Low-frequency derivation and testing of the localized nonlinear approximation

    NASA Astrophysics Data System (ADS)

    Perrusson, G.; Lambert, M.; Lesselier, D.; Charalambopoulos, A.; Dassios, G.

    2000-03-01

    The field resulting from the illumination by a localized time-harmonic low-frequency source (typically a magnetic dipole) of a voluminous lossy dielectric body placed in a lossy dielectric embedding is determined within the framework of the localized nonlinear approximation by means of a low-frequency Rayleigh analysis. It is sketched (1) how one derives a low-frequency series expansion in positive integral powers of (jk), where k is the embedding complex wavenumber, of the depolarization dyad that relates the background electric field to the total electric field inside the body; (2) how this expansion is used to determine the magnetic field resulting outside the body and how the corresponding series expansion of this field, up to the power 5 in (jk), follows once the series expansion of the incident electric field in the body volume is known up to the same power; and (3) how the needed nonzero coefficients of the depolarization dyad (up to the power 3 in (jk)) are obtained, for a general triaxial ellipsoid and after careful reduction for the geometrically degenerate geometries, with the help of the elliptical harmonic theory. Numerical results obtained by this hybrid low-frequency approach illustrate its capability to provide accurate magnetic fields at low computational cost, in particular, in comparison with a general purpose method-of-moments code.

  19. A genome signature derived from the interplay of word frequencies and symbol correlations

    NASA Astrophysics Data System (ADS)

    Möller, Simon; Hameister, Heike; Hütt, Marc-Thorsten

    2014-11-01

    Genome signatures are statistical properties of DNA sequences that provide information on the underlying species. It is not understood how such species-discriminating statistical properties arise from processes of genome evolution and from functional properties of the DNA. Investigating the interplay of different genome signatures can contribute to this understanding. Here we analyze the statistical dependences of two such genome signatures: word frequencies and symbol correlations at short and intermediate distances. We formulate a statistical model of word frequencies in DNA sequences based on the observed symbol correlations and show that deviations of word counts from this correlation-based null model serve as a new genome signature. This signature (i) performs better in sorting DNA sequence segments according to their species origin and (ii) reveals unexpected species differences in the composition of microsatellites, an important class of repetitive DNA. While the first observation is a typical task in metagenomics projects and therefore an important benchmark for a genome signature, the latter suggests strong species differences in the biological mechanisms of genome evolution. On a more general level, our results highlight that the choice of null model (here: word abundances computed via symbol correlations rather than shorter word counts) substantially affects the interpretation of such statistical signals.
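
    A much-simplified sketch of the underlying idea, with a first-order Markov (dinucleotide) model standing in for the paper's correlation-based null model of word frequencies; the sequence and words below are toy examples.

```python
# Compare observed k-mer counts with the expectation from a first-order Markov
# model built from dinucleotide statistics; deviations act as a crude signature.
from collections import Counter

def kmer_counts(seq, k):
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def markov_expected(seq, word):
    """Expected count of `word` under a first-order Markov model of `seq`."""
    n1 = kmer_counts(seq, 1)
    n2 = kmer_counts(seq, 2)
    total = len(seq) - len(word) + 1
    p = n1[word[0]] / len(seq)                       # P(first symbol)
    for a, b in zip(word, word[1:]):
        p *= n2[a + b] / n1[a] if n1[a] else 0.0     # P(next | current)
    return p * total

seq = "ATGCGATATATGCGCGATATATATGCGC"
for w in ("ATAT", "GCGC"):
    obs = kmer_counts(seq, len(w))[w]
    exp = markov_expected(seq, w)
    print(w, "observed:", obs, "expected:", round(exp, 2), "deviation:", round(obs - exp, 2))
```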

  20. Determination of lateral-stability derivatives and transfer-function coefficients from frequency-response data for lateral motions

    NASA Technical Reports Server (NTRS)

    Donegan, James J.; Robinson, Samuel W., Jr.; Gates, Ordway B., Jr.

    1955-01-01

    A method is presented for determining the lateral-stability derivatives, transfer-function coefficients, and the modes for lateral motion from frequency-response data for a rigid aircraft. The method is based on the application of the vector technique to the equations of lateral motion, so that the three equations of lateral motion can be separated into six equations. The method of least squares is then applied to the data for each of these equations to yield the coefficients of the equations of lateral motion from which the lateral-stability derivatives and lateral transfer-function coefficients are computed. Two numerical examples are given to demonstrate the use of the method.
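
    The core numerical step described above is an ordinary least-squares solve for the coefficients of the separated equations of motion; a generic sketch with placeholder regressors (not flight-test data) follows.

```python
# Generic least-squares step: once the in-phase / out-of-phase equations are
# written as a linear system A @ c = b at the measured frequencies, the unknown
# coefficients c (from which stability derivatives follow) are obtained below.
# A and b are random placeholders standing in for frequency-response data.
import numpy as np

rng = np.random.default_rng(1)
n_freqs, n_coeffs = 40, 6
A = rng.normal(size=(n_freqs, n_coeffs))          # regressors built from frequency, amplitude, phase
c_true = np.array([1.5, -0.3, 0.8, 0.05, -1.2, 0.4])
b = A @ c_true + 0.01 * rng.normal(size=n_freqs)  # "measured" responses with noise

c_hat, residuals, rank, _ = np.linalg.lstsq(A, b, rcond=None)
print("estimated coefficients:", np.round(c_hat, 3))
```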

  1. Distance and luminosity probability distributions derived from parallax and flux with their measurement errors. With application to the millisecond pulsar PSR J0218+4232

    NASA Astrophysics Data System (ADS)

    Igoshev, Andrei; Verbunt, Frank; Cator, Eric

    2016-06-01

    We use a Bayesian approach to derive the distance probability distribution for one object from its parallax with measurement uncertainty for two spatial distribution priors, a homogeneous spherical distribution and a galactocentric distribution - applicable for radio pulsars - observed from Earth. We investigate the dependence on measurement uncertainty, and show that a parallax measurement can underestimate or overestimate the actual distance, depending on the spatial distribution prior. We derive the probability distributions for distance and luminosity combined - and for each separately when a flux with measurement error for the object is also available - and demonstrate the necessity of and dependence on the luminosity function prior. We apply this to estimate the distance and the radio and gamma-ray luminosities of PSR J0218+4232. The use of realistic priors improves the quality of the estimates for distance and luminosity compared to those based on measurement only. Use of the wrong prior, for example a homogeneous spatial distribution without upper bound, may lead to very incorrect results.
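
    A minimal sketch of the Bayesian step described above: a Gaussian parallax likelihood combined with a spatial prior, evaluated on a distance grid. The exponentially truncated r² prior, scale length and example numbers are assumptions for illustration, not the priors or measurements used in the paper.

```python
# Posterior over distance given a parallax with Gaussian measurement error.
# Parallax in mas corresponds to 1 / distance in kpc.
import numpy as np

def distance_posterior(varpi_mas, sigma_mas, d_grid_kpc, scale_kpc=1.0):
    likelihood = np.exp(-0.5 * ((varpi_mas - 1.0 / d_grid_kpc) / sigma_mas) ** 2)
    prior = d_grid_kpc ** 2 * np.exp(-d_grid_kpc / scale_kpc)     # illustrative prior only
    post = likelihood * prior
    return post / (post.sum() * (d_grid_kpc[1] - d_grid_kpc[0]))  # normalize on the grid

d = np.linspace(0.05, 20.0, 4000)                                 # distance grid in kpc
post = distance_posterior(varpi_mas=0.16, sigma_mas=0.09, d_grid_kpc=d)
print("posterior mode at %.2f kpc" % d[np.argmax(post)])
```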

  2. The Use of Multi-Sensor Quantitative Precipitation Estimates for Deriving Extreme Precipitation Frequencies with Application in Louisiana

    NASA Astrophysics Data System (ADS)

    El-Dardiry, Hisham Abd El-Kareem

    Radar-based Quantitative Precipitation Estimates (QPE) are among the NEXRAD products available at high temporal and spatial resolution compared with gauges. Radar-based QPEs have been widely used in many hydrological and meteorological applications; however, only a few studies have focused on using radar QPE products in deriving Precipitation Frequency Estimates (PFE). Accurate and regionally specific information on PFE is critically needed for various water resources engineering planning and design purposes. This study focused first on examining the data quality of two main radar products, the near real-time Stage IV QPE product and the post real-time RFC/MPE product. Assessment of the Stage IV product showed some alarming data artifacts that contaminate the identification of rainfall maxima. Based on the inter-comparison analysis of the two products, Stage IV and RFC/MPE, the latter was selected for the frequency analysis carried out throughout the study. The precipitation frequency analysis approach used in this study is based on fitting a Generalized Extreme Value (GEV) distribution as a statistical model for extreme rainfall, using Annual Maximum Series (AMS) extracted from 11 years (2002-2012) over a domain covering Louisiana. The parameters of the GEV model are estimated using the method of linear moments (L-moments). Two different approaches are suggested for estimating the precipitation frequencies: a Pixel-Based approach, in which PFEs are estimated at each individual pixel, and a Region-Based approach, in which a synthetic sample is generated at each pixel by using observations from surrounding pixels. The region-based technique outperforms the pixel-based estimation when compared with results obtained by NOAA Atlas 14; however, the availability of only a short record of observations and the underestimation of radar QPE for some extremes cause a considerable reduction in precipitation frequencies in pixel-based and region

  3. Experimental Determination of Effects of Frequency and Amplitude on the Lateral Stability Derivatives for a Delta, a Swept, and Unswept Wing Oscillating in Yaw

    NASA Technical Reports Server (NTRS)

    Fisher, Lewis R

    1958-01-01

    Three wing models were oscillated in yaw about their vertical axes to determine the effects of systematic variations of frequency and amplitude of oscillation on the in-phase and out-of-phase combination lateral stability derivatives resulting from this motion. The tests were made at low speeds for a 60 degree delta wing, a 45 degree swept wing, and an unswept wing; the swept and unswept wings had aspect ratios of 4. The results indicate that large changes in the magnitude of the stability derivatives due to the variation of frequency occur at high angles of attack, particularly for the delta wing. The greatest variations of the derivatives with frequency take place for the lowest frequencies of oscillation; at the higher frequencies, the effects of frequency are smaller and the derivatives become more linear with angle of attack. Effects of amplitude of oscillation on the stability derivatives for delta wings were evident for certain high angles of attack and for the lowest frequencies of oscillation. As the frequency became high, the amplitude effects tended to disappear.

  4. Inferring the Frequency Spectrum of Derived Variants to Quantify Adaptive Molecular Evolution in Protein-Coding Genes of Drosophila melanogaster.

    PubMed

    Keightley, Peter D; Campos, José L; Booker, Tom R; Charlesworth, Brian

    2016-06-01

    Many approaches for inferring adaptive molecular evolution analyze the unfolded site frequency spectrum (SFS), a vector of counts of sites with different numbers of copies of derived alleles in a sample of alleles from a population. Accurate inference of the high-copy-number elements of the SFS is difficult, however, because of misassignment of alleles as derived vs. ancestral. This is a known problem with parsimony using outgroup species. Here we show that the problem is particularly serious if there is variation in the substitution rate among sites brought about by variation in selective constraint levels. We present a new method for inferring the SFS using one or two outgroups that attempts to overcome the problem of misassignment. We show that two outgroups are required for accurate estimation of the SFS if there is substantial variation in selective constraints, which is expected to be the case for nonsynonymous sites in protein-coding genes. We apply the method to estimate unfolded SFSs for synonymous and nonsynonymous sites in a population of Drosophila melanogaster from phase 2 of the Drosophila Population Genomics Project. We use the unfolded spectra to estimate the frequency and strength of advantageous and deleterious mutations and estimate that ∼50% of amino acid substitutions are positively selected but that <0.5% of new amino acid mutations are beneficial, with a scaled selection strength of Nes ≈ 12. PMID:27098912
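
    For orientation, a toy sketch of building an unfolded SFS by naive single-outgroup parsimony follows; this is exactly the misassignment-prone procedure the paper improves upon, and the data below are random placeholders.

```python
# Naive polarization: treat the outgroup allele as ancestral and count derived
# copies per site into an unfolded site frequency spectrum (SFS).
import numpy as np

def unfolded_sfs(sample_alleles, outgroup_alleles):
    """sample_alleles: (sites, chromosomes) array of 0/1 alleles.
    outgroup_alleles: per-site allele observed in the outgroup."""
    n = sample_alleles.shape[1]
    sfs = np.zeros(n + 1, dtype=int)
    for site, anc in zip(sample_alleles, outgroup_alleles):
        derived = np.sum(site != anc)      # alleles differing from the outgroup
        sfs[derived] += 1
    return sfs

rng = np.random.default_rng(2)
sample = rng.integers(0, 2, size=(1000, 10))   # 1000 sites, 10 sampled chromosomes
outgroup = rng.integers(0, 2, size=1000)
print(unfolded_sfs(sample, outgroup))
```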

  5. Deriving Lifetime Maps in the Time/Frequency Domain of Coherent Structures in the Turbulent Boundary Layer

    NASA Technical Reports Server (NTRS)

    Palumbo, Dan

    2008-01-01

    The lifetimes of coherent structures are derived from data correlated over a 3-sensor array sampling streamwise sidewall pressure at high Reynolds number (> 10(exp 8)). The data were acquired at subsonic, transonic and supersonic speeds aboard a Tupolev Tu-144. The lifetimes are computed from a variant of the correlation length termed the lifelength. Characteristic lifelengths are estimated by fitting a Gaussian distribution to the sensors' cross spectra and are shown to compare favorably with Efimtsov's prediction of correlation space scales. Lifelength distributions are computed in the time/frequency domain using an interval correlation technique on the continuous wavelet transform of the original time data. The median values of the lifelength distributions are found to be very close to the frequency-averaged result. The interval correlation technique is shown to allow the retrieval and inspection of the original time data of each event in the lifelength distributions, thus providing a means to locate and study the nature of coherent structures in the turbulent boundary layer. The lifelength data are converted to lifetimes using the convection velocity. The lifetimes of events in the time/frequency domain are displayed in Lifetime Maps. The primary purpose of the paper is to validate these new analysis techniques so that they can be used with confidence to further characterize the behavior of coherent structures in the turbulent boundary layer.
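
    A heavily simplified stand-in for the fitting step named above: estimate a characteristic width by fitting a Gaussian to a (synthetic) cross-spectrum magnitude. The units, the Gaussian form and the synthetic data are assumptions for illustration only.

```python
# Fit a Gaussian to a synthetic cross-spectrum magnitude and report its width;
# real lifelengths would come from measured sensor pairs, not generated data.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(f, amp, width):
    return amp * np.exp(-0.5 * (f / width) ** 2)

freqs = np.linspace(0.0, 2000.0, 200)                      # Hz (assumed range)
xspec = gaussian(freqs, amp=1.0, width=450.0)
xspec += 0.02 * np.random.default_rng(3).normal(size=freqs.size)

popt, _ = curve_fit(gaussian, freqs, xspec, p0=[1.0, 300.0])
print("fitted characteristic width: %.1f Hz" % popt[1])
```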

  6. Computational Algorithm-Driven Evaluation of Monocytic Myeloid-Derived Suppressor Cell Frequency For Prediction of Clinical Outcomes

    PubMed Central

    Kitano, Shigehisa; Postow, Michael A.; Ziegler, Carly G.K.; Kuk, Deborah; Panageas, Katherine S.; Cortez, Czrina; Rasalan, Teresa; Adamow, Mathew; Yuan, Jianda; Wong, Philip; Altan-Bonnet, Gregoire; Wolchok, Jedd D.; Lesokhin, Alexander M.

    2014-01-01

    Evaluation of myeloid-derived suppressor cells (MDSC), a cell type implicated in T-cell suppression, may inform immune status. However, a uniform methodology is necessary for prospective testing as a biomarker. We report the use of a computational algorithm-driven analysis of whole blood and cryopreserved samples for monocytic MDSC (m-MDSC) quantity that removes variables related to blood processing and user definitions. Applying these methods to samples from melanoma patients identifies differing frequency distribution of m-MDSC relative to that in healthy donors (HD). Patients with a pre-treatment m-MDSC frequency outside a preliminary definition of HD range (<14.9%) were significantly more likely to achieve prolonged overall survival following treatment with ipilimumab, an antibody that promotes T-cell activation and proliferation. m-MDSC frequencies inversely correlated with peripheral CD8+ T-cell expansion following ipilimumab. Algorithm-driven analysis may enable not only development of a novel pre-treatment biomarker for ipilimumab therapy, but also prospective validation of peripheral blood m-MDSC as a biomarker in multiple disease settings. PMID:24844912

  7. Use of radar QPE for the derivation of Intensity-Duration-Frequency curves in a range of climatic regimes

    NASA Astrophysics Data System (ADS)

    Marra, Francesco; Morin, Efrat

    2015-12-01

    Intensity-Duration-Frequency (IDF) curves are widely used in flood risk management because they provide an easy link between the characteristics of a rainfall event and the probability of its occurrence. Weather radars provide distributed rainfall estimates with high spatial and temporal resolutions and overcome the scarce representativeness of point-based rainfall for regions characterized by large gradients in rainfall climatology. This work explores the use of radar quantitative precipitation estimation (QPE) for the identification of IDF curves over a region with steep climatic transitions (Israel) using a unique radar data record (23 yr) and combined physical and empirical adjustment of the radar data. IDF relationships were derived by fitting a generalized extreme value distribution to the annual maximum series for durations of 20 min, 1 h and 4 h. Arid, semi-arid and Mediterranean climates were explored using 14 study cases. IDF curves derived from the study rain gauges were compared to those derived from radar and from nearby rain gauges characterized by similar climatology, taking into account the uncertainty linked with the fitting technique. Radar annual maxima and IDF curves were generally overestimated but in 70% of the cases (60% for a 100 yr return period), they lay within the rain gauge IDF confidence intervals. Overestimation tended to increase with return period, and this effect was enhanced in arid climates. This was mainly associated with radar estimation uncertainty, even if other effects, such as rain gauge temporal resolution, cannot be neglected. Climatological classification remained meaningful for the analysis of rainfall extremes and radar was able to discern climatology from rainfall frequency analysis.
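
    A short sketch of the frequency-analysis step shared by this and the radar-PFE study above: fit a GEV distribution to a series of annual maxima and evaluate intensities at chosen return periods. scipy's maximum-likelihood fit is used here in place of the L-moments estimators reported in the papers, and the annual maxima are synthetic.

```python
# Fit a GEV to synthetic annual-maximum intensities and read off return levels.
from scipy.stats import genextreme

annual_max = genextreme.rvs(c=-0.1, loc=25.0, scale=8.0, size=23, random_state=4)  # synthetic mm/h maxima

c, loc, scale = genextreme.fit(annual_max)
for T in (2, 10, 25, 100):                      # return periods in years
    intensity = genextreme.isf(1.0 / T, c, loc=loc, scale=scale)
    print("T = %3d yr  ->  %.1f mm/h" % (T, intensity))
```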

  8. Carotid ultrasound segmentation using radio-frequency derived phase information and gabor filters.

    PubMed

    Azzopardi, Carl; Camilleri, Kenneth P; Hicks, Yulia A

    2015-01-01

    Ultrasound image segmentation is a field which has garnered much interest over the years. This is partially due to the complexity of the problem, arising from the lack of contrast between different tissue types which is quite typical of ultrasound images. Recently, segmentation techniques which treat RF signal data have also become popular, particularly with the increasing availability of such data from open-architecture machines. It is believed that RF data provides a rich source of information whose integrity remains intact, as opposed to the loss which occurs through the signal processing chain leading to Brightness Mode Images. Furthermore, phase information contained within RF data has not been studied in much detail, as the nature of the information here appears to be mostly random. In this work however, we show that phase information derived from RF data does elicit structure, characterized by texture patterns. Texture segmentation of this data permits the extraction of rough, but well localized, carotid boundaries. We provide some initial quantitative results, which report the performance of the proposed technique. PMID:26737742
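
    A minimal sketch of the Gabor-filtering stage described above, applied to a placeholder array standing in for the RF-derived phase image; the filter frequencies and orientations are arbitrary choices.

```python
# Build a small Gabor filter bank and stack per-pixel magnitude responses as
# texture features for a downstream segmentation or clustering stage.
import numpy as np
from skimage.filters import gabor

rng = np.random.default_rng(5)
phase_image = rng.normal(size=(128, 128))          # placeholder for RF-derived phase

features = []
for frequency in (0.1, 0.2, 0.4):
    for theta in (0.0, np.pi / 4, np.pi / 2):
        real, imag = gabor(phase_image, frequency=frequency, theta=theta)
        features.append(np.hypot(real, imag))      # magnitude response per pixel

feature_stack = np.stack(features, axis=-1)        # (rows, cols, n_filters)
print(feature_stack.shape)
```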

  9. Refractive Errors

    MedlinePlus

    ... and lens of your eye helps you focus. Refractive errors are vision problems that happen when the ... cornea, or aging of the lens. Four common refractive errors are Myopia, or nearsightedness - clear vision close ...

  10. Surface Roughness of the Moon Derived from Multi-frequency Radar Data

    NASA Astrophysics Data System (ADS)

    Fa, W.

    2011-12-01

    Surface roughness of the Moon provides important information concerning both significant questions about lunar surface processes and engineering constraints for human outposts and rover trafficability. Impact-related phenomena change the morphology and roughness of the lunar surface, and therefore surface roughness provides clues to the formation and modification mechanisms of impact craters. Since the Apollo era, lunar surface roughness has been studied using different approaches, such as direct estimation from lunar surface digital topographic relief, and indirect analysis of Earth-based radar echo strengths. Submillimeter-scale roughness at Apollo landing sites has been studied by computer stereophotogrammetry analysis of Apollo Lunar Surface Closeup Camera (ALSCC) pictures, whereas roughness at meter to kilometer scale has been studied using laser altimeter data from recent missions. Though these studies showed that lunar surface roughness is scale dependent and can be described by fractal statistics, roughness at centimeter scale has not yet been studied. In this study, lunar surface roughness at centimeter scale is investigated using Earth-based 70 cm Arecibo radar data and miniature synthetic aperture radar (Mini-SAR) data at S- and X-band (wavelengths of 12.6 cm and 4.12 cm). Both observations and theoretical modeling show that radar echo strengths are mostly dominated by scattering from the surface and shallow buried rocks. Given the different penetration depths of radar waves at these frequencies (< 30 m for 70 cm wavelength, < 3 m at S-band, and < 1 m at X-band), radar echo strengths at S- and X-band will yield surface roughness directly, whereas radar echo at 70 cm will give an upper limit of lunar surface roughness. The integral equation method is used to model radar scattering from the rough lunar surface, and the dielectric constant of the regolith and surface roughness are the two dominant factors. The complex dielectric constant of the regolith is first estimated

  11. Amerindian mitochondrial DNAs have rare Asian mutations at high frequencies, suggesting they derived from four primary maternal lineages.

    PubMed Central

    Schurr, T G; Ballinger, S W; Gan, Y Y; Hodge, J A; Merriwether, D A; Lawrence, D N; Knowler, W C; Weiss, K M; Wallace, D C

    1990-01-01

    The mitochondrial DNA (mtDNA) sequence variation of the South American Ticuna, the Central American Maya, and the North American Pima was analyzed by restriction-endonuclease digestion and oligonucleotide hybridization. The analysis revealed that Amerindian populations have high frequencies of mtDNAs containing the rare Asian RFLP HincII morph 6, a rare HaeIII site gain, and a unique AluI site gain. In addition, the Asian-specific deletion between the cytochrome c oxidase subunit II (COII) and tRNA(Lys) genes was also prevalent in both the Pima and the Maya. These data suggest that Amerindian mtDNAs derived from at least four primary maternal lineages, that new tribal-specific variants accumulated as these mtDNAs became distributed throughout the Americas, and that some genetic variation may have been lost when the progenitors of the Ticuna separated from the North and Central American populations. PMID:1968708

  12. Estimating Filtering Errors Using the Peano Kernel Theorem

    SciTech Connect

    Jerome Blair

    2009-02-20

    The Peano Kernel Theorem is introduced and a frequency domain derivation is given. It is demonstrated that the application of this theorem yields simple and accurate formulas for estimating the error introduced into a signal by filtering it to reduce noise.

  13. Estimating Filtering Errors Using the Peano Kernel Theorem

    SciTech Connect

    Jerome Blair

    2008-03-01

    The Peano Kernel Theorem is introduced and a frequency domain derivation is given. It is demonstrated that the application of this theorem yields simple and accurate formulas for estimating the error introduced into a signal by filtering it to reduce noise.
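
    For reference, the standard statement of the theorem used in these two records (the general numerical-analysis form, not the papers' specific filtering application) is given below.

```latex
% Peano Kernel Theorem: if the linear functional E annihilates all polynomials
% of degree < n, then for sufficiently smooth f on [a, b]
\[
  E[f] \;=\; \int_a^b K_n(t)\, f^{(n)}(t)\, \mathrm{d}t,
  \qquad
  K_n(t) \;=\; \frac{1}{(n-1)!}\, E_x\!\left[(x-t)_+^{\,n-1}\right],
\]
% where $(x-t)_+^{\,n-1}$ is the truncated power function and $E_x$ applies the
% functional with respect to $x$. Bounds such as
% $|E[f]| \le \|f^{(n)}\|_\infty \int_a^b |K_n(t)|\,\mathrm{d}t$
% then yield the simple error-estimation formulas mentioned in the abstracts.
```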

  14. Motion error analysis of the 3D coordinates of airborne lidar for typical terrains

    NASA Astrophysics Data System (ADS)

    Peng, Tao; Lan, Tian; Ni, Guoqiang

    2013-07-01

    A motion error model of 3D coordinates is established and the impact on coordinate errors caused by the non-ideal movement of the airborne platform is analyzed. The simulation results of the model show that when the lidar system operates at high altitude, the influence on the positioning errors derived from laser point cloud spacing is small. For the model the positioning errors obey simple harmonic vibration whose amplitude envelope gradually reduces with the increase of the vibration frequency. When the vibration period number is larger than 50, the coordinate errors are almost uncorrelated with time. The elevation error is less than the plane error and in the plane the error in the scanning direction is less than the error in the flight direction. Through the analysis of flight test data, the conclusion is verified.

  15. Remote Sensing Derived Fire Frequency, Soil Moisture and Ecosystem Productivity Explain Regional Movements in Emu over Australia

    PubMed Central

    Madani, Nima; Kimball, John S.; Nazeri, Mona; Kumar, Lalit; Affleck, David L. R.

    2016-01-01

    Species distribution modeling has been widely used in studying habitat relationships and for conservation purposes. However, neglecting ecological knowledge about species, e.g. their seasonal movements, and ignoring the proper environmental factors that can explain key elements for species survival (shelter, food and water) increase model uncertainty. This study exemplifies how these ecological gaps in species distribution modeling can be addressed by modeling the distribution of the emu (Dromaius novaehollandiae) in Australia. Emus cover a large area during the austral winter. However, their habitat shrinks during the summer months. We show evidence of emu summer habitat shrinkage due to higher fire frequency, and low water and food availability in northern regions. Our findings indicate that emus prefer areas with higher vegetation productivity and low fire recurrence, while their distribution is linked to an optimal intermediate (~0.12 m3 m-3) soil moisture range. We propose that the application of three geospatial data products derived from satellite remote sensing, namely fire frequency, ecosystem productivity, and soil water content, provides an effective representation of emu general habitat requirements, and substantially improves species distribution modeling and representation of the species’ ecological habitat niche across Australia. PMID:26799732

  16. Remote Sensing Derived Fire Frequency, Soil Moisture and Ecosystem Productivity Explain Regional Movements in Emu over Australia.

    PubMed

    Madani, Nima; Kimball, John S; Nazeri, Mona; Kumar, Lalit; Affleck, David L R

    2016-01-01

    Species distribution modeling has been widely used in studying habitat relationships and for conservation purposes. However, neglecting ecological knowledge about species, e.g. their seasonal movements, and ignoring the proper environmental factors that can explain key elements for species survival (shelter, food and water) increase model uncertainty. This study exemplifies how these ecological gaps in species distribution modeling can be addressed by modeling the distribution of the emu (Dromaius novaehollandiae) in Australia. Emus cover a large area during the austral winter. However, their habitat shrinks during the summer months. We show evidence of emu summer habitat shrinkage due to higher fire frequency, and low water and food availability in northern regions. Our findings indicate that emus prefer areas with higher vegetation productivity and low fire recurrence, while their distribution is linked to an optimal intermediate (~0.12 m3 m(-3)) soil moisture range. We propose that the application of three geospatial data products derived from satellite remote sensing, namely fire frequency, ecosystem productivity, and soil water content, provides an effective representation of emu general habitat requirements, and substantially improves species distribution modeling and representation of the species' ecological habitat niche across Australia. PMID:26799732

  17. Relation of Cloud Occurrence Frequency, Overlap, and Effective Thickness Derived from CALIPSO and CloudSat Merged Cloud Vertical Profiles

    NASA Technical Reports Server (NTRS)

    Kato, Seiji; Sun-Mack, Sunny; Miller, Walter F.; Rose, Fred G.; Chen, Yan; Minnis, Patrick; Wielicki, Bruce A.

    2009-01-01

    A cloud frequency of occurrence matrix is generated using merged cloud vertical profiles derived from the Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) and the Cloud Profiling Radar (CPR). The matrix contains vertical profiles of cloud occurrence frequency as a function of the uppermost cloud top. It is shown that the cloud fraction and uppermost cloud top vertical profiles can be related by a set of equations when the correlation distance of cloud occurrence, which is interpreted as an effective cloud thickness, is introduced. The underlying assumption in establishing the above relation is that cloud overlap approaches random overlap with increasing distance separating cloud layers and that the probability of deviating from random overlap decreases exponentially with distance. One month of CALIPSO and CloudSat data support these assumptions. However, the correlation distance sometimes becomes large, which might be an indication of precipitation. The cloud correlation distance is equivalent to the decorrelation distance introduced by Hogan and Illingworth [2000] when the cloud fractions of both layers in a two-cloud-layer system are the same.
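
    The exponential-decay overlap assumption described above is commonly written in the following form (after Hogan and Illingworth [2000], cited in the abstract); the symbols are schematic.

```latex
% Combined cover of two cloud layers separated by a distance Delta z: a blend of
% maximum and random overlap with an exponentially decaying weight alpha.
\[
  C_{\mathrm{comb}} = \alpha\, C_{\mathrm{max}} + (1-\alpha)\, C_{\mathrm{rand}},
  \qquad
  \alpha = \exp\!\left(-\frac{\Delta z}{L}\right),
\]
\[
  C_{\mathrm{max}} = \max(C_1, C_2), \qquad
  C_{\mathrm{rand}} = C_1 + C_2 - C_1 C_2,
\]
% where L is the decorrelation (correlation) distance interpreted in the
% abstract as an effective cloud thickness.
```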

  18. Extremely low-frequency electromagnetic field influences the survival and proliferation effect of human adipose derived stem cells

    PubMed Central

    Razavi, Shahnaz; Salimi, Marzieh; Shahbazi-Gahrouei, Daryoush; Karbasi, Saeed; Kermani, Saeed

    2014-01-01

    Background: Extremely low-frequency electromagnetic fields (ELF-EMF) can affect biological systems and alter some cell functions, such as proliferation rate. Therefore, we aimed to evaluate the effect of ELF-EMF on the growth of human adipose derived stem cells (hADSCs). Materials and Methods: ELF-EMF was generated by a system including an autotransformer, multi-meter, solenoid coils, a teslameter and its probe. We assessed the effect of ELF-EMF with intensities of 0.5 and 1 mT and a power-line frequency of 50 Hz on the survival of hADSCs for 20 and 40 min/day for 7 days by MTT assay. One-way analysis of variance was used to assess the significance of differences between groups. Results: ELF-EMF had its maximum effect on the proliferation of hADSCs at an intensity of 1 mT for 20 min/day. The survival and proliferation effect (PE) in all exposure groups were significantly higher than in the sham groups (P < 0.05), except in the group exposed to 1 mT for 40 min/day. Conclusion: Our results show that 0.5 to 1 mT ELF-EMF could enhance the survival and PE of hADSCs, depending on the duration of exposure. PMID:24592372

  19. Stroma Cell-Derived Factor-1α Signaling Enhances Calcium Transients and Beating Frequency in Rat Neonatal Cardiomyocytes

    PubMed Central

    Hadad, Ielham; Veithen, Alex; Springael, Jean–Yves; Sotiropoulou, Panagiota A.; Mendes Da Costa, Agnès; Miot, Françoise; Naeije, Robert

    2013-01-01

    Stroma cell-derived factor-1α (SDF-1α) is a cardioprotective chemokine, acting through its G-protein coupled receptor CXCR4. In experimental acute myocardial infarction, administration of SDF-1α induces an early improvement of systolic function which is difficult to explain solely by an anti-apoptotic and angiogenic effect. We wondered whether SDF-1α signaling might have direct effects on calcium transients and beating frequency. Primary rat neonatal cardiomyocytes were culture-expanded and characterized by immunofluorescence staining. Calcium sparks were studied by fluorescence microscopy after calcium loading with the Fluo-4 acetoxymethyl ester sensor. The cardiomyocyte enriched cellular suspension expressed troponin I and CXCR4 but was vimentin negative. Addition of SDF-1α in the medium increased cytoplasmic calcium release. The calcium response was completely abolished by using a neutralizing anti-CXCR4 antibody and partially suppressed and delayed by preincubation with an inositol triphosphate receptor (IP3R) blocker, but not with a ryanodine receptor (RyR) antagonist. Calcium fluxes induced by caffeine, a RyR agonist, were decreased by an IP3R blocker. Treatment with forskolin or SDF-1α increased cardiomyocyte beating frequency and their effects were additive. In vivo, treatment with SDF-1α increased left ventricular dP/dtmax. These results suggest that in rat neonatal cardiomyocytes, the SDF-1α/CXCR4 signaling increases calcium transients in an IP3-gated fashion leading to a positive chronotropic and inotropic effect. PMID:23460790

  20. Stroma cell-derived factor-1α signaling enhances calcium transients and beating frequency in rat neonatal cardiomyocytes.

    PubMed

    Hadad, Ielham; Veithen, Alex; Springael, Jean-Yves; Sotiropoulou, Panagiota A; Mendes Da Costa, Agnès; Miot, Françoise; Naeije, Robert; De Deken, Xavier; Entee, Kathleen Mc

    2013-01-01

    Stroma cell-derived factor-1α (SDF-1α) is a cardioprotective chemokine, acting through its G-protein coupled receptor CXCR4. In experimental acute myocardial infarction, administration of SDF-1α induces an early improvement of systolic function which is difficult to explain solely by an anti-apoptotic and angiogenic effect. We wondered whether SDF-1α signaling might have direct effects on calcium transients and beating frequency. Primary rat neonatal cardiomyocytes were culture-expanded and characterized by immunofluorescence staining. Calcium sparks were studied by fluorescence microscopy after calcium loading with the Fluo-4 acetoxymethyl ester sensor. The cardiomyocyte enriched cellular suspension expressed troponin I and CXCR4 but was vimentin negative. Addition of SDF-1α in the medium increased cytoplasmic calcium release. The calcium response was completely abolished by using a neutralizing anti-CXCR4 antibody and partially suppressed and delayed by preincubation with an inositol triphosphate receptor (IP3R) blocker, but not with a ryanodine receptor (RyR) antagonist. Calcium fluxes induced by caffeine, a RyR agonist, were decreased by an IP3R blocker. Treatment with forskolin or SDF-1α increased cardiomyocyte beating frequency and their effects were additive. In vivo, treatment with SDF-1α increased left ventricular dP/dtmax. These results suggest that in rat neonatal cardiomyocytes, the SDF-1α/CXCR4 signaling increases calcium transients in an IP3-gated fashion leading to a positive chronotropic and inotropic effect. PMID:23460790

  1. Tunable error-free optical frequency conversion of a 4ps optical short pulse over 25 nm by four-wave mixing in a polarisation-maintaining optical fibre

    NASA Astrophysics Data System (ADS)

    Morioka, T.; Kawanishi, S.; Saruwatari, M.

    1994-05-01

    Error-free, tunable optical frequency conversion of a transform-limited 4.0 ps optical pulse signal is demonstrated at 6.3 Gbit/s using four-wave mixing in a polarization-maintaining optical fibre. The process generates 4.0-4.6 ps pulses over a 25 nm range with time-bandwidth products of 0.31-0.43 and conversion power penalties of less than 1.5 dB.
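
    For context, the energy-conservation relation behind degenerate-pump four-wave-mixing wavelength conversion (a standard result, not specific to this experiment) is shown below.

```latex
% Degenerate-pump four-wave mixing: the converted (idler) frequency satisfies
\[
  \nu_{\mathrm{conv}} = 2\,\nu_{\mathrm{pump}} - \nu_{\mathrm{signal}},
\]
% so tuning the pump (or signal) wavelength shifts the converted pulse across
% the reported 25 nm range.
```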

  2. Unique plasmids generated via pUC replicon mutagenesis in an error-prone thermophile derived from Geobacillus kaustophilus HTA426.

    PubMed

    Kobayashi, Jyumpei; Tanabiki, Misaki; Doi, Shohei; Kondo, Akihiko; Ohshiro, Takashi; Suzuki, Hirokazu

    2015-11-01

    The plasmid pGKE75-catA138T, which comprises pUC18 and the catA138T gene encoding thermostable chloramphenicol acetyltransferase with an A138T amino acid replacement (CATA138T), serves as an Escherichia coli-Geobacillus kaustophilus shuttle plasmid that confers moderate chloramphenicol resistance on G. kaustophilus HTA426. The present study examined the thermoadaptation-directed mutagenesis of pGKE75-catA138T in an error-prone thermophile, generating the mutant plasmid pGKE75(αβ)-catA138T responsible for substantial chloramphenicol resistance at 65°C. pGKE75(αβ)-catA138T contained no mutation in the catA138T gene but had two mutations in the pUC replicon, even though the replicon has no apparent role in G. kaustophilus. Biochemical characterization suggested that the efficient chloramphenicol resistance conferred by pGKE75(αβ)-catA138T is attributable to increases in intracellular CATA138T and acetyl-coenzyme A following a decrease in incomplete forms of pGKE75(αβ)-catA138T. The decrease in incomplete plasmids may be due to optimization of plasmid replication by RNA species transcribed from the mutant pUC replicon, which were actually produced in G. kaustophilus. It is noteworthy that G. kaustophilus was transformed with pGKE75(αβ)-catA138T using chloramphenicol selection at 60°C. In addition, a pUC18 derivative with the two mutations propagated in E. coli at a high copy number independently of the culture temperature and high plasmid stability. Since these properties have not been observed in known plasmids, the outcomes extend the genetic toolboxes for G. kaustophilus and E. coli. PMID:26319877

  3. Unique Plasmids Generated via pUC Replicon Mutagenesis in an Error-Prone Thermophile Derived from Geobacillus kaustophilus HTA426

    PubMed Central

    Kobayashi, Jyumpei; Tanabiki, Misaki; Doi, Shohei; Kondo, Akihiko; Ohshiro, Takashi

    2015-01-01

    The plasmid pGKE75-catA138T, which comprises pUC18 and the catA138T gene encoding thermostable chloramphenicol acetyltransferase with an A138T amino acid replacement (CATA138T), serves as an Escherichia coli-Geobacillus kaustophilus shuttle plasmid that confers moderate chloramphenicol resistance on G. kaustophilus HTA426. The present study examined the thermoadaptation-directed mutagenesis of pGKE75-catA138T in an error-prone thermophile, generating the mutant plasmid pGKE75αβ-catA138T responsible for substantial chloramphenicol resistance at 65°C. pGKE75αβ-catA138T contained no mutation in the catA138T gene but had two mutations in the pUC replicon, even though the replicon has no apparent role in G. kaustophilus. Biochemical characterization suggested that the efficient chloramphenicol resistance conferred by pGKE75αβ-catA138T is attributable to increases in intracellular CATA138T and acetyl-coenzyme A following a decrease in incomplete forms of pGKE75αβ-catA138T. The decrease in incomplete plasmids may be due to optimization of plasmid replication by RNA species transcribed from the mutant pUC replicon, which were actually produced in G. kaustophilus. It is noteworthy that G. kaustophilus was transformed with pGKE75αβ-catA138T using chloramphenicol selection at 60°C. In addition, a pUC18 derivative with the two mutations propagated in E. coli at a high copy number independently of the culture temperature and high plasmid stability. Since these properties have not been observed in known plasmids, the outcomes extend the genetic toolboxes for G. kaustophilus and E. coli. PMID:26319877

  4. Error coding simulations

    NASA Technical Reports Server (NTRS)

    Noble, Viveca K.

    1993-01-01

    There are various elements such as radio frequency interference (RFI) which may induce errors in data being transmitted via a satellite communication link. When a transmission is affected by interference or other error-causing elements, the transmitted data becomes indecipherable. It becomes necessary to implement techniques to recover from these disturbances. The objective of this research is to develop software which simulates error control circuits and evaluate the performance of these modules in various bit error rate environments. The results of the evaluation provide the engineer with information which helps determine the optimal error control scheme. The Consultative Committee for Space Data Systems (CCSDS) recommends the use of Reed-Solomon (RS) and convolutional encoders and Viterbi and RS decoders for error correction. The use of forward error correction techniques greatly reduces the received signal to noise needed for a certain desired bit error rate. The use of concatenated coding, e.g. inner convolutional code and outer RS code, provides even greater coding gain. The 16-bit cyclic redundancy check (CRC) code is recommended by CCSDS for error detection.

  5. Error coding simulations

    NASA Astrophysics Data System (ADS)

    Noble, Viveca K.

    1993-11-01

    There are various elements such as radio frequency interference (RFI) which may induce errors in data being transmitted via a satellite communication link. When a transmission is affected by interference or other error-causing elements, the transmitted data becomes indecipherable. It becomes necessary to implement techniques to recover from these disturbances. The objective of this research is to develop software which simulates error control circuits and evaluate the performance of these modules in various bit error rate environments. The results of the evaluation provide the engineer with information which helps determine the optimal error control scheme. The Consultative Committee for Space Data Systems (CCSDS) recommends the use of Reed-Solomon (RS) and convolutional encoders and Viterbi and RS decoders for error correction. The use of forward error correction techniques greatly reduces the received signal to noise needed for a certain desired bit error rate. The use of concatenated coding, e.g. inner convolutional code and outer RS code, provides even greater coding gain. The 16-bit cyclic redundancy check (CRC) code is recommended by CCSDS for error detection.
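
    As a concrete illustration of the error-detection code mentioned in these two records, a bitwise CRC-16 with the CCITT polynomial 0x1021 is sketched below; the initial value, byte ordering and example frame follow the common CCITT-FALSE convention and are stated as assumptions, not as the exact CCSDS parameters.

```python
# Bitwise CRC-16 (polynomial 0x1021). A receiver recomputes the CRC over the
# payload and compares it with the transmitted value; any mismatch flags the
# frame as corrupted.
def crc16_ccitt(data: bytes, init: int = 0xFFFF) -> int:
    crc = init
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

frame = b"telemetry frame payload"
crc = crc16_ccitt(frame)
print(hex(crc))
print(crc16_ccitt(frame) == crc)                       # True: frame intact
print(crc16_ccitt(b"telemetry frame pAyload") == crc)  # False: error detected
```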

  6. The Effect of Verb Semantic Class and Verb Frequency (Entrenchment) on Children's and Adults' Graded Judgements of Argument-Structure Overgeneralization Errors

    ERIC Educational Resources Information Center

    Ambridge, Ben; Pine, Julian M.; Rowland, Caroline F.; Young, Chris R.

    2008-01-01

    Participants (aged 5-6 yrs, 9-10 yrs and adults) rated (using a five-point scale) grammatical (intransitive) and overgeneralized (transitive causative) uses of a high frequency, low frequency and novel intransitive verb from each of three semantic classes [Pinker, S. (1989a). "Learnability and cognition: the acquisition of argument structure."…

  7. Modeling the evolution and distribution of the frequency's second derivative and the braking index of pulsar spin

    NASA Astrophysics Data System (ADS)

    Xie, Yi; Zhang, Shuang-Nan; Liao, Jin-Yuan

    2015-07-01

    We model the evolution of the spin frequency's second derivative ν̈ and the braking index n of radio pulsars with simulations within the phenomenological model of their surface magnetic field evolution, which contains a long-term power-law decay modulated by short-term oscillations. For the pulsar PSR B0329+54, a model with three oscillation components can reproduce its ν̈ variation. We show that the “averaged” n is different from the instantaneous n, and its oscillation magnitude decreases abruptly as the time span increases, due to the “averaging” effect. The simulated timing residuals agree with the main features of the reported data. Our model predicts that the averaged ν̈ of PSR B0329+54 will start to decrease rapidly with newer data beyond those used in Hobbs et al. We further perform Monte Carlo simulations for the distribution of the reported data in |ν̈| and |n| versus characteristic age τ_c diagrams. It is found that the magnetic field oscillation model with decay index α = 0 can reproduce the distributions quite well. Compared with magnetic field decay due to ambipolar diffusion (α = 0.5) and the Hall cascade (α = 1.0), the model with no long-term decay (α = 0) is clearly preferred for old pulsars by the p-values of the two-dimensional Kolmogorov-Smirnov test. Supported by the National Natural Science Foundation of China.
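
    For reference, the standard pulsar-timing definitions behind the quantities discussed above are as follows.

```latex
% Braking index and characteristic age for spin frequency nu:
\[
  n = \frac{\nu\,\ddot{\nu}}{\dot{\nu}^{2}},
  \qquad
  \tau_{c} = -\frac{\nu}{2\dot{\nu}},
\]
% so oscillations of the surface magnetic field that perturb the frequency
% derivatives translate directly into large apparent swings of n in long-term
% timing data.
```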

  8. High frequency production of rapeseed transgenic plants via combination of microprojectile bombardment and secondary embryogenesis of microspore-derived embryos.

    PubMed

    Abdollahi, M R; Moieni, A; Mousavi, A; Salmanian, A H

    2011-02-01

    Transgenic doubled haploid rapeseed (Brassica napus L. cvs. Global and PF(704)) plants were obtained from microspore-derived embryo (MDE) hypocotyls using microprojectile bombardment. The binary vector pCAMBIA3301 containing the gus and bar genes under the control of the CaMV 35S promoter was used for the bombardment experiments. Transformed plantlets were selected and continuously maintained on selective medium containing 10 mg l(-1) phosphinothricin (PPT), and transgenic plants were obtained by selecting transformed secondary embryos. The presence, copy numbers and expression of the transgenes were confirmed by PCR, Southern blot, RT-PCR and histochemical GUS analyses. In the progeny test, three out of four primary transformants for the bar gene produced homozygous lines. The ploidy level of the transformed plants was confirmed by flow cytometry analysis before colchicine treatment. All of the regenerated plants were haploid except one that was a spontaneous diploid. Transgenic doubled haploid rapeseed plants were produced at high frequency (about 15.55% for the bar gene and 11.11% for the gus gene) after colchicine treatment of the haploid plantlets. This result shows a remarkable increase in the production of transgenic doubled haploid rapeseed plants compared to previous studies. PMID:20419350

  9. Financial errors in dementia: Testing a neuroeconomic conceptual framework

    PubMed Central

    Chiong, Winston; Hsu, Ming; Wudka, Danny; Miller, Bruce L.; Rosen, Howard J.

    2013-01-01

    Financial errors by patients with dementia can have devastating personal and family consequences. We developed and evaluated a neuroeconomic conceptual framework for understanding financial errors across different dementia syndromes, using a systematic, retrospective, blinded chart review of demographically-balanced cohorts of patients with Alzheimer’s disease (AD, n=100) and behavioral variant frontotemporal dementia (bvFTD, n=50). Reviewers recorded specific reports of financial errors according to a conceptual framework identifying patient cognitive and affective characteristics, and contextual influences, conferring susceptibility to each error. Specific financial errors were reported for 49% of AD and 70% of bvFTD patients (p = 0.012). AD patients were more likely than bvFTD patients to make amnestic errors (p< 0.001), while bvFTD patients were more likely to spend excessively (p = 0.004) and to exhibit other behaviors consistent with diminished sensitivity to losses and other negative outcomes (p< 0.001). Exploratory factor analysis identified a social/affective vulnerability factor associated with errors in bvFTD, and a cognitive vulnerability factor associated with errors in AD. Our findings highlight the frequency and functional importance of financial errors as symptoms of AD and bvFTD. A conceptual model derived from neuroeconomic literature identifies factors that influence vulnerability to different types of financial error in different dementia syndromes, with implications for early diagnosis and subsequent risk prevention. PMID:23550884

  10. Influence of low-spatial frequency ripples in machined potassium dihydrogen phosphate crystal surfaces on wavefront errors based on the wavelet method

    NASA Astrophysics Data System (ADS)

    Chen, Wanqun; Sun, Yazhou

    2015-02-01

    When a fly cutter is used to machine potassium dihydrogen phosphate (KDP) crystals, ripples remain in the machined surface that have a significant impact on the optical performance. An analysis of these low-spatial-frequency ripples is presented and their influence on the root-mean-squared gradient (GRMS) of the wavefront is discussed. A frequency analysis of the machined KDP crystal surfaces is performed using wavelet transform and power spectral density methods. Based on a classification of the time frequencies of these macroripples, the multimode vibration of the machine tool is found to be the main reason surface ripples are produced. Improvements in the machine design parameters are proposed to limit such effects on the wavefront performance of the KDP crystal.
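
    A minimal sketch of the power-spectral-density step named above, using Welch's method on a synthetic surface profile; the sampling rate, ripple frequency and amplitude are illustrative assumptions, not measured KDP data.

```python
# Estimate the power spectral density of a (synthetic) surface profile with
# Welch's method to expose low-spatial-frequency ripple components.
import numpy as np
from scipy.signal import welch

fs = 1000.0                                   # samples per mm (assumed)
x = np.arange(0, 50.0, 1.0 / fs)              # position along the surface, mm
profile = 5e-3 * np.sin(2 * np.pi * 0.8 * x)  # synthetic ripple at 0.8 cycles/mm
profile += 1e-3 * np.random.default_rng(6).normal(size=x.size)

f, psd = welch(profile, fs=fs, nperseg=4096)
print("dominant spatial frequency: %.2f cycles/mm" % f[np.argmax(psd)])
```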

  11. Sensitivity of tissue properties derived from MRgFUS temperature data to input errors and data inclusion criteria: ex vivo study in porcine muscle.

    PubMed

    Shi, Y C; Parker, D L; Dillon, C R

    2016-08-01

    This study evaluates the sensitivity of two magnetic resonance-guided focused ultrasound (MRgFUS) thermal property estimation methods to errors in required inputs and different data inclusion criteria. Using ex vivo pork muscle MRgFUS data, sensitivities to required inputs are determined by introducing errors to ultrasound beam locations (r_error = -2 to 2 mm) and time vectors (t_error = -2.2 to 2.2 s). In addition, the sensitivity to user-defined data inclusion criteria is evaluated by choosing different spatial (r_fit = 1-10 mm) and temporal (t_fit = 8.8-61.6 s) regions for fitting. Beam location errors resulted in up to 50% change in property estimates with local minima occurring at r_error = 0 and estimate errors less than 10% when r_error < 0.5 mm. Errors in the time vector led to property estimate errors up to 40% and without local minimum, indicating the need to trigger ultrasound sonications with the MR image acquisition. Regarding the selection of data inclusion criteria, property estimates reached stable values (less than 5% change) when r_fit > 2.5 × FWHM, and were most accurate with the least variability for longer t_fit. Guidelines provided by this study highlight the importance of identifying required inputs and choosing appropriate data inclusion criteria for robust and accurate thermal property estimation. Applying these guidelines will prevent the introduction of biases and avoidable errors when utilizing these property estimation techniques for MRgFUS thermal modeling applications. PMID:27385508

  12. Sensitivity of tissue properties derived from MRgFUS temperature data to input errors and data inclusion criteria: ex vivo study in porcine muscle

    NASA Astrophysics Data System (ADS)

    Shi, Y. C.; Parker, D. L.; Dillon, C. R.

    2016-08-01

    This study evaluates the sensitivity of two magnetic resonance-guided focused ultrasound (MRgFUS) thermal property estimation methods to errors in required inputs and different data inclusion criteria. Using ex vivo pork muscle MRgFUS data, sensitivities to required inputs are determined by introducing errors to ultrasound beam locations (r_error = -2 to 2 mm) and time vectors (t_error = -2.2 to 2.2 s). In addition, the sensitivity to user-defined data inclusion criteria is evaluated by choosing different spatial (r_fit = 1-10 mm) and temporal (t_fit = 8.8-61.6 s) regions for fitting. Beam location errors resulted in up to 50% change in property estimates with local minima occurring at r_error = 0 and estimate errors less than 10% when r_error < 0.5 mm. Errors in the time vector led to property estimate errors up to 40% and without local minimum, indicating the need to trigger ultrasound sonications with the MR image acquisition. Regarding the selection of data inclusion criteria, property estimates reached stable values (less than 5% change) when r_fit > 2.5 × FWHM, and were most accurate with the least variability for longer t_fit. Guidelines provided by this study highlight the importance of identifying required inputs and choosing appropriate data inclusion criteria for robust and accurate thermal property estimation. Applying these guidelines will prevent the introduction of biases and avoidable errors when utilizing these property estimation techniques for MRgFUS thermal modeling applications.

  13. Medication Errors

    MedlinePlus

    ... to reduce the risk of medication errors to industry and others at FDA. Additionally, DMEPA prospectively reviews ... List of Abbreviations Regulations and Guidances Guidance for Industry: Safety Considerations for Product Design to Minimize Medication ...

  14. Medication Errors

    MedlinePlus

    Medicines cure infectious diseases, prevent problems from chronic diseases, and ease pain. But medicines can also cause harmful reactions if not used ... You can help prevent errors by Knowing your medicines. Keep a list of the names of your ...

  15. Kiwifruit-derived supplements increase stool frequency in healthy adults: a randomized, double-blind, placebo-controlled study.

    PubMed

    Ansell, Juliet; Butts, Christine A; Paturi, Gunaranjan; Eady, Sarah L; Wallace, Alison J; Hedderley, Duncan; Gearry, Richard B

    2015-05-01

    The worldwide growth in the incidence of gastrointestinal disorders has created an immediate need to identify safe and effective interventions. In this randomized, double-blind, placebo-controlled study, we examined the effects of Actazin and Gold, kiwifruit-derived nutritional ingredients, on stool frequency, stool form, and gastrointestinal comfort in healthy and functionally constipated (Rome III criteria for C3 functional constipation) individuals. Using a crossover design, all participants consumed all 4 dietary interventions (Placebo, Actazin low dose [Actazin-L] [600 mg/day], Actazin high dose [Actazin-H] [2400 mg/day], and Gold [2400 mg/day]). Each intervention was taken for 28 days followed by a 14-day washout period between interventions. Participants recorded their daily bowel movements and well-being parameters in daily questionnaires. In the healthy cohort (n = 19), the Actazin-H (P = .014) and Gold (P = .009) interventions significantly increased the mean daily bowel movements compared with the washout. No significant differences were observed in stool form as determined by use of the Bristol stool scale. In a subgroup analysis of responders in the healthy cohort, Actazin-L (P = .005), Actazin-H (P < .001), and Gold (P = .001) consumption significantly increased the number of daily bowel movements by greater than 1 bowel movement per week. In the functionally constipated cohort (n = 9), there were no significant differences between interventions for bowel movements and the Bristol stool scale values or in the subsequent subgroup analysis of responders. This study demonstrated that Actazin and Gold produced clinically meaningful increases in bowel movements in healthy individuals. PMID:25931419

  16. Thermodynamics of Error Correction

    NASA Astrophysics Data System (ADS)

    Sartori, Pablo; Pigolotti, Simone

    2015-10-01

    Information processing at the molecular scale is limited by thermal fluctuations. This can cause undesired consequences in copying information since thermal noise can lead to errors that can compromise the functionality of the copy. For example, a high error rate during DNA duplication can lead to cell death. Given the importance of accurate copying at the molecular scale, it is fundamental to understand its thermodynamic features. In this paper, we derive a universal expression for the copy error as a function of entropy production and work dissipated by the system during wrong incorporations. Its derivation is based on the second law of thermodynamics; hence, its validity is independent of the details of the molecular machinery, be it any polymerase or artificial copying device. Using this expression, we find that information can be copied in three different regimes. In two of them, work is dissipated to either increase or decrease the error. In the third regime, the protocol extracts work while correcting errors, reminiscent of a Maxwell demon. As a case study, we apply our framework to study a copy protocol assisted by kinetic proofreading, and show that it can operate in any of these three regimes. We finally show that, for any effective proofreading scheme, error reduction is limited by the chemical driving of the proofreading reaction.

  17. Technical Note: Calculation of standard errors of estimates of genetic parameters with the multiple-trait derivative-free restricted maximal likelihood programs

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The MTDFREML (Boldman et al., 1995) set of programs was written to handle partially missing data in an expedient manner. When estimating (co)variance components and genetic parameters for multiple trait models, the programs have not been able to estimate standard errors of those estimates for multi...

  18. A heteroskedastic error covariance matrix estimator using a first-order conditional autoregressive Markov simulation for deriving asymptotically efficient estimates from ecologically sampled Anopheles arabiensis aquatic habitat covariates

    PubMed Central

    Jacob, Benjamin G; Griffith, Daniel A; Muturi, Ephantus J; Caamano, Erick X; Githure, John I; Novak, Robert J

    2009-01-01

    Background Autoregressive regression coefficients for Anopheles arabiensis aquatic habitat models are usually assessed using global error techniques and are reported as error covariance matrices. A global statistic, however, will summarize error estimates from multiple habitat locations. This makes it difficult to identify where there are clusters of An. arabiensis aquatic habitats of acceptable prediction. It is therefore useful to conduct some form of spatial error analysis to detect clusters of An. arabiensis aquatic habitats based on uncertainty residuals from individual sampled habitats. In this research, a method of error estimation for spatial simulation models was demonstrated using autocorrelation indices and eigenfunction spatial filters to distinguish among the effects of parameter uncertainty on a stochastic simulation of ecological sampled Anopheles aquatic habitat covariates. A test for diagnostic checking error residuals in an An. arabiensis aquatic habitat model may enable intervention efforts targeting productive habitats clusters, based on larval/pupal productivity, by using the asymptotic distribution of parameter estimates from a residual autocovariance matrix. The models considered in this research extends a normal regression analysis previously considered in the literature. Methods Field and remote-sampled data were collected during July 2006 to December 2007 in Karima rice-village complex in Mwea, Kenya. SAS 9.1.4® was used to explore univariate statistics, correlations, distributions, and to generate global autocorrelation statistics from the ecological sampled datasets. A local autocorrelation index was also generated using spatial covariance parameters (i.e., Moran's Indices) in a SAS/GIS® database. The Moran's statistic was decomposed into orthogonal and uncorrelated synthetic map pattern components using a Poisson model with a gamma-distributed mean (i.e. negative binomial regression). The eigenfunction values from the spatial
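
    The global autocorrelation statistic referred to above is Moran's I. As a language-neutral illustration of that quantity (not the authors' SAS/GIS workflow), the sketch below computes a global Moran's I for hypothetical habitat covariate values with inverse-distance weights; the coordinates, counts, and bandwidth are invented for the example.

```python
import numpy as np

def morans_i(values, coords, bandwidth=500.0):
    """Global Moran's I with inverse-distance weights, zeroed beyond a bandwidth (metres)."""
    x = np.asarray(values, dtype=float)
    n = x.size
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    w = np.where((d > 0) & (d <= bandwidth), 1.0 / d, 0.0)   # spatial weight matrix
    z = x - x.mean()
    return n * np.sum(w * np.outer(z, z)) / (w.sum() * np.sum(z ** 2))

# Hypothetical larval-count covariate sampled at habitat locations (metres).
rng = np.random.default_rng(0)
coords = rng.uniform(0, 2000, size=(50, 2))
counts = rng.poisson(5, size=50)
print("Moran's I:", round(morans_i(counts, coords), 3))
```

    Values near zero indicate no spatial autocorrelation (the expectation under the null is -1/(n-1)); strongly positive values indicate clustering of similar habitat values.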

  19. Low-frequency, low-magnitude vibrations (LFLM) enhances chondrogenic differentiation potential of human adipose derived mesenchymal stromal stem cells (hASCs).

    PubMed

    Marycz, Krzysztof; Lewandowski, Daniel; Tomaszewski, Krzysztof A; Henry, Brandon M; Golec, Edward B; Marędziak, Monika

    2016-01-01

    The aim of this study was to evaluate whether low-frequency, low-magnitude vibrations (LFLM) could enhance the chondrogenic differentiation potential of human adipose-derived mesenchymal stem cells (hASCs) while simultaneously inhibiting their adipogenic properties for biomedical purposes. We developed a prototype device that induces low-magnitude (0.3 g), low-frequency vibrations at 25, 35 and 45 Hz. We then used hASCs to investigate their cellular response to these mechanical signals, and also evaluated changes in hASC morphology and proliferative activity in response to each frequency. Induction of chondrogenesis in hASCs under the influence of a 35 Hz signal led to the most effective and stable cartilaginous tissue formation, through the highest secretion of Bone Morphogenetic Protein 2 (BMP-2) and Collagen type II with a low concentration of Collagen type I. These results correlated well with the corresponding gene expression levels. Simultaneously, we observed significant up-regulation of α3, α4, β1 and β3 integrins, as well as Sox-9, in chondroblast progenitor cells treated with 35 Hz vibrations. Interestingly, application of the 35 Hz frequency also significantly inhibited adipogenesis of hASCs. These results suggest that the application of LFLM vibrations together with stem cell therapy might be a promising tool in cartilage regeneration. PMID:26966645

  20. Low-frequency, low-magnitude vibrations (LFLM) enhances chondrogenic differentiation potential of human adipose derived mesenchymal stromal stem cells (hASCs)

    PubMed Central

    Lewandowski, Daniel; Tomaszewski, Krzysztof A.; Henry, Brandon M.; Golec, Edward B.; Marędziak, Monika

    2016-01-01

    The aim of this study was to evaluate whether low-frequency, low-magnitude vibrations (LFLM) could enhance the chondrogenic differentiation potential of human adipose-derived mesenchymal stem cells (hASCs) while simultaneously inhibiting their adipogenic properties for biomedical purposes. We developed a prototype device that induces low-magnitude (0.3 g), low-frequency vibrations at 25, 35 and 45 Hz. We then used hASCs to investigate their cellular response to these mechanical signals, and also evaluated changes in hASC morphology and proliferative activity in response to each frequency. Induction of chondrogenesis in hASCs under the influence of a 35 Hz signal led to the most effective and stable cartilaginous tissue formation, through the highest secretion of Bone Morphogenetic Protein 2 (BMP-2) and Collagen type II with a low concentration of Collagen type I. These results correlated well with the corresponding gene expression levels. Simultaneously, we observed significant up-regulation of α3, α4, β1 and β3 integrins, as well as Sox-9, in chondroblast progenitor cells treated with 35 Hz vibrations. Interestingly, application of the 35 Hz frequency also significantly inhibited adipogenesis of hASCs. These results suggest that the application of LFLM vibrations together with stem cell therapy might be a promising tool in cartilage regeneration. PMID:26966645

  1. A voxel-based technique to estimate volume and volumetric error of terrestrial photogrammetry-derived digital terrain models (DTM) of topographic depressions

    NASA Astrophysics Data System (ADS)

    Székely, Balázs; Raveloson, Andrea; Rasztovits, Sascha; Molnár, Gábor; Dorninger, Peter

    2013-04-01

    It is a common task in geoscience to determine the volume of a topographic depression (e.g., a valley, a crater, a gully, etc.) based on a digital terrain model (DTM). In the case of DTMs based on laser-scanned data this task can be fulfilled with relatively high accuracy. However, if the DTM is generated using terrestrial photogrammetric methods, the limitations of the technology often produce geodetically inaccurate or biased models in forested or poorly visible areas, or where the landform has an ill-posed geometry (e.g., it is elongated). In these cases the inaccuracies may hamper the generation of a proper DTM. On the other hand, if we are interested only in determining the volume of the feature with a certain accuracy, or we intend to carry out an order-of-magnitude volumetric estimation, a DTM with larger inaccuracies is tolerable. In this case the volume calculation can still be done by setting realistic assumptions about the errors of the DTM. In our approach two DTMs are generated to create top and bottom envelope surfaces that confine the "true" but unknown DTM. The varying accuracy of the photogrammetric DTM is accounted for via the varying deviation of these two surfaces: at problematic corners of the feature the deviation of the two surfaces will be larger, whereas at well-renderable domains the deviation of the surfaces remains minimal. Since such topographic depressions may have a complicated geometry, the error-prone areas may complicate the geometry of the aforementioned envelopes even more, and the proper calculation of the volume may turn out to be difficult. To reduce this difficulty, a voxel-based approach is used. The volumetric error is calculated based on the gridded envelopes using an appropriate voxel resolution. The method is applied to gully features termed lavakas that exist in large numbers in Madagascar. These landforms are typically characterised by a complex shape and steep walls; they are often elongated and have internal crests. All these
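
    A minimal sketch of the envelope-and-voxel idea, assuming the top and bottom envelope DTMs are already gridded onto the same regular raster; the surfaces, cell size, and slab thickness below are synthetic placeholders, not the lavaka data.

```python
import numpy as np

def depression_volume(top, bottom, rim_level, cell=1.0, dz=0.1):
    """Volume of the void below rim_level, bracketed by two envelope DTMs.

    top/bottom: 2-D elevation arrays on the same grid, bounding the unknown
    true surface from above and below. Counting voxels above each envelope
    gives a minimum and a maximum volume estimate.
    """
    levels = np.arange(bottom.min(), rim_level, dz)            # horizontal voxel slabs
    vol_min = sum(np.count_nonzero(top    < z) for z in levels) * cell * cell * dz
    vol_max = sum(np.count_nonzero(bottom < z) for z in levels) * cell * cell * dz
    return vol_min, vol_max

# Synthetic example: a bowl-shaped depression with a +/-0.5 m envelope spread.
y, x = np.mgrid[-50:50, -50:50].astype(float)
true_surface = 0.004 * (x ** 2 + y ** 2) - 10.0                # metres, rim at z = 0
top, bottom = true_surface + 0.5, true_surface - 0.5
print(depression_volume(top, bottom, rim_level=0.0, cell=1.0, dz=0.1))
```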

  2. Automatic Locking of Laser Frequency to an Absorption Peak

    NASA Technical Reports Server (NTRS)

    Koch, Grady J.

    2006-01-01

    An electronic system adjusts the frequency of a tunable laser, eventually locking the frequency to a peak in the optical absorption spectrum of a gas (or of a Fabry-Perot cavity that has an absorption peak like that of a gas). This system was developed to enable precise locking of the frequency of a laser used in differential absorption LIDAR measurements of trace atmospheric gases. This system also has great commercial potential as a prototype of means for precise control of frequencies of lasers in future dense wavelength-division-multiplexing optical communications systems. The operation of this system is completely automatic: Unlike in the operation of some prior laser-frequency-locking systems, there is ordinarily no need for a human operator to adjust the frequency manually to an initial value close enough to the peak to enable automatic locking to take over. Instead, this system also automatically performs the initial adjustment. The system (see Figure 1) is based on a concept of (1) initially modulating the laser frequency to sweep it through a spectral range that includes the desired absorption peak, (2) determining the derivative of the absorption peak with respect to the laser frequency for use as an error signal, (3) identifying the desired frequency [at the very top (which is also the middle) of the peak] as the frequency where the derivative goes to zero, and (4) thereafter keeping the frequency within a locking range and adjusting the frequency as needed to keep the derivative (the error signal) as close as possible to zero. More specifically, the system utilizes the fact that in addition to a zero crossing at the top of the absorption peak, the error signal also closely approximates a straight line in the vicinity of the zero crossing (see Figure 2). This vicinity is the locking range because the linearity of the error signal in this range makes it useful as a source of feedback for a proportional + integral + derivative control scheme that
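
    The control sequence described above (sweep, derivative-of-absorption error signal, zero crossing at the peak, then a hold loop) can be sketched schematically as follows; the Lorentzian line shape, sweep range, and loop gains are illustrative assumptions, not the actual instrument parameters.

```python
import numpy as np

def absorption(f, f0=0.0, width=1.0):
    """Toy Lorentzian absorption line (arbitrary units); f is the offset in GHz."""
    return 1.0 / (1.0 + ((f - f0) / width) ** 2)

# 1) Sweep the laser across the line and form the derivative error signal.
sweep = np.linspace(-5.0, 5.0, 1001)
deriv = np.gradient(absorption(sweep), sweep)

# 2) Coarse acquisition: the line centre is the zero crossing of the derivative
#    that lies between the derivative's two extrema.
i_max, i_min = int(np.argmax(deriv)), int(np.argmin(deriv))
f = sweep[i_max + int(np.argmin(np.abs(deriv[i_max:i_min + 1])))]

# 3) Hold: a simple proportional-integral loop drives the local derivative
#    (the error signal) back toward zero whenever the frequency drifts.
kp, ki, integ, df = 0.2, 0.02, 0.0, 1e-3
for _ in range(2000):
    err = (absorption(f + df) - absorption(f - df)) / (2 * df)   # numerical dA/df
    integ += err
    f += kp * err + ki * integ

print(f"locked at offset {f:+.4f} GHz (line centre at 0.0)")
```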

  3. Atmospheric absorption model for dry air and water vapor at microwave frequencies below 100 GHz derived from spaceborne radiometer observations

    NASA Astrophysics Data System (ADS)

    Wentz, Frank J.; Meissner, Thomas

    2016-05-01

    The Liebe and Rosenkranz atmospheric absorption models for dry air and water vapor below 100 GHz are refined based on an analysis of antenna temperature (TA) measurements taken by the Global Precipitation Measurement Microwave Imager (GMI) in the frequency range 10.7 to 89.0 GHz. The GMI TA measurements are compared to the TA predicted by a radiative transfer model (RTM), which incorporates both the atmospheric absorption model and a model for the emission and reflection from a rough-ocean surface. The inputs for the RTM are the geophysical retrievals of wind speed, columnar water vapor, and columnar cloud liquid water obtained from the satellite radiometer WindSat. The Liebe and Rosenkranz absorption models are adjusted to achieve consistency with the RTM. The vapor continuum is decreased by 3% to 10%, depending on vapor. To accomplish this, the foreign-broadening part is increased by 10%, and the self-broadening part is decreased by about 40% at the higher frequencies. In addition, the strength of the water vapor line is increased by 1%, and the shape of the line at low frequencies is modified. The dry air absorption is increased, with the increase being a maximum of 20% at 89 GHz, the highest frequency considered here. The nonresonant oxygen absorption is increased by about 6%. In addition to the RTM comparisons, our results are supported by a comparison between columnar water vapor retrievals from 12 satellite microwave radiometers and GPS-retrieved water vapor values.

  4. Retransmission error control with memory

    NASA Technical Reports Server (NTRS)

    Sindhu, P. S.

    1977-01-01

    In this paper, an error control technique that is a basic improvement over automatic-repeat-request ARQ is presented. Erroneously received blocks in an ARQ system are used for error control. The technique is termed ARQ-with-memory (MRQ). The general MRQ system is described, and simple upper and lower bounds are derived on the throughput achievable by MRQ. The performance of MRQ with respect to throughput, message delay and probability of error is compared to that of ARQ by simulating both systems using error data from a VHF satellite channel being operated in the ALOHA packet broadcasting mode.
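
    A toy Monte Carlo comparison conveys why keeping erroneous blocks helps. Here plain ARQ discards bad copies, while the memory variant combines all received copies by bitwise majority vote once at least three are available; the block length, bit-error rate, and combining rule are illustrative assumptions rather than the MRQ scheme of the paper, and a genie-aided check stands in for error detection.

```python
import numpy as np

rng = np.random.default_rng(1)
BLOCK, P_BIT, TRIALS = 256, 0.01, 2000        # block size (bits), channel BER, blocks

def transmissions_needed(use_memory):
    copies = []
    for attempt in range(1, 100):
        errs = rng.random(BLOCK) < P_BIT      # bit-error pattern of this copy
        copies.append(errs)
        if not errs.any():                    # a perfect copy was received
            return attempt
        if use_memory and len(copies) >= 3:
            # Majority vote across all copies: a bit stays wrong only if it was
            # flipped in more than half of the received copies.
            still_wrong = np.sum(copies, axis=0) > len(copies) / 2
            if not still_wrong.any():
                return attempt
    return attempt

for label, mem in (("plain ARQ", False), ("ARQ-with-memory", True)):
    n = np.mean([transmissions_needed(mem) for _ in range(TRIALS)])
    print(f"{label}: {n:.2f} transmissions per block on average")
```

    For fixed-length blocks the average number of transmissions per block is the reciprocal of throughput, so the memory variant's lower count translates directly into the higher throughput discussed in the abstract.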

  5. Causal determination of acoustic group velocity and frequency derivative of attenuation with finite-bandwidth Kramers-Kronig relations

    NASA Astrophysics Data System (ADS)

    Mobley, Joel; Waters, Kendall R.; Miller, James G.

    2005-07-01

    Kramers-Kronig (KK) analyses of experimental data are complicated by the extrapolation problem, that is, how the unexamined spectral bands impact KK calculations. This work demonstrates the causal linkages in resonant-type data provided by acoustic KK relations for the group velocity (cg) and the derivative of the attenuation coefficient (α') (components of the derivative of the acoustic complex wave number) without extrapolation or unmeasured parameters. These relations provide stricter tests of causal consistency relative to previously established KK relations for the phase velocity (cp) and attenuation coefficient (α) (components of the undifferentiated acoustic wave number) due to their shape invariance with respect to subtraction constants. For both the group velocity and attenuation derivative, three forms of the relations are derived. These relations are equivalent for bandwidths covering the entire infinite spectrum, but differ when restricted to bandlimited spectra. Using experimental data from suspensions of elastic spheres in saline, the accuracy of finite-bandwidth KK predictions for cg and α' is demonstrated. Of the multiple methods, the most accurate were found to be those whose integrals were expressed only in terms of the phase velocity and attenuation coefficient themselves, requiring no differentiated quantities.
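
    The two quantities treated by these relations are the real and imaginary parts of the derivative of the complex wave number K(ω) = ω/c_p(ω) + iα(ω). A short numerical sketch (with a made-up weakly dispersive medium, not the elastic-sphere suspension data) shows how c_g and α' follow from finite differences of K.

```python
import numpy as np

# Hypothetical weakly dispersive medium tabulated over a measurement band.
f = np.linspace(1e6, 10e6, 400)            # Hz
w = 2 * np.pi * f                          # rad/s
cp = 1500.0 + 2.0e-6 * f                   # phase velocity, m/s
alpha = 1.0e-14 * f ** 2                   # attenuation coefficient, Np/m

K = w / cp + 1j * alpha                    # complex wave number K(w)
dK = np.gradient(K, w)                     # dK/dw by finite differences

cg = 1.0 / dK.real                         # group velocity: c_g = 1 / Re(dK/dw)
alpha_prime = dK.imag                      # attenuation derivative: d(alpha)/dw

i = 200
print(f"at {f[i]/1e6:.1f} MHz: c_p = {cp[i]:.1f} m/s, c_g = {cg[i]:.1f} m/s, "
      f"alpha' = {alpha_prime[i]:.2e} Np/(m*rad/s)")
```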

  6. Standard Errors of the Kernel Equating Methods under the Common-Item Design.

    ERIC Educational Resources Information Center

    Liou, Michelle; And Others

    This research derives simplified formulas for computing the standard error of the frequency estimation method for equating score distributions that are continuized using a uniform or Gaussian kernel function (P. W. Holland, B. F. King, and D. T. Thayer, 1989; Holland and Thayer, 1987). The simplified formulas are applicable to equating both the…

  7. Phase Errors and the Capture Effect

    SciTech Connect

    Blair, J., and Machorro, E.

    2011-11-01

    This slide-show presents analysis of spectrograms and the phase error of filtered noise in a signal. When the filtered noise is smaller than the signal amplitude, the phase error can never exceed 90°, so the average phase error over many cycles is zero: this is called the capture effect because the largest signal captures the phase and frequency determination.
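
    The 90° bound follows from phasor geometry: adding an interferer of amplitude B < A to a signal of amplitude A perturbs the resultant phase by at most arcsin(B/A), and the perturbation averages to zero over a full cycle of the relative phase. A few lines verify this numerically; the amplitudes are arbitrary.

```python
import numpy as np

A, B = 1.0, 0.6                            # signal and (smaller) interference amplitudes
theta = np.linspace(0.0, 2.0 * np.pi, 100001)
phase_err = np.angle(A + B * np.exp(1j * theta))     # phase of the combined phasor

print(f"max |phase error| = {np.degrees(np.abs(phase_err).max()):.2f} deg, "
      f"arcsin(B/A) = {np.degrees(np.arcsin(B / A)):.2f} deg")
print(f"mean phase error over a full cycle = {np.degrees(phase_err.mean()):+.5f} deg")
```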

  8. Acoustic-frequency vibratory stimulation regulates the balance between osteogenesis and adipogenesis of human bone marrow-derived mesenchymal stem cells.

    PubMed

    Chen, Xi; He, Fan; Zhong, Dong-Yan; Luo, Zong-Ping

    2015-01-01

    Osteoporosis can be associated with the disordered balance between osteogenesis and adipogenesis of bone marrow-derived mesenchymal stem cells (BM-MSCs). Although low-frequency mechanical vibration has been demonstrated to promote osteogenesis, little is known about the influence of acoustic-frequency vibratory stimulation (AFVS). BM-MSCs were subjected to AFVS at frequencies of 0, 30, 400, and 800 Hz and induced toward osteogenic or adipogenic-specific lineage. Extracellular matrix mineralization was determined by Alizarin Red S staining and lipid accumulation was assessed by Oil Red O staining. Transcript levels of osteogenic and adipogenic marker genes were evaluated by real-time reverse transcription-polymerase chain reaction. Cell proliferation of BM-MSCs was promoted following exposure to AFVS at 800 Hz. Vibration at 800 Hz induced the highest level of calcium deposition and significantly increased mRNA expression of COL1A1, ALP, RUNX2, and SPP1. The 800 Hz group downregulated lipid accumulation and levels of adipogenic genes, including FABP4, CEBPA, PPARG, and LEP, while vibration at 30 Hz supported adipogenesis. BM-MSCs showed a frequency-dependent response to acoustic vibration. AFVS at 800 Hz was the most favorable for osteogenic differentiation and simultaneously suppressed adipogenesis. Thus, acoustic vibration could potentially become a novel means to prevent and treat osteoporosis. PMID:25738155

  9. Stem cell derived in vivo-like human cardiac bodies in a microfluidic device for toxicity testing by beating frequency imaging.

    PubMed

    Bergström, Gunnar; Christoffersson, Jonas; Schwanke, Kristin; Zweigerdt, Robert; Mandenius, Carl-Fredrik

    2015-08-01

    Beating in vivo-like human cardiac bodies (CBs) were used in a microfluidic device for testing cardiotoxicity. The CBs, cardiomyocyte cell clusters derived from induced pluripotent stem cells, exhibited typical structural and functional properties of the native human myocardium. The CBs were captured in niches along a perfusion channel in the device. Video imaging was utilized for automatic monitoring of the beating frequency of each individual CB. The device allowed assessment of cardiotoxic effects of drug substances doxorubicin, verapamil and quinidine on the 3D clustered cardiomyocytes. Beating frequency data recorded over a period of 6 hours are presented and compared to literature data. The results indicate that this microfluidic setup with imaging of CB characteristics provides a new opportunity for label-free, non-invasive investigation of toxic effects in a 3D microenvironment. PMID:26135270
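
    One simple way to automate this kind of beating-frequency readout is to reduce each video to a mean-intensity trace per cardiac body and take the dominant FFT peak. The sketch below does this for a synthetic trace; the frame rate, drift model, and frequency band are assumptions, not the parameters of the device described above.

```python
import numpy as np

FPS = 30.0                                        # assumed video frame rate
t = np.arange(0, 60, 1 / FPS)                     # 60 s recording

# Synthetic mean-intensity trace of one cardiac body: ~1.2 Hz beating + noise + drift.
trace = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.random.randn(t.size) + 0.01 * t

trace = trace - np.polyval(np.polyfit(t, trace, 1), t)    # remove slow drift
spec = np.abs(np.fft.rfft(trace * np.hanning(t.size)))
freqs = np.fft.rfftfreq(t.size, d=1 / FPS)

band = (freqs > 0.3) & (freqs < 5.0)              # plausible beating band (18-300 bpm)
f_beat = freqs[band][np.argmax(spec[band])]
print(f"beating frequency ~ {f_beat:.2f} Hz ({f_beat * 60:.0f} beats/min)")
```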

  10. Comment on ''Quasinormal modes in Schwarzschild-de Sitter spacetime: A simple derivation of the level spacing of the frequencies''

    SciTech Connect

    Batic, D.; Kelkar, N. G.; Nowakowski, M.

    2011-05-15

    It is shown here that the extraction of quasinormal modes within the first Born approximation of the scattering amplitude is mathematically not well-founded. Indeed, the constraints on the existence of the scattering amplitude integral lead to inequalities for the imaginary parts of the quasinormal mode frequencies. For instance, in the Schwarzschild case, 0 ≤ ω_I < κ (where κ is the surface gravity at the horizon) invalidates the poles deduced from the first Born approximation method, namely, ω_n = inκ.

  11. Radar error statistics for the space shuttle

    NASA Technical Reports Server (NTRS)

    Lear, W. M.

    1979-01-01

    Radar error statistics of C-band and S-band that are recommended for use with the groundtracking programs to process space shuttle tracking data are presented. The statistics are divided into two parts: bias error statistics, using the subscript B, and high frequency error statistics, using the subscript q. Bias errors may be slowly varying to constant. High frequency random errors (noise) are rapidly varying and may or may not be correlated from sample to sample. Bias errors were mainly due to hardware defects and to errors in correction for atmospheric refraction effects. High frequency noise was mainly due to hardware and due to atmospheric scintillation. Three types of atmospheric scintillation were identified: horizontal, vertical, and line of sight. This was the first time that horizontal and line of sight scintillations were identified.

  12. Automatic oscillator frequency control system

    NASA Technical Reports Server (NTRS)

    Smith, S. F. (Inventor)

    1985-01-01

    A frequency control system makes an initial correction of the frequency of its own timing circuit after comparison against a frequency of known accuracy and then sequentially checks and corrects the frequencies of several voltage controlled local oscillator circuits. The timing circuit initiates the machine cycles of a central processing unit which applies a frequency index to an input register in a modulo-sum frequency divider stage and enables a multiplexer to clock an accumulator register in the divider stage with a cyclical signal derived from the oscillator circuit being checked. Upon expiration of the interval, the processing unit compares the remainder held as the contents of the accumulator against a stored zero error constant and applies an appropriate correction word to a correction stage to shift the frequency of the oscillator being checked. A signal from the accumulator register may be used to drive a phase plane ROM and, with periodic shifts in the applied frequency index, to provide frequency shift keying of the resultant output signal. Interposition of a phase adder between the accumulator register and phase plane ROM permits phase shift keying of the output signal by periodic variation in the value of a phase index applied to one input of the phase adder.

  13. Error diffusion with a more symmetric error distribution

    NASA Astrophysics Data System (ADS)

    Fan, Zhigang

    1994-05-01

    In this paper a new error diffusion algorithm is presented that effectively eliminates the 'worm' artifacts appearing in the standard methods. The new algorithm processes each scanline of the image in two passes, a forward pass followed by a backward one. This enables the error made at one pixel to be propagated to all the 'future' pixels. A much more symmetric error distribution is achieved than that of the standard methods. The frequency response of the noise shaping filter associated with the new algorithm is mirror-symmetric in magnitude.
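
    The abstract does not fully specify the two-pass forward/backward algorithm, so the sketch below instead shows the closely related serpentine (alternating scan direction) Floyd-Steinberg variant, a common way of making the error distribution more symmetric and suppressing worm artifacts; it should not be read as the authors' exact method.

```python
import numpy as np

def serpentine_floyd_steinberg(img):
    """Binarize a grayscale image (values in [0, 1]) with alternating scan direction."""
    out = img.astype(float).copy()
    h, w = out.shape
    for y in range(h):
        forward = (y % 2 == 0)
        xs = range(w) if forward else range(w - 1, -1, -1)
        sgn = 1 if forward else -1                  # mirror the kernel on reverse rows
        for x in xs:
            old = out[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            out[y, x] = new
            err = old - new
            if 0 <= x + sgn < w:
                out[y, x + sgn] += err * 7 / 16     # next pixel in scan direction
            if y + 1 < h:
                if 0 <= x - sgn < w:
                    out[y + 1, x - sgn] += err * 3 / 16
                out[y + 1, x] += err * 5 / 16
                if 0 <= x + sgn < w:
                    out[y + 1, x + sgn] += err * 1 / 16
    return out.astype(np.uint8)

gray = np.tile(np.linspace(0, 1, 128), (64, 1))     # smooth ramp: worst case for worms
halftone = serpentine_floyd_steinberg(gray)
```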

  14. Interpolation Errors in Spectrum Analyzers

    NASA Technical Reports Server (NTRS)

    Martin, J. L.

    1996-01-01

    To obtain the proper measurement amplitude with a spectrum analyzer, the correct frequency-dependent transducer factor must be added to the voltage measured by the transducer. This report examines how entering transducer factors into a spectrum analyzer can cause significant errors in field amplitude due to the misunderstanding of the analyzer's interpolation methods. It also discusses how to reduce these errors to obtain a more accurate field amplitude reading.

  15. DFT and ab initio calculations of the vibrational frequencies and visible spectra of triazenes derived from cyclic amines

    NASA Astrophysics Data System (ADS)

    Dabbagh, Hossein A.; Teimouri, Abbas; Chermahini, Alireza Najafi; Shiasi, Rezvan

    2007-06-01

    We present a detailed analysis of the structures, infrared spectra and visible spectra of the 4-substituted aminoazo-benzenesulfonyl azides. The preparation of 4-sulfonyl azide benzenediazonium chloride with cyclic amines of various ring sizes (pyrrolidine, piperidine, 4-methylpiperidine, N-methylpiperazine, morpholine and hexamethyleneimine) has been investigated theoretically at the HF and DFT levels of theory using the standard 6-31G* basis set. The optimized geometries and calculated vibrational frequencies are evaluated via comparison with experimental values. The vibrational spectral data obtained from solid-phase FT-IR spectra are assigned to modes based on the results of the theoretical calculations. The observed spectra are found to be in good agreement with the calculations.

  16. High frequency of mononuclear myeloid-derived suppressor cells is associated with exacerbation of inflammatory bowel disease.

    PubMed

    Xi, Qinhua; Li, Yueqin; Dai, Juan; Chen, Weichang

    2015-01-01

    Exacerbation and relapse of inflammatory bowel disease (IBD) is associated with reduced antibacterial immunity and increased immune regulatory activity, but the source of increased immune regulation during episodes of disease activity is unclear. Myeloid-derived suppressor cells (MDSCs) are a cell type with a well-recognized role in limiting immune reactions. MDSC function in IBD and its relation to disease activity, however, remains unexplored. Here we show that patients with either ulcerative colitis (UC) or Crohn's disease (CD) have high peripheral blood levels of mononuclear MDSCs. Exacerbation of disease, in particular, is associated with higher mononuclear MDSC counts than remission. Interestingly, chronic experimental colitis in mice coincides with increased MDSC mobilization. Thus, our results suggest that mononuclear MDSCs are endogenous antagonists of immune system functionality in mucosal inflammation, and that the depression of antibacterial immunity associated with exacerbation of disease might involve increased activity of the MDSC compartment. PMID:25775229

  17. Nonrandom distribution and frequencies of genomic and EST-derived microsatellite markers in rice, wheat, and barley

    PubMed Central

    La Rota, Mauricio; Kantety, Ramesh V; Yu, Ju-Kyung; Sorrells, Mark E

    2005-01-01

    Background Earlier comparative maps between the genomes of rice (Oryza sativa L.), barley (Hordeum vulgare L.) and wheat (Triticum aestivum L.) were linkage maps based on cDNA-RFLP markers. The low number of polymorphic RFLP markers has limited the development of dense genetic maps in wheat and the number of available anchor points in comparative maps. Higher density comparative maps using PCR-based anchor markers are necessary to better estimate the conservation of colinearity among cereal genomes. The purposes of this study were to characterize the proportion of transcribed DNA sequences containing simple sequence repeats (SSR or microsatellites) by length and motif for wheat, barley and rice and to determine in-silico rice genome locations for primer sets developed for wheat and barley Expressed Sequence Tags. Results The proportions of SSR types (di-, tri-, tetra-, and penta-nucleotide repeats) and motifs varied with the length of the SSRs within and among the three species, with trinucleotide SSRs being the most frequent. Distributions of genomic microsatellites (gSSRs), EST-derived microsatellites (EST-SSRs), and transcribed regions in the contiguous sequence of rice chromosome 1 were highly correlated. More than 13,000 primer pairs were developed for use by the cereal research community as potential markers in wheat, barley and rice. Conclusion Trinucleotide SSRs were the most common type in each of the species; however, the relative proportions of SSR types and motifs differed among rice, wheat, and barley. Genomic microsatellites were found to be primarily located in gene-rich regions of the rice genome. Microsatellite markers derived from the use of non-redundant EST-SSRs are an economic and efficient alternative to RFLP for comparative mapping in cereals. PMID:15720707
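
    Tabulating SSR types and motifs of the kind described above amounts to a repeat scan over each sequence. A minimal regex-based version for di- to penta-nucleotide repeats is sketched below; the minimum repeat counts and the example sequence are arbitrary, and compound motifs (e.g., AGAG inside an AG run) are reported without the deduplication a real pipeline would apply.

```python
import re

# Minimum number of tandem repeats per motif size (arbitrary illustrative thresholds).
MIN_REPEATS = {2: 6, 3: 5, 4: 4, 5: 4}

def find_ssrs(seq):
    """Yield (start, motif, n_repeats) for simple sequence repeats in `seq`."""
    seq = seq.upper()
    for k, min_n in MIN_REPEATS.items():
        pattern = re.compile(r"((?:[ACGT]{%d}))\1{%d,}" % (k, min_n - 1))
        for m in pattern.finditer(seq):
            motif = m.group(1)
            if len(set(motif)) > 1:                 # skip mononucleotide runs like AAAAAA
                yield m.start(), motif, len(m.group(0)) // k

est = "GATTACA" + "AG" * 8 + "GGC" + "CTT" * 6 + "ACGTACGT"   # toy EST sequence
for start, motif, n in find_ssrs(est):
    print(f"pos {start:3d}  motif {motif}  x{n}")
```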

  18. A theory of human error

    NASA Technical Reports Server (NTRS)

    Mcruer, D. T.; Clement, W. F.; Allen, R. W.

    1981-01-01

    Human errors tend to be treated in terms of clinical and anecdotal descriptions, from which remedial measures are difficult to derive. Correction of the sources of human error requires an attempt to reconstruct underlying and contributing causes of error from the circumstantial causes cited in official investigative reports. A comprehensive analytical theory of the cause-effect relationships governing propagation of human error is indispensable to a reconstruction of the underlying and contributing causes. A validated analytical theory of the input-output behavior of human operators involving manual control, communication, supervisory, and monitoring tasks which are relevant to aviation, maritime, automotive, and process control operations is highlighted. This theory of behavior, both appropriate and inappropriate, provides an insightful basis for investigating, classifying, and quantifying the needed cause-effect relationships governing propagation of human error.

  19. [Diagnostic Errors in Medicine].

    PubMed

    Buser, Claudia; Bankova, Andriyana

    2015-12-01

    The recognition of diagnostic errors in everyday practice can help improve patient safety. The most common diagnostic errors are the cognitive errors, followed by system-related errors and no fault errors. The cognitive errors often result from mental shortcuts, known as heuristics. The rate of cognitive errors can be reduced by a better understanding of heuristics and the use of checklists. The autopsy as a retrospective quality assessment of clinical diagnosis has a crucial role in learning from diagnostic errors. Diagnostic errors occur more often in primary care in comparison to hospital settings. On the other hand, the inpatient errors are more severe than the outpatient errors. PMID:26649954

  20. Determination of defect density of state distribution of amorphous silicon solar cells by temperature derivative capacitance-frequency measurement

    SciTech Connect

    Yang, Guangtao Swaaij, R. A. C. M. M. van; Dobrovolskiy, S.; Zeman, M.

    2014-01-21

    In this contribution, we demonstrate the application of temperature-dependent capacitance-frequency (C-f) measurements to n-i-p hydrogenated amorphous silicon (a-Si:H) solar cells that are forward-biased. By using a forward bias, the C-f measurement can detect the density of defect states in a particular energy range of the interface region. For this contribution, we have carried out this measurement method on n-i-p a-Si:H solar cells of which the intrinsic layer has been exposed to a H2-plasma before p-type layer deposition. After this treatment, the open-circuit voltage and fill factor increased significantly, as did the blue response of the solar cells, as concluded from external quantum efficiency. For single-junction n-i-p a-Si:H solar cells, the initial efficiency increased from 6.34% to 8.41%. This performance enhancement is believed to be mainly due to a reduction of the defect density in the i-p interface region after the H2-plasma treatment. These results are confirmed by the C-f measurements. After the H2-plasma treatment, the defect density in the intrinsic layer near the i-p interface region is lower and peaks at an energy level deeper in the band gap. These C-f measurements therefore enable us to monitor changes in the defect density in the interface region as a result of a hydrogen plasma. The lower defect density at the i-p interface as detected by the C-f measurements is supported by dark current-voltage measurements, which indicate a lower carrier recombination rate.

  1. A concatenated coding scheme for error control

    NASA Technical Reports Server (NTRS)

    Lin, S.

    1985-01-01

    A concatenated coding scheme for error control in data communications was analyzed. The inner code is used for both error correction and detection; however, the outer code is used only for error detection. A retransmission is requested if either the inner code decoder fails to make a successful decoding or the outer code decoder detects the presence of errors after the inner code decoding. The probability of undetected error of the proposed scheme is derived, and an efficient method for computing this probability is presented. The throughput efficiency of the proposed error control scheme, incorporated with a selective-repeat ARQ retransmission strategy, is also analyzed.
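
    A toy Monte Carlo in the same spirit (inner code for correction, outer code for detection, retransmission on any failure) is sketched below. The inner (3,1) repetition code, the 16-bit XOR-fold outer check, the block length, and the channel error rate are all illustrative stand-ins for the codes analyzed in the report.

```python
import numpy as np

rng = np.random.default_rng(7)
K, P, TRIALS = 48, 0.04, 50_000                  # info bits, channel BER, blocks sent

def outer_parity(bits):
    """Toy 16-bit outer detection code: XOR-fold the info bits into 16 parity bits."""
    return np.bitwise_xor.reduce(bits.reshape(-1, 16), axis=0)

retx = undetected = accepted = 0
for _ in range(TRIALS):
    info = rng.integers(0, 2, K, dtype=np.uint8)
    frame = np.concatenate([info, outer_parity(info)])       # outer encoding (detection)
    coded = np.repeat(frame, 3)                              # inner (3,1) repetition code
    recv = coded ^ (rng.random(coded.size) < P)              # binary symmetric channel
    dec = (recv.reshape(-1, 3).sum(axis=1) >= 2).astype(np.uint8)  # inner majority decode
    if np.array_equal(outer_parity(dec[:K]), dec[K:]):       # outer check passes
        accepted += 1
        if not np.array_equal(dec[:K], info):
            undetected += 1                                  # accepted but wrong
    else:
        retx += 1                                            # request retransmission

print(f"accepted {accepted}/{TRIALS} transmissions, "
      f"estimated P(undetected error) ~ {undetected / TRIALS:.1e}")
```

    With these parameters most residual errors left by the inner decoder are caught by the outer check, so the estimated probability of undetected error is tiny (often zero in a short run); that probability is exactly the quantity the report derives analytically.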

  2. Statistical errors in Monte Carlo estimates of systematic errors

    NASA Astrophysics Data System (ADS)

    Roe, Byron P.

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method (see ), each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For simple models presented here the multisim model was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim model was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k2. The specific terms unisim and multisim were coined by Peter Meyers and Steve Brice, respectively, for the MiniBooNE experiment. However, the concepts have been developed over time and have been in general use for some time.
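
    The two procedures can be compared on a toy observable with two systematic parameters; the observable, its linear sensitivities, the parameter sigmas, and the MC statistics below are invented purely to illustrate the bookkeeping.

```python
import numpy as np

rng = np.random.default_rng(3)
SIGMA = np.array([0.05, 0.03])              # assumed 1-sigma size of each systematic

def predicted_count(theta, n_mc=5000):
    """Toy MC prediction of the event count in one data bin.

    True dependence is linear, N = 100*(1 + 2*theta0 - 1.5*theta1); the MC
    estimate carries a statistical error of roughly N / sqrt(n_mc).
    """
    mean = 100.0 * (1.0 + 2.0 * theta[0] - 1.5 * theta[1])
    return rng.normal(mean, mean / np.sqrt(n_mc))

nominal = predicted_count(np.zeros(2))

# Unisim: one MC run per systematic, each shifted by +1 sigma.
shifts = [predicted_count(SIGMA[i] * np.eye(2)[i]) - nominal for i in range(2)]
var_unisim = float(np.sum(np.square(shifts)))

# Multisim: every MC run draws all systematics from their assumed normal priors.
draws = [predicted_count(rng.normal(0.0, SIGMA)) for _ in range(200)]
var_multisim = float(np.var(draws, ddof=1))

var_exact = (100 * 2 * SIGMA[0]) ** 2 + (100 * 1.5 * SIGMA[1]) ** 2
print(f"exact {var_exact:.1f}   unisim {var_unisim:.1f}   multisim {var_multisim:.1f}")
```

    As the abstract notes, which estimate is more reliable depends on how the MC statistical error of a single run compares with the size of the individual systematic shifts.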

  3. Derivation of the complete Gutenberg-Richter magnitude-frequency relation using the principle of scale invariance

    SciTech Connect

    Rundle, J.B. )

    1989-09-10

    The purpose of this paper is to show that the various observational parameters characterizing the statistical properties of earthquakes can be related to each other. The fundamental postulate which is used to obtain quantitative results is the idea that the physics of earthquake occurrence scales as a power law, similar to properties one often sees in critical phenomena. When the physics of earthquake occurrence is exactly scale invariant, b = 1, and it can be shown as a consequence that earthquakes in any magnitude band Δm cover the same area in unit time. This result therefore implies the existence of a "universal" covering interval τ_T, which is here called the "cycle interval." Using this idea, the complete Gutenberg-Richter relation is derived in terms of the fault area S_T, which is available to events of any given size, the average stress drop Δσ_T for events occurring on S_T, the interval τ_T for events of stress drop Δσ_T to cover an area S_T, and the scaling exponent α, which is proportional to the b value. Observationally, the average recurrence time interval for great earthquakes, or perhaps equivalently the recurrence interval for characteristic earthquakes on a fault segment, is a measure of the cycle interval τ_T. The exponent α may depend on time, but scale invariance (self-similarity) demands that α = 1. It is shown in the appendix that the A value in the Gutenberg-Richter relation can be written in terms of S_T, τ_T, Δσ_T, and the parameter α. The b value is either 1 or 1.5 (depending on the geometry of the fault zone) multiplied by α. © American Geophysical Union 1989
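
    For reference, the frequency-magnitude relation whose complete form is derived here is the standard Gutenberg-Richter law; in the notation of the abstract, with N(≥ m) the rate of events of magnitude at least m,

```latex
\log_{10} N(\geq m) = A - b\,m ,
```

    where the paper expresses A in terms of S_T, τ_T, Δσ_T and α, and b is proportional to α.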

  4. A novel concept to derive iodine status of human populations from frequency distribution properties of a hair iodine concentration.

    PubMed

    Prejac, J; Višnjević, V; Drmić, S; Skalny, A A; Mimica, N; Momčilović, B

    2014-04-01

    Today, human iodine deficiency is, next to iron deficiency, the most common nutritional deficiency in developed European countries and underdeveloped third-world countries, respectively. A current biological indicator of iodine status is urinary iodine, which reflects only very recent iodine exposure, whereas a long-term indicator of iodine status remains to be identified. We analyzed hair iodine in a prospective, observational, cross-sectional, and exploratory study involving 870 apparently healthy Croatians (270 men and 600 women). Hair iodine was analyzed with inductively coupled plasma mass spectrometry (ICP-MS). The population (n = 870) hair iodine (IH) median was 0.499 μg g⁻¹ (0.482 and 0.508 μg g⁻¹ for men and women, respectively), suggesting no sex-related difference. We studied the hair iodine uptake by fitting a logistic sigmoid saturation curve to the median derivatives to assess iodine deficiency, adequacy and excess. We estimated overt iodine deficiency to occur when the hair iodine concentration is below 0.15 μg g⁻¹. This is followed by a saturation range interval of about 0.15-2.0 μg g⁻¹ (r² = 0.994). Eventually, the sigmoid curve becomes saturated at about 2.0 μg g⁻¹ and upward, suggesting excessive iodine exposure. Hair appears to be a valuable and robust long-term biological indicator tissue for assessing iodine body status. We propose that adequate iodine status corresponds to a hair iodine (IH) uptake saturation of 0.565-0.739 μg g⁻¹ (55-65%). PMID:24629671

  5. Unified Analysis for Antenna Pointing and Structural Errors. Part 1. Review

    NASA Technical Reports Server (NTRS)

    Abichandani, K.

    1983-01-01

    A necessary step in the design of a high accuracy microwave antenna system is to establish the signal error budget due to structural, pointing, and environmental parameters. A unified approach in performing error budget analysis as applicable to ground-based microwave antennas of different size and operating frequency is discussed. Major error sources contributing to the resultant deviation in antenna boresighting in pointing and tracking modes and the derivation of the governing equations are presented. Two computer programs (SAMCON and EBAP) were developed in-house, including the antenna servo-control program, as valuable tools in the error budget determination. A list of possible errors giving their relative contributions and levels is presented.

  6. Operational Interventions to Maintenance Error

    NASA Technical Reports Server (NTRS)

    Kanki, Barbara G.; Walter, Diane; Dulchinos, VIcki

    1997-01-01

    A significant proportion of aviation accidents and incidents are known to be tied to human error. However, research of flight operational errors has shown that so-called pilot error often involves a variety of human factors issues and not a simple lack of individual technical skills. In aircraft maintenance operations, there is similar concern that maintenance errors which may lead to incidents and accidents are related to a large variety of human factors issues. Although maintenance error data and research are limited, industry initiatives involving human factors training in maintenance have become increasingly accepted as one type of maintenance error intervention. Conscientious efforts have been made in re-inventing the 'team' concept for maintenance operations and in tailoring programs to fit the needs of technical operations. Nevertheless, there remains a dual challenge: 1) to develop human factors interventions which are directly supported by reliable human error data, and 2) to integrate human factors concepts into the procedures and practices of everyday technical tasks. In this paper, we describe several varieties of human factors interventions and focus on two specific alternatives which target problems related to procedures and practices; namely, 1) structured on-the-job training and 2) procedure re-design. We hope to demonstrate that the key to leveraging the impact of these solutions comes from focused interventions; that is, interventions which are derived from a clear understanding of specific maintenance errors, their operational context and human factors components.

  7. Constraining frequency-magnitude-area relationships for precipitation and flood discharges using radar-derived precipitation estimates: example applications in the Upper and Lower Colorado River Basins, USA

    NASA Astrophysics Data System (ADS)

    Orem, C. A.; Pelletier, J. D.

    2015-11-01

    Flood-envelope curves (FEC) are useful for constraining the upper limit of possible flood discharges within drainage basins in a particular hydroclimatic region. Their usefulness, however, is limited by their lack of a well-defined recurrence interval. In this study we use radar-derived precipitation estimates to develop an alternative to the FEC method, i.e. the frequency-magnitude-area-curve (FMAC) method, that incorporates recurrence intervals. The FMAC method is demonstrated in two well-studied U.S. drainage basins, i.e. the Upper and Lower Colorado River basins (UCRB and LCRB, respectively), using Stage III Next-Generation-Radar (NEXRAD) gridded products and the diffusion-wave flow-routing algorithm. The FMAC method can be applied worldwide using any radar-derived precipitation estimates. In the FMAC method, idealized basins of similar contributing area are grouped together for frequency-magnitude analysis of precipitation intensity. These data are then routed through the idealized drainage basins of different contributing areas, using contributing-area-specific estimates for channel slope and channel width. Our results show that FMACs of precipitation discharge are power-law functions of contributing area with an average exponent of 0.79 ± 0.07 for recurrence intervals from 10 to 500 years. We compare our FMACs to published FECs and find that for wet antecedent-moisture conditions, the 500-year FMAC of flood discharge in the UCRB is on par with the US FEC for contributing areas of ~ 102 to 103 km2. FMACs of flood discharge for the LCRB exceed the published FEC for the LCRB for contributing areas in the range of ~ 102 to 104 km2. The FMAC method retains the power of the FEC method for constraining flood hazards in basins that are ungauged or have short flood records, yet it has the added advantage that it includes recurrence interval information necessary for estimating event probabilities.
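
    The headline result is a power law Q_T(A) = c A^θ with θ ≈ 0.79 for the T-year discharge as a function of contributing area. The standard way to extract such an exponent is a least-squares fit in log-log space, sketched below on synthetic (area, discharge) pairs rather than the NEXRAD-derived values.

```python
import numpy as np

rng = np.random.default_rng(11)

# Synthetic (contributing area [km^2], 100-year discharge [m^3/s]) pairs.
area = np.logspace(1, 4, 40)                          # 10 to 10,000 km^2
discharge = 12.0 * area ** 0.79 * rng.lognormal(0.0, 0.15, area.size)

# Fit log10(Q) = log10(c) + theta * log10(A) by least squares.
theta, log_c = np.polyfit(np.log10(area), np.log10(discharge), 1)
print(f"fitted exponent theta = {theta:.2f}, prefactor c = {10 ** log_c:.1f}")
```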

  8. Sun compass error model

    NASA Technical Reports Server (NTRS)

    Blucker, T. J.; Ferry, W. W.

    1971-01-01

    An error model is described for the Apollo 15 sun compass, a contingency navigational device. Field test data are presented along with significant results of the test. The errors reported include a random error resulting from tilt in leveling the sun compass, a random error because of observer sighting inaccuracies, a bias error because of mean tilt in compass leveling, a bias error in the sun compass itself, and a bias error because the device is leveled to the local terrain slope.

  9. Study of geopotential error models used in orbit determination error analysis

    NASA Technical Reports Server (NTRS)

    Yee, C.; Kelbel, D.; Lee, T.; Samii, M. V.; Mistretta, G. D.; Hart, R. C.

    1991-01-01

    The uncertainty in the geopotential model is currently one of the major error sources in the orbit determination of low-altitude Earth-orbiting spacecraft. The results of an investigation of different geopotential error models and modeling approaches currently used for operational orbit error analysis support at the Goddard Space Flight Center (GSFC) are presented, with emphasis placed on sequential orbit error analysis using a Kalman filtering algorithm. Several geopotential models, known as the Goddard Earth Models (GEMs), were developed and used at GSFC for orbit determination. The errors in the geopotential models arise from the truncation errors that result from the omission of higher order terms (omission errors) and the errors in the spherical harmonic coefficients themselves (commission errors). At GSFC, two error modeling approaches were operationally used to analyze the effects of geopotential uncertainties on the accuracy of spacecraft orbit determination - the lumped error modeling and uncorrelated error modeling. The lumped error modeling approach computes the orbit determination errors on the basis of either the calibrated standard deviations of a geopotential model's coefficients or the weighted difference between two independently derived geopotential models. The uncorrelated error modeling approach treats the errors in the individual spherical harmonic components as uncorrelated error sources and computes the aggregate effect using a combination of individual coefficient effects. This study assesses the reasonableness of the two error modeling approaches in terms of global error distribution characteristics and orbit error analysis results. Specifically, this study presents the global distribution of geopotential acceleration errors for several gravity error models and assesses the orbit determination errors resulting from these error models for three types of spacecraft - the Gamma Ray Observatory, the Ocean Topography Experiment, and the Cosmic

  10. Pre-Vaccination Frequencies of Th17 Cells Correlate with Vaccine-Induced T-Cell Responses to Survivin-Derived Peptide Epitopes

    PubMed Central

    Køllgaard, Tania; Ugurel-Becker, Selma; Idorn, Manja; Andersen, Mads Hald

    2015-01-01

    Various subsets of immune regulatory cells are suggested to influence the outcome of therapeutic antigen-specific anti-tumor vaccinations. We performed an exploratory analysis of a possible correlation of pre-vaccination Th17 cells, MDSCs, and Tregs with both vaccination-induced T-cell responses as well as clinical outcome in metastatic melanoma patients vaccinated with survivin-derived peptides. Notably, we observed dysfunctional Th1 and cytotoxic T cells, i.e. down-regulation of the CD3ζ chain (p=0.001) and an impaired IFNγ-production (p=0.001) in patients compared to healthy donors, suggesting an altered activity of immune regulatory cells. Moreover, the frequencies of Th17 cells (p=0.03) and Tregs (p=0.02) were elevated as compared to healthy donors. IL-17-secreting CD4+ T cells displayed an impact on the immunological and clinical effects of vaccination: Patients characterized by high frequencies of Th17 cells at pre-vaccination were more likely to develop survivin-specific T-cell reactivity post-vaccination (p=0.03). Furthermore, the frequency of Th17 (p=0.09) and Th17/IFNγ+ (p=0.19) cells associated with patient survival after vaccination. In summary, our explorative, hypothesis-generating study demonstrated that immune regulatory cells, in particular Th17 cells, play a relevant role for generation of the vaccine-induced anti-tumor immunity in cancer patients, hence warranting further investigation to test for validity as predictive biomarkers. PMID:26176858

  11. Impact of harmonics on the interpolated DFT frequency estimator

    NASA Astrophysics Data System (ADS)

    Belega, Daniel; Petri, Dario; Dallet, Dominique

    2016-01-01

    The paper investigates the effect of the interference due to spectral leakage on the frequency estimates returned by the Interpolated Discrete Fourier Transform (IpDFT) method based on the Maximum Sidelobe Decay (MSD) windows when harmonically distorted sine-waves are analyzed. The expressions for the frequency estimation error due to both the image of the fundamental tone and harmonics, and the frequency estimator variance due to the combined effect of both the above disturbances and wideband noise are derived. The achieved expressions allow us to identify which harmonics significantly contribute to frequency estimation uncertainty. A new IpDFT-based procedure capable to compensate all the significant effects of harmonics on the frequency estimation accuracy is then proposed. The derived theoretical results are verified through computer simulations. Moreover, the accuracy of the proposed procedure is compared with those of other state-of-the-art frequency estimation methods by means of both computer simulations and experimental results.
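
    As a concrete baseline for the method being analyzed, the classical two-point IpDFT with a Hann window (the simplest MSD window) estimates the fractional bin offset as δ = (2α − 1)/(α + 1), with α the ratio of the two largest DFT magnitudes (Grandke's formula). The sketch below implements that estimator only; it does not reproduce the harmonic-compensation procedure proposed in the paper, and the sampling rate, tone, and harmonic level are arbitrary.

```python
import numpy as np

def ipdft_hann(x, fs):
    """Two-point interpolated DFT frequency estimate using a Hann (MSD) window."""
    n = x.size
    X = np.abs(np.fft.rfft(x * np.hanning(n)))
    k = int(np.argmax(X[1:-1])) + 1                  # largest interior bin
    if X[k + 1] >= X[k - 1]:                         # tone lies above bin k
        alpha = X[k + 1] / X[k]
        delta = (2 * alpha - 1) / (alpha + 1)
    else:                                            # tone lies below bin k
        alpha = X[k - 1] / X[k]
        delta = -(2 * alpha - 1) / (alpha + 1)
    return (k + delta) * fs / n

fs, f0 = 10_000.0, 1234.56                           # assumed sampling and tone frequency
t = np.arange(4096) / fs
x = np.sin(2 * np.pi * f0 * t) + 0.2 * np.sin(2 * np.pi * 3 * f0 * t)  # add a 3rd harmonic
print(f"estimate {ipdft_hann(x, fs):.2f} Hz vs true {f0} Hz")
```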

  12. Wideband Doppler frequency shift measurement and direction ambiguity resolution using optical frequency shift and optical heterodyning.

    PubMed

    Lu, Bing; Pan, Wei; Zou, Xihua; Yan, Xianglei; Yan, Lianshan; Luo, Bin

    2015-05-15

    A photonic approach for both wideband Doppler frequency shift (DFS) measurement and direction ambiguity resolution is proposed and experimentally demonstrated. In the proposed approach, a light wave from a laser diode is split into two paths. In one path, the DFS information is converted into an optical sideband close to the optical carrier by using two cascaded electro-optic modulators, while in the other path, the optical carrier is up-shifted by a specific value (e.g., from several MHz to hundreds of MHz) using an optical-frequency shift module. Then the optical signals from the two paths are combined and detected by a low-speed photodetector (PD), generating a low-frequency electronic signal. Through a subtraction between the specific optical frequency shift and the measured frequency of the low-frequency signal, the value of DFS is estimated from the derived absolute value, and the direction ambiguity is resolved from the derived sign (i.e., + or -). In the proof-of-concept experiments, DFSs from -90 to 90 kHz are successfully estimated for microwave signals at 10, 15, and 20 GHz, where the estimation errors are lower than ±60  Hz. The estimation errors can be further reduced via the use of a more stable optical frequency shift module. PMID:26393729

  13. Errors, error detection, error correction and hippocampal-region damage: data and theories.

    PubMed

    MacKay, Donald G; Johnson, Laura W

    2013-11-01

    This review and perspective article outlines 15 observational constraints on theories of errors, error detection, and error correction, and their relation to hippocampal-region (HR) damage. The core observations come from 10 studies with H.M., an amnesic with cerebellar and HR damage but virtually no neocortical damage. Three studies examined the detection of errors planted in visual scenes (e.g., a bird flying in a fish bowl in a school classroom) and sentences (e.g., I helped themselves to the birthday cake). In all three experiments, H.M. detected reliably fewer errors than carefully matched memory-normal controls. Other studies examined the detection and correction of self-produced errors, with controls for comprehension of the instructions, impaired visual acuity, temporal factors, motoric slowing, forgetting, excessive memory load, lack of motivation, and deficits in visual scanning or attention. In these studies, H.M. corrected reliably fewer errors than memory-normal and cerebellar controls, and his uncorrected errors in speech, object naming, and reading aloud exhibited two consistent features: omission and anomaly. For example, in sentence production tasks, H.M. omitted one or more words in uncorrected encoding errors that rendered his sentences anomalous (incoherent, incomplete, or ungrammatical) reliably more often than controls. Besides explaining these core findings, the theoretical principles discussed here explain H.M.'s retrograde amnesia for once familiar episodic and semantic information; his anterograde amnesia for novel information; his deficits in visual cognition, sentence comprehension, sentence production, sentence reading, and object naming; and effects of aging on his ability to read isolated low frequency words aloud. These theoretical principles also explain a wide range of other data on error detection and correction and generate new predictions for future test. PMID:23999403

  14. Assessment of Intensity-Duration-Frequency curves for the Eastern Mediterranean region derived from high-resolution satellite and radar rainfall estimates

    NASA Astrophysics Data System (ADS)

    Marra, Francesco; Morin, Efrat; Peleg, Nadav; Mei, Yiwen; Anagnostou, Emmanouil N.

    2016-04-01

    Intensity-duration-frequency (IDF) curves are used in flood risk management and hydrological design studies to relate the characteristics of a rainfall event to the probability of its occurrence. The usual approach relies on long records of raingauge data providing accurate estimates of the IDF curves for a specific location, but whose representativeness decreases with distance. Radar rainfall estimates have recently been tested over the Eastern Mediterranean area, characterized by steep climatological gradients, showing that radar IDF curves generally lay within the raingauge confidence interval and that radar is able to identify the climatology of extremes. Recent availability of relatively long records (>15 years) of high resolution satellite rainfall information allows to explore the spatial distribution of extreme rainfall with increased detail over wide areas, thus providing new perspectives for the study of precipitation regimes and promising both practical and theoretical implications. This study aims to (i) identify IDF curves obtained from radar rainfall estimates and (ii) identify and assess IDF curves obtained from two high resolution satellite retrieval algorithms (CMORPH and PERSIANN) over the Eastern Mediterranean region. To do so, we derive IDF curves fitting a GEV distribution to the annual maxima series from 23 years (1990-2013) of carefully corrected data from a C-Band radar located in Israel (covering Mediterranean to arid climates) as well as from 15 years (1998-2014) of gauge-adjusted high-resolution CMORPH and 10 years (2003-2013) of gauge-adjusted high-resolution PERSIANN data. We present the obtained IDF curves and we compare the curves obtained from the satellite algorithms to the ones obtained from the radar during overlapping periods; this analysis will draw conclusions on the reliability of the two satellite datasets for deriving rainfall frequency analysis over the region and provide IDF corrections. We compare then the curves obtained
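
    The curve-fitting step described above (a GEV fit to the annual-maximum intensities for each duration, read off at the desired return period) can be sketched with scipy; the 23-value record below is synthetic, standing in for one pixel's annual maxima at a single duration.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(5)

# Synthetic annual-maximum rainfall intensities (mm/h) for a 1-hour duration,
# playing the role of one radar or satellite pixel's 23-year record.
annual_max = genextreme.rvs(c=-0.1, loc=20.0, scale=6.0, size=23, random_state=rng)

params = genextreme.fit(annual_max)                  # (shape, loc, scale)
for T in (2, 10, 25, 50, 100):
    intensity = genextreme.isf(1.0 / T, *params)     # T-year return level
    print(f"{T:3d}-year, 1-h intensity ~ {intensity:5.1f} mm/h")
```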

  15. Error field penetration and locking to the backward propagating wave

    SciTech Connect

    Finn, John M.; Cole, Andrew J.; Brennan, Dylan P.

    2015-12-30

    In this letter we investigate error field penetration, or locking, behavior in plasmas having stable tearing modes with finite real frequencies wr in the plasma frame. In particular, we address the fact that locking can drive a significant equilibrium flow. We show that this occurs at a velocity slightly above v = wr/k, corresponding to the interaction with a backward propagating tearing mode in the plasma frame. Results are discussed for a few typical tearing mode regimes, including a new derivation showing that the existence of real frequencies occurs for viscoresistive tearing modes, in an analysis including the effects of pressure gradient, curvature and parallel dynamics. The general result of locking to a finite velocity flow is applicable to a wide range of tearing mode regimes, indeed any regime where real frequencies occur.

  16. Error field penetration and locking to the backward propagating wave

    DOE PAGESBeta

    Finn, John M.; Cole, Andrew J.; Brennan, Dylan P.

    2015-12-30

    In this letter we investigate error field penetration, or locking, behavior in plasmas having stable tearing modes with finite real frequencies wr in the plasma frame. In particular, we address the fact that locking can drive a significant equilibrium flow. We show that this occurs at a velocity slightly above v = wr/k, corresponding to the interaction with a backward propagating tearing mode in the plasma frame. Results are discussed for a few typical tearing mode regimes, including a new derivation showing that the existence of real frequencies occurs for viscoresistive tearing modes, in an analysis including the effects of pressure gradient, curvature and parallel dynamics. The general result of locking to a finite velocity flow is applicable to a wide range of tearing mode regimes, indeed any regime where real frequencies occur.

  17. Scattering of high-frequency P wavefield derived by dense Hi-net array observations in Japan and computer simulations of seismic wave propagations

    NASA Astrophysics Data System (ADS)

    Takemura, Shunsuke; Furumura, Takashi

    2013-04-01

    We studied the scattering properties of high-frequency seismic waves due to the distribution of small-scale velocity fluctuations in the crust and upper mantle beneath Japan, based on an analysis of three-component short-period seismograms and comparison with finite difference method (FDM) simulations of seismic wave propagation using various stochastic random velocity fluctuation models. Using a large number of waveform records from the dense High-Sensitivity Seismograph network (Hi-net) for 310 shallow crustal earthquakes, we examined the P-wave energy partition of the transverse component (PEPT), which is caused by scattering of the seismic wave in heterogeneous structure, as a function of frequency and hypocentral distance. At distances of less than D = 150 km, the PEPT increases with increasing frequency and is approximately constant from D = 50 to 150 km. The PEPT was found to increase suddenly at distances over D = 150 km and was larger in the high-frequency band (f > 4 Hz). Therefore, strong scattering of the P wave may occur along the propagation path (upper crust, lower crust and around the Moho discontinuity) of the P-wave first-arrival phase at distances larger than D = 150 km. We also found a regional difference in the PEPT, which is larger on the backarc side of northeastern Japan than in southwestern Japan and on the forearc side of northeastern Japan. These PEPT results, derived from shallow earthquakes, indicate that the shallow heterogeneity structure on the backarc side of northeastern Japan is stronger and more complex than in other areas. These hypotheses, that is, the depth dependence and regional variation of small-scale velocity fluctuations, are examined by 3-D FDM simulations using various heterogeneous structure models. By comparing the observed features of the PEPT with simulation results, we found that strong seismic wave scattering occurs in the lower crust due to relatively higher velocity and stronger heterogeneities
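
    The following short sketch illustrates one plausible way to compute a P-wave energy-partition measure like the PEPT discussed above, assuming it is defined as the transverse-component energy divided by the total three-component energy in a window around the P arrival; the paper's exact definition, windowing and band-passing may differ, and the arrays here are synthetic.

      import numpy as np

      def pept(radial, transverse, vertical):
          """Fraction of P-window energy carried by the transverse component (assumed definition)."""
          e_t = np.sum(transverse ** 2)
          e_total = np.sum(radial ** 2) + e_t + np.sum(vertical ** 2)
          return e_t / e_total

      # Synthetic example: mostly radial/vertical P energy with weak scattered transverse
      # motion gives a small PEPT; stronger scattering would raise it.
      rng = np.random.default_rng(0)
      r = rng.normal(0.0, 1.0, 500)
      z = rng.normal(0.0, 1.0, 500)
      t = rng.normal(0.0, 0.3, 500)   # weakly excited transverse component
      print(f"PEPT ~ {pept(r, t, z):.2f}")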

  18. Computing Instantaneous Frequency by normalizing Hilbert Transform

    NASA Technical Reports Server (NTRS)

    Huang, Norden E. (Inventor)

    2005-01-01

    This invention presents the Normalized Amplitude Hilbert Transform (NAHT) and the Normalized Hilbert Transform (NHT), both of which are new methods for computing Instantaneous Frequency. The method is designed specifically to circumvent the limitations set by the Bedrosian and Nuttall theorems, and to provide a sharp local measure of error when the quadrature and the Hilbert Transform do not agree. The motivation for this method is that straightforward application of the Hilbert Transform, followed by taking the derivative of the phase angle as the Instantaneous Frequency (IF), leads to a mistake that has been commonly made up to this date. In order to make the Hilbert Transform method work, the data have to obey certain restrictions.
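
    To make the idea concrete, the sketch below normalizes a signal's amplitude before taking the Hilbert-based instantaneous frequency, so that only the phase-modulation part is differentiated. The envelope here is simply the analytic-signal magnitude, a simplification of the spline-through-extrema normalization the patent describes, so this is an illustration of the principle rather than the patented NAHT/NHT procedure.

      import numpy as np
      from scipy.signal import hilbert

      fs = 1000.0
      t = np.arange(0.0, 1.0, 1.0 / fs)
      # AM-FM test signal: slow amplitude modulation on a 50 -> 80 Hz chirp
      x = (1.0 + 0.5 * np.cos(2 * np.pi * 2 * t)) * np.cos(2 * np.pi * (50 * t + 15 * t ** 2))

      envelope = np.abs(hilbert(x)) + 1e-12    # crude amplitude (envelope) estimate
      x_norm = x / envelope                    # approximately unit-amplitude carrier

      phase = np.unwrap(np.angle(hilbert(x_norm)))
      inst_freq = np.gradient(phase, 1.0 / fs) / (2 * np.pi)   # instantaneous frequency, Hz

      print(f"IF near t = 0.5 s: ~{inst_freq[len(t) // 2]:.1f} Hz (expected ~65 Hz for this chirp)")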

  19. Computing Instantaneous Frequency by normalizing Hilbert Transform

    DOEpatents

    Huang, Norden E.

    2005-05-31

    This invention presents the Normalized Amplitude Hilbert Transform (NAHT) and the Normalized Hilbert Transform (NHT), both of which are new methods for computing Instantaneous Frequency. The method is designed specifically to circumvent the limitations set by the Bedrosian and Nuttall theorems, and to provide a sharp local measure of error when the quadrature and the Hilbert Transform do not agree. The motivation for this method is that straightforward application of the Hilbert Transform, followed by taking the derivative of the phase angle as the Instantaneous Frequency (IF), leads to a mistake that has been commonly made up to this date. In order to make the Hilbert Transform method work, the data have to obey certain restrictions.

  20. Remediating Common Math Errors.

    ERIC Educational Resources Information Center

    Wagner, Rudolph F.

    1981-01-01

    Explanations and remediation suggestions are given for five types of mathematics errors due to either perceptual or cognitive difficulties. Error types include directionality problems, mirror writing, visually misperceived signs, diagnosed directionality problems, and mixed process errors. (CL)

  1. Quantifying uncertainty in morphologically-derived bedload transport rates for large braided rivers: insights from high-resolution, high-frequency digital elevation model differencing

    NASA Astrophysics Data System (ADS)

    Brasington, J.; Hicks, M.; Wheaton, J. M.; Williams, R. D.; Vericat, D.

    2013-12-01

    Repeat surveys of channel morphology provide a means to quantify fluvial sediment storage and enable inferences about changes in long-term sediment supply, watershed delivery and bed-level adjustment: information vital to support effective river and land management. Over shorter time-scales, direct differencing of fluvial terrain models may also offer a route to predicting reach-averaged sediment transport rates and quantifying the patterns of channel morphodynamics and the processes that force them. Recent and rapid advances in geomatics have facilitated these goals by enabling the acquisition of topographic data at spatial resolutions and precisions suitable for characterising river morphology at the scale of individual grains over multi-kilometre reaches. Despite improvements in topographic surveying, inverting the terms of the sediment budget to derive estimates of sediment transport and link these to morphodynamic processes is often confounded by limited knowledge of either the sediment supply or efflux across a boundary of the control volume, or by unobserved cut-and-fill taking place between surveys. This latter problem is particularly poorly constrained, as field logistics frequently preclude surveys at a temporal frequency sufficient to capture changes in sediment storage associated with each competent event, let alone changes during individual floods. In this paper, we attempt to quantify the principal sources of uncertainty in morphologically derived bedload transport rates for the large, labile, gravel-bed braided Rees River, which drains the Southern Alps of NZ. During the austral summer of 2009-10, a unique time series of 10 high-quality DEMs was derived for a 3 x 0.7 km reach of the Rees using a combination of mobile terrestrial laser scanning, aDcp soundings and aerial image analysis. Complementary measurements of the forcing flood discharges and estimates of event-based particle step lengths were also acquired during the field campaign
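
    The uncertainty handling implicit in such morphological budgeting can be sketched as follows: difference two DEMs, propagate the per-survey elevation errors into the DEM of Difference (DoD), discard changes below a minimum level of detection, and sum the remaining erosion and deposition volumes. The grids and error values below are illustrative and are not the Rees River data.

      import numpy as np

      cell_area = 1.0                       # m^2 per grid cell (assumed)
      sigma_old, sigma_new = 0.05, 0.04     # elevation standard errors of each survey (m), assumed

      rng = np.random.default_rng(1)
      dem_old = rng.normal(100.0, 0.5, (200, 200))
      dem_new = dem_old + rng.normal(0.0, 0.1, (200, 200))    # synthetic elevation change

      dod = dem_new - dem_old
      sigma_dod = np.sqrt(sigma_old ** 2 + sigma_new ** 2)    # propagated error of the difference
      lod = 1.96 * sigma_dod                                  # ~95% minimum level of detection

      significant = np.abs(dod) > lod
      deposition = dod[significant & (dod > 0)].sum() * cell_area
      erosion = -dod[significant & (dod < 0)].sum() * cell_area
      print(f"LoD = {lod:.3f} m, deposition ~ {deposition:.0f} m^3, erosion ~ {erosion:.0f} m^3")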

  2. Coding for frequency hopped spread spectrum satellite communications

    NASA Astrophysics Data System (ADS)

    Wang, Q.; Li, G.; Blake, I. F.; Bhargava, V. K.; Chen, Q.

    1992-04-01

    The performance of fast frequency hopped spread spectrum systems with M-ary frequency shift keying and error correction coding under jamming conditions is analyzed. Ratio threshold diversity combining is used. The decoding scheme is error-erasure decoding with metrics generated by the diversity combiner. The bit error probability of the system is computed, and the improvements offered by error correction coding are shown. The performance of several error correction codes is compared under different channel conditions. The notion of an arbitrarily varying channel (AVC) is discussed, including the capacities of AVCs and of a discrete memoryless channel, and two Gaussian AVC models are described. The coded performance of a slow frequency hopped differential phase shift keying (DPSK) system in the presence of both additive white Gaussian noise and tone jamming is studied. The error correlation due to DPSK demodulation and the effect of tone jamming are considered in evaluating the block and decoded error probabilities. The effect of interleaving on system performance is addressed. A nearly optimum code rate for a length-255 Reed-Solomon code is derived for systems employing interleaving. Finally, a parallel approach to the design of universal receivers for unknown and time-varying channels is applied to DPSK systems in the presence of noise and tone interference.
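
    The error-erasure decoding mentioned above rests on the standard bound that an (n, k) Reed-Solomon code corrects any pattern of e symbol errors and s erasures provided 2e + s <= n - k. The sketch below only checks this condition; the code rate shown is an assumed example, not the near-optimum rate derived in the record.

      def rs_correctable(n: int, k: int, errors: int, erasures: int) -> bool:
          """True if an (n, k) Reed-Solomon code can correct the given error/erasure pattern."""
          return 2 * errors + erasures <= n - k

      n, k = 255, 191                                         # length-255 RS code, assumed rate ~0.75
      print(rs_correctable(n, k, errors=20, erasures=20))     # 2*20 + 20 = 60 <= 64 -> True
      print(rs_correctable(n, k, errors=30, erasures=10))     # 2*30 + 10 = 70 >  64 -> False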

  3. Error and efficiency of simulated tempering simulations

    PubMed Central

    Rosta, Edina; Hummer, Gerhard

    2010-01-01

    We derive simple analytical expressions for the error and computational efficiency of simulated tempering (ST) simulations. The theory applies to the important case of systems whose dynamics at long times is dominated by the slow interconversion between two metastable states. An extension to the multistate case is described. We show that the relative gain in efficiency of ST simulations over regular molecular dynamics (MD) or Monte Carlo (MC) simulations is given by the ratio of their reactive fluxes, i.e., the number of transitions between the two states summed over all ST temperatures divided by the number of transitions at the single temperature of the MD or MC simulation. This relation for the efficiency is derived for the limit in which changes in the ST temperature are fast compared to the two-state transitions. In this limit, ST is most efficient. Our expression for the maximum efficiency gain of ST simulations is essentially identical to the corresponding expression derived by us for replica exchange MD and MC simulations [E. Rosta and G. Hummer, J. Chem. Phys. 131, 165102 (2009)] on a different route. We find quantitative agreement between predicted and observed efficiency gains in a test against ST and replica exchange MC simulations of a two-dimensional Ising model. Based on the efficiency formula, we provide recommendations for the optimal choice of ST simulation parameters, in particular, the range and number of temperatures, and the frequency of attempted temperature changes. PMID:20095723
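
    The efficiency relation stated above can be applied directly to simulation output: estimate the reactive flux of the ST run as the number of two-state transitions summed over all ST temperatures per unit time, and divide by the corresponding flux of the single-temperature run. The transition counts below are invented purely to illustrate the arithmetic.

      st_transitions_per_temp = [4, 9, 17, 28, 35]   # two-state transitions observed at each ST temperature
      t_st = 100.0                                   # total ST simulation time (arbitrary units)

      md_transitions = 6                             # transitions in the plain MD/MC run
      t_md = 100.0                                   # its simulation time

      flux_st = sum(st_transitions_per_temp) / t_st
      flux_md = md_transitions / t_md
      print(f"estimated ST efficiency gain ~ {flux_st / flux_md:.1f}x")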

  4. Error and efficiency of simulated tempering simulations.

    PubMed

    Rosta, Edina; Hummer, Gerhard

    2010-01-21

    We derive simple analytical expressions for the error and computational efficiency of simulated tempering (ST) simulations. The theory applies to the important case of systems whose dynamics at long times is dominated by the slow interconversion between two metastable states. An extension to the multistate case is described. We show that the relative gain in efficiency of ST simulations over regular molecular dynamics (MD) or Monte Carlo (MC) simulations is given by the ratio of their reactive fluxes, i.e., the number of transitions between the two states summed over all ST temperatures divided by the number of transitions at the single temperature of the MD or MC simulation. This relation for the efficiency is derived for the limit in which changes in the ST temperature are fast compared to the two-state transitions. In this limit, ST is most efficient. Our expression for the maximum efficiency gain of ST simulations is essentially identical to the corresponding expression derived by us for replica exchange MD and MC simulations [E. Rosta and G. Hummer, J. Chem. Phys. 131, 165102 (2009)] on a different route. We find quantitative agreement between predicted and observed efficiency gains in a test against ST and replica exchange MC simulations of a two-dimensional Ising model. Based on the efficiency formula, we provide recommendations for the optimal choice of ST simulation parameters, in particular, the range and number of temperatures, and the frequency of attempted temperature changes. PMID:20095723

  5. Error analysis of quartz crystal resonator applications

    SciTech Connect

    Lucklum, R.; Behling, C.; Hauptmann, P.; Cernosek, R.W.; Martin, S.J.

    1996-12-31

    Quartz crystal resonators in chemical sensing applications are usually configured as the frequency-determining element of an electrical oscillator. By contrast, determining the shear modulus of a polymer coating requires a complete impedance analysis. The first part of this contribution reports the error made if common approximations are used to relate the frequency shift to the sorbed mass. In the second part, the authors discuss different error sources in the procedure used to determine shear parameters.
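
    The "common approximation" for gravimetric sensing with a quartz crystal resonator is usually the Sauerbrey relation, Δf = -2 f0² Δm / (A √(ρq μq)), valid for a thin, rigid film. The sketch below simply inverts this textbook relation for the sorbed mass; it is not necessarily the exact set of approximations analyzed in the record.

      import math

      RHO_Q = 2.648e3    # quartz density, kg/m^3
      MU_Q = 2.947e10    # quartz shear modulus, Pa

      def sauerbrey_mass(delta_f_hz: float, f0_hz: float, area_m2: float) -> float:
          """Sorbed mass (kg) inferred from a frequency shift via the Sauerbrey relation."""
          return -delta_f_hz * area_m2 * math.sqrt(RHO_Q * MU_Q) / (2.0 * f0_hz ** 2)

      # Example: 5 MHz crystal, 1 cm^2 active area, -100 Hz shift -> roughly 1.8 micrograms
      m = sauerbrey_mass(-100.0, 5.0e6, 1.0e-4)
      print(f"sorbed mass ~ {m * 1e9:.0f} ng")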

  6. A Review of Errors in the Journal Abstract

    ERIC Educational Resources Information Center

    Lee, Eunpyo; Kim, Eun-Kyung

    2013-01-01

    (percentage) of abstracts that involved errors, the most erroneous part of the abstract, and the types and frequency of errors. The purpose was also expanded to compare the results with those of the previous…

  7. Impact of Measurement Error on Synchrophasor Applications

    SciTech Connect

    Liu, Yilu; Gracia, Jose R.; Ewing, Paul D.; Zhao, Jiecheng; Tan, Jin; Wu, Ling; Zhan, Lingwei

    2015-07-01

    Phasor measurement units (PMUs), which produce synchrophasor measurements, are powerful diagnostic tools that can help avert catastrophic failures in the power grid. Because of this, PMU measurement errors are particularly worrisome. This report examines the internal and external factors contributing to PMU phase angle and frequency measurement errors and gives a reasonable explanation for them. It also analyzes the impact of those measurement errors on several synchrophasor applications: event location detection, oscillation detection, islanding detection, and dynamic line rating. The primary finding is that dynamic line rating is the application most likely to be influenced by measurement error. Other findings include the possibility of reporting nonoscillatory activity as an oscillation as a result of error, the possibility of failing to detect oscillations submerged by error, and the unlikely impact of error on event location and islanding detection.
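
    A generic textbook calculation shows why even sub-degree phase-angle errors matter for synchrophasor applications: for a lossless line, P = V1 V2 sin(δ) / X, so a small angle error shifts the inferred flow by several percent. This is an illustration of the sensitivity only, not the analysis performed in the report.

      import math

      V1 = V2 = 1.0           # per-unit bus voltages (assumed)
      X = 0.1                 # per-unit line reactance (assumed)
      delta_true_deg = 10.0   # true angle difference across the line
      angle_error_deg = 0.5   # hypothetical PMU phase-angle error

      def line_flow(delta_deg: float) -> float:
          return V1 * V2 * math.sin(math.radians(delta_deg)) / X

      p_true = line_flow(delta_true_deg)
      p_meas = line_flow(delta_true_deg + angle_error_deg)
      print(f"flow error from a {angle_error_deg} deg angle error: {100 * (p_meas - p_true) / p_true:.1f}%")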

  8. Quantitative prediction of radio frequency induced local heating derived from measured magnetic field maps in magnetic resonance imaging: A phantom validation at 7 T

    SciTech Connect

    Zhang, Xiaotong; Liu, Jiaen; Van de Moortele, Pierre-Francois; Schmitter, Sebastian; He, Bin

    2014-12-15

    The Electrical Properties Tomography (EPT) technique utilizes measurable radio frequency (RF) coil induced magnetic fields (B1 fields) in a Magnetic Resonance Imaging (MRI) system to quantitatively reconstruct the local electrical properties (EP) of biological tissues. Information derived from the same data set, e.g., the complex B1 distribution used for electric field calculation, can be used to estimate the local Specific Absorption Rate (SAR) on a subject-specific basis. SAR plays a significant role in RF pulse design for high-field MRI applications, where maximum local tissue heating remains one of the most constraining limits. The purpose of the present work is to investigate the feasibility of such B1-based local SAR estimation, expanding on previously proposed EPT approaches. To this end, B1 calibration was obtained in a gelatin phantom at 7 T with a multi-channel transmit coil, under a particular multi-channel B1-shim setting (B1-shim I). Using this unique set of B1 calibration data, the local SAR distribution was subsequently predicted for B1-shim I, as well as for another B1-shim setting (B1-shim II), considering a specific set of parameters for a heating MRI protocol consisting of RF pulses played at a 1% duty cycle. Local SAR results, which cannot be measured directly with MRI, were subsequently converted into temperature changes, which in turn were validated against temperature changes measured by MRI thermometry based on the proton chemical shift.
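
    The two conversions implied above can be sketched, under strong simplifications, as SAR = σ|E|²/(2ρ) followed by a short-duration temperature rise ΔT ≈ SAR × duty cycle × t / c_p with conduction and perfusion neglected. The material values below are generic placeholders, not the gelatin-phantom parameters of the study.

      SIGMA = 0.6        # electrical conductivity, S/m (assumed)
      RHO = 1000.0       # density, kg/m^3 (assumed)
      C_P = 4000.0       # specific heat capacity, J/(kg*K) (assumed)
      E_PEAK = 300.0     # peak RF electric-field magnitude, V/m (assumed)

      sar_peak = SIGMA * E_PEAK ** 2 / (2.0 * RHO)   # W/kg during the RF pulse
      duty_cycle = 0.01                              # 1% duty cycle, as in the heating protocol
      t_heat = 600.0                                 # heating duration, s (assumed)

      delta_T = sar_peak * duty_cycle * t_heat / C_P
      print(f"peak SAR ~ {sar_peak:.0f} W/kg during the pulse, predicted dT ~ {delta_T:.2f} K over {t_heat / 60:.0f} min")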

  9. Temperature- and Frequency-Dependent Dielectric Properties of Sol-Gel-Derived BaTiO3-NaNbO3 Solid Solutions

    NASA Astrophysics Data System (ADS)

    Kwon, Do-Kyun; Goh, Yumin; Son, Dongsu; Kim, Baek-Hyun; Bae, Hyunjeong; Perini, Steve; Lanagan, Michael

    2016-01-01

    A sol-gel-derived powder synthesis method has been used to prepare BaTiO3-NaNbO3 (BT-NN) solid-solution ceramic samples with various compositions. Fine and homogeneous complex perovskite ceramics were obtained at lower processing temperatures than used in conventional solid-state processing. The ferroelectric and relaxor ferroelectric properties of the sol-gel-synthesized (1-x)BaTiO3-xNaNbO3 [(1-x)BT-xNN] ceramics were extensively studied over the wide composition range 0 < x ≤ 0.7. Structural and dielectric characterization results revealed that a low level of NN addition (x = 0.04) to BT is sufficient to cause a continuous relaxor-to-ferroelectric transition, and relaxor behavior was consistently observed at compositions with high NN content up to x = 0.7. A number of relaxor parameters, including the Curie temperature, Burns temperature, freezing temperature, γ, diffuseness parameter (δ), and activation energy, were determined from the temperature and frequency dependence of the real part of the dielectric permittivity for various BT-NN compositions using the Curie-Weiss law and the Vogel-Fulcher relationship. The systematic changes of these parameters with composition indicate that a continuous crossover between a BT-based relaxor and an NN-based relaxor occurs at a composition near x = 0.4.
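
    For the Vogel-Fulcher analysis mentioned above, the frequency dispersion of the permittivity-maximum temperature Tm(f) is fitted to f = f0 exp[-Ea / (kB (Tm - Tf))]. The sketch below fits synthetic Tm(f) values (generated from f0 = 1e10 Hz, Ea = 40 meV, Tf = 295 K) and is not the measured BT-NN data.

      import numpy as np
      from scipy.optimize import curve_fit

      KB = 8.617e-5   # Boltzmann constant, eV/K

      freqs = np.array([1e2, 1e3, 1e4, 1e5, 1e6])          # measurement frequencies (Hz)
      Tm = np.array([320.2, 323.8, 328.6, 335.3, 345.4])   # synthetic Tm(f) in K

      def ln_vogel_fulcher(T_m, ln_f0, Ea, Tf):
          # Fit in log-frequency for numerical stability: ln f = ln f0 - Ea / (kB (Tm - Tf))
          return ln_f0 - Ea / (KB * (T_m - Tf))

      p0 = [np.log(1e11), 0.05, 300.0]                      # initial guesses: ln f0, Ea (eV), Tf (K)
      bounds = ([0.0, 0.0, 250.0], [60.0, 1.0, 315.0])      # keep Tf below min(Tm) during the fit
      (ln_f0, Ea, Tf), _ = curve_fit(ln_vogel_fulcher, Tm, np.log(freqs), p0=p0, bounds=bounds)
      print(f"f0 ~ {np.exp(ln_f0):.1e} Hz, Ea ~ {Ea * 1e3:.0f} meV, Tf ~ {Tf:.0f} K")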

  10. Quantitative prediction of radio frequency induced local heating derived from measured magnetic field maps in magnetic resonance imaging: A phantom validation at 7 T

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaotong; Van de Moortele, Pierre-Francois; Liu, Jiaen; Schmitter, Sebastian; He, Bin

    2014-12-01

    The Electrical Properties Tomography (EPT) technique utilizes measurable radio frequency (RF) coil induced magnetic fields (B1 fields) in a Magnetic Resonance Imaging (MRI) system to quantitatively reconstruct the local electrical properties (EP) of biological tissues. Information derived from the same data set, e.g., the complex B1 distribution used for electric field calculation, can be used to estimate the local Specific Absorption Rate (SAR) on a subject-specific basis. SAR plays a significant role in RF pulse design for high-field MRI applications, where maximum local tissue heating remains one of the most constraining limits. The purpose of the present work is to investigate the feasibility of such B1-based local SAR estimation, expanding on previously proposed EPT approaches. To this end, B1 calibration was obtained in a gelatin phantom at 7 T with a multi-channel transmit coil, under a particular multi-channel B1-shim setting (B1-shim I). Using this unique set of B1 calibration data, the local SAR distribution was subsequently predicted for B1-shim I, as well as for another B1-shim setting (B1-shim II), considering a specific set of parameters for a heating MRI protocol consisting of RF pulses played at a 1% duty cycle. Local SAR results, which cannot be measured directly with MRI, were subsequently converted into temperature changes, which in turn were validated against temperature changes measured by MRI thermometry based on the proton chemical shift.