Sample records for frequency error estimator

  1. Modified fast frequency acquisition via adaptive least squares algorithm

    NASA Technical Reports Server (NTRS)

    Kumar, Rajendra (Inventor)

    1992-01-01

    A method and the associated apparatus for estimating the amplitude, frequency, and phase of a signal of interest are presented. The method comprises the following steps: (1) inputting the signal of interest; (2) generating a reference signal with adjustable amplitude, frequency and phase at an output thereof; (3) mixing the signal of interest with the reference signal and a signal 90 deg out of phase with the reference signal to provide a pair of quadrature sample signals comprising respectively a difference between the signal of interest and the reference signal and a difference between the signal of interest and the signal 90 deg out of phase with the reference signal; (4) using the pair of quadrature sample signals to compute estimates of the amplitude, frequency, and phase of an error signal comprising the difference between the signal of interest and the reference signal employing a least squares estimation; (5) adjusting the amplitude, frequency, and phase of the reference signal from the numerically controlled oscillator in a manner which drives the error signal towards zero; and (6) outputting the estimates of the amplitude, frequency, and phase of the error signal in combination with the reference signal to produce a best estimate of the amplitude, frequency, and phase of the signal of interest. The preferred method includes the step of providing the error signal as a real time confidence measure as to the accuracy of the estimates wherein the closer the error signal is to zero, the higher the probability that the estimates are accurate. A matrix in the estimation algorithm provides an estimate of the variance of the estimation error.
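
    A minimal numerical sketch of the quadrature-mixing and least-squares steps described above (Python, with illustrative signal and NCO values; the patent's exact recursion is not reproduced):

      import numpy as np

      fs = 1.0e4                                  # sample rate (Hz), assumed
      t = np.arange(2048) / fs
      sig = np.cos(2*np.pi*1234.0*t + 0.7)        # hypothetical signal of interest

      f_ref, phi_ref = 1230.0, 0.0                # current NCO frequency and phase
      i_mix = sig * np.cos(2*np.pi*f_ref*t + phi_ref)
      q_mix = sig * -np.sin(2*np.pi*f_ref*t + phi_ref)   # 90 deg shifted reference

      def lowpass(x, n=32):                       # crude block-average low-pass
          return x[:len(x)//n*n].reshape(-1, n).mean(axis=1)

      i_bb, q_bb, t_bb = lowpass(i_mix), lowpass(q_mix), lowpass(t)

      # least-squares fit of the residual phase: phase_err ~ dphi + 2*pi*df*t
      phase_err = np.unwrap(np.arctan2(q_bb, i_bb))
      A = np.column_stack([np.ones_like(t_bb), 2*np.pi*t_bb])
      (dphi, df), *_ = np.linalg.lstsq(A, phase_err, rcond=None)
      print(df, dphi)   # ~4 Hz and ~0.7 rad; fed back to drive the error to zero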

  2. Impact and quantification of the sources of error in DNA pooling designs.

    PubMed

    Jawaid, A; Sham, P

    2009-01-01

    The analysis of genome wide variation offers the possibility of unravelling the genes involved in the pathogenesis of disease. Genome wide association studies are also particularly useful for identifying and validating targets for therapeutic intervention as well as for detecting markers for drug efficacy and side effects. The cost of such large-scale genetic association studies may be reduced substantially by the analysis of pooled DNA from multiple individuals. However, experimental errors inherent in pooling studies lead to a potential increase in the false positive rate and a loss in power compared to individual genotyping. Here we quantify various sources of experimental error using empirical data from typical pooling experiments and corresponding individual genotyping counts using two statistical methods. We provide analytical formulas for calculating these different errors in the absence of complete information, such as replicate pool formation, and for adjusting for the errors in the statistical analysis. We demonstrate that DNA pooling has the potential of estimating allele frequencies accurately, and adjusting the pooled allele frequency estimates for differential allelic amplification considerably improves accuracy. Estimates of the components of error show that differential allelic amplification is the most important contributor to the error variance in absolute allele frequency estimation, followed by allele frequency measurement and pool formation errors. Our results emphasise the importance of minimising experimental errors and obtaining correct error estimates in genetic association studies.
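
    A small sketch of the differential-amplification correction idea (hypothetical peak-height values; the paper's estimators and variance formulas are more elaborate):

      import numpy as np

      # allele-1 and allele-2 signals measured in known heterozygotes, used to
      # estimate the amplification ratio k (k = 1 would mean no differential bias)
      het_h1 = np.array([0.55, 0.58, 0.54, 0.57])
      het_h2 = np.array([0.45, 0.42, 0.46, 0.43])
      k = np.mean(het_h1 / het_h2)

      pool_h1, pool_h2 = 132.0, 68.0               # raw signals from the pooled DNA
      p_raw = pool_h1 / (pool_h1 + pool_h2)        # uncorrected frequency estimate
      p_corr = pool_h1 / (pool_h1 + k * pool_h2)   # corrected for amplification bias
      print(p_raw, p_corr)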

  3. Rapid estimation of frequency response functions by close-range photogrammetry

    NASA Technical Reports Server (NTRS)

    Tripp, J. S.

    1985-01-01

    The accuracy of a rapid method which estimates the frequency response function from stereoscopic dynamic data is computed. It is shown that reversal of the order of the operations of coordinate transformation and Fourier transformation, which provides a significant increase in computational speed, introduces error. A portion of the error, proportional to the perturbation components normal to the camera focal planes, cannot be eliminated. The remaining error may be eliminated by proper scaling of frequency data prior to coordinate transformation. Methods are developed for least squares estimation of the full 3x3 frequency response matrix for a three dimensional structure.
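
    For reference, a generic cross-spectral (H1) frequency response estimate of the kind being computed (a sketch; the paper's contribution concerns where the photogrammetric coordinate transformation sits relative to the Fourier transformation, which is not modeled here):

      import numpy as np
      from scipy.signal import csd, welch, lfilter

      fs = 1000.0
      rng = np.random.default_rng(0)
      x = rng.standard_normal(20000)                       # excitation
      y = lfilter([0.1], [1, -1.6, 0.81], x) \
          + 0.01*rng.standard_normal(20000)                # toy structural response

      f, Sxy = csd(x, y, fs=fs, nperseg=1024)              # cross spectrum
      _, Sxx = welch(x, fs=fs, nperseg=1024)               # input auto spectrum
      H1 = Sxy / Sxx                                       # FRF estimate H1(f)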

  4. Estimate of higher order ionospheric errors in GNSS positioning

    NASA Astrophysics Data System (ADS)

    Hoque, M. Mainul; Jakowski, N.

    2008-10-01

    Precise navigation and positioning using GPS/GLONASS/Galileo require the ionospheric propagation errors to be accurately determined and corrected for. The current dual-frequency method of ionospheric correction ignores higher-order ionospheric errors, such as the second- and third-order ionospheric terms in the refractive index formula and errors due to bending of the signal. The total electron content (TEC) is assumed to be the same at the two GPS frequencies. All these assumptions lead to erroneous estimations and corrections of the ionospheric errors. In this paper a rigorous treatment of these problems is presented. Different approximation formulas have been proposed to correct errors due to excess path length in addition to the free-space path length, the TEC difference at the two GNSS frequencies, and the third-order ionospheric term. The GPS dual-frequency residual range errors can be corrected within millimeter-level accuracy using the proposed correction formulas.
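
    A worked first-order example showing the size of the terms involved (standard formulas; the paper's higher-order corrections are not reproduced here):

      # first-order ionospheric group delay: d1 = 40.3 * TEC / f^2
      # (d1 in meters for TEC in electrons/m^2 and f in Hz); the second- and
      # third-order terms the paper treats fall off as 1/f^3 and 1/f^4
      f1, f2 = 1575.42e6, 1227.60e6        # GPS L1 and L2 carrier frequencies
      TEC = 50e16                          # 50 TECU, a moderately high value

      d1_L1 = 40.3 * TEC / f1**2           # ~8.1 m at L1
      d1_L2 = 40.3 * TEC / f2**2           # ~13.4 m at L2
      print(d1_L1, d1_L2)

      # the ionosphere-free combination P_IF = (f1^2*P1 - f2^2*P2)/(f1^2 - f2^2)
      # cancels only this 1/f^2 term; the mm-to-cm residuals are what the
      # paper's approximation formulas address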

  5. A new method of hybrid frequency hopping signals selection and blind parameter estimation

    NASA Astrophysics Data System (ADS)

    Zeng, Xiaoyu; Jiao, Wencheng; Sun, Huixian

    2018-04-01

    Frequency hopping communication is widely used in military communications at home and abroad. In the case of single-channel reception, few methods can process multiple frequency hopping signals both effectively and simultaneously. A method of hybrid FH signal selection and blind parameter estimation is proposed. The method makes use of spectral transformation, spectral entropy calculation and the basic theory of PRI transformation to realize the sorting and parameter estimation of the components in the hybrid frequency hopping signal. The simulation results show that this method can correctly classify the frequency hopping component signals; the estimation error of the frequency hopping period is about 5% and the estimation error of the hop frequency is less than 1% when the SNR is 10 dB. However, the performance of this method deteriorates seriously at low SNR.
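
    A sketch of the spectral entropy measure such methods use to separate signal-bearing hop dwells from noise (illustrative only; the paper's full sorting pipeline also uses spectral and PRI transformations):

      import numpy as np

      def spectral_entropy(x):
          p = np.abs(np.fft.rfft(x))**2
          p = p / p.sum()                    # normalize to a probability mass
          p = p[p > 0]
          return -(p * np.log2(p)).sum()

      rng = np.random.default_rng(0)
      tone = np.cos(2*np.pi*0.1*np.arange(256))    # hop dwell: concentrated, low entropy
      noise = rng.standard_normal(256)             # noise only: flat, high entropy
      print(spectral_entropy(tone), spectral_entropy(noise))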

  6. Cross Time-Frequency Analysis for Combining Information of Several Sources: Application to Estimation of Spontaneous Respiratory Rate from Photoplethysmography

    PubMed Central

    Peláez-Coca, M. D.; Orini, M.; Lázaro, J.; Bailón, R.; Gil, E.

    2013-01-01

    A methodology that combines information from several nonstationary biological signals is presented. This methodology is based on time-frequency coherence, which quantifies the similarity of two signals in the time-frequency domain. A cross time-frequency analysis method, based on quadratic time-frequency distribution, has been used for combining information of several nonstationary biomedical signals. To evaluate this methodology, the respiratory rate is estimated from the photoplethysmographic (PPG) signal. Respiration provokes simultaneous changes in the pulse interval, amplitude, and width of the PPG signal, which suggests that combining information from these sources will improve the accuracy of the respiratory rate estimate. Another target of this paper is to implement an algorithm which provides a robust estimation. Therefore, the respiratory rate was estimated only in those intervals where the features extracted from the PPG signals are linearly coupled. In 38 spontaneously breathing subjects, among which 7 were characterized by a respiratory rate lower than 0.15 Hz, this methodology provided accurate estimates, with a median error of {0.00; 0.98} mHz ({0.00; 0.31}%) and an interquartile range error of {4.88; 6.59} mHz ({1.60; 1.92}%). The estimation error of the presented methodology was substantially lower than the estimation error obtained without combining the different respiration-related PPG features. PMID:24363777

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blaut, Arkadiusz

    We present the results of the estimation of parameters with LISA for nearly monochromatic gravitational waves in the low and high frequency regimes for the time-delay interferometry response. Angular resolution of the detector and the estimation errors of the signal's parameters in the high frequency regimes are calculated as functions of the position in the sky and as functions of the frequency. For the long-wavelength domain we give compact formulas for the estimation errors valid on a wide range of the parameter space.

  8. Multistage estimation of received carrier signal parameters under very high dynamic conditions of the receiver

    NASA Technical Reports Server (NTRS)

    Kumar, Rajendra (Inventor)

    1991-01-01

    A multistage estimator is provided for the parameters of a received carrier signal possibly phase-modulated by unknown data and experiencing very high Doppler, Doppler rate, etc., as may arise, for example, in the case of Global Positioning Systems (GPS), where the signal parameters are directly related to the position, velocity and jerk of the GPS ground-based receiver. In a two-stage embodiment of the more general multistage scheme, the first stage, a modified least squares algorithm referred to as differential least squares (DLS), operates as a coarse estimator: it provides relatively coarse estimates of the frequency and its derivatives, with higher rms estimation errors but a relatively small probability of the frequency estimation error exceeding one-half of the sampling frequency. The second stage of the estimator, an extended Kalman filter (EKF), operates on the error signal available from the first stage, refining the overall estimates of the phase and frequency and, in the process, reducing the number of cycle slips.
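
    A generic coarse-then-fine sketch of the two-stage idea (the patented DLS and EKF equations are not reproduced; this uses an ordinary least-squares phase fit followed by a linear Kalman filter on phase samples):

      import numpy as np

      fs = 1000.0
      t = np.arange(500) / fs
      rng = np.random.default_rng(0)
      phase = 2*np.pi*(50.0*t + 8.0*t**2)              # Doppler plus Doppler rate
      z = np.unwrap(np.angle(np.exp(1j*phase))) + 0.05*rng.standard_normal(t.size)

      # stage 1: coarse LS fit, phase ~ phi0 + 2*pi*f*t + pi*a*t^2
      A = np.column_stack([np.ones_like(t), 2*np.pi*t, np.pi*t**2])
      phi0, f_hat, a_hat = np.linalg.lstsq(A, z, rcond=None)[0]

      # stage 2: Kalman filter on [phase, frequency], tracking the residual motion
      x = np.array([phi0, f_hat]); P = np.diag([1.0, 1.0])
      F = np.array([[1.0, 2*np.pi/fs], [0.0, 1.0]])
      H = np.array([[1.0, 0.0]])
      Q = np.diag([1e-6, 1e-2]); R = 0.05**2
      for zk in z:
          x = F @ x; P = F @ P @ F.T + Q               # predict
          K = P @ H.T / (H @ P @ H.T + R)              # gain
          x = x + (K * (zk - H @ x)).ravel()           # update
          P = (np.eye(2) - K @ H) @ P
      print(f_hat, x[1])   # coarse frequency vs refined instantaneous frequency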

  9. Multistage estimation of received carrier signal parameters under very high dynamic conditions of the receiver

    NASA Technical Reports Server (NTRS)

    Kumar, Rajendra (Inventor)

    1990-01-01

    A multistage estimator is provided for the parameters of a received carrier signal possibly phase-modulated by unknown data and experiencing very high Doppler, Doppler rate, etc., as may arise, for example, in the case of Global Positioning Systems (GPS), where the signal parameters are directly related to the position, velocity and jerk of the GPS ground-based receiver. In a two-stage embodiment of the more general multistage scheme, the first stage, a modified least squares algorithm referred to as differential least squares (DLS), operates as a coarse estimator: it provides relatively coarse estimates of the frequency and its derivatives, with higher rms estimation errors but a relatively small probability of the frequency estimation error exceeding one-half of the sampling frequency. The second stage of the estimator, an extended Kalman filter (EKF), operates on the error signal available from the first stage, refining the overall estimates of the phase and frequency and, in the process, reducing the number of cycle slips.

  10. Time synchronization of a frequency-hopped MFSK communication system

    NASA Technical Reports Server (NTRS)

    Simon, M. K.; Polydoros, A.; Huth, G. K.

    1981-01-01

    In a frequency-hopped (FH) multiple-frequency-shift-keyed (MFSK) communication system, the frequency transitions needed for time synchronization estimation are produced by the frequency hopping itself rather than by the data sequence, as in a conventional (non-frequency-hopped) system. Making use of this observation, this paper presents a fine synchronization (i.e., time errors of less than a hop duration) technique for estimation of FH timing. The performance degradation due to imperfect FH time synchronization is found in terms of the effect on bit error probability as a function of full-band or partial-band noise jamming levels and of the number of hops used in the FH timing estimate.

  11. Pilot-Assisted Channel Estimation for Orthogonal Multi-Carrier DS-CDMA with Frequency-Domain Equalization

    NASA Astrophysics Data System (ADS)

    Shima, Tomoyuki; Tomeba, Hiromichi; Adachi, Fumiyuki

    Orthogonal multi-carrier direct sequence code division multiple access (orthogonal MC DS-CDMA) is a combination of time-domain spreading and orthogonal frequency division multiplexing (OFDM). In orthogonal MC DS-CDMA, a frequency diversity gain can be obtained by applying frequency-domain equalization (FDE) based on the minimum mean square error (MMSE) criterion to a block of OFDM symbols, improving the bit error rate (BER) performance in a severely frequency-selective fading channel. FDE requires an accurate estimate of the channel gain, which can be obtained by removing the pilot modulation in the frequency domain. In this paper, we propose a pilot-assisted channel estimation scheme suitable for orthogonal MC DS-CDMA with FDE and evaluate, by computer simulation, the BER performance in a frequency-selective Rayleigh fading channel.
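
    The per-subcarrier MMSE-FDE weight has the standard form w_k = H_k* / (|H_k|^2 + sigma^2/Es); a minimal single-user sketch (unit symbol energy assumed, spreading not shown):

      import numpy as np

      rng = np.random.default_rng(1)
      N = 64
      H = (rng.standard_normal(N) + 1j*rng.standard_normal(N)) / np.sqrt(2)
      s = rng.choice([1+1j, 1-1j, -1+1j, -1-1j], N) / np.sqrt(2)   # QPSK block
      sigma2 = 0.1
      r = np.fft.ifft(H * np.fft.fft(s))            # circular channel
      r += np.sqrt(sigma2/2) * (rng.standard_normal(N) + 1j*rng.standard_normal(N))

      w = np.conj(H) / (np.abs(H)**2 + sigma2)      # MMSE-FDE weights (Es = 1)
      s_hat = np.fft.ifft(w * np.fft.fft(r))        # equalized block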

  12. Methods for estimating magnitude and frequency of peak flows for natural streams in Utah

    USGS Publications Warehouse

    Kenney, Terry A.; Wilkowske, Chris D.; Wright, Shane J.

    2007-01-01

    Estimates of the magnitude and frequency of peak streamflows are critical for the safe and cost-effective design of hydraulic structures and stream crossings, and for accurate delineation of flood plains. Engineers, planners, resource managers, and scientists need accurate estimates of peak-flow return frequencies for locations on streams with and without streamflow-gaging stations. The 2-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-year recurrence-interval flows were estimated for 344 unregulated U.S. Geological Survey streamflow-gaging stations in Utah and nearby in bordering states. These data, along with 23 basin and climatic characteristics computed for each station, were used to develop regional peak-flow frequency and magnitude regression equations for 7 geohydrologic regions of Utah. These regression equations can be used to estimate the magnitude and frequency of peak flows for natural streams in Utah within the presented range of predictor variables. Uncertainty, presented as the average standard error of prediction, was computed for each developed equation. Equations developed using data from more than 35 gaging stations had standard errors of prediction that ranged from 35 to 108 percent, and errors for equations developed using data from fewer than 35 gaging stations ranged from 50 to 357 percent.
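
    Such regional equations take a log-linear power form; a sketch with made-up coefficients (the report tabulates the actual values per region and recurrence interval):

      # hypothetical regional equation: Q_100 = a * A^b * P^c, with
      # A = drainage area (mi^2) and P = mean annual precipitation (in)
      a, b, c = 18.0, 0.85, 0.60        # illustrative values only
      A, P = 120.0, 22.0                # characteristics of an ungaged site
      Q100 = a * A**b * P**c            # 100-year peak-flow estimate (ft^3/s)
      print(round(Q100))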

  13. Do We Really Need Sinusoidal Surface Temperatures to Apply Heat Tracing Techniques to Estimate Streambed Fluid Fluxes?

    NASA Astrophysics Data System (ADS)

    Luce, C. H.; Tonina, D.; Applebee, R.; DeWeese, T.

    2017-12-01

    Two common refrains about using the one-dimensional advection diffusion equation to estimate fluid fluxes, thermal conductivity, or bed surface elevation from temperature time series in streambeds are that the solution assumes that 1) the surface boundary condition is a sine wave or nearly so, and 2) there is no gradient in mean temperature with depth. Concerns on these subjects are phrased in various ways, including non-stationarity in frequency, amplitude, or phase. Although the mathematical posing of the original solution to the problem might lead one to believe these constraints exist, the perception that they are a source of error is a fallacy. Here we re-derive the inverse solution of the 1-D advection-diffusion equation starting with an arbitrary surface boundary condition for temperature. In doing so, we demonstrate the frequency-independence of the solution, meaning any single frequency can be used in the frequency-domain solutions to estimate thermal diffusivity and 1-D fluid flux in streambeds, even if the forcing has multiple frequencies. This means that diurnal variations with asymmetric shapes, gradients in the mean temperature with depth, or `non-stationary' amplitude and frequency (or phase) do not actually represent violations of assumptions, and they should not cause errors in estimates when using one of the suite of existing solution methods derived based on a single frequency. Misattribution of errors to these issues constrains progress on solving real sources of error. Numerical and physical experiments are used to verify this conclusion and consider the utility of information at `non-standard' frequencies and multiple frequencies to augment the information derived from time series of temperature.

  14. Frequency of data extraction errors and methods to increase data extraction quality: a methodological review.

    PubMed

    Mathes, Tim; Klaßen, Pauline; Pieper, Dawid

    2017-11-28

    Our objective was to assess the frequency of data extraction errors and their potential impact on results in systematic reviews. Furthermore, we evaluated the effect of different extraction methods, reviewer characteristics and reviewer training on error rates and results. We performed a systematic review of methodological literature in PubMed, the Cochrane methodological registry, and by manual searches (12/2016). Studies were selected by two reviewers independently. Data were extracted in standardized tables by one reviewer and verified by a second. The analysis included six studies: four studies on extraction error frequency, one study comparing different reviewer extraction methods and two studies comparing different reviewer characteristics. We did not find a study on reviewer training. There was a high rate of extraction errors (up to 50%). Errors often had an influence on effect estimates. Different data extraction methods and reviewer characteristics had a moderate effect on extraction error rates and effect estimates. The evidence base for established standards of data extraction seems weak despite the high prevalence of extraction errors. More comparative studies are needed to gain deeper insights into the influence of different extraction methods.

  15. A frequency-domain estimator for use in adaptive control systems

    NASA Technical Reports Server (NTRS)

    Lamaire, Richard O.; Valavani, Lena; Athans, Michael; Stein, Gunter

    1991-01-01

    This paper presents a frequency-domain estimator that can identify both a parametrized nominal model of a plant as well as a frequency-domain bounding function on the modeling error associated with this nominal model. This estimator, which we call a robust estimator, can be used in conjunction with a robust control-law redesign algorithm to form a robust adaptive controller.

  16. RLS Channel Estimation with Adaptive Forgetting Factor for DS-CDMA Frequency-Domain Equalization

    NASA Astrophysics Data System (ADS)

    Kojima, Yohei; Tomeba, Hiromichi; Takeda, Kazuaki; Adachi, Fumiyuki

    Frequency-domain equalization (FDE) based on the minimum mean square error (MMSE) criterion can improve the downlink bit error rate (BER) performance of DS-CDMA beyond that possible with conventional rake combining in a frequency-selective fading channel. FDE requires accurate channel estimation. Recently, we proposed a pilot-assisted channel estimation (CE) based on the MMSE criterion. Using MMSE-CE, the channel estimation accuracy is almost insensitive to the pilot chip sequence, and a good BER performance is achieved. In this paper, we propose a channel estimation scheme using a one-tap recursive least squares (RLS) algorithm, where the forgetting factor is adapted to the changing channel condition by the least mean square (LMS) algorithm, for DS-CDMA with FDE. We evaluate the BER performance using RLS-CE with adaptive forgetting factor in a frequency-selective fast Rayleigh fading channel by computer simulation.
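
    A one-tap RLS channel tracker looks like the following sketch (fixed forgetting factor lam shown; the paper's contribution is adapting lam with an LMS rule, which is omitted here):

      import numpy as np

      rng = np.random.default_rng(2)
      n = 200
      h_true = np.exp(1j*2*np.pi*0.002*np.arange(n))     # slowly rotating gain
      x = np.ones(n, dtype=complex)                      # known pilot value
      y = h_true*x + 0.05*(rng.standard_normal(n) + 1j*rng.standard_normal(n))

      lam, h, P = 0.95, 0j, 1.0
      for k in range(n):
          g = P*np.conj(x[k]) / (lam + np.abs(x[k])**2 * P)   # RLS gain
          h = h + g*(y[k] - h*x[k])                           # channel update
          P = (P - g*x[k]*P) / lam
      print(abs(h - h_true[-1]))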

  17. 2-Step Maximum Likelihood Channel Estimation for Multicode DS-CDMA with Frequency-Domain Equalization

    NASA Astrophysics Data System (ADS)

    Kojima, Yohei; Takeda, Kazuaki; Adachi, Fumiyuki

    Frequency-domain equalization (FDE) based on the minimum mean square error (MMSE) criterion can provide better downlink bit error rate (BER) performance of direct sequence code division multiple access (DS-CDMA) than conventional rake combining in a frequency-selective fading channel. FDE requires accurate channel estimation. In this paper, we propose a new 2-step maximum likelihood channel estimation (MLCE) for DS-CDMA with FDE in a very slow frequency-selective fading environment. The 1st step uses the conventional pilot-assisted MMSE-CE and the 2nd step carries out the MLCE using decision feedback from the 1st step. The BER performance improvement achieved by 2-step MLCE over pilot-assisted MMSE-CE is confirmed by computer simulation.

  18. Tracking Architecture Based on Dual-Filter with State Feedback and Its Application in Ultra-Tight GPS/INS Integration

    PubMed Central

    Zhang, Xi; Miao, Lingjuan; Shao, Haijun

    2016-01-01

    If a Kalman filter (KF) is applied to Global Positioning System (GPS) baseband signal preprocessing, the estimates of signal phase and frequency can have low variance, even in highly dynamic situations. This paper presents a novel preprocessing scheme based on a dual-filter structure. Compared with the traditional model utilizing a single KF, this structure prevents carrier tracking from being subjected to code tracking errors. Meanwhile, as the loop filters are completely removed, state feedback values are adopted to generate the local carrier and code. Although the local carrier frequency fluctuates widely, the accuracy of Doppler shift estimation is improved. In the ultra-tight GPS/Inertial Navigation System (INS) integration, the carrier frequency derived from the external navigation information is not used directly as the local carrier frequency, which facilitates retaining the design principle of state feedback. However, under harsh conditions, the GPS outputs may still bear large errors which can corrupt the estimation of INS errors. Thus, an innovative integrated navigation filter is constructed by modeling the non-negligible errors in the estimated Doppler shifts, to ensure INS is properly calibrated. Finally, a field test and semi-physical simulation based on a telemetered missile trajectory validate the effectiveness of the methods proposed in this paper. PMID:27144570

  19. Tracking Architecture Based on Dual-Filter with State Feedback and Its Application in Ultra-Tight GPS/INS Integration.

    PubMed

    Zhang, Xi; Miao, Lingjuan; Shao, Haijun

    2016-05-02

    If a Kalman filter (KF) is applied to Global Positioning System (GPS) baseband signal preprocessing, the estimates of signal phase and frequency can have low variance, even in highly dynamic situations. This paper presents a novel preprocessing scheme based on a dual-filter structure. Compared with the traditional model utilizing a single KF, this structure prevents carrier tracking from being subjected to code tracking errors. Meanwhile, as the loop filters are completely removed, state feedback values are adopted to generate the local carrier and code. Although the local carrier frequency fluctuates widely, the accuracy of Doppler shift estimation is improved. In the ultra-tight GPS/Inertial Navigation System (INS) integration, the carrier frequency derived from the external navigation information is not used directly as the local carrier frequency, which facilitates retaining the design principle of state feedback. However, under harsh conditions, the GPS outputs may still bear large errors which can corrupt the estimation of INS errors. Thus, an innovative integrated navigation filter is constructed by modeling the non-negligible errors in the estimated Doppler shifts, to ensure INS is properly calibrated. Finally, a field test and semi-physical simulation based on a telemetered missile trajectory validate the effectiveness of the methods proposed in this paper.

  20. New GRACE-Derived Storage Change Estimates Using Empirical Mode Extraction

    NASA Astrophysics Data System (ADS)

    Aierken, A.; Lee, H.; Yu, H.; Ate, P.; Hossain, F.; Basnayake, S. B.; Jayasinghe, S.; Saah, D. S.; Shum, C. K.

    2017-12-01

    Estimated mass changes from GRACE spherical harmonic solutions have north/south stripes and east/west banded errors due to random noise and modeling errors. Low-pass filters like decorrelation and Gaussian smoothing are typically applied to reduce noise and errors. However, these filters introduce leakage errors that need to be addressed. GRACE mascon estimates (JPL and CSR mascon solutions) do not need decorrelation or Gaussian smoothing and offer larger signal magnitudes compared to the GRACE spherical harmonics (SH) filtered results. However, a recent study [Chen et al., JGR, 2017] demonstrated that both JPL and CSR mascon solutions also have leakage errors. We developed a new postprocessing method based on empirical mode decomposition to estimate mass change from GRACE SH solutions without decorrelation and Gaussian smoothing, the two main sources of leakage errors. We found that, without any postprocessing, the noise and errors in spherical harmonic solutions introduced very clear high-frequency components in the spatial domain. By removing these high-frequency components while preserving the overall pattern of the signal, we obtained better mass estimates with minimal leakage errors. The new global mass change estimates captured all the signals observed by GRACE without the stripe errors. Results were compared with traditional methods over the Tonle Sap Basin in Cambodia, Northwestern India, the Central Valley in California, and the Caspian Sea. Our results provide larger signal magnitudes which are in good agreement with the leakage-corrected (forward modeled) SH results.

  21. Optimized two-frequency phase-measuring-profilometry light-sensor temporal-noise sensitivity.

    PubMed

    Li, Jielin; Hassebrook, Laurence G; Guan, Chun

    2003-01-01

    Temporal frame-to-frame noise in multipattern structured light projection can significantly corrupt depth measurement repeatability. We present a rigorous stochastic analysis of phase-measuring-profilometry temporal noise as a function of the pattern parameters and the reconstruction coefficients. The analysis is used to optimize the two-frequency phase measurement technique. In phase-measuring profilometry, a sequence of phase-shifted sine-wave patterns is projected onto a surface. In two-frequency phase measurement, two sets of pattern sequences are used. The first, low-frequency set establishes a nonambiguous depth estimate, and the second, high-frequency set is unwrapped, based on the low-frequency estimate, to obtain an accurate depth estimate. If the second frequency is too low, then depth error is caused directly by temporal noise in the phase measurement. If the second frequency is too high, temporal noise triggers ambiguous unwrapping, resulting in depth measurement error. We present a solution for finding the second frequency, where intensity noise variance is at its minimum.
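
    The two-frequency unwrapping step, and the noise trade-off the paper optimizes, can be seen in a few lines (illustrative noise levels; the optimization itself is not reproduced):

      import numpy as np

      rng = np.random.default_rng(3)
      f_hi = 16                                     # second (high) frequency
      phi = 2*np.pi*rng.random(1000)                # true surface phase
      phi_lo = phi + 0.02*rng.standard_normal(1000)               # coarse, nonambiguous
      phi_hi = np.mod(f_hi*phi, 2*np.pi) + 0.02*rng.standard_normal(1000)

      order = np.round((f_hi*phi_lo - phi_hi) / (2*np.pi))        # fringe order
      phi_fine = (phi_hi + 2*np.pi*order) / f_hi                  # unwrapped estimate
      print(np.std(phi_lo - phi), np.std(phi_fine - phi))
      # raising f_hi shrinks the fine-phase noise but, past a point, noise in
      # `order` causes ambiguous unwrapping -- the error balance optimized above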

  22. An Enhanced Non-Coherent Pre-Filter Design for Tracking Error Estimation in GNSS Receivers.

    PubMed

    Luo, Zhibin; Ding, Jicheng; Zhao, Lin; Wu, Mouyan

    2017-11-18

    Tracking error estimation is of great importance in global navigation satellite system (GNSS) receivers. Any inaccurate estimation of tracking error will decrease the signal tracking ability of signal tracking loops and the accuracies of position fixing, velocity determination, and timing. Tracking error estimation can be done by a traditional discriminator or by a Kalman filter-based pre-filter. Pre-filters can be divided into two categories: coherent and non-coherent. This paper focuses on the performance improvements of the non-coherent pre-filter. Firstly, the signal characteristics of coherent and non-coherent integration, which are the basis of tracking error estimation, are analyzed in detail. After that, the probability distribution of the estimation noise of the four-quadrant arctangent (ATAN2) discriminator is derived according to the mathematical model of coherent integration. Secondly, the statistical property of the observation noise of the non-coherent pre-filter is studied through Monte Carlo simulation to set the observation noise variance matrix correctly. Thirdly, a simple fault detection and exclusion (FDE) structure is introduced to the non-coherent pre-filter design, and thus its effective working range for carrier phase error estimation extends from (-0.25 cycle, 0.25 cycle) to (-0.5 cycle, 0.5 cycle). Finally, the estimation accuracies of the discriminator, the coherent pre-filter, and the enhanced non-coherent pre-filter are evaluated comprehensively through a carefully designed experiment scenario. The pre-filter outperforms the traditional discriminator in estimation accuracy. In a highly dynamic scenario, the enhanced non-coherent pre-filter provides accuracy improvements of 41.6%, 46.4%, and 50.36% for carrier phase error, carrier frequency error, and code phase error estimation, respectively, when compared with the coherent pre-filter. The enhanced non-coherent pre-filter outperforms the coherent pre-filter in code phase error estimation when the carrier-to-noise density ratio is less than 28.8 dB-Hz, in carrier frequency error estimation when the carrier-to-noise density ratio is less than 20 dB-Hz, and in carrier phase error estimation when the carrier-to-noise density ratio belongs to (15, 23) dB-Hz ∪ (26, 50) dB-Hz.
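
    The ATAN2 discriminator whose noise distribution the paper derives operates on coherent I/Q integrator outputs; a Monte Carlo sketch (one common SNR normalization is assumed):

      import numpy as np

      cn0_dbhz, T = 35.0, 1e-3                 # C/N0 and coherent integration time
      snr = 10**(cn0_dbhz/10) * T              # post-integration SNR
      amp = np.sqrt(2*snr)
      true_err = 0.2                           # true carrier phase error (rad)

      rng = np.random.default_rng(4)
      I = amp*np.cos(true_err) + rng.standard_normal(100000)
      Q = amp*np.sin(true_err) + rng.standard_normal(100000)
      est = np.arctan2(Q, I)                   # ATAN2 discriminator output
      print(est.mean(), est.std())             # noise grows as C/N0 or T shrink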

  23. An Enhanced Non-Coherent Pre-Filter Design for Tracking Error Estimation in GNSS Receivers

    PubMed Central

    Luo, Zhibin; Ding, Jicheng; Zhao, Lin; Wu, Mouyan

    2017-01-01

    Tracking error estimation is of great importance in global navigation satellite system (GNSS) receivers. Any inaccurate estimation of tracking error will decrease the signal tracking ability of signal tracking loops and the accuracies of position fixing, velocity determination, and timing. Tracking error estimation can be done by a traditional discriminator or by a Kalman filter-based pre-filter. Pre-filters can be divided into two categories: coherent and non-coherent. This paper focuses on the performance improvements of the non-coherent pre-filter. Firstly, the signal characteristics of coherent and non-coherent integration, which are the basis of tracking error estimation, are analyzed in detail. After that, the probability distribution of the estimation noise of the four-quadrant arctangent (ATAN2) discriminator is derived according to the mathematical model of coherent integration. Secondly, the statistical property of the observation noise of the non-coherent pre-filter is studied through Monte Carlo simulation to set the observation noise variance matrix correctly. Thirdly, a simple fault detection and exclusion (FDE) structure is introduced to the non-coherent pre-filter design, and thus its effective working range for carrier phase error estimation extends from (-0.25 cycle, 0.25 cycle) to (-0.5 cycle, 0.5 cycle). Finally, the estimation accuracies of the discriminator, the coherent pre-filter, and the enhanced non-coherent pre-filter are evaluated comprehensively through a carefully designed experiment scenario. The pre-filter outperforms the traditional discriminator in estimation accuracy. In a highly dynamic scenario, the enhanced non-coherent pre-filter provides accuracy improvements of 41.6%, 46.4%, and 50.36% for carrier phase error, carrier frequency error, and code phase error estimation, respectively, when compared with the coherent pre-filter. The enhanced non-coherent pre-filter outperforms the coherent pre-filter in code phase error estimation when the carrier-to-noise density ratio is less than 28.8 dB-Hz, in carrier frequency error estimation when the carrier-to-noise density ratio is less than 20 dB-Hz, and in carrier phase error estimation when the carrier-to-noise density ratio belongs to (15, 23) dB-Hz ∪ (26, 50) dB-Hz. PMID:29156581

  24. Error analysis and new dual-cosine window for estimating the sensor frequency response function from the step response data

    NASA Astrophysics Data System (ADS)

    Yang, Shuang-Long; Liang, Li-Ping; Liu, Hou-De; Xu, Ke-Jun

    2018-03-01

    Aiming at reducing the estimation error of the sensor frequency response function (FRF) estimated by the commonly used window-based spectral estimation method, the error models of the interpolation and transient errors are derived in the form of non-parametric models. Accordingly, window effects on the errors are analyzed, revealing that the commonly used Hanning window leads to a smaller interpolation error, which can also be significantly reduced by the cubic spline interpolation method when estimating the FRF from step response data, and that a window with a smaller front-end value restrains more of the transient error. Thus, a new dual-cosine window with its non-zero discrete Fourier transform bins at -3, -1, 0, 1, and 3 is constructed for FRF estimation. Compared with the Hanning window, the new dual-cosine window has equivalent interpolation error suppression capability and better transient error suppression capability when estimating the FRF from the step response; specifically, it improves the asymptotic decay of the transient error from the O(N^-2) of the Hanning window method to O(N^-4) while increasing the uncertainty only slightly (about 0.4 dB). Then, one direction of a wind tunnel strain gauge balance, which is a high-order, lightly damped, non-minimum phase system, is employed as an example for verifying the new dual-cosine window-based spectral estimation method. The model simulation result shows that the new dual-cosine window method is better than the Hanning window method for FRF estimation and, compared with the Gans method and the LPM method, has the advantages of simple computation, less time consumption, and a short data requirement; the actual data calculation result of the balance FRF is consistent with the simulation result. Thus, the new dual-cosine window is effective and practical for FRF estimation.
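
    A window whose DFT is confined to bins {0, ±1, ±3} is a three-term cosine series; the sketch below uses placeholder coefficients (the paper's actual coefficients are not reproduced), chosen so that the front-end value w[0] is zero:

      import numpy as np

      def dual_cosine_window(N, a0, a1, a3):
          n = np.arange(N)
          return a0 - a1*np.cos(2*np.pi*n/N) - a3*np.cos(6*np.pi*n/N)

      w = dual_cosine_window(1024, 0.5, 0.45, 0.05)   # placeholder coefficients
      W = np.fft.fft(w)
      print(np.round(np.abs(W[:5]), 6))   # nonzero only at bins 0, 1, 3
      print(w[0])                         # small front-end value -> less transient error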

  25. On the appropriateness of applying chi-square distribution based confidence intervals to spectral estimates of helicopter flyover data

    NASA Technical Reports Server (NTRS)

    Rutledge, Charles K.

    1988-01-01

    The validity of applying chi-square based confidence intervals to far-field acoustic flyover spectral estimates was investigated. Simulated data, using a Kendall series and experimental acoustic data from the NASA/McDonnell Douglas 500E acoustics test, were analyzed. Statistical significance tests to determine the equality of distributions of the simulated and experimental data relative to theoretical chi-square distributions were performed. Bias and uncertainty errors associated with the spectral estimates were easily identified from the data sets. A model relating the uncertainty and bias errors to the estimates resulted, which aided in determining the appropriateness of the chi-square distribution based confidence intervals. Such confidence intervals were appropriate for nontonally associated frequencies of the experimental data but were inappropriate for tonally associated estimate distributions. The inappropriateness at the tonally associated frequencies was indicated by the presence of bias error and nonconformity of the distributions to the theoretical chi-square distribution. A technique for determining appropriate confidence intervals at the tonally associated frequencies was suggested.
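
    For reference, the chi-square interval in question (standard form for a spectral estimate with nu degrees of freedom, e.g. nu = 2K for an average of K independent periodogram segments):

      from scipy.stats import chi2

      P_hat, K, alpha = 1.0, 16, 0.05          # illustrative estimate and settings
      nu = 2*K
      lo = P_hat*nu / chi2.ppf(1 - alpha/2, nu)
      hi = P_hat*nu / chi2.ppf(alpha/2, nu)
      print(lo, hi)   # valid only where the estimate really is chi-square
                      # distributed, i.e. the nontonal frequencies found above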

  26. Magnitude error bounds for sampled-data frequency response obtained from the truncation of an infinite series, and compensator improvement program

    NASA Technical Reports Server (NTRS)

    Mitchell, J. R.

    1972-01-01

    The frequency response method of analyzing control system performance is discussed, and the difficulty of obtaining the sampled frequency response of the continuous system is considered. An upper-bound magnitude error equation is obtained which yields reasonable estimates of the actual error. Finalization of the compensator improvement program is also reported; the program was used to design compensators for the Saturn 5/S1-C dry workshop and the Saturn 5/S1-C Skylab.

  27. Frequency synchronization of a frequency-hopped MFSK communication system

    NASA Technical Reports Server (NTRS)

    Huth, G. K.; Polydoros, A.; Simon, M. K.

    1981-01-01

    This paper presents the performance of fine-frequency synchronization. The performance degradation due to imperfect frequency synchronization is found in terms of the effect on bit error probability as a function of full-band or partial-band noise jamming levels and of the number of frequency hops used in the estimator. The effect of imperfect fine-time synchronization is also included in the calculation of fine-frequency synchronization performance to obtain the overall performance degradation due to synchronization errors.

  28. LS Channel Estimation and Signal Separation for UHF RFID Tag Collision Recovery on the Physical Layer.

    PubMed

    Duan, Hanjun; Wu, Haifeng; Zeng, Yu; Chen, Yuebin

    2016-03-26

    In a passive ultra-high frequency (UHF) radio-frequency identification (RFID) system, tag collision is generally resolved on the medium access control (MAC) layer. However, some collided tag signals can be recovered on the physical (PHY) layer, enhancing the identification efficiency of the RFID system. For recovery on the PHY layer, channel estimation is a critical issue: good channel estimation will help to recover the collided signals. Existing channel estimators work well for two collided tags. When the number of collided tags is beyond two, however, the existing estimators have larger estimation errors. In this paper, we propose a novel channel estimator for the UHF RFID system. It adopts an orthogonal matrix based on the information of the preambles, which is known to the reader, and applies a minimum mean square error (MMSE) criterion to estimate the channels. From the estimated channels, we can accurately separate the collided signals and recover them. By means of numerical results, we show that the proposed estimator has lower estimation errors and higher separation efficiency than the existing estimators.
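
    The criterion amounts to a regularized LS solve against the known preamble matrix; a sketch with hypothetical dimensions (the paper's orthogonal preamble construction is not reproduced):

      import numpy as np

      rng = np.random.default_rng(5)
      n_tags, L = 3, 16
      P = np.sign(rng.standard_normal((L, n_tags)))       # known preambles
      h = rng.standard_normal(n_tags) + 1j*rng.standard_normal(n_tags)
      sigma2 = 0.01
      y = P @ h + np.sqrt(sigma2/2)*(rng.standard_normal(L)
                                     + 1j*rng.standard_normal(L))

      # MMSE estimate: h = (P^H P + sigma^2 I)^(-1) P^H y
      h_hat = np.linalg.solve(P.conj().T @ P + sigma2*np.eye(n_tags),
                              P.conj().T @ y)
      print(np.abs(h_hat - h))   # per-tag channel estimates used for separation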

  29. Circular Probable Error for Circular and Noncircular Gaussian Impacts

    DTIC Science & Technology

    2012-09-01

    Indexed text (a fragment of the report's MATLAB simulation code; elisions in the source are kept):

      ph(k) = mean(imp(:,1).^2 + imp(:,2).^2 <= CEP^2); % hit frequency on CEP (1M simulated impacts)
      end
      phit(j) = mean(ph...  % avg 100 hit frequencies to "incr n"
      end
      % GRAPHICS
      plot(i, phit, 'r-'); % error exponent versus Ph estimate

  30. The Significance of the Record Length in Flood Frequency Analysis

    NASA Astrophysics Data System (ADS)

    Senarath, S. U.

    2013-12-01

    Of all the potential natural hazards, flooding is the most costly in many regions of the world. For example, floods cause over a third of Europe's average annual catastrophe losses and affect about two thirds of the people impacted by natural catastrophes. Increased attention is being paid to determining flow estimates associated with pre-specified return periods so that flood-prone areas can be adequately protected against floods of particular magnitudes or return periods. Flood frequency analysis, which is conducted by fitting an appropriate probability density function to the observed annual maximum flow data, is frequently used for obtaining these flow estimates. Consequently, flood frequency analysis plays an integral role in determining the flood risk in flood-prone watersheds. A long annual maximum flow record is vital for obtaining accurate estimates of discharges associated with high return period flows. However, in many areas of the world, flood frequency analysis is conducted with limited flow data or short annual maximum flow records. These inevitably lead to flow estimates that are subject to error, especially in the case of high return period flow estimates. In this study, several statistical techniques are used to identify errors caused by short annual maximum flow records. The flow estimates used in the error analysis are obtained by fitting a log-Pearson III distribution to the flood time-series. These errors can then be used to better evaluate the return period flows in data-limited streams. The study findings, therefore, have important implications for hydrologists, water resources engineers and floodplain managers.
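
    The log-Pearson III fitting step, in sketch form (made-up annual maxima; real studies apply regional skew corrections the sketch omits):

      import numpy as np
      from scipy.stats import pearson3

      peaks = np.array([820., 1450., 990., 2300., 1100., 760., 1900., 1320.,
                        640., 2100., 880., 1550., 1240., 700., 1760.])
      logq = np.log10(peaks)
      skew, loc, scale = pearson3.fit(logq)
      q100 = 10**pearson3.ppf(1 - 1/100., skew, loc, scale)   # 100-year flow
      print(round(q100))
      # with 15 years of record the fitted skew, and hence q100, is highly
      # uncertain -- the record-length effect analyzed above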

  31. Estimation of the auto frequency response function at unexcited points using dummy masses

    NASA Astrophysics Data System (ADS)

    Hosoya, Naoki; Yaginuma, Shinji; Onodera, Hiroshi; Yoshimura, Takuya

    2015-02-01

    For structures with complex shapes and limited space, vibration tests that use an exciter or impact hammer for the excitation are difficult. Although measuring the auto frequency response function at an unexcited point may not be practical via a vibration test, it can be obtained by assuming that the inertia acting on a dummy mass is an external force on the target structure when a different excitation point is excited. We propose a method to estimate the auto frequency response functions at unexcited points by attaching a small mass (dummy mass) comparable to the accelerometer mass. The validity of the proposed method is demonstrated by comparing the auto frequency response functions estimated at unexcited points in a beam structure to those obtained from numerical simulations. We also consider random measurement errors by finite element analysis and vibration tests, but not bias errors. Additionally, the applicability of the proposed method is demonstrated by applying it to estimate the auto frequency response function of the lower arm in a car suspension.

  32. Spectrum-averaged Harmonic Path (SHAPA) algorithm for non-contact vital sign monitoring with ultra-wideband (UWB) radar.

    PubMed

    Van Nguyen; Javaid, Abdul Q; Weitnauer, Mary Ann

    2014-01-01

    We introduce the Spectrum-averaged Harmonic Path (SHAPA) algorithm for estimation of heart rate (HR) and respiration rate (RR) with Impulse Radio Ultrawideband (IR-UWB) radar. Periodic movement of the human torso caused by respiration and heart beat induces fundamental frequencies, and their harmonics, at the respiration and heart rates. IR-UWB enables capture of these spectral components, and frequency-domain processing enables a low-cost implementation. Most existing methods, which identify the fundamental component in either the frequency or the time domain to estimate the HR and/or RR, lead to significant error if the fundamental is distorted or cancelled by interference. The SHAPA algorithm (1) takes advantage of the HR harmonics, where there is less interference, and (2) exploits the information in previous spectra to achieve more reliable and robust estimation of the fundamental frequency in the spectrum under consideration. Example experimental results for HR estimation demonstrate how our algorithm eliminates errors caused by interference and produces 16% to 60% more valid estimates.
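
    The harmonic-path idea can be sketched as scoring each candidate rate by its spectrum averaged over the first few harmonics (illustrative code, not the authors'):

      import numpy as np

      def harmonic_score(spec, freqs, f0, n_harm=3):
          idx = [np.argmin(np.abs(freqs - k*f0)) for k in range(1, n_harm + 1)]
          return np.mean(spec[idx])

      fs = 100.0
      t = np.arange(3000) / fs
      x = np.cos(2*np.pi*1.2*t) + 0.8*np.cos(2*np.pi*2.4*t)   # 72 bpm + harmonic
      spec = np.abs(np.fft.rfft(x))
      freqs = np.fft.rfftfreq(t.size, 1/fs)
      cands = np.arange(0.8, 3.0, 0.02)
      hr = cands[np.argmax([harmonic_score(spec, freqs, f) for f in cands])]
      print(hr*60)   # interference at the fundamental alone cannot dominate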

  33. Repeat-aware modeling and correction of short read errors.

    PubMed

    Yang, Xiao; Aluru, Srinivas; Dorman, Karin S

    2011-02-15

    High-throughput short read sequencing is revolutionizing genomics and systems biology research by enabling cost-effective deep coverage sequencing of genomes and transcriptomes. Error detection and correction are crucial to many short read sequencing applications including de novo genome sequencing, genome resequencing, and digital gene expression analysis. Short read error detection is typically carried out by counting the observed frequencies of kmers in reads and validating those with frequencies exceeding a threshold. In the case of genomes with high repeat content, an erroneous kmer may be frequently observed if it has few nucleotide differences with valid kmers that occur multiple times in the genome. Error detection and correction have mostly been applied to genomes with low repeat content, and this remains a challenging problem for genomes with high repeat content. We develop a statistical model and a computational method for error detection and correction in the presence of genomic repeats. We propose a method to infer genomic frequencies of kmers from their observed frequencies by analyzing the misread relationships among observed kmers. We also propose a method to estimate the threshold useful for validating kmers whose estimated genomic frequency exceeds the threshold. We demonstrate that superior error detection is achieved using these methods. Furthermore, we break away from the common assumption of uniformly distributed errors within a read, and provide a framework to model position-dependent error occurrence frequencies common to many short read platforms. Lastly, we achieve better error correction in genomes with high repeat content. The software is implemented in C++ and is freely available under the GNU GPL3 license and the Boost Software V1.0 license at "http://aluru-sun.ece.iastate.edu/doku.php?id=redeem". We introduce a statistical framework to model sequencing errors in next-generation reads, which led to promising results in detecting and correcting errors for genomes with high repeat content.
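
    The baseline detection step being refined, counting k-mer frequencies and flagging rare ones, fits in a few lines (a sketch; the paper's model additionally handles repeats and position-dependent error rates):

      from collections import Counter

      def kmer_counts(reads, k):
          counts = Counter()
          for r in reads:
              for i in range(len(r) - k + 1):
                  counts[r[i:i+k]] += 1
          return counts

      reads = ["ACGTACGTGA", "ACGTACGTGA", "ACGTACCTGA"]   # third read has an error
      counts = kmer_counts(reads, 5)
      suspect = [kmer for kmer, c in counts.items() if c < 2]
      print(suspect)    # k-mers covering the miscalled base occur only once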

  34. Low Frequency Error Analysis and Calibration for High-Resolution Optical Satellite's Uncontrolled Geometric Positioning

    NASA Astrophysics Data System (ADS)

    Wang, Mi; Fang, Chengcheng; Yang, Bo; Cheng, Yufeng

    2016-06-01

    The low frequency error is a key factor affecting the uncontrolled geometric positioning accuracy of high-resolution optical imagery. To guarantee the geometric quality of the imagery, this paper presents an on-orbit calibration method for the low frequency error based on a geometric calibration field. Firstly, we introduce the overall flow of low frequency error on-orbit analysis and calibration, which includes optical axis angle variation detection for the star sensor, relative calibration among star sensors, multi-star sensor information fusion, and low frequency error model construction and verification. Secondly, we use the optical axis angle change detection method to analyze the law of low frequency error variation. Thirdly, we use relative calibration and information fusion among star sensors to realize datum unification and high-precision attitude output. Finally, we realize the low frequency error model construction and optimal estimation of the model parameters based on the DEM/DOM of the geometric calibration field. To evaluate the performance of the proposed calibration method, real data from a certain type of satellite is used. Test results demonstrate that the calibration model in this paper can well describe the law of the low frequency error variation. The uncontrolled geometric positioning accuracy of the high-resolution optical image in the WGS-84 coordinate system is obviously improved after the step-wise calibration.

  35. Real-Time Parameter Estimation in the Frequency Domain

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    2000-01-01

    A method for real-time estimation of parameters in a linear dynamic state-space model was developed and studied. The application is aircraft dynamic model parameter estimation from measured data in flight. Equation error in the frequency domain was used with a recursive Fourier transform for the real-time data analysis. Linear and nonlinear simulation examples and flight test data from the F-18 High Alpha Research Vehicle were used to demonstrate that the technique produces accurate model parameter estimates with appropriate error bounds. Parameter estimates converged in less than one cycle of the dominant dynamic mode, using no a priori information, with control surface inputs measured in flight during ordinary piloted maneuvers. The real-time parameter estimation method has low computational requirements and could be implemented aboard an aircraft in real time.
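
    The recursive Fourier transform that makes this real-time is a running update, one complex multiply-add per analysis frequency per sample (a sketch of the idea):

      import numpy as np

      omegas = 2*np.pi*np.array([0.5, 1.0, 2.0])    # analysis frequencies (rad/s)
      dt = 0.01
      X = np.zeros(omegas.size, dtype=complex)

      def rft_update(X, x_k, t_k):
          return X + x_k*np.exp(-1j*omegas*t_k)*dt  # running Fourier integral

      for k in range(1000):
          t_k = k*dt
          x_k = np.sin(1.0*t_k)                     # measured signal sample
          X = rft_update(X, x_k, t_k)
      print(np.abs(X))   # largest at the 1.0 rad/s analysis frequency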

  36. High-frequency signal and noise estimates of CSR GRACE RL04

    NASA Astrophysics Data System (ADS)

    Bonin, Jennifer A.; Bettadpur, Srinivas; Tapley, Byron D.

    2012-12-01

    A sliding window technique is used to create daily-sampled Gravity Recovery and Climate Experiment (GRACE) solutions with the same background processing as the official CSR RL04 monthly series. By estimating over shorter time spans, more frequent solutions are made using uncorrelated data, allowing for higher frequency resolution in addition to daily sampling. Using these data sets, high-frequency GRACE errors are computed using two different techniques: assuming the GRACE high-frequency signal in a quiet area of the ocean is the true error, and computing the variance of differences between multiple high-frequency GRACE series from different centers. While the signal-to-noise ratios prove to be sufficiently high for confidence at annual and lower frequencies, at frequencies above 3 cycles/year the signal-to-noise ratios in the large hydrological basins looked at here are near 1.0. Comparisons with the GLDAS hydrological model and high frequency GRACE series developed at other centers confirm CSR GRACE RL04's poor ability to accurately and reliably measure hydrological signal above 3-9 cycles/year, due to the low power of the large-scale hydrological signal typical at those frequencies compared to the GRACE errors.

  37. A channel estimation scheme for MIMO-OFDM systems

    NASA Astrophysics Data System (ADS)

    He, Chunlong; Tian, Chu; Li, Xingquan; Zhang, Ce; Zhang, Shiqi; Liu, Chaowen

    2017-08-01

    To resolve the tension between the performance of time-domain least squares (LS) channel estimation and its practical implementation complexity, a reduced-complexity pilot-based channel estimation method for multiple input multiple output-orthogonal frequency division multiplexing (MIMO-OFDM) is obtained. This approach transforms the MIMO-OFDM channel estimation problem into simple single input single output-orthogonal frequency division multiplexing (SISO-OFDM) channel estimation problems, so no large matrix pseudo-inverse is needed, which greatly reduces the complexity of the algorithm. Simulation results show that the bit error rate (BER) performance of the obtained method with time-orthogonal training sequences and the linear minimum mean square error (LMMSE) criterion is better than that of the time-domain LS estimator and achieves nearly optimal performance.
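
    With time-orthogonal training (each transmit antenna sounding the channel in its own pilot slot), the MIMO problem decouples into per-antenna, per-subcarrier SISO LS divisions with no matrix pseudo-inverse; a sketch with assumed dimensions:

      import numpy as np

      rng = np.random.default_rng(6)
      n_tx, n_rx, N = 2, 2, 64
      H = (rng.standard_normal((n_rx, n_tx, N))
           + 1j*rng.standard_normal((n_rx, n_tx, N))) / np.sqrt(2)
      X = np.exp(1j*2*np.pi*rng.random((n_tx, N)))      # known pilots per TX antenna

      H_hat = np.empty_like(H)
      for tx in range(n_tx):       # only antenna `tx` transmits in this slot
          Y = H[:, tx, :]*X[tx] + 0.03*(rng.standard_normal((n_rx, N))
                                        + 1j*rng.standard_normal((n_rx, N)))
          H_hat[:, tx, :] = Y / X[tx]                   # per-subcarrier SISO LS
      print(np.max(np.abs(H_hat - H)))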

  38. Direction Dependent Effects In Widefield Wideband Full Stokes Radio Imaging

    NASA Astrophysics Data System (ADS)

    Jagannathan, Preshanth; Bhatnagar, Sanjay; Rau, Urvashi; Taylor, Russ

    2015-01-01

    Synthesis imaging in radio astronomy is affected by instrumental and atmospheric effects which introduce direction-dependent gains. The antenna power pattern varies as a function of both time and frequency. The broadband, time-varying nature of the antenna power pattern, when not corrected, leads to gross errors in full-Stokes imaging and flux estimation. In this poster we explore the errors that arise in image deconvolution when the time and frequency dependence of the antenna power pattern is not accounted for. Simulations were conducted with the wideband full-Stokes power pattern of the Very Large Array (VLA) antennas to demonstrate the level of errors arising from direction-dependent gains. Our estimate is that these errors will be significant in wide-band full-polarization mosaic imaging as well, and algorithms to correct them will be crucial for many upcoming large-area surveys (e.g., VLASS).

  39. Time-Frequency Distribution of Seismocardiographic Signals: A Comparative Study

    PubMed Central

    Taebi, Amirtaha; Mansy, Hansen A.

    2017-01-01

    Accurate estimation of seismocardiographic (SCG) signal features can help successful signal characterization and classification in health and disease. This may lead to new methods for diagnosing and monitoring heart function. Time-frequency distributions (TFD) were often used to estimate the spectrotemporal signal features. In this study, the performance of different TFDs (e.g., short-time Fourier transform (STFT), polynomial chirplet transform (PCT), and continuous wavelet transform (CWT) with different mother functions) was assessed using simulated signals, and then utilized to analyze actual SCGs. The instantaneous frequency (IF) was determined from TFD and the error in estimating IF was calculated for simulated signals. Results suggested that the lowest IF error depended on the TFD and the test signal. STFT had lower error than CWT methods for most test signals. For a simulated SCG, Morlet CWT more accurately estimated IF than other CWTs, but Morlet did not provide noticeable advantages over STFT or PCT. PCT had the most consistently accurate IF estimations and appeared more suited for estimating IF of actual SCG signals. PCT analysis showed that actual SCGs from eight healthy subjects had multiple spectral peaks at 9.20 ± 0.48, 25.84 ± 0.77, 50.71 ± 1.83 Hz (mean ± SEM). These may prove useful features for SCG characterization and classification. PMID:28952511
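
    The IF extraction common to these comparisons is the ridge of the time-frequency distribution; an STFT-based sketch (the paper's PCT and CWT variants differ in the underlying TFD, not in the ridge step):

      import numpy as np
      from scipy.signal import stft

      fs = 1000.0
      t = np.arange(2000) / fs
      x = np.cos(2*np.pi*(50*t + 30*t**2))          # chirp: IF = 50 + 60*t Hz
      f, tt, Z = stft(x, fs=fs, nperseg=256)
      if_est = f[np.argmax(np.abs(Z), axis=0)]      # TFD ridge
      if_true = 50 + 60*tt
      print(np.mean(np.abs(if_est - if_true)[2:-2]))   # IF error, edges excluded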

  40. An analysis of input errors in precipitation-runoff models using regression with errors in the independent variables

    USGS Publications Warehouse

    Troutman, Brent M.

    1982-01-01

    Errors in runoff prediction caused by input data errors are analyzed by treating precipitation-runoff models as regression (conditional expectation) models. The independent variables of the regression consist of precipitation and other input measurements; the dependent variable is runoff. In models using erroneous input data, prediction errors are inflated and estimates of expected storm runoff for given observed input variables are biased. This bias in expected runoff estimation results in biased parameter estimates if those parameter estimates are obtained by a least squares fit of predicted to observed runoff values. The problems of error inflation and bias are examined in detail for a simple linear regression of runoff on rainfall and for a nonlinear U.S. Geological Survey precipitation-runoff model. Some implications for flood frequency analysis are considered. A case study using a set of data from Turtle Creek near Dallas, Texas, illustrates the problems of model input errors.
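
    The bias mechanism is the classical errors-in-variables attenuation: noise in the input shrinks the fitted slope by the reliability ratio lambda = var(x)/(var(x) + var(e)). A numerical sketch:

      import numpy as np

      rng = np.random.default_rng(7)
      n, beta = 20000, 2.0
      x = rng.standard_normal(n)                  # true input (e.g., rainfall)
      y = beta*x + 0.3*rng.standard_normal(n)     # response (e.g., runoff)
      x_obs = x + 0.7*rng.standard_normal(n)      # input measured with error

      slope = np.polyfit(x_obs, y, 1)[0]
      lam = 1.0 / (1.0 + 0.7**2)
      print(slope, beta*lam)                      # fitted slope matches beta*lambda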

  41. Methods for estimating flood frequency in Montana based on data through water year 1998

    USGS Publications Warehouse

    Parrett, Charles; Johnson, Dave R.

    2004-01-01

    Annual peak discharges having recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years (T-year floods) were determined for 660 gaged sites in Montana and in adjacent areas of Idaho, Wyoming, and Canada, based on data through water year 1998. The updated flood-frequency information was subsequently used in regression analyses, either ordinary or generalized least squares, to develop equations relating T-year floods to various basin and climatic characteristics, equations relating T-year floods to active-channel width, and equations relating T-year floods to bankfull width. The equations can be used to estimate flood frequency at ungaged sites. Montana was divided into eight regions, within which flood characteristics were considered to be reasonably homogeneous, and the three sets of regression equations were developed for each region. A measure of the overall reliability of the regression equations is the average standard error of prediction. The average standard errors of prediction for the equations based on basin and climatic characteristics ranged from 37.4 percent to 134.1 percent. Average standard errors of prediction for the equations based on active-channel width ranged from 57.2 percent to 141.3 percent. Average standard errors of prediction for the equations based on bankfull width ranged from 63.1 percent to 155.5 percent. In most regions, the equations based on basin and climatic characteristics generally had smaller average standard errors of prediction than equations based on active-channel or bankfull width. An exception was the Southeast Plains Region, where all equations based on active-channel width had smaller average standard errors of prediction than equations based on basin and climatic characteristics or bankfull width. Methods for weighting estimates derived from the basin- and climatic-characteristic equations and the channel-width equations also were developed. The weights were based on the cross correlation of residuals from the different methods and the average standard errors of prediction. When all three methods were combined, the average standard errors of prediction ranged from 37.4 percent to 120.2 percent. Weighting of estimates reduced the standard errors of prediction for all T-year flood estimates in four regions, reduced the standard errors of prediction for some T-year flood estimates in two regions, and provided no reduction in average standard error of prediction in two regions. A computer program for solving the regression equations, weighting estimates, and determining reliability of individual estimates was developed and placed on the USGS Montana District World Wide Web page. A new regression method, termed Region of Influence regression, also was tested. Test results indicated that the Region of Influence method was not as reliable as the regional equations based on generalized least squares regression. Two additional methods for estimating flood frequency at ungaged sites located on the same streams as gaged sites also are described. The first method, based on a drainage-area-ratio adjustment, is intended for use on streams where the ungaged site of interest is located near a gaged site. The second method, based on interpolation between gaged sites, is intended for use on streams that have two or more streamflow-gaging stations.
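
    The weighting step combines the methods' estimates with weights inversely proportional to their squared standard errors of prediction (a simplified sketch; the report's weights also account for the cross correlation of residuals between methods):

      # hypothetical estimates for one site: basin-characteristic equation vs
      # channel-width equation, with standard errors of prediction as fractions
      q_basin, se_basin = 5200.0, 0.45
      q_chan,  se_chan  = 4400.0, 0.60

      w_basin, w_chan = 1/se_basin**2, 1/se_chan**2
      q_weighted = (w_basin*q_basin + w_chan*q_chan) / (w_basin + w_chan)
      print(round(q_weighted))    # ~4912, pulled toward the lower-error estimate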

  2. Testing boundary conditions for the conjunction fallacy: effects of response mode, conceptual focus, and problem type.

    PubMed

    Wedell, Douglas H; Moro, Rodrigo

    2008-04-01

    Two experiments used within-subject designs to examine how conjunction errors depend on the use of (1) choice versus estimation tasks, (2) probability versus frequency language, and (3) conjunctions of two likely events versus conjunctions of likely and unlikely events. All problems included a three-option format verified to minimize misinterpretation of the base event. In both experiments, conjunction errors were reduced when likely events were conjoined. Conjunction errors were also reduced for estimations compared with choices, with this reduction greater for likely conjuncts, an interaction effect. Shifting conceptual focus from probabilities to frequencies did not affect conjunction error rates. Analyses of numerical estimates for a subset of the problems provided support for the use of three general models by participants for generating estimates. Strikingly, the order in which the two tasks were carried out did not affect the pattern of results, supporting the idea that the mode of responding strongly determines the mode of thinking about conjunctions and hence the occurrence of the conjunction fallacy. These findings were evaluated in terms of implications for rationality of human judgment and reasoning.

  3. A fast and accurate frequency estimation algorithm for sinusoidal signal with harmonic components

    NASA Astrophysics Data System (ADS)

    Hu, Jinghua; Pan, Mengchun; Zeng, Zhidun; Hu, Jiafei; Chen, Dixiang; Tian, Wugang; Zhao, Jianqiang; Du, Qingfa

    2016-10-01

    Frequency estimation is a fundamental problem in many applications, such as traditional vibration measurement, power system supervision, and microelectromechanical system sensor control. In this paper, a fast and accurate frequency estimation algorithm is proposed to deal with the low-efficiency problem of traditional methods. The proposed algorithm consists of coarse and fine frequency estimation steps, and we demonstrate that applying a modified zero-crossing technique is more efficient for coarse frequency estimation than conventional searching methods (locating the peak of the FFT amplitude). Thus, the proposed estimation algorithm requires fewer hardware and software resources and achieves even higher efficiency as the amount of experimental data increases. Experimental results with a modulated magnetic signal show that the root mean square error of frequency estimation is below 0.032 Hz with the proposed algorithm, which has lower computational complexity and better global performance than conventional frequency estimation methods.
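
    A minimal sketch of the coarse-plus-fine idea: a zero-crossing count supplies the coarse frequency, and a local search of the DFT magnitude refines it. The paper's modified zero-crossing technique is not reproduced here; this is a generic stand-in with the same two-step structure, on an invented test signal with a third harmonic.

```python
import numpy as np

def estimate_frequency(x, fs):
    """Coarse estimate from zero crossings, then a fine local DFT search."""
    x = x - x.mean()
    crossings = np.nonzero(np.diff(np.signbit(x)))[0]
    # Each gap between successive zero crossings spans about half a period.
    f_coarse = 0.5 * fs * (len(crossings) - 1) / (crossings[-1] - crossings[0])
    # Fine step: evaluate |DFT| on a dense grid of +/- 10% around the coarse value.
    t = np.arange(x.size) / fs
    grid = np.linspace(0.9 * f_coarse, 1.1 * f_coarse, 801)
    power = np.abs(np.exp(-2j * np.pi * grid[:, None] * t) @ x)
    return grid[int(np.argmax(power))]

fs = 1000.0
t = np.arange(2048) / fs
x = np.sin(2 * np.pi * 72.3 * t) + 0.3 * np.sin(2 * np.pi * 3 * 72.3 * t)  # + 3rd harmonic
print(f"estimated: {estimate_frequency(x, fs):.2f} Hz (true 72.30 Hz)")
```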

  4. Correction of mid-spatial-frequency errors by smoothing in spin motion for CCOS

    NASA Astrophysics Data System (ADS)

    Zhang, Yizhong; Wei, Chaoyang; Shao, Jianda; Xu, Xueke; Liu, Shijie; Hu, Chen; Zhang, Haichao; Gu, Haojin

    2015-08-01

    Smoothing is a convenient and efficient way to correct mid-spatial-frequency errors. Quantifying the smoothing effect allows improvements in efficiency for finishing precision optics. A series of experiments in spin motion was performed to study the smoothing effect in correcting mid-spatial-frequency errors. Some experiments used the same pitch tool at different spinning speeds, and others used different tools at the same spinning speed. Shu's model was introduced and improved to describe and compare the smoothing efficiency at different spinning speeds and with different tools. From the experimental results, the mid-spatial-frequency errors on the initial surface were nearly smoothed out after the process in spin motion, and the number of smoothing passes can be estimated by the model before the process. Meanwhile, this method was also applied to smooth an aspherical component with obvious mid-spatial-frequency error after magnetorheological finishing. As a result, a high-precision aspheric optical component was obtained with PV = 0.1λ and RMS = 0.01λ.

  5. Microwave Photonic Architecture for Direction Finding of LPI Emitters: Post-Processing for Angle of Arrival Estimation

    DTIC Science & Technology

    2016-09-01

    For an FMCW signal, it was demonstrated that the system is capable of estimating the AOA with a root-mean-square (RMS) error of 0.29° at 1° resolution. For a P4 coded signal, the RMS error in estimating the AOA is 0.32° at 1° resolution.

  6. Influence of tire dynamics on slip ratio estimation of independent driving wheel system

    NASA Astrophysics Data System (ADS)

    Li, Jianqiu; Song, Ziyou; Wei, Yintao; Ouyang, Minggao

    2014-11-01

    The independent driving wheel system, which is composed of an in-wheel permanent magnet synchronous motor (I-PMSM) and a tire, makes slip-ratio estimation more convenient because the rotary speed of the rotor can be accurately measured. However, the speed of the tire ring does not equal the rotor speed when tire deformation is considered. For this reason, a deformable tire and a detailed I-PMSM are modeled using Matlab/Simulink. Moreover, the tire/road contact interface (a slippery road) is accurately described by the nonlinear relaxation-length-based model and the Magic Formula pragmatic model. Based on this relatively accurate model, the error of the slip ratio estimated from the rotor speed is analyzed in both the time and frequency domains when a quarter car is started by the I-PMSM with a definite target torque input curve. In addition, the natural frequencies (NFs) of the driving wheel system with variable parameters are illustrated to present the relationship between the slip-ratio estimation error and the NF. According to this relationship, a low-pass filter (LPF), whose cutoff frequency corresponds to the NF, is proposed to eliminate the error in the estimated slip ratio. The analysis of the effect of the driving wheel parameters and road conditions on slip-ratio estimation shows that the peak estimation error can be reduced by up to 75% when the LPF is adopted. The robustness and effectiveness of the LPF are therefore validated.
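
    The error-suppression step can be illustrated by low-pass filtering a synthetic slip-ratio signal contaminated by an oscillation at the wheel system's natural frequency. The 40 Hz natural frequency, the filter order, and the cutoff placed at half the NF are illustrative choices, not values from the paper.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs, f_nat = 1000.0, 40.0                # sample rate and assumed natural frequency (Hz)
t = np.arange(0, 1.0, 1.0 / fs)
true_slip = 0.05 * (1.0 - np.exp(-5.0 * t))                   # smooth slip build-up
estimated = true_slip + 0.02 * np.sin(2 * np.pi * f_nat * t)  # tire-dynamics ripple

# Second-order Butterworth low-pass tied to the natural frequency (cutoff at NF/2).
b, a = butter(2, (f_nat / 2.0) / (fs / 2.0))
filtered = filtfilt(b, a, estimated)    # zero-phase filtering avoids added lag

print(f"peak error before filtering: {np.max(np.abs(estimated - true_slip)):.4f}")
print(f"peak error after filtering : {np.max(np.abs(filtered - true_slip)):.4f}")
```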

  7. Impact of electronic chemotherapy order forms on prescribing errors at an urban medical center: results from an interrupted time-series analysis.

    PubMed

    Elsaid, K; Truong, T; Monckeberg, M; McCarthy, H; Butera, J; Collins, C

    2013-12-01

    To evaluate the impact of electronic standardized chemotherapy templates on incidence and types of prescribing errors. A quasi-experimental interrupted time series with segmented regression. A 700-bed multidisciplinary tertiary care hospital with an ambulatory cancer center. A multidisciplinary team including oncology physicians, nurses, pharmacists and information technologists. Standardized, regimen-specific, chemotherapy prescribing forms were developed and implemented over a 32-month period. Trend of monthly prevented prescribing errors per 1000 chemotherapy doses during the pre-implementation phase (30 months), immediate change in the error rate from pre-implementation to implementation and trend of errors during the implementation phase. Errors were analyzed according to their types: errors in communication or transcription, errors in dosing calculation and errors in regimen frequency or treatment duration. Relative risk (RR) of errors in the post-implementation phase (28 months) compared with the pre-implementation phase was computed with 95% confidence interval (CI). Baseline monthly error rate was stable with 16.7 prevented errors per 1000 chemotherapy doses. A 30% reduction in prescribing errors was observed with initiating the intervention. With implementation, a negative change in the slope of prescribing errors was observed (coefficient = -0.338; 95% CI: -0.612 to -0.064). The estimated RR of transcription errors was 0.74; 95% CI (0.59-0.92). The estimated RR of dosing calculation errors was 0.06; 95% CI (0.03-0.10). The estimated RR of chemotherapy frequency/duration errors was 0.51; 95% CI (0.42-0.62). Implementing standardized chemotherapy-prescribing templates significantly reduced all types of prescribing errors and improved chemotherapy safety.
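
    Segmented regression of an interrupted time series reduces to an ordinary least-squares fit with level-change and slope-change terms. The sketch below uses synthetic monthly error rates with the study's pre/post durations; the effect sizes are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
months = np.arange(58.0)                    # 30 pre- and 28 post-implementation months
post = (months >= 30).astype(float)         # level-change indicator
time_after = np.where(post > 0, months - 30.0, 0.0)   # post-implementation clock

# Synthetic truth: stable 16.7 baseline, a level drop at go-live, then a down-slope.
rate = 16.7 - 5.0 * post - 0.34 * time_after + rng.normal(0.0, 1.0, months.size)

# Design matrix: intercept, baseline trend, level change, slope change.
X = np.column_stack([np.ones_like(months), months, post, time_after])
coef, *_ = np.linalg.lstsq(X, rate, rcond=None)
for name, c in zip(["intercept", "pre-trend", "level change", "slope change"], coef):
    print(f"{name:>12}: {c:+.3f}")
```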

  8. Optimization of spatial frequency domain imaging technique for estimating optical properties of food and biological materials

    USDA-ARS?s Scientific Manuscript database

    Spatial frequency domain imaging technique has recently been developed for determination of the optical properties of food and biological materials. However, accurate estimation of the optical property parameters by the technique is challenging due to measurement errors associated with signal acquis...

  9. Estimation of flood-frequency characteristics of small urban streams in North Carolina

    USGS Publications Warehouse

    Robbins, J.C.; Pope, B.F.

    1996-01-01

    A statewide study was conducted to develop methods for estimating the magnitude and frequency of floods of small urban streams in North Carolina. This type of information is critical in the design of bridges, culverts and water-control structures, establishment of flood-insurance rates and flood-plain regulation, and for other uses by urban planners and engineers. Concurrent records of rainfall and runoff data collected in small urban basins were used to calibrate rainfall-runoff models. Historic rainfall records were used with the calibrated models to synthesize a long-term record of annual peak discharges. The synthesized record of annual peak discharges was used in a statistical analysis to determine flood-frequency distributions. These frequency distributions were used with distributions from previous investigations to develop a database for 32 small urban basins in the Blue Ridge-Piedmont, Sand Hills, and Coastal Plain hydrologic areas. The study basins ranged in size from 0.04 to 41.0 square miles. Data describing the size and shape of the basin, level of urban development, and climate and rural flood characteristics also were included in the database. Estimation equations were developed by relating flood-frequency characteristics to basin characteristics in a generalized least-squares regression analysis. The most significant basin characteristics are drainage area, impervious area, and rural flood discharge. The model error and prediction errors for the estimating equations were less than those for the national flood-frequency equations previously reported. Resulting equations, which have prediction errors generally less than 40 percent, can be used to estimate flood-peak discharges for 2-, 5-, 10-, 25-, 50-, and 100-year recurrence intervals for small urban basins across the State, assuming negligible, sustainable, in-channel detention or basin storage.

  10. Point counts from clustered populations: Lessons from an experiment with Hawaiian crows

    USGS Publications Warehouse

    Hayward, G.D.; Kepler, C.B.; Scott, J.M.

    1991-01-01

    We designed an experiment to identify factors contributing most to error in counts of Hawaiian Crow or Alala (Corvus hawaiiensis) groups that are detected aurally. Seven observers failed to detect calling Alala on 197 of 361 3-min point counts on four transects extending from cages with captive Alala. A detection curve describing the relation between frequency of flock detection and distance typified the distribution expected in transect or point counts. Failure to detect calling Alala was affected most by distance, observer, and Alala calling frequency. The number of individual Alala calling was not important in detection rate. Estimates of the number of Alala calling (flock size) were biased and imprecise: average difference between number of Alala calling and number heard was 3.24 (±0.277). Distance, observer, number of Alala calling, and Alala calling frequency all contributed to errors in estimates of group size (P < 0.0001). Multiple regression suggested that number of Alala calling contributed most to errors. These results suggest that well-designed point counts may be used to estimate the number of Alala flocks but cast doubt on attempts to estimate flock size when individuals are counted aurally.

  11. A New Formulation of the Filter-Error Method for Aerodynamic Parameter Estimation in Turbulence

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.; Morelli, Eugene A.

    2015-01-01

    A new formulation of the filter-error method for estimating aerodynamic parameters in nonlinear aircraft dynamic models during turbulence was developed and demonstrated. The approach uses an estimate of the measurement noise covariance to identify the model parameters, their uncertainties, and the process noise covariance, in a relaxation method analogous to the output-error method. Prior information on the model parameters and uncertainties can be supplied, and a post-estimation correction to the uncertainty was included to account for colored residuals not considered in the theory. No tuning parameters needing adjustment by the analyst are used in the estimation. The method was demonstrated in simulation using the NASA Generic Transport Model, then applied to flight data from the subscale T-2 jet-engine transport aircraft. Modeling results in different levels of turbulence were compared with results from time-domain output-error and frequency-domain equation-error methods to demonstrate the effectiveness of the approach.

  12. An analytic technique for statistically modeling random atomic clock errors in estimation

    NASA Technical Reports Server (NTRS)

    Fell, P. J.

    1981-01-01

    Minimum variance estimation requires that the statistics of random observation errors be modeled properly. If measurements are derived through the use of atomic frequency standards, then one source of error affecting the observable is random fluctuation in frequency. This is the case, for example, with range and integrated Doppler measurements from satellites of the Global Positioning System and with baseline determination for geodynamic applications. An analytic method is presented which approximates the statistics of this random process. The procedure starts with a model of the Allan variance for a particular oscillator and develops the statistics of range and integrated Doppler measurements. A series of five first-order Markov processes is used to approximate the power spectral density obtained from the Allan variance.
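
    The final step, approximating the clock-noise spectrum by a sum of first-order Markov processes, can be sketched as a sum of AR(1) processes whose correlation times and weights would in practice be fitted to the Allan variance model; the values below are placeholders, not the paper's fitted parameters.

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(2)
dt, n = 1.0, 100_000
taus = np.array([1e1, 1e2, 1e3, 1e4, 1e5])      # correlation times (s), assumed
sigmas = np.array([1.0, 0.6, 0.4, 0.25, 0.15])  # per-process steady-state sigmas, assumed

y = np.zeros(n)
for tau, sig in zip(taus, sigmas):
    phi = np.exp(-dt / tau)
    # Driving noise scaled so the AR(1) stationary variance equals sig**2.
    w = rng.normal(0.0, sig * np.sqrt(1.0 - phi**2), n)
    y += lfilter([1.0], [1.0, -phi], w)          # x[k] = phi * x[k-1] + w[k]

# y stands in for fractional frequency noise; integrating gives a range-like error.
range_error = np.cumsum(y) * dt
print(f"frequency sigma: {y.std():.3f}, integrated error at end: {range_error[-1]:.1f}")
```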

  13. Airplane wing vibrations due to atmospheric turbulence

    NASA Technical Reports Server (NTRS)

    Pastel, R. L.; Caruthers, J. E.; Frost, W.

    1981-01-01

    The magnitude of error introduced due to wing vibration when measuring atmospheric turbulence with a wind probe mounted at the wing tip was studied. It was also determined whether accelerometers mounted on the wing tip are needed to correct this error. A spectrum analysis approach is used to determine the error. Estimates of the B-57 wing characteristics are used to simulate the airplane wing, and von Karman's cross spectrum function is used to simulate atmospheric turbulence. It was found that wing vibration introduces large error in measured spectra of turbulence in the frequency range close to the natural frequencies of the wing.

  14. Parameter Estimation for GRACE-FO Geometric Ranging Errors

    NASA Astrophysics Data System (ADS)

    Wegener, H.; Mueller, V.; Darbeheshti, N.; Naeimi, M.; Heinzel, G.

    2017-12-01

    Onboard GRACE-FO, the novel Laser Ranging Instrument (LRI) serves as a technology demonstrator, but it is a fully functional instrument to provide an additional high-precision measurement of the primary mission observable: the biased range between the two spacecraft. Its (expectedly) two largest error sources are laser frequency noise and tilt-to-length (TTL) coupling. While not much can be done about laser frequency noise, the mechanics of the TTL error are widely understood. They depend, however, on unknown parameters. In order to improve the quality of the ranging data, it is hence essential to accurately estimate these parameters and remove the resulting TTL error from the data. Means to do so will be discussed. In particular, the possibility of using calibration maneuvers, the utility of the attitude information provided by the LRI via Differential Wavefront Sensing (DWS), and the benefit from combining ranging data from LRI with ranging data from the established microwave ranging, will be mentioned.

  15. Was That Assumption Necessary? Reconsidering Boundary Conditions for Analytical Solutions to Estimate Streambed Fluxes

    NASA Astrophysics Data System (ADS)

    Luce, Charles H.; Tonina, Daniele; Applebee, Ralph; DeWeese, Timothy

    2017-11-01

    Two common refrains about using the one-dimensional advection diffusion equation to estimate fluid fluxes and thermal conductivity from temperature time series in streambeds are that the solution assumes that (1) the surface boundary condition is a sine wave or nearly so, and (2) there is no gradient in mean temperature with depth. Although the mathematical posing of the problem in the original solution might lead one to believe these constraints exist, the perception that they are a source of error is a fallacy. Here we develop a mathematical proof demonstrating the equivalence of the solution as developed based on an arbitrary (Fourier integral) surface temperature forcing when evaluated at a single given frequency versus that derived considering a single frequency from the beginning. The implication is that any single frequency can be used in the frequency-domain solutions to estimate thermal diffusivity and 1-D fluid flux in streambeds, even if the forcing has multiple frequencies. This means that diurnal variations with asymmetric shapes or gradients in the mean temperature with depth are not actually assumptions, and deviations from them should not cause errors in estimates. Given this clarification, we further explore the potential for using information at multiple frequencies to augment the information derived from time series of temperature.
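
    In practice this result means a single-bin Fourier projection at any chosen frequency suffices. The sketch below extracts the diurnal component from two synthetic streambed temperature records and forms the amplitude ratio and time lag that feed the standard frequency-domain flux solutions; the depths, amplitudes, lag, and noise levels are invented.

```python
import numpy as np

def component_at(ts, fs, f0):
    """Complex amplitude of the f0 component of ts (single-bin Fourier sum)."""
    t = np.arange(ts.size) / fs
    return 2.0 / ts.size * np.sum(ts * np.exp(-2j * np.pi * f0 * t))

fs = 1.0 / 900.0                         # one sample every 15 minutes
f_d = 1.0 / 86400.0                      # diurnal frequency
t = np.arange(0, 10 * 86400, 900)        # ten days of record
rng = np.random.default_rng(3)
shallow = 12 + 3.0 * np.cos(2 * np.pi * f_d * t) + rng.normal(0, 0.1, t.size)
deep = 12 + 1.2 * np.cos(2 * np.pi * f_d * (t - 7200.0)) + rng.normal(0, 0.1, t.size)

a_s = component_at(shallow - shallow.mean(), fs, f_d)
a_d = component_at(deep - deep.mean(), fs, f_d)
print(f"amplitude ratio: {abs(a_d) / abs(a_s):.3f} (true 0.400)")
print(f"time lag: {np.angle(a_s / a_d) / (2 * np.pi * f_d):.0f} s (true 7200 s)")
```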

  16. 3D Tendon Strain Estimation Using High-frequency Volumetric Ultrasound Images: A Feasibility Study.

    PubMed

    Carvalho, Catarina; Slagmolen, Pieter; Bogaerts, Stijn; Scheys, Lennart; D'hooge, Jan; Peers, Koen; Maes, Frederik; Suetens, Paul

    2018-03-01

    Estimation of strain in tendons for tendinopathy assessment is a hot topic within the sports medicine community. It is believed that, if accurately estimated, existing treatment and rehabilitation protocols can be improved and presymptomatic abnormalities can be detected earlier. State-of-the-art studies present inaccurate and highly variable strain estimates, leaving this problem without solution. Out-of-plane motion, present when acquiring two-dimensional (2D) ultrasound (US) images, is a known problem and may be responsible for such errors. This work investigates the benefit of high-frequency, three-dimensional (3D) US imaging to reduce errors in tendon strain estimation. Volumetric US images were acquired in silico, in vitro, and ex vivo using an innovative acquisition approach that combines the acquisition of 2D high-frequency US images with a mechanically guided system. An affine image registration method was used to estimate global strain. 3D strain estimates were then compared with ground-truth values and with 2D strain estimates. The obtained results for in silico data showed a mean absolute error (MAE) of 0.07%, 0.05%, and 0.27% for 3D estimates along the axial, lateral, and elevation directions, and a respective MAE of 0.21% and 0.29% for 2D strain estimates. Although 3D could outperform 2D, this does not occur in in vitro and ex vivo settings, likely due to 3D acquisition artifacts. Comparison against state-of-the-art methods showed competitive results. The proposed work shows that 3D strain estimates are more accurate than 2D estimates, but acquisition of appropriate 3D US images remains a challenge.

  17. A Feasibility Study for Simultaneous Measurements of Water Vapor and Precipitation Parameters using a Three-frequency Radar

    NASA Technical Reports Server (NTRS)

    Meneghini, R.; Liao, L.; Tian, L.

    2005-01-01

    The radar return powers from a three-frequency radar, with center frequency at 22.235 GHz and upper and lower frequencies chosen with equal water vapor absorption coefficients, can be used to estimate water vapor density and parameters of the precipitation. A linear combination of differential measurements between the center and lower frequencies on the one hand and the upper and lower frequencies on the other provides an estimate of differential water vapor absorption. The coupling between the precipitation and water vapor estimates is generally weak but increases with bandwidth and the amount of non-Rayleigh scattering of the hydrometeors. The coupling leads to biases in the estimates of water vapor absorption that are related primarily to the phase state and the median mass diameter of the hydrometeors. For a down-looking radar, path-averaged estimates of water vapor absorption are possible under rain-free as well as raining conditions by using the surface returns at the three frequencies. Simulations of the water vapor attenuation retrieval show that the largest source of error typically arises from the variance in the measured radar return powers. Although the error can be mitigated by a combination of a high pulse repetition frequency, pulse compression, and averaging in range and time, the radar receiver must be stable over the averaging period. For fractional bandwidths of 20% or less, the potential exists for simultaneous measurements at the three frequencies with a single antenna and transceiver, thereby significantly reducing the cost and mass of the system.

  18. Complex phase error and motion estimation in synthetic aperture radar imaging

    NASA Astrophysics Data System (ADS)

    Soumekh, M.; Yang, H.

    1991-06-01

    Attention is given to a SAR wave equation-based system model that accurately represents the interaction of the impinging radar signal with the target to be imaged. The model is used to estimate the complex phase error across the synthesized aperture from the measured corrupted SAR data by combining the two wave equation models governing the collected SAR data at two temporal frequencies of the radar signal. The SAR system model shows that the motion of an object in a static scene results in coupled Doppler shifts in both the temporal frequency domain and the spatial frequency domain of the synthetic aperture. The velocity of the moving object is estimated through these two Doppler shifts. It is shown that once the dynamic target's velocity is known, its reconstruction can be formulated via a squint-mode SAR geometry with parameters that depend upon the dynamic target's velocity.

  19. Alternative Regression Equations for Estimation of Annual Peak-Streamflow Frequency for Undeveloped Watersheds in Texas using PRESS Minimization

    USGS Publications Warehouse

    Asquith, William H.; Thompson, David B.

    2008-01-01

    The U.S. Geological Survey, in cooperation with the Texas Department of Transportation and in partnership with Texas Tech University, investigated a refinement of the regional regression method and developed alternative equations for estimation of peak-streamflow frequency for undeveloped watersheds in Texas. A common model for estimation of peak-streamflow frequency is based on the regional regression method. The current (2008) regional regression equations for 11 regions of Texas are based on log10 transformations of all regression variables (drainage area, main-channel slope, and watershed shape). Exclusive use of log10 transformation does not fully linearize the relations between the variables. As a result, some systematic bias remains in the current equations. The bias results in overestimation of peak streamflow for both the smallest and largest watersheds. The bias increases with increasing recurrence interval. The primary source of the bias is the discernible curvilinear relation in log10 space between peak streamflow and drainage area. Bias is demonstrated by selected residual plots with superimposed LOWESS trend lines. To address the bias, a statistical framework based on minimization of the PRESS statistic through power transformation of drainage area is described and implemented, and the resulting regression equations are reported. The equations derived from PRESS minimization have smaller PRESS statistics and residual standard errors than the log10-exclusive equations. Selected residual plots for the PRESS-minimized equations are presented to demonstrate that systematic bias in regional regression equations for peak-streamflow frequency estimation in Texas can be reduced. Because the overall error is similar to the error associated with previous equations and because the bias is reduced, the PRESS-minimized equations reported here provide alternative equations for peak-streamflow frequency estimation.
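
    The PRESS statistic is cheap to compute for ordinary least squares via the hat-matrix shortcut, which makes minimization over power transformations a simple grid search. Below is a sketch on synthetic data; the exponent grid, variables, and sample values are illustrative, not the report's.

```python
import numpy as np

def press(X, y):
    """PRESS = sum of squared leave-one-out prediction errors, e_i / (1 - h_ii)."""
    hat = X @ np.linalg.solve(X.T @ X, X.T)
    resid = y - hat @ y
    return float(np.sum((resid / (1.0 - np.diag(hat))) ** 2))

rng = np.random.default_rng(4)
area = rng.lognormal(mean=4.0, sigma=1.5, size=200)          # drainage area, sq. mi.
log_q = 2.0 + 0.7 * area**0.3 + rng.normal(0.0, 0.2, 200)    # curvilinear in log space

candidates = np.linspace(0.1, 0.9, 17)                       # trial power transforms
scores = [press(np.column_stack([np.ones_like(area), area**lam]), log_q)
          for lam in candidates]
print(f"PRESS-minimizing exponent on drainage area: {candidates[np.argmin(scores)]:.2f}")
```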

  20. Methods for estimating the magnitude and frequency of peak streamflows at ungaged sites in and near the Oklahoma Panhandle

    USGS Publications Warehouse

    Smith, S. Jerrod; Lewis, Jason M.; Graves, Grant M.

    2015-09-28

    Generalized-least-squares multiple-linear regression analysis was used to formulate regression relations between peak-streamflow frequency statistics and basin characteristics. Contributing drainage area was the only basin characteristic determined to be statistically significant for all annual exceedance probabilities and was the only basin characteristic used in regional regression equations for estimating peak-streamflow frequency statistics on unregulated streams in and near the Oklahoma Panhandle. The regression model pseudo-coefficient of determination, converted to percent, for the Oklahoma Panhandle regional regression equations ranged from about 38 to 63 percent. The standard errors of prediction and the standard model errors for the Oklahoma Panhandle regional regression equations ranged from about 84 to 148 percent and from about 76 to 138 percent, respectively. These errors were comparable to those reported for regional peak-streamflow frequency regression equations for the High Plains areas of Texas and Colorado. The root mean square errors for the Oklahoma Panhandle regional regression equations (ranging from 3,170 to 92,000 cubic feet per second) were less than the root mean square errors for the Oklahoma statewide regression equations (ranging from 18,900 to 412,000 cubic feet per second); therefore, the Oklahoma Panhandle regional regression equations produce more accurate peak-streamflow statistic estimates for the irrigated period of record in the Oklahoma Panhandle than do the Oklahoma statewide regression equations. The regression equations developed in this report are applicable to streams that are not substantially affected by regulation, impoundment, or surface-water withdrawals. These regression equations are intended for use for stream sites with contributing drainage areas less than or equal to about 2,060 square miles, the maximum value for the independent variable used in the regression analysis.

  1. Short-term prediction of rain attenuation level and volatility in Earth-to-Satellite links at EHF band

    NASA Astrophysics Data System (ADS)

    de Montera, L.; Mallet, C.; Barthès, L.; Golé, P.

    2008-08-01

    This paper shows how nonlinear models originally developed in the finance field can be used to predict rain attenuation level and volatility in Earth-to-Satellite links operating at the Extremely High Frequencies band (EHF, 20–50 GHz). A common approach to solving this problem is to consider that the prediction error corresponds only to scintillations, whose variance is assumed to be constant. Nevertheless, this assumption does not seem to be realistic because of the heteroscedasticity of error time series: the variance of the prediction error is found to be time-varying and has to be modeled. Since rain attenuation time series behave similarly to certain stocks or foreign exchange rates, a switching ARIMA/GARCH model was implemented. The originality of this model is that not only the attenuation level, but also the error conditional distribution are predicted. It allows an accurate upper-bound of the future attenuation to be estimated in real time that minimizes the cost of Fade Mitigation Techniques (FMT) and therefore enables the communication system to reach a high percentage of availability. The performance of the switching ARIMA/GARCH model was estimated using a measurement database of the Olympus satellite 20/30 GHz beacons and this model is shown to outperform significantly other existing models. The model also includes frequency scaling from the downlink frequency to the uplink frequency. The attenuation effects (gases, clouds and rain) are first separated with a neural network and then scaled using specific scaling factors. As to the resulting uplink prediction error, the error contribution of the frequency scaling step is shown to be larger than that of the downlink prediction, indicating that further study should focus on improving the accuracy of the scaling factor.
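
    The volatility-modeling component reduces, at its core, to a GARCH-type recursion in which the conditional variance of the next prediction error is updated from the last squared error. The sketch below shows a standard GARCH(1,1) update; the parameter values and the Gaussian stand-in residuals are assumptions, and the paper's full switching ARIMA/GARCH machinery is not reproduced.

```python
import numpy as np

def garch_variance(errors, omega=0.01, alpha=0.15, beta=0.80):
    """One-step-ahead conditional variance h[t] from past prediction errors."""
    h = np.empty(errors.size + 1)
    h[0] = errors.var()                  # initialize at the unconditional variance
    for t, e in enumerate(errors):
        h[t + 1] = omega + alpha * e**2 + beta * h[t]
    return h

rng = np.random.default_rng(5)
errors = rng.normal(0.0, 0.3, 500)       # stand-in attenuation residuals (dB)
h = garch_variance(errors)
# An upper bound for the next attenuation sample: predicted level + k * sqrt(h).
print(f"next-step error sigma: {np.sqrt(h[-1]):.3f} dB")
```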

  2. Aniseikonia quantification: error rate of rule of thumb estimation.

    PubMed

    Lubkin, V; Shippman, S; Bennett, G; Meininger, D; Kramer, P; Poppinga, P

    1999-01-01

    To find the error rate in quantifying aniseikonia by using "Rule of Thumb" estimation in comparison with proven space eikonometry. Study 1: 24 adult pseudophakic individuals were measured for anisometropia and astigmatic interocular difference. Rule of Thumb quantification for prescription was calculated and compared with aniseikonia measurement by the classical Essilor Projection Space Eikonometer. Study 2: parallel analysis was performed on 62 consecutive phakic patients from our strabismus clinic group. Frequency of error: For Group 1 (24 cases): 5 (or 21%) were equal (i.e., 1% or less difference); 16 (or 67%) were greater (more than 1% different); and 3 (13%) were less by Rule of Thumb calculation in comparison to aniseikonia determined on the Essilor eikonometer. For Group 2 (62 cases): 45 (or 73%) were equal (1% or less); 10 (or 16%) were greater; and 7 (or 11%) were lower in the Rule of Thumb calculations in comparison to Essilor eikonometry. Magnitude of error: In Group 1, in 10/24 (29%) aniseikonia by Rule of Thumb estimation was 100% or more greater than by space eikonometry, and in 6 of those ten by 200% or more. In Group 2, in 4/62 (6%) aniseikonia by Rule of Thumb estimation was 200% or more greater than by space eikonometry. The frequency and magnitude of apparent clinical errors of Rule of Thumb estimation is disturbingly large. This problem is greatly magnified by the time, effort, and cost of prescribing and executing an aniseikonic correction for a patient. The higher the refractive error, the greater the anisometropia, and the worse the errors in Rule of Thumb estimation of aniseikonia. Accurate eikonometric methods and devices should be employed in all cases where such measurements can be made. Rule of Thumb estimations should be limited to cases where such subjective testing and measurement cannot be performed, as in infants after unilateral cataract surgery.

  3. Fringe order correction for the absolute phase recovered by two selected spatial frequency fringe projections in fringe projection profilometry.

    PubMed

    Ding, Yi; Peng, Kai; Yu, Miao; Lu, Lei; Zhao, Kun

    2017-08-01

    The performance of the two selected spatial frequency phase unwrapping methods is limited by a phase error bound beyond which errors will occur in the fringe order, leading to a significant error in the recovered absolute phase map. In this paper, we propose a method to detect and correct the wrong fringe orders. Two constraints are introduced during the fringe order determination of the two selected spatial frequency phase unwrapping methods. A strategy to detect and correct the wrong fringe orders is also described. Compared with existing methods, we do not need to estimate the threshold associated with absolute phase values to determine the fringe order error, which makes the method more reliable and avoids the search procedure in detecting and correcting successive fringe order errors. The effectiveness of the proposed method is validated by experimental results.

  4. Theoretical and simulated performance for a novel frequency estimation technique

    NASA Technical Reports Server (NTRS)

    Crozier, Stewart N.

    1993-01-01

    A low complexity, open-loop, discrete-time, delay-multiply-average (DMA) technique for estimating the frequency offset for digitally modulated MPSK signals is investigated. A nonlinearity is used to remove the MPSK modulation and generate the carrier component to be extracted. Theoretical and simulated performance results are presented and compared to the Cramer-Rao lower bound (CRLB) for the variance of the frequency estimation error. For all signal-to-noise ratios (SNR's) above threshold, it is shown that the CRLB can essentially be achieved with linear complexity.
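
    A minimal open-loop DMA estimator is easy to state: an M-th power nonlinearity removes the MPSK modulation, and the phase of the averaged delayed product yields the offset. The sketch below is a generic stand-in under that structure; the delay, block size, and noise level are assumptions, and the paper's exact variant may differ.

```python
import numpy as np

def dma_offset(samples, fs, M=4, D=8):
    """Delay-multiply-average frequency-offset estimate for M-PSK samples."""
    z = samples**M                                  # strip the M-PSK modulation
    prod = z[D:] * np.conj(z[:-D])                  # delay-multiply
    return np.angle(np.mean(prod)) * fs / (2.0 * np.pi * D * M)

rng = np.random.default_rng(6)
fs, n, f_off = 1.0e6, 4096, 3.1e3
symbols = np.exp(1j * np.pi / 2 * rng.integers(0, 4, n))       # QPSK (M = 4)
t = np.arange(n) / fs
rx = symbols * np.exp(2j * np.pi * f_off * t)
rx += rng.normal(0, 0.05, n) + 1j * rng.normal(0, 0.05, n)     # additive noise
print(f"estimated offset: {dma_offset(rx, fs):.1f} Hz (true {f_off:.1f} Hz)")
```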

  5. Approaches to stream solute load estimation for solutes with varying dynamics from five diverse small watersheds

    USGS Publications Warehouse

    Aulenbach, Brent T.; Burns, Douglas A.; Shanley, James B.; Yanai, Ruth D.; Bae, Kikang; Wild, Adam; Yang, Yang; Yi, Dong

    2016-01-01

    Estimating streamwater solute loads is a central objective of many water-quality monitoring and research studies, as loads are used to compare with atmospheric inputs, to infer biogeochemical processes, and to assess whether water quality is improving or degrading. In this study, we evaluate loads and associated errors to determine the best load estimation technique among three methods (a period-weighted approach, the regression-model method, and the composite method) based on a solute's concentration dynamics and sampling frequency. We evaluated a broad range of varying concentration dynamics with stream flow and season using four dissolved solutes (sulfate, silica, nitrate, and dissolved organic carbon) at five diverse small watersheds (Sleepers River Research Watershed, VT; Hubbard Brook Experimental Forest, NH; Biscuit Brook Watershed, NY; Panola Mountain Research Watershed, GA; and Río Mameyes Watershed, PR) with fairly high-frequency sampling during a 10- to 11-yr period. Data sets with three different sampling frequencies were derived from the full data set at each site (weekly plus storm/snowmelt events, weekly, and monthly) and errors in loads were assessed for the study period, annually, and monthly. For solutes that had a moderate to strong concentration–discharge relation, the composite method performed best, unless the autocorrelation of the model residuals was <0.2, in which case the regression-model method was most appropriate. For solutes that had a nonexistent or weak concentration–discharge relation (model R2 < about 0.3), the period-weighted approach was most appropriate. The lowest errors in loads were achieved for solutes with the strongest concentration–discharge relations. Sample and regression model diagnostics could be used to approximate overall accuracies and annual precisions. For the period-weighted approach, errors were lower when the variance in concentrations was lower, the degree of autocorrelation in the concentrations was higher, and sampling frequency was higher. The period-weighted approach was most sensitive to sampling frequency. For the regression-model and composite methods, errors were lower when the variance in model residuals was lower. For the composite method, errors were lower when the autocorrelation in the residuals was higher. Guidelines to determine the best load estimation method based on solute concentration–discharge dynamics and diagnostics are presented and should be applicable to other studies.
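
    Of the three methods, the period-weighted approach is the simplest to sketch: each concentration sample represents the period nearest it in time, and the load integrates concentration times discharge over the record. The synthetic record below, its sampling pattern, and its units are purely illustrative.

```python
import numpy as np

def period_weighted_load(sample_times, conc, flow_times, flow):
    """Load = sum over flow time steps of Q(t) * C(nearest-in-time sample) * dt."""
    idx = np.abs(flow_times[:, None] - sample_times[None, :]).argmin(axis=1)
    dt = np.gradient(flow_times)
    return float(np.sum(flow * conc[idx] * dt))

rng = np.random.default_rng(7)
days = np.arange(0.0, 365.0, 1.0 / 24.0)                    # hourly discharge record
flow = 2.0 + 1.5 * np.sin(2 * np.pi * days / 365.0) ** 2 + rng.gamma(1.0, 0.2, days.size)
sample_days = np.arange(0.0, 365.0, 7.0)                    # weekly concentration samples
conc = 1.0 + 0.1 * np.sin(2 * np.pi * sample_days / 365.0)  # weak seasonality

load = period_weighted_load(sample_days, conc, days, flow)
print(f"annual load (arbitrary units): {load:.1f}")
```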

  6. Submillimeter, millimeter, and microwave spectral line catalogue

    NASA Technical Reports Server (NTRS)

    Poynter, R. L.; Pickett, H. M.

    1980-01-01

    A computer accessible catalogue of submillimeter, millimeter, and microwave spectral lines in the frequency range between 0 and 3000 GHz (i.e., wavelengths longer than 100 μm) is discussed. The catalogue was used as a planning guide and as an aid in the identification and analysis of observed spectral lines. The information listed for each spectral line includes the frequency and its estimated error, the intensity, lower state energy, and quantum number assignment. The catalogue was constructed by using theoretical least squares fits of published spectral lines to accepted molecular models. The associated predictions and their estimated errors are based upon the resultant fitted parameters and their covariances.

  7. Evaluation of Approaches to Deal with Low-Frequency Nuisance Covariates in Population Pharmacokinetic Analyses.

    PubMed

    Lagishetty, Chakradhar V; Duffull, Stephen B

    2015-11-01

    Clinical studies include occurrences of rare variables, like genotypes, which, due to their frequency and strength, render their effects difficult to estimate from a dataset. Variables that influence the estimated value of a model-based parameter are termed covariates. It is often difficult to determine if such an effect is significant, since type I error can be inflated when the covariate is rare. Their presence may have either an insubstantial effect on the parameters of interest, and hence be ignorable, or conversely they may be influential and therefore non-ignorable. If these covariate effects cannot be estimated due to low power and are non-ignorable, they are considered nuisance effects, in that they have to be accounted for but, due to type I error, are of limited interest. This study assesses methods of handling nuisance covariate effects. The specific objectives include (1) calibrating the frequency of a covariate that is associated with type I error inflation, (2) calibrating the strength that renders it non-ignorable, and (3) evaluating methods for handling these non-ignorable covariates in a nonlinear mixed-effects model setting. Type I error was determined for the Wald test. Methods considered for handling the nuisance covariate effects were case deletion, Box-Cox transformation, and inclusion of a specific fixed-effects parameter. Non-ignorable nuisance covariates were found to be effectively handled through addition of a fixed-effect parameter.

  8. Multipath induced errors in meteorological Doppler/interferometer location systems

    NASA Technical Reports Server (NTRS)

    Wallace, R. G.

    1984-01-01

    One application of an RF interferometer aboard a low-orbiting spacecraft to determine the location of ground-based transmitters is in tracking high-altitude balloons for meteorological studies. A source of error in this application is reflection of the signal from the sea surface. Through propagation and signal analysis, the magnitude of the reflection-induced error in both Doppler frequency measurements and interferometer phase measurements was estimated. The theory of diffuse scattering from random surfaces was applied to obtain the power spectral density of the reflected signal. The processing of the combined direct and reflected signals was then analyzed to find the statistics of the measurement error. It was found that the error varies greatly during the satellite overpass and attains its maximum value at closest approach. The maximum values of interferometer phase error and Doppler frequency error found for the system configuration considered were comparable to thermal noise-induced error.

  9. Estimating pole/zero errors in GSN-IRIS/USGS network calibration metadata

    USGS Publications Warehouse

    Ringler, A.T.; Hutt, C.R.; Aster, R.; Bolton, H.; Gee, L.S.; Storm, T.

    2012-01-01

    Mapping the digital record of a seismograph into true ground motion requires the correction of the data by some description of the instrument's response. For the Global Seismographic Network (Butler et al., 2004), as well as many other networks, this instrument response is represented as a Laplace domain pole–zero model and published in the Standard for the Exchange of Earthquake Data (SEED) format. This Laplace representation assumes that the seismometer behaves as a linear system, with any abrupt changes described adequately via multiple time-invariant epochs. The SEED format allows for published instrument response errors as well, but these typically have not been estimated or provided to users. We present an iterative three-step method to estimate the instrument response parameters (poles and zeros) and their associated errors using random calibration signals. First, we solve a coarse nonlinear inverse problem using a least-squares grid search to yield a first approximation to the solution. This approach reduces the likelihood of poorly estimated parameters (a local-minimum solution) caused by noise in the calibration records and enhances algorithm convergence. Second, we iteratively solve a nonlinear parameter estimation problem to obtain the least-squares best-fit Laplace pole–zero–gain model. Third, by applying the central limit theorem, we estimate the errors in this pole–zero model by solving the inverse problem at each frequency in a two-thirds octave band centered at each best-fit pole–zero frequency. This procedure yields error estimates of the 99% confidence interval. We demonstrate the method by applying it to a number of recent Incorporated Research Institutions in Seismology/United States Geological Survey (IRIS/USGS) network calibrations (network code IU).

  10. Improved calibration technique for in vivo proton MRS thermometry for brain temperature measurement.

    PubMed

    Zhu, M; Bashir, A; Ackerman, J J; Yablonskiy, D A

    2008-09-01

    The most common MR-based approach to noninvasively measure brain temperature relies on the linear relationship between the (1)H MR resonance frequency of tissue water and the tissue's temperature. Herein we provide the most accurate in vivo assessment existing thus far of such a relationship. It was derived by acquiring in vivo MR spectra from a rat brain using a high field (11.74 Tesla [T]) MRI scanner and a single-voxel MR spectroscopy technique based on a LASER pulse sequence. Data were analyzed using three different methods to estimate the (1)H resonance frequencies of water and the metabolites NAA, Cho, and Cr, which are used as temperature-independent internal (frequency) references. Standard modeling of frequency-domain data as composed of resonances characterized by Lorentzian line shapes gave the tightest resonance-frequency versus temperature correlation. An analysis of the uncertainty in temperature estimation has shown that the major limiting factor is an error in estimating the metabolite frequency. For example, for a metabolite resonance linewidth of 8 Hz, signal sampling rate of 2 Hz and SNR of 5, an accuracy of approximately 0.5 degrees C can be achieved at a magnetic field of 3T. For comparison, in the current study conducted at 11.74T, the temperature estimation error was approximately 0.1 degrees C.
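
    The underlying calibration is a linear map from the water-reference chemical-shift difference to temperature, which also makes the error budget transparent: with the water line drifting near -0.01 ppm/°C, a 0.001-ppm error in the metabolite frequency moves the temperature estimate by about 0.1 °C. The sketch below uses that textbook slope and a placeholder 37 °C reference shift, not the paper's fitted constants.

```python
def brain_temperature(delta_water_ppm, delta_ref_ppm,
                      ddelta_at_37=2.665, slope=-0.01):
    """Linear MRS thermometry: the water resonance drifts ~ -0.01 ppm/C while the
    metabolite reference (NAA, Cho, or Cr) is temperature independent.
    ddelta_at_37 (ppm) is a placeholder calibration constant, not the paper's."""
    return 37.0 + ((delta_water_ppm - delta_ref_ppm) - ddelta_at_37) / slope

# A 0.001-ppm error in the reference frequency shifts the estimate by ~0.1 C.
print(f"{brain_temperature(4.700, 2.020):.2f} C")
print(f"{brain_temperature(4.701, 2.020):.2f} C")
```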

  11. Interpolating Spherical Harmonics for Computing Antenna Patterns

    DTIC Science & Technology

    2011-07-01

    If g_NF denotes the spline computed from the uniform partition of NF + 1 frequency points, the splines converge as O(NF^-4): ‖g_NF − g‖∞ ≤ C0 ‖g^(4)‖∞ NF^-4. There is the possibility of estimating the error ‖g − g_NF‖∞ even though the function g is unknown. Table 1 compares these unknown errors ‖g − g_NF‖∞ to the computable estimates ‖g_NF − g_2NF‖∞. The latter is a strong predictor of the unknown error. The triple bar is the sup-norm error over all the frequency points.
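
    The computable surrogate described in the excerpt is easy to reproduce: compare the spline on N points against the spline on 2N points. Since the error decays like N^-4, refining the grid cuts it by roughly 16x, so the difference between the two splines is nearly the whole error of the coarser one. The test function below is an invented stand-in for an antenna pattern.

```python
import numpy as np
from scipy.interpolate import CubicSpline

g = lambda f: np.sin(3.0 * f) / (1.0 + f**2)      # invented stand-in pattern
dense = np.linspace(0.0, 4.0, 4001)               # evaluation grid for sup norms

def spline_on(n):
    knots = np.linspace(0.0, 4.0, n + 1)
    return CubicSpline(knots, g(knots))

for n_f in (8, 16, 32, 64):
    s_n, s_2n = spline_on(n_f), spline_on(2 * n_f)
    true_err = np.max(np.abs(g(dense) - s_n(dense)))       # needs g, normally unknown
    surrogate = np.max(np.abs(s_n(dense) - s_2n(dense)))   # computable from data alone
    print(f"N={n_f:3d}  true {true_err:.2e}  surrogate {surrogate:.2e}")
```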

  12. Radar modulation classification using time-frequency representation and nonlinear regression

    NASA Astrophysics Data System (ADS)

    De Luigi, Christophe; Arques, Pierre-Yves; Lopez, Jean-Marc; Moreau, Eric

    1999-09-01

    In a naval electronic environment, pulses emitted by radars are collected by ESM receivers. For most of them, the intrapulse signal is modulated by a particular law. To aid the classical identification process, a classification and estimation of this modulation law is applied to the intrapulse signal measurements. To estimate the time-varying frequency of a signal corrupted by additive noise with good accuracy, one method has been chosen: the Wigner distribution is computed, and the instantaneous frequency is then estimated from the peak location of the distribution. Bias and variance of the estimator are evaluated by computer simulations. In an estimated sequence of frequencies, we assume the presence of both falsely and correctly estimated values, and the errors are hypothesized to be Gaussian. A robust nonlinear regression method, based on the Levenberg-Marquardt algorithm and a maximum likelihood estimator, is then applied to these estimated frequencies. The performance of the method is tested using varied modulation laws and different signal-to-noise ratios.

  13. Multielevation calibration of frequency-domain electromagnetic data

    USGS Publications Warehouse

    Minsley, Burke J.; Kass, M. Andy; Hodges, Greg; Smith, Bruce D.

    2014-01-01

    Systematic calibration errors must be taken into account because they can substantially impact the accuracy of inverted subsurface resistivity models derived from frequency-domain electromagnetic data, resulting in potentially misleading interpretations. We have developed an approach that uses data acquired at multiple elevations over the same location to assess calibration errors. A significant advantage is that this method does not require prior knowledge of subsurface properties from borehole or ground geophysical data (though these can be readily incorporated if available), and is, therefore, well suited to remote areas. The multielevation data were used to solve for calibration parameters and a single subsurface resistivity model that are self consistent over all elevations. The deterministic and Bayesian formulations of the multielevation approach illustrate parameter sensitivity and uncertainty using synthetic- and field-data examples. Multiplicative calibration errors (gain and phase) were found to be better resolved at high frequencies and when data were acquired over a relatively conductive area, whereas additive errors (bias) were reasonably resolved over conductive and resistive areas at all frequencies. The Bayesian approach outperformed the deterministic approach when estimating calibration parameters using multielevation data at a single location; however, joint analysis of multielevation data at multiple locations using the deterministic algorithm yielded the most accurate estimates of calibration parameters. Inversion results using calibration-corrected data revealed marked improvement in misfit, lending added confidence to the interpretation of these models.

  14. Effects of Multipath and Oversampling on Navigation Using Orthogonal Frequency Division Multiplexed Signals of Opportunity

    DTIC Science & Technology

    2008-03-01

    for military use. The L2 carrier frequency operates at 1227.6 MHz and transmits only the precise code. Each satellite transmits a unique pseudo-random noise (PRN) code by which it is identified. GPS receivers require a LOS to four satellite signals to accurately estimate a position in three... receiver frequency errors, noise addition, and multipath effects. He also developed four methods for estimating the cross-correlation peak within a sampled

  15. Errors in the estimation of approximate entropy and other recurrence-plot-derived indices due to the finite resolution of RR time series.

    PubMed

    García-González, Miguel A; Fernández-Chimeno, Mireya; Ramos-Castro, Juan

    2009-02-01

    An analysis of the errors due to the finite resolution of RR time series in the estimation of the approximate entropy (ApEn) is described. The quantification errors in the discrete RR time series produce considerable errors in the ApEn estimation (bias and variance) when the signal variability or the sampling frequency is low. Similar errors can be found in indices related to the quantification of recurrence plots. An easy way to calculate a figure of merit [the signal to resolution of the neighborhood ratio (SRN)] is proposed in order to predict when the bias in the indices could be high. When SRN is close to an integer value n, the bias is higher than when near n - 1/2 or n + 1/2. Moreover, if SRN is close to an integer value, the lower this value, the greater the bias is.

  16. Theoretical investigation on the mass loss impact on asteroseismic grid-based estimates of mass, radius, and age for RGB stars

    NASA Astrophysics Data System (ADS)

    Valle, G.; Dell'Omodarme, M.; Prada Moroni, P. G.; Degl'Innocenti, S.

    2018-01-01

    Aims: We aim to perform a theoretical evaluation of the impact of the mass loss indetermination on asteroseismic grid based estimates of masses, radii, and ages of stars in the red giant branch (RGB) phase. Methods: We adopted the SCEPtER pipeline on a grid spanning the mass range [0.8; 1.8] M⊙. As observational constraints, we adopted the star effective temperatures, the metallicity [Fe/H], the average large frequency spacing Δν, and the frequency of maximum oscillation power νmax. The mass loss was modelled following a Reimers parametrization with the two different efficiencies η = 0.4 and η = 0.8. Results: In the RGB phase, the average random relative error (owing only to observational uncertainty) on mass and age estimates is about 8% and 30% respectively. The bias in mass and age estimates caused by the adoption of a wrong mass loss parameter in the recovery is minor for the vast majority of the RGB evolution. The biases get larger only after the RGB bump. In the last 2.5% of the RGB lifetime the error on the mass determination reaches 6.5% becoming larger than the random error component in this evolutionary phase. The error on the age estimate amounts to 9%, that is, equal to the random error uncertainty. These results are independent of the stellar metallicity [Fe/H] in the explored range. Conclusions: Asteroseismic-based estimates of stellar mass, radius, and age in the RGB phase can be considered mass loss independent within the range (η ∈ [0.0,0.8]) as long as the target is in an evolutionary phase preceding the RGB bump.

  17. Degrees of Freedom for Allan Deviation Estimates of Multiple Clocks

    DTIC Science & Technology

    2016-04-01

    Allan deviation will be represented by σ and standard deviation will be represented by δ. In practice, when the Allan deviation of a... the Allan deviation of standard noise types. Once the number of degrees of freedom is known, an approximate confidence interval can be assigned by... measurement errors from paired difference data. We extend this approach by using the Allan deviation to estimate the error in a frequency standard.
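
    The overlapping Allan deviation that the excerpt's sigma refers to can be computed in a few lines; the white-frequency-noise input below is an illustrative choice, for which sigma_y should fall roughly as 1/sqrt(tau).

```python
import numpy as np

def allan_deviation(y, m):
    """Overlapping Allan deviation of frequency data y at averaging factor m."""
    cs = np.concatenate(([0.0], np.cumsum(y)))
    ybar = (cs[m:] - cs[:-m]) / m          # overlapping m-sample frequency averages
    d = ybar[m:] - ybar[:-m]               # differences of averages one span apart
    return np.sqrt(0.5 * np.mean(d**2))

rng = np.random.default_rng(8)
y = rng.normal(0.0, 1e-11, 100_000)        # white frequency noise, sigma = 1e-11
for m in (1, 10, 100, 1000):
    print(f"tau = {m:5d} * tau0   sigma_y = {allan_deviation(y, m):.2e}")
```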

  18. Single-Frequency GPS Relative Navigation in a High Ionosphere Orbital Environment

    NASA Technical Reports Server (NTRS)

    Conrad, Patrick R.; Naasz, Bo J.

    2007-01-01

    The Global Positioning System (GPS) provides a convenient source for space vehicle relative navigation measurements, especially for low Earth orbit formation flying and autonomous rendezvous mission concepts. For single-frequency GPS receivers, ionospheric path delay can be a significant error source if not properly mitigated. In particular, ionospheric effects are known to cause significant radial position error bias and add dramatically to relative state estimation error if the onboard navigation software does not force the use of measurements from common or shared GPS space vehicles. Results from GPS navigation simulations are presented for a pair of space vehicles flying in formation and using GPS pseudorange measurements to perform absolute and relative orbit determination. With careful measurement selection techniques, relative state estimation accuracy of less than 20 cm with standard GPS pseudorange processing and less than 10 cm with single-differenced pseudorange processing is shown.

  19. Feed-forward frequency offset estimation for 32-QAM optical coherent detection.

    PubMed

    Xiao, Fei; Lu, Jianing; Fu, Songnian; Xie, Chenhui; Tang, Ming; Tian, Jinwen; Liu, Deming

    2017-04-17

    Due to the non-rectangular distribution of the constellation points, traditional fast Fourier transform based frequency offset estimation (FFT-FOE) is no longer suitable for 32-QAM signals. Here, we report a modified FFT-FOE technique that selects and digitally amplifies the inner QPSK ring of 32-QAM after the adaptive equalization, which is defined as QPSK-selection assisted FFT-FOE. Simulation results show that no FOE error occurs with a FFT size of only 512 symbols when the signal-to-noise ratio (SNR) is above 17.5 dB using our proposed FOE technique, whereas the error probability of the traditional FFT-FOE scheme for 32-QAM is always intolerable. Finally, our proposed FOE scheme functions well for a 10-Gbaud dual-polarization (DP) 32-QAM signal, reaching the 20% forward error correction (FEC) threshold of BER = 2 × 10^-2 under the scenario of back-to-back (B2B) transmission.
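
    A sketch of the QPSK-selection idea: gate out every symbol except those near the inner QPSK ring (keeping their time positions), raise the survivors to the fourth power, and read the offset from the FFT peak. The ring tolerance, constellation generation, and noise level are assumptions, and the estimate is quantized by the 512-point FFT bin width.

```python
import numpy as np

def make_32qam(n, rng):
    """Cross 32-QAM: 6x6 grid of odd levels minus the four corner points."""
    levels = (-5, -3, -1, 1, 3, 5)
    pts = np.array([complex(i, q) for i in levels for q in levels
                    if not (abs(i) == 5 and abs(q) == 5)])
    return rng.choice(pts, n)

def qpsk_assisted_fft_foe(rx, baud, n_fft=512):
    """Keep inner-ring symbols in place, zero the rest, 4th power, FFT peak."""
    mask = np.abs(np.abs(rx) - np.sqrt(2.0)) < 0.5    # near the inner QPSK ring
    z = np.zeros(rx.size, dtype=complex)
    z[mask] = (rx[mask] / np.abs(rx[mask]))**4        # QPSK phase collapses to a tone
    freqs = np.fft.fftfreq(n_fft, d=1.0 / baud)
    return freqs[int(np.argmax(np.abs(np.fft.fft(z[:n_fft]))))] / 4.0

rng = np.random.default_rng(9)
baud, n, f_off = 10e9, 4096, 120e6
t = np.arange(n) / baud
rx = make_32qam(n, rng) * np.exp(2j * np.pi * f_off * t)
rx += rng.normal(0.0, 0.2, n) + 1j * rng.normal(0.0, 0.2, n)
print(f"estimated offset: {qpsk_assisted_fft_foe(rx, baud) / 1e6:.1f} MHz (true 120.0 MHz)")
```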

  20. Peak-flow frequency relations and evaluation of the peak-flow gaging network in Nebraska

    USGS Publications Warehouse

    Soenksen, Philip J.; Miller, Lisa D.; Sharpe, Jennifer B.; Watton, Jason R.

    1999-01-01

    Estimates of peak-flow magnitude and frequency are required for the efficient design of structures that convey flood flows or occupy floodways, such as bridges, culverts, and roads. The U.S. Geological Survey, in cooperation with the Nebraska Department of Roads, conducted a study to update peak-flow frequency analyses for selected streamflow-gaging stations, develop a new set of peak-flow frequency relations for ungaged streams, and evaluate the peak-flow gaging-station network for Nebraska. Data from stations located in or within about 50 miles of Nebraska were analyzed using guidelines of the Interagency Advisory Committee on Water Data in Bulletin 17B. New generalized skew relations were developed for use in frequency analyses of unregulated streams. Thirty-three drainage-basin characteristics related to morphology, soils, and precipitation were quantified using a geographic information system, related computer programs, and digital spatial data. For unregulated streams, eight sets of regional regression equations relating drainage-basin to peak-flow characteristics were developed for seven regions of the state using a generalized least squares procedure. Two sets of regional peak-flow frequency equations were developed for basins with average soil permeability greater than 4 inches per hour, and six sets of equations were developed for specific geographic areas, usually based on drainage-basin boundaries. Standard errors of estimate for the 100-year frequency equations (1-percent probability) ranged from 12.1 to 63.8 percent. For regulated reaches of nine streams, graphs of peak flow for standard frequencies and distance upstream of the mouth were estimated. The regional networks of streamflow-gaging stations on unregulated streams were analyzed to evaluate how additional data might affect the average sampling errors of the newly developed peak-flow equations for the 100-year frequency occurrence. Results indicated that data from new stations, rather than more data from existing stations, probably would produce the greatest reduction in average sampling errors of the equations.

  1. Neutron-Star Radius from a Population of Binary Neutron Star Mergers.

    PubMed

    Bose, Sukanta; Chakravarti, Kabir; Rezzolla, Luciano; Sathyaprakash, B S; Takami, Kentaro

    2018-01-19

    We show how gravitational-wave observations with advanced detectors of tens to several tens of neutron-star binaries can measure the neutron-star radius with an accuracy of several to a few percent, for mass and spatial distributions that are realistic, and with none of the sources located within 100 Mpc. We achieve such an accuracy by combining measurements of the total mass from the inspiral phase with those of the compactness from the postmerger oscillation frequencies. For estimating the measurement errors of these frequencies, we utilize analytical fits to postmerger numerical relativity waveforms in the time domain, obtained here for the first time, for four nuclear-physics equations of state and a couple of values for the mass. We further exploit quasiuniversal relations to derive errors in compactness from those frequencies. Measuring the average radius to well within 10% is possible for a sample of 100 binaries distributed uniformly in volume between 100 and 300 Mpc, so long as the equation of state is not too soft or the binaries are not too heavy. We also give error estimates for the Einstein Telescope.

  2. The Use of Radar-Based Products for Deriving Extreme Rainfall Frequencies Using Regional Frequency Analysis with Application in South Louisiana

    NASA Astrophysics Data System (ADS)

    Eldardiry, H. A.; Habib, E. H.

    2014-12-01

    Radar-based technologies have made spatially and temporally distributed quantitative precipitation estimates (QPE) available in an operational environment, in contrast to rain gauges. The floods identified through flash-flood monitoring and prediction systems are subject to at least three sources of uncertainty: (a) rainfall estimation errors, (b) streamflow prediction errors due to model structural issues, and (c) errors in defining a flood event. The current study focuses on the first source of uncertainty and its effect on deriving important climatological characteristics of extreme rainfall statistics. Examples of such characteristics are rainfall amounts with certain Average Recurrence Intervals (ARI) or Annual Exceedance Probabilities (AEP), which are highly valuable for hydrologic and civil engineering design purposes. Gauge-based precipitation frequency estimates (PFE) have matured and been widely used over the last several decades. More recently, there has been growing interest in the research community in exploring the use of radar-based rainfall products for developing PFE and understanding the associated uncertainties. This study uses 11 years of radar-based multi-sensor precipitation estimates (MPE) to derive PFEs corresponding to various return periods over a spatial domain that covers the state of Louisiana in the southern USA. The PFE estimation approach used in this study is based on fitting a generalized extreme value (GEV) distribution to extreme rainfall data in the form of annual maximum series (AMS). Among the estimation problems that may arise from fitting GEV distributions at each radar pixel are large variance and seriously biased quantile estimators. Hence, a regional frequency analysis (RFA) approach is applied. The RFA involves the use of data from the pixels surrounding each pixel within a defined homogeneous region. In this study, the region-of-influence approach along with the index-flood technique is used in the RFA. A bootstrap procedure is carried out to account for the uncertainty in the distribution parameters and to construct 90% confidence intervals (i.e., 5% and 95% confidence limits) on AMS-based precipitation frequency curves.
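
    A minimal sketch of the at-site step of this approach: fitting a GEV distribution to a hypothetical annual maximum series with scipy and reading off quantiles for chosen return periods. The regional pooling, region-of-influence weighting, and bootstrap intervals described above are omitted, and the data values are invented for illustration:

        import numpy as np
        from scipy import stats

        # Hypothetical annual maximum series (AMS) of daily rainfall at one pixel, in mm.
        ams = np.array([78., 95., 62., 110., 88., 134., 71., 99., 105., 83.,
                        120., 91., 76., 142., 101.])

        # At-site fit of a generalized extreme value (GEV) distribution to the AMS.
        shape, loc, scale = stats.genextreme.fit(ams)

        # A PFE for a given average recurrence interval (ARI) is the quantile whose
        # annual exceedance probability (AEP) equals 1/ARI.
        for ari in (2, 10, 100):
            aep = 1.0 / ari
            pfe = stats.genextreme.ppf(1.0 - aep, shape, loc=loc, scale=scale)
            print(f"{ari:>3}-yr ARI (AEP {aep:.2f}): {pfe:.1f} mm")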

  3. Flood-frequency characteristics of Wisconsin streams

    USGS Publications Warehouse

    Walker, John F.; Peppler, Marie C.; Danz, Mari E.; Hubbard, Laura E.

    2017-05-22

    Flood-frequency characteristics for 360 gaged sites on unregulated rural streams in Wisconsin are presented for percent annual exceedance probabilities ranging from 0.2 to 50 using a statewide skewness map developed for this report. Equations of the relations between flood-frequency and drainage-basin characteristics were developed by multiple-regression analyses. Flood-frequency characteristics for ungaged sites on unregulated, rural streams can be estimated by use of the equations presented in this report. The State was divided into eight areas of similar physiographic characteristics. The most significant basin characteristics are drainage area, soil saturated hydraulic conductivity, main-channel slope, and several land-use variables. The standard error of prediction for the equation for the 1-percent annual exceedance probability flood ranges from 56 to 70 percent for Wisconsin streams; these values are larger than results presented in previous reports. The increase in the standard error of prediction is likely due to increased variability of the annual-peak discharges, resulting in increased variability in the magnitude of flood peaks at higher frequencies. For each of the unregulated rural streamflow-gaging stations, a weighted estimate based on the at-site log-Pearson Type III analysis and the multiple-regression results was determined. The weighted estimate generally has a lower uncertainty than either the log-Pearson Type III or multiple-regression estimates. For regulated streams, a graphical method for estimating flood-frequency characteristics was developed from the relations of discharge and drainage area for selected annual exceedance probabilities. Graphs for the major regulated streams in Wisconsin are presented in the report.
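
    A weighted estimate of this kind is commonly formed as an inverse-variance weighted average of the at-site and regression estimates in log space; the report's exact weighting is not given in this abstract, so the function and numbers below are an illustrative assumption:

        import numpy as np

        def weighted_flood_estimate(q_site, var_site, q_reg, var_reg):
            """Combine an at-site flood quantile with a regional-regression
            estimate by inverse-variance weighting of their base-10 logs."""
            w_site, w_reg = 1.0 / var_site, 1.0 / var_reg
            log_q = (w_site * np.log10(q_site) + w_reg * np.log10(q_reg)) / (w_site + w_reg)
            var_q = 1.0 / (w_site + w_reg)  # variance of the weighted log estimate
            return 10.0 ** log_q, var_q

        # Hypothetical 1-percent AEP estimates (cfs) and their log-space variances.
        q_w, v_w = weighted_flood_estimate(q_site=12500., var_site=0.020,
                                           q_reg=9800., var_reg=0.035)
        print(f"weighted estimate: {q_w:.0f} cfs (log-variance {v_w:.4f})")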

  4. Probabilities of Occurrence of Baro-Fuze System Errors More Than 1000 Feet Too Low Due to Ignoring the Meteorological Forecast

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Charles, B.N.

    1955-05-12

    Charts of the geographical distribution of the annual and seasonal D-values and their standard deviations at altitudes of 4500, 6000, and 7000 feet over Eurasia are derived and used to estimate the frequency of baro system errors.

  5. Maximum-Likelihood Estimation for Frequency-Modulated Continuous-Wave Laser Ranging Using Photon-Counting Detectors

    DTIC Science & Technology

    2013-01-01

    are calculated from coherently detected fields, e.g., coherent Doppler lidar. Our CRB results reveal that the best-case mean-square error scales as 1…

  6. Viscoelastic properties of soft gels: comparison of magnetic resonance elastography and dynamic shear testing in the shear wave regime

    NASA Astrophysics Data System (ADS)

    Okamoto, R. J.; Clayton, E. H.; Bayly, P. V.

    2011-10-01

    Magnetic resonance elastography (MRE) is used to quantify the viscoelastic shear modulus, G*, of human and animal tissues. Previously, values of G* determined by MRE have been compared to values from mechanical tests performed at lower frequencies. In this study, a novel dynamic shear test (DST) was used to measure G* of a tissue-mimicking material at higher frequencies for direct comparison to MRE. A closed-form solution, including inertial effects, was used to extract G* values from DST data obtained between 20 and 200 Hz. MRE was performed using cylindrical 'phantoms' of the same material in an overlapping frequency range of 100-400 Hz. Axial vibrations of a central rod caused radially propagating shear waves in the phantom. Displacement fields were fit to a viscoelastic form of Navier's equation using a total least-squares approach to obtain local estimates of G*. DST estimates of the storage G' (Re[G*]) and loss modulus G'' (Im[G*]) for the tissue-mimicking material increased with frequency from 0.86 to 0.97 kPa (20-200 Hz, n = 16), while MRE estimates of G' increased from 1.06 to 1.15 kPa (100-400 Hz, n = 6). The loss factor (Im[G*]/Re[G*]) also increased with frequency for both test methods: 0.06-0.14 (20-200 Hz, DST) and 0.11-0.23 (100-400 Hz, MRE). Close agreement between MRE and DST results at overlapping frequencies indicates that G* can be locally estimated with MRE over a wide frequency range. Low signal-to-noise ratio, long shear wavelengths and boundary effects were found to increase residual fitting error, reinforcing the use of an error metric to assess confidence in local parameter estimates obtained by MRE.

  7. Viscoelastic properties of soft gels: comparison of magnetic resonance elastography and dynamic shear testing in the shear wave regime.

    PubMed

    Okamoto, R J; Clayton, E H; Bayly, P V

    2011-10-07

    Magnetic resonance elastography (MRE) is used to quantify the viscoelastic shear modulus, G*, of human and animal tissues. Previously, values of G* determined by MRE have been compared to values from mechanical tests performed at lower frequencies. In this study, a novel dynamic shear test (DST) was used to measure G* of a tissue-mimicking material at higher frequencies for direct comparison to MRE. A closed-form solution, including inertial effects, was used to extract G* values from DST data obtained between 20 and 200 Hz. MRE was performed using cylindrical 'phantoms' of the same material in an overlapping frequency range of 100-400 Hz. Axial vibrations of a central rod caused radially propagating shear waves in the phantom. Displacement fields were fit to a viscoelastic form of Navier's equation using a total least-squares approach to obtain local estimates of G*. DST estimates of the storage G' (Re[G*]) and loss modulus G″ (Im[G*]) for the tissue-mimicking material increased with frequency from 0.86 to 0.97 kPa (20-200 Hz, n = 16), while MRE estimates of G' increased from 1.06 to 1.15 kPa (100-400 Hz, n = 6). The loss factor (Im[G*]/Re[G*]) also increased with frequency for both test methods: 0.06-0.14 (20-200 Hz, DST) and 0.11-0.23 (100-400 Hz, MRE). Close agreement between MRE and DST results at overlapping frequencies indicates that G* can be locally estimated with MRE over a wide frequency range. Low signal-to-noise ratio, long shear wavelengths and boundary effects were found to increase residual fitting error, reinforcing the use of an error metric to assess confidence in local parameter estimates obtained by MRE.

  8. Adaptive Kalman filter based on variance component estimation for the prediction of ionospheric delay in aiding the cycle slip repair of GNSS triple-frequency signals

    NASA Astrophysics Data System (ADS)

    Chang, Guobin; Xu, Tianhe; Yao, Yifei; Wang, Qianxin

    2018-01-01

    In order to incorporate the time smoothness of ionospheric delay to aid cycle slip detection, an adaptive Kalman filter is developed based on variance component estimation. The correlations between measurements at neighboring epochs are fully considered in developing a filtering algorithm for colored measurement noise. Within this filtering framework, epoch-differenced ionospheric delays are predicted. Using this prediction, the potential cycle slips are repaired for triple-frequency signals of global navigation satellite systems. Cycle slips are repaired in a stepwise manner, i.e., first for two extra-wide-lane combinations and then for the third frequency. In the estimation for the third frequency, a stochastic model is followed in which the correlations between the ionospheric delay prediction errors and the errors in the epoch-differenced phase measurements are considered. The implementation details of the proposed method are tabulated. A real BeiDou Navigation Satellite System data set is used to check the performance of the proposed method. Most cycle slips, whether trivial or nontrivial, can be estimated as float values with satisfactorily high accuracy, and their integer values can hence be correctly obtained by simple rounding. To be more specific, all manually introduced nontrivial cycle slips are correctly repaired.
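
    The filter proposed here uses variance component estimation and accounts for colored measurement noise; as a much-simplified illustration of the prediction-and-rounding idea only, the sketch below runs a one-state random-walk Kalman filter over epoch-differenced ionospheric delays and rounds a float cycle-slip estimate to an integer (all values and noise settings are invented):

        import numpy as np

        def predict_epoch_differenced_iono(observed, q=1e-4, r=4e-4):
            """One-state Kalman filter treating the epoch-differenced
            ionospheric delay (metres) as a random walk; returns the
            one-step-ahead prediction at each epoch."""
            x, p = 0.0, 1.0
            predictions = []
            for z in observed:
                p += q                     # time update (random-walk process noise)
                predictions.append(x)      # prediction before the new measurement
                k = p / (p + r)            # Kalman gain
                x += k * (z - x)           # measurement update
                p *= (1.0 - k)
            return np.array(predictions)

        rng = np.random.default_rng(3)
        truth = 0.02 * np.sin(np.arange(100) / 15.0)   # slowly varying delay, metres
        preds = predict_epoch_differenced_iono(truth + 0.01 * rng.standard_normal(100))

        # Repair step: a float cycle-slip estimate is rounded to the nearest integer.
        float_slip = 3.04                              # cycles, invented value
        print("repaired slip:", round(float_slip), "cycles")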

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thorpe, J. I.; Livas, J.; Maghami, P.

    Arm locking is a proposed laser frequency stabilization technique for the Laser Interferometer Space Antenna (LISA), a gravitational-wave observatory sensitive in the milliHertz frequency band. Arm locking takes advantage of the geometric stability of the triangular constellation of three spacecraft that compose LISA to provide a frequency reference with a stability in the LISA measurement band that exceeds that available from a standard reference such as an optical cavity or molecular absorption line. We have implemented a time-domain simulation of a Kalman-filter-based arm-locking system that includes the expected limiting noise sources as well as the effects of imperfect a priori knowledge of the constellation geometry on which the design is based. We use the simulation to study aspects of the system performance that are difficult to capture in a steady-state frequency-domain analysis, such as frequency pulling of the master laser due to errors in estimates of heterodyne frequency. We find that our implementation meets requirements on both the noise and dynamic range of the laser frequency with acceptable tolerances and that the design is sufficiently insensitive to errors in the estimated constellation geometry that the required performance can be maintained for the longest continuous measurement intervals expected for the LISA mission.

  10. Phase History Decomposition for efficient Scatterer Classification in SAR Imagery

    DTIC Science & Technology

    2011-09-15

    frequency. Professor Rick Martin provided key advice on frequency parameter estimation and the relationship between likelihood ratio testing and the least…

  11. A fresh look at the predictors of naming accuracy and errors in Alzheimer's disease.

    PubMed

    Cuetos, Fernando; Rodríguez-Ferreiro, Javier; Sage, Karen; Ellis, Andrew W

    2012-09-01

    In recent years, a considerable number of studies have tried to establish which characteristics of objects and their names predict the responses of patients with Alzheimer's disease (AD) in the picture-naming task. The frequency of use of words and their age of acquisition (AoA) have been implicated as two of the most influential variables, with naming being best preserved for objects with high-frequency, early-acquired names. The present study takes a fresh look at the predictors of naming success in Spanish and English AD patients using a range of measures of word frequency and AoA along with visual complexity, imageability, and word length as predictors. Analyses using generalized linear mixed modelling found that naming accuracy was better predicted by AoA ratings taken from older adults than conventional ratings from young adults. Older frequency measures based on written language samples predicted accuracy better than more modern measures based on the frequencies of words in film subtitles. Replacing adult frequency with an estimate of cumulative (lifespan) frequency did not reduce the impact of AoA. Semantic error rates were predicted by both written word frequency and senior AoA while null response errors were only predicted by frequency. Visual complexity, imageability, and word length did not predict naming accuracy or errors. ©2012 The British Psychological Society.

  12. Estimating and comparing microbial diversity in the presence of sequencing errors

    PubMed Central

    Chiu, Chun-Huo

    2016-01-01

    Estimating and comparing microbial diversity are statistically challenging due to limited sampling and possible sequencing errors for low-frequency counts, producing spurious singletons. The inflated singleton count seriously affects statistical analysis and inferences about microbial diversity. Previous statistical approaches to tackle the sequencing errors generally require different parametric assumptions about the sampling model or about the functional form of frequency counts. Different parametric assumptions may lead to drastically different diversity estimates. We focus on nonparametric methods which are universally valid for all parametric assumptions and can be used to compare diversity across communities. We develop here a nonparametric estimator of the true singleton count to replace the spurious singleton count in all methods/approaches. Our estimator of the true singleton count is in terms of the frequency counts of doubletons, tripletons and quadrupletons, provided these three frequency counts are reliable. To quantify microbial alpha diversity for an individual community, we adopt the measure of Hill numbers (effective number of taxa) under a nonparametric framework. Hill numbers, parameterized by an order q that determines the measures' emphasis on rare or common species, include taxa richness (q = 0), Shannon diversity (q = 1, the exponential of Shannon entropy), and Simpson diversity (q = 2, the inverse of Simpson index). A diversity profile which depicts the Hill number as a function of order q conveys all information contained in a taxa abundance distribution. Based on the estimated singleton count and the original non-singleton frequency counts, two statistical approaches (non-asymptotic and asymptotic) are developed to compare microbial diversity for multiple communities. (1) A non-asymptotic approach refers to the comparison of estimated diversities of standardized samples with a common finite sample size or sample completeness. This approach aims to compare diversity estimates for equally-large or equally-complete samples; it is based on the seamless rarefaction and extrapolation sampling curves of Hill numbers, specifically for q = 0, 1 and 2. (2) An asymptotic approach refers to the comparison of the estimated asymptotic diversity profiles. That is, this approach compares the estimated profiles for complete samples or samples whose size tends to be sufficiently large. It is based on statistical estimation of the true Hill number of any order q ≥ 0. In both approaches, replacing the spurious singleton count with our estimated count largely removes the positive biases associated with diversity estimates due to spurious singletons and allows fair comparisons across microbial communities, as illustrated in our simulation results and in the application of our method to sequencing data from viral metagenomes. PMID:26855872
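
    Hill numbers themselves are straightforward to compute from abundance counts; the sketch below implements the empirical estimator for arbitrary order q (the paper's nonparametric singleton correction and rarefaction/extrapolation machinery are not reproduced, and the counts are invented):

        import numpy as np

        def hill_number(counts, q):
            """Empirical Hill number (effective number of taxa) of order q
            from a vector of taxon abundance counts."""
            p = np.asarray(counts, dtype=float)
            p = p[p > 0] / p.sum()
            if np.isclose(q, 1.0):        # q = 1: exponential of Shannon entropy
                return np.exp(-np.sum(p * np.log(p)))
            return np.sum(p ** q) ** (1.0 / (1.0 - q))

        counts = [120, 45, 30, 8, 3, 1, 1, 1]   # hypothetical taxa abundances
        for q in (0, 1, 2):                     # richness, Shannon, Simpson diversity
            print(f"q = {q}: {hill_number(counts, q):.2f}")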

  13. Multi-frequency bioelectrical impedance: a comparison between the Cole-Cole modelling and Hanai equations with the classical impedance index approach.

    PubMed

    Deurenberg, P; Andreoli, A; de Lorenzo, A

    1996-01-01

    Total body water and extracellular water were measured by deuterium oxide and bromide dilution respectively in 23 healthy males and 25 healthy females. In addition, total body impedance was measured at 17 frequencies, ranging from 1 kHz to 1350 kHz. Modelling programs were used to extrapolate impedance values to frequency zero (extracellular resistance) and frequency infinity (total body water resistance). Impedance indexes (height²/Zf) were computed at all 17 frequencies. The estimation errors of extracellular resistance and total body water resistance were 1% and 3%, respectively. Impedance and impedance index at low frequency were correlated with extracellular water, independent of the amount of total body water. Total body water showed the greatest correlation with impedance and impedance index at high frequencies. Extrapolated impedance values did not show a higher correlation compared to measured values. Prediction formulas from the literature applied to fixed frequencies showed the best mean and individual predictions for both extracellular water and total body water. It is concluded that, at least in healthy individuals with normal body water distribution, modelling impedance data has no advantage over impedance values measured at fixed frequencies, probably due to estimation errors in the modelled data.

  14. Q-adjusting technique applied to vertical deflections estimation in a single-axis rotation INS/GPS integrated system

    NASA Astrophysics Data System (ADS)

    Zhu, Jing; Wang, Xingshu; Wang, Jun; Dai, Dongkai; Xiong, Hao

    2016-10-01

    Previous studies have shown that the attitude error in a single-axis rotation INS/GPS integrated system tracks the high-frequency component of the deflections of the vertical (DOV) with a fixed delay and tracking error. This paper analyses the influence of the nominal process noise covariance matrix Q on the tracking error as well as the response delay, and proposes a Q-adjusting technique to obtain an attitude error that tracks the DOV better. Simulation results show that different settings of Q lead to different response delays and tracking errors; there exists an optimal Q that leads to a minimum tracking error and a comparatively short response delay; and for systems of different accuracy, different Q-adjusting strategies should be adopted. In this way, the accuracy of DOV estimation using the attitude error as the observation can be improved. According to the simulation results, the DOV estimation accuracy after applying the Q-adjusting technique is improved by approximately 23% and 33% compared to that of the Earth Model EGM2008 and the direct attitude-difference method, respectively.

  15. Assessment of Modeled Received Sound Pressure Levels and Movements of Satellite-Tagged Odontocetes Exposed to Mid-Frequency Active Sonar at the Pacific Missile Range Facility: February 2011 Through February 2013

    DTIC Science & Technology

    2014-05-30

    respectively; Argos User's Manual). LC1 locations (i.e., with estimated error between 500 and 1,500 m), as well as LC0, LCA, LCB, and LCZ locations (i.e., …) … locations included four LC2 (i.e., with estimated error of … m), 10 LC1s (i.e., with estimated error of <1.5 km), and four LC0s (i.e., with undefined …) … from the 53C are considered in detail below. Argos LCs for the 24 locations associated with 53C sonar levels included four LC2, 11 LC1, 7 LC0, and 2 …

  16. a Climatology of Global Precipitation.

    NASA Astrophysics Data System (ADS)

    Legates, David Russell

    A global climatology of mean monthly precipitation has been developed using traditional land-based gage measurements as well as derived oceanic data. These data have been screened for coding errors and redundant entries have been removed. Oceanic precipitation estimates are most often extrapolated from coastal and island observations because few gage estimates of oceanic precipitation exist. One such procedure, developed by Dorman and Bourke and used here, employs a derived relationship between observed rainfall totals and the "current weather" at coastal stations. The combined data base contains 24,635 independent terrestrial station records and 2223 oceanic grid-point records. Raingage catches are known to underestimate actual precipitation. Errors in the gage catch result from wind-field deformation, wetting losses, and evaporation from the gage and can amount to nearly 8, 2, and 1 percent of the global catch, respectively. A procedure has been developed to correct many of these errors and has been used to adjust the gage estimates of global precipitation. Space-time variations in gage type, air temperature, wind speed, and natural vegetation were incorporated into the correction procedure. Corrected data were then interpolated to the nodes of a 0.5° of latitude by 0.5° of longitude lattice using a spherically-based interpolation algorithm. Interpolation errors are largest in areas of low station density, rugged topography, and heavy precipitation. Interpolated estimates also were compared with a digital filtering technique to assess the aliasing of high-frequency "noise" into the lower frequency signals. Isohyetal maps displaying the mean annual, seasonal, and monthly precipitation are presented. Gage corrections and the standard error of the corrected estimates also are mapped. Results indicate that mean annual global precipitation is 1123 mm with 1251 mm falling over the oceans and 820 mm over land. Spatial distributions of monthly precipitation generally are consistent with existing precipitation climatologies.

  17. Fundamental frequency estimation of singing voice

    NASA Astrophysics Data System (ADS)

    de Cheveigné, Alain; Henrich, Nathalie

    2002-05-01

    A method of fundamental frequency (F0) estimation recently developed for speech [de Cheveigné and Kawahara, J. Acoust. Soc. Am. (to be published)] was applied to singing voice. An electroglottograph signal recorded together with the microphone provided a reference by which estimates could be validated. Using standard parameter settings as for speech, error rates were low despite the wide range of F0s (about 100 to 1600 Hz). Most "errors" were due to irregular vibration of the vocal folds, a sharp formant resonance that reduced the waveform to a single harmonic, or fast F0 changes such as in high-amplitude vibrato. Our database (18 singers from baritone to soprano) included examples of diphonic singing, for which melody is carried by variations of the frequency of a narrow formant rather than F0. By varying a parameter (the ratio of inharmonic to total power), the algorithm could be tuned to follow either frequency. Although the method has not been formally tested on a wide range of instruments, it seems appropriate for musical applications because it is accurate, accepts a wide range of F0s, and can be implemented with low latency for interactive applications. [Work supported by the Cognitique programme of the French Ministry of Research and Technology.]
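
    The referenced estimator is built on a lag-domain difference function; a bare-bones variant is sketched below, finding the lag that minimizes the squared difference d(tau) = sum_t (x[t] - x[t+tau])^2 and converting it to F0. The published method adds cumulative mean normalization, thresholding, and parabolic interpolation, which are omitted here, and the test signal is a synthetic tone:

        import numpy as np

        def estimate_f0(x, fs, fmin=100.0, fmax=1600.0):
            """Return fs/tau* where tau* minimizes the difference function
            d(tau) over the lag range implied by [fmin, fmax]."""
            tau_min, tau_max = int(fs / fmax), int(fs / fmin)
            n = len(x) - tau_max
            d = np.array([np.sum((x[:n] - x[tau:tau + n]) ** 2)
                          for tau in range(tau_min, tau_max + 1)])
            return fs / (np.argmin(d) + tau_min)

        fs = 16000
        t = np.arange(2048) / fs
        x = np.sin(2 * np.pi * 440.0 * t)   # synthetic tone standing in for voice
        print(f"estimated F0: {estimate_f0(x, fs):.1f} Hz (true 440.0 Hz)")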

  18. Adaptive OFDM Radar Waveform Design for Improved Micro-Doppler Estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sen, Satyabrata

    Here we analyze the performance of a wideband orthogonal frequency division multiplexing (OFDM) signal in estimating the micro-Doppler frequency of a rotating target having multiple scattering centers. The use of a frequency-diverse OFDM signal enables us to independently analyze the micro-Doppler characteristics with respect to a set of orthogonal subcarrier frequencies. We characterize the accuracy of micro-Doppler frequency estimation by computing the Cramer-Rao bound (CRB) on the angular-velocity estimate of the target. Additionally, to improve the accuracy of the estimation procedure, we formulate and solve an optimization problem by minimizing the CRB on the angular-velocity estimate with respect to the OFDM spectral coefficients. We present several numerical examples to demonstrate the CRB variations with respect to the signal-to-noise ratio, number of temporal samples, and number of OFDM subcarriers. We also numerically analyze the improvement in estimation accuracy due to the adaptive waveform design. A grid-based maximum likelihood estimation technique is applied to evaluate the corresponding mean-squared error performance.

  19. Noncommuting observables in quantum detection and estimation theory

    NASA Technical Reports Server (NTRS)

    Helstrom, C. W.

    1972-01-01

    Basing decisions and estimates on simultaneous approximate measurements of noncommuting observables in a quantum receiver is shown to be equivalent to measuring commuting projection operators on a larger Hilbert space than that of the receiver itself. The quantum-mechanical Cramer-Rao inequalities derived from right logarithmic derivatives and symmetrized logarithmic derivatives of the density operator are compared, and it is shown that the latter give superior lower bounds on the error variances of individual unbiased estimates of arrival time and carrier frequency of a coherent signal. For a suitably weighted sum of the error variances of simultaneous estimates of these, the former yield the superior lower bound under some conditions.

  20. Correction of motion measurement errors beyond the range resolution of a synthetic aperture radar

    DOEpatents

    Doerry, Armin W [Albuquerque, NM; Heard, Freddie E [Albuquerque, NM; Cordaro, J Thomas [Albuquerque, NM

    2008-06-24

    Motion measurement errors that extend beyond the range resolution of a synthetic aperture radar (SAR) can be corrected by effectively decreasing the range resolution of the SAR in order to permit measurement of the error. Range profiles can be compared across the slow-time dimension of the input data in order to estimate the error. Once the error has been determined, appropriate frequency and phase correction can be applied to the uncompressed input data, after which range and azimuth compression can be performed to produce a desired SAR image.

  1. Triple-frequency radar retrievals of snowfall properties from the OLYMPEX field campaign

    NASA Astrophysics Data System (ADS)

    Leinonen, J. S.; Lebsock, M. D.; Sy, O. O.; Tanelli, S.

    2017-12-01

    Retrieval of snowfall properties with radar is subject to significant errors arising from the uncertainties in the size and structure of snowflakes. Recent modeling and theoretical studies have shown that multi-frequency radars can potentially constrain the microphysical properties and thus reduce the uncertainties in the retrieved snow water content. So far, there have only been limited efforts to leverage the theoretical advances in actual snowfall retrievals. In this study, we have implemented an algorithm that retrieves the snowfall properties from triple-frequency radar data using the radar scattering properties from a combination of snowflake scattering databases, which were derived using numerical scattering methods. Snowflake number concentration, characteristic size and density are derived using a combination of optimal estimation and Kalman smoothing; the snow water content and other bulk properties are then derived from these. The retrieval framework is probabilistic and thus naturally provides error estimates for the retrieved quantities. We tested the retrieval algorithm using data from the APR3 airborne radar flown onboard the NASA DC-8 aircraft during the Olympic Mountain Experiment (OLYMPEX) in late 2015. We demonstrated consistent retrieval of snow properties and smooth transition from single- and dual-frequency retrievals to using all three frequencies simultaneously. The error analysis shows that the retrieval accuracy is improved when additional frequencies are introduced. We also compare the findings to in situ measurements of snow properties as well as measurements by polarimetric ground-based radar.
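
    Optimal estimation of this kind typically iterates a Gauss-Newton update between a prior state and the measurements, with the posterior covariance supplying the error estimates mentioned above. The sketch below shows the generic update with a toy linear forward model standing in for the radar scattering databases; the function names and all numbers are illustrative assumptions, not the study's implementation:

        import numpy as np

        def gauss_newton_oe(y, forward, jacobian, x_a, s_a, s_e, n_iter=10):
            """Gauss-Newton optimal estimation: iterate toward the maximum
            a posteriori state x given measurements y, prior (x_a, s_a), and
            measurement-error covariance s_e; returns the state and the
            posterior covariance, which yields the retrieval error bars."""
            x = x_a.copy()
            s_a_inv, s_e_inv = np.linalg.inv(s_a), np.linalg.inv(s_e)
            for _ in range(n_iter):
                k = jacobian(x)
                hess = s_a_inv + k.T @ s_e_inv @ k
                grad = k.T @ s_e_inv @ (y - forward(x)) - s_a_inv @ (x - x_a)
                x = x + np.linalg.solve(hess, grad)
            return x, np.linalg.inv(hess)

        # Toy linear "triple-frequency radar": three reflectivities, two state terms.
        A = np.array([[1.0, 0.5], [0.8, 1.2], [0.3, 0.9]])
        y = A @ np.array([2.0, 1.0]) + np.array([0.05, -0.02, 0.01])
        x_hat, s_hat = gauss_newton_oe(y, lambda x: A @ x, lambda x: A,
                                       x_a=np.zeros(2), s_a=4.0 * np.eye(2),
                                       s_e=0.01 * np.eye(3))
        print("state:", x_hat, " 1-sigma errors:", np.sqrt(np.diag(s_hat)))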

  2. The effects of sampling frequency on the climate statistics of the European Centre for Medium-Range Weather Forecasts

    NASA Astrophysics Data System (ADS)

    Phillips, Thomas J.; Gates, W. Lawrence; Arpe, Klaus

    1992-12-01

    The effects of sampling frequency on the first- and second-moment statistics of selected European Centre for Medium-Range Weather Forecasts (ECMWF) model variables are investigated in a simulation of "perpetual July" with a diurnal cycle included and with surface and atmospheric fields saved at hourly intervals. The shortest characteristic time scales (as determined by the e-folding time of lagged autocorrelation functions) are those of ground heat fluxes and temperatures, precipitation and runoff, convective processes, cloud properties, and atmospheric vertical motion, while the longest time scales are exhibited by soil temperature and moisture, surface pressure, and atmospheric specific humidity, temperature, and wind. The time scales of surface heat and momentum fluxes and of convective processes are substantially shorter over land than over oceans. An appropriate sampling frequency for each model variable is obtained by comparing the estimates of first- and second-moment statistics determined at intervals ranging from 2 to 24 hours with the "best" estimates obtained from hourly sampling. Relatively accurate estimation of first- and second-moment climate statistics (10% errors in means, 20% errors in variances) can be achieved by sampling a model variable at intervals that usually are longer than the bandwidth of its time series but that often are shorter than its characteristic time scale. For the surface variables, sampling at intervals that are nonintegral divisors of a 24-hour day yields relatively more accurate time-mean statistics because of a reduction in errors associated with aliasing of the diurnal cycle and higher-frequency harmonics. The superior estimates of first-moment statistics are accompanied by inferior estimates of the variance of the daily means due to the presence of systematic biases, but these probably can be avoided by defining a different measure of low-frequency variability. Estimates of the intradiurnal variance of accumulated precipitation and surface runoff also are strongly impacted by the length of the storage interval. In light of these results, several alternative strategies for storage of the ECMWF model variables are recommended.
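
    The characteristic time scales above are defined by the e-folding time of lagged autocorrelation functions; a minimal estimator of that quantity, demonstrated on a synthetic AR(1) series with a known decorrelation scale, might look like this:

        import numpy as np

        def e_folding_time(x, dt=1.0):
            """First lag at which the lagged autocorrelation drops below 1/e,
            used as the characteristic time scale of the series."""
            x = np.asarray(x, dtype=float) - np.mean(x)
            var = np.dot(x, x) / len(x)
            for lag in range(1, len(x)):
                r = np.dot(x[:-lag], x[lag:]) / (len(x) * var)
                if r < 1.0 / np.e:
                    return lag * dt
            return np.nan

        # Synthetic hourly AR(1) series with decorrelation scale -1/ln(0.9) ~ 9.5 h.
        rng = np.random.default_rng(0)
        x = np.zeros(5000)
        for i in range(1, len(x)):
            x[i] = 0.9 * x[i - 1] + rng.standard_normal()
        print(f"e-folding time: {e_folding_time(x, dt=1.0):.0f} hours")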

  3. Magnitude and frequency of floods in Washington

    USGS Publications Warehouse

    Cummans, J.E.; Collings, Michael R.; Nasser, Edmund George

    1975-01-01

    Relations are provided to estimate the magnitude and frequency of floods on Washington streams. Annual peak-flow data from stream-gaging stations on unregulated streams having 10 or more years of record were used to determine a log-Pearson Type III frequency curve for each station. Flood magnitudes having recurrence intervals of 2, 5, 10, 25, 50, and 100 years were then related to physical and climatic indices of the drainage basins by multiple-regression analysis using the Biomedical Computer Program BMDO2R. These regression relations are useful for estimating flood magnitudes of the specified recurrence intervals at ungaged or short-record sites. Separate sets of regression equations were defined for western and eastern parts of the State, and the State was further subdivided into 12 regions in which the annual floods exhibit similar characteristics. Peak flows are related most significantly in western Washington to drainage-area size and mean annual precipitation. In eastern Washington, they are related most significantly to drainage-area size, mean annual precipitation, and percentage of forest cover. Standard errors of estimate of the estimating relations range from 25 to 129 percent, and the smallest errors are generally associated with the more humid regions.

  4. Statistical models for estimating daily streamflow in Michigan

    USGS Publications Warehouse

    Holtschlag, D.J.; Salehi, Habib

    1992-01-01

    Statistical models for estimating daily streamflow were analyzed for 25 pairs of streamflow-gaging stations in Michigan. Stations were paired by randomly choosing a station operated in 1989 at which 10 or more years of continuous flow data had been collected and at which flow is virtually unregulated; a nearby station was chosen where flow characteristics are similar. Streamflow data from the 25 randomly selected stations were used as the response variables; streamflow data at the nearby stations were used to generate a set of explanatory variables. Ordinary least-squares regression (OLSR) equations, autoregressive integrated moving-average (ARIMA) equations, and transfer function-noise (TFN) equations were developed to estimate the log transform of flow for the 25 randomly selected stations. The precision of each type of equation was evaluated on the basis of the standard deviation of the estimation errors. OLSR equations produce one set of estimation errors; ARIMA and TFN models each produce a set of estimation errors for each forecast lead l. The lead-l forecast is the estimate of flow l days ahead of the most recent streamflow used as a response variable in the estimation. In this analysis, the standard deviations of lead-l ARIMA and TFN forecast errors were generally lower than the standard deviation of OLSR errors for l < 2 days and l < 9 days, respectively. Composite estimates were computed as a weighted average of forecasts based on TFN equations and backcasts (forecasts of the reverse-ordered series) based on ARIMA equations. The standard deviation of composite errors varied throughout the length of the estimation interval and generally was at a maximum near the center of the interval. For comparison with OLSR errors, the mean standard deviation of composite errors was computed for intervals of length 1 to 40 days. The mean standard deviation of length-l composite errors was generally less than the standard deviation of the OLSR errors for l < 32 days. In addition, the composite estimates ensure a gradual transition between periods of estimated and measured flows. Model performance among stations of differing model-error magnitudes was compared by computing ratios of the mean standard deviation of the length-l composite errors to the standard deviation of OLSR errors. The mean error ratio for the set of 25 selected stations was less than 1 for intervals l < 32 days. Considering the frequency characteristics of the lengths of intervals of estimated record in Michigan, the effective mean error ratio for intervals < 30 days was 0.52. Thus, for intervals of estimation of 1 month or less, the error of the composite estimate is substantially lower than the error of the OLSR estimate.
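
    The composite estimate is a weighted average of forecasts and backcasts across the gap being estimated; the abstract does not give the weighting scheme, so the linear taper below is an illustrative assumption that favors whichever endpoint is nearer:

        import numpy as np

        def composite_estimate(forecast, backcast):
            """Weighted average of forward forecasts and reverse-series
            backcasts across a gap of estimated record; the linear taper
            favors whichever end of the interval is nearer, so errors
            peak near the center of the interval."""
            l = len(forecast)
            w = (l - np.arange(l)) / (l + 1.0)
            return w * forecast + (1.0 - w) * backcast

        forecast = np.array([10.2, 10.5, 10.9, 11.1, 11.0])  # TFN-style forecasts
        backcast = np.array([9.8, 10.1, 10.6, 11.0, 11.2])   # ARIMA backcasts, aligned
        print(composite_estimate(forecast, backcast))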

  5. Smoothed Spectra, Ogives, and Error Estimates for Atmospheric Turbulence Data

    NASA Astrophysics Data System (ADS)

    Dias, Nelson Luís

    2018-01-01

    A systematic evaluation is conducted of the smoothed spectrum, which is a spectral estimate obtained by averaging over a window of contiguous frequencies. The technique is extended to the ogive, as well as to the cross-spectrum. It is shown that, combined with existing variance estimates for the periodogram, the variance—and therefore the random error—associated with these estimates can be calculated in a straightforward way. The smoothed spectra and ogives are biased estimates; with simple power-law analytical models, correction procedures are devised, as well as a global constraint that enforces Parseval's identity. Several new results are thus obtained: (1) The analytical variance estimates compare well with the sample variance calculated for the Bartlett spectrum and the variance of the inertial subrange of the cospectrum is shown to be relatively much larger than that of the spectrum. (2) Ogives and spectra estimates with reduced bias are calculated. (3) The bias of the smoothed spectrum and ogive is shown to be negligible at the higher frequencies. (4) The ogives and spectra thus calculated have better frequency resolution than the Bartlett spectrum, with (5) gradually increasing variance and relative error towards the low frequencies. (6) Power-law identification and extraction of the rate of dissipation of turbulence kinetic energy are possible directly from the ogive. (7) The smoothed cross-spectrum is a valid inner product and therefore an acceptable candidate for coherence and spectral correlation coefficient estimation by means of the Cauchy-Schwarz inequality. The quadrature, phase function, coherence function and spectral correlation function obtained from the smoothed spectral estimates compare well with the classical ones derived from the Bartlett spectrum.
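
    The core operation, averaging the periodogram over a window of contiguous frequencies, is easy to sketch; the bias corrections, ogives, and cross-spectral extensions described above are omitted, and the sampling rate and window width below are arbitrary:

        import numpy as np

        def smoothed_spectrum(x, fs, m=8):
            """Periodogram averaged over windows of m contiguous frequencies;
            the random error of each estimate falls roughly as 1/sqrt(m)."""
            x = np.asarray(x, dtype=float) - np.mean(x)
            n = len(x)
            pxx = np.abs(np.fft.rfft(x)) ** 2 / (fs * n)  # one-sided periodogram
            freqs = np.fft.rfftfreq(n, d=1.0 / fs)
            k = (len(pxx) // m) * m                       # trim to a multiple of m
            return (freqs[:k].reshape(-1, m).mean(axis=1),
                    pxx[:k].reshape(-1, m).mean(axis=1))

        rng = np.random.default_rng(1)
        f, p = smoothed_spectrum(rng.standard_normal(2 ** 14), fs=20.0)
        print(len(f), "smoothed estimates from", 2 ** 13 + 1, "raw frequencies")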

  6. Aeroelastic Modeling of X-56A Stiff-Wing Configuration Flight Test Data

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.; Boucher, Matthew J.

    2017-01-01

    Aeroelastic stability and control derivatives for the X-56A Multi-Utility Technology Testbed (MUTT), in the stiff-wing configuration, were estimated from flight test data using the output-error method. Practical aspects of the analysis are discussed. The orthogonal phase-optimized multisine inputs provided excellent data information for aeroelastic modeling. Consistent parameter estimates were determined using output error in both the frequency and time domains. The frequency domain analysis converged faster and was less sensitive to starting values for the model parameters, which was useful for determining the aeroelastic model structure and obtaining starting values for the time domain analysis. Including a modal description of the structure from a finite element model reduced the complexity of the estimation problem and improved the modeling results. Effects of reducing the model order on the short period stability and control derivatives were investigated.

  7. cBathy: A robust algorithm for estimating nearshore bathymetry

    USGS Publications Warehouse

    Plant, Nathaniel G.; Holman, Rob; Holland, K. Todd

    2013-01-01

    A three-part algorithm is described and tested to provide robust bathymetry maps based solely on long time series observations of surface wave motions. Phase 1 consists of frequency-dependent characterization of the wave field in which dominant frequencies are estimated by Fourier transform while corresponding wave numbers are derived from spatial gradients in cross-spectral phase over analysis tiles that can be small, allowing high spatial resolution. Coherent spatial structures at each frequency are extracted by frequency-dependent empirical orthogonal function (EOF) analysis. In phase 2, depths are found that best fit weighted sets of frequency-wave number pairs. These are subsequently smoothed in time in phase 3 using a Kalman filter that fills gaps in coverage and objectively averages new estimates of variable quality with prior estimates. Objective confidence intervals are returned. Tests at Duck, NC, using 16 surveys collected over 2 years showed a bias and root-mean-square (RMS) error of 0.19 and 0.51 m, respectively, but errors were largest near the offshore limits of analysis (roughly 500 m from the camera) and near the steep shoreline where analysis tiles mix information from waves, swash, and static dry sand. Performance was excellent for small waves but degraded somewhat with increasing wave height. Sand bars and their small-scale alongshore variability were well resolved. A single ground truth survey from a dissipative, low-sloping beach (Agate Beach, OR) showed similar errors over a region that extended several kilometers from the camera and reached depths of 14 m. Vector wave number estimates can also be incorporated into data assimilation models of nearshore dynamics.
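
    In phase 2, each depth estimate amounts to inverting the linear surface-gravity-wave dispersion relation omega^2 = g*k*tanh(k*h) for the depth h given a frequency-wavenumber pair; a single-pair Newton inversion is sketched below (the weighting over many pairs and the Kalman smoothing are omitted, and the sample wave values are invented):

        import numpy as np

        G = 9.81  # gravitational acceleration, m/s^2

        def depth_from_dispersion(omega, k, h0=5.0, tol=1e-6):
            """Newton iteration for depth h in omega**2 = g*k*tanh(k*h),
            given one frequency-wavenumber pair."""
            h = h0
            for _ in range(50):
                f = G * k * np.tanh(k * h) - omega ** 2
                df = G * k ** 2 / np.cosh(k * h) ** 2   # derivative with respect to h
                step = f / df
                h -= step
                if abs(step) < tol:
                    break
            return h

        omega = 2 * np.pi / 10.0   # a 10 s wave ...
        k = 2 * np.pi / 75.0       # ... with a 75 m wavelength
        print(f"estimated depth: {depth_from_dispersion(omega, k):.2f} m")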

  8. Analysis of the Magnitude and Frequency of Peak Discharges for the Navajo Nation in Arizona, Utah, Colorado, and New Mexico

    USGS Publications Warehouse

    Waltemeyer, Scott D.

    2006-01-01

    Estimates of the magnitude and frequency of peak discharges are necessary for reliable flood-hazard mapping in the Navajo Nation in Arizona, Utah, Colorado, and New Mexico. The Bureau of Indian Affairs, U.S. Army Corps of Engineers, and Navajo Nation requested that the U.S. Geological Survey update estimates of peak-discharge magnitude for gaging stations in the region and update regional equations for estimation of peak discharge and frequency at ungaged sites. Equations were developed for estimating the magnitude of peak discharges for recurrence intervals of 2, 5, 10, 25, 50, 100, and 500 years at ungaged sites using data collected through 1999 at 146 gaging stations, adding 13 years of peak-discharge data to a 1997 investigation that used gaging-station data through 1986. The equations for estimation of peak discharges at ungaged sites were developed for flood regions 8, 11, high-elevation, and 6, which are delineated on the basis of the hydrologic codes from the 1997 investigation. Peak discharges for selected recurrence intervals were determined at gaging stations by fitting observed data to a log-Pearson Type III distribution with adjustments for a low-discharge threshold and a zero skew coefficient. A low-discharge threshold was applied to the frequency analysis of 82 of the 146 gaging stations; this application provides an improved fit of the log-Pearson Type III frequency distribution. Use of the low-discharge threshold generally eliminated peak discharges having recurrence intervals of less than 1.4 years from the probability-density function. Within each region, logarithms of the peak discharges for selected recurrence intervals were related to logarithms of basin and climatic characteristics using stepwise ordinary least-squares regression techniques for exploratory data analysis. Generalized least-squares regression techniques, an improved regression procedure that accounts for time and spatial sampling errors, then were applied to the same data used in the ordinary least-squares regression analyses. The average standard error of prediction for the 100-year peak discharge in region 8 was 53 percent. Across regions, the average standard error of prediction, which includes average sampling error and average standard error of regression, ranged from 45 to 83 percent for the 100-year flood. The estimated standard error of prediction for a hybrid method for region 11 was large in the 1997 investigation, and no distinction of floods produced from a high-elevation region was presented in that investigation. Overall, the equations based on generalized least-squares regression techniques are considered more reliable than those in the 1997 report because of the increased length of record and improved GIS methods. Flood-frequency relations can be transferred to ungaged sites on the same stream either by direct application of the regional regression equation or, for an ungaged site on a stream that has a gaging station upstream or downstream, by using the drainage-area ratio and the drainage-area exponent from the regional regression equation of the respective region.
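
    The drainage-area-ratio transfer in the last sentence reduces to scaling the gaged-site quantile by the area ratio raised to the regional exponent; a one-line illustration with invented numbers:

        def transfer_peak_discharge(q_gage, area_gage, area_ungaged, exponent):
            """Transfer a flood quantile from a gaged site to an ungaged site
            on the same stream via the drainage-area ratio raised to the
            regional regression exponent."""
            return q_gage * (area_ungaged / area_gage) ** exponent

        # Invented values: 100-yr peak of 8,400 cfs at a 250 mi^2 gage,
        # transferred to an ungaged site draining 180 mi^2, exponent 0.55.
        print(f"{transfer_peak_discharge(8400., 250., 180., 0.55):.0f} cfs")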

  9. Automatic Locking of Laser Frequency to an Absorption Peak

    NASA Technical Reports Server (NTRS)

    Koch, Grady J.

    2006-01-01

    An electronic system adjusts the frequency of a tunable laser, eventually locking the frequency to a peak in the optical absorption spectrum of a gas (or of a Fabry-Perot cavity that has an absorption peak like that of a gas). This system was developed to enable precise locking of the frequency of a laser used in differential absorption LIDAR measurements of trace atmospheric gases. This system also has great commercial potential as a prototype of means for precise control of frequencies of lasers in future dense wavelength-division-multiplexing optical communications systems. The operation of this system is completely automatic: Unlike in the operation of some prior laser-frequency-locking systems, there is ordinarily no need for a human operator to adjust the frequency manually to an initial value close enough to the peak to enable automatic locking to take over. Instead, this system also automatically performs the initial adjustment. The system (see Figure 1) is based on a concept of (1) initially modulating the laser frequency to sweep it through a spectral range that includes the desired absorption peak, (2) determining the derivative of the absorption peak with respect to the laser frequency for use as an error signal, (3) identifying the desired frequency [at the very top (which is also the middle) of the peak] as the frequency where the derivative goes to zero, and (4) thereafter keeping the frequency within a locking range and adjusting the frequency as needed to keep the derivative (the error signal) as close as possible to zero. More specifically, the system utilizes the fact that in addition to a zero crossing at the top of the absorption peak, the error signal also closely approximates a straight line in the vicinity of the zero crossing (see Figure 2). This vicinity is the locking range because the linearity of the error signal in this range makes it useful as a source of feedback for a proportional + integral + derivative control scheme that constantly adjusts the frequency in an effort to drive the error to zero. When the laser frequency deviates from the midpeak value but remains within the locking range, the magnitude and sign of the error signal indicate the amount of detuning and the control circuitry adjusts the frequency by what it estimates to be the negative of this amount in an effort to bring the error to zero.
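
    A toy simulation of the locking loop: the error signal is the finite-difference derivative of a Gaussian absorption line, and a PID update drives that error toward its zero crossing at the peak. The line shape, gains, and step count are invented for illustration and are not taken from the system described above:

        import numpy as np

        def lock_to_peak(freq0, peak, kp=0.3, ki=0.02, kd=0.05, steps=300):
            """Drive a frequency toward an absorption peak: the error signal
            is the derivative of a Gaussian line, which crosses zero at the
            peak, and a PID update pushes the error toward zero."""
            absorption = lambda f: np.exp(-0.5 * (f - peak) ** 2)
            df = 1e-3                  # finite-difference step (arbitrary units)
            freq, integral, prev_err = freq0, 0.0, 0.0
            for _ in range(steps):
                err = (absorption(freq + df) - absorption(freq - df)) / (2 * df)
                integral += err
                freq += kp * err + ki * integral + kd * (err - prev_err)
                prev_err = err
            return freq

        # Starting inside the locking range, the loop settles near the peak.
        print(f"locked near {lock_to_peak(freq0=0.6, peak=1.0):.3f} (peak at 1.0)")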

  10. Participant characteristics associated with errors in self-reported energy intake from the Women's Health Initiative food-frequency questionnaire.

    PubMed

    Horner, Neilann K; Patterson, Ruth E; Neuhouser, Marian L; Lampe, Johanna W; Beresford, Shirley A; Prentice, Ross L

    2002-10-01

    Errors in self-reported dietary intake threaten inferences from studies relying on instruments such as food-frequency questionnaires (FFQs), food records, and food recalls. The objective was to quantify the magnitude, direction, and predictors of errors associated with energy intakes estimated from the Women's Health Initiative FFQ. Postmenopausal women (n = 102) provided data on sociodemographic and psychosocial characteristics that relate to errors in self-reported energy intake. Energy intake was objectively estimated as total energy expenditure, physical activity expenditure, and the thermic effect of food (10% addition to other components of total energy expenditure). Participants underreported energy intake on the FFQ by 20.8%; this error trended upward with younger age (P = 0.07) and social desirability (P = 0.09) but was not associated with body mass index (P = 0.95). The correlation coefficient between reported energy intake and total energy expenditure was 0.24; correlations were higher among women with less education, higher body mass index, and greater fat-free mass, social desirability, and dissatisfaction with perceived body size (all P < 0.10). Energy intake is generally underreported, and both the magnitude of the error and the association of the self-reporting with objectively estimated intake appear to vary by participant characteristics. Studies relying on self-reported intake should include objective measures of energy expenditure in a subset of participants to identify person-specific bias within the study population for the dietary self-reporting tool; these data should be used to calibrate the self-reported data as an integral aspect of diet and disease association studies.

  11. Statistical plant set estimation using Schroeder-phased multisinusoidal input design

    NASA Technical Reports Server (NTRS)

    Bayard, D. S.

    1992-01-01

    A frequency domain method is developed for plant set estimation. The estimation of a plant 'set' rather than a point estimate is required to support many methods of modern robust control design. The approach here is based on using a Schroeder-phased multisinusoid input design which has the special property of placing input energy only at the discrete frequency points used in the computation. A detailed analysis of the statistical properties of the frequency domain estimator is given, leading to exact expressions for the probability distribution of the estimation error, and many important properties. It is shown that, for any nominal parametric plant estimate, one can use these results to construct an overbound on the additive uncertainty to any prescribed statistical confidence. The 'soft' bound thus obtained can be used to replace 'hard' bounds presently used in many robust control analysis and synthesis methods.
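
    A Schroeder-phased multisine is simple to generate: flat amplitudes, phases following Schroeder's quadratic formula to keep the crest factor low, and tones placed on exact FFT bins so that input energy lands only at the computed frequencies. A sketch with arbitrary parameters:

        import numpy as np

        def schroeder_multisine(freqs, fs, n):
            """Flat-amplitude multisine whose phases follow Schroeder's
            quadratic formula, keeping the crest factor low."""
            k = np.arange(1, len(freqs) + 1)
            phases = -np.pi * k * (k - 1) / len(freqs)
            t = np.arange(n) / fs
            return sum(np.cos(2 * np.pi * f * t + p) for f, p in zip(freqs, phases))

        fs, n = 1000.0, 4096
        # Tones on exact FFT bins, so input energy sits only at those bins.
        freqs = np.fft.rfftfreq(n, d=1.0 / fs)[8:64:4]
        u = schroeder_multisine(freqs, fs, n)
        print(f"crest factor: {np.max(np.abs(u)) / np.sqrt(np.mean(u ** 2)):.2f}")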

  12. Extracting harmonic signal from a chaotic background with local linear model

    NASA Astrophysics Data System (ADS)

    Li, Chenlong; Su, Liyun

    2017-02-01

    In this paper, the problems of blind detection and estimation of a harmonic signal in a strong chaotic background are analyzed, and new methods using a local linear (LL) model are put forward. The LL model has been exhaustively researched and successfully applied to fitting and forecasting chaotic signals in many chaotic fields; here we substantially enlarge its modeling capacity. Firstly, we predict the short-term chaotic signal and obtain the fitting error based on the LL model. We then detect frequencies in the fitting error by periodogram; a property of the fitting error is proposed that has not been addressed before, and this property ensures that the detected frequencies are those of the harmonic signal. Secondly, we establish a two-layer LL model to estimate the deterministic harmonic signal in the strong chaotic background. To perform this estimation simply and effectively, we develop an efficient backfitting algorithm to select and optimize the parameters, which are difficult to search exhaustively. In the method, based on the sensitivity of chaotic motion to initial values, the minimum fitting error criterion is used as the objective function to estimate the parameters of the two-layer LL model. Simulation shows that the two-layer LL model and its estimation technique have appreciable flexibility for modeling the deterministic harmonic signal in different chaotic backgrounds (Lorenz, Henon and Mackey-Glass (M-G) equations). Specifically, the harmonic signal can be extracted well at low SNR, and the developed backfitting algorithm converges within 3 to 5 iterations.
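
    The first layer of an LL model is a local linear predictor in a delay embedding: find the nearest neighbors of the current delay vector and fit an affine map to their successors. A minimal one-step predictor is sketched below, demonstrated on a logistic-map series standing in for the chaotic background; the embedding dimension and neighbor count are arbitrary choices, and the paper's two-layer structure and backfitting are not reproduced:

        import numpy as np

        def local_linear_predict(series, dim=3, n_neighbors=12):
            """Predict the next value of a series: embed it in delay space,
            find the nearest neighbors of the latest delay vector, and fit
            an affine (local linear) map from neighbors to their successors."""
            x = np.asarray(series, dtype=float)
            emb = np.array([x[i:i + dim] for i in range(len(x) - dim)])
            targets = x[dim:]                  # successor of each delay vector
            query = x[-dim:]                   # latest delay vector
            idx = np.argsort(np.linalg.norm(emb - query, axis=1))[:n_neighbors]
            A = np.column_stack([np.ones(n_neighbors), emb[idx]])
            coef, *_ = np.linalg.lstsq(A, targets[idx], rcond=None)
            return coef[0] + coef[1:] @ query

        # Logistic map at r = 3.9 standing in for the chaotic background.
        x = np.empty(2000)
        x[0] = 0.3
        for i in range(1, len(x)):
            x[i] = 3.9 * x[i - 1] * (1.0 - x[i - 1])
        print(f"predicted {local_linear_predict(x[:-1]):.4f}  actual {x[-1]:.4f}")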

  13. A new method for predicting response in complex linear systems. II. [under random or deterministic steady state excitation

    NASA Technical Reports Server (NTRS)

    Bogdanoff, J. L.; Kayser, K.; Krieger, W.

    1977-01-01

    The paper describes convergence and response studies in the low-frequency range of complex systems, particularly those with low values of damping under different damping distributions, and reports on the modification of the relaxation procedure required under these conditions. A new method is presented for response estimation in complex lumped-parameter linear systems under random or deterministic steady-state excitation. The essence of the method is the use of relaxation procedures with a suitable error function to find the estimated response; natural frequencies and normal modes are not computed. Convergence studies and frequency-response estimates were performed for a 45-degree-of-freedom system using two relaxation procedures. The low-frequency studies are considered in the framework of earlier studies (Kayser and Bogdanoff, 1975) involving the mid- to high-frequency range.

  14. Modeling work zone crash frequency by quantifying measurement errors in work zone length.

    PubMed

    Yang, Hong; Ozbay, Kaan; Ozturk, Ozgur; Yildirimoglu, Mehmet

    2013-06-01

    Work zones are temporary traffic control zones that can potentially cause safety problems. Maintaining safety while implementing necessary changes on roadways is an important challenge traffic engineers and researchers have to confront. In this study, the risk factors in work zone safety evaluation were identified through the estimation of a crash frequency (CF) model. Measurement errors in the explanatory variables of a CF model can lead to unreliable estimates of certain parameters. Among these, work zone length raises a major concern in this analysis because it may change as the construction schedule progresses, generally without being properly documented. This paper proposes an improved modeling and estimation approach that involves the use of a measurement error (ME) model integrated with the traditional negative binomial (NB) model. The proposed approach was compared with the traditional NB approach. Both models were estimated using a large dataset that consists of 60 work zones in New Jersey. Results showed that the proposed approach outperformed the traditional approach in terms of goodness-of-fit statistics. Moreover, it is shown that the use of the traditional NB approach in this context can lead to overestimation of the effect of work zone length on crash occurrence. Copyright © 2013 Elsevier Ltd. All rights reserved.
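
    For contrast with the paper's measurement-error formulation, the traditional NB baseline is easy to set up; the sketch below fits a negative binomial GLM to synthetic crash counts for 60 hypothetical work zones (variable names, coefficients, and the dispersion parameter are all invented, and the ME-integrated model is not reproduced):

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(42)
        n = 60                                # 60 work zones, as in the study
        length = rng.uniform(0.5, 8.0, n)     # work-zone length, miles (invented)
        aadt = rng.uniform(20.0, 120.0, n)    # traffic volume, thousands (invented)
        X = sm.add_constant(np.column_stack([np.log(length), np.log(aadt)]))

        # Simulate crash counts from a known log-linear mean, for illustration only.
        mu = np.exp(-1.0 + 0.8 * np.log(length) + 0.6 * np.log(aadt))
        y = rng.poisson(mu)

        # Traditional NB crash-frequency model (no measurement-error correction).
        nb = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=0.5)).fit()
        print(nb.params)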

  15. Momentum Flux Determination Using the Multi-beam Poker Flat Incoherent Scatter Radar

    NASA Technical Reports Server (NTRS)

    Nicolls, M. J.; Fritts, D. C.; Janches, Diego; Heinselman, C. J.

    2012-01-01

    In this paper, we develop an estimator for the vertical flux of horizontal momentum with arbitrary beam pointing, applicable to the case of arbitrary but fixed beam pointing with systems such as the Poker Flat Incoherent Scatter Radar (PFISR). This method uses information from all available beams to resolve the variances of the wind field in addition to the vertical flux of both meridional and zonal momentum, targeted for high-frequency wave motions. The estimator utilises the full covariance of the distributed measurements, which provides a significant reduction in errors over the direct extension of previously developed techniques and allows for the calculation of an error covariance matrix of the estimated quantities. We find that for the PFISR experiment, we can construct an unbiased and robust estimator of the momentum flux if sufficient and proper beam orientations are chosen, which can in the future be optimized for the expected frequency distribution of momentum-containing scales. However, there is a potential trade-off between biases and standard errors introduced with the new approach, which must be taken into account when assessing the momentum fluxes. We apply the estimator to PFISR measurements on 23 April 2008 and 21 December 2007, from 60-85 km altitude, and show expected results as compared to mean winds and in relation to the measured vertical velocity variances.

  16. Methods for estimating selected low-flow frequency statistics for unregulated streams in Kentucky

    USGS Publications Warehouse

    Martin, Gary R.; Arihood, Leslie D.

    2010-01-01

    This report provides estimates of, and presents methods for estimating, selected low-flow frequency statistics for unregulated streams in Kentucky including the 30-day mean low flows for recurrence intervals of 2 and 5 years (30Q2 and 30Q5) and the 7-day mean low flows for recurrence intervals of 2, 10, and 20 years (7Q2, 7Q10, and 7Q20). Estimates of these statistics are provided for 121 U.S. Geological Survey streamflow-gaging stations with data through the 2006 climate year, which is the 12-month period ending March 31 of each year. Data were screened to identify the periods of homogeneous, unregulated flows for use in the analyses. Logistic-regression equations are presented for estimating the annual probability of the selected low-flow frequency statistics being equal to zero. Weighted-least-squares regression equations were developed for estimating the magnitude of the nonzero 30Q2, 30Q5, 7Q2, 7Q10, and 7Q20 low flows. Three low-flow regions were defined for estimating the 7-day low-flow frequency statistics. The explicit explanatory variables in the regression equations include total drainage area and the mapped streamflow-variability index measured from a revised statewide coverage of this characteristic. The percentage of the station low-flow statistics correctly classified as zero or nonzero by use of the logistic-regression equations ranged from 87.5 to 93.8 percent. The average standard errors of prediction of the weighted-least-squares regression equations ranged from 108 to 226 percent. The 30Q2 regression equations have the smallest standard errors of prediction, and the 7Q20 regression equations have the largest standard errors of prediction. The regression equations are applicable only to stream sites with low flows unaffected by regulation from reservoirs and local diversions of flow and to drainage basins in specified ranges of basin characteristics. Caution is advised when applying the equations for basins with characteristics near the applicable limits and for basins with karst drainage features.

  17. Decreasing range resolution of a SAR image to permit correction of motion measurement errors beyond the SAR range resolution

    DOEpatents

    Doerry, Armin W.; Heard, Freddie E.; Cordaro, J. Thomas

    2010-07-20

    Motion measurement errors that extend beyond the range resolution of a synthetic aperture radar (SAR) can be corrected by effectively decreasing the range resolution of the SAR in order to permit measurement of the error. Range profiles can be compared across the slow-time dimension of the input data in order to estimate the error. Once the error has been determined, appropriate frequency and phase correction can be applied to the uncompressed input data, after which range and azimuth compression can be performed to produce a desired SAR image.

  18. Comparing range data across the slow-time dimension to correct motion measurement errors beyond the range resolution of a synthetic aperture radar

    DOEpatents

    Doerry, Armin W.; Heard, Freddie E.; Cordaro, J. Thomas

    2010-08-17

    Motion measurement errors that extend beyond the range resolution of a synthetic aperture radar (SAR) can be corrected by effectively decreasing the range resolution of the SAR in order to permit measurement of the error. Range profiles can be compared across the slow-time dimension of the input data in order to estimate the error. Once the error has been determined, appropriate frequency and phase correction can be applied to the uncompressed input data, after which range and azimuth compression can be performed to produce a desired SAR image.

  19. snpAD: An ancient DNA genotype caller.

    PubMed

    Prüfer, Kay

    2018-06-21

    The study of ancient genomes can elucidate the evolutionary past. However, analyses are complicated by base-modifications in ancient DNA molecules that result in errors in DNA sequences. These errors are particularly common near the ends of sequences and pose a challenge for genotype calling. I describe an iterative method that estimates genotype frequencies and errors along sequences to allow for accurate genotype calling from ancient sequences. The implementation of this method, called snpAD, performs well on high-coverage ancient data, as shown by simulations and by subsampling the data of a high-coverage Neandertal genome. Although estimates for low-coverage genomes are less accurate, I am able to derive approximate estimates of heterozygosity from several low-coverage Neandertals. These estimates show that low heterozygosity, compared to modern humans, was common among Neandertals. The C++ code of snpAD is freely available at http://bioinf.eva.mpg.de/snpAD/. Supplementary data are available at Bioinformatics online.
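
    The iterative idea can be illustrated with a toy expectation-maximization loop over per-site read counts, sketched below in Python. The fixed per-base error rate and the three-genotype model are simplifying assumptions for illustration only; snpAD itself fits position-dependent error rates jointly with the genotype frequencies.

    import numpy as np
    from scipy.stats import binom

    def em_genotype_freqs(n_alt, n_tot, err=0.01, iters=50):
        """Toy EM estimator of diploid genotype frequencies from per-site
        read counts, with a fixed per-base error rate `err`.
        n_alt, n_tot: arrays of alternative-allele and total read counts."""
        p_alt = np.array([err, 0.5, 1.0 - err])   # P(alt read | hom-ref, het, hom-alt)
        pi = np.array([0.8, 0.1, 0.1])            # initial genotype frequencies
        for _ in range(iters):
            # E-step: posterior responsibility of each genotype at each site
            like = binom.pmf(n_alt[:, None], n_tot[:, None], p_alt[None, :])
            post = pi * like
            post /= post.sum(axis=1, keepdims=True)
            # M-step: update the genotype frequencies
            pi = post.mean(axis=0)
        return pi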

  20. Robust time and frequency domain estimation methods in adaptive control

    NASA Technical Reports Server (NTRS)

    Lamaire, Richard Orville

    1987-01-01

    A robust identification method was developed for use in an adaptive control system. The type of estimator is called the robust estimator, since it is robust to the effects of both unmodeled dynamics and an unmeasurable disturbance. The development of the robust estimator was motivated by a need to provide guarantees in the identification part of an adaptive controller. To enable the design of a robust control system, a nominal model as well as a frequency-domain bounding function on the modeling uncertainty associated with this nominal model must be provided. Two estimation methods are presented for finding parameter estimates, and, hence, a nominal model. One of these methods is based on the well developed field of time-domain parameter estimation. In a second method of finding parameter estimates, a type of weighted least-squares fitting to a frequency-domain estimated model is used. The frequency-domain estimator is shown to perform better, in general, than the time-domain parameter estimator. In addition, a methodology for finding a frequency-domain bounding function on the disturbance is used to compute a frequency-domain bounding function on the additive modeling error due to the effects of the disturbance and the use of finite-length data. The performance of the robust estimator in both open-loop and closed-loop situations is examined through the use of simulations.

  1. Enabling vendor independent photoacoustic imaging systems with asynchronous laser source

    NASA Astrophysics Data System (ADS)

    Wu, Yixuan; Zhang, Haichong K.; Boctor, Emad M.

    2018-02-01

    Channel data acquisition and synchronization between laser excitation and PA signal acquisition are two fundamental hardware requirements for photoacoustic (PA) imaging; unfortunately, most clinical ultrasound scanners provide neither. Therefore, less economical specialized research platforms are generally used, which hinders a smooth clinical transition of PA imaging. In previous studies, we have proposed an algorithm to achieve PA imaging using ultrasound post-beamformed (USPB) RF data instead of channel data. This work focuses on enabling clinical ultrasound scanners to implement PA imaging without requiring synchronization between the laser excitation and PA signal acquisition. Laser synchronization inherently consists of two aspects: frequency and phase information. We synchronize the laser and the ultrasound scanner without communication between them by investigating USPB images of a point-target phantom in two steps. First, frequency information is estimated by solving a nonlinear optimization problem, under the assumption that the segmented wave-front can only be beamformed into a single spot when synchronization is achieved. Second, after making the frequencies of the two systems identical, the phase delay is estimated by optimizing image quality while varying the phase value. The proposed method is validated through simulation by manually adding both frequency and phase errors, then applying the proposed algorithm to correct the errors and reconstruct PA images. Compared with the ground truth, simulation results indicate that the remaining errors in frequency correction and phase correction are 0.28% and 2.34%, respectively, which affirms the potential of overcoming hardware barriers to PA imaging through a software solution.

  2. Measuring systolic arterial blood pressure. Possible errors from extension tubes or disposable transducer domes.

    PubMed

    Rothe, C F; Kim, K C

    1980-11-01

    The purpose of this study was to evaluate the magnitude of possible error in the measurement of systolic blood pressure if disposable built-in-diaphragm transducer domes or long extension tubes between the patient and the pressure transducer are used. Sinusoidal or arterial pressure patterns were generated with specially designed equipment. With a long extension tube or trapped air bubbles, the resonant frequency of the catheter system was reduced so that the arterial pulse was amplified as it acted on the transducer and thus gave an erroneously high systolic pressure measurement. The authors found this error to be as much as 20 mm Hg. Trapped air bubbles, not stopcocks or connections per se, lead to poor fidelity. The utility of a continuous catheter flush system (Sorenson, Intraflow) for estimating the resonant frequency and degree of damping of a catheter-transducer system is described, as are possibly erroneous conclusions. Given a rough estimate of the resonant frequency of a catheter-transducer system and the magnitude of overshoot in response to a pulse, the authors present a table to predict the magnitude of probable error. These studies confirm the variability and unreliability of static calibration that may occur using some safety diaphragm domes and show that the system frequency response is decreased if air bubbles are trapped between the diaphragms. The authors conclude that regular procedures should be established to evaluate the accuracy of the pressure measuring systems in use, that the transducer should be placed as close to the patient as possible, and that air bubbles should be assiduously eliminated from the system.

  3. Pilot-based parametric channel estimation algorithm for DCO-OFDM-based visible light communications

    NASA Astrophysics Data System (ADS)

    Qian, Xuewen; Deng, Honggui; He, Hailang

    2017-10-01

    Due to the wide modulation bandwidth in optical communication, multipath channels may be non-sparse and heavily degrade communication performance. Traditional compressive sensing-based channel estimation algorithms cannot be employed in this kind of situation. In this paper, we propose a practical parametric channel estimation algorithm for orthogonal frequency division multiplexing (OFDM)-based visible light communication (VLC) systems based on a modified zero correlation code (ZCC) pair that has an impulse-like correlation property. Simulation results show that the proposed algorithm outperforms the existing least-squares (LS)-based algorithm in both bit error ratio (BER) and frequency-response estimation.

  4. Linear quadratic stochastic control of atomic hydrogen masers.

    PubMed

    Koppang, P; Leland, R

    1999-01-01

    Data are given showing the results of using the linear quadratic Gaussian (LQG) technique to steer remote hydrogen masers to Coordinated Universal Time (UTC) as given by the United States Naval Observatory (USNO) via two-way satellite time transfer and the Global Positioning System (GPS). Data also are shown from the results of steering a hydrogen maser to the real-time USNO mean. A general overview of the theory behind the LQG technique also is given. The LQG control is a technique that uses Kalman filtering to estimate time and frequency errors used as input into a control calculation. A discrete frequency steer is calculated by minimizing a quadratic cost function that is dependent on both the time and frequency errors and the control effort. Different penalties, chosen by the designer, are assessed by the controller as the time and frequency errors and control effort vary from zero. With this feature, controllers can be designed to force the time and frequency differences between two standards to zero, either more or less aggressively depending on the application.
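
    A minimal sketch of the control calculation in Python, assuming a two-state clock model (time error and frequency error) and designer-chosen penalty matrices, none of which are values from the paper. The steady-state gain is obtained by iterating the discrete Riccati equation, and the discrete steer is computed from Kalman-filtered error estimates:

    import numpy as np

    tau = 3600.0                                  # update interval (s), illustrative
    A = np.array([[1.0, tau], [0.0, 1.0]])        # x = [time error, frequency error]
    B = np.array([[tau], [1.0]])                  # a frequency steer u applied at step k
    Qc = np.diag([1.0, 1.0])                      # penalty on time/frequency errors
    Rc = np.array([[10.0]])                       # penalty on control effort

    P = Qc.copy()
    for _ in range(500):                          # iterate the DARE to convergence
        K = np.linalg.solve(Rc + B.T @ P @ B, B.T @ P @ A)
        P = Qc + A.T @ P @ (A - B @ K)

    # With xhat the Kalman-filter estimate of [time error, frequency error]
    # (separation principle), the discrete frequency steer is u = -K @ xhat.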

  5. A Frequency-Domain Multipath Parameter Estimation and Mitigation Method for BOC-Modulated GNSS Signals

    PubMed Central

    Sun, Chao; Feng, Wenquan; Du, Songlin

    2018-01-01

    As multipath is one of the dominating error sources for high-accuracy Global Navigation Satellite System (GNSS) applications, multipath mitigation approaches are employed to minimize this hazardous error in receivers. Binary offset carrier (BOC) modulation, as a modernized signal structure, is adopted to achieve significant enhancement. However, because of its multi-peak autocorrelation function, conventional multipath mitigation techniques for binary phase shift keying (BPSK) signals are not optimal. Currently, non-parametric and parametric approaches have been studied specifically aiming at multipath mitigation for BOC signals. Non-parametric techniques, such as Code Correlation Reference Waveforms (CCRW), usually have good feasibility with simple structures but suffer from low universal applicability across different BOC signals. Parametric approaches can thoroughly eliminate multipath error by estimating multipath parameters; the problems with this category are high computational complexity and vulnerability to noise. To tackle the problem, we present a practical parametric multipath estimation method in the frequency domain for BOC signals. The received signal is transferred to the frequency domain to separate out the multipath channel transfer function for multipath parameter estimation. During this process, we apply segmentation and averaging to reduce both the noise effect and the computational load. The performance of the proposed method is evaluated and compared with previous work in three scenarios. Results indicate that the proposed averaging-Fast Fourier Transform (averaging-FFT) method achieves good robustness in severe multipath environments with lower computational load for both low-order and high-order BOC signals. PMID:29495589
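
    The segmentation-and-averaging step is essentially Welch-style cross-spectral averaging. A minimal sketch in Python, assuming a known local replica ref of the transmitted signal and omitting the BOC-specific correlator details:

    import numpy as np

    def averaged_channel_estimate(rx, ref, nseg):
        """Estimate a channel transfer function in the frequency domain by
        splitting rx (received) and ref (local replica) into nseg segments
        and averaging cross- and auto-spectra, which reduces both the noise
        effect and the per-FFT computational load."""
        L = (len(rx) // nseg) * nseg
        R = np.fft.fft(rx[:L].reshape(nseg, -1), axis=1)
        S = np.fft.fft(ref[:L].reshape(nseg, -1), axis=1)
        num = np.mean(R * np.conj(S), axis=0)    # averaged cross-spectrum
        den = np.mean(np.abs(S) ** 2, axis=0)    # averaged reference power
        return num / den                         # H(f) estimate per FFT bin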

  6. MIMO equalization with adaptive step size for few-mode fiber transmission systems.

    PubMed

    van Uden, Roy G H; Okonkwo, Chigo M; Sleiffer, Vincent A J M; de Waardt, Hugo; Koonen, Antonius M J

    2014-01-13

    Optical multiple-input multiple-output (MIMO) transmission systems generally employ minimum mean squared error time or frequency domain equalizers. Using an experimental 3-mode dual polarization coherent transmission setup, we show that the convergence time of the MMSE time domain equalizer (TDE) and frequency domain equalizer (FDE) can be reduced by approximately 50% and 30%, respectively. The criterion used to estimate the system convergence time is the time it takes for the MIMO equalizer to reach an average output error which is within a margin of 5% of the average output error after 50,000 symbols. The convergence reduction difference between the TDE and FDE is attributed to the limited maximum step size for stable convergence of the frequency domain equalizer. The adaptive step size requires a small overhead in the form of a lookup table. It is highlighted that the convergence time reduction is achieved without sacrificing optical signal-to-noise ratio performance.

  7. Ultrasound speckle tracking for radial, longitudinal and circumferential strain estimation of the carotid artery--an in vitro validation via sonomicrometry using clinical and high-frequency ultrasound.

    PubMed

    Larsson, Matilda; Heyde, Brecht; Kremer, Florence; Brodin, Lars-Åke; D'hooge, Jan

    2015-02-01

    Ultrasound speckle tracking for carotid strain assessment has in the past decade gained interest in studies of arterial stiffness and cardiovascular diseases. The aim of this study was to validate and directly contrast carotid strain assessment by speckle tracking applied to clinical and high-frequency ultrasound images in vitro. Four polyvinyl alcohol phantoms mimicking the carotid artery were constructed with different mechanical properties and connected to a pump generating carotid flow profiles. Gray-scale ultrasound long- and short-axis images of the phantoms were obtained using a standard clinical ultrasound system, Vivid 7 (GE Healthcare, Horten, Norway), and a high-frequency ultrasound system, Vevo 2100 (FUJIFILM VisualSonics, Toronto, Canada), with linear-array transducers (12L/MS250). Radial, longitudinal and circumferential strains were estimated using an in-house speckle tracking algorithm and compared with reference strain acquired by sonomicrometry. Overall, the estimated strain corresponded well with the reference strain. The correlation between estimated peak strain in clinical ultrasound images and reference strain was 0.91 (p<0.001) for radial strain, 0.73 (p<0.001) for longitudinal strain and 0.90 (p<0.001) for circumferential strain, and for high-frequency ultrasound images 0.95 (p<0.001) for radial strain, 0.93 (p<0.001) for longitudinal strain and 0.90 (p<0.001) for circumferential strain. A significantly larger bias and root-mean-square error were found for circumferential strain estimation on clinical ultrasound images compared to high-frequency ultrasound images, but no significant difference in bias and root-mean-square error was found for radial and longitudinal strain when comparing estimation on clinical and high-frequency ultrasound images. The agreement between sonomicrometry and speckle tracking demonstrates that carotid strain assessment by ultrasound speckle tracking is feasible.

  8. Real-time precise orbit determination of LEO satellites using a single-frequency GPS receiver: Preliminary results of Chinese SJ-9A satellite

    NASA Astrophysics Data System (ADS)

    Sun, Xiucong; Han, Chao; Chen, Pei

    2017-10-01

    Spaceborne Global Positioning System (GPS) receivers are widely used for orbit determination of low-Earth-orbiting (LEO) satellites. With the improvement of measurement accuracy, single-frequency receivers are recently considered for low-cost small satellite missions. In this paper, a Schmidt-Kalman filter which processes single-frequency GPS measurements and broadcast ephemerides is proposed for real-time precise orbit determination of LEO satellites. The C/A code and L1 phase are linearly combined to eliminate the first-order ionospheric effects. Systematic errors due to ionospheric delay residual, group delay variation, phase center variation, and broadcast ephemeris errors, are lumped together into a noise term, which is modeled as a first-order Gauss-Markov process. In order to reduce computational complexity, the colored noise is considered rather than estimated in the orbit determination process. This ensures that the covariance matrix accurately represents the distribution of estimation errors without increasing the dimension of the state vector. The orbit determination algorithm is tested with actual flight data from the single-frequency GPS receiver onboard China's small satellite Shi Jian-9A (SJ-9A). Preliminary results using a 7-h data arc on October 25, 2012 show that the Schmidt-Kalman filter performs better than the standard Kalman filter in terms of accuracy.
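
    A minimal sketch of the Schmidt-Kalman ("consider") measurement update in Python, with the lumped colored-noise term p treated as zero-mean with covariance Ppp. The state and measurement models of the paper are not reproduced; only the generic update is shown:

    import numpy as np

    def schmidt_kalman_update(x, Pxx, Pxp, Ppp, z, Hx, Hp, R):
        """Consider-filter update for z = Hx x + Hp p + v: the parameters p
        are considered rather than estimated, so only x, Pxx and the
        cross-covariance Pxp are updated, keeping the state dimension low
        while the covariance still reflects the colored-noise uncertainty."""
        S = (Hx @ Pxx @ Hx.T + Hx @ Pxp @ Hp.T
             + Hp @ Pxp.T @ Hx.T + Hp @ Ppp @ Hp.T + R)   # innovation covariance
        C = Pxx @ Hx.T + Pxp @ Hp.T
        K = np.linalg.solve(S.T, C.T).T                    # gain for x only
        x = x + K @ (z - Hx @ x)
        Pxx = Pxx - K @ S @ K.T
        Pxp = Pxp - K @ (Hx @ Pxp + Hp @ Ppp)
        return x, Pxx, Pxp                                 # Ppp is left unchanged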

  9. Estimation of Handling Qualities Parameters of the Tu-144 Supersonic Transport Aircraft from Flight Test Data

    NASA Technical Reports Server (NTRS)

    Curry, Timothy J.; Batterson, James G. (Technical Monitor)

    2000-01-01

    Low order equivalent system (LOES) models for the Tu-144 supersonic transport aircraft were identified from flight test data. The mathematical models were given in terms of transfer functions with a time delay by the military standard MIL-STD-1797A, "Flying Qualities of Piloted Aircraft," and the handling qualities were predicted from the estimated transfer function coefficients. The coefficients and the time delay in the transfer functions were estimated using a nonlinear equation error formulation in the frequency domain. Flight test data from pitch, roll, and yaw frequency sweeps at various flight conditions were used for parameter estimation. Flight test results are presented in terms of the estimated parameter values, their standard errors, and output fits in the time domain. Data from doublet maneuvers at the same flight conditions were used to assess the predictive capabilities of the identified models. The identified transfer function models fit the measured data well and demonstrated good prediction capabilities. The Tu-144 was predicted to be between level 2 and level 3 for all longitudinal maneuvers and level 1 for all lateral maneuvers. High estimates of the equivalent time delay in the transfer function model caused the poor longitudinal ratings.

  10. Maximum-Likelihood Estimation for Frequency-Modulated Continuous-Wave Laser Ranging using Photon-Counting Detectors

    DTIC Science & Technology

    2013-03-21

    instruments where frequency estimates are calculated from coherently detected fields, e.g., coherent Doppler LIDAR. Our CRB results reveal that the best... wave coherent lidar using an optical field correlation detection method," Opt. Rev. 5, 310–314 (1998). 8. H. P. Yuen and V. W. S. Chan, "Noise in... 2170–2180 (2007). 13. T. J. Karr, "Atmospheric phase error in coherent laser radar," IEEE Trans. Antennas Propag. 55, 1122–1133 (2007). 14. Throughout

  11. ViVaMBC: estimating viral sequence variation in complex populations from illumina deep-sequencing data using model-based clustering.

    PubMed

    Verbist, Bie; Clement, Lieven; Reumers, Joke; Thys, Kim; Vapirev, Alexander; Talloen, Willem; Wetzels, Yves; Meys, Joris; Aerssens, Jeroen; Bijnens, Luc; Thas, Olivier

    2015-02-22

    Deep sequencing allows for an in-depth characterization of sequence variation in complex populations. However, technology-associated errors may impede a powerful assessment of low-frequency mutations. Fortunately, base calls are complemented with quality scores, which are derived from a quadruplet of intensities, one channel for each nucleotide type in Illumina sequencing. The highest intensity of the four channels determines the base that is called. Mismatch bases can often be corrected by the second best base, i.e., the base with the second highest intensity in the quadruplet. A virus variant model-based clustering method, ViVaMBC, is presented that explores quality scores and second best base calls for identifying and quantifying viral variants. ViVaMBC is optimized to call variants at the codon level (nucleotide triplets), which enables immediate biological interpretation of the variants with respect to their antiviral drug responses. Using mixtures of HCV plasmids, we show that our method accurately estimates frequencies down to 0.5%. The estimates are unbiased when average coverages of 25,000 are reached. A comparison with the SNP callers V-Phaser2, ShoRAH, and LoFreq shows that ViVaMBC has superb sensitivity and specificity for variants with frequencies above 0.4%. However, ViVaMBC reports a higher number of false-positive findings with frequencies below 0.4%, which might partially originate from picking up artificial variants introduced by errors in the sample and library preparation steps. ViVaMBC is the first method to call viral variants directly at the codon level. The strength of the approach lies in modeling the error probabilities based on the quality scores. Although the use of second best base calls appeared very promising in our data exploration phase, their utility was limited: they provided a slight increase in sensitivity, which does not warrant the additional computational cost of running the offline base caller. Apparently a lot of information is already contained in the quality scores, enabling the model-based clustering procedure to adjust for the majority of the sequencing errors. Overall, the sensitivity of ViVaMBC is such that technical constraints like PCR errors start to form the bottleneck for low-frequency variant detection.

  12. A time and frequency synchronization method for CO-OFDM based on CMA equalizers

    NASA Astrophysics Data System (ADS)

    Ren, Kaixuan; Li, Xiang; Huang, Tianye; Cheng, Zhuo; Chen, Bingwei; Wu, Xu; Fu, Songnian; Ping, Perry Shum

    2018-06-01

    In this paper, an efficient time and frequency synchronization method based on a new training symbol structure is proposed for polarization division multiplexing (PDM) coherent optical orthogonal frequency division multiplexing (CO-OFDM) systems. The coarse timing synchronization is achieved by exploiting the correlation property of the first training symbol, and the fine timing synchronization is accomplished by using the time-domain symmetric conjugate of the second training symbol. Furthermore, based on these training symbols, a constant modulus algorithm (CMA) is proposed for carrier frequency offset (CFO) estimation. Theoretical analysis and simulation results indicate that the algorithm is robust to poor optical signal-to-noise ratio (OSNR) and chromatic dispersion (CD). The frequency offset estimation range can reach [-Nsc·ΔfN/2, +Nsc·ΔfN/2] GHz, with the mean normalized estimation error below 12 × 10^-3 even at an OSNR as low as 10 dB.
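
    For reference, the correlation principle behind coarse timing can be illustrated with the classic repeated-half (Schmidl-Cox-style) metric sketched below in Python; the paper's actual training-symbol structure and its CMA-based CFO estimator differ in detail:

    import numpy as np

    def coarse_timing_metric(r, half):
        """Timing metric for a training symbol built from two identical
        halves of length `half`: the received samples correlate with
        themselves half a symbol later, and the metric peaks near the
        symbol start. This only illustrates the correlation principle."""
        N = len(r) - 2 * half
        M = np.empty(N)
        for d in range(N):
            a = r[d:d + half]
            b = r[d + half:d + 2 * half]
            P = np.sum(np.conj(a) * b)            # correlation of the two halves
            E = np.sum(np.abs(b) ** 2)            # energy normalization
            M[d] = np.abs(P) ** 2 / (E ** 2 + 1e-12)
        return M                                  # coarse timing = argmax(M)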

  13. A comparison of three approaches to non-stationary flood frequency analysis

    NASA Astrophysics Data System (ADS)

    Debele, S. E.; Strupczewski, W. G.; Bogdanowicz, E.

    2017-08-01

    Non-stationary flood frequency analysis (FFA) is applied to the statistical analysis of seasonal flow maxima from Polish and Norwegian catchments. Three non-stationary estimation methods, namely maximum likelihood (ML), two-stage (WLS/TS), and GAMLSS (generalized additive model for location, scale and shape parameters), are compared in the context of capturing the effect of non-stationarity on the estimation of time-dependent moments and design quantiles. The use of a multimodel approach is recommended to reduce errors in the magnitude of quantiles due to model misspecification. The results of calculations based on observed seasonal daily flow maxima and computer simulation experiments showed that GAMLSS gave the best results with respect to the relative bias and root mean square error in the estimates of the trend in the standard deviation and the constant shape parameter, while WLS/TS provided better accuracy in the estimates of the trend in the mean value. Among the three compared methods, WLS/TS is recommended for dealing with non-stationarity in short time series. Some practical aspects of the application of the GAMLSS package are also presented. A detailed discussion of general issues related to consequences of climate change in FFA is presented in the second part of the article, entitled "Around and about an application of the GAMLSS package in non-stationary flood frequency analysis".

  14. Analysis of the Magnitude and Frequency of Peak Discharge and Maximum Observed Peak Discharge in New Mexico and Surrounding Areas

    USGS Publications Warehouse

    Waltemeyer, Scott D.

    2008-01-01

    Estimates of the magnitude and frequency of peak discharges are necessary for the reliable design of bridges and culverts, for open-channel hydraulic analysis, and for flood-hazard mapping in New Mexico and surrounding areas. The U.S. Geological Survey, in cooperation with the New Mexico Department of Transportation, updated estimates of peak-discharge magnitude for gaging stations in the region and updated regional equations for estimation of peak discharge and frequency at ungaged sites. Equations were developed for estimating the magnitude of peak discharges for recurrence intervals of 2, 5, 10, 25, 50, 100, and 500 years at ungaged sites by use of data collected through 2004 for 293 gaging stations on unregulated streams that have 10 or more years of record. Peak discharges for selected recurrence intervals were determined at gaging stations by fitting observed data to a log-Pearson Type III distribution with adjustments for a low-discharge threshold and a zero skew coefficient. A low-discharge threshold was applied to the frequency analysis of 140 of the 293 gaging stations. This application provides an improved fit of the log-Pearson Type III frequency distribution. Use of the low-discharge threshold generally eliminated peak discharges having a recurrence interval of less than 1.4 years from the probability-density function. Within each of the nine regions, logarithms of the maximum peak discharges for selected recurrence intervals were related to logarithms of basin and climatic characteristics by using stepwise ordinary least-squares regression techniques for exploratory data analysis. Generalized least-squares regression techniques, an improved regression procedure that accounts for time and spatial sampling errors, then were applied to the same data used in the ordinary least-squares regression analyses. The average standard error of prediction, which includes average sampling error and average standard error of regression, ranged from 38 to 93 percent (mean value is 62, and median value is 59) for the 100-year flood. The standard error of prediction in the 1996 investigation ranged from 41 to 96 percent (mean value is 67, and median value is 68) for the 100-year flood analyzed by using generalized least-squares regression analysis. Overall, the equations based on generalized least-squares regression techniques are more reliable than those in the 1996 report because of the increased length of record and an improved geographic information system (GIS) method to determine basin and climatic characteristics. Flood-frequency estimates can be made for ungaged sites upstream or downstream from gaging stations by using a method that transfers flood-frequency data at the gaging station to the ungaged site through a drainage-area ratio adjustment equation, as sketched after this record. The peak discharge for a given recurrence interval at the gaging station, the drainage-area ratio, and the drainage-area exponent from the regional regression equation of the respective region are used to transfer the peak discharge for the recurrence interval to the ungaged site. Maximum observed peak discharge as related to drainage area was determined for New Mexico. Extreme events are commonly used in the design and appraisal of bridge crossings and other structures. Bridge-scour evaluations are commonly made by using the 500-year peak discharge for these appraisals.
Peak-discharge data collected at 293 gaging stations and 367 miscellaneous sites were used to develop a maximum peak-discharge relation as an alternative method of estimating peak discharge of an extreme event such as a maximum probable flood.
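
    The drainage-area-ratio transfer described above amounts to a one-line computation; a sketch in Python with illustrative numbers (not values from the report):

    def transfer_peak_discharge(q_gaged, a_gaged, a_ungaged, b):
        """Transfer a flood-frequency estimate from a gaging station to an
        ungaged site on the same stream by scaling the peak discharge by
        the drainage-area ratio raised to the exponent b taken from the
        regional regression equation."""
        return q_gaged * (a_ungaged / a_gaged) ** b

    # e.g. a 100-year peak of 5000 ft^3/s at a 120 mi^2 gage, transferred to
    # an ungaged site of 90 mi^2 with a regional exponent of 0.6 (illustrative):
    q100_ungaged = transfer_peak_discharge(5000.0, 120.0, 90.0, 0.6)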

  15. Variable is better than invariable: sparse VSS-NLMS algorithms with application to adaptive MIMO channel estimation.

    PubMed

    Gui, Guan; Chen, Zhang-xin; Xu, Li; Wan, Qun; Huang, Jiyan; Adachi, Fumiyuki

    2014-01-01

    The channel estimation problem is one of the key technical issues in sparse frequency-selective fading multiple-input multiple-output (MIMO) communication systems using the orthogonal frequency division multiplexing (OFDM) scheme. To estimate sparse MIMO channels, sparse invariable step-size normalized least mean square (ISS-NLMS) algorithms have been applied to adaptive sparse channel estimation (ASCE). It is well known that step size is a critical parameter that controls three aspects: algorithm stability, estimation performance, and computational cost. However, traditional methods can incur estimation performance loss because an ISS cannot balance the three aspects simultaneously. In this paper, we propose two stable sparse variable step-size NLMS (VSS-NLMS) algorithms to improve the accuracy of MIMO channel estimators. First, ASCE is formulated in MIMO-OFDM systems. Second, different sparse penalties are introduced into the VSS-NLMS algorithm for ASCE. In addition, the difference between sparse ISS-NLMS algorithms and sparse VSS-NLMS ones is explained and their lower bounds are derived. Finally, to verify the effectiveness of the proposed algorithms for ASCE, selected simulation results show that the proposed sparse VSS-NLMS algorithms achieve better estimation performance than the conventional methods in terms of mean square error (MSE) and bit error rate (BER) metrics.
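
    As an illustration of the ingredients (not the paper's exact update rules), a zero-attracting NLMS filter with a heuristic variable step size can be sketched in Python; the step-size rule below, scaling the step by a smoothed error-to-signal power ratio, is one common heuristic assumed for illustration:

    import numpy as np

    def za_vss_nlms(x, d, taps, rho=1e-4, mu_max=1.0, eps=1e-8):
        """Sparse NLMS adaptive filter: an l1 (zero-attracting) penalty pulls
        small taps toward zero, and a variable step size trades convergence
        speed against steady-state error as the output error shrinks."""
        w = np.zeros(taps)
        e_pow, x_pow = 1e-6, 1e-6
        err = np.zeros(len(x))
        for n in range(taps, len(x)):
            u = x[n - taps:n][::-1]                # regressor, most recent first
            err[n] = d[n] - w @ u
            e_pow = 0.97 * e_pow + 0.03 * err[n] ** 2
            x_pow = 0.97 * x_pow + 0.03 * x[n] ** 2
            mu = mu_max * e_pow / (e_pow + x_pow)  # heuristic variable step size
            w += mu * err[n] * u / (u @ u + eps)   # NLMS update
            w -= rho * np.sign(w)                  # zero attractor (sparsity)
        return w, err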

  16. Variable Is Better Than Invariable: Sparse VSS-NLMS Algorithms with Application to Adaptive MIMO Channel Estimation

    PubMed Central

    Gui, Guan; Chen, Zhang-xin; Xu, Li; Wan, Qun; Huang, Jiyan; Adachi, Fumiyuki

    2014-01-01

    The channel estimation problem is one of the key technical issues in sparse frequency-selective fading multiple-input multiple-output (MIMO) communication systems using the orthogonal frequency division multiplexing (OFDM) scheme. To estimate sparse MIMO channels, sparse invariable step-size normalized least mean square (ISS-NLMS) algorithms have been applied to adaptive sparse channel estimation (ASCE). It is well known that step size is a critical parameter that controls three aspects: algorithm stability, estimation performance, and computational cost. However, traditional methods can incur estimation performance loss because an ISS cannot balance the three aspects simultaneously. In this paper, we propose two stable sparse variable step-size NLMS (VSS-NLMS) algorithms to improve the accuracy of MIMO channel estimators. First, ASCE is formulated in MIMO-OFDM systems. Second, different sparse penalties are introduced into the VSS-NLMS algorithm for ASCE. In addition, the difference between sparse ISS-NLMS algorithms and sparse VSS-NLMS ones is explained and their lower bounds are derived. Finally, to verify the effectiveness of the proposed algorithms for ASCE, selected simulation results show that the proposed sparse VSS-NLMS algorithms achieve better estimation performance than the conventional methods in terms of mean square error (MSE) and bit error rate (BER) metrics. PMID:25089286

  17. Maximum likelihood method for estimating airplane stability and control parameters from flight data in frequency domain

    NASA Technical Reports Server (NTRS)

    Klein, V.

    1980-01-01

    A frequency domain maximum likelihood method is developed for the estimation of airplane stability and control parameters from measured data. The model of an airplane is represented by a discrete-type steady state Kalman filter with time variables replaced by their Fourier series expansions. The likelihood function of innovations is formulated, and by its maximization with respect to unknown parameters the estimation algorithm is obtained. This algorithm is then simplified to the output error estimation method with the data in the form of transformed time histories, frequency response curves, or spectral and cross-spectral densities. The development is followed by a discussion on the equivalence of the cost function in the time and frequency domains, and on advantages and disadvantages of the frequency domain approach. The algorithm developed is applied in four examples to the estimation of longitudinal parameters of a general aviation airplane using computer generated and measured data in turbulent and still air. The cost functions in the time and frequency domains are shown to be equivalent; therefore, both approaches are complementary and not contradictory. Despite some computational advantages of parameter estimation in the frequency domain, this approach is limited to linear equations of motion with constant coefficients.
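
    In its output-error form, the frequency-domain approach reduces to fitting a parametric frequency response to measured data by nonlinear least squares on the real and imaginary residuals. A minimal sketch in Python (scipy), with a generic rational transfer function standing in for the aircraft equations of motion:

    import numpy as np
    from scipy.optimize import least_squares

    def fit_tf_freq_domain(omega, H_meas, nb, na):
        """Fit H(jw) = (b0 + b1*(jw) + ... ) / (1 + a1*(jw) + ... ) to
        measured frequency-response data H_meas at frequencies omega by
        minimizing the stacked real/imaginary residuals (a frequency-domain
        analogue of time-domain output-error estimation)."""
        jw = 1j * omega
        def resid(theta):
            b = theta[:nb]                         # ascending numerator coeffs
            a = np.concatenate(([1.0], theta[nb:]))  # monic denominator
            H = np.polyval(b[::-1], jw) / np.polyval(a[::-1], jw)
            r = H - H_meas
            return np.concatenate([r.real, r.imag])
        theta0 = np.zeros(nb + na)
        theta0[0] = 1.0
        return least_squares(resid, theta0).x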

  18. Cocaine Dependence Treatment Data: Methods for Measurement Error Problems With Predictors Derived From Stationary Stochastic Processes

    PubMed Central

    Guan, Yongtao; Li, Yehua; Sinha, Rajita

    2011-01-01

    In a cocaine dependence treatment study, we use linear and nonlinear regression models to model posttreatment cocaine craving scores and first cocaine relapse time. A subset of the covariates are summary statistics derived from baseline daily cocaine use trajectories, such as baseline cocaine use frequency and average daily use amount. These summary statistics are subject to estimation error and can therefore cause biased estimators for the regression coefficients. Unlike classical measurement error problems, the error we encounter here is heteroscedastic with an unknown distribution, and there are no replicates for the error-prone variables or instrumental variables. We propose two robust methods to correct for the bias: a computationally efficient method-of-moments-based method for linear regression models and a subsampling extrapolation method that is generally applicable to both linear and nonlinear regression models. Simulations and an application to the cocaine dependence treatment data are used to illustrate the efficacy of the proposed methods. Asymptotic theory and variance estimation for the proposed subsampling extrapolation method and some additional simulation results are described in the online supplementary material. PMID:21984854

  19. Analysis of the load selection on the error of source characteristics identification for an engine exhaust system

    NASA Astrophysics Data System (ADS)

    Zheng, Sifa; Liu, Haitao; Dan, Jiabi; Lian, Xiaomin

    2015-05-01

    The linear time-invariant assumption for the determination of acoustic source characteristics, the source strength and the source impedance, in the frequency domain has been proved reasonable in the design of an exhaust system. Different methods have been proposed for their identification, and the multi-load method is widely used for its convenience, varying the load number and impedance. Theoretical error analysis has rarely been addressed, although previous results have shown that an overdetermined set of open pipes can reduce the identification error. This paper contributes a theoretical error analysis for the load selection. The relationships between the error in the identification of source characteristics and the load selection were analysed. A general linear time-invariant model was built based on the four-load method. To analyse the error of the source impedance, an error estimation function was proposed. The dispersion of the source pressure was obtained by an inverse calculation as an indicator of the accuracy of the results. It was found that, for a given load length, load resistances at frequencies corresponding to odd multiples of one-quarter wavelength produce peaks and the maximum error in source impedance identification. Therefore, load impedances in the frequency ranges around odd multiples of one-quarter wavelength should not be used for source impedance identification. If the selected loads have more similar resistance values (i.e., the same order of magnitude), the identification error of the source impedance can be effectively reduced.
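
    The multi-load identification itself reduces to a small linear system per frequency. A sketch in Python, assuming measured complex load pressures p_i and known load impedances Z_i at one frequency (the notation is illustrative, not the paper's):

    import numpy as np

    def identify_source(p, Z):
        """Multi-load source identification at one frequency. With source
        strength Ps and source impedance Zs, each measured load pressure
        obeys p_i = Ps * Z_i / (Zs + Z_i). Rearranging gives a relation
        linear in (Zs, Ps):  p_i * Zs - Z_i * Ps = -p_i * Z_i,
        solved in the least-squares sense over all loads (>= 2, e.g. 4)."""
        p, Z = np.asarray(p, complex), np.asarray(Z, complex)
        A = np.column_stack([p, -Z])
        b = -p * Z
        (Zs, Ps), *_ = np.linalg.lstsq(A, b, rcond=None)
        return Ps, Zs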

  20. Evaluation of causes and frequency of medication errors during information technology downtime.

    PubMed

    Hanuscak, Tara L; Szeinbach, Sheryl L; Seoane-Vazquez, Enrique; Reichert, Brendan J; McCluskey, Charles F

    2009-06-15

    The causes and frequency of medication errors occurring during information technology downtime were evaluated. Individuals from a convenience sample of 78 hospitals who were directly responsible for supporting and maintaining clinical information systems (CISs) and automated dispensing systems (ADSs) were surveyed using an online tool between February 2007 and May 2007 to determine whether medication errors were reported during periods of system downtime. The errors were classified using the National Coordinating Council for Medication Error Reporting and Prevention severity scoring index. The percentage of respondents reporting downtime was estimated. Of the 78 eligible hospitals, 32 respondents with CIS and ADS responsibilities completed the online survey, for a response rate of 41%. For computerized prescriber order entry, patch installations and system upgrades caused downtime for an average of 57% of respondents over a 12-month period. Lost interfaces and interface malfunctions were reported for centralized and decentralized ADSs, with average downtime responses of 34% and 29%, respectively. The average downtime response was 31% for software malfunctions linked to clinical decision-support systems. Although patient harm did not result from 30 (54%) medication errors, the potential for harm was present for 9 (16%) of these errors. Medication errors occurred during CIS and ADS downtime despite the availability of backup systems and standard protocols for handling periods of system downtime. Efforts should be directed to reducing the frequency and length of downtime in order to minimize medication errors during such downtime.

  1. Effects of diffraction by ionospheric electron density irregularities on the range error in GNSS dual-frequency positioning and phase decorrelation

    NASA Astrophysics Data System (ADS)

    Gherm, Vadim E.; Zernov, Nikolay N.; Strangeways, Hal J.

    2011-06-01

    It can be important to determine the correlation of different-frequency signals in L band that have followed transionospheric paths. In the future, both GPS and the new Galileo satellite system will broadcast three frequencies, enabling more advanced three-frequency correction schemes, so knowledge of the correlations of different frequency pairs under scintillation conditions is desirable. Even at present, it would be helpful to know how dual-frequency Global Navigation Satellite System positioning can be affected by lack of correlation between the L1 and L2 signals. To treat this problem of signal correlation for the case of strong scintillation, a previously constructed simulator program, based on the hybrid method, has been further modified to simulate the fields for both frequencies on the ground, taking account of their cross-correlation. The errors in the two-frequency range-finding method caused by scintillation have then been estimated for particular ionospheric conditions and for a realistic, fully three-dimensional model of the ionospheric turbulence. The results, presented for five different frequency pairs (L1/L2, L1/L3, L1/L5, L2/L3, and L2/L5), show the dependence of diffractional errors on the scintillation index S4; the errors diverge further from a linear relationship the stronger the scintillation effects are, and may reach ten centimeters or more. The correlation of the phases at spaced frequencies has also been studied; the correlation coefficients for different pairs of frequencies depend on the phase-retrieval procedure and reduce slowly as both the variance of the electron density fluctuations and cycle slips increase.
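
    For context, the standard dual-frequency combination that removes the first-order ionospheric delay (and whose residual, diffraction-driven errors are what the paper quantifies) can be sketched in Python:

    def ionosphere_free(P1, P2, f1=1575.42e6, f2=1227.60e6):
        """Dual-frequency ionosphere-free combination: the first-order
        ionospheric delay scales as 1/f^2, so this weighted difference of
        two range observables removes it. Defaults are the GPS L1/L2
        carrier frequencies in Hz."""
        g = (f1 / f2) ** 2
        return (g * P1 - P2) / (g - 1.0)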

  2. Two-voice fundamental frequency estimation

    NASA Astrophysics Data System (ADS)

    de Cheveigné, Alain

    2002-05-01

    An algorithm is presented that estimates the fundamental frequencies of two concurrent voices or instruments. The algorithm models each voice as a periodic function of time and jointly estimates both periods by cancellation according to a previously proposed method [de Cheveigné and Kawahara, Speech Commun. 27, 175-185 (1999)]. The new algorithm improves on the old in several respects: it allows an unrestricted search range, effectively avoids harmonic and subharmonic errors, is more accurate (it uses two-dimensional parabolic interpolation), and is computationally less costly. It remains subject to unavoidable errors when the periods are in certain simple ratios and the task is inherently ambiguous. The algorithm is evaluated on a small database including speech, singing voice, and instrumental sounds. It can be extended in several ways: to decide the number of voices, to handle amplitude variations, and to estimate more than two voices (at the expense of increased processing cost and decreased reliability). It makes no use of instrument models, learned or otherwise, although it could usefully be combined with such models. [Work supported by the Cognitique programme of the French Ministry of Research and Technology.]
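
    The cancellation principle can be sketched in a few lines of Python; this brute-force version omits the normalization, parabolic interpolation, and ambiguity handling of the published algorithm:

    import numpy as np

    def joint_cancellation(x, max_lag):
        """Joint two-period estimation by cancellation: a comb filter
        y[n] = x[n] - x[n - T] nulls a T-periodic component, so cascading
        two such filters and minimizing residual power over (T1, T2)
        jointly estimates both periods."""
        best, argbest = np.inf, (0, 0)
        for T1 in range(1, max_lag):
            y = x[T1:] - x[:-T1]                   # cancel the first voice
            for T2 in range(T1, max_lag):
                r = y[T2:] - y[:-T2]               # cancel the second voice
                p = np.mean(r ** 2)                # residual power
                if p < best:
                    best, argbest = p, (T1, T2)
        return argbest                             # period estimates in samples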

  3. Semiblind channel estimation for MIMO-OFDM systems

    NASA Astrophysics Data System (ADS)

    Chen, Yi-Sheng; Song, Jyu-Han

    2012-12-01

    This article proposes a semiblind channel estimation method for multiple-input multiple-output orthogonal frequency-division multiplexing systems based on circular precoding. Relying on the precoding scheme at the transmitters, the autocorrelation matrix of the received data induces a structure relating the outer product of the channel frequency response matrix and precoding coefficients. This structure makes it possible to extract information about channel product matrices, which can be used to form a Hermitian matrix whose positive eigenvalues and corresponding eigenvectors yield the channel impulse response matrix. This article also tests the resistance of the precoding design to finite-sample estimation errors, and explores the effects of the precoding scheme on channel equalization by performing pairwise error probability analysis. The proposed method is immune to channel zero locations, and is reasonably robust to channel order overestimation. The proposed method is applicable to the scenarios in which the number of transmitters exceeds that of the receivers. Simulation results demonstrate the performance of the proposed method and compare it with some existing methods.

  4. Performance enhancement of wireless mobile adhoc networks through improved error correction and ICI cancellation

    NASA Astrophysics Data System (ADS)

    Sabir, Zeeshan; Babar, M. Inayatullah; Shah, Syed Waqar

    2012-12-01

    A mobile adhoc network (MANET) refers to an arrangement of wireless mobile nodes that dynamically and freely self-organize into temporary and arbitrary network topologies. Orthogonal frequency division multiplexing (OFDM) is the foremost choice for MANET system designers at the physical layer due to its inherent high-data-rate transmission, which corresponds to its high spectral efficiency. The downside of OFDM is its sensitivity to synchronization errors (frequency offsets and symbol timing). Most present-day techniques employing OFDM for data transmission support mobility as one of their primary features. This mobility causes small frequency offsets due to the production of Doppler frequencies, resulting in intercarrier interference (ICI), which degrades signal quality through crosstalk between the subcarriers of an OFDM symbol. An efficient frequency-domain block-type pilot-assisted ICI mitigation scheme is proposed in this article, which removes the effect of channel frequency offsets from the received OFDM symbols. The second problem addressed in this article is the noise induced into the received symbol by different sources, increasing its bit error rate and making it unsuitable for many applications. Forward-error-correcting turbo codes are employed in the proposed model, adding redundant bits that are later used for error detection and correction. At the receiver end, the maximum a posteriori (MAP) decoding algorithm is implemented using two component MAP decoders. These decoders exchange interleaved extrinsic soft information with each other in the form of log-likelihood ratios, improving the previous estimate of each decoded bit in every iteration.

  5. The influence of a time-varying least squares parametric model when estimating SFOAEs evoked with swept-frequency tones

    NASA Astrophysics Data System (ADS)

    Hajicek, Joshua J.; Selesnick, Ivan W.; Henin, Simon; Talmadge, Carrick L.; Long, Glenis R.

    2018-05-01

    Stimulus frequency otoacoustic emissions (SFOAEs) were evoked and estimated using swept-frequency tones with and without the use of swept suppressor tones. SFOAEs were estimated using a least-squares fitting procedure. The estimated SFOAEs for the two paradigms (with- and without-suppression) were similar in amplitude and phase. The fitting procedure minimizes the square error between a parametric model of total ear-canal pressure (with unknown amplitudes and phases) and ear-canal pressure acquired during each paradigm. Modifying the parametric model to allow SFOAE amplitude and phase to vary over time revealed additional amplitude and phase fine structure in the without-suppressor, but not the with-suppressor paradigm. The use of a time-varying parametric model to estimate SFOAEs without-suppression may provide additional information about cochlear mechanics not available when using a with-suppressor paradigm.
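
    The least-squares fitting idea can be sketched as a windowed linear fit of in-phase and quadrature amplitudes against the known instantaneous phase of the sweep. The window and hop lengths below are illustrative assumptions, and the published parametric model fits stimulus and emission components jointly rather than a single component:

    import numpy as np

    def ls_component_fit(p, phi, win=2048, hop=512):
        """Within each short window, model the recorded ear-canal pressure
        as A*cos(phi) + B*sin(phi), where phi is the known instantaneous
        phase of the swept component; solving for (A, B) per window gives
        a slowly time-varying complex amplitude estimate."""
        amps = []
        for s in range(0, len(p) - win, hop):
            sl = slice(s, s + win)
            M = np.column_stack([np.cos(phi[sl]), np.sin(phi[sl])])
            (a, b), *_ = np.linalg.lstsq(M, p[sl], rcond=None)
            amps.append(complex(a, -b))            # complex amplitude A - jB
        return np.array(amps)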

  6. Estimation of Magnitude and Frequency of Floods for Streams on the Island of Oahu, Hawaii

    USGS Publications Warehouse

    Wong, Michael F.

    1994-01-01

    This report describes techniques for estimating the magnitude and frequency of floods for the island of Oahu. The log-Pearson Type III distribution and methodology recommended by the Interagency Committee on Water Data was used to determine the magnitude and frequency of floods at 79 gaging stations that had 11 to 72 years of record. Multiple regression analysis was used to construct regression equations to transfer the magnitude and frequency information from gaged sites to ungaged sites. Oahu was divided into three hydrologic regions to define relations between peak discharge and drainage-basin and climatic characteristics. Regression equations are provided to estimate the 2-, 5-, 10-, 25-, 50-, and 100-year peak discharges at ungaged sites. Significant basin and climatic characteristics included in the regression equations are drainage area, median annual rainfall, and the 2-year, 24-hour rainfall intensity. Drainage areas for sites used in this study ranged from 0.03 to 45.7 square miles. Standard error of prediction for the regression equations ranged from 34 to 62 percent. Peak-discharge data collected through water year 1988, geographic information system (GIS) technology, and generalized least-squares regression were used in the analyses. The use of GIS seems to be a more flexible and consistent means of defining and calculating basin and climatic characteristics than using manual methods. Standard errors of estimate for the regression equations in this report are an average of 8 percent less than those published in previous studies.

  7. Accuracy of heart rate variability estimation by photoplethysmography using a smartphone: Processing optimization and fiducial point selection.

    PubMed

    Ferrer-Mileo, V; Guede-Fernandez, F; Fernandez-Chimeno, M; Ramos-Castro, J; Garcia-Gonzalez, M A

    2015-08-01

    This work compares several fiducial points for detecting the arrival of a new pulse in a photoplethysmographic signal using the built-in camera of smartphones or a photoplethysmograph. An optimization of the signal preprocessing stage has also been carried out. Finally, we characterize the error produced when the best cutoff frequencies and fiducial point are used for smartphones and the photoplethysmograph, and assess whether the error of smartphones can reasonably be explained by variations in pulse transit time. The results reveal that the peak of the first derivative and the minimum of the second derivative of the pulse wave have the lowest error. Moreover, for these points, high-pass filtering the signal between 0.1 and 0.8 Hz and low-pass filtering around 2.7 Hz or 3.5 Hz are the best cutoff frequencies. Finally, the error in smartphones is slightly higher than in a photoplethysmograph.
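
    A sketch of the reported best-performing processing chain in Python (scipy), using one choice of cutoffs within the reported ranges; the minimum-distance constraint in the peak picker is an added assumption for robustness:

    import numpy as np
    from scipy.signal import butter, filtfilt, find_peaks

    def pulse_fiducials(ppg, fs, hp=0.8, lp=2.7):
        """Band-limit the PPG (high-pass within 0.1-0.8 Hz, low-pass near
        2.7-3.5 Hz) and take peaks of the first derivative as the beat
        fiducial points."""
        b, a = butter(2, [hp / (fs / 2), lp / (fs / 2)], btype="band")
        x = filtfilt(b, a, ppg)                    # zero-phase band-pass filter
        dx = np.gradient(x) * fs                   # first derivative
        peaks, _ = find_peaks(dx, distance=int(0.3 * fs))  # beats >= 0.3 s apart
        return peaks                               # sample indices of fiducials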

  8. The National Streamflow Statistics Program: A Computer Program for Estimating Streamflow Statistics for Ungaged Sites

    USGS Publications Warehouse

    Ries(compiler), Kernell G.; With sections by Atkins, J. B.; Hummel, P.R.; Gray, Matthew J.; Dusenbury, R.; Jennings, M.E.; Kirby, W.H.; Riggs, H.C.; Sauer, V.B.; Thomas, W.O.

    2007-01-01

    The National Streamflow Statistics (NSS) Program is a computer program that should be useful to engineers, hydrologists, and others for planning, management, and design applications. NSS compiles all current U.S. Geological Survey (USGS) regional regression equations for estimating streamflow statistics at ungaged sites in an easy-to-use interface that operates on computers with Microsoft Windows operating systems. NSS expands on the functionality of the USGS National Flood Frequency Program, and replaces it. The regression equations included in NSS are used to transfer streamflow statistics from gaged to ungaged sites through the use of watershed and climatic characteristics as explanatory or predictor variables. Generally, the equations were developed on a statewide or metropolitan-area basis as part of cooperative study programs. Equations are available for estimating rural and urban flood-frequency statistics, such as the 100-year flood, for every state, for Puerto Rico, and for the island of Tutuila, American Samoa. Equations are available for estimating other statistics, such as the mean annual flow, monthly mean flows, flow-duration percentiles, and low-flow frequencies (such as the 7-day, 10-year low flow) for less than half of the states. All equations available for estimating streamflow statistics other than flood-frequency statistics assume rural (non-regulated, non-urbanized) conditions. The NSS output provides indicators of the accuracy of the estimated streamflow statistics. The indicators may include any combination of the standard error of estimate, the standard error of prediction, the equivalent years of record, or 90 percent prediction intervals, depending on what was provided by the authors of the equations. The program includes several other features that can be used only for flood-frequency estimation. These include the ability to generate flood-frequency plots, and plots of typical flood hydrographs for selected recurrence intervals, estimates of the probable maximum flood, extrapolation of the 500-year flood when an equation for estimating it is not available, and weighting techniques to improve flood-frequency estimates for gaging stations and ungaged sites on gaged streams. This report describes the regionalization techniques used to develop the equations in NSS and provides guidance on the applicability and limitations of the techniques. The report also includes a users manual and a summary of equations available for estimating basin lagtime, which is needed by the program to generate flood hydrographs. The NSS software and accompanying database, and the documentation for the regression equations included in NSS, are available on the Web at http://water.usgs.gov/software/.

  9. A New Method for Estimating the Effective Population Size from Allele Frequency Changes

    PubMed Central

    Pollak, Edward

    1983-01-01

    A new procedure is proposed for estimating the effective population size, given that information is available on changes in frequencies of the alleles at one or more independently segregating loci and the population is observed at two or more separate times. Approximate expressions are obtained for the variances of the new statistic, as well as others, also based on allele frequency changes, that have been discussed in the literature. This analysis indicates that the new statistic will generally have a smaller variance than the others. Estimates of effective population sizes and of the standard errors of the estimates are computed for data on two fly populations that have been discussed in earlier papers. In both cases, there is evidence that the effective population size is very much smaller than the minimum census size of the population. PMID:17246147
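
    A back-of-the-envelope version of the temporal method in Python, using the Nei-Tajima-style standardized variance with a sampling correction; Pollak's proposed statistic has a smaller variance and differs in detail from this sketch:

    import numpy as np

    def temporal_ne(x, y, t, S0, St):
        """Temporal-method estimate of effective population size from allele
        frequencies x (generation 0) and y (generation t) at independent,
        non-fixed loci, with sample sizes S0 and St (individuals). The
        standardized variance of frequency change, corrected for sampling
        noise, is inversely proportional to the effective size."""
        fc = np.mean((x - y) ** 2 / ((x + y) / 2 - x * y))
        fc_adj = fc - 1.0 / (2 * S0) - 1.0 / (2 * St)   # remove sampling noise
        return t / (2.0 * fc_adj)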

  10. Statistical properties of Fourier-based time-lag estimates

    NASA Astrophysics Data System (ADS)

    Epitropakis, A.; Papadakis, I. E.

    2016-06-01

    Context. The study of X-ray time-lag spectra in active galactic nuclei (AGN) is currently an active research area, since it has the potential to illuminate the physics and geometry of the innermost region (i.e., close to the putative super-massive black hole) in these objects. To obtain reliable information from these studies, the statistical properties of time-lags estimated from data must be known as accurately as possible. Aims: We investigated the statistical properties of Fourier-based time-lag estimates (i.e., based on the cross-periodogram), using evenly sampled time series with no missing points. Our aim is to provide practical "guidelines" on estimating time-lags that are minimally biased (i.e., whose mean is close to their intrinsic value) and have known errors. Methods: Our investigation is based on both analytical work and extensive numerical simulations. The latter consisted of generating artificial time series with various signal-to-noise ratios and sampling patterns/durations similar to those offered by AGN observations with present and past X-ray satellites. We also considered a range of different model time-lag spectra commonly assumed in X-ray analyses of compact accreting systems. Results: Discrete sampling, binning and finite light curve duration cause the mean of the time-lag estimates to have a smaller magnitude than their intrinsic values. Smoothing (i.e., binning over consecutive frequencies) of the cross-periodogram can add extra bias at low frequencies. The use of light curves with low signal-to-noise ratio reduces the intrinsic coherence, and can introduce a bias to the sample coherence, time-lag estimates, and their predicted error. Conclusions: Our results have direct implications for X-ray time-lag studies in AGN, but can also be applied to similar studies in other research fields. We find that: a) time-lags should be estimated at frequencies lower than ≈ 1/2 the Nyquist frequency to minimise the effects of discrete binning of the observed time series; b) smoothing of the cross-periodogram should be avoided, as this may introduce significant bias to the time-lag estimates, which can be taken into account by assuming a model cross-spectrum (and not just a model time-lag spectrum); c) time-lags should be estimated by dividing observed time series into a number, say m, of shorter data segments and averaging the resulting cross-periodograms; d) if the data segments have a duration ≳ 20 ks, the time-lag bias is ≲15% of its intrinsic value for the model cross-spectra and power-spectra considered in this work. This bias should be estimated in practice (by considering possible intrinsic cross-spectra that may be applicable to the time-lag spectra at hand) to assess the reliability of any time-lag analysis; e) the effects of experimental noise can be minimised by only estimating time-lags in the frequency range where the sample coherence is larger than 1.2/(1 + 0.2m). In this range, the amplitude of noise variations caused by measurement errors is smaller than the amplitude of the signal's intrinsic variations. As long as m ≳ 20, time-lags estimated by averaging over individual data segments have analytical error estimates that are within 95% of the true scatter around their mean, and their distribution is similar, albeit not identical, to a Gaussian.
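
    Recommendation (c) above, averaging cross-periodograms over m data segments, can be sketched in Python as follows (evenly sampled series with no missing points assumed):

    import numpy as np

    def time_lag_spectrum(x, y, m, dt):
        """Estimate time lags between two evenly sampled light curves by
        splitting them into m segments, averaging the cross-periodograms,
        and converting the cross-spectrum phase to a lag:
        tau(f) = phase / (2*pi*f). Sign conventions vary; here a positive
        value means y leads x for y(t) = x(t - tau) conventions reversed."""
        L = (len(x) // m) * m
        X = np.fft.rfft(x[:L].reshape(m, -1), axis=1)
        Y = np.fft.rfft(y[:L].reshape(m, -1), axis=1)
        C = np.mean(np.conj(X[:, 1:]) * Y[:, 1:], axis=0)  # skip the zero frequency
        f = np.fft.rfftfreq(L // m, dt)[1:]
        return f, np.angle(C) / (2 * np.pi * f)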

  11. Radar sensitivity and antenna scan pattern study for a satellite-based Radar Wind Sounder (RAWS)

    NASA Technical Reports Server (NTRS)

    Stuart, Michael A.

    1992-01-01

    Modeling global atmospheric circulations and forecasting the weather would improve greatly if worldwide information on winds aloft were available. Recognition of this led to the inclusion of the LAser Wind Sounder (LAWS) system, which measures Doppler shifts from aerosols, in the planned Earth Observing System (EOS). However, gaps will exist in LAWS coverage where heavy clouds are present. The RAdar Wind Sounder (RAWS) is an instrument that could fill these gaps by measuring Doppler shifts from clouds and rain. Previous studies conducted at the University of Kansas show RAWS to be a feasible instrument. This thesis pertains to the signal-to-noise ratio (SNR) sensitivity, the transmit waveform, and the limitations of the antenna scan pattern of the RAWS system. A drop-size distribution model is selected and applied to the radar range equation for the sensitivity analysis. Six frequencies are used in computing the SNR for several cloud types to determine the optimal transmit frequency. The results show that using two frequencies, a higher one (94 GHz) for sensitivity to thinner clouds and a lower one (24 GHz) for better penetration in rain, provides ample SNR. The waveform design supports covariance estimation processing. This estimator eliminates the Doppler ambiguities compounded by the selection of such high transmit frequencies, while providing an estimate of the mean frequency. The unambiguous range and velocity computations show them to be within acceptable limits. The design goal for the RAWS system is to limit the wind-speed error to less than 1 m/s. Due to linear dependence between the vectors of a three-vector scan pattern, a reasonable wind-speed error is unattainable with that pattern. Only the two-vector scan pattern falls within the wind-error limits, for azimuth angles between 16 deg and 70 deg. However, this scan only allows two components of the wind to be determined. A technique is therefore shown, based on the Z-R-V relationships, that permits the vertical component (i.e., rain) to be computed. Thus the horizontal wind components may be obtained from the covariance estimator and the vertical component from the reflectivity factor. Finally, a new candidate system is introduced that summarizes the parameters taken from previous RAWS studies or those modified in this thesis.
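
    The covariance (pulse-pair) estimator mentioned above has a compact standard form; here is a sketch for complex I/Q radar samples, with invented numbers rather than RAWS parameters.

    ```python
    import numpy as np

    def pulse_pair_frequency(iq, prt):
        """Mean Doppler frequency from complex (I/Q) radar samples using the
        lag-1 autocovariance ("pulse-pair"/covariance) estimator:
        f = angle(R1) / (2*pi*PRT). A sketch; real systems add noise
        correction and spectrum-width estimates."""
        iq = np.asarray(iq, complex)
        r1 = np.mean(np.conj(iq[:-1]) * iq[1:])   # lag-1 autocovariance
        return np.angle(r1) / (2.0 * np.pi * prt)

    # Example: a 400 Hz Doppler shift sampled at PRT = 1 ms (aliases beyond +/-500 Hz)
    prt = 1e-3
    t = np.arange(64) * prt
    print(pulse_pair_frequency(np.exp(2j * np.pi * 400.0 * t), prt))  # ~400 Hz
    ```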

  12. Role-modeling and medical error disclosure: a national survey of trainees.

    PubMed

    Martinez, William; Hickson, Gerald B; Miller, Bonnie M; Doukas, David J; Buckley, John D; Song, John; Sehgal, Niraj L; Deitz, Jennifer; Braddock, Clarence H; Lehmann, Lisa Soleymani

    2014-03-01

    To measure trainees' exposure to negative and positive role-modeling for responding to medical errors and to examine the association between that exposure and trainees' attitudes and behaviors regarding error disclosure. Between May 2011 and June 2012, 435 residents at two large academic medical centers and 1,187 medical students from seven U.S. medical schools received anonymous, electronic questionnaires. The questionnaire asked respondents about (1) experiences with errors, (2) training for responding to errors, (3) behaviors related to error disclosure, (4) exposure to role-modeling for responding to errors, and (5) attitudes regarding disclosure. Using multivariate regression, the authors analyzed whether frequency of exposure to negative and positive role-modeling independently predicted two primary outcomes: (1) attitudes regarding disclosure and (2) nontransparent behavior in response to a harmful error. The response rate was 55% (884/1,622). Training on how to respond to errors had the largest independent, positive effect on attitudes (standardized effect estimate, 0.32, P < .001); negative role-modeling had the largest independent, negative effect (standardized effect estimate, -0.26, P < .001). Positive role-modeling had a positive effect on attitudes (standardized effect estimate, 0.26, P < .001). Exposure to negative role-modeling was independently associated with an increased likelihood of trainees' nontransparent behavior in response to an error (OR 1.37, 95% CI 1.15-1.64; P < .001). Exposure to role-modeling predicts trainees' attitudes and behavior regarding the disclosure of harmful errors. Negative role models may be a significant impediment to disclosure among trainees.

  13. An assessment of envelope-based demodulation in case of proximity of carrier and modulation frequencies

    NASA Astrophysics Data System (ADS)

    Shahriar, Md Rifat; Borghesani, Pietro; Randall, R. B.; Tan, Andy C. C.

    2017-11-01

    Demodulation is a necessary step in the field of diagnostics to reveal faults whose signatures appear as an amplitude and/or frequency modulation. The Hilbert transform has conventionally been used for the calculation of the analytic signal required in the demodulation process. However, the carrier and modulation frequencies must meet the conditions set by the Bedrosian identity for the Hilbert transform to be applicable for demodulation. This condition, basically requiring the carrier frequency to be sufficiently higher than the frequency of the modulation harmonics, is usually satisfied in many traditional diagnostic applications (e.g. vibration analysis of gear and bearing faults) due to the order-of-magnitude ratio between the carrier and modulation frequency. However, the diversification of diagnostic approaches and applications shows cases (e.g. electrical signature analysis-based diagnostics) where the carrier frequency is in close proximity to the modulation frequency, thus challenging the applicability of the Bedrosian theorem. This work presents an analytic study to quantify the error introduced by Hilbert transform-based demodulation when the Bedrosian identity is not satisfied, and proposes a mitigation strategy to combat the error. An experimental study is also carried out to verify the analytical results. The outcome of the error analysis sets a confidence limit on the estimated modulation (both shape and magnitude) achieved through Hilbert transform-based demodulation when the Bedrosian theorem is violated. The proposed mitigation strategy, however, is found to be effective in combating the demodulation error arising in this scenario, thus extending the applicability of Hilbert transform-based demodulation.
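
    For reference, envelope demodulation via the analytic signal takes only a few lines. The sketch below uses an assumed carrier well above the modulation frequency, so the Bedrosian condition holds; moving fc toward fm reproduces the error regime analysed above.

    ```python
    import numpy as np
    from scipy.signal import hilbert

    # Amplitude demodulation via the analytic signal.
    fs, fc, fm = 10_000.0, 1_000.0, 50.0          # sample, carrier, modulation (Hz)
    t = np.arange(0, 1.0, 1.0 / fs)
    envelope_true = 1.0 + 0.5 * np.cos(2 * np.pi * fm * t)
    x = envelope_true * np.cos(2 * np.pi * fc * t)

    envelope_est = np.abs(hilbert(x))             # |analytic signal|
    err = np.max(np.abs(envelope_est - envelope_true))
    print(f"max envelope error: {err:.4f}")       # small here; grows as fc -> fm
    ```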

  14. Real-Time Dynamic Modeling - Data Information Requirements and Flight Test Results

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.; Smith, Mark S.

    2008-01-01

    Practical aspects of identifying dynamic models for aircraft in real time were studied. Topics include formulation of an equation-error method in the frequency domain to estimate non-dimensional stability and control derivatives in real time, data information content for accurate modeling results, and data information management techniques such as data forgetting, incorporating prior information, and optimized excitation. Real-time dynamic modeling was applied to simulation data and flight test data from a modified F-15B fighter aircraft, and to operational flight data from a subscale jet transport aircraft. Estimated parameter standard errors and comparisons with results from a batch output-error method in the time domain were used to demonstrate the accuracy of the identified real-time models.
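
    The frequency-domain equation-error idea can be sketched for a scalar example: transform the measured signals, replace differentiation by multiplication with jω over the low-frequency band, and solve a complex least-squares problem. This is an illustration of the principle, not NASA's flight implementation.

    ```python
    import numpy as np

    def freq_domain_equation_error(x, u, dt, f_max=1.5):
        """Equation-error estimates of (a, b) in xdot = a*x + b*u.

        Works in the frequency domain: the spectrum of xdot is jw*X(w),
        so the model becomes a linear regression on complex data over a
        chosen low-frequency band (here f < f_max Hz).
        """
        X = np.fft.rfft(x - np.mean(x))
        U = np.fft.rfft(u - np.mean(u))
        f = np.fft.rfftfreq(len(x), d=dt)
        band = (f > 0) & (f < f_max)
        lhs = 2j * np.pi * f[band] * X[band]          # spectrum of xdot
        A = np.column_stack([X[band], U[band]])
        theta, *_ = np.linalg.lstsq(A, lhs, rcond=None)
        return theta.real

    # Example: first-order system with a = -2, b = 1
    dt = 0.01
    t = np.arange(0, 20, dt)
    u = np.sin(0.5 * t) + 0.5 * np.sin(1.3 * t)
    x = np.zeros_like(t)
    for k in range(len(t) - 1):                       # Euler simulation
        x[k + 1] = x[k] + dt * (-2.0 * x[k] + u[k])
    print(freq_domain_equation_error(x, u, dt))       # approximately [-2, 1]
    ```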

  15. Real-Time Dynamic Modeling - Data Information Requirements and Flight Test Results

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.; Smith, Mark S.

    2010-01-01

    Practical aspects of identifying dynamic models for aircraft in real time were studied. Topics include formulation of an equation-error method in the frequency domain to estimate non-dimensional stability and control derivatives in real time, data information content for accurate modeling results, and data information management techniques such as data forgetting, incorporating prior information, and optimized excitation. Real-time dynamic modeling was applied to simulation data and flight test data from a modified F-15B fighter aircraft, and to operational flight data from a subscale jet transport aircraft. Estimated parameter standard errors, prediction cases, and comparisons with results from a batch output-error method in the time domain were used to demonstrate the accuracy of the identified real-time models.

  16. Pseudorange error analysis for precise indoor positioning system

    NASA Astrophysics Data System (ADS)

    Pola, Marek; Bezoušek, Pavel

    2017-05-01

    A system for indoor localization of a transmitter, intended for fire fighters or members of rescue corps, is currently under development. In this system, the position of a transmitter of an ultra-wideband orthogonal frequency-division multiplexing signal is determined by the time-difference-of-arrival method. The position measurement accuracy depends strongly on the accuracy of the direct-path signal time-of-arrival estimation, which is degraded by severe multipath in complicated environments such as buildings. The aim of this article is to assess errors in the direct-path signal time-of-arrival determination caused by multipath signal propagation and noise. Two methods of direct-path signal time-of-arrival estimation are compared here: the cross-correlation method and the spectral estimation method.
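
    A sketch of the first of the two compared approaches, the cross-correlation method: the direct-path time of arrival is taken as the lag that maximizes the correlation between the received signal and the known transmitted waveform. Multipath replicas shift or split this peak, which is exactly the error source assessed above. The signals here are synthetic.

    ```python
    import numpy as np
    from scipy.signal import correlate, correlation_lags

    def toa_crosscorr(rx, tx, fs):
        """Direct-path time of arrival from the peak of the cross-correlation
        between the received signal rx and the known template tx."""
        c = correlate(rx, tx, mode="full")
        lags = correlation_lags(len(rx), len(tx), mode="full")
        return lags[np.argmax(np.abs(c))] / fs    # seconds

    # Example: template delayed by 25 samples in noise
    fs = 1e6
    tx = np.random.randn(128)
    rx = np.concatenate([np.zeros(25), tx, np.zeros(50)]) \
        + 0.1 * np.random.randn(203)
    print(toa_crosscorr(rx, tx, fs) * fs)         # ~25 samples
    ```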

  17. Modeling and characterization of multipath in global navigation satellite system ranging signals

    NASA Astrophysics Data System (ADS)

    Weiss, Jan Peter

    The Global Positioning System (GPS) provides position, velocity, and time information to users anywhere near the earth, in real time and regardless of weather conditions. Since the system became operational, improvements in many areas have reduced systematic errors affecting GPS measurements such that multipath, defined as any signal taking a path other than the direct one, has become a significant, if not dominant, error source for many applications. This dissertation utilizes several approaches to characterize and model multipath errors in GPS measurements. Multipath errors in GPS ranging signals are characterized for several receiver systems and environments. Experimental P(Y) code multipath data are analyzed for ground stations with multipath levels ranging from minimal to severe, a C-12 turboprop, an F-18 jet, and an aircraft carrier. Comparisons between receivers utilizing single patch antennas and multi-element arrays are also made. In general, the results show significant reductions in multipath with antenna array processing, although large errors can occur even with this kind of equipment. Analysis of airborne platform multipath shows that the errors tend to be small in magnitude, because the size of the aircraft limits the geometric delay of multipath signals, and high in frequency, because aircraft dynamics cause rapid variations in geometric delay. A comprehensive multipath model is developed and validated. The model integrates 3D structure models, satellite ephemerides, electromagnetic ray-tracing algorithms, and detailed antenna and receiver models to predict multipath errors. Validation is performed by comparing experimental and simulated multipath via overall error statistics, per satellite time histories, and frequency content analysis. The validation environments include two urban buildings, an F-18, an aircraft carrier, and a rural area where terrain multipath dominates. The validated models are used to identify multipath sources, characterize signal properties, evaluate additional antenna and receiver tracking configurations, and estimate the reflection coefficients of multipath-producing surfaces. Dynamic models for an F-18 landing on an aircraft carrier correlate aircraft dynamics to multipath frequency content; the model also characterizes the separate contributions of multipath due to the aircraft, ship, and ocean to the overall error statistics. Finally, reflection coefficients for multipath produced by terrain are estimated via a least-squares algorithm.

  18. Monthly ENSO Forecast Skill and Lagged Ensemble Size

    PubMed Central

    DelSole, T.; Tippett, M.K.; Pegion, K.

    2018-01-01

    The mean square error (MSE) of a lagged ensemble of monthly forecasts of the Niño 3.4 index from the Climate Forecast System (CFSv2) is examined with respect to ensemble size and configuration. Although the real-time forecast is initialized 4 times per day, it is possible to infer the MSE for arbitrary initialization frequency and for burst ensembles by fitting error covariances to a parametric model and then extrapolating to arbitrary ensemble size and initialization frequency. Applying this method to real-time forecasts, we find that the MSE consistently reaches a minimum for a lagged ensemble size between one and eight days, when four initializations per day are included. This ensemble size is consistent with the 8-10 day lagged ensemble configuration used operationally. Interestingly, the skill of both ensemble configurations is close to the estimated skill of the infinite ensemble. The skills of the weighted, lagged, and burst ensembles are found to be comparable. Certain unphysical features of the estimated error growth were tracked down to problems with the climatology and data discontinuities. PMID:29937973

  19. Monthly ENSO Forecast Skill and Lagged Ensemble Size

    NASA Astrophysics Data System (ADS)

    Trenary, L.; DelSole, T.; Tippett, M. K.; Pegion, K.

    2018-04-01

    The mean square error (MSE) of a lagged ensemble of monthly forecasts of the Niño 3.4 index from the Climate Forecast System (CFSv2) is examined with respect to ensemble size and configuration. Although the real-time forecast is initialized 4 times per day, it is possible to infer the MSE for arbitrary initialization frequency and for burst ensembles by fitting error covariances to a parametric model and then extrapolating to arbitrary ensemble size and initialization frequency. Applying this method to real-time forecasts, we find that the MSE consistently reaches a minimum for a lagged ensemble size between one and eight days, when four initializations per day are included. This ensemble size is consistent with the 8-10 day lagged ensemble configuration used operationally. Interestingly, the skill of both ensemble configurations is close to the estimated skill of the infinite ensemble. The skills of the weighted, lagged, and burst ensembles are found to be comparable. Certain unphysical features of the estimated error growth were tracked down to problems with the climatology and data discontinuities.

  20. [Locally weighted least squares estimation of DPOAE evoked by continuously sweeping primaries].

    PubMed

    Han, Xiaoli; Fu, Xinxing; Cui, Jie; Xiao, Ling

    2013-12-01

    Distortion product otoacoustic emission (DPOAE) signals can be used in the diagnosis of hearing loss and thus have important clinical value. Using continuously sweeping primaries provides an efficient way to record DPOAE data rapidly when DPOAE is measured over a large frequency range. In this paper, locally weighted least squares estimation (LWLSE) of the 2f1-f2 DPOAE is presented, based on the least-squares-fit (LSF) algorithm, in which the DPOAE is evoked by continuously sweeping tones. In our study, we used a weighted error function as the loss function, with the weighting matrices chosen in a local sense, to obtain a smaller estimation variance. First, an ordinary least squares estimate of the DPOAE parameters was obtained. Then the error vectors were grouped and a different local weighting matrix was calculated for each group. Finally, the parameters of the DPOAE signal were estimated according to the least squares principle using the local weighting matrices. The simulation results showed that the estimation variance and fluctuation errors were reduced, so the method estimates DPOAE and stimuli more accurately and stably, which facilitates extraction of clearer DPOAE fine structure.
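
    The LSF core is an ordinary least-squares fit of a sinusoid of known frequency; the sketch below adds a simple per-sample weighting as a stand-in for the paper's local weighting matrices, whose exact grouping scheme is not reproduced here. Frequencies and noise levels are invented.

    ```python
    import numpy as np

    def weighted_sine_fit(x, t, f, w=None):
        """Least-squares amplitude/phase of a sinusoid of known frequency f
        in signal x (the LSF step). Optional weights w downweight noisy
        samples, a simplified stand-in for local weighting matrices."""
        A = np.column_stack([np.cos(2 * np.pi * f * t),
                             np.sin(2 * np.pi * f * t)])
        if w is None:
            w = np.ones_like(x)
        sw = np.sqrt(w)
        coef, *_ = np.linalg.lstsq(A * sw[:, None], x * sw, rcond=None)
        amp = np.hypot(*coef)
        phase = np.arctan2(-coef[1], coef[0])
        return amp, phase

    fs, f_dp = 44100.0, 1200.0
    t = np.arange(0, 0.05, 1 / fs)
    x = 0.02 * np.cos(2 * np.pi * f_dp * t + 0.3) + 0.005 * np.random.randn(t.size)
    print(weighted_sine_fit(x, t, f_dp))          # amp ~0.02, phase ~0.3
    ```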

  1. A new method for determining the optimal lagged ensemble

    PubMed Central

    DelSole, T.; Tippett, M. K.; Pegion, K.

    2017-01-01

    We propose a general methodology for determining the lagged ensemble that minimizes the mean square forecast error. The MSE of a lagged ensemble is shown to depend only on a quantity called the cross-lead error covariance matrix, which can be estimated from a short hindcast data set and parameterized in terms of analytic functions of time. The resulting parameterization allows the skill of forecasts to be evaluated for an arbitrary ensemble size and initialization frequency. Remarkably, the parameterization also can estimate the MSE of a burst ensemble simply by taking the limit of an infinitely small interval between initialization times. This methodology is applied to forecasts of the Madden-Julian Oscillation (MJO) from version 2 of the Climate Forecast System (CFSv2). For leads greater than a week, little improvement is found in the MJO forecast skill when lagged ensembles longer than 5 days or initializations more frequent than 4 times per day are used. We find that if initialization is too infrequent, important structures of the lagged error covariance matrix are lost. Lastly, we demonstrate that the forecast error at leads ≥10 days can be reduced by optimally weighting the lagged ensemble members. The weights are shown to depend only on the cross-lead error covariance matrix. While the methodology developed here is applied to CFSv2, the technique can be easily adapted to other forecast systems. PMID:28580050
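
    Under the usual minimum-variance argument, the optimal weights follow from the cross-lead error covariance matrix C alone, as the abstract states. A sketch assuming unbiased errors at every lead, with an invented covariance model:

    ```python
    import numpy as np

    def optimal_lag_weights(C):
        """Weights for a lagged ensemble that minimise the MSE of the
        weighted mean, given the cross-lead error covariance matrix C:
        w = C^-1 1 / (1' C^-1 1)."""
        ones = np.ones(C.shape[0])
        w = np.linalg.solve(C, ones)
        return w / w.sum()

    # Example: errors grow and decorrelate with lead time
    leads = np.arange(5)
    C = 0.5 * np.exp(-np.abs(leads[:, None] - leads[None, :]) / 3.0) \
        * np.outer(1 + 0.2 * leads, 1 + 0.2 * leads)
    print(optimal_lag_weights(C))   # more weight on the most recent member
    ```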

  2. Estimation of heart rate and heart rate variability from pulse oximeter recordings using localized model fitting.

    PubMed

    Wadehn, Federico; Carnal, David; Loeliger, Hans-Andrea

    2015-08-01

    Heart rate variability is one of the key parameters for assessing the health status of a subject's cardiovascular system. This paper presents a local model fitting algorithm used for finding single heart beats in photoplethysmogram recordings. The local fit of exponentially decaying cosines of frequencies within the physiological range is used to detect the presence of a heart beat. Using 42 subjects from the CapnoBase database, the average heart rate error was 0.16 BPM and the standard deviation of the absolute estimation error was 0.24 BPM.
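
    Local model fitting of this kind can be sketched with a nonlinear least-squares fit of an exponentially decaying cosine over a short window; the model form matches the description above, while the window length, initial guesses, and data are invented.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def decaying_cosine(t, a, tau, f, phi):
        """Local model: exponentially decaying cosine."""
        return a * np.exp(-t / tau) * np.cos(2 * np.pi * f * t + phi)

    # Fit the local model over a short window; a good fit with f in the
    # physiological range flags a heart beat.
    fs = 100.0
    t = np.arange(0, 1.0, 1 / fs)
    beat = decaying_cosine(t, 1.0, 0.4, 1.2, 0.0) + 0.05 * np.random.randn(t.size)
    p0 = [1.0, 0.3, 1.0, 0.0]                     # amplitude, decay, Hz, phase
    popt, _ = curve_fit(decaying_cosine, t, beat, p0=p0)
    print(f"estimated rate: {popt[2] * 60:.1f} BPM")
    ```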

  3. CORRELATED ERRORS IN EARTH POINTING MISSIONS

    NASA Technical Reports Server (NTRS)

    Bilanow, Steve; Patt, Frederick S.

    2005-01-01

    Two different Earth-pointing missions dealing with attitude control and dynamics changes illustrate concerns with correlated error sources and coupled effects that can occur. On the OrbView-2 (OV-2) spacecraft, the assumption of a nearly-inertially-fixed momentum axis was called into question when a residual dipole bias apparently changed magnitude. The possibility that alignment adjustments and/or sensor calibration errors may compensate for actual motions of the spacecraft is discussed, and uncertainties in the dynamics are considered. Particular consideration is given to basic orbit frequency and twice orbit frequency effects and their high correlation over the short science observation data span. On the Tropical Rainfall Measuring Mission (TRMM) spacecraft, the switch to a contingency Kalman filter control mode created changes in the pointing error patterns. Results from independent checks on the TRMM attitude using science instrument data are reported, and bias shifts and error correlations are discussed. Various orbit frequency effects are common with the flight geometry for Earth pointing instruments. In both dual-spin momentum stabilized spacecraft (like OV-2) and three axis stabilized spacecraft with gyros (like TRMM under Kalman filter control), changes in the initial attitude state propagate into orbit frequency variations in attitude and some sensor measurements. At the same time, orbit frequency measurement effects can arise from dynamics assumptions, environment variations, attitude sensor calibrations, or ephemeris errors. Also, constant environment torques for dual spin spacecraft have similar effects to gyro biases on three axis stabilized spacecraft, effectively shifting the one-revolution-per-orbit (1-RPO) body rotation axis. Highly correlated effects can create a risk for estimation errors particularly when a mission switches an operating mode or changes its normal flight environment. Some error effects will not be obvious from attitude sensor measurement residuals, so some independent checks using imaging sensors are essential and derived science instrument attitude measurements can prove quite valuable in assessing the attitude accuracy.

  4. Similarity of Symbol Frequency Distributions with Heavy Tails

    NASA Astrophysics Data System (ADS)

    Gerlach, Martin; Font-Clos, Francesc; Altmann, Eduardo G.

    2016-04-01

    Quantifying the similarity between symbolic sequences is a traditional problem in information theory which requires comparing the frequencies of symbols in different sequences. In numerous modern applications, ranging from DNA over music to texts, the distribution of symbol frequencies is characterized by heavy-tailed distributions (e.g., Zipf's law). The large number of low-frequency symbols in these distributions poses major difficulties to the estimation of the similarity between sequences; e.g., they hinder an accurate finite-size estimation of entropies. Here, we show analytically how the systematic (bias) and statistical (fluctuations) errors in these estimations depend on the sample size N and on the exponent γ of the heavy-tailed distribution. Our results are valid for the Shannon entropy (α = 1), its corresponding similarity measures (e.g., the Jensen-Shannon divergence), and also for measures based on the generalized entropy of order α. For small α's, including α = 1, the errors decay slower than the 1/N decay observed in short-tailed distributions. For α larger than a critical value α* = 1 + 1/γ ≤ 2, the 1/N decay is recovered. We show the practical significance of our results by quantifying the evolution of the English language over the last two centuries using a complete α spectrum of measures. We find that frequent words change more slowly than less frequent words and that α = 2 provides the most robust measure to quantify language change.
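
    The slow error decay is easy to reproduce numerically: for a Zipf-like distribution (γ ≈ 1) the plug-in Shannon entropy estimate is biased low, and the bias shrinks much more slowly than 1/N. A sketch with an assumed vocabulary size:

    ```python
    import numpy as np

    def plugin_entropy(samples):
        """Naive (plug-in) Shannon entropy estimate from symbol counts."""
        _, counts = np.unique(samples, return_counts=True)
        p = counts / counts.sum()
        return -np.sum(p * np.log(p))

    rng = np.random.default_rng(0)
    V = 10_000                                    # vocabulary size
    p = 1.0 / np.arange(1, V + 1)                 # Zipf weights, gamma ~ 1
    p /= p.sum()
    H_true = -np.sum(p * np.log(p))
    for N in (10**3, 10**4, 10**5):
        s = rng.choice(V, size=N, p=p)
        print(N, H_true - plugin_entropy(s))      # positive, slowly shrinking bias
    ```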

  5. Rainfall Measurement with a Ground Based Dual Frequency Radar

    NASA Technical Reports Server (NTRS)

    Takahashi, Nobuhiro; Horie, Hiroaki; Meneghini, Robert

    1997-01-01

    Dual frequency methods are among the most useful ways to estimate precise rainfall rates. However, there are some difficulties in applying this method to ground based radars because of the existence of a blind zone and possible error in the radar calibration. Because of these problems, supplemental observations such as rain gauges or satellite link estimates of path integrated attenuation (PIA) are needed. This study shows how to estimate rainfall rate with a ground based dual frequency radar using rain gauge and satellite link data. Application of this method to stratiform rainfall is also shown, and the method is compared with the single-wavelength method. Data were obtained from a dual frequency (10 GHz and 35 GHz) multiparameter radar-radiometer built by the Communications Research Laboratory (CRL), Japan, and located at NASA/GSFC during the spring of 1997. Optical rain gauge (ORG) data and broadcasting satellite signal data near the radar location were also utilized for the calculation.

  6. Flight Investigation of Prescribed Simultaneous Independent Surface Excitations for Real-Time Parameter Identification

    NASA Technical Reports Server (NTRS)

    Moes, Timothy R.; Smith, Mark S.; Morelli, Eugene A.

    2003-01-01

    Near real-time stability and control derivative extraction is required to support flight demonstration of Intelligent Flight Control System (IFCS) concepts being developed by NASA, academia, and industry. Traditionally, flight maneuvers would be designed and flown to obtain stability and control derivative estimates using a postflight analysis technique. The goal of the IFCS concept is to be able to modify the control laws in real time for an aircraft that has been damaged in flight. In some IFCS implementations, real-time parameter identification (PID) of the stability and control derivatives of the damaged aircraft is necessary for successfully reconfiguring the control system. This report investigates the usefulness of Prescribed Simultaneous Independent Surface Excitations (PreSISE) to provide data for rapidly obtaining estimates of the stability and control derivatives. Flight test data were analyzed using both equation-error and output-error PID techniques. The equation-error PID technique is known as Fourier Transform Regression (FTR) and is a frequency-domain real-time implementation. Selected results were compared with a time-domain output-error technique. The real-time equation-error technique combined with the PreSISE maneuvers provided excellent derivative estimation in the longitudinal axis. However, the PreSISE maneuvers as presently defined were not adequate for accurate estimation of the lateral-directional derivatives.

  7. Why GPS makes distances bigger than they are

    PubMed Central

    Ranacher, Peter; Brunauer, Richard; Trutschnig, Wolfgang; Van der Spek, Stefan; Reich, Siegfried

    2016-01-01

    Global navigation satellite systems such as the Global Positioning System (GPS) are among the most important sensors for movement analysis. GPS is widely used to record the trajectories of vehicles, animals and human beings. However, all GPS movement data are affected by both measurement and interpolation errors. In this article we show that measurement error causes a systematic bias in distances recorded with a GPS; the distance between two points recorded with a GPS is, on average, bigger than the true distance between these points. This systematic ‘overestimation of distance' becomes relevant if the influence of interpolation error can be neglected, which in practice is the case for movement sampled at high frequencies. We provide a mathematical explanation of this phenomenon and illustrate that it functionally depends on the autocorrelation of GPS measurement error (C). We argue that C can be interpreted as a quality measure for movement data recorded with a GPS. If there is a strong autocorrelation between any two consecutive position estimates, they have very similar error. This error cancels out when average speed, distance or direction is calculated along the trajectory. Based on our theoretical findings we introduce a novel approach to determine C in real-world GPS movement data sampled at high frequencies. We apply our approach to pedestrian trajectories and car trajectories. We found that the measurement error in the data was strongly spatially and temporally autocorrelated and give a quality estimate of the data. Most importantly, our findings are not limited to GPS alone. The systematic bias and its implications are bound to occur in any movement data collected with absolute positioning if interpolation error can be neglected. PMID:27019610
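
    The bias is straightforward to demonstrate by simulation: adding white position noise to a straight trajectory inflates the summed segment lengths, whereas strongly autocorrelated noise (large C) would largely cancel. The noise level below is illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n, step, sigma = 10_000, 1.0, 0.5             # points, true step (m), noise (m)
    true = np.column_stack([np.arange(n) * step, np.zeros(n)])
    meas = true + rng.normal(0.0, sigma, size=(n, 2))

    d_true = np.sum(np.linalg.norm(np.diff(true, axis=0), axis=1))
    d_meas = np.sum(np.linalg.norm(np.diff(meas, axis=0), axis=1))
    print(f"overestimation: {100 * (d_meas / d_true - 1):.1f}%")
    ```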

  8. Submillimeter, millimeter, and microwave spectral line catalogue

    NASA Technical Reports Server (NTRS)

    Poynter, R. L.; Pickett, H. M.

    1984-01-01

    This report describes a computer accessible catalogue of submillimeter, millimeter, and microwave spectral lines in the frequency range between 0 and 10000 GHz (i.e., wavelengths longer than 30 micrometers). The catalogue can be used as a planning guide or as an aid in the identification and analysis of observed spectral lines. The information listed for each spectral line includes the frequency and its estimated error, the intensity, lower state energy, and quantum number assignment. The catalogue has been constructed using theoretical least squares fits of published spectral lines to accepted molecular models. The associated predictions and their estimated errors are based upon the resultant fitted parameters and their covariances. Future versions of this catalogue will add more atoms and molecules and update the present listings (151 species) as new data appear. The catalogue is available from the authors as a magnetic tape recorded in card images and as a set of microfiche records.

  9. Submillimeter, millimeter, and microwave spectral line catalogue

    NASA Technical Reports Server (NTRS)

    Poynter, R. L.; Pickett, H. M.

    1981-01-01

    A computer accessible catalogue of submillimeter, millimeter and microwave spectral lines in the frequency range between 0 and 3000 GHz (i.e., wavelengths longer than 100 micrometers) is presented which can be used as a planning guide or as an aid in the identification and analysis of observed spectral lines. The information listed for each spectral line includes the frequency and its estimated error, the intensity, lower state energy, and quantum number assignment. The catalogue was constructed by using theoretical least squares fits of published spectral lines to accepted molecular models. The associated predictions and their estimated errors are based upon the resultant fitted parameters and their covariances. Future versions of this catalogue will add more atoms and molecules and update the present listings (133 species) as new data appear. The catalogue is available as a magnetic tape recorded in card images and as a set of microfiche records.

  10. Estimates of Flow Duration, Mean Flow, and Peak-Discharge Frequency Values for Kansas Stream Locations

    USGS Publications Warehouse

    Perry, Charles A.; Wolock, David M.; Artman, Joshua C.

    2004-01-01

    Streamflow statistics of flow duration and peak-discharge frequency were estimated for 4,771 individual locations on streams listed on the 1999 Kansas Surface Water Register. These statistics included the flow-duration values of 90, 75, 50, 25, and 10 percent, as well as the mean flow value. Peak-discharge frequency values were estimated for the 2-, 5-, 10-, 25-, 50-, and 100-year floods. Least-squares multiple regression techniques were used, along with Tobit analyses, to develop equations for estimating flow-duration values of 90, 75, 50, 25, and 10 percent and the mean flow for uncontrolled flow stream locations. The contributing-drainage areas of 149 U.S. Geological Survey streamflow-gaging stations in Kansas and parts of surrounding States that had flow uncontrolled by Federal reservoirs and were used in the regression analyses ranged from 2.06 to 12,004 square miles. Logarithmic transformations of climatic and basin data were performed to yield the best linear relation for developing equations to compute flow durations and mean flow. In the regression analyses, the significant climatic and basin characteristics, in order of importance, were contributing-drainage area, mean annual precipitation, mean basin permeability, and mean basin slope. The analyses yielded a model standard error of prediction range of 0.43 logarithmic units for the 90-percent duration analysis to 0.15 logarithmic units for the 10-percent duration analysis. The model standard error of prediction was 0.14 logarithmic units for the mean flow. Regression equations used to estimate peak-discharge frequency values were obtained from a previous report, and estimates for the 2-, 5-, 10-, 25-, 50-, and 100-year floods were determined for this report. The regression equations and an interpolation procedure were used to compute flow durations, mean flow, and estimates of peak-discharge frequency for locations along uncontrolled flow streams on the 1999 Kansas Surface Water Register. Flow durations, mean flow, and peak-discharge frequency values determined at available gaging stations were used to interpolate the regression-estimated flows for the stream locations where available. Streamflow statistics for locations that had uncontrolled flow were interpolated using data from gaging stations weighted according to the drainage area and the bias between the regression-estimated and gaged flow information. On controlled reaches of Kansas streams, the streamflow statistics were interpolated between gaging stations using only gaged data weighted by drainage area.
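
    The regression step can be sketched as ordinary least squares on log-transformed flows and basin characteristics, with the model standard error reported in logarithmic units as above. Variable choices and coefficients below are invented for illustration; the report's equations also use basin permeability and slope.

    ```python
    import numpy as np

    # Log-space regression: Q = 10^b0 * A^b1 * P^b2 (synthetic data).
    rng = np.random.default_rng(3)
    n = 149
    area = 10 ** rng.uniform(0.3, 4.1, n)        # contributing area, mi^2
    precip = rng.uniform(15, 45, n)              # mean annual precip, in
    logq = 0.8 + 0.95 * np.log10(area) + 1.4 * np.log10(precip) \
        + 0.14 * rng.standard_normal(n)          # synthetic log10 mean flow

    X = np.column_stack([np.ones(n), np.log10(area), np.log10(precip)])
    beta, *_ = np.linalg.lstsq(X, logq, rcond=None)
    resid = logq - X @ beta
    se = resid.std(ddof=X.shape[1])              # model standard error, log units
    print(beta.round(2), round(se, 2))
    ```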

  11. Dynamic Modeling from Flight Data with Unknown Time Skews

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    2016-01-01

    A method for estimating dynamic model parameters from flight data with unknown time skews is described and demonstrated. The method combines data reconstruction, nonlinear optimization, and equation-error parameter estimation in the frequency domain to accurately estimate both dynamic model parameters and the relative time skews in the data. Data from a nonlinear F-16 aircraft simulation with realistic noise, instrumentation errors, and arbitrary time skews were used to demonstrate the approach. The approach was further evaluated using flight data from a subscale jet transport aircraft, where the measured data were known to have relative time skews. Comparison of modeling results obtained from time-skewed and time-synchronized data showed that the method accurately estimates both dynamic model parameters and relative time skew parameters from flight data with unknown time skews.

  12. Atmospheric microwave refractivity and refraction

    NASA Technical Reports Server (NTRS)

    Yu, E.; Hodge, D. B.

    1980-01-01

    The atmospheric refractivity can be expressed as a function of temperature, pressure, water vapor content, and operating frequency. Based on twenty-year meteorological data, statistics of the atmospheric refractivity were obtained. These statistics were used to estimate the variation of dispersion, attenuation, and refraction effects on microwave and millimeter wave signals propagating along atmospheric paths. Bending angle, elevation angle error, and range error were also developed for an exponentially tapered, spherical atmosphere.

  13. Minimizing treatment planning errors in proton therapy using failure mode and effects analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng, Yuanshui, E-mail: yuanshui.zheng@okc.procure.com; Johnson, Randall; Larson, Gary

    Purpose: Failure mode and effects analysis (FMEA) is a widely used tool to evaluate safety or reliability in conventional photon radiation therapy. However, reports about FMEA application in proton therapy are scarce. The purpose of this study is to apply FMEA in safety improvement of proton treatment planning at their center. Methods: The authors performed an FMEA analysis of their proton therapy treatment planning process using uniform scanning proton beams. The authors identified possible failure modes in various planning processes, including image fusion, contouring, beam arrangement, dose calculation, plan export, documents, billing, and so on. For each error, the authors estimated the frequency of occurrence, the likelihood of being undetected, and the severity of the error if it went undetected and calculated the risk priority number (RPN). The FMEA results were used to design their quality management program. In addition, the authors created a database to track the identified dosimetric errors. Periodically, the authors reevaluated the risk of errors by reviewing the internal error database and improved their quality assurance program as needed. Results: In total, the authors identified over 36 possible treatment planning related failure modes and estimated the associated occurrence, detectability, and severity to calculate the overall risk priority number. Based on the FMEA, the authors implemented various safety improvement procedures into their practice, such as education, peer review, and automatic check tools. The ongoing error tracking database provided realistic data on the frequency of occurrence with which to reevaluate the RPNs for various failure modes. Conclusions: The FMEA technique provides a systematic method for identifying and evaluating potential errors in proton treatment planning before they result in an error in patient dose delivery. The application of FMEA framework and the implementation of an ongoing error tracking system at their clinic have proven to be useful in error reduction in proton treatment planning, thus improving the effectiveness and safety of proton therapy.

  14. Minimizing treatment planning errors in proton therapy using failure mode and effects analysis.

    PubMed

    Zheng, Yuanshui; Johnson, Randall; Larson, Gary

    2016-06-01

    Failure mode and effects analysis (FMEA) is a widely used tool to evaluate safety or reliability in conventional photon radiation therapy. However, reports about FMEA application in proton therapy are scarce. The purpose of this study is to apply FMEA in safety improvement of proton treatment planning at their center. The authors performed an FMEA analysis of their proton therapy treatment planning process using uniform scanning proton beams. The authors identified possible failure modes in various planning processes, including image fusion, contouring, beam arrangement, dose calculation, plan export, documents, billing, and so on. For each error, the authors estimated the frequency of occurrence, the likelihood of being undetected, and the severity of the error if it went undetected and calculated the risk priority number (RPN). The FMEA results were used to design their quality management program. In addition, the authors created a database to track the identified dosimetric errors. Periodically, the authors reevaluated the risk of errors by reviewing the internal error database and improved their quality assurance program as needed. In total, the authors identified over 36 possible treatment planning related failure modes and estimated the associated occurrence, detectability, and severity to calculate the overall risk priority number. Based on the FMEA, the authors implemented various safety improvement procedures into their practice, such as education, peer review, and automatic check tools. The ongoing error tracking database provided realistic data on the frequency of occurrence with which to reevaluate the RPNs for various failure modes. The FMEA technique provides a systematic method for identifying and evaluating potential errors in proton treatment planning before they result in an error in patient dose delivery. The application of FMEA framework and the implementation of an ongoing error tracking system at their clinic have proven to be useful in error reduction in proton treatment planning, thus improving the effectiveness and safety of proton therapy.
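
    The RPN arithmetic itself is simple: each failure mode is scored for occurrence, detectability, and severity, and the product ranks the modes for mitigation. The failure modes and scores below are illustrative, not the authors' data.

    ```python
    from dataclasses import dataclass

    @dataclass
    class FailureMode:
        """One treatment-planning failure mode scored on 1-10 FMEA scales."""
        name: str
        occurrence: int     # how often the error occurs
        detectability: int  # likelihood it goes undetected
        severity: int       # harm if it goes undetected

        @property
        def rpn(self) -> int:
            return self.occurrence * self.detectability * self.severity

    modes = [
        FailureMode("wrong CT series fused", 2, 6, 9),
        FailureMode("contour on wrong image set", 3, 4, 8),
        FailureMode("plan exported to wrong machine", 1, 3, 10),
    ]
    for m in sorted(modes, key=lambda m: m.rpn, reverse=True):
        print(f"{m.name:34s} RPN={m.rpn}")
    ```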

  15. Nonspinning numerical relativity waveform surrogates: assessing the model

    NASA Astrophysics Data System (ADS)

    Field, Scott; Blackman, Jonathan; Galley, Chad; Scheel, Mark; Szilagyi, Bela; Tiglio, Manuel

    2015-04-01

    Recently, multi-modal gravitational waveform surrogate models have been built directly from data numerically generated by the Spectral Einstein Code (SpEC). I will describe ways in which the surrogate model error can be quantified. This task, in turn, requires (i) characterizing differences between waveforms computed by SpEC with those predicted by the surrogate model and (ii) estimating errors associated with the SpEC waveforms from which the surrogate is built. Both pieces can have numerous sources of numerical and systematic errors. We make an attempt to study the most dominant error sources and, ultimately, the surrogate model's fidelity. These investigations yield information about the surrogate model's uncertainty as a function of time (or frequency) and parameter, and could be useful in parameter estimation studies which seek to incorporate model error. Finally, I will conclude by comparing the numerical relativity surrogate model to other inspiral-merger-ringdown models. A companion talk will cover the building of multi-modal surrogate models.

  16. Chirplet Wigner-Ville distribution for time-frequency representation and its application

    NASA Astrophysics Data System (ADS)

    Chen, G.; Chen, J.; Dong, G. M.

    2013-12-01

    This paper presents a Chirplet Wigner-Ville Distribution (CWVD) that is free of the cross-terms that usually occur in the Wigner-Ville distribution (WVD). By transforming the signal with frequency-rotating operators, several mono-frequency signals without intermittency are obtained, and the WVD applied to these rotated signals is cross-term free; frequency-shift operators corresponding to the rotating operators are then utilized to relocate the signal's instantaneous frequencies (IFs). The operators' parameters come from estimates of the IFs, which are approximated with polynomial or spline functions. Moreover, an error analysis reveals the main factors governing the performance of the novel method, and an effective signal-extension method based on the IF estimates has been developed to improve the energy concentration of the WVD. The excellent performance of the novel method is demonstrated by applying it to estimate the IFs of numerical signals and of the echolocation signal emitted by the large brown bat.
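
    For context, a textbook discrete Wigner-Ville distribution is sketched below; applied to a multi-component signal it exhibits the cross-terms that the chirplet preprocessing is designed to remove. This is the plain WVD, not the paper's CWVD.

    ```python
    import numpy as np

    def wigner_ville(x):
        """Discrete Wigner-Ville distribution of an analytic signal.

        W[n, k] = FFT over lag m of x[n+m] * conj(x[n-m]); the frequency
        axis is f = k / (2*N) cycles per sample.
        """
        x = np.asarray(x, complex)
        n_pts = len(x)
        W = np.zeros((n_pts, n_pts))
        for n in range(n_pts):
            m_max = min(n, n_pts - 1 - n)
            m = np.arange(-m_max, m_max + 1)
            kernel = np.zeros(n_pts, complex)
            kernel[m % n_pts] = x[n + m] * np.conj(x[n - m])
            W[n] = np.fft.fft(kernel).real      # Hermitian kernel -> real
        return W

    # A single linear chirp concentrates along its instantaneous frequency
    t = np.arange(256)
    chirp = np.exp(1j * 2 * np.pi * (0.05 * t + 0.00025 * t ** 2))
    tfr = wigner_ville(chirp)
    print(tfr.argmax(axis=1)[100:105])   # ridge near k = 2*N*f_inst(t)
    ```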

  17. Constant Switching Frequency DTC for Matrix Converter Fed Speed Sensorless Induction Motor Drive

    NASA Astrophysics Data System (ADS)

    Mir, Tabish Nazir; Singh, Bhim; Bhat, Abdul Hamid

    2018-05-01

    The paper presents a constant switching frequency scheme for speed-sensorless Direct Torque Control (DTC) of a matrix converter fed induction motor drive. The use of a matrix converter facilitates improved power quality on the input as well as the motor side, along with input power factor control, besides eliminating the need for heavy passive elements. Moreover, DTC through Space Vector Modulation enables fast control of the torque and flux of the motor, with the added benefit of constant switching frequency. A constant switching frequency helps maintain the desired power quality of the AC mains current even at low motor speeds, and simplifies the input filter design of the matrix converter, compared with conventional hysteresis-based DTC. Further, the stator voltage is estimated from the sensed input voltage, and the stator (and rotor) flux is estimated from it. For speed-sensorless operation, a Model Reference Adaptive System is used, which emulates the speed-dependent rotor flux equations of the induction motor. The error between the conventionally estimated rotor flux (reference model) and the rotor flux estimated through the adaptive observer is processed through a PI controller to generate the rotor speed estimate.
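
    The adaptation loop can be caricatured with phasors: a reference flux rotating at the true speed is compared with an adaptive model driven by the speed estimate, and the cross product of the two feeds a PI controller. This toy omits the machine model entirely (both fluxes are ideal phasors); gains and frequencies are invented.

    ```python
    import numpy as np

    # MRAS-style speed adaptation with ideal rotor-flux phasors.
    dt, w_true = 1e-4, 2 * np.pi * 40.0            # step (s), true speed (rad/s)
    kp, ki = 300.0, 20_000.0                       # PI gains (illustrative)
    w_hat, integ = 0.0, 0.0
    psi_ref = psi_hat = 1.0 + 0j
    for _ in range(int(0.5 / dt)):                 # 0.5 s of simulation
        psi_ref *= np.exp(1j * w_true * dt)        # reference-model flux
        psi_hat *= np.exp(1j * w_hat * dt)         # adaptive-model flux
        err = np.imag(psi_ref * np.conj(psi_hat))  # cross product of phasors
        integ += ki * err * dt
        w_hat = kp * err + integ                   # PI speed adaptation
    print(f"estimated speed: {w_hat / (2 * np.pi):.1f} Hz")   # ~40.0
    ```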

  18. Near-infrared spectral tomography integrated with digital breast tomosynthesis: Effects of tissue scattering on optical data acquisition design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Michaelsen, Kelly; Krishnaswamy, Venkat; Pogue, Brian W.

    2012-07-15

    Purpose: Design optimization and phantom validation of an integrated digital breast tomosynthesis (DBT) and near-infrared spectral tomography (NIRST) system targeting improvement in sensitivity and specificity of breast cancer detection is presented. Factors affecting instrumentation design include minimization of cost, complexity, and examination time while maintaining high fidelity NIRST measurements with sufficient information to recover accurate optical property maps. Methods: Reconstructed DBT slices from eight patients with abnormal mammograms provided anatomical information for the NIRST simulations. A limited frequency domain (FD) and extensive continuous wave (CW) NIRST system was modeled. The FD components provided tissue scattering estimations used in the reconstruction of the CW data. Scattering estimates were perturbed to study the effects on hemoglobin recovery. Breast mimicking agar phantoms with inclusions were imaged using the combined DBT/NIRST system for comparison with simulation results. Results: Patient simulations derived from DBT images show successful reconstruction of both normal and malignant lesions in the breast. They also demonstrate the importance of accurately quantifying tissue scattering. Specifically, 20% errors in optical scattering resulted in 22.6% or 35.1% error in quantification of total hemoglobin concentrations, depending on whether scattering was over- or underestimated, respectively. Limited frequency-domain optical signal sampling provided two-region scattering estimates (for fat and fibroglandular tissues) that led to hemoglobin concentrations that reduced the error in the tumor region by 31% relative to when a single estimate of optical scattering was used throughout the breast volume of interest. Acquiring frequency-domain data with six wavelengths instead of three did not significantly improve the hemoglobin concentration estimates. Simulation results were confirmed through experiments in two-region breast mimicking gelatin phantoms. Conclusions: Accurate characterization of scattering is necessary for quantification of hemoglobin. Based on this study, a system design is described to optimally combine breast tomosynthesis with NIRST.

  19. Techniques for estimating the magnitude and frequency of floods in rural basins of South Carolina, 1999

    USGS Publications Warehouse

    Feaster, Toby D.; Tasker, Gary D.

    2002-01-01

    Data from 167 streamflow-gaging stations in or near South Carolina with 10 or more years of record through September 30, 1999, were used to develop two methods for estimating the magnitude and frequency of floods in South Carolina for rural ungaged basins that are not significantly affected by regulation. Flood frequency estimates for 54 gaged sites in South Carolina were computed by fitting the water-year peak flows for each site to a log-Pearson Type III distribution. As part of the computation of flood-frequency estimates for gaged sites, new values for generalized skew coefficients were developed. Flood-frequency analyses also were made for gaging stations that drain basins from more than one physiographic province. The U.S. Geological Survey, in cooperation with the South Carolina Department of Transportation, updated these data from previous flood-frequency reports to aid officials who are active in floodplain management as well as those who design bridges, culverts, levees, or other structures near streams where flooding is likely to occur. Regional regression analysis, using generalized least squares regression, was used to develop a set of predictive equations that can be used to estimate the 2-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-year recurrence-interval flows for rural ungaged basins in the Blue Ridge, Piedmont, upper Coastal Plain, and lower Coastal Plain physiographic provinces of South Carolina. The predictive equations are all functions of drainage area. Average errors of prediction for these regression equations ranged from -16 to 19 percent for the 2-year recurrence-interval flow in the upper Coastal Plain to -34 to 52 percent for the 500-year recurrence-interval flow in the lower Coastal Plain. A region-of-influence method also was developed that interactively estimates recurrence-interval flows for rural ungaged basins in the Blue Ridge of South Carolina. The region-of-influence method uses regression techniques to develop a unique relation between flow and basin characteristics for an individual watershed. This relation can then be used to estimate flows at ungaged sites. Because the computations required for this method are somewhat complex, a computer application was developed that performs the computations and compares the predictive errors for this method. The computer application includes the option of using the region-of-influence method or the generalized least squares regression equations from this report to compute estimated flows and errors of prediction specific to each ungaged site. From a comparison of predictive errors using the region-of-influence method with those computed using the regional regression method, the region-of-influence method performed systematically better only in the Blue Ridge and is, therefore, not recommended for use in the other physiographic provinces. Peak-flow data for the South Carolina stations used in the regionalization study are provided in appendix A, which contains gaging station information, log-Pearson Type III statistics, information on stage-flow relations, and water-year peak stages and flows. For informational purposes, water-year peak-flow data for stations on regulated streams in South Carolina also are provided in appendix D. Other information pertaining to the regulated streams is provided in the text of the report.
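
    Fitting a log-Pearson Type III distribution to an annual peak series takes a few lines with method-of-moments estimates on log10 flows; the sketch below omits the Bulletin 17-style generalized-skew weighting and low-outlier screening used in the report, and the peak values are invented.

    ```python
    import numpy as np
    from scipy.stats import pearson3

    def lp3_flood_quantiles(peaks, return_periods=(2, 5, 10, 25, 50, 100)):
        """Recurrence-interval discharges from a log-Pearson Type III fit
        (method of moments on log10 flows)."""
        logq = np.log10(np.asarray(peaks, float))
        n = len(logq)
        mu, sigma = logq.mean(), logq.std(ddof=1)
        skew = ((logq - mu) ** 3).mean() / sigma ** 3 \
            * n ** 2 / ((n - 1) * (n - 2))           # bias-corrected sample skew
        probs = 1.0 - 1.0 / np.asarray(return_periods, float)  # non-exceedance
        k = pearson3.ppf(probs, skew)                # frequency factors
        return 10 ** (mu + k * sigma)

    peaks = [3200, 1800, 5400, 2500, 7100, 1500, 4300, 2900, 3900, 6200,
             2100, 4800, 3400, 1700, 5800]           # cfs, illustrative
    print(dict(zip((2, 5, 10, 25, 50, 100), lp3_flood_quantiles(peaks).round())))
    ```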

  20. A Review of System Identification Methods Applied to Aircraft

    NASA Technical Reports Server (NTRS)

    Klein, V.

    1983-01-01

    Airplane identification, equation error method, maximum likelihood method, parameter estimation in frequency domain, extended Kalman filter, aircraft equations of motion, aerodynamic model equations, criteria for the selection of a parsimonious model, and online aircraft identification are addressed.

  1. Alterations in Neural Control of Constant Isometric Contraction with the Size of Error Feedback

    PubMed Central

    Hwang, Ing-Shiou; Lin, Yen-Ting; Huang, Wei-Min; Yang, Zong-Ru; Hu, Chia-Ling; Chen, Yi-Ching

    2017-01-01

    Discharge patterns from a population of motor units (MUs) were estimated with multi-channel surface electromyogram and signal processing techniques to investigate parametric differences in low-frequency force fluctuations, MU discharges, and force-discharge relation during static force-tracking with varying sizes of execution error presented via visual feedback. Fourteen healthy adults produced isometric force at 10% of maximal voluntary contraction through index abduction under three visual conditions that scaled execution errors with different amplification factors. Error-augmentation feedback that used a high amplification factor (HAF) to potentiate visualized error size resulted in higher sample entropy, mean frequency, ratio of high-frequency components, and spectral dispersion of force fluctuations than those of error-reducing feedback using a low amplification factor (LAF). In the HAF condition, MUs with relatively high recruitment thresholds in the dorsal interosseous muscle exhibited a larger coefficient of variation for inter-spike intervals and a greater spectral peak of the pooled MU coherence at 13–35 Hz than did those in the LAF condition. Manipulation of the size of error feedback altered the force-discharge relation, which was characterized with non-linear approaches such as mutual information and cross sample entropy. The association of force fluctuations and global discharge trace decreased with increasing error amplification factor. Our findings provide direct neurophysiological evidence that favors motor training using error-augmentation feedback. Amplification of the visualized error size of visual feedback could enrich force gradation strategies during static force-tracking, pertaining to selective increases in the discharge variability of higher-threshold MUs that receive greater common oscillatory inputs in the β-band. PMID:28125658

  2. Electrical and magnetic properties of rock and soil

    USGS Publications Warehouse

    Scott, J.H.

    1983-01-01

    Field and laboratory measurements have been made to determine the electrical conductivity, dielectric constant, and magnetic permeability of rock and soil in areas of interest in studies of electromagnetic pulse propagation. Conductivity is determined by making field measurements of apparent resistivity at very low frequencies (0-20 cps), and interpreting the true resistivity of layers at various depths by curve-matching methods. Interpreted resistivity values are converted to corresponding conductivity values which are assumed to be applicable at 10^2 cps, an assumption which is considered valid because the conductivity of rock and soil is nearly constant at frequencies below 10^2 cps. Conductivity is estimated at higher frequencies (up to 10^6 cps) by using statistical correlations of three parameters obtained from laboratory measurements of rock and soil samples: conductivity at 10^2 cps, frequency, and conductivity measured over the range 10^2 to 10^6 cps. Conductivity may also be estimated in this frequency range by using field measurements of water content and correlations of laboratory sample measurements of the three parameters: water content, frequency, and conductivity measured over the range 10^2 to 10^6 cps. This method is less accurate because nonrandom variation of ion concentration in natural pore water introduces error. Dielectric constant is estimated in a similar manner from field-derived conductivity values applicable at 10^2 cps and statistical correlations of three parameters obtained from laboratory measurements of samples: conductivity measured at 10^2 cps, frequency, and dielectric constant measured over the frequency range 10^2 to 10^6 cps. Dielectric constant may also be estimated from field measurements of water content and correlations of laboratory sample measurements of the three parameters: water content, frequency, and dielectric constant measured from 10^2 to 10^6 cps, but again, this method is less accurate because of variation of ion concentration of pore water. Special laboratory procedures are used to measure conductivity and dielectric constant of rock and soil samples. Electrode polarization errors are minimized by using an electrode system that is electrochemically reversible with ions in pore water.

  3. Real-Time Parameter Estimation Using Output Error

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.

    2014-01-01

    Output-error parameter estimation, normally a post-flight batch technique, was applied to real-time dynamic modeling problems. Variations on the traditional algorithm were investigated with the goal of making the method suitable for operation in real time. Implementation recommendations are given that are dependent on the modeling problem of interest. Application to flight test data showed that accurate parameter estimates and uncertainties for the short-period dynamics model were available every 2 s using time domain data, or every 3 s using frequency domain data. The data compatibility problem was also solved in real time, providing corrected sensor measurements every 4 s. If uncertainty corrections for colored residuals are omitted, this rate can be increased to every 0.5 s.
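
    Output-error estimation in its simplest batch form chooses the parameters whose simulated output best matches the measurements; a scalar sketch follows (the report's real-time variant works on short sliding windows and adds uncertainty corrections).

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def simulate(theta, u, dt):
        """First-order model xdot = a*x + b*u integrated with Euler steps."""
        a, b = theta
        x = np.zeros(len(u))
        for k in range(len(u) - 1):
            x[k + 1] = x[k] + dt * (a * x[k] + b * u[k])
        return x

    def output_error_fit(z, u, dt, theta0=(-1.0, 0.5)):
        """Output-error estimation: minimize the mismatch between the
        *simulated* output and the measurement z via nonlinear least squares."""
        res = least_squares(lambda th: simulate(th, u, dt) - z, theta0)
        return res.x

    dt = 0.02
    u = np.sin(0.8 * np.arange(0, 20, dt))
    z = simulate((-2.0, 1.0), u, dt) + 0.01 * np.random.randn(u.size)
    print(output_error_fit(z, u, dt))             # approximately [-2, 1]
    ```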

  4. A comparative study of clock rate and drift estimation

    NASA Technical Reports Server (NTRS)

    Breakiron, Lee A.

    1994-01-01

    Five different methods of drift determination and four different methods of rate determination were compared using months of hourly phase and frequency data from a sample of cesium clocks and active hydrogen masers. Linear least squares on frequency is selected as the optimal method of determining both drift and rate, more on the basis of parameter parsimony and confidence measures than on random and systematic errors.
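
    The selected method amounts to a straight-line fit to the frequency series, with the slope giving drift and the intercept the rate; a sketch on synthetic data with invented magnitudes:

    ```python
    import numpy as np

    # Rate and drift by linear least squares on frequency: y(t) ~ rate + drift*t.
    t = np.arange(0, 24 * 30, 1.0)                # one month of hourly points
    rate_true, drift_true = 1.2e-13, 3.0e-16      # fractional frequency, per hour
    y = rate_true + drift_true * t + 2e-14 * np.random.randn(t.size)

    drift_est, rate_est = np.polyfit(t, y, 1)     # slope = drift, intercept = rate
    print(f"rate ~ {rate_est:.2e}, drift ~ {drift_est:.2e} (per hour)")
    ```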

  5. Mitigating leakage errors due to cavity modes in a superconducting quantum computer

    NASA Astrophysics Data System (ADS)

    McConkey, T. G.; Béjanin, J. H.; Earnest, C. T.; McRae, C. R. H.; Pagel, Z.; Rinehart, J. R.; Mariantoni, M.

    2018-07-01

    A practical quantum computer requires quantum bit (qubit) operations with low error probabilities in extensible architectures. We study a packaging method that makes it possible to address hundreds of superconducting qubits by means of coaxial Pogo pins. A qubit chip is housed in a superconducting box, where both box and chip dimensions lead to unwanted modes that can interfere with qubit operations. We analyze these interference effects in the context of qubit coherent leakage and qubit decoherence induced by damped modes. We propose two methods, half-wave fencing and antinode pinning, to mitigate the resulting errors by detuning the resonance frequency of the modes from the qubit frequency. We perform electromagnetic field simulations indicating that the resonance frequency of the modes increases with the number of installed pins and can be engineered to be significantly higher than the highest qubit frequency. We estimate that the error probabilities and decoherence rates due to suitably shifted modes in realistic scenarios can be up to two orders of magnitude lower than the state-of-the-art superconducting qubit error and decoherence rates. Our methods can be extended to different types of packages that do not rely on Pogo pins. Conductive bump bonds, for example, can serve the same purpose in qubit architectures based on flip chip technology. Metalized vias, instead, can be used to mitigate modes due to the increasing size of the dielectric substrate on which qubit arrays are patterned.

  6. Pulse-echo sound speed estimation using second order speckle statistics

    NASA Astrophysics Data System (ADS)

    Rosado-Mendez, Ivan M.; Nam, Kibo; Madsen, Ernest L.; Hall, Timothy J.; Zagzebski, James A.

    2012-10-01

    This work presents a phantom-based evaluation of a method for estimating soft-tissue speeds of sound using pulse-echo data. The method is based on the improvement of image sharpness as the sound speed value assumed during beamforming is systematically matched to the tissue sound speed. The novelty of this work is the quantitative assessment of image sharpness by measuring the resolution cell size from the autocovariance matrix of echo signals from a random distribution of scatterers, thus eliminating the need for strong reflectors. Envelope data were obtained from a fatty-tissue mimicking (FTM) phantom (sound speed = 1452 m/s) and a nonfatty-tissue mimicking (NFTM) phantom (1544 m/s) scanned with a linear array transducer on a clinical ultrasound system. Dependence on pulse characteristics was tested by varying the pulse frequency and amplitude. On average, sound speed estimation errors were -0.7% for the FTM phantom and -1.1% for the NFTM phantom. In general, no significant difference was found among errors from different pulse frequencies and amplitudes. The method is currently being optimized for the differentiation of diffuse liver diseases.

  7. Regional flood-frequency relations for streams with many years of no flow

    USGS Publications Warehouse

    Hjalmarson, Hjalmar W.; Thomas, Blakemore E.; ,

    1990-01-01

    In the southwestern United States, flood-frequency relations for streams that drain small arid basins are difficult to estimate, largely because of the extreme temporal and spatial variability of floods and the many years of no flow. A new approach based on the station-year method is proposed; it produces regional flood-frequency relations using all available annual peak-discharge data. The prediction errors for the relations are assessed directly using randomly selected subsamples of the annual peak discharges.

  8. Self-Tuning Adaptive-Controller Using Online Frequency Identification

    NASA Technical Reports Server (NTRS)

    Chiang, W. W.; Cannon, R. H., Jr.

    1985-01-01

    A real-time adaptive controller was designed and tested successfully on a fourth-order laboratory dynamic system featuring very low structural damping and a noncolocated actuator-sensor pair. The controller, implemented in a digital minicomputer, consists of a state estimator, a set of state feedback gains, and a frequency-locked loop (FLL) for real-time parameter identification. The FLL can detect the closed-loop natural frequency of the system being controlled, calculate the mismatch between a plant parameter and its counterpart in the state estimator, and correct the estimator parameter in real time. The adaptation algorithm can correct the controller error and stabilize the system for more than 50% variation in the plant natural frequency, compared with a 10% stability margin in frequency variation for a fixed-gain controller having the same performance at the nominal plant condition. After it has locked to the correct plant frequency, the adaptive controller works as well as the fixed-gain controller does when there is no parameter mismatch. The very rapid convergence of this adaptive system is demonstrated experimentally, and can also be proven with simple root locus methods.

  9. Methods for estimating selected spring and fall low-flow frequency statistics for ungaged stream sites in Iowa, based on data through June 2014

    USGS Publications Warehouse

    Eash, David A.; Barnes, Kimberlee K.; O'Shea, Padraic S.

    2016-09-19

    A statewide study was conducted to develop regression equations for estimating three selected spring and three selected fall low-flow frequency statistics for ungaged stream sites in Iowa. The estimation equations developed for the six low-flow frequency statistics include spring (April through June) 1-, 7-, and 30-day mean low flows for a recurrence interval of 10 years and fall (October through December) 1-, 7-, and 30-day mean low flows for a recurrence interval of 10 years. Estimates of the three selected spring statistics are provided for 241 U.S. Geological Survey continuous-record streamgages, and estimates of the three selected fall statistics are provided for 238 of these streamgages, using data through June 2014. Because only 9 years of fall streamflow record were available, three streamgages included in the development of the spring regression equations were not included in the development of the fall regression equations. Because of regulation, diversion, or urbanization, 30 of the 241 streamgages were not included in the development of the regression equations. The study area includes Iowa and adjacent areas within 50 miles of the Iowa border. Because trend analyses indicated statistically significant positive trends when considering the period of record for most of the streamgages, the longest, most recent period of record without a significant trend was determined for each streamgage for use in the study. Geographic information system software was used to measure 63 selected basin characteristics for each of the 211 streamgages used to develop the regional regression equations. The study area was divided into three low-flow regions that were defined in a previous study for the development of regional regression equations. Because several streamgages included in the development of regional regression equations have estimates of zero flow calculated from observed streamflow for selected spring and fall low-flow frequency statistics, the final equations for the three low-flow regions were developed using two types of regression analyses—left-censored and generalized-least-squares regression analyses. A total of 211 streamgages were included in the development of nine spring regression equations—three equations for each of the three low-flow regions. A total of 208 streamgages were included in the development of nine fall regression equations—three equations for each of the three low-flow regions. A censoring threshold was used to develop 15 left-censored regression equations to estimate the three fall low-flow frequency statistics for each of the three low-flow regions and to estimate the three spring low-flow frequency statistics for the southern and northwest regions. For the northeast region, generalized-least-squares regression was used to develop three equations to estimate the three spring low-flow frequency statistics. For the northeast region, average standard errors of prediction range from 32.4 to 48.4 percent for the spring equations and average standard errors of estimate range from 56.4 to 73.8 percent for the fall equations. For the northwest region, average standard errors of estimate range from 58.9 to 62.1 percent for the spring equations and from 83.2 to 109.4 percent for the fall equations.
For the southern region, average standard errors of estimate range from 43.2 to 64.0 percent for the spring equations and from 78.1 to 78.7 percent for the fall equations. The regression equations are applicable only to stream sites in Iowa with low flows not substantially affected by regulation, diversion, or urbanization and with basin characteristics within the range of those used to develop the equations. The regression equations will be implemented within the U.S. Geological Survey StreamStats Web-based geographic information system application. StreamStats allows users to click on any ungaged stream site and compute estimates of the six selected spring and fall low-flow statistics; in addition, 90-percent prediction intervals and the measured basin characteristics for the ungaged site are provided. StreamStats also allows users to click on any Iowa streamgage to obtain computed estimates for the six selected spring and fall low-flow statistics.

  10. A two-step parameter optimization algorithm for improving estimation of optical properties using spatial frequency domain imaging

    NASA Astrophysics Data System (ADS)

    Hu, Dong; Lu, Renfu; Ying, Yibin

    2018-03-01

    This research was aimed at optimizing the inverse algorithm for estimating the optical absorption (μa) and reduced scattering (μs‧) coefficients from spatial frequency domain diffuse reflectance. Studies were first conducted to determine the optimal frequency resolution and start and end frequencies in terms of the reciprocal of the mean free path (1/mfp‧). The results showed that the optimal frequency resolution increased with μs‧ and remained stable when μs‧ was larger than 2 mm-1. The optimal end frequency decreased from 0.3/mfp‧ to 0.16/mfp‧ as μs‧ ranged from 0.4 mm-1 to 3 mm-1, while the optimal start frequency remained at 0 mm-1. A two-step parameter estimation method was proposed based on the optimized frequency parameters, which improved estimation accuracies by 37.5% and 9.8% for μa and μs‧, respectively, compared with the conventional one-step method. Experimental validations with seven liquid optical phantoms showed that the optimized algorithm resulted in mean absolute errors of 15.4%, 7.6%, and 5.0% for μa and 16.4%, 18.0%, and 18.3% for μs‧ at the wavelengths of 675 nm, 700 nm, and 715 nm, respectively. Hence, implementation of the optimized parameter estimation method should be considered in order to improve the measurement of optical properties of biological materials when using the spatial frequency domain imaging technique.

  11. Towards a novel look on low-frequency climate reconstructions

    NASA Astrophysics Data System (ADS)

    Kamenik, Christian; Goslar, Tomasz; Hicks, Sheila; Barnekow, Lena; Huusko, Antti

    2010-05-01

    Information on low-frequency (millennial to sub-centennial) climate change is often derived from sedimentary archives, such as peat profiles or lake sediments. Usually, these archives have non-annual and varying time resolution. Their dating is mainly based on radionuclides, which provide probabilistic age-depth relationships with complex error structures. Dating uncertainties impede the interpretation of sediment-based climate reconstructions. They complicate the calculation of time-dependent rates. In most cases, they make any calibration in time impossible. Sediment-based climate proxies are therefore often presented as a single, best-guess time series without proper calibration and error estimation. Errors along time and dating errors that propagate into the calculation of time-dependent rates are neglected. Our objective is to overcome the aforementioned limitations by using a 'swarm' or 'ensemble' of reconstructions instead of a single best-guess. The novelty of our approach is to take into account age-depth uncertainties by permuting through a large number of potential age-depth relationships of the archive of interest. For each individual permutation we can then calculate rates, calibrate proxies in time, and reconstruct the climate-state variable of interest. From the resulting swarm of reconstructions, we can derive realistic estimates of even complex error structures. The likelihood of reconstructions is visualized by a grid of two-dimensional kernels that take into account probabilities along time and the climate-state variable of interest simultaneously. For comparison and regional synthesis, likelihoods can be scored against other independent climate time series.
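
    A minimal numerical sketch of the ensemble idea (the dated horizons, errors, and draw count are invented): each draw perturbs the dates within their errors, enforces stratigraphic order, and interpolates one plausible age-depth relationship, and quantiles across draws then approximate the dating uncertainty carried into downstream rate calculations and reconstructions:

        import numpy as np

        rng = np.random.default_rng(2)

        # hypothetical dated horizons: depth (cm), calibrated age (yr BP), 1-sigma
        depth = np.array([10.0, 50.0, 90.0, 130.0])
        age_mu = np.array([200.0, 1100.0, 2300.0, 3600.0])
        age_sd = np.array([40.0, 60.0, 80.0, 120.0])

        grid = np.linspace(10.0, 130.0, 61)   # depths of the proxy samples
        n_draws = 5000
        ensemble = np.empty((n_draws, grid.size))

        for i in range(n_draws):
            ages = np.sort(rng.normal(age_mu, age_sd))  # sort enforces monotonicity
            ensemble[i] = np.interp(grid, depth, ages)  # one age-depth realization

        # ensemble quantiles summarize age uncertainty along the profile
        lo, med, hi = np.percentile(ensemble, [2.5, 50.0, 97.5], axis=0)
        print(med[0], hi[0] - lo[0])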

  12. Limitations of Dower's inverse transform for the study of atrial loops during atrial fibrillation.

    PubMed

    Guillem, María S; Climent, Andreu M; Bollmann, Andreas; Husser, Daniela; Millet, José; Castells, Francisco

    2009-08-01

    Spatial characteristics of atrial fibrillatory waves have been extracted by using the vectorcardiogram (VCG) during atrial fibrillation (AF). However, the VCG is usually not recorded in clinical practice, and atrial loops are instead derived from the 12-lead electrocardiogram (ECG). We evaluated the suitability of the reconstruction of orthogonal leads from the 12-lead ECG for fibrillatory waves in AF. We used the Physikalisch-Technische Bundesanstalt diagnostic ECG database, which contains 15 simultaneously recorded signals (12-lead ECG and three Frank orthogonal leads) of 13 patients during AF. Frank leads were derived from the 12-lead ECG by using Dower's inverse transform. Derived leads were then compared to true Frank leads in terms of the relative error achieved. We calculated the orientation of AF loops of both recorded orthogonal leads and derived leads and measured the difference in estimated orientation. Also, we investigated the relationship of errors in derivation with fibrillatory wave amplitude, frequency, wave residuum, and fit to a plane of the AF loops. Errors in derivation of AF loops were 68 +/- 31%, and errors in the estimation of orientation were 35.85 +/- 20.43 degrees. We did not find any correlation between these errors and amplitude, frequency, or the other parameters. In conclusion, Dower's inverse transform should not be used for the derivation of orthogonal leads from the 12-lead ECG for the analysis of fibrillatory wave loops in AF. Spatial parameters obtained after this derivation may differ from those obtained from recorded orthogonal leads.

  13. Frequency domain analysis of errors in cross-correlations of ambient seismic noise

    NASA Astrophysics Data System (ADS)

    Liu, Xin; Ben-Zion, Yehuda; Zigone, Dimitri

    2016-12-01

    We analyse random errors (variances) in cross-correlations of ambient seismic noise in the frequency domain, which differs from previous time domain methods. Extending previous theoretical results on the ensemble-averaged cross-spectrum, we estimate the confidence interval of the stacked cross-spectrum of a finite amount of data at each frequency using non-overlapping windows of fixed length. The extended theory also connects amplitude and phase variances with the variance of each complex spectrum value. Analysis of synthetic stationary ambient noise is used to estimate the confidence interval of the stacked cross-spectrum obtained with different lengths of noise data corresponding to different numbers of evenly spaced windows of the same duration. This method allows estimating the signal-to-noise ratio (SNR) of the noise cross-correlation in the frequency domain, without specifying the filter bandwidth or the signal/noise windows that are needed for time domain SNR estimation. Based on synthetic ambient noise data, we also compare the probability distributions, causal part amplitude, and SNR of the stacked cross-spectrum function obtained using one-bit normalization or pre-whitening with those obtained without these pre-processing steps. Natural continuous noise records contain both ambient noise and small earthquakes that are inseparable from the noise with the existing pre-processing steps. Using probability distributions of random cross-spectrum values based on the theoretical results provides an effective way to exclude such small earthquakes, and additional data segments (outliers) contaminated by signals of different statistics (e.g. rain, cultural noise), from continuous noise waveforms. This technique is applied to constrain values and uncertainties of the amplitude and phase velocity of the stacked noise cross-spectrum at different frequencies, using data from southern California at both regional scale (~35 km) and a dense linear array (~20 m) across the plate-boundary faults. A block bootstrap resampling method is used to account for temporal correlation of the noise cross-spectrum at low frequencies (0.05-0.2 Hz) near the ocean microseismic peaks.
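
    The windowing-and-stacking scheme is easy to sketch (synthetic two-station noise; all parameters invented): non-overlapping fixed-length windows each yield one complex cross-spectrum, and the scatter of those values about the stack gives a per-frequency variance and a crude SNR with no time-domain signal or noise windows:

        import numpy as np

        rng = np.random.default_rng(3)
        fs, win, n_win = 100.0, 1024, 64       # Hz, samples per window, windows

        # two "stations" sharing a delayed common component plus local noise
        common = rng.standard_normal(win * n_win)
        x = common + 0.5 * rng.standard_normal(common.size)
        y = np.roll(common, 3) + 0.5 * rng.standard_normal(common.size)

        # one complex cross-spectrum per non-overlapping window
        X = np.fft.rfft(x.reshape(n_win, win), axis=1)
        Y = np.fft.rfft(y.reshape(n_win, win), axis=1)
        cross = X * np.conj(Y)

        stack = cross.mean(axis=0)             # stacked cross-spectrum
        var = cross.var(axis=0) / n_win        # variance of the stack, per frequency
        snr = np.abs(stack) / np.sqrt(var)     # crude frequency-domain SNR

        freqs = np.fft.rfftfreq(win, d=1.0 / fs)
        print(freqs[np.argmax(snr)], snr.max())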

  14. Transfer-function-parameter estimation from frequency response data: A FORTRAN program

    NASA Technical Reports Server (NTRS)

    Seidel, R. C.

    1975-01-01

    A FORTRAN computer program designed to fit a linear transfer function model to given frequency response magnitude and phase data is presented. A conjugate gradient search is used that minimizes the integral of the squared magnitude of the error between the model and the data. The search is constrained to ensure model stability. Scaling the model parameters by their own magnitudes aids search convergence. Efficient computer algorithms result in a small and fast program suitable for a minicomputer. A sample problem with different model structures and parameter estimates is reported.
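
    A Python analogue of the program's approach (the FORTRAN source is not reproduced; the plant, noise, and scaling constants are invented, and the stability constraint is omitted): a conjugate-gradient search minimizes the summed squared magnitude of the complex error between the model and the frequency-response data, with parameters scaled by fixed reference magnitudes to aid convergence:

        import numpy as np
        from scipy.optimize import minimize

        # frequency-response "data" from a hypothetical plant G(jw) = k/(j*w*tau + 1)
        w = np.logspace(-1, 2, 60)
        rng = np.random.default_rng(4)
        data = 2.0 / (0.5j * w + 1.0) * np.exp(0.02j * rng.standard_normal(w.size))

        def model(theta):
            k, tau = theta
            return k / (1j * w * tau + 1.0)

        ref = np.array([1.0, 0.1])  # reference magnitudes used to scale parameters

        def cost(scaled):
            return np.sum(np.abs(data - model(scaled * ref)) ** 2)

        fit = minimize(cost, x0=[1.0, 1.0], method="CG")  # conjugate-gradient search
        print(fit.x * ref)  # estimates of (k, tau), near (2.0, 0.5)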

  15. Blood pool and tissue phase patient motion effects on 82rubidium PET myocardial blood flow quantification.

    PubMed

    Lee, Benjamin C; Moody, Jonathan B; Poitrasson-Rivière, Alexis; Melvin, Amanda C; Weinberg, Richard L; Corbett, James R; Ficaro, Edward P; Murthy, Venkatesh L

    2018-03-23

    Patient motion can lead to misalignment of left ventricular volumes of interest and subsequently inaccurate quantification of myocardial blood flow (MBF) and flow reserve (MFR) from dynamic PET myocardial perfusion images. We aimed to identify the prevalence of patient motion in both blood and tissue phases and analyze the effects of this motion on MBF and MFR estimates. We selected 225 consecutive patients who underwent dynamic stress/rest rubidium-82 chloride (82Rb) PET imaging. Dynamic image series were iteratively reconstructed with 5- to 10-second frame durations over the first 2 minutes for the blood phase and 10- to 80-second durations for the tissue phase. Motion shifts were assessed by 3 physician readers from the dynamic series and analyzed for frequency, magnitude, time, and direction of motion. The effects of this motion isolated in time, direction, and magnitude on global and regional MBF and MFR estimates were evaluated. Flow estimates derived from the motion-corrected images were used as the error references. Mild to moderate motion (5-15 mm) was most prominent in the blood phase, in 63% and 44% of the stress and rest studies, respectively. This motion was observed with frequencies of 75% in the septal and inferior directions for stress and 44% in the septal direction for rest. Images with blood phase isolated motion had mean global MBF and MFR errors of 2%-5%. Isolating blood phase motion in the inferior direction resulted in mean MBF and MFR errors of 29%-44% in the RCA territory. Flow errors due to tissue phase isolated motion were within 1%. Patient motion was most prevalent in the blood phase, and MBF and MFR errors increased most substantially with motion in the inferior direction. Motion correction focused on these motions is needed to reduce MBF and MFR errors.

  16. Revised techniques for estimating peak discharges from channel width in Montana

    USGS Publications Warehouse

    Parrett, Charles; Hull, J.A.; Omang, R.J.

    1987-01-01

    This study was conducted to develop new estimating equations based on channel width and the updated flood frequency curves of previous investigations. Simple regression equations for estimating peak discharges with recurrence intervals of 2, 5, 10, 25, 50, and 100 years were developed for seven regions in Montana. The standard errors of estimate for the equations that use active channel width as the independent variable ranged from 30% to 87%. The standard errors of estimate for the equations that use bankfull width as the independent variable ranged from 34% to 92%. The smallest standard errors generally occurred in the prediction equations for the 2-yr, 5-yr, and 10-yr floods, and the largest standard errors occurred in the prediction equations for the 100-yr flood. The equations that use active channel width and the equations that use bankfull width were determined to be about equally reliable in five regions. In the West Region, the equations that use bankfull width were slightly more reliable than those based on active channel width, whereas in the East-Central Region the equations that use active channel width were slightly more reliable than those based on bankfull width. Compared with similar equations previously developed, the standard errors of estimate for the new equations are substantially smaller in three regions and substantially larger in two regions. Limitations on the use of the estimating equations include: (1) the equations are based on stable conditions of channel geometry and prevailing water and sediment discharge; (2) the measurement of channel width requires a site visit, preferably by a person with experience in the method, and involves appreciable measurement errors; (3) the reliability of results from the equations for channel widths beyond the range of definition is unknown. In spite of the limitations, the estimating equations derived in this study are considered to be as reliable as estimating equations based on basin and climatic variables. Because the two types of estimating equations are independent, results from each can be weighted inversely proportional to their variances and averaged. The weighted average estimate has a variance less than either individual estimate. (Author's abstract)
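
    The inverse-variance weighting mentioned at the end of the abstract is simple enough to state directly; a sketch with invented numbers (two independent 100-year peak-discharge estimates and their error variances):

        def weighted_estimate(q1, var1, q2, var2):
            """Inverse-variance weighted average of two independent estimates."""
            w1, w2 = 1.0 / var1, 1.0 / var2
            q = (w1 * q1 + w2 * q2) / (w1 + w2)
            var = 1.0 / (w1 + w2)   # always smaller than either input variance
            return q, var

        # hypothetical channel-width and basin-characteristics estimates, ft3/s
        print(weighted_estimate(450.0, 90.0 ** 2, 520.0, 120.0 ** 2))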

  17. Errors in the estimation method for the rejection of vibrations in adaptive optics systems

    NASA Astrophysics Data System (ADS)

    Kania, Dariusz

    2017-06-01

    In recent years the problem of the impact of mechanical vibrations in adaptive optics (AO) systems has attracted renewed attention. These signals are damped sinusoids and have a deleterious effect on the system. One software solution for rejecting the vibrations is an adaptive method called AVC (Adaptive Vibration Cancellation), whose procedure has three steps: estimation of the perturbation parameters, estimation of the frequency response of the plant, and update of the reference signal to reject or minimize the vibration. In the first step the choice of estimation method is critical. A very accurate and fast (below 10 ms) method for estimating these three parameters has been presented in several publications in recent years. The method is based on spectrum interpolation and MSD time windows, and it can be used to estimate multifrequency signals. In this paper the estimation method is used in the AVC method to increase system performance. Several parameters affect the accuracy of the obtained results, e.g. CiR - number of signal periods in a measurement window, N - number of samples in the FFT procedure, H - time window order, SNR, b - number of ADC bits, and γ - damping ratio of the tested signal. Systematic errors increase when N, CiR, or H decrease and when γ increases. The systematic error is approximately 10^-10 Hz/Hz for N = 2048 and CiR = 0.1. This paper presents equations that can be used to bound the maximum systematic error for given values of H, CiR, and N before the estimation process starts.
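
    For flavor, here is a generic interpolated-FFT frequency estimator from the same family (parabolic interpolation on a Hann-windowed log spectrum); it is not the MSD-window method of the paper, and the signal parameters are invented. Note how the damping biases the estimate, consistent with the error behavior described above:

        import numpy as np

        def interp_fft_freq(x, fs):
            """Frequency estimate via parabolic interpolation around the FFT peak."""
            X = np.abs(np.fft.rfft(x * np.hanning(x.size)))
            k = int(np.argmax(X[1:-1])) + 1                  # peak bin (interior)
            a, b, c = np.log(X[k - 1]), np.log(X[k]), np.log(X[k + 1])
            delta = 0.5 * (a - c) / (a - 2 * b + c)          # fractional-bin offset
            return (k + delta) * fs / x.size

        fs, f0, n = 2000.0, 123.4, 2048
        t = np.arange(n) / fs
        x = np.exp(-5.0 * t) * np.sin(2 * np.pi * f0 * t)    # damped vibration
        print(interp_fft_freq(x, fs))  # close to 123.4 Hz, slightly biased by damping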

  18. Receiver IQ mismatch estimation in PDM CO-OFDM system using training symbol

    NASA Astrophysics Data System (ADS)

    Peng, Dandan; Ma, Xiurong; Yao, Xin; Zhang, Haoyuan

    2017-07-01

    Receiver in-phase/quadrature (IQ) mismatch is hard to mitigate at the receiver using conventional methods in polarization division multiplexed (PDM) coherent optical orthogonal frequency division multiplexing (CO-OFDM) systems. In this paper, a novel training symbol structure is proposed to estimate IQ mismatch and channel distortion. Combining this structure with the Gram-Schmidt orthogonalization procedure (GSOP) algorithm yields a lower bit error rate (BER). Meanwhile, based on this structure an estimation method is derived in the frequency domain which can estimate the IQ mismatch and channel distortion independently and improves system performance markedly. Numerical simulation shows that the two proposed methods outperform the compared method at 100 Gb/s after 480 km of fiber transmission. The computational complexity is also analyzed.
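
    The GSOP step combined with the training structure is a standard compensation routine and easy to sketch; the code below (with an invented 1.2x gain and 0.1 rad phase imbalance on the Q rail) orthogonalizes and renormalizes Q against I:

        import numpy as np

        def gsop(i, q):
            """Gram-Schmidt orthogonalization of the Q rail against the I rail."""
            i = i / np.sqrt(np.mean(i ** 2))     # normalize I-branch power
            q = q - np.mean(i * q) * i           # remove I leakage from Q
            q = q / np.sqrt(np.mean(q ** 2))     # normalize Q-branch power
            return i + 1j * q

        rng = np.random.default_rng(5)
        s = np.exp(2j * np.pi * rng.random(10000))           # unit-power symbols
        i_rx = s.real
        q_rx = 1.2 * (s.real * np.sin(0.1) + s.imag * np.cos(0.1))  # mismatched Q
        z = gsop(i_rx, q_rx)
        print(np.mean(z.real * z.imag))          # ~0: rails orthogonal after GSOP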

  19. A comparative study of shear wave speed estimation techniques in optical coherence elastography applications

    NASA Astrophysics Data System (ADS)

    Zvietcovich, Fernando; Yao, Jianing; Chu, Ying-Ju; Meemon, Panomsak; Rolland, Jannick P.; Parker, Kevin J.

    2016-03-01

    Optical Coherence Elastography (OCE) is a widely investigated noninvasive technique for estimating the mechanical properties of tissue. In particular, vibrational OCE methods aim to estimate the shear wave velocity generated by an external stimulus in order to calculate the elastic modulus of tissue. In this study, we compare the performance of five acquisition and processing techniques for estimating the shear wave speed in simulations and experiments using tissue-mimicking phantoms. Accuracy, contrast-to-noise ratio, and resolution are measured for all cases. The first two techniques make use of one piezoelectric actuator to generate either a continuous shear wave propagation (SWP) or a tone-burst propagation (TBP) at 400 Hz over the gelatin phantom. The other techniques make use of one additional actuator located on the opposite side of the region of interest in order to create an interference pattern. When both actuators have the same frequency, a standing wave (SW) pattern is generated. Otherwise, when there is a frequency difference df between the actuators, a crawling wave (CrW) pattern is generated that propagates more slowly than a shear wave, which makes it suitable for detection by 2D cross-sectional OCE imaging. If df is not small compared to the operational frequency, the CrW travels faster and a sampled version of it (SCrW) is acquired by the system. Preliminary results suggest that the TBP (error < 4.1%) and SWP (error < 6%) techniques are the most accurate when compared against mechanical measurement test results.

  20. Application of the multiple PRF technique to resolve Doppler centroid estimation ambiguity for spaceborne SAR

    NASA Technical Reports Server (NTRS)

    Chang, C. Y.; Curlander, J. C.

    1992-01-01

    Estimation of the Doppler centroid ambiguity is a necessary element of the signal processing for SAR systems with large antenna pointing errors. Without proper resolution of the Doppler centroid estimation (DCE) ambiguity, image quality will be degraded in both the system impulse response function and the geometric fidelity. Two techniques for resolving the DCE ambiguity for spaceborne SAR are presented: a brief review of the range cross-correlation technique and a new technique using multiple pulse repetition frequencies (PRFs). For SAR systems where other performance factors control selection of the PRFs, an algorithm is devised to resolve the ambiguity using PRFs of arbitrary numerical values. The performance of this multiple-PRF technique is analyzed based on a statistical error model. An example demonstrates that for the Shuttle Imaging Radar-C (SIR-C) C-band SAR, the probability of correct ambiguity resolution is higher than 95 percent for antenna attitude errors as large as 3 deg.

  1. System IDentification Programs for AirCraft (SIDPAC)

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    2002-01-01

    A collection of computer programs for aircraft system identification is described and demonstrated. The programs, collectively called System IDentification Programs for AirCraft, or SIDPAC, were developed in MATLAB as m-file functions. SIDPAC has been used successfully at NASA Langley Research Center with data from many different flight test programs and wind tunnel experiments. SIDPAC includes routines for experiment design, data conditioning, data compatibility analysis, model structure determination, equation-error and output-error parameter estimation in both the time and frequency domains, real-time and recursive parameter estimation, low order equivalent system identification, estimated parameter error calculation, linear and nonlinear simulation, plotting, and 3-D visualization. An overview of SIDPAC capabilities is provided, along with a demonstration of the use of SIDPAC with real flight test data from the NASA Glenn Twin Otter aircraft. The SIDPAC software is available without charge to U.S. citizens by request to the author, contingent on the requestor completing a NASA software usage agreement.

  2. Speech Enhancement, Gain, and Noise Spectrum Adaptation Using Approximate Bayesian Estimation

    PubMed Central

    Hao, Jiucang; Attias, Hagai; Nagarajan, Srikantan; Lee, Te-Won; Sejnowski, Terrence J.

    2010-01-01

    This paper presents a new approximate Bayesian estimator for enhancing a noisy speech signal. The speech model is assumed to be a Gaussian mixture model (GMM) in the log-spectral domain, in contrast to most current models, which work in the frequency domain. Exact signal estimation is a computationally intractable problem. We derive three approximations to enhance the efficiency of signal estimation. The Gaussian approximation transforms the log-spectral domain GMM into the frequency domain using a minimal Kullback–Leibler (KL) divergence criterion. The frequency domain Laplace method computes the maximum a posteriori (MAP) estimator for the spectral amplitude. Correspondingly, the log-spectral domain Laplace method computes the MAP estimator for the log-spectral amplitude. Further, gain and noise spectrum adaptation are implemented using the expectation–maximization (EM) algorithm within the GMM under the Gaussian approximation. The proposed algorithms are evaluated by applying them to enhance speech corrupted by speech-shaped noise (SSN). The experimental results demonstrate that the proposed algorithms offer improved signal-to-noise ratio, a lower word recognition error rate, and less spectral distortion. PMID:20428253

  3. Estimating Dynamical Systems: Derivative Estimation Hints From Sir Ronald A. Fisher.

    PubMed

    Deboeck, Pascal R

    2010-08-06

    The fitting of dynamical systems to psychological data offers the promise of addressing new and innovative questions about how people change over time. One method of fitting dynamical systems is to estimate the derivatives of a time series and then examine the relationships between derivatives using a differential equation model. One common approach for estimating derivatives, Local Linear Approximation (LLA), produces estimates with correlated errors. Depending on the specific differential equation model used, such correlated errors can lead to severely biased estimates of differential equation model parameters. This article shows that the fitting of dynamical systems can be improved by estimating derivatives in a manner similar to that used to fit orthogonal polynomials. Two applications using simulated data compare the proposed method and a generalized form of LLA when used to estimate derivatives and when used to estimate differential equation model parameters. A third application estimates the frequency of oscillation in observations of the monthly deaths from bronchitis, emphysema, and asthma in the United Kingdom. These data are publicly available in the statistical program R, and functions in R for the method presented are provided.
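
    A sketch of the windowed-polynomial alternative to LLA (the window length, order, and centering here are illustrative choices, not the article's exact estimator): fit a low-order polynomial in a sliding window with time re-centered at each point, and read the level and first derivative off the constant and linear coefficients:

        import numpy as np

        def poly_derivatives(x, t, order=2, window=7):
            """Level and first-derivative estimates from local polynomial fits."""
            half = window // 2
            d0 = np.full(x.size, np.nan)
            d1 = np.full(x.size, np.nan)
            for i in range(half, x.size - half):
                idx = slice(i - half, i + half + 1)
                tt = t[idx] - t[i]  # centering decorrelates the polynomial terms
                coef = np.polynomial.polynomial.polyfit(tt, x[idx], order)
                d0[i], d1[i] = coef[0], coef[1]
            return d0, d1

        rng = np.random.default_rng(6)
        t = np.linspace(0.0, 10.0, 200)
        x = np.sin(2 * np.pi * 0.3 * t) + 0.05 * rng.standard_normal(t.size)
        level, deriv = poly_derivatives(x, t)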

  4. An alternative approach to estimating rainfall rate by radar using propagation differential phase shift

    NASA Technical Reports Server (NTRS)

    Jameson, A. R.

    1994-01-01

    In this work it is shown that for frequencies from 3 to 13 GHz, the ratio of the specific propagation differential phase shift phi(sub DP) to the rainfall rate can be specified essentially independently of the form of the drop size distribution by a function only of the mass-weighted mean drop size D(sub m). This significantly reduces one source of substantial bias errors common to most other techniques for measuring rain by radar. For frequencies of 9 GHz and greater, the coefficient can be well estimated from the ratio of the specific differential attenuation to phi(sub DP), while at nonattenuating frequencies such as 3 GHz, the coefficient can be well estimated using the differential reflectivity. In practice it appears that this approach yields better estimates of the rainfall rate than any other current technique. The best results are most likely at 13.80 GHz, followed by those at 2.80 GHz. An optimum radar system for measuring rain should probably include components at both frequencies so that when signals at 13.8 GHz are lost because of attenuation, good measurements are still possible at the lower frequency.

  5. A novel aliasing-free subband information fusion approach for wideband sparse spectral estimation

    NASA Astrophysics Data System (ADS)

    Luo, Ji-An; Zhang, Xiao-Ping; Wang, Zhi

    2017-12-01

    Wideband sparse spectral estimation is generally formulated as a multi-dictionary/multi-measurement (MD/MM) problem which can be solved by using group sparsity techniques. In this paper, the MD/MM problem is reformulated as a single sparse indicative vector (SIV) recovery problem at the cost of introducing an additional system error. Thus, the number of unknowns is reduced greatly. We show that the system error can be neglected under certain conditions. We then present a new subband information fusion (SIF) method to estimate the SIV by jointly utilizing all the frequency bins. With orthogonal matching pursuit (OMP) leveraging the binary property of SIV's components, we develop a SIF-OMP algorithm to reconstruct the SIV. The numerical simulations demonstrate the performance of the proposed method.
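
    The OMP building block that SIF-OMP leverages is standard; a compact reference implementation is sketched below (the binary-SIV specialization and the SIF fusion step are not shown, and the problem sizes are invented):

        import numpy as np

        def omp(A, y, k):
            """Orthogonal matching pursuit: pick k columns greedily, refit each step."""
            r, support = y.copy(), []
            for _ in range(k):
                j = int(np.argmax(np.abs(A.T @ r)))  # column most correlated with r
                support.append(j)
                x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
                r = y - A[:, support] @ x_s          # residual after refit
            x = np.zeros(A.shape[1])
            x[support] = x_s
            return x

        rng = np.random.default_rng(7)
        A = rng.standard_normal((64, 256))
        A /= np.linalg.norm(A, axis=0)               # unit-norm dictionary columns
        x_true = np.zeros(256)
        x_true[[10, 100, 200]] = [1.0, -0.5, 2.0]
        y = A @ x_true + 0.01 * rng.standard_normal(64)
        print(np.flatnonzero(omp(A, y, 3)))          # recovered support: 10, 100, 200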

  6. Methods for estimating magnitude and frequency of floods in Montana based on data through 1983

    USGS Publications Warehouse

    Omang, R.J.; Parrett, Charles; Hull, J.A.

    1986-01-01

    Equations are presented for estimating flood magnitudes for ungaged sites in Montana based on data through 1983. The State was divided into eight regions based on hydrologic conditions, and separate multiple regression equations were developed for each region. These equations relate annual flood magnitudes and frequencies to basin characteristics and are applicable only to natural flow streams. In three of the regions, equations also were developed relating flood magnitudes and frequencies to basin characteristics and channel geometry measurements. The standard errors of estimate for an exceedance probability of 1% ranged from 39% to 87%. Techniques are described for estimating annual flood magnitude and flood frequency information at ungaged sites based on data from gaged sites on the same stream. Included are curves relating flood frequency information to drainage area for eight major streams in the State. Maximum known flood magnitudes in Montana are compared with estimated 1%-chance flood magnitudes and with maximum known floods in the United States. Values of flood magnitudes for selected exceedance probabilities and values of significant basin characteristics and channel geometry measurements for all gaging stations used in the analysis are tabulated. Included are 375 stations in Montana and 28 nearby stations in Canada and adjoining States. (Author's abstract)

  7. Methods for estimating magnitude and frequency of floods in Arizona, developed with unregulated and rural peak-flow data through water year 2010

    USGS Publications Warehouse

    Paretti, Nicholas V.; Kennedy, Jeffrey R.; Turney, Lovina A.; Veilleux, Andrea G.

    2014-01-01

    The regional regression equations were integrated into the U.S. Geological Survey's StreamStats program. The StreamStats program is a national map-based web application that allows the public to easily access published flood frequency and basin characteristic statistics. The interactive web application allows a user to select a point within a watershed (gaged or ungaged) and retrieve flood-frequency estimates derived from the current regional regression equations and geographic information system data within the selected basin. StreamStats provides users with an efficient and accurate means of retrieving the most up-to-date flood frequency and basin characteristic data. StreamStats is intended to provide consistent statistics, minimize user error, and reduce the need for large datasets and costly geographic information system software.

  8. Experimental Validation of Pulse Phase Tracking for X-Ray Pulsar Based

    NASA Technical Reports Server (NTRS)

    Anderson, Kevin

    2012-01-01

    Pulsars are a form of variable celestial source that have been shown to be usable as aids for autonomous, deep space navigation. Sources emitting in the X-ray band are particularly well suited to navigation because they allow smaller detector sizes. In this paper X-ray photons arriving from a pulsar are modeled as a non-homogeneous Poisson process. The method of pulse phase tracking is then investigated as a technique to measure the radial distance traveled by a spacecraft over an observation interval. A maximum-likelihood phase estimator (MLE) is used for the case where the observed frequency signal is constant. For the varying signal frequency case, an algorithm is used in which the observation window is broken up into smaller blocks over which an MLE is used. The outputs of this phase estimation process were then passed through a digital phase-locked loop (DPLL) in order to reduce the errors and produce estimates of the Doppler frequency. These phase tracking algorithms were tested both in a computer simulation environment and using the NASA Goddard Space Flight Center X-ray Navigation Laboratory Testbed (GXLT). This provided an experimental validation with photons being emitted by a modulated X-ray source and detected by a silicon-drift detector. Models of the Crab pulsar and the pulsar B1821-24 were used to generate test scenarios. Three different simulated detector trajectories were tracked by the phase tracking algorithm: a stationary case, one with constant velocity, and one with constant acceleration. All three were performed in one dimension along the line of sight to the pulsar. The first two had a constant signal frequency and the third had a time-varying frequency. All of the constant frequency cases were processed using the MLE, and it was shown that they tracked the initial phase within 0.15% for the simulations and 2.5% in the experiments, based on an average of ten runs. The MLE-DPLL cascade version of the phase tracking algorithm was used in the varying frequency case. This resulted in tracking of the phase and frequency by the DPLL outputs in both the simulation and experimental environments. The Crab pulsar was also tested experimentally with a higher-acceleration trajectory. In this case the phase error tended toward zero as the observation extended to 250 seconds, and the Doppler frequency error tended to zero in under 100 seconds.

  9. A Compact VLSI System for Bio-Inspired Visual Motion Estimation.

    PubMed

    Shi, Cong; Luo, Gang

    2018-04-01

    This paper proposes a bio-inspired visual motion estimation algorithm based on motion energy, along with its compact very-large-scale integration (VLSI) architecture using low-cost embedded systems. The algorithm mimics motion perception functions of retina, V1, and MT neurons in a primate visual system. It involves operations of ternary edge extraction, spatiotemporal filtering, motion energy extraction, and velocity integration. Moreover, we propose the concept of confidence map to indicate the reliability of estimation results on each probing location. Our algorithm involves only additions and multiplications during runtime, which is suitable for low-cost hardware implementation. The proposed VLSI architecture employs multiple (frame, pixel, and operation) levels of pipeline and massively parallel processing arrays to boost the system performance. The array unit circuits are optimized to minimize hardware resource consumption. We have prototyped the proposed architecture on a low-cost field-programmable gate array platform (Zynq 7020) running at 53-MHz clock frequency. It achieved 30-frame/s real-time performance for velocity estimation on 160 × 120 probing locations. A comprehensive evaluation experiment showed that the estimated velocity by our prototype has relatively small errors (average endpoint error < 0.5 pixel and angular error < 10°) for most motion cases.

  10. Satellite Estimation of Daily Land Surface Water Vapor Pressure Deficit from AMSR- E

    NASA Astrophysics Data System (ADS)

    Jones, L. A.; Kimball, J. S.; McDonald, K. C.; Chan, S. K.; Njoku, E. G.; Oechel, W. C.

    2007-12-01

    Vapor pressure deficit (VPD) is a key variable for monitoring land surface water and energy exchanges, and estimating plant water stress. Multi-frequency day/night brightness temperatures from the Advanced Microwave Scanning Radiometer on EOS Aqua (AMSR-E) were used to estimate daily minimum and average near surface (2 m) air temperatures across a North American boreal-Arctic transect. A simple method for determining daily mean VPD (Pa) from AMSR-E air temperature retrievals was developed and validated against observations across a regional network of eight study sites ranging from boreal grassland and forest to arctic tundra. The method assumes that the dew point and minimum daily air temperatures tend to equilibrate in areas with low night time temperatures and relatively moist conditions. This assumption was tested by comparing the VPD algorithm results derived from site daily temperature observations against results derived from AMSR-E retrieved temperatures alone. An error analysis was conducted to determine the amount of error introduced in VPD estimates given known levels of error in satellite retrieved temperatures. Results indicate that the assumption generally holds for the high latitude study sites except for arid locations in mid-summer. VPD estimates using the method with AMSR-E retrieved temperatures compare favorably with site observations. The method can be applied to land surface temperature retrievals from any sensor with day and night surface or near-surface thermal measurements and shows potential for inferring near-surface wetness conditions where dense vegetation may hinder surface soil moisture retrievals from low-frequency microwave sensors. This work was carried out at The University of Montana, at San Diego State University, and at the Jet Propulsion Laboratory, California Institute of Technology, under contract to the National Aeronautics and Space Administration.
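
    The central assumption (dew point approximately equal to the daily minimum air temperature) reduces daily VPD to a difference of two saturation vapor pressures; a sketch using the widely used FAO-56 Tetens-type formula (the study may use a different saturation formula, and the temperatures are invented):

        import numpy as np

        def svp_kpa(t_c):
            """Saturation vapor pressure (kPa) via the FAO-56 Tetens-type formula."""
            return 0.6108 * np.exp(17.27 * t_c / (t_c + 237.3))

        def daily_vpd_pa(t_avg_c, t_min_c):
            """Daily mean VPD (Pa), assuming dew point ~ daily minimum temperature."""
            return 1000.0 * (svp_kpa(t_avg_c) - svp_kpa(t_min_c))

        print(daily_vpd_pa(t_avg_c=15.0, t_min_c=5.0))  # ~833 Pa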

  11. A Bayesian estimation of the helioseismic solar age

    NASA Astrophysics Data System (ADS)

    Bonanno, A.; Fröhlich, H.-E.

    2015-08-01

    Context. The helioseismic determination of the solar age has been the subject of several studies because it provides us with an independent estimation of the age of the solar system. Aims: We present Bayesian estimates of the helioseismic age of the Sun, which are determined by means of calibrated solar models that employ different equations of state and nuclear reaction rates. Methods: We use 17 frequency separation ratios r02(n) = (ν(n, l=0) − ν(n−1, l=2)) / (ν(n, l=1) − ν(n−1, l=1)) from 8640 days of low-ℓ BiSON frequencies and consider three likelihood functions that depend on the handling of the errors of these r02(n) ratios. Moreover, we employ the 2010 CODATA recommended values for Newton's constant, solar mass, and radius to calibrate a large grid of solar models spanning a conceivable range of solar ages. Results: It is shown that the most constrained posterior distribution of the solar age for models employing the Irwin EOS with NACRE reaction rates leads to t⊙ = 4.587 ± 0.007 Gyr, while models employing the Irwin EOS and the Adelberger et al. (2011, Rev. Mod. Phys., 83, 195) reaction rates have t⊙ = 4.569 ± 0.006 Gyr. Implementing the OPAL EOS in the solar models results in reduced evidence ratios (Bayes factors) and leads to an age that is not consistent with the meteoritic dating of the solar system. Conclusions: An estimate of the solar age that relies on a helioseismic age indicator such as r02(n) turns out to be essentially independent of the type of likelihood function. However, with respect to model selection, abandoning any information concerning the errors of the r02(n) ratios leads to inconclusive results, and this stresses the importance of evaluating the trustworthiness of error estimates.

  12. Methods for estimating magnitude and frequency of 1-, 3-, 7-, 15-, and 30-day flood-duration flows in Arizona

    USGS Publications Warehouse

    Kennedy, Jeffrey R.; Paretti, Nicholas V.; Veilleux, Andrea G.

    2014-01-01

    Regression equations, which allow predictions of n-day flood-duration flows for selected annual exceedance probabilities at ungaged sites, were developed using generalized least-squares regression and flood-duration flow frequency estimates at 56 streamgaging stations within a single, relatively uniform physiographic region in the central part of Arizona, between the Colorado Plateau and Basin and Range Province, called the Transition Zone. Drainage area explained most of the variation in the n-day flood-duration annual exceedance probabilities, but mean annual precipitation and mean elevation were also significant variables in the regression models. Standard error of prediction for the regression equations varies from 28 to 53 percent and generally decreases with increasing n-day duration. Outside the Transition Zone there are insufficient streamgaging stations to develop regression equations, but flood-duration flow frequency estimates are presented at select streamgaging stations.

  13. Rethinking Headache Chronification

    PubMed Central

    Turner, Dana P.; Smitherman, Todd A.; Penzien, Donald B.; Lipton, Richard B.; Houle, Timothy T.

    2013-01-01

    The objective of this series is to examine several threats to the interpretation of headache chronification studies that arise from methodological issues. The study of headache chronification has extensively used longitudinal designs with two or more measurement occasions. Unfortunately, application of these designs, when combined with the common practice of extreme score selection as well as the extant challenges in measuring headache frequency rates (e.g., unreliability, regression to the mean), induces substantive threats to accurate interpretation of findings. Partitioning the amount of observed variance in rates of chronification and remission attributable to regression artifacts is a critical yet previously overlooked step toward learning more about headache as a potentially progressive disease. In this series on rethinking headache chronification, we provide an overview of methodological issues in this area (this paper), highlight the influence of rounding error on estimates of headache frequency (second paper), examine the influence of random error and regression artifacts on estimates of chronification and remission (third paper), and consider future directions for this line of research (fourth paper). PMID:23721237

  14. Nonlinear convergence active vibration absorber for single and multiple frequency vibration control

    NASA Astrophysics Data System (ADS)

    Wang, Xi; Yang, Bintang; Guo, Shufeng; Zhao, Wenqiang

    2017-12-01

    This paper presents a nonlinear convergence algorithm for an active dynamic undamped vibration absorber (ADUVA). The damping of the absorber is ignored in this algorithm to strengthen the vibration-suppressing effect and simplify the algorithm at the same time. The simulation and experimental results indicate that this nonlinear convergence ADUVA can significantly suppress vibration caused by both single- and multiple-frequency excitation. The proposed nonlinear algorithm is composed of equivalent dynamic modeling equations and a frequency estimator. Both the single- and multiple-frequency ADUVAs are modeled by the same mechanical structure, with a mass body and a voice coil motor (VCM). The nonlinear convergence estimator is applied to simultaneously satisfy the requirements of fast convergence rate and small steady-state frequency error, which are incompatible for a linear convergence estimator. The convergence of the nonlinear algorithm is mathematically proved, and its non-divergent characteristic is theoretically guaranteed. The vibration suppression experiments demonstrate that the nonlinear ADUVA converges faster and achieves greater oscillation attenuation than the linear ADUVA.

  15. Impact of sampling strategy on stream load estimates in till landscape of the Midwest

    USGS Publications Warehouse

    Vidon, P.; Hubbard, L.E.; Soyeux, E.

    2009-01-01

    Accurately estimating various solute loads in streams during storms is critical for determining maximum daily loads for regulatory purposes. This study investigates the impact of sampling strategy on solute load estimates in streams in the US Midwest. Three different solute types (nitrate, magnesium, and dissolved organic carbon (DOC)) and three sampling strategies are assessed. Regardless of the method, the average error on nitrate loads is higher than for magnesium or DOC loads, and all three methods generally underestimate DOC loads and overestimate magnesium loads. Increasing sampling frequency only slightly improves the accuracy of solute load estimates but generally improves the precision of load calculations. This type of investigation is critical for water management and environmental assessment so that error on solute load calculations can be taken into account by landscape managers and sampling strategies optimized as a function of monitoring objectives. © 2008 Springer Science+Business Media B.V.

  16. Rounding Behavior in the Reporting of Headache Frequency Complicates Headache Chronification Research

    PubMed Central

    Houle, Timothy T.; Turner, Dana P.; Houle, Thomas A.; Smitherman, Todd A.; Martin, Vincent; Penzien, Donald B.; Lipton, Richard B.

    2013-01-01

    Objectives To characterize the extent of measurement error arising from rounding in headache frequency reporting (days per month) in a population sample of headache sufferers. Background When reporting numerical health information, individuals tend to round their estimates. The tendency to round to the nearest 5 days when reporting headache frequency can distort distributions and engender unreliability in frequency estimates in both clinical and research contexts. Methods This secondary analysis of the 2005 American Migraine Prevalence and Prevention study (AMPP) survey characterized the population distribution of 30-day headache frequency among community headache sufferers and determined the extent of numerical rounding (“heaping”) in self-reported data. Headache frequency distributions (days per month) were examined using a simplified version of Wang and Heitjan’s (2008) approach to heaping to estimate the probability that headache sufferers round to a multiple of 5 when providing frequency reports. Multiple imputation was used to estimate a theoretical “true” headache frequency. Results Of the 24,000 surveys, headache frequency data were available for 15,976 respondents diagnosed with migraine (68.6%), probable migraine (8.3%), or episodic tension-type headache (10.0%); the remainder had other headache types. The mean number of headaches days/month was 3.7 (SD = 5.6). Examination of the distribution of headache frequency reports revealed a disproportionate number of responses centered on multiples of 5 days. The odds that headache frequency was rounded to 5 increased by 24% with each one-day increase in headache frequency (OR: 1.24, 95% CI: 1.23 to 1.25), indicating that heaping occurs most commonly at higher headache frequencies. Women were more likely to round than men, and rounding decreased with increasing age and increased with symptoms of depression. Conclusions Because of the coarsening induced by rounding, caution should be used when distinguishing between episodic and chronic headache sufferers using self-reported estimates of headache frequency. Unreliability in frequency estimates is of particular concern among individuals with high-frequency (chronic) headache. Employing shorter recall intervals when assessing headache frequency, preferably using daily diaries, may improve accuracy and allow more precise estimation of chronic migraine onset and remission. PMID:23721238
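
    The heaping phenomenon is easy to reproduce in simulation; the sketch below uses an illustrative logistic rounding probability that rises with frequency (not the fitted AMPP coefficients) and produces the characteristic spikes at multiples of 5:

        import numpy as np

        rng = np.random.default_rng(8)
        true = rng.poisson(4.0, 20000).clip(0, 30)  # latent 30-day headache counts

        # rounding probability rises with frequency (illustrative logistic curve)
        p_round = 1.0 / (1.0 + np.exp(-(true - 8.0) / 3.0))
        heaped = np.where(rng.random(true.size) < p_round,
                          5 * np.round(true / 5.0), true).astype(int)

        counts = np.bincount(heaped, minlength=31)
        print(counts[4:12])  # spikes at 5 and 10 relative to their neighbors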

  17. Evaluation of probe-induced flow distortion of Campbell CSAT3 sonic anemometers by numerical simulation

    NASA Astrophysics Data System (ADS)

    Mauder, M.; Huq, S.; De Roo, F.; Foken, T.; Manhart, M.; Schmid, H. P. E.

    2017-12-01

    The Campbell CSAT3 sonic anemometer is one of the most widely used instruments for eddy-covariance measurement. However, conflicting estimates for the probe-induced flow distortion error of this instrument have been reported recently, and those error estimates range between 3% and 14% for the measurement of vertical velocity fluctuations. This large discrepancy between the different studies can probably be attributed to the different experimental approaches applied. In order to overcome the limitations of both field intercomparison experiments and wind tunnel experiments, we propose a new approach that relies on virtual measurements in a large-eddy simulation (LES) environment. In our experimental set-up, we generate horizontal and vertical velocity fluctuations at frequencies that typically dominate the turbulence spectra of the surface layer. The probe-induced flow distortion error of a CSAT3 is then quantified by this numerical wind tunnel approach while the statistics of the prescribed inflow signal are taken as reference or etalon. The resulting relative error is found to range from 3% to 7% and from 1% to 3% for the standard deviation of the vertical and the horizontal velocity component, respectively, depending on the orientation of the CSAT3 in the flow field. We further demonstrate that these errors are independent of the frequency of fluctuations at the inflow of the simulation. The analytical corrections proposed by Kaimal et al. (Proc Dyn Flow Conf, 551-565, 1978) and Horst et al. (Boundary-Layer Meteorol, 155, 371-395, 2015) are compared against our simulated results, and we find that they indeed reduce the error by up to three percentage points. However, these corrections fail to reproduce the azimuth-dependence of the error that we observe. Moreover, we investigate the general Reynolds number dependence of the flow distortion error by more detailed idealized simulations.

  18. [A method of measuring presampled modulation transfer function using a rationalized approximation of geometrical edge slope].

    PubMed

    Honda, Michitaka

    2014-04-01

    Several improvements were implemented in the edge method for measuring the presampled modulation transfer function (MTF). A new technique for estimating the edge angle was developed by applying a principal components analysis algorithm; the estimation error was statistically confirmed to be less than 0.01 even in the presence of quantum noise. Secondly, the geometrical edge slope was approximated by a rational number, making it possible to obtain an oversampled edge response function (ESF) with equal sampling intervals. Thirdly, the final MTF was estimated as the average of multiple MTFs calculated for local areas. This averaging operation eliminates the errors caused by the rational approximation. Computer-simulated images were used to evaluate the accuracy of the method. The relative error between the estimated MTF and the theoretical MTF at the Nyquist frequency was less than 0.5% when the MTF was expressed as a sinc function. For MTFs representing an indirect detector and a phase-contrast detector, good agreement was also observed between the estimated and theoretical MTFs. The high accuracy of the MTF estimation was confirmed even for edge angles of around 10 degrees, which suggests the potential for simplification of the measurement conditions. The proposed method could be incorporated into an automated measurement technique using a software application.
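
    The pipeline downstream of the angle-and-resampling stage is classical; a minimal sketch with a synthetic oversampled edge (the article's angle estimation, rational-slope resampling, and local-area averaging are not reproduced):

        import numpy as np

        # synthetic 4x-oversampled edge spread function (ESF), pixel units
        n, oversample = 256, 4
        x = (np.arange(n * oversample) - n * oversample / 2) / oversample
        esf = 0.5 * (1.0 + np.tanh(x / 0.8))   # blurred edge profile

        lsf = np.gradient(esf)                 # line spread function
        lsf *= np.hanning(lsf.size)            # taper against truncation ripple
        mtf = np.abs(np.fft.rfft(lsf))
        mtf /= mtf[0]                          # unity at zero frequency

        f = np.fft.rfftfreq(lsf.size, d=1.0 / oversample)  # cycles per pixel
        print(np.interp(0.5, f, mtf))          # MTF at the detector Nyquist frequency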

  19. Single Event Effects (SEE) Testing of Embedded DSP Cores within Microsemi RTAX4000D Field Programmable Gate Array (FPGA) Devices

    NASA Technical Reports Server (NTRS)

    Perez, Christopher E.; Berg, Melanie D.; Friendlich, Mark R.

    2011-01-01

    The motivation for this work is to: (1) accurately characterize digital signal processor (DSP) core single-event effect (SEE) behavior; (2) test DSP cores across a large frequency range and across various input conditions; (3) isolate SEE analysis to the DSP cores alone; (4) interpret SEE analysis in terms of single-event upsets (SEUs) and single-event transients (SETs); and (5) provide flight missions with accurate estimates of DSP core error rates and error signatures.

  20. Demodulation Algorithms for the Ofdm Signals in the Time- and Frequency-Scattering Channels

    NASA Astrophysics Data System (ADS)

    Bochkov, G. N.; Gorokhov, K. V.; Kolobkov, A. V.

    2016-06-01

    We consider a method based on the generalized maximum-likelihood rule for solving the problem of reception of the signals with orthogonal frequency division multiplexing of their harmonic components (OFDM signals) in the time- and frequency-scattering channels. The coherent and incoherent demodulators effectively using the time scattering due to the fast fading of the signal are developed. Using computer simulation, we performed comparative analysis of the proposed algorithms and well-known signal-reception algorithms with equalizers. The proposed symbol-by-symbol detector with decision feedback and restriction of the number of searched variants is shown to have the best bit-error-rate performance. It is shown that under conditions of the limited accuracy of estimating the communication-channel parameters, the incoherent OFDM-signal detectors with differential phase-shift keying can ensure a better bit-error-rate performance compared with the coherent OFDM-signal detectors with absolute phase-shift keying.

  1. Forecasting space weather over short horizons: Revised and updated estimates

    NASA Astrophysics Data System (ADS)

    Reikard, Gordon

    2018-07-01

    Space weather reflects multiple causes. There is a clear influence of the sun on the near-earth environment. Solar activity shows evidence of chaotic properties, implying that prediction may be limited beyond short horizons. At the same time, geomagnetic activity also reflects the rotation of the earth's core and local currents in the ionosphere. The combination of influences means that geomagnetic indexes behave like multifractals, exhibiting nonlinear variability with intermittent outliers. This study tests a range of models: regressions, neural networks, and a frequency domain algorithm. Forecasting tests are run for sunspots and irradiance from 1820 onward, for the Aa geomagnetic index from 1868 onward, and for the Am index from 1959 onward, over horizons of 1-7 days. For irradiance and sunspots, persistence actually does better over short horizons; none of the other models clearly dominates. For the geomagnetic indexes, the persistence method does badly, while the neural net also shows large errors. The remaining models all achieve about the same level of accuracy. The errors are in the range of 48% at 1 day and 54% at all later horizons. Additional tests are run over horizons of 1-4 weeks. At 1 week, the best models reduce the error to about 35%. Over horizons of four weeks, the model errors increase. The findings are somewhat pessimistic. Over short horizons, geomagnetic activity exhibits so much random variation that the forecast errors are extremely high. Over slightly longer horizons, there is some improvement from estimating in the frequency domain, but not a great deal. Including solar activity in the models does not yield any improvement in accuracy.
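
    As a concrete baseline, the persistence forecast and its percentage error can be computed in a few lines. A sketch on synthetic data; the AR(1)-like series below is a stand-in, not the Aa or Am index:

```python
import numpy as np

def persistence_mape(series, horizon):
    """Mean absolute percentage error of the persistence forecast
    (the value h days ahead equals today's value); assumes a
    strictly positive index series."""
    actual, forecast = series[horizon:], series[:-horizon]
    return 100.0 * np.mean(np.abs(actual - forecast) / actual)

rng = np.random.default_rng(0)
x = np.empty(2000)
x[0] = 20.0
for i in range(1, x.size):                 # noisy autoregressive series
    x[i] = max(1.0, 0.7 * x[i - 1] + 6.0 + 5.0 * rng.normal())
print(f"1-day persistence error: {persistence_mape(x, 1):.0f} %")
```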

  2. Inversion of multi-frequency electromagnetic induction data for 3D characterization of hydraulic conductivity

    USGS Publications Warehouse

    Brosten, Troy R.; Day-Lewis, Frederick D.; Schultz, Gregory M.; Curtis, Gary P.; Lane, John W.

    2011-01-01

    Electromagnetic induction (EMI) instruments provide rapid, noninvasive, and spatially dense data for characterization of soil and groundwater properties. Data from multi-frequency EMI tools can be inverted to provide quantitative electrical conductivity estimates as a function of depth. In this study, multi-frequency EMI data collected across an abandoned uranium mill site near Naturita, Colorado, USA, are inverted to produce a vertical distribution of electrical conductivity (EC) across the site. The relation between measured apparent electrical conductivity (ECa) and hydraulic conductivity (K) is weak (correlation coefficient of 0.20), whereas the correlation between the depth-dependent EC obtained from the inversions and K is sufficiently strong to be used for hydrologic estimation (correlation coefficient of −0.62). Depth-specific EC values were correlated with co-located K measurements to develop a site-specific ln(EC)-ln(K) relation. This petrophysical relation was applied to produce a spatially detailed map of K across the study area. A synthetic example based on ECa values at the site was used to assess model resolution and correlation loss given variations in depth and/or measurement error. Results from synthetic modeling indicate that optimum correlation with K occurs at ~0.5 m, followed by a gradual correlation loss of 90% at 2.3 m. These results are consistent with an analysis of depth of investigation (DOI) given the range of frequencies, transmitter-receiver separation, and measurement errors for the field data. DOIs were estimated at 2.0 ± 0.5 m depending on the soil conductivities. A 4-layer model with varying thicknesses was used to invert the ECa to maximize available information within the aquifer region for improved correlations with K. Results show improved correlation between K and the corresponding inverted EC at similar depths, underscoring the importance of inversion in using multi-frequency EMI data for hydrologic estimation.
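
    The site-specific ln(EC)-ln(K) relation is an ordinary least-squares fit in log-log space. A minimal sketch with illustrative numbers, not the Naturita measurements:

```python
import numpy as np

# Co-located inverted EC (mS/m) and hydraulic conductivity K (m/d);
# values are illustrative only
ec = np.array([12.0, 18.5, 25.0, 31.0, 44.0])
k = np.array([8.0, 5.5, 3.2, 2.4, 1.1])

# Site-specific ln(EC)-ln(K) petrophysical relation by least squares
b, a = np.polyfit(np.log(ec), np.log(k), 1)    # slope, intercept
k_pred = np.exp(a + b * np.log(30.0))          # predict K at EC = 30 mS/m
print(f"ln(K) = {a:.2f} + {b:.2f} ln(EC); K(EC=30) ~ {k_pred:.2f} m/d")
```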

  3. Inversion of multi-frequency electromagnetic induction data for 3D characterization of hydraulic conductivity

    USGS Publications Warehouse

    Brosten, T.R.; Day-Lewis, F. D.; Schultz, G.M.; Curtis, G.P.; Lane, J.W.

    2011-01-01

    Electromagnetic induction (EMI) instruments provide rapid, noninvasive, and spatially dense data for characterization of soil and groundwater properties. Data from multi-frequency EMI tools can be inverted to provide quantitative electrical conductivity estimates as a function of depth. In this study, multi-frequency EMI data collected across an abandoned uranium mill site near Naturita, Colorado, USA, are inverted to produce a vertical distribution of electrical conductivity (EC) across the site. The relation between measured apparent electrical conductivity (ECa) and hydraulic conductivity (K) is weak (correlation coefficient of 0.20), whereas the correlation between the depth-dependent EC obtained from the inversions and K is sufficiently strong to be used for hydrologic estimation (correlation coefficient of −0.62). Depth-specific EC values were correlated with co-located K measurements to develop a site-specific ln(EC)-ln(K) relation. This petrophysical relation was applied to produce a spatially detailed map of K across the study area. A synthetic example based on ECa values at the site was used to assess model resolution and correlation loss given variations in depth and/or measurement error. Results from synthetic modeling indicate that optimum correlation with K occurs at ~0.5 m, followed by a gradual correlation loss of 90% at 2.3 m. These results are consistent with an analysis of depth of investigation (DOI) given the range of frequencies, transmitter-receiver separation, and measurement errors for the field data. DOIs were estimated at 2.0 ± 0.5 m depending on the soil conductivities. A 4-layer model with varying thicknesses was used to invert the ECa to maximize available information within the aquifer region for improved correlations with K. Results show improved correlation between K and the corresponding inverted EC at similar depths, underscoring the importance of inversion in using multi-frequency EMI data for hydrologic estimation. © 2011.

  4. Soil permittivity response to bulk electrical conductivity for selected soil water sensors

    USDA-ARS?s Scientific Manuscript database

    Bulk electrical conductivity can dominate the low frequency dielectric loss spectrum in soils, masking changes in the real permittivity and causing errors in estimated water content. We examined the dependence of measured apparent permittivity (Ka) on bulk electrical conductivity in contrasting soil...

  5. Estimation of chromatic errors from broadband images for high contrast imaging: sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Sirbu, Dan; Belikov, Ruslan

    2016-01-01

    Many concepts have been proposed to enable direct imaging of planets around nearby stars, which would enable spectroscopic observations of their atmospheres and the potential discovery of biomarkers. The main technical challenge associated with direct imaging of exoplanets is to effectively control both the diffraction and scattered light from the star so that the dim planetary companion can be seen. Use of an internal coronagraph with an adaptive optical system for wavefront correction is one of the most mature methods and is being developed as an instrument addition to the WFIRST-AFTA space mission. In addition, instruments such as GPI and SPHERE are already being used on the ground and are yielding spectra of giant planets. For the deformable mirror (DM) to recover a dark hole region with sufficiently high contrast in the image plane, mid-spatial frequency wavefront errors must be estimated. To date, most broadband lab demonstrations use narrowband filters to obtain an estimate of the chromaticity of the wavefront error, and this can consume a large percentage of the total integration time. Previously, we have proposed a method to estimate the chromaticity of wavefront errors using only broadband images; we have demonstrated that under idealized conditions wavefront errors can be estimated from images composed of discrete wavelengths. This is achieved by using DM probes with sufficient spatially localized chromatic diversity. Here we report on the results of a study of the performance of this method with respect to realistic broadband images including noise. Additionally, we study optimal probe patterns that enable a reduction of the number of probes used, and compare the integration time with narrowband and IFS estimation methods.

  6. Voluntary EMG-to-force estimation with a multi-scale physiological muscle model

    PubMed Central

    2013-01-01

    Background EMG-to-force estimation based on muscle models for voluntary contraction has many applications in human motion analysis. The so-called Hill model is recognized as a standard model for this practical use. However, it is a phenomenological model whereby muscle activation, force-length, and force-velocity properties are considered independently. Perreault reported that Hill modeling errors were large for different firing frequencies, levels of activation, and speeds of contraction, possibly owing to the lack of coupling between activation and force-velocity properties. In this paper, we discuss EMG-force estimation with a multi-scale physiology-based model, which has a link to underlying cross-bridge dynamics. Unlike the Hill model, the proposed method provides dual dynamics of recruitment and calcium activation. Methods The ankle torque was measured for plantar flexion along with EMG measurements of the medial gastrocnemius (GAS) and soleus (SOL). In addition to the Hill representation of the passive elements, three models of the contractile parts were compared. Using common EMG signals during isometric contraction in four able-bodied subjects, torque was estimated by the linear Hill model, the nonlinear Hill model, and the multi-scale physiological model that refers to Huxley theory. The comparison was made on a normalized scale against the case of maximum voluntary contraction. Results The estimation results obtained with the multi-scale model showed the best performance in both fast-short and slow-long contractions in randomized tests for all four subjects. The RMS errors improved with the nonlinear Hill model compared with the linear Hill model; however, it showed limitations in accounting for different speeds of contraction. The average error was 16.9% with the linear Hill model and 9.3% with the modified Hill model. In contrast, the error of the multi-scale model was 6.1%, while maintaining uniform estimation performance in both fast and slow contraction schemes. Conclusions We introduced a novel approach that allows EMG-force estimation based on a multi-scale physiology model integrating the Hill approach for the passive elements and microscopic cross-bridge representations for the contractile element. The experimental evaluation highlights estimation improvements, especially over a larger range of contraction conditions, through integration of the neural activation frequency property and the force-velocity relationship via cross-bridge dynamics. PMID:24007560

  7. Lognormal kriging for the assessment of reliability in groundwater quality control observation networks

    USGS Publications Warehouse

    Candela, L.; Olea, R.A.; Custodio, E.

    1988-01-01

    Groundwater quality observation networks are examples of discontinuous sampling on variables presenting spatial continuity and highly skewed frequency distributions. Anywhere in the aquifer, lognormal kriging provides estimates of the variable being sampled and a standard error of the estimate. The average and the maximum standard error within the network can be used to dynamically improve the network sampling efficiency or find a design able to assure a given reliability level. The approach does not require the formulation of any physical model for the aquifer or any actual sampling of hypothetical configurations. A case study is presented using the network monitoring salty water intrusion into the Llobregat delta confined aquifer, Barcelona, Spain. The variable chloride concentration used to trace the intrusion exhibits sudden changes within short distances which make the standard error fairly invariable to changes in sampling pattern and to substantial fluctuations in the number of wells. © 1988.

  8. Estimation of global snow cover using passive microwave data

    NASA Astrophysics Data System (ADS)

    Chang, Alfred T. C.; Kelly, Richard E.; Foster, James L.; Hall, Dorothy K.

    2003-04-01

    This paper describes an approach to estimate global snow cover using satellite passive microwave data. Snow cover is detected using the high-frequency scattering signal from natural microwave radiation, which is observed by passive microwave instruments. Developed for the retrieval of global snow depth and snow water equivalent using the Advanced Microwave Scanning Radiometer EOS (AMSR-E), the algorithm uses passive microwave radiation along with a microwave emission model and a snow grain growth model to estimate snow depth. The microwave emission model is based on the Dense Media Radiative Transfer (DMRT) model, which uses the quasi-crystalline approach and sticky particle theory to predict the brightness temperature from a single-layered snowpack. The grain growth model is a generic single-layer model based on an empirical approach to predict snow grain size evolution with time. Gridded to the 25 km EASE-Grid projection, a daily record of Special Sensor Microwave Imager (SSM/I) snow depth estimates was generated for December 2000 to March 2001. The estimates are tested using ground measurements from two continental-scale river catchments (the Nelson River in Canada and the Ob River in Russia). This regional-scale testing of the algorithm shows that the average daily snow depth retrieval standard error between estimated and measured snow depths ranges from 0 to 40 cm relative to point observations. Bias characteristics are different for each basin. A fraction of the error is related to uncertainties about the grain growth initialization states and about grain size changes through the winter season, which directly affect the parameterization of the snow depth estimation in the DMRT model. Also, the algorithm does not include a correction for forest cover, and this effect is clearly observed in the retrieval. Finally, error is also related to scale differences between in situ ground measurements and area-integrated satellite estimates. With AMSR-E data, improvements to snow depth and water equivalent estimates are expected, since AMSR-E will have twice the spatial resolution of the SSM/I and will be able to better characterize the subnivean snow environment from an expanded range of microwave frequencies.
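
    Retrievals of this family exploit the fact that snow scatters 37 GHz radiation more strongly than 19 GHz, so the channel difference grows with snow depth. A toy sketch of the classic spectral-difference form; the 1.59 cm/K coefficient is the early Chang et al. value, quoted here for illustration only, and the AMSR-E algorithm described above is considerably more elaborate:

```python
def snow_depth_cm(tb19h, tb37h, coeff=1.59):
    """Spectral-difference snow depth retrieval (cm): scattering by
    snow depresses the 37 GHz brightness temperature relative to
    19 GHz; 'coeff' (cm/K) is illustrative, not operational."""
    return max(coeff * (tb19h - tb37h), 0.0)

print(snow_depth_cm(tb19h=245.0, tb37h=225.0))   # -> 31.8 cm
```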

  9. An Initial Assessment of the Surface Reference Technique Applied to Data from the Dual-Frequency Precipitation Radar (DPR) on the GPM Satellite

    NASA Technical Reports Server (NTRS)

    Meneghini, Robert; Kim, Hyokyung; Liao, Liang; Jones, Jeffrey A.; Kwiatkowski, John M.

    2015-01-01

    It has long been recognized that path-integrated attenuation (PIA) can be used to improve precipitation estimates from high-frequency weather radar data. One approach that provides an estimate of this quantity from airborne or spaceborne radar data is the surface reference technique (SRT), which uses measurements of the surface cross section in the presence and absence of precipitation. Measurements from the dual-frequency precipitation radar (DPR) on the Global Precipitation Measurement (GPM) satellite afford the first opportunity to test the method for spaceborne radar data at Ka band as well as for the Ku-band-Ka-band combination. The study begins by reviewing the basis of the single- and dual-frequency SRT. As the performance of the method is closely tied to the behavior of the normalized radar cross section (NRCS or sigma(0)) of the surface, the statistics of sigma(0) derived from DPR measurements are given as a function of incidence angle and frequency for ocean and land backgrounds over a 1-month period. Several independent estimates of the PIA, formed by means of different surface reference datasets, can be used to test the consistency of the method since, in the absence of error, the estimates should be identical. Along with theoretical considerations, the comparisons provide an initial assessment of the performance of the single- and dual-frequency SRT for the DPR. The study finds that the dual-frequency SRT can provide improvement in the accuracy of path attenuation estimates relative to the single-frequency method, particularly at Ku band.
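
    In essence, the SRT infers the PIA from the drop in the surface return, and the dual-frequency variant differences the two bands so that surface-reference errors common to both cancel. A schematic sketch with illustrative numbers in dB, not DPR measurements:

```python
def pia_single(sigma0_ref_db, sigma0_rain_db):
    """Single-frequency SRT: path-integrated attenuation (dB) as the
    drop of the surface cross section measured in rain relative to
    the rain-free reference."""
    return sigma0_ref_db - sigma0_rain_db

def pia_differential(ku_ref, ku_rain, ka_ref, ka_rain):
    """Dual-frequency SRT: differential PIA (Ka minus Ku, dB); errors
    in the surface reference common to both bands cancel."""
    return pia_single(ka_ref, ka_rain) - pia_single(ku_ref, ku_rain)

# Illustrative sigma0 values (dB)
print(pia_single(8.0, 5.5))                          # 2.5 dB at Ku
print(pia_differential(ku_ref=8.0, ku_rain=5.5,
                       ka_ref=9.0, ka_rain=3.0))     # 3.5 dB
```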

  10. An Autonomous Satellite Time Synchronization System Using Remotely Disciplined VC-OCXOs.

    PubMed

    Gu, Xiaobo; Chang, Qing; Glennon, Eamonn P; Xu, Baoda; Dempseter, Andrew G; Wang, Dun; Wu, Jiapeng

    2015-07-23

    An autonomous remote clock control system is proposed to provide time synchronization and frequency syntonization for satellite-to-satellite or ground-to-satellite time transfer, with the system comprising on-board voltage-controlled oven-controlled crystal oscillators (VC-OCXOs) that are disciplined to a remote master atomic clock or oscillator. The synchronization loop aims to provide autonomous operation over extended periods, to be widely applicable to a variety of scenarios, and to be robust. A new architecture comprising the use of frequency division duplex (FDD), synchronous time division duplex (STDD), and code division multiple access (CDMA) with a centralized topology is employed. This new design utilizes dual one-way ranging methods to precisely measure the clock error, adopts least squares (LS) methods to predict the clock error, and employs a third-order phase-locked loop (PLL) to generate the voltage control signal. A general functional model for this system is proposed, and the error sources and delays that affect the time synchronization are discussed. Related algorithms for estimating and correcting these errors are also proposed. The performance of the proposed system is simulated, and guidance for selecting the clock is provided.
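
    The LS prediction step amounts to fitting a low-order polynomial clock model (offset, drift, aging) to the measured clock error and extrapolating it. A minimal sketch with illustrative numbers, not the paper's simulation parameters:

```python
import numpy as np

# Clock error (ns) measured by dual one-way ranging at epochs t (s)
t = np.array([0.0, 60.0, 120.0, 180.0, 240.0])
err_ns = np.array([5.0, 12.1, 19.4, 27.0, 34.9])

# Quadratic clock model: offset + drift*t + (aging/2)*t**2
coeffs = np.polyfit(t, err_ns, 2)
predicted = np.polyval(coeffs, 300.0)    # extrapolate one epoch ahead
print(f"predicted clock error at t = 300 s: {predicted:.1f} ns")
```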

  11. Adaptive Window Zero-Crossing-Based Instantaneous Frequency Estimation

    NASA Astrophysics Data System (ADS)

    Sekhar, S. Chandra; Sreenivas, TV

    2004-12-01

    We address the problem of estimating the instantaneous frequency (IF) of a real-valued constant-amplitude time-varying sinusoid. Estimation of polynomial IF is formulated using the zero-crossings of the signal. We propose an algorithm to estimate nonpolynomial IF by local approximation with a low-order polynomial over a short segment of the signal. This involves the choice of a window length to minimize the mean square error (MSE). The optimal window length found by directly minimizing the MSE is a function of the higher-order derivatives of the IF, which are not available a priori. However, an optimum solution is formulated using an adaptive window technique based on the concept of intersection of confidence intervals. The adaptive algorithm enables minimum MSE-IF (MMSE-IF) estimation without requiring a priori information about the IF. Simulation results show that the adaptive window zero-crossing-based IF estimation method is superior to fixed-window methods and is also better than adaptive spectrogram and adaptive Wigner-Ville distribution (WVD)-based IF estimators for different signal-to-noise ratios (SNRs).
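
    The raw observable behind this family of estimators is simple: each half period between successive zero crossings yields one coarse IF sample. A minimal sketch of that first step (without the polynomial fitting or adaptive windowing described above):

```python
import numpy as np

def if_from_zero_crossings(x, fs):
    """Instantaneous-frequency estimates (Hz) from successive
    zero-crossing intervals of a real sinusoid sampled at fs;
    each half period between crossings gives one IF sample."""
    signs = np.signbit(x)
    idx = np.where(signs[:-1] != signs[1:])[0]    # crossing indices
    half_periods = np.diff(idx) / fs              # seconds per half cycle
    return 0.5 / half_periods                     # f = 1 / (2 * T_half)

fs = 8000.0
t = np.arange(0, 0.1, 1 / fs)
x = np.cos(2 * np.pi * (440 * t + 200 * t**2))    # linear chirp
# estimates rise from about 440 Hz, coarsely quantized by the grid
print(if_from_zero_crossings(x, fs)[:5])
```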

  12. Efficient error correction for next-generation sequencing of viral amplicons

    PubMed Central

    2012-01-01

    Background Next-generation sequencing allows the analysis of an unprecedented number of viral sequence variants from infected patients, presenting a novel opportunity for understanding virus evolution, drug resistance and immune escape. However, sequencing in bulk is error prone. Thus, the generated data require error identification and correction. Most error-correction methods to date are not optimized for amplicon analysis and assume that the error rate is randomly distributed. Recent quality assessment of amplicon sequences obtained using 454-sequencing showed that the error rate is strongly linked to the presence and size of homopolymers, position in the sequence and length of the amplicon. All these parameters are strongly sequence specific and should be incorporated into the calibration of error-correction algorithms designed for amplicon sequencing. Results In this paper, we present two new efficient error correction algorithms optimized for viral amplicons: (i) k-mer-based error correction (KEC) and (ii) empirical frequency threshold (ET). Both were compared to a previously published clustering algorithm (SHORAH), in order to evaluate their relative performance on 24 experimental datasets obtained by 454-sequencing of amplicons with known sequences. All three algorithms show similar accuracy in finding true haplotypes. However, KEC and ET were significantly more efficient than SHORAH in removing false haplotypes and estimating the frequency of true ones. Conclusions Both algorithms, KEC and ET, are highly suitable for rapid recovery of error-free haplotypes obtained by 454-sequencing of amplicons from heterogeneous viruses. The implementations of the algorithms and data sets used for their testing are available at: http://alan.cs.gsu.edu/NGS/?q=content/pyrosequencing-error-correction-algorithm PMID:22759430

  13. Efficient error correction for next-generation sequencing of viral amplicons.

    PubMed

    Skums, Pavel; Dimitrova, Zoya; Campo, David S; Vaughan, Gilberto; Rossi, Livia; Forbi, Joseph C; Yokosawa, Jonny; Zelikovsky, Alex; Khudyakov, Yury

    2012-06-25

    Next-generation sequencing allows the analysis of an unprecedented number of viral sequence variants from infected patients, presenting a novel opportunity for understanding virus evolution, drug resistance and immune escape. However, sequencing in bulk is error prone. Thus, the generated data require error identification and correction. Most error-correction methods to date are not optimized for amplicon analysis and assume that the error rate is randomly distributed. Recent quality assessment of amplicon sequences obtained using 454-sequencing showed that the error rate is strongly linked to the presence and size of homopolymers, position in the sequence and length of the amplicon. All these parameters are strongly sequence specific and should be incorporated into the calibration of error-correction algorithms designed for amplicon sequencing. In this paper, we present two new efficient error correction algorithms optimized for viral amplicons: (i) k-mer-based error correction (KEC) and (ii) empirical frequency threshold (ET). Both were compared to a previously published clustering algorithm (SHORAH), in order to evaluate their relative performance on 24 experimental datasets obtained by 454-sequencing of amplicons with known sequences. All three algorithms show similar accuracy in finding true haplotypes. However, KEC and ET were significantly more efficient than SHORAH in removing false haplotypes and estimating the frequency of true ones. Both algorithms, KEC and ET, are highly suitable for rapid recovery of error-free haplotypes obtained by 454-sequencing of amplicons from heterogeneous viruses. The implementations of the algorithms and data sets used for their testing are available at: http://alan.cs.gsu.edu/NGS/?q=content/pyrosequencing-error-correction-algorithm.
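
    The frequency-threshold idea can be illustrated in a few lines: haplotypes whose read frequency falls below an empirical cutoff are treated as sequencing artifacts and the remaining frequencies are renormalized. A minimal sketch in the spirit of ET, not the published algorithm (the threshold value is an assumption):

```python
from collections import Counter

def frequency_threshold_filter(reads, min_freq=0.005):
    """Discard candidate haplotypes below an empirical frequency
    threshold, then re-estimate frequencies of the survivors."""
    counts = Counter(reads)
    total = sum(counts.values())
    kept = {h: c for h, c in counts.items() if c / total >= min_freq}
    kept_total = sum(kept.values())
    return {h: c / kept_total for h, c in kept.items()}

reads = ["ACGT"] * 950 + ["ACGA"] * 46 + ["AGGT"] * 4   # 0.4% artifact
print(frequency_threshold_filter(reads))   # 'AGGT' removed, rest renormalized
```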

  14. Forensic dental age estimation by measuring root dentin translucency area using a new digital technique.

    PubMed

    Acharya, Ashith B

    2014-05-01

    Dentin translucency measurement is an easy yet relatively accurate approach to postmortem age estimation. Translucency area represents a two-dimensional change and may reflect age variations better than length. Manually measuring area is challenging, and this paper proposes a new digital method using commercially available computer hardware and software. Area and length were measured on 100 tooth sections (age range, 19-82 years) of 250 μm thickness. Regression analysis revealed a lower standard error of estimate and higher correlation with age for length than for area (R = 0.62 vs. 0.60). However, a test of the regression formulae on a control sample (n = 33, 21-85 years) showed a smaller mean absolute difference (8.3 vs. 8.8 years) and a greater frequency of smaller errors (73% vs. 67% of age estimates ≤ ± 10 years) for area than for length. These results suggest that digital area measurements of root translucency may be used as an alternative to length in forensic age estimation. © 2014 American Academy of Forensic Sciences.

  15. RF signal detection by a tunable optoelectronic oscillator based on a PS-FBG.

    PubMed

    Shao, Yuchen; Han, Xiuyou; Li, Ming; Zhao, Mingshan

    2018-03-15

    Low-power radio frequency (RF) signal detection is highly desirable for many applications, ranging from wireless communication to radar systems. A tunable optoelectronic oscillator (OEO) based on a phase-shifted fiber Bragg grating for detecting low-power RF signals is proposed and experimentally demonstrated. When the frequency of the input RF signal matches a potential oscillation mode of the OEO, it is detected and amplified. The frequency of the RF signal under detection can be estimated simultaneously by scanning the wavelength of the laser source. RF signals from 1.5 to 5 GHz as low as -91 dBm are detected with a gain of about 10 dB, and the frequency is estimated with an error of ±100 MHz. The performance of the OEO system for detecting an RF signal with different modulation rates is also investigated.

  16. Methods for estimating the magnitude and frequency of peak streamflows for unregulated streams in Oklahoma

    USGS Publications Warehouse

    Lewis, Jason M.

    2010-01-01

    Peak-streamflow regression equations were determined for estimating flows with exceedance probabilities from 50 to 0.2 percent for the state of Oklahoma. These regression equations incorporate basin characteristics to estimate peak-streamflow magnitude and frequency throughout the state by use of a generalized least squares regression analysis. The most statistically significant independent variables required to estimate peak-streamflow magnitude and frequency for unregulated streams in Oklahoma are contributing drainage area, mean-annual precipitation, and main-channel slope. The regression equations are applicable for watershed basins with drainage areas less than 2,510 square miles that are not affected by regulation. The resulting regression equations had a standard model error ranging from 31 to 46 percent. Annual-maximum peak flows observed at 231 streamflow-gaging stations through water year 2008 were used for the regression analysis. Gage peak-streamflow estimates were used from previous work unless 2008 gaging-station data were available, in which case new peak-streamflow estimates were calculated. The U.S. Geological Survey StreamStats web application was used to obtain the independent variables required for the peak-streamflow regression equations. Limitations on the use of the regression equations and the reliability of regression estimates for natural unregulated streams are described. Log-Pearson Type III analysis information, basin and climate characteristics, and the peak-streamflow frequency estimates for the 231 gaging stations in and near Oklahoma are listed. Methodologies are presented to estimate peak streamflows at ungaged sites by using estimates from gaging stations on unregulated streams. For ungaged sites on urban streams and streams regulated by small floodwater retarding structures, an adjustment of the statewide regression equations for natural unregulated streams can be used to estimate peak-streamflow magnitude and frequency.
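
    Regional equations of this kind typically take a log-linear (power-law) form in the basin characteristics. A sketch of how such an equation is evaluated; the coefficients below are placeholders, not the published Oklahoma values:

```python
def peak_flow_cfs(a, b, c, d, drainage_mi2, precip_in, slope_ft_mi):
    """Evaluate a regional peak-flow regression of the typical
    log-linear form Q = 10**a * A**b * P**c * S**d."""
    return 10**a * drainage_mi2**b * precip_in**c * slope_ft_mi**d

# Hypothetical 1-percent-chance (100-year) flood estimate
q100 = peak_flow_cfs(a=1.2, b=0.55, c=0.85, d=0.20,
                     drainage_mi2=150.0, precip_in=36.0, slope_ft_mi=12.0)
print(f"Q100 ~ {q100:,.0f} ft3/s")
```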

  17. Effects of RF pulse profile and intra-voxel phase dispersion on MR fingerprinting with balanced SSFP readout.

    PubMed

    Chiu, Su-Chin; Lin, Te-Ming; Lin, Jyh-Miin; Chung, Hsiao-Wen; Ko, Cheng-Wen; Büchert, Martin; Bock, Michael

    2017-09-01

    To investigate possible errors in T1 and T2 quantification via MR fingerprinting with balanced steady-state free precession readout in the presence of intra-voxel phase dispersion and RF pulse profile imperfections, using computer simulations based on Bloch equations. A pulse sequence with TR changing in a Perlin noise pattern and a nearly sinusoidal pattern of flip angle following an initial 180-degree inversion pulse was employed. Gaussian distributions of off-resonance frequency were assumed for intra-voxel phase dispersion effects. Slice profiles of sinc-shaped RF pulses were computed to investigate flip angle profile influences. Following identification of the best fit between the acquisition signals and those established in the dictionary based on known parameters, estimation errors were reported. In vivo experiments were performed at 3T to examine the results. Slight intra-voxel phase dispersion with standard deviations from 1 to 3Hz resulted in prominent T2 under-estimations, particularly at large T2 values. T1 and off-resonance frequencies were relatively unaffected. Slice profile imperfections led to under-estimations of T1, which became greater as regional off-resonance frequencies increased, but could be corrected by including slice profile effects in the dictionary. Results from brain imaging experiments in vivo agreed with the simulation results qualitatively. MR fingerprinting using balanced SSFP readout in the presence of intra-voxel phase dispersion and imperfect slice profile leads to inaccuracies in quantitative estimations of the relaxation times. Copyright © 2017 Elsevier Inc. All rights reserved.

  18. Finite element simulation of light transfer in turbid media under structured illumination

    USDA-ARS?s Scientific Manuscript database

    Spatial-frequency domain (SFD) imaging technique allows to estimate the optical properties of biological tissues in a wide field of view. The technique is, however, prone to error in measurement because the two crucial assumptions used for deriving the analytical solution to diffusion approximation ...

  19. Influence of tree size, taxonomy, and edaphic conditions on heart rot in mixed-dipterocarp Bornean rainforests: implications for aboveground biomass estimates

    NASA Astrophysics Data System (ADS)

    Heineman, K. D.; Russo, S. E.; Baillie, I. C.; Mamit, J. D.; Chai, P. P.-K.; Chai, L.; Hindley, E. W.; Lau, B.-T.; Tan, S.; Ashton, P. S.

    2015-05-01

    Fungal decay of heartwood creates hollows and areas of reduced wood density within the stems of living trees, known as heart rot. Although heart rot is acknowledged as a source of error in forest aboveground biomass estimates, there are few datasets available to evaluate the environmental controls on heart rot infection and severity in tropical forests. Using legacy and recent data from drilled, felled, and cored stems in mixed dipterocarp forests in Sarawak, Malaysian Borneo, we quantified the frequency and severity of heart rot, and used generalized linear mixed effects models to characterize the association of heart rot with tree size, wood density, taxonomy, and edaphic conditions. Heart rot was detected in 55% of felled stems > 30 cm DBH, while the detection frequency was lower for stems of the same size evaluated by non-destructive drilling (45%) and coring (23%) methods. Heart rot severity, defined as the percent stem volume lost in infected stems, ranged widely, from 0.1 to 82.8%. Tree taxonomy explained the greatest proportion of variance in heart rot frequency and severity among the fixed and random effects evaluated in our models. Heart rot frequency, but not severity, increased sharply with tree diameter, ranging from 56% infection across all datasets in stems > 50 cm DBH to 11% in trees 10-30 cm DBH. The frequency and severity of heart rot increased significantly in soils with low pH and cation concentrations in topsoil, and heart rot was more common in tree species associated with dystrophic sandy soils than with nutrient-rich clays. When scaled to forest stands, the percent of stem biomass lost to heart rot varied significantly with soil properties, and we estimate that 7% of the forest biomass is in some stage of heart rot decay. This study demonstrates not only that heart rot is a significant source of error in forest carbon estimates, but also that it strongly covaries with soil resources, underscoring the need to account for edaphic variation in estimating carbon storage in tropical forests.

  20. Improving Estimates Of Phase Parameters When Amplitude Fluctuates

    NASA Technical Reports Server (NTRS)

    Vilnrotter, V. A.; Brown, D. H.; Hurd, W. J.

    1989-01-01

    Adaptive inverse filter applied to incoming signal and noise. Time-varying inverse-filtering technique developed to improve digital estimate of phase of received carrier signal. Intended for use where received signal fluctuates in amplitude as well as in phase and signal tracked by digital phase-locked loop that keeps its phase error much smaller than 1 radian. Useful in navigation systems, reception of time- and frequency-standard signals, and possibly spread-spectrum communication systems.

  1. Explicit approximations to estimate the perturbative diffusivity in the presence of convectivity and damping. I. Semi-infinite slab approximations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berkel, M. van; Fellow of the Japan Society for the Promotion of Science; FOM Institute DIFFER-Dutch Institute for Fundamental Energy Research, Association EURATOM- FOM, Trilateral Euregio Cluster, PO Box 1207, 3430 BE Nieuwegein

    2014-11-15

    In this paper, a number of new approximations are introduced to estimate the perturbative diffusivity (χ), convectivity (V), and damping (τ) in cylindrical geometry. For this purpose, the harmonic components of heat waves induced by localized deposition of modulated power are used. The approximations are based on semi-infinite slab approximations of the heat equation. The main result is the approximation of χ under the influence of V and τ based on the phase of two harmonics, making the estimate less sensitive to calibration errors. To understand why the slab approximations can estimate χ well in cylindrical geometry, the relationships between heat transport models in slab and cylindrical geometry are studied. In addition, the relationship between amplitude and phase with respect to their derivatives, used to estimate χ, is discussed. The results are presented in terms of the relative error for the different derived approximations for different values of frequency, transport coefficients, and dimensionless radius. The approximations show a significant region in which χ, V, and τ can be estimated well, but also regions in which the error is large. Also, it is shown that some compensation is necessary to estimate V and τ in a cylindrical geometry. On the other hand, errors resulting from the simplified assumptions are also discussed, showing that estimating realistic values for V and τ based on infinite domains will be difficult in practice. This paper is the first part (Part I) of a series of three papers. In Part II and Part III, cylindrical approximations based directly on a semi-infinite cylindrical domain (outward propagating heat pulses) and inward propagating heat pulses in a cylindrical domain, respectively, will be treated.

  2. Notes on testing equality and interval estimation in Poisson frequency data under a three-treatment three-period crossover trial.

    PubMed

    Lui, Kung-Jong; Chang, Kuang-Chao

    2016-10-01

    When the frequency of event occurrences follows a Poisson distribution, we develop procedures for testing equality of treatments and interval estimators for the ratio of mean frequencies between treatments under a three-treatment three-period crossover design. Using Monte Carlo simulations, we evaluate the performance of these test procedures and interval estimators in various situations. We note that all test procedures developed here can perform well with respect to Type I error even when the number of patients per group is moderate. We further note that the two weighted-least-squares (WLS) test procedures derived here are generally preferable to the other two commonly used test procedures in contingency table analysis. We also demonstrate that both the interval estimators based on the WLS method and those based on the Mantel-Haenszel (MH) approach can perform well, and are essentially of equal precision with respect to average length. We use a double-blind randomized three-treatment three-period crossover trial comparing salbutamol and salmeterol with a placebo with respect to the number of exacerbations of asthma to illustrate the use of these test procedures and estimators. © The Author(s) 2014.

  3. Parallel Processing of Broad-Band PPM Signals

    NASA Technical Reports Server (NTRS)

    Gray, Andrew; Kang, Edward; Lay, Norman; Vilnrotter, Victor; Srinivasan, Meera; Lee, Clement

    2010-01-01

    A parallel-processing algorithm and a hardware architecture to implement the algorithm have been devised for time-slot synchronization in the reception of pulse-position-modulated (PPM) optical or radio signals. As in the cases of some prior algorithms and architectures for parallel, discrete-time, digital processing of signals other than PPM, an incoming broadband signal is divided into multiple parallel narrower-band signals by means of sub-sampling and filtering. The number of parallel streams is chosen so that the frequency content of the narrower-band signals is low enough to enable processing by relatively low-speed complementary metal oxide semiconductor (CMOS) electronic circuitry. The algorithm and architecture are intended to satisfy requirements for time-varying time-slot synchronization and post-detection filtering, with correction of timing errors independent of estimation of timing errors. They are also intended to afford flexibility for dynamic reconfiguration and upgrading. The architecture is implemented in a reconfigurable CMOS processor in the form of a field-programmable gate array. The algorithm and its hardware implementation incorporate three separate time-varying filter banks for three distinct functions: correction of sub-sample timing errors, post-detection filtering, and post-detection estimation of timing errors. The design of the filter bank for correction of timing errors, the method of estimating timing errors, and the design of a feedback-loop filter are governed by a host of parameters, the most critical one, with regard to processing very broadband signals with CMOS hardware, being the number of parallel streams (equivalently, the rate-reduction parameter).

  4. Methods for determining magnitude and frequency of floods in California, based on data through water year 2006

    USGS Publications Warehouse

    Gotvald, Anthony J.; Barth, Nancy A.; Veilleux, Andrea G.; Parrett, Charles

    2012-01-01

    Methods for estimating the magnitude and frequency of floods in California that are not substantially affected by regulation or diversions have been updated. Annual peak-flow data through water year 2006 were analyzed for 771 streamflow-gaging stations (streamgages) in California having 10 or more years of data. Flood-frequency estimates were computed for the streamgages by using the expected moments algorithm to fit a Pearson Type III distribution to logarithms of annual peak flows for each streamgage. Low-outlier and historic information were incorporated into the flood-frequency analysis, and a generalized Grubbs-Beck test was used to detect multiple potentially influential low outliers. Special methods for fitting the distribution were developed for streamgages in the desert region in southeastern California. Additionally, basin characteristics for the streamgages were computed by using a geographical information system. Regional regression analysis, using generalized least squares regression, was used to develop a set of equations for estimating flows with 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probabilities for ungaged basins in California that are outside of the southeastern desert region. Flood-frequency estimates and basin characteristics for 630 streamgages were combined to form the final database used in the regional regression analysis. Five hydrologic regions were developed for the area of California outside of the desert region. The final regional regression equations are functions of drainage area and mean annual precipitation for four of the five regions. In one region, the Sierra Nevada region, the final equations are functions of drainage area, mean basin elevation, and mean annual precipitation. Average standard errors of prediction for the regression equations in all five regions range from 42.7 to 161.9 percent. For the desert region of California, an analysis of 33 streamgages was used to develop regional estimates of all three parameters (mean, standard deviation, and skew) of the log-Pearson Type III distribution. The regional estimates were then used to develop a set of equations for estimating flows with 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probabilities for ungaged basins. The final regional regression equations are functions of drainage area. Average standard errors of prediction for these regression equations range from 214.2 to 856.2 percent. Annual peak-flow data through water year 2006 were analyzed for eight streamgages in California having 10 or more years of data considered to be affected by urbanization. Flood-frequency estimates were computed for the urban streamgages by fitting a Pearson Type III distribution to logarithms of annual peak flows for each streamgage. Regression analysis could not be used to develop flood-frequency estimation equations for urban streams because of the limited number of sites. Flood-frequency estimates for the eight urban sites were graphically compared to flood-frequency estimates for 630 non-urban sites. The regression equations developed from this study will be incorporated into the U.S. Geological Survey (USGS) StreamStats program. The StreamStats program is a Web-based application that provides streamflow statistics and basin characteristics for USGS streamgages and ungaged sites of interest. StreamStats can also compute basin characteristics and provide estimates of streamflow statistics for ungaged sites when users select the location of a site along any stream in California.

  5. Food composition database development for between country comparisons.

    PubMed

    Merchant, Anwar T; Dehghan, Mahshid

    2006-01-19

    Nutritional assessment by diet analysis is a two-step process consisting of evaluation of food consumption and conversion of food into nutrient intake by using a food composition database, which lists the mean nutritional values for a given food portion. Most reports in the literature focus on minimizing errors in the estimation of food consumption, but the selection of a specific food composition table used in nutrient estimation is also a source of error. We are conducting a large prospective study internationally and need to compare diet, assessed by food frequency questionnaires, in a comparable manner between different countries. We have prepared a multi-country food composition database for nutrient estimation in all the countries participating in our study. The nutrient database is primarily based on the USDA food composition database, modified appropriately with reference to local food composition tables, and supplemented with recipes of locally eaten mixed dishes. By doing so we have ensured that the units of measurement, the method of selection of foods for testing, and the assays used for nutrient estimation are consistent and as current as possible, while taking into account some local variations. Using this common metric for nutrient assessment will reduce differential errors in nutrient estimation and improve the validity of between-country comparisons.

  6. Suspected time errors along the satellite laser ranging network and impact on the reference frame

    NASA Astrophysics Data System (ADS)

    Belli, Alexandre; Exertier, Pierre; Lemoine, Frank; Zelensky, Nikita

    2017-04-01

    Systematic errors in the laser ranging technologies must be addressed in view of the GGOS objective to maintain a network with an accuracy of 1 mm and a stability of 0.1 mm per year for the station ground coordinates in the ITRF. Range and time biases account for a major part of these systematic errors and are difficult to detect. Concerning the range bias, analysts and working groups estimate its value from LAGEOS-1 & 2 observations (c.f. Appleby et al. 2016). Time errors, on the other hand, are often neglected (they are presumed to be < 100 ns) and remain difficult to estimate at this level from observations of geodetic satellite passes and precise orbit determination (i.e., LAGEOS). The Time Transfer by Laser Link (T2L2) experiment on-board Jason-2 is a unique opportunity to determine, globally and independently, the synchronization of all laser stations. Because of the low altitude of Jason-2, we computed the time transfer in non-common view from the Grasse primary station to all other SLR stations. To synchronize the whole network, we integrated an Ultra Stable Oscillator (USO) frequency model in order to account for the frequency instabilities caused by the space environment. The integration provides a model which becomes an "on-orbit" time realization that can be connected to each of the SLR stations by the ground-to-space laser link. We estimated time biases per station, with a repeatability of 3-4 ns, for 25 stations which observe T2L2 regularly. We investigated the effect on LAGEOS and Starlette orbits, and we discuss the impact of time errors on the station coordinates. We show that the effects on the global POD are negligible (< 1 mm) but are at the level of 4-6 mm for the coordinates. We conclude by proposing to introduce time errors in future analyses (IDS and ILRS), which would lead to the computation of improved reference frame solutions.
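
    The "on-orbit" time realization is obtained by integrating a modeled fractional-frequency history of the oscillator into a time offset, x(t) = x0 + integral of y(t) dt. A minimal numerical sketch with an illustrative frequency model, not the Jason-2 USO coefficients:

```python
import numpy as np

def clock_offset_ns(t_s, frac_freq, offset0_ns=0.0):
    """Integrate a fractional-frequency history y(t) into a clock
    offset (ns) by the cumulative trapezoidal rule."""
    dt = np.diff(t_s)
    increments = 0.5 * (frac_freq[1:] + frac_freq[:-1]) * dt * 1e9
    return offset0_ns + np.concatenate(([0.0], np.cumsum(increments)))

t = np.arange(0.0, 600.0, 60.0)                        # epochs (s)
y = 1e-11 * (1 + 0.1 * np.sin(2 * np.pi * t / 6000))   # toy USO model
print(clock_offset_ns(t, y)[-1])                       # accumulated ns
```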

  7. Peak-flow characteristics of Wyoming streams

    USGS Publications Warehouse

    Miller, Kirk A.

    2003-01-01

    Peak-flow characteristics for unregulated streams in Wyoming are described in this report. Frequency relations for annual peak flows through water year 2000 at 364 streamflow-gaging stations in and near Wyoming were evaluated and revised or updated as needed. Analyses of historical floods, temporal trends, and generalized skew were included in the evaluation. Physical and climatic basin characteristics were determined for each gaging station using a geographic information system. Gaging stations with similar peak-flow and basin characteristics were grouped into six hydrologic regions. Regional statistical relations between peak-flow and basin characteristics were explored using multiple-regression techniques. Generalized least squares regression equations for estimating magnitudes of annual peak flows with selected recurrence intervals from 1.5 to 500 years were developed for each region. Average standard errors of estimate range from 34 to 131 percent. Average standard errors of prediction range from 35 to 135 percent. Several statistics for evaluating and comparing the errors in these estimates are described. Limitations of the equations are described. Methods for applying the regional equations for various circumstances are listed and examples are given.

  8. Accounting for genotype uncertainty in the estimation of allele frequencies in autopolyploids.

    PubMed

    Blischak, Paul D; Kubatko, Laura S; Wolfe, Andrea D

    2016-05-01

    Despite the increasing opportunity to collect large-scale data sets for population genomic analyses, high-throughput sequencing has seen little application to the study of populations of polyploids. This is due in large part to problems associated with determining allele copy number in the genotypes of polyploid individuals (allelic dosage uncertainty, ADU), which complicates the calculation of important quantities such as allele frequencies. Here, we describe a statistical model to estimate biallelic SNP frequencies in a population of autopolyploids using high-throughput sequencing data in the form of read counts. We bridge the gap from data collection (using restriction-enzyme-based techniques, e.g. GBS or RADseq) to allele frequency estimation in a unified inferential framework, using a hierarchical Bayesian model to sum over genotype uncertainty. Simulated data sets were generated under various conditions for tetraploid, hexaploid and octoploid populations to evaluate the model's performance and to help guide the collection of empirical data. We also provide an implementation of our model in the R package polyfreqs and demonstrate its use with two example analyses that investigate (i) levels of expected and observed heterozygosity and (ii) model adequacy. Our simulations show that the number of individuals sampled from a population has a greater impact on estimation error than sequencing coverage. The example analyses also show that our model and software can be used to make inferences beyond the estimation of allele frequencies for autopolyploids by providing assessments of model adequacy and estimates of heterozygosity. © 2015 John Wiley & Sons Ltd.
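
    The core of such a model is a binomial read-count likelihood combined with a prior over allelic dosage. A minimal sketch of the per-individual genotype posterior, a simplification of the hierarchical model rather than the polyfreqs implementation; the error rate and the Hardy-Weinberg-like prior are assumptions:

```python
import numpy as np
from scipy.stats import binom

def dosage_posterior(ref_reads, total_reads, ploidy, allele_freq, err=0.01):
    """Posterior over reference-allele dosage g in {0..ploidy} for one
    individual: binomial read counts with success probability g/ploidy
    (adjusted for sequencing error), binomial dosage prior."""
    g = np.arange(ploidy + 1)
    p_read = g / ploidy * (1 - err) + (1 - g / ploidy) * err
    lik = binom.pmf(ref_reads, total_reads, p_read)
    prior = binom.pmf(g, ploidy, allele_freq)
    post = lik * prior
    return post / post.sum()

# Tetraploid with 7 of 10 reads carrying the reference allele
print(dosage_posterior(ref_reads=7, total_reads=10, ploidy=4, allele_freq=0.5))
```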

  9. Accuracy of a continuous glucose monitoring system in dogs and cats with diabetic ketoacidosis.

    PubMed

    Reineke, Erica L; Fletcher, Daniel J; King, Lesley G; Drobatz, Kenneth J

    2010-06-01

    (1) To determine the ability of a continuous interstitial glucose monitoring system (CGMS) to accurately estimate blood glucose (BG) in dogs and cats with diabetic ketoacidosis. (2) To determine the effect of perfusion, hydration, body condition score, severity of ketosis, and frequency of calibration on the accuracy of the CGMS. Prospective study. University Teaching Hospital. Thirteen dogs and 11 cats diagnosed with diabetic ketoacidosis were enrolled in the study within 24 hours of presentation. Once BG dropped below 22.2 mmol/L (400 mg/dL), a sterile flexible glucose sensor was placed aseptically in the interstitial space and attached to the continuous glucose monitoring device for estimation of the interstitial glucose every 5 minutes. BG measurements were taken with a portable BG meter every 2-4 hours at the discretion of the primary clinician and compared with CGMS glucose measurements. The CGMS estimates of BG and BG measured on the glucometer were strongly associated regardless of calibration frequency (calibration every 8 h: r=0.86, P<0.001; calibration every 12 h: r=0.85, P<0.001). Evaluation of these data using both the Clarke and Consensus error grids showed that 96.7% and 99% of the CGMS readings, respectively, were deemed clinically acceptable (Zone A and B errors). Interpatient variability in the accuracy of the CGMS glucose measurements was found but was not associated with body condition, perfusion, or degree of ketosis. A weak association between the hydration status of the patient as assessed with the visual analog scale and the absolute percent error (Spearman's rank correlation, rho=-0.079, 95% CI=-0.15 to -0.01, P=0.03) was found, with the device being more accurate in better hydrated patients. The CGMS provides clinically accurate estimates of BG in patients with diabetic ketoacidosis.

  10. Analysis of Relationships between the Level of Errors in Leg and Monofin Movement and Stroke Parameters in Monofin Swimming

    PubMed Central

    Rejman, Marek

    2013-01-01

    The aim of this study was to analyze the error structure in propulsive movements with regard to its influence on monofin swimming speed. Random cycles performed by six swimmers were filmed during a progressive test (900 m). An objective method to estimate errors committed in the area of angular displacement of the feet and monofin segments was employed. The parameters were compared with a previously described model. Mutual dependences between the level of errors, stroke frequency, stroke length, and amplitude in relation to swimming velocity were analyzed. The results showed that proper foot movements and the avoidance of errors arising at the distal part of the fin ensure the progression of swimming speed. An individual stroke parameter distribution that consists of optimally increasing stroke frequency to the maximal level that enables the stabilization of stroke length leads to the minimization of errors. Identification of key elements in the stroke structure based on the analysis of errors committed should aid in improving monofin swimming technique. Key points: The monofin swimming technique was evaluated through the prism of objectively defined errors committed by the swimmers. The dependences between the level of errors, stroke rate, stroke length, and amplitude in relation to swimming velocity were analyzed. Optimally increasing stroke rate to the maximal level that enables the stabilization of stroke length leads to the minimization of errors. Proper foot movement and the avoidance of errors arising at the distal part of the fin provide for the progression of swimming speed. The key elements for improving monofin swimming technique, based on the analysis of errors committed, were designated. PMID:24149742

  11. Experimental demonstration of a 16.9 Gb/s link for coherent OFDM PON robust to frequency offset and timing error

    NASA Astrophysics Data System (ADS)

    Ma, Qian; Liu, Yu; Xiang, Yuanjiang

    2018-07-01

    Owing to its merits of flexible bandwidth allocation and robustness to fiber transmission impairments, coherent optical orthogonal frequency division multiplexing (CO-OFDM) technology has attracted considerable attention for passive optical networks (PON). However, a CO-OFDM system is vulnerable to frequency offsets between the modulated optical signals and the optical local oscillators (OLO). This is particularly serious for low-cost PONs where low-cost lasers are used. Thus, it is of great interest to develop efficient algorithms for frequency synchronization in CO-OFDM systems. Frequency synchronization in CO-OFDM systems is usually performed by detecting the phase shift in the time domain; in such schemes there is a trade-off between estimation accuracy and range. Considering that the integer frequency offset (IFO) contributes the major part of the frequency offset, a more efficient method to estimate the IFO is in demand. By detecting the IFO-induced circular channel rotation (CCR), the frequency offset can be estimated directly after the fast Fourier transform (FFT). In this paper, a circular acquisition offset frequency and timing synchronization (CAO-FTS) scheme is proposed. A specially designed frequency-domain pseudo-noise (PN) sequence is used for CCR detection and timing synchronization. Full-range frequency offset compensation and non-plateau timing synchronization are experimentally demonstrated in the presence of fiber dispersion. Based on CAO-FTS, a 16.9 Gb/s CO-OFDM signal is successfully delivered over a span of 80-km single mode fiber.
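
    An IFO manifests itself after the FFT as a circular shift of the subcarriers, so it can be found by circularly correlating the received frequency-domain pilot against the known PN sequence. A schematic sketch of this detection idea, not the paper's full CAO-FTS scheme:

```python
import numpy as np

def estimate_ifo(rx_pilot, pn, max_shift):
    """Integer frequency offset as the circular shift of the received
    frequency-domain pilot that best matches the known PN sequence."""
    best_shift, best_metric = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        metric = abs(np.vdot(np.roll(rx_pilot, -s), pn))
        if metric > best_metric:
            best_shift, best_metric = s, metric
    return best_shift

rng = np.random.default_rng(1)
pn = rng.choice([-1.0, 1.0], size=64)              # frequency-domain PN pilot
rx = np.roll(pn, 5) + 0.1 * rng.normal(size=64)    # IFO of +5 subcarriers
print(estimate_ifo(rx, pn, max_shift=8))           # -> 5
```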

  12. Fast and accurate read-out of interferometric optical fiber sensors

    NASA Astrophysics Data System (ADS)

    Bartholsen, Ingebrigt; Hjelme, Dag R.

    2016-03-01

    We present results from an evaluation of phase and frequency estimation algorithms for read-out instrumentation of interferometric sensors. Tests interrogating a micro Fabry-Perot sensor, made of a semi-spherical stimuli-responsive hydrogel immobilized on a single-mode fiber end face, show that an iterative quadrature demodulation technique (IQDT) implemented on a 32-bit microcontroller unit can achieve an absolute length accuracy of ±50 nm and a length-change accuracy of ±3 nm using an 80 nm SLED source and a grating spectrometer for interrogation. The mean absolute error for the frequency estimator is a factor of 3 larger than the theoretical lower bound for a maximum likelihood estimator; the corresponding factor for the phase estimator is 1.3. The computation time for the IQDT algorithm is reduced by a factor of 1000 compared to the full QDT for the same accuracy requirement.

  13. Channel estimation based on quantized MMP for FDD massive MIMO downlink

    NASA Astrophysics Data System (ADS)

    Guo, Yao-ting; Wang, Bing-he; Qu, Yi; Cai, Hua-jie

    2016-10-01

    In this paper, we consider channel estimation for massive MIMO systems operating in frequency division duplexing mode. By exploiting the sparsity of propagation paths in the massive MIMO channel, we develop a compressed sensing (CS) based channel estimator which can reduce the pilot overhead. Compared with conventional least squares (LS) and linear minimum mean square error (LMMSE) estimation, the proposed algorithm, based on quantized multipath matching pursuit (MMP), reduces the pilot overhead and performs better than other CS algorithms. The simulation results demonstrate the advantage of the proposed algorithm over various existing methods including the LS, LMMSE, CoSaMP and conventional MMP estimators.
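
    The abstract does not spell out the quantized-MMP steps, so the sketch below shows the baseline greedy CS recovery it builds on: plain orthogonal matching pursuit (OMP) for a pilot-based sparse channel model y = A h + n, with an assumed toy geometry. MMP extends this search by keeping multiple candidate supports.

```python
import numpy as np

def omp(A, y, sparsity):
    """Plain orthogonal matching pursuit for y ≈ A @ h with a K-sparse h.
    MMP extends this greedy search by tracking several candidate supports;
    only the baseline CS recovery step is shown here."""
    residual, support = y.copy(), []
    for _ in range(sparsity):
        # pick the column most correlated with the current residual
        idx = int(np.argmax(np.abs(A.conj().T @ residual)))
        support.append(idx)
        As = A[:, support]
        # least-squares fit on the chosen support, then update the residual
        coef, *_ = np.linalg.lstsq(As, y, rcond=None)
        residual = y - As @ coef
    h = np.zeros(A.shape[1], dtype=complex)
    h[support] = coef
    return h

# toy usage: 64 pilots, 256 delay taps, 4 active propagation paths
rng = np.random.default_rng(1)
A = (rng.standard_normal((64, 256)) + 1j * rng.standard_normal((64, 256))) / np.sqrt(64)
h_true = np.zeros(256, dtype=complex)
h_true[[5, 40, 90, 200]] = rng.standard_normal(4)
y = A @ h_true
print(np.linalg.norm(omp(A, y, 4) - h_true) < 1e-6)  # -> True
```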

  14. A Bayesian model for estimating multi-state disease progression.

    PubMed

    Shen, Shiwen; Han, Simon X; Petousis, Panayiotis; Weiss, Robert E; Meng, Frank; Bui, Alex A T; Hsu, William

    2017-02-01

    A growing number of individuals who are considered at high risk of cancer are now routinely undergoing population screening. However, noted harms such as radiation exposure, overdiagnosis, and overtreatment underscore the need for better temporal models that predict who should be screened and at what frequency. The mean sojourn time (MST), the average duration for which a tumor can be detected by imaging but shows no observable clinical symptoms, is a critical variable for formulating screening policy. Estimation of the MST has long been studied using continuous Markov models (CMM) with maximum likelihood estimation (MLE). However, many traditional methods assume no observation error in the imaging data, an unlikely assumption that can bias the estimation of the MST. In addition, the MLE may not be stably estimated when data are sparse. Addressing these shortcomings, we present a probabilistic modeling approach for periodic cancer screening data. We first model the cancer state transition using a three-state CMM, while simultaneously considering observation error. We then jointly estimate the MST and the observation error within a Bayesian framework. We also consider the inclusion of covariates to estimate individualized rates of disease progression. Our approach is demonstrated on participants who underwent chest x-ray screening in the National Lung Screening Trial (NLST) and validated using posterior predictive p-values and Pearson's chi-square test. Our model demonstrates more accurate and sensible estimates of MST in comparison to MLE. Copyright © 2016 Elsevier Ltd. All rights reserved.
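
    A minimal sketch of the modeling idea, under assumed (hypothetical) rates and misclassification probabilities: a three-state continuous-time Markov chain whose transition probabilities come from the matrix exponential, combined with an observation-error matrix in a forward-algorithm likelihood. The paper's Bayesian estimation of these quantities is not reproduced here.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical three-state progression model: 0 = healthy, 1 = preclinical
# (detectable by imaging but asymptomatic; its mean dwell time is the MST),
# 2 = clinical. Intensities below are placeholders, not NLST estimates.
lam1, lam2 = 0.05, 0.5            # assumed transition intensities (per year)
Q = np.array([[-lam1, lam1, 0.0],
              [0.0, -lam2, lam2],
              [0.0, 0.0, 0.0]])   # clinical state is absorbing
mst = 1.0 / lam2                  # mean sojourn time in the preclinical state

# Misclassification matrix E[i, j] = P(observe j | true state i), an assumed
# stand-in for the imaging observation error discussed in the abstract.
E = np.array([[0.95, 0.05, 0.0],
              [0.20, 0.80, 0.0],
              [0.0, 0.0, 1.0]])

def screening_likelihood(times, obs, pi0=np.array([1.0, 0.0, 0.0])):
    """Forward-algorithm likelihood of an observed screening history."""
    alpha = pi0 * E[:, obs[0]]
    for dt, o in zip(np.diff(times), obs[1:]):
        alpha = (alpha @ expm(Q * dt)) * E[:, o]
    return alpha.sum()

# annual screens: negative, negative, then a positive imaging read
print(mst, screening_likelihood(np.array([0.0, 1.0, 2.0]), [0, 0, 1]))
```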

  15. Error Estimation and Compensation in Reduced Dynamic Models of Large Space Structures

    DTIC Science & Technology

    1987-04-23

    [Abstract not available; the record consists of an OCR fragment of the report documentation page and list of figures. Recoverable details: contract F33615-84-C-3219, performing organization AFWAL/FIBRA; figure titles include "Modes of the Full Model", "Comparison of Various Reduced Models", "Driving Point Mobilities, Wing Tip (Z55)", "Driving Point Mobilities, Wing Root Trailing Edge (Z19)", "AMI Improvement", and "Frequency Domain Solution, Driving Point Mobilities, Wing Tip (Z55), RM1".]

  16. Estimation of Flood Discharges at Selected Recurrence Intervals for Streams in New Hampshire

    USGS Publications Warehouse

    Olson, Scott A.

    2009-01-01

    This report provides estimates of flood discharges at selected recurrence intervals for streamgages in and adjacent to New Hampshire and equations for estimating flood discharges at recurrence intervals of 2, 5, 10, 25, 50, 100, and 500 years for ungaged, unregulated, rural streams in New Hampshire. The equations were developed using generalized least-squares regression. Flood-frequency and drainage-basin characteristics from 117 streamgages were used in developing the equations. The drainage-basin characteristics used as explanatory variables in the regression equations include drainage area, mean April precipitation, percentage of wetland area, and main channel slope. The average standard errors of prediction for estimating the 2-, 5-, 10-, 25-, 50-, 100-, and 500-year recurrence interval flood discharges with these equations are 30.0, 30.8, 32.0, 34.2, 36.0, 38.1, and 43.4 percent, respectively. Flood discharges at selected recurrence intervals for selected streamgages were computed following the guidelines in Bulletin 17B of the U.S. Interagency Advisory Committee on Water Data. To determine the flood-discharge exceedance probabilities at streamgages in New Hampshire, a new generalized skew coefficient map covering the State was developed. The standard error of the data on the new map is 0.298. To improve estimates of flood discharges at selected recurrence intervals for 20 streamgages with short-term records (10 to 15 years), record extension using the two-station comparison technique was applied. The two-station comparison method uses data from a streamgage with a long-term record to adjust the frequency characteristics at a streamgage with a short-term record. A technique for adjusting a flood-discharge frequency curve computed from a streamgage record with results from the regression equations is described in this report. Also, a technique is described for estimating flood discharge at a selected recurrence interval for an ungaged site upstream or downstream from a streamgage using a drainage-area adjustment. The final regression equations and the flood-discharge frequency data used in this study will be available in StreamStats, a web application providing automated regression-equation solutions for user-selected sites on streams.
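
    The drainage-area adjustment mentioned above amounts to scaling a gaged flood quantile by a power of the drainage-area ratio. A minimal sketch, with a hypothetical exponent rather than the report's actual procedure:

```python
def drainage_area_adjustment(q_gage, a_gage, a_ungaged, b=0.7):
    """Transfer a flood quantile from a streamgage to a nearby ungaged site
    on the same stream via a drainage-area ratio. The exponent b here is a
    hypothetical placeholder; the report defines its own procedure."""
    return q_gage * (a_ungaged / a_gage) ** b

# e.g. a 100-year discharge of 5000 ft^3/s at a gage draining 100 mi^2,
# transferred to an ungaged site draining 60 mi^2
print(drainage_area_adjustment(5000.0, 100.0, 60.0))
```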

  17. Microwave-photonics direction finding system for interception of low probability of intercept radio frequency signals

    NASA Astrophysics Data System (ADS)

    Pace, Phillip Eric; Tan, Chew Kung; Ong, Chee K.

    2018-02-01

    Direction finding (DF) systems are fundamental electronic support measures for electronic warfare. A number of DF techniques have been developed over the years; however, these systems are limited in bandwidth and resolution and suffer from complex designs for frequency downconversion. The design of a photonic DF technique for the detection and direction finding of low probability of intercept (LPI) signals is investigated. Key advantages of this design include a small baseline, wide bandwidth, high resolution, and minimal space, weight, and power requirements. A robust postprocessing algorithm that utilizes a minimum Euclidean distance detector provides consistent and accurate estimation of the angle of arrival (AoA) for a wide range of LPI waveforms. Experimental tests using frequency modulation continuous wave (FMCW) and P4 modulation signals were conducted in an anechoic chamber to verify the system design. Test results showed that the photonic DF system is capable of measuring the AoA of the LPI signals with 1-deg resolution over a 180-deg field of view. For an FMCW signal, the AoA was determined with an RMS error of 0.29 deg at 1-deg resolution; for a P4 coded signal, the RMS error in estimating the AoA was 0.32 deg at 1-deg resolution.
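
    A generic sketch of the minimum Euclidean distance detector named in the abstract: the measured feature vector is compared against a stored calibration table and the angle of the closest entry is returned (toy data; the 8-element feature vectors are hypothetical):

```python
import numpy as np

def aoa_min_distance(measurement, calibration_table, angles):
    """Return the angle of arrival whose stored calibration vector is
    closest (in Euclidean distance) to the measured feature vector."""
    d = np.linalg.norm(calibration_table - measurement, axis=1)
    return angles[int(np.argmin(d))]

# toy usage: calibration vectors at 1-deg steps over a 180-deg field of view
rng = np.random.default_rng(2)
angles = np.arange(-90.0, 90.0, 1.0)
table = rng.standard_normal((angles.size, 8))     # 8-element feature vectors
truth = 37                                        # row index of the true AoA
meas = table[truth] + 0.05 * rng.standard_normal(8)
print(aoa_min_distance(meas, table, angles))      # ≈ angles[37]
```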

  18. Aircraft Fault Detection Using Real-Time Frequency Response Estimation

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.

    2016-01-01

    A real-time method for estimating time-varying aircraft frequency responses from input and output measurements was demonstrated. The Bat-4 subscale airplane was used with NASA Langley Research Center's AirSTAR unmanned aerial flight test facility to conduct flight tests and collect data for dynamic modeling. Orthogonal phase-optimized multisine inputs, summed with pilot stick and pedal inputs, were used to excite the responses. The aircraft was tested in its normal configuration and with emulated failures, which included a stuck left ruddervator and an increased command path latency. No prior knowledge of a dynamic model was used or available for the estimation. The longitudinal short period dynamics were investigated in this work. Time-varying frequency responses and stability margins were tracked well using a 20 second sliding window of data, as compared to a post-flight analysis using output error parameter estimation and a low-order equivalent system model. This method could be used in a real-time fault detection system, or for other applications of dynamic modeling such as real-time verification of stability margins during envelope expansion tests.
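
    A simplified sketch of the underlying computation, assuming the multisine excitation frequencies are known: the frequency response over one data window is the ratio of output to input Fourier coefficients at those frequencies (toy first-order system; not the AirSTAR code):

```python
import numpy as np

def frequency_response(u, y, freqs, fs):
    """Estimate H(jw) = Y(w)/U(w) over one data window by evaluating
    discrete Fourier transforms at the multisine excitation frequencies
    only (a simplified sketch of the sliding-window approach)."""
    t = np.arange(len(u)) / fs
    H = []
    for f in freqs:
        e = np.exp(-2j * np.pi * f * t)     # DFT basis at this frequency
        H.append(np.dot(y, e) / np.dot(u, e))
    return np.array(H)

# toy usage: a first-order lag excited by a two-tone multisine
fs, T = 200.0, 20.0
t = np.arange(0, T, 1 / fs)
u = np.sin(2 * np.pi * 0.5 * t) + np.sin(2 * np.pi * 2.0 * t)
tau = 0.5
y = np.zeros_like(u)
for k in range(1, len(t)):                  # Euler simulation of tau*y' + y = u
    y[k] = y[k - 1] + (u[k - 1] - y[k - 1]) / tau / fs
H = frequency_response(u, y, [0.5, 2.0], fs)
print(np.abs(H), np.angle(H, deg=True))     # compare with 1/(1 + jw*tau)
```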

  19. Investigation of spectral analysis techniques for randomly sampled velocimetry data

    NASA Technical Reports Server (NTRS)

    Sree, Dave

    1993-01-01

    It is well known that laser velocimetry (LV) generates individual-realization velocity data that are randomly or unevenly sampled in time. Spectral analysis of such data to obtain the turbulence spectra, and hence turbulence scale information, requires special techniques. The 'slotting' technique of Mayo et al., also described by Roberts and Ajmani, and the 'direct transform' method of Gaster and Roberts are well known in the LV community. The slotting technique is computationally faster than the direct transform method. There are practical limitations, however, as to how high in frequency an accurate estimate can be made for a given mean sampling rate. These high-frequency estimates are important in obtaining the microscale information of the turbulence structure. Previous studies found that reliable spectral estimates can be made up to about the mean sampling frequency (mean data rate) or less. If the data were evenly sampled, the frequency range would be half the sampling frequency (i.e., up to the Nyquist frequency); otherwise, aliasing problems would occur. The mean data rate and the sample size (total number of points) basically limit the frequency range. Also, there are large variabilities or errors associated with the high-frequency estimates from randomly sampled signals. Roberts and Ajmani proposed certain pre-filtering techniques to reduce these variabilities, but at the cost of the low-frequency estimates; the prefiltering acts as a high-pass filter. Further, Shapiro and Silverman showed theoretically that, for Poisson-sampled signals, it is possible to obtain alias-free spectral estimates far beyond the mean sampling frequency. But the question is, how far? During his tenure under the 1993 NASA-ASEE Summer Faculty Fellowship Program, the author found that the spectral estimates can be enhanced or improved up to about 4-5 times the mean sampling frequency by using a suitable prefiltering technique, although this increased bandwidth comes at the cost of the lower-frequency estimates. The studies further showed that large data sets of the order of 100,000 points or more, high data rates, and Poisson sampling are crucial for obtaining reliable spectral estimates from randomly sampled data, such as LV data. Some of the results of the current study are presented.
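
    A minimal sketch of the slotting technique described above: lagged products of the randomly sampled signal are accumulated into lag bins ('slots'), giving an autocovariance estimate from which a spectrum can be obtained by cosine transform (toy Poisson-sampled data):

```python
import numpy as np

def slotted_autocovariance(t, u, max_lag, dtau):
    """'Slotting' estimator for randomly sampled data: products u(ti)*u(tj)
    are accumulated into bins (slots) of width dtau according to the lag
    tj - ti; the binned averages approximate the autocovariance function."""
    nbins = int(max_lag / dtau)
    acc, cnt = np.zeros(nbins), np.zeros(nbins)
    u = u - u.mean()
    for i in range(len(t)):
        lags = t[i:] - t[i]
        keep = lags < max_lag
        bins = (lags[keep] / dtau).astype(int)
        np.add.at(acc, bins, u[i] * u[i:][keep])
        np.add.at(cnt, bins, 1)
    return acc / np.maximum(cnt, 1)

# toy usage: Poisson-sampled narrow-band signal (mean data rate 100 Hz)
rng = np.random.default_rng(3)
t = np.cumsum(rng.exponential(0.01, 20000))
u = np.sin(2 * np.pi * 5.0 * t) + 0.1 * rng.standard_normal(t.size)
R = slotted_autocovariance(t, u, max_lag=0.5, dtau=0.005)
# the power spectrum follows from a cosine transform of R (not shown)
print(R[:5])
```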

  20. Estimation of regression laws for ground motion parameters using as case of study the Amatrice earthquake

    NASA Astrophysics Data System (ADS)

    Tiberi, Lara; Costa, Giovanni

    2017-04-01

    The possibility of directly associating damages with ground motion parameters is always a great challenge, in particular for civil protection agencies. Indeed, a ground motion parameter, estimated in near real time, that can express the damage occurring after an earthquake is fundamental for arranging first assistance after an event. The aim of this work is to contribute to the estimation of the ground motion parameter that best describes the observed intensity, immediately after an event. This can be done by calculating, for each ground motion parameter estimated in near real time, a regression law which correlates that parameter to the observed macroseismic intensity. This estimation is done by collecting high-quality accelerometric data in the near field and filtering them at different frequency steps. The regression laws are calculated using two different techniques: the non-linear least-squares (NLLS) Marquardt-Levenberg algorithm and orthogonal distance regression (ODR). The limits of the first methodology are the need for initial values of the parameters a and b (set to 1.0 in this study), and the constraint that the independent variable must be known with greater accuracy than the dependent variable. The second algorithm, in contrast, is based on estimating the errors perpendicular to the fitted line rather than just vertically: the vertical errors are the errors in the 'y' direction only, so only for the dependent variable, whereas the perpendicular errors take into account the errors in both variables, dependent and independent. This also makes it possible to invert the relation directly, so the a and b values can be used to express the ground motion parameters as a function of I. For each law, the standard deviation and the R2 value are estimated in order to test the quality and reliability of the relation found. The Amatrice earthquake of 24 August 2016 is used as a case study to test the goodness of the calculated regression laws.
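
    For the ODR part, SciPy's scipy.odr module implements orthogonal distance regression directly. A sketch on synthetic data (placeholder values, not the Amatrice data set), fitting I = a + b·log10(GMP) with errors assumed in both variables:

```python
import numpy as np
from scipy import odr

# Fit I = a + b * log10(gmp) by orthogonal distance regression, which
# accounts for errors in both the ground motion parameter and the
# intensity. The data below are synthetic placeholders.
def linear(beta, x):
    return beta[0] + beta[1] * x

rng = np.random.default_rng(4)
log_gmp = rng.uniform(-1, 2, 80)                       # e.g. log10(PGV)
intensity = 3.0 + 1.5 * log_gmp + 0.3 * rng.standard_normal(80)
log_gmp_obs = log_gmp + 0.1 * rng.standard_normal(80)  # noisy independent variable

data = odr.RealData(log_gmp_obs, intensity, sx=0.1, sy=0.3)
out = odr.ODR(data, odr.Model(linear), beta0=[1.0, 1.0]).run()
a, b = out.beta
print(a, b, out.sd_beta)  # a ≈ 3.0, b ≈ 1.5, with standard errors
```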

  1. The discrete Laplace exponential family and estimation of Y-STR haplotype frequencies.

    PubMed

    Andersen, Mikkel Meyer; Eriksen, Poul Svante; Morling, Niels

    2013-07-21

    Estimating haplotype frequencies is important in, e.g., forensic genetics, where the frequencies are needed to calculate the likelihood ratio for the evidential weight of a DNA profile found at a crime scene. Estimation is naturally based on a population model, motivating the investigation of the Fisher-Wright model of evolution for haploid lineage DNA markers. An exponential family (a class of probability distributions that is well understood in probability theory, such that inference is easily made using existing software) called the 'discrete Laplace distribution' is described. We illustrate how well the discrete Laplace distribution approximates a more complicated distribution that arises from the well-known population genetic Fisher-Wright model of evolution under a single-step mutation process. We show how the discrete Laplace distribution can be used to estimate haplotype frequencies for haploid lineage DNA markers (such as Y-chromosomal short tandem repeats), which in turn can be used to assess the evidential weight of a DNA profile found at a crime scene. This was done by making inference in a mixture of multivariate, marginally independent, discrete Laplace distributions, using the EM algorithm to estimate the probabilities of membership of a set of unobserved subpopulations. The discrete Laplace distribution can be used to estimate haplotype frequencies with lower prediction error than other existing estimators. Furthermore, the calculations can be performed on a normal computer. The method was implemented in the freely available open source software R, which is supported on Linux, MacOS and MS Windows. Copyright © 2013 Elsevier Ltd. All rights reserved.
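
    The discrete Laplace probability mass function is simple enough to state in a few lines: P(X = k) = ((1 - p)/(1 + p)) · p^|k| for integer k. A sketch of how a haplotype frequency could be computed from a fitted mixture, with illustrative (not estimated) centers, dispersions, and weights:

```python
import numpy as np

def discrete_laplace_pmf(k, p):
    """P(X = k) = (1 - p)/(1 + p) * p^|k| for integer k, 0 < p < 1."""
    return (1.0 - p) / (1.0 + p) * p ** np.abs(k)

def haplotype_frequency(haplotype, centers, ps, weights):
    """Mixture of multivariate, marginally independent discrete Laplace
    distributions: each subpopulation has a central haplotype and one
    dispersion parameter per locus. All values below are illustrative;
    the paper estimates them with the EM algorithm."""
    freq = 0.0
    for center, p_locus, w in zip(centers, ps, weights):
        k = np.asarray(haplotype) - center
        freq += w * np.prod(discrete_laplace_pmf(k, p_locus))
    return freq

# toy usage: two subpopulations, three Y-STR loci (repeat numbers)
centers = [np.array([14, 30, 21]), np.array([15, 29, 23])]
ps = [np.array([0.3, 0.4, 0.2]), np.array([0.25, 0.35, 0.3])]
print(haplotype_frequency([14, 30, 22], centers, ps, weights=[0.6, 0.4]))
```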

  2. Leak Detection and Location of Water Pipes Using Vibration Sensors and Modified ML Prefilter.

    PubMed

    Choi, Jihoon; Shin, Joonho; Song, Choonggeun; Han, Suyong; Park, Doo Il

    2017-09-13

    This paper proposes a new leak detection and location method based on vibration sensors and generalised cross-correlation techniques. Considering the estimation errors of the power spectral densities (PSDs) and the cross-spectral density (CSD), the proposed method employs a modified maximum-likelihood (ML) prefilter with a regularisation factor. We derive a theoretical variance of the time difference estimation error through summation in the discrete-frequency domain, and find the optimal regularisation factor that minimises the theoretical variance in practical water pipe channels. The proposed method is compared with conventional correlation-based techniques via numerical simulations using a water pipe channel model, and it is shown through field measurement that the proposed modified ML prefilter outperforms conventional prefilters for the generalised cross-correlation. In addition, we provide a formula to calculate the leak location using the time difference estimate when different types of pipes are connected.
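
    A sketch of the generalised cross-correlation delay estimate on which the method builds. The weighting used here is a simple regularised magnitude-normalising (PHAT-like) prefilter, standing in for the paper's modified ML prefilter, whose actual weight is built from the estimated PSDs and CSD with a variance-minimising regularisation factor:

```python
import numpy as np
from scipy.signal import csd

def gcc_time_delay(x, y, fs, eps=1e-2):
    """Generalised cross-correlation time-delay estimate with a regularised
    magnitude-normalising prefilter (PHAT-like). The paper's modified ML
    prefilter instead derives its weight from the estimated PSDs/CSD."""
    f, Gxy = csd(x, y, fs=fs, nperseg=1024, return_onesided=False)
    W = 1.0 / (np.abs(Gxy) + eps * np.abs(Gxy).max())   # regularised weight
    r = np.fft.ifft(W * Gxy).real
    k = int(np.argmax(r))
    lag = k if k < r.size // 2 else k - r.size          # wrap negative lags
    return lag / fs

# toy usage: leak noise reaching sensor B 25 samples after sensor A
rng = np.random.default_rng(5)
s = rng.standard_normal(100000)
a = s + 0.2 * rng.standard_normal(s.size)
b = np.roll(s, 25) + 0.2 * rng.standard_normal(s.size)
d = gcc_time_delay(a, b, fs=8000.0)
print(d * 8000.0)  # ≈ 25 samples; leak position follows from d and wave speed
```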

  3. Leak Detection and Location of Water Pipes Using Vibration Sensors and Modified ML Prefilter

    PubMed Central

    Shin, Joonho; Song, Choonggeun; Han, Suyong; Park, Doo Il

    2017-01-01

    This paper proposes a new leak detection and location method based on vibration sensors and generalised cross-correlation techniques. Considering the estimation errors of the power spectral densities (PSDs) and the cross-spectral density (CSD), the proposed method employs a modified maximum-likelihood (ML) prefilter with a regularisation factor. We derive a theoretical variance of the time difference estimation error through summation in the discrete-frequency domain, and find the optimal regularisation factor that minimises the theoretical variance in practical water pipe channels. The proposed method is compared with conventional correlation-based techniques via numerical simulations using a water pipe channel model, and it is shown through field measurement that the proposed modified ML prefilter outperforms conventional prefilters for the generalised cross-correlation. In addition, we provide a formula to calculate the leak location using the time difference estimate when different types of pipes are connected. PMID:28902154

  4. Correlation techniques to determine model form in robust nonlinear system realization/identification

    NASA Technical Reports Server (NTRS)

    Stry, Greselda I.; Mook, D. Joseph

    1991-01-01

    The fundamental challenge in the identification of nonlinear dynamic systems is determining the appropriate form of the model. A robust technique is presented which essentially eliminates this problem for many applications. The technique is based on the Minimum Model Error (MME) optimal estimation approach. A detailed literature review is included in which fundamental differences between the current approach and previous work are described. The most significant feature is the ability to identify nonlinear dynamic systems without prior assumptions regarding the form of the nonlinearities, in contrast to existing nonlinear identification approaches which usually require detailed assumptions about the nonlinearities. Model form is determined via statistical correlation of the MME optimal state estimates with the MME optimal model error estimates. The example illustrations indicate that the method is robust with respect to prior ignorance of the model, and with respect to measurement noise, measurement frequency, and measurement record length.

  5. Parameter Estimation of Actuators for Benchmark Active Control Technology (BACT) Wind Tunnel Model with Analysis of Wear and Aerodynamic Loading Effects

    NASA Technical Reports Server (NTRS)

    Waszak, Martin R.; Fung, Jimmy

    1998-01-01

    This report describes the development of transfer function models for the trailing-edge and upper and lower spoiler actuators of the Benchmark Active Control Technology (BACT) wind tunnel model for application to control system analysis and design. A simple nonlinear least-squares parameter estimation approach is applied to determine transfer function parameters from frequency response data. Unconstrained quasi-Newton minimization of weighted frequency response error was employed to estimate the transfer function parameters. An analysis of the behavior of the actuators over time to assess the effects of wear and aerodynamic load by using the transfer function models is also presented. The frequency responses indicate consistent actuator behavior throughout the wind tunnel test and only slight degradation in effectiveness due to aerodynamic hinge loading. The resulting actuator models have been used in design, analysis, and simulation of controllers for the BACT to successfully suppress flutter over a wide range of conditions.
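
    A minimal sketch of fitting transfer function parameters to frequency response data by nonlinear least squares, using an assumed second-order actuator model (the report's model structure and error weighting may differ):

```python
import numpy as np
from scipy.optimize import least_squares

def second_order(params, w):
    """H(jw) for a hypothetical second-order actuator model."""
    wn, zeta = params
    return wn**2 / ((1j * w) ** 2 + 2 * zeta * wn * (1j * w) + wn**2)

def fit_actuator(w, H_meas, p0=(50.0, 0.5)):
    """Nonlinear least-squares fit of model parameters to measured
    frequency response data (a sketch, not the BACT analysis code)."""
    def resid(p):
        e = second_order(p, w) - H_meas
        return np.concatenate([e.real, e.imag])   # stack the complex error
    return least_squares(resid, p0).x

# toy usage: synthetic FRF of an actuator with wn = 60 rad/s, zeta = 0.7
w = np.linspace(1, 200, 100)
H = second_order((60.0, 0.7), w)
H = H * (1 + 0.01 * np.random.default_rng(6).standard_normal(w.size))
print(fit_actuator(w, H))   # ≈ [60.0, 0.7]
```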

  6. Polymerase matters: non-proofreading enzymes inflate fungal community richness estimates by up to 15 %

    Treesearch

    Alena K. Oliver; Shawn P. Brown; Mac A. Callaham; Ari Jumpponen

    2015-01-01

    Rare taxa overwhelm metabarcoding data generated using next-generation sequencing (NGS). Low frequency Operational Taxonomic Units (OTUs) may be artifacts generated by PCR-amplification errors resulting from polymerase mispairing. We analyzed two Internal Transcribed Spacer 2 (ITS2) MiSeq libraries generated with proofreading (ThermoScientific Phusion

  7. Evaluation of stem rot in 339 Bornean tree species: implications of size, taxonomy, and soil-related variation for aboveground biomass estimates

    NASA Astrophysics Data System (ADS)

    Heineman, K. D.; Russo, S. E.; Baillie, I. C.; Mamit, J. D.; Chai, P. P.-K.; Chai, L.; Hindley, E. W.; Lau, B.-T.; Tan, S.; Ashton, P. S.

    2015-10-01

    Fungal decay of heart wood creates hollows and areas of reduced wood density within the stems of living trees known as stem rot. Although stem rot is acknowledged as a source of error in forest aboveground biomass (AGB) estimates, there are few data sets available to evaluate the controls over stem rot infection and severity in tropical forests. Using legacy and recent data from 3180 drilled, felled, and cored stems in mixed dipterocarp forests in Sarawak, Malaysian Borneo, we quantified the frequency and severity of stem rot in a total of 339 tree species, and related variation in stem rot with tree size, wood density, taxonomy, and species' soil association, as well as edaphic conditions. Predicted stem rot frequency for a 50 cm tree was 53 % of felled, 39 % of drilled, and 28 % of cored stems, demonstrating differences among methods in rot detection ability. The percent stem volume infected by rot, or stem rot severity, ranged widely among trees with stem rot infection (0.1-82.8 %) and averaged 9 % across all trees felled. Tree taxonomy explained the greatest proportion of variance in both stem rot frequency and severity among the predictors evaluated in our models. Stem rot frequency, but not severity, increased sharply with tree diameter, ranging from 13 % in trees 10-30 cm DBH to 54 % in stems ≥ 50 cm DBH across all data sets. The frequency of stem rot increased significantly in soils with low pH and cation concentrations in topsoil, and stem rot was more common in tree species associated with dystrophic sandy soils than with nutrient-rich clays. When scaled to forest stands, the maximum percent of stem biomass lost to stem rot varied significantly with soil properties, and we estimate that stem rot reduces total forest AGB estimates by up to 7 % relative to what would be predicted assuming all stems are composed strictly of intact wood. This study demonstrates not only that stem rot is likely to be a significant source of error in forest AGB estimation, but also that it strongly covaries with tree size, taxonomy, habitat association, and soil resources, underscoring the need to account for tree community composition and edaphic variation in estimating carbon storage in tropical forests.

  8. Novel approaches to estimating the turbulent kinetic energy dissipation rate from low- and moderate-resolution velocity fluctuation time series

    NASA Astrophysics Data System (ADS)

    Wacławczyk, Marta; Ma, Yong-Feng; Kopeć, Jacek M.; Malinowski, Szymon P.

    2017-11-01

    In this paper we propose two approaches to estimating the turbulent kinetic energy (TKE) dissipation rate, based on the zero-crossing method of Sreenivasan et al. (1983). The original formulation requires a fine resolution of the measured signal, down to the smallest dissipative scales. However, due to finite sampling frequency, as well as measurement errors, velocity time series obtained from airborne experiments are characterized by the presence of effective spectral cutoffs. In contrast to the original formulation, the new approaches are suitable for use with signals originating from airborne experiments. The suitability of the new approaches is tested using measurement data obtained during the Physics of Stratocumulus Top (POST) airborne research campaign as well as synthetic turbulence data. They appear useful and complementary to existing methods. We show that the number-of-crossings-based approaches respond differently to errors due to finite sampling and finite averaging than the classical power spectral method does. Hence, their application to short signals and small sampling frequencies is particularly interesting, as it can increase the robustness of turbulent kinetic energy dissipation rate retrieval.
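
    For reference, the original zero-crossing formulation that the new approaches relax can be sketched in a few lines: the crossing rate of the velocity fluctuation gives the Taylor microscale through Taylor's frozen-turbulence hypothesis, and the isotropic relation gives epsilon (synthetic data; assumes the dissipative scales are resolved):

```python
import numpy as np

def dissipation_zero_crossing(u, fs, U_mean, nu=1.5e-5):
    """Classical zero-crossing estimate of the TKE dissipation rate in the
    spirit of Sreenivasan et al. (1983): the crossing rate of the velocity
    fluctuation yields the Taylor microscale via Taylor's hypothesis, and
    epsilon follows from the isotropic relation eps = 15*nu*u'^2/lambda^2."""
    up = u - np.mean(u)
    crossings = np.sum(up[:-1] * up[1:] < 0)       # sign changes in the record
    rate = crossings * fs / len(u)                 # crossings per second
    lam = U_mean / (np.pi * rate)                  # Taylor microscale
    return 15.0 * nu * np.var(up) / lam**2

# toy usage with a synthetic band-limited fluctuation record
rng = np.random.default_rng(7)
fs, n = 1000.0, 200000
u = 10.0 + np.convolve(rng.standard_normal(n), np.ones(20) / 20, mode="same")
print(dissipation_zero_crossing(u, fs, U_mean=10.0))
```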

  9. EEG Characteristic Extraction Method of Listening Music and Objective Estimation Method Based on Latency Structure Model in Individual Characteristics

    NASA Astrophysics Data System (ADS)

    Ito, Shin-Ichi; Mitsukura, Yasue; Nakamura Miyamura, Hiroko; Saito, Takafumi; Fukumi, Minoru

    EEG signals are characterized by unique, individual characteristics, yet little research has taken these individual characteristics into account when analyzing EEG signals. Often the EEG has frequency components which can describe most of its significant characteristics, and the analyzed frequency components of the EEG differ in importance. We regard this difference in importance as reflecting the individual characteristics. In this paper, we propose a new method for extracting an EEG characteristic vector using a latency structure model in individual characteristics (LSMIC). The LSMIC is a latency structure model, based on the normal distribution, which includes a personal error term representing the individual characteristics. A real-coded genetic algorithm (RGA) is used to identify the personal error, which is an unknown parameter. Moreover, we propose an objective estimation method that plots the EEG characteristic vector in a visualization space. Finally, the performance of the proposed method is evaluated using a realistic simulation and applied to real EEG data. The results of our experiment show the effectiveness of the proposed method.

  10. Submillimeter, millimeter, and microwave spectral line catalogue, revision 3

    NASA Technical Reports Server (NTRS)

    Pickett, H. M.; Poynter, R. L.; Cohen, E. A.

    1992-01-01

    A computer-accessible catalog of submillimeter, millimeter, and microwave spectral lines in the frequency range between 0 and 10,000 GHz (i.e., wavelengths longer than 30 micrometers) is described. The catalog can be used for planning or as an aid in the identification and analysis of observed spectral lines. The information listed for each spectral line includes the frequency and its estimated error, the intensity, the lower state energy, and the quantum number assignment. This edition of the catalog has information on 206 atomic and molecular species and includes a total of 630,924 lines. The catalog was constructed using theoretical least-squares fits of published spectral lines to accepted molecular models. The associated predictions and their estimated errors are based upon the resultant fitted parameters and their covariances. Future versions of this catalog will add more atoms and molecules and update the present listings as new data appear. The catalog is available as a magnetic data tape recorded in card images, with one card image per spectral line, from the National Space Science Data Center, located at Goddard Space Flight Center.

  11. Evaluation of modulation transfer function of optical lens system by support vector regression methodologies - A comparative study

    NASA Astrophysics Data System (ADS)

    Petković, Dalibor; Shamshirband, Shahaboddin; Saboohi, Hadi; Ang, Tan Fong; Anuar, Nor Badrul; Rahman, Zulkanain Abdul; Pavlović, Nenad T.

    2014-07-01

    The quantitative assessment of image quality is an important consideration in any type of imaging system. The modulation transfer function (MTF) is a graphical description of the sharpness and contrast of an imaging system or of its individual components; it is also known as the spatial frequency response. The MTF curve has different meanings according to the corresponding frequency. The MTF of an optical system specifies the contrast transmitted by the system as a function of image size, and is determined by the inherent optical properties of the system. In this study, the polynomial and radial basis function (RBF) kernels are applied in Support Vector Regression (SVR) to estimate and predict the MTF value of an actual optical system from experimental tests. Instead of minimizing the observed training error, SVR_poly and SVR_rbf attempt to minimize the generalization error bound so as to achieve generalized performance. The experimental results show that an improvement in predictive accuracy and capability of generalization can be achieved by the SVR_rbf approach compared to the SVR_poly soft computing methodology.
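
    A sketch of the comparison using scikit-learn's SVR with polynomial and RBF kernels on a synthetic MTF-like curve (the study fits measured MTF data; the hyperparameters below are illustrative):

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error

# Compare polynomial- and RBF-kernel support vector regression on a
# synthetic MTF-like curve (contrast falling with spatial frequency).
rng = np.random.default_rng(8)
freq = np.linspace(0, 1, 120).reshape(-1, 1)        # normalised spatial frequency
mtf = np.exp(-3.0 * freq.ravel()) * np.cos(2.0 * freq.ravel()) ** 2
mtf_noisy = mtf + 0.02 * rng.standard_normal(mtf.size)

svr_poly = SVR(kernel="poly", degree=3, C=10.0, epsilon=0.01).fit(freq, mtf_noisy)
svr_rbf = SVR(kernel="rbf", gamma=5.0, C=10.0, epsilon=0.01).fit(freq, mtf_noisy)
for name, model in [("poly", svr_poly), ("rbf", svr_rbf)]:
    print(name, mean_squared_error(mtf, model.predict(freq)))
```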

  12. Simulating a transmon implementation of the surface code, Part I

    NASA Astrophysics Data System (ADS)

    Tarasinski, Brian; O'Brien, Thomas; Rol, Adriaan; Bultink, Niels; Dicarlo, Leo

    Current experimental efforts aim to realize Surface-17, a distance-3 surface-code logical qubit, using transmon qubits in a circuit QED architecture. Following experimental proposals for this device, and currently achieved fidelities on physical qubits, we define a detailed error model that takes experimentally relevant error sources into account, such as amplitude and phase damping, imperfect gate pulses, and coherent errors due to low-frequency flux noise. Using the GPU-accelerated software package 'quantumsim', we simulate the density matrix evolution of the logical qubit under this error model. Combining the simulation results with a minimum-weight matching decoder, we obtain predictions for the error rate of the resulting logical qubit when used as a quantum memory, and estimate the contribution of different error sources to the logical error budget. Research funded by the Foundation for Fundamental Research on Matter (FOM), the Netherlands Organization for Scientific Research (NWO/OCW), IARPA, an ERC Synergy Grant, the China Scholarship Council, and Intel Corporation.

  13. Errors and error rates in surgical pathology: an Association of Directors of Anatomic and Surgical Pathology survey.

    PubMed

    Cooper, Kumarasen

    2006-05-01

    This survey on errors in surgical pathology was commissioned by the Association of Directors of Anatomic and Surgical Pathology Council to explore broad perceptions and definitions of error in surgical pathology among its membership and to get some estimate of the perceived frequency of such errors. Overall, 41 laboratories were surveyed, with 34 responding to a confidential questionnaire. Six small, 13 medium, and 10 large laboratories (based on specimen volume), predominantly located in the United States, were surveyed (the remaining 5 laboratories did not provide this particular information). The survey questions, responses, and associated comments are presented. It is clear from this survey that we lack uniformity and consistency with respect to terminology, definitions, and the identification/documentation of errors in surgical pathology. An appeal is made for the urgent need to reach some consensus in order to address these discrepancies as we prepare to combat the issue of errors in surgical pathology.

  14. Estimating the magnitude and frequency of floods for streams in west-central Florida, 2001

    USGS Publications Warehouse

    Hammett, Kathleen M.; DelCharco, Michael J.

    2005-01-01

    Flood discharges were estimated for recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years for 94 streamflow stations in west-central Florida. Most of the stations are located within the 10,000 square-mile, 16-county area that forms the Southwest Florida Water Management District. All stations had at least 10 years of homogeneous record, and none have flood discharges that are significantly affected by regulation or urbanization. Guidelines established by the U.S. Water Resources Council in Bulletin 17B were used to estimate flood discharges from gaging station records. Multiple linear regression analysis was then used to mathematically relate estimates of flood discharge for selected recurrence intervals to explanatory basin characteristics. Contributing drainage area, channel slope, and the percent of total drainage area covered by lakes (percent lake area) were the basin characteristics that provided the best regression estimates. The study area was subdivided into four geographic regions to further refine the regression equations. Region 1 at the northern end of the study area includes large rivers that are characteristic of the rolling karst terrain of northern Florida. Only a small part of Region 1 lies within the boundaries of the Southwest Florida Water Management District. Contributing drainage area and percent lake area were the most statistically significant basin characteristics in Region 1; the prediction error of the regression equations varied with the recurrence interval and ranged from 57 to 69 percent. In the three other regions of the study area, contributing drainage area, channel slope, and percent lake area were the most statistically significant basin characteristics, and are the three characteristics that can be used to best estimate the magnitude and frequency of floods on most streams within the Southwest Florida Water Management District. The Withlacoochee River Basin dominates Region 2; the prediction error of the regression models in the region ranged from 65 to 68 percent. The basins that drain into the northern part of Tampa Bay and the upper reaches of the Peace River Basin are in Region 3, which had prediction errors ranging from 54 to 74 percent. Region 4, at the southern end of the study area, had prediction errors that ranged from 40 to 56 percent. Estimates of flood discharge become more accurate as longer periods of record are used for analyses; results of this study should be used in lieu of results from earlier U.S. Geological Survey studies of flood magnitude and frequency in west-central Florida. A comparison of current results with earlier studies indicates that use of a longer period of record with additional high-water events produces substantially higher flood-discharge estimates for many gaging stations. Another comparison indicates that the use of a computed, generalized skew in a previous study in 1979 tended to overestimate flood discharges.

  15. High-Precision Attitude Estimation Method of Star Sensors and Gyro Based on Complementary Filter and Unscented Kalman Filter

    NASA Astrophysics Data System (ADS)

    Guo, C.; Tong, X.; Liu, S.; Liu, S.; Lu, X.; Chen, P.; Jin, Y.; Xie, H.

    2017-07-01

    Determining the attitude of a satellite at the time of imaging, and then establishing the mathematical relationship between image points and ground points, is essential in high-resolution remote sensing image mapping. The star tracker is insensitive to high-frequency attitude variation because of measurement noise and satellite jitter, but the low-frequency attitude motion can be determined with high accuracy. The gyro, as a short-term reference for the satellite's attitude, is sensitive to high-frequency attitude change, but because of gyro drift and integration error, its attitude determination error increases with time. Based on the opposite noise frequency characteristics of the two kinds of attitude sensors, this paper proposes an on-orbit attitude estimation method for star sensors and gyro based on a Complementary Filter (CF) and an Unscented Kalman Filter (UKF). In this study, the principle and implementation of the proposed method are described. First, gyro attitude quaternions are acquired based on the attitude kinematics equation. An attitude information fusion method is then introduced, which applies high-pass filtering and low-pass filtering to the gyro and star tracker, respectively. Second, the attitude fusion data based on the CF are introduced as the observed values of the UKF system in the measurement update step. The accuracy and effectiveness of the method are validated using simulated sensor attitude data. The results indicate that the proposed method can suppress the gyro drift and the measurement noise of the attitude sensors, significantly improving the accuracy of the attitude determination in comparison with the simulated on-orbit attitude and with the attitude estimates of a UKF configured with the same simulation parameters.
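
    A single-axis scalar sketch of the complementary-filter step (the paper fuses quaternions and then feeds the result to a UKF): integrate the gyro rate for the high-frequency content and correct it at low frequency with the star-tracker attitude. The crossover frequency and noise levels below are assumptions.

```python
import numpy as np

def complementary_fuse(att_star, gyro_rate, fs, fc=0.05):
    """First-order complementary filter: integrate the gyro rate (accurate
    at high frequency) and correct it with the star-tracker attitude
    (accurate at low frequency). Scalar single-axis sketch only."""
    dt = 1.0 / fs
    alpha = 1.0 / (1.0 + 2 * np.pi * fc * dt)      # blend set by crossover fc
    fused = np.empty_like(att_star)
    fused[0] = att_star[0]
    for k in range(1, len(att_star)):
        fused[k] = (alpha * (fused[k - 1] + gyro_rate[k] * dt)
                    + (1 - alpha) * att_star[k])
    return fused

# toy usage: slow sinusoidal attitude, biased gyro, noisy star tracker
fs = 10.0
t = np.arange(0, 600, 1.0 / fs)
truth = 0.01 * np.sin(2 * np.pi * t / 300)
gyro = np.gradient(truth, t) + 1e-5                 # rate with constant drift
star = truth + 1e-4 * np.random.default_rng(9).standard_normal(t.size)
fused = complementary_fuse(star, gyro, fs)
print(np.std(fused - truth), np.std(star - truth))  # fused error is smaller
```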

  16. Effect of photogrammetric reading error on slope-frequency distributions. [obtained from Apollo 17 mission

    NASA Technical Reports Server (NTRS)

    Moore, H. J.; Wu, S. C.

    1973-01-01

    The effect of reading error on two hypothetical slope-frequency distributions and two slope-frequency distributions from actual lunar data is examined in order to ensure that these errors do not cause excessive overestimates of the algebraic standard deviations of the slope-frequency distributions. The errors introduced are insignificant when the reading error is small and the slope length is large. A method for correcting the errors in slope-frequency distributions is presented and applied to 11 distributions obtained from Apollo 15, 16, and 17 panoramic camera photographs and Apollo 16 metric camera photographs.

  17. Impact of calibration errors on CMB component separation using FastICA and ILC

    NASA Astrophysics Data System (ADS)

    Dick, Jason; Remazeilles, Mathieu; Delabrouille, Jacques

    2010-01-01

    The separation of emissions from different astrophysical processes is an important step towards the understanding of observational data. This topic of component separation is of particular importance in the observation of the relic cosmic microwave background (CMB) radiation, as performed by the Wilkinson Microwave Anisotropy Probe satellite and the more recent Planck mission, launched on 2009 May 14 from Kourou and currently taking data. When performing any sort of component separation, some assumptions about the components must be used. One assumption that many techniques typically use is knowledge of the frequency scaling of one or more components. This assumption may be broken in the presence of calibration errors. Here we compare, in the context of imperfect calibration, the recovery of a clean map of emission of the CMB from observational data with two methods: FastICA (which makes no assumption of the frequency scaling of the components) and an `Internal Linear Combination' (ILC), which explicitly extracts a component with a given frequency scaling. We find that even in the presence of small calibration errors (less than 1 per cent) with a Planck-style mission, the ILC method can lead to inaccurate CMB reconstruction in the high signal-to-noise ratio regime, because of partial cancellation of the CMB emission in the recovered map. While there is no indication that the failure of the ILC will translate to other foreground cleaning or component separation techniques, we propose that all methods which assume knowledge of the frequency scaling of one or more components be careful to estimate the effects of calibration errors.
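
    For reference, the standard ILC weight computation is compact enough to sketch: the weights minimise the variance of the combined map subject to unit response to the assumed CMB frequency scaling a, which is exactly the quantity that calibration errors perturb (toy multi-channel data):

```python
import numpy as np

def ilc_weights(cov, a):
    """Internal Linear Combination: weights w minimise the variance of
    w @ maps subject to w @ a = 1, where a is the assumed frequency
    scaling of the CMB across channels (a = 1 in thermodynamic units).
    Calibration errors perturb a, which can partially cancel the CMB."""
    ci_a = np.linalg.solve(cov, a)
    return ci_a / (a @ ci_a)

# toy usage: 5 channels, CMB + one foreground template + noise
rng = np.random.default_rng(10)
npix, nchan = 50000, 5
cmb = rng.standard_normal(npix)
template = rng.standard_normal(npix)
fg_scaling = np.array([3.0, 2.0, 1.2, 0.7, 0.4])    # e.g. a falling SED
maps = (np.outer(np.ones(nchan), cmb) + np.outer(fg_scaling, template)
        + 0.1 * rng.standard_normal((nchan, npix)))
w = ilc_weights(np.cov(maps), np.ones(nchan))
cleaned = w @ maps
print(np.std(cleaned - cmb))   # residual foreground plus noise
```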

  18. Validating and calibrating the Nintendo Wii balance board to derive reliable center of pressure measures.

    PubMed

    Leach, Julia M; Mancini, Martina; Peterka, Robert J; Hayes, Tamara L; Horak, Fay B

    2014-09-29

    The Nintendo Wii balance board (WBB) has generated significant interest in its application as a postural control measurement device in both the clinical and (basic, clinical, and rehabilitation) research domains. Although the WBB has been proposed as an alternative to the "gold standard" laboratory-grade force plate, additional research is necessary before the WBB can be considered a valid and reliable center of pressure (CoP) measurement device. In this study, we used the WBB and a laboratory-grade AMTI force plate (AFP) to simultaneously measure the CoP displacement of a controlled dynamic load, which has not been done before. A one-dimensional inverted pendulum was displaced at several different displacement angles and load heights to simulate a variety of postural sway amplitudes and frequencies (<1 Hz). Twelve WBBs were tested to address the issue of inter-device variability. There was a significant effect of sway amplitude, frequency, and direction on the WBB's CoP measurement error, with an increase in error as both sway amplitude and frequency increased and a significantly greater error in the mediolateral (ML) (compared to the anteroposterior (AP)) sway direction. There was no difference in error across the 12 WBB's, supporting low inter-device variability. A linear calibration procedure was then implemented to correct the WBB's CoP signals and reduce measurement error. There was a significant effect of calibration on the WBB's CoP signal accuracy, with a significant reduction in CoP measurement error (quantified by root-mean-squared error) from 2-6 mm (before calibration) to 0.5-2 mm (after calibration). WBB-based CoP signal calibration also significantly reduced the percent error in derived (time-domain) CoP sway measures, from -10.5% (before calibration) to -0.05% (after calibration) (percent errors averaged across all sway measures and in both sway directions). In this study, we characterized the WBB's CoP measurement error under controlled, dynamic conditions and implemented a linear calibration procedure for WBB CoP signals that is recommended to reduce CoP measurement error and provide more reliable estimates of time-domain CoP measures. Despite our promising results, additional work is necessary to understand how our findings translate to the clinical and rehabilitation research domains. Once the WBB's CoP measurement error is fully characterized in human postural sway (which differs from our simulated postural sway in both amplitude and frequency content), it may be used to measure CoP displacement in situations where lower accuracy and precision is acceptable.

  19. Validating and Calibrating the Nintendo Wii Balance Board to Derive Reliable Center of Pressure Measures

    PubMed Central

    Leach, Julia M.; Mancini, Martina; Peterka, Robert J.; Hayes, Tamara L.; Horak, Fay B.

    2014-01-01

    The Nintendo Wii balance board (WBB) has generated significant interest in its application as a postural control measurement device in both the clinical and (basic, clinical, and rehabilitation) research domains. Although the WBB has been proposed as an alternative to the “gold standard” laboratory-grade force plate, additional research is necessary before the WBB can be considered a valid and reliable center of pressure (CoP) measurement device. In this study, we used the WBB and a laboratory-grade AMTI force plate (AFP) to simultaneously measure the CoP displacement of a controlled dynamic load, which has not been done before. A one-dimensional inverted pendulum was displaced at several different displacement angles and load heights to simulate a variety of postural sway amplitudes and frequencies (<1 Hz). Twelve WBBs were tested to address the issue of inter-device variability. There was a significant effect of sway amplitude, frequency, and direction on the WBB's CoP measurement error, with an increase in error as both sway amplitude and frequency increased and a significantly greater error in the mediolateral (ML) (compared to the anteroposterior (AP)) sway direction. There was no difference in error across the 12 WBB's, supporting low inter-device variability. A linear calibration procedure was then implemented to correct the WBB's CoP signals and reduce measurement error. There was a significant effect of calibration on the WBB's CoP signal accuracy, with a significant reduction in CoP measurement error (quantified by root-mean-squared error) from 2–6 mm (before calibration) to 0.5–2 mm (after calibration). WBB-based CoP signal calibration also significantly reduced the percent error in derived (time-domain) CoP sway measures, from −10.5% (before calibration) to −0.05% (after calibration) (percent errors averaged across all sway measures and in both sway directions). In this study, we characterized the WBB's CoP measurement error under controlled, dynamic conditions and implemented a linear calibration procedure for WBB CoP signals that is recommended to reduce CoP measurement error and provide more reliable estimates of time-domain CoP measures. Despite our promising results, additional work is necessary to understand how our findings translate to the clinical and rehabilitation research domains. Once the WBB's CoP measurement error is fully characterized in human postural sway (which differs from our simulated postural sway in both amplitude and frequency content), it may be used to measure CoP displacement in situations where lower accuracy and precision is acceptable. PMID:25268919

  20. Note: Demodulation of spectral signal modulated by optical chopper with unstable modulation frequency.

    PubMed

    Zhang, Shengzhao; Li, Gang; Wang, Jiexi; Wang, Donggen; Han, Ying; Cao, Hui; Lin, Ling; Diao, Chunhong

    2017-10-01

    When an optical chopper is used to modulate the light source, the rotating speed of the wheel may vary with time and consequently cause jitter of the modulation frequency. The amplitude calculated from the modulated signal is distorted when these frequency fluctuations occur. To calculate the amplitude of the modulated light flux precisely, we propose a method that estimates the range of the frequency fluctuation in the measurement of the spectrum and then extracts the amplitude from the sum of the power of the signal over the selected frequency range. Experiments were designed to test the feasibility of the proposed method, and the results showed a lower root-mean-square error than the conventional approach.

  1. Application of the Hartmann–Tran profile to precise experimental data sets of 12C 2H 2

    DOE PAGES

    Forthomme, D.; Cich, M. J.; Twagirayezu, S.; ...

    2015-06-25

    Self- and nitrogen-broadened line shape data for the Pe(11) line of the ν₁ + ν₃ band of acetylene, recorded using a frequency comb-stabilized laser spectrometer, have been analyzed using the Hartmann–Tran profile (HTP) line shape model in a multispectrum fitting. In total, the data included measurements recorded at temperatures between 125 K and 296 K and at pressures between 4 and 760 Torr. New, sub-Doppler, frequency comb-referenced measurements of the positions of multiple underlying hot band lines have also been made. These underlying lines significantly affect the Pe(11) line profile at temperatures above 240 K, and poorly known frequencies previously introduced errors into the line shape analyses. Thus, the behavior of the HTP model was compared to the quadratic speed-dependent Voigt profile (QSDVP) expressed in the frequency and time domains. A parameter uncertainty analysis was carried out using a Monte Carlo method based on the estimated pressure, transmittance and frequency measurement errors. From the analyses, the Pe(11) line strength was estimated to be 1.2014(50) × 10⁻²⁰ cm·molecule⁻¹ at 296 K, with the standard deviation in parentheses. For analyzing these data, we found that a reduced form of the HTP, equivalent to the QSDVP, was most appropriate because the additional parameters included in the full HTP were not well determined. As a supplement to this work, expressions for analytic derivatives and a lineshape fitting code written in Matlab for the HTP are available.

  2. Application of the Hartmann-Tran profile to precise experimental data sets of 12C2H2

    NASA Astrophysics Data System (ADS)

    Forthomme, D.; Cich, M. J.; Twagirayezu, S.; Hall, G. E.; Sears, T. J.

    2015-11-01

    Self- and nitrogen-broadened line shape data for the Pe(11) line of the ν1 + ν3 band of acetylene, recorded using a frequency comb-stabilized laser spectrometer, have been analyzed using the Hartmann-Tran profile (HTP) line shape model in a multispectrum fitting. In total, the data included measurements recorded at temperatures between 125 K and 296 K and at pressures between 4 and 760 Torr. New, sub-Doppler, frequency comb-referenced measurements of the positions of multiple underlying hot band lines have also been made. These underlying lines significantly affect the Pe(11) line profile at temperatures above 240 K, and poorly known frequencies previously introduced errors into the line shape analyses. The behavior of the HTP model was compared to the quadratic speed-dependent Voigt profile (QSDVP) expressed in the frequency and time domains. A parameter uncertainty analysis was carried out using a Monte Carlo method based on the estimated pressure, transmittance and frequency measurement errors. From the analyses, the Pe(11) line strength was estimated to be 1.2014(50) × 10⁻²⁰ cm·molecule⁻¹ at 296 K, with the standard deviation in parentheses. For analyzing these data, we found that a reduced form of the HTP, equivalent to the QSDVP, was most appropriate because the additional parameters included in the full HTP were not well determined. As a supplement to this work, expressions for analytic derivatives and a lineshape fitting code written in Matlab for the HTP are available.

  3. Removing damped sinusoidal vibrations in adaptive optics systems using a DFT-based estimation method

    NASA Astrophysics Data System (ADS)

    Kania, Dariusz

    2017-06-01

    The problem of vibration rejection in adaptive optics systems is still present in the literature. These undesirable signals emerge from shaking of the system structure, the tracking process, etc., and they are usually damped sinusoidal signals. There are some mechanical solutions to reduce these signals, but they are not very effective. Among software solutions, adaptive methods are very popular. An AVC (Adaptive Vibration Cancellation) method has been presented and developed in recent years. The method is based on the estimation of three vibration parameters: values of frequency, amplitude and phase are essential to produce and adjust a proper signal to reduce or eliminate the vibration signals. This paper presents a fast (below 10 ms) and accurate method for estimating the frequency, amplitude and phase of a multifrequency signal that can be used in the AVC method to increase AO system performance. The method's accuracy depends on several parameters: CiR - the number of signal periods in a measurement window, N - the number of samples in the FFT procedure, H - the time window order, SNR, THD, b - the number of A/D converter bits in a real-time system, γ - the damping ratio of the tested signal, and φ - the phase of the tested signal. Systematic errors increase when N, CiR and H decrease and when γ increases. The value of the systematic error for γ = 0.1%, CiR = 1.1 and N = 32 is approximately 10^-4 Hz/Hz. This paper focuses on systematic errors and on the effect of the signal phase and of the values of γ on the results.
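
    A plain illustration of DFT-based estimation of a sinusoid's frequency, amplitude and phase, using a Hann window and parabolic interpolation around the peak bin; this is a generic estimator of the kind whose systematic errors the paper analyses, not the author's exact method:

```python
import numpy as np

def estimate_sine(x, fs):
    """Estimate frequency, amplitude, and phase of the dominant sinusoid
    from a windowed FFT with parabolic interpolation around the peak bin.
    Amplitude and phase are bin-centred approximations."""
    n = len(x)
    w = np.hanning(n)
    X = np.fft.rfft(x * w)
    k = int(np.argmax(np.abs(X[1:-1]))) + 1          # skip DC and Nyquist
    # parabolic interpolation on log magnitude for the fractional bin offset
    a, b, c = np.log(np.abs(X[k - 1:k + 2]))
    delta = 0.5 * (a - c) / (a - 2 * b + c)
    freq = (k + delta) * fs / n
    amp = 2.0 * np.abs(X[k]) / np.sum(w)             # approximate
    phase = np.angle(X[k])                           # approximate
    return freq, amp, phase

# toy usage: 123.37 Hz tone in light noise
fs, n = 1000.0, 4096
t = np.arange(n) / fs
x = (0.8 * np.sin(2 * np.pi * 123.37 * t + 0.5)
     + 0.01 * np.random.default_rng(11).standard_normal(n))
print(estimate_sine(x, fs))   # frequency ≈ 123.37 Hz
```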

  4. Reprocessing the GRACE-derived gravity field time series based on data-driven method for ocean tide alias error mitigation

    NASA Astrophysics Data System (ADS)

    Liu, Wei; Sneeuw, Nico; Jiang, Weiping

    2017-04-01

    The GRACE mission has contributed greatly to temporal gravity field monitoring in the past few years. However, ocean tides cause notable alias errors for single-pair spaceborne gravimetry missions like GRACE in two ways. First, undersampling along the satellite orbit aliases high-frequency tidal signals into the gravity signal. Second, the ocean tide models used for de-aliasing in the gravity field retrieval carry errors, which alias directly into the recovered gravity field. The GRACE satellites fly in a non-repeat orbit, which precludes alias-error spectral estimation based on a repeat period. Moreover, the gravity field recovery is conducted at non-strictly monthly intervals and has occasional gaps, which results in an unevenly sampled time series. In view of these two aspects, we investigate a data-driven method to mitigate the ocean tide alias error in a post-processing mode.
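
    The abstract does not detail the data-driven method; as one generic diagnostic for an unevenly sampled series of this kind, a Lomb-Scargle periodogram can expose spurious periodicities such as the well-known ~161-day S2 tide alias in GRACE solutions (synthetic series below):

```python
import numpy as np
from scipy.signal import lombscargle

# Synthetic, unevenly sampled "monthly solution" series containing an
# injected 161-day alias signal plus noise; the real analysis would use
# coefficient time series from the GRACE gravity field solutions.
rng = np.random.default_rng(13)
t = np.sort(rng.uniform(0, 10 * 365.25, 110))          # uneven epochs, days
alias_period = 161.0
y = 0.5 * np.sin(2 * np.pi * t / alias_period) + 0.2 * rng.standard_normal(t.size)

periods = np.linspace(60, 400, 2000)
omega = 2 * np.pi / periods                            # angular frequencies
power = lombscargle(t, y - y.mean(), omega, normalize=True)
print(periods[np.argmax(power)])                       # ≈ 161 days
```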

  5. Reduction in specimen labeling errors after implementation of a positive patient identification system in phlebotomy.

    PubMed

    Morrison, Aileen P; Tanasijevic, Milenko J; Goonan, Ellen M; Lobo, Margaret M; Bates, Michael M; Lipsitz, Stuart R; Bates, David W; Melanson, Stacy E F

    2010-06-01

    Ensuring accurate patient identification is central to preventing medical errors, but it can be challenging. We implemented a bar code-based positive patient identification system for use in inpatient phlebotomy. A before-after design was used to evaluate the impact of the identification system on the frequency of mislabeled and unlabeled samples reported in our laboratory. Labeling errors fell from 5.45 in 10,000 before implementation to 3.2 in 10,000 afterward (P = .0013). An estimated 108 mislabeling events were prevented by the identification system in 1 year. Furthermore, a workflow step requiring manual preprinting of labels, which was accompanied by potential labeling errors in about one quarter of blood "draws," was removed as a result of the new system. After implementation, a higher percentage of patients reported having their wristband checked before phlebotomy. Bar code technology significantly reduced the rate of specimen identification errors.

  6. Motion prediction in MRI-guided radiotherapy based on interleaved orthogonal cine-MRI

    NASA Astrophysics Data System (ADS)

    Seregni, M.; Paganelli, C.; Lee, D.; Greer, P. B.; Baroni, G.; Keall, P. J.; Riboldi, M.

    2016-01-01

    In-room cine-MRI guidance can provide non-invasive target localization during radiotherapy treatment. However, in order to cope with finite imaging frequency and system latencies between target localization and dose delivery, tumour motion prediction is required. This work proposes a framework for motion prediction dedicated to cine-MRI guidance, aiming at quantifying the geometric uncertainties introduced by this process for both tumour tracking and beam gating. The tumour position, identified through scale invariant features detected in cine-MRI slices, is estimated at high-frequency (25 Hz) using three independent predictors, one for each anatomical coordinate. Linear extrapolation, auto-regressive and support vector machine algorithms are compared against systems that use no prediction or surrogate-based motion estimation. Geometric uncertainties are reported as a function of image acquisition period and system latency. Average results show that the tracking error RMS can be decreased down to a [0.2; 1.2] mm range, for acquisition periods between 250 and 750 ms and system latencies between 50 and 300 ms. Except for the linear extrapolator, tracking and gating prediction errors were, on average, lower than those measured for surrogate-based motion estimation. This finding suggests that cine-MRI guidance, combined with appropriate prediction algorithms, could relevantly decrease geometric uncertainties in motion compensated treatments.
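
    One of the compared predictor families is easy to sketch: an autoregressive model fitted to the recent position history of one anatomical coordinate by least squares and extrapolated over the system latency (toy breathing-like trace; the order and horizon are assumptions):

```python
import numpy as np

def ar_predict(history, order=4, horizon=5):
    """Fit AR coefficients to the recent tumour-position history by least
    squares and extrapolate `horizon` samples ahead to cover the system
    latency. One scalar coordinate; the paper runs one such predictor per
    anatomical axis (and compares it with linear extrapolation and SVM)."""
    h = np.asarray(history, dtype=float)
    # regression: h[k] ≈ sum_i a_i * h[k - i]
    rows = [h[i:i + order][::-1] for i in range(len(h) - order)]
    A, y = np.array(rows), h[order:]
    a, *_ = np.linalg.lstsq(A, y, rcond=None)
    buf = list(h[-order:])
    for _ in range(horizon):
        buf.append(np.dot(a, buf[-order:][::-1]))
    return buf[-1]

# toy usage: breathing-like motion sampled every 250 ms, predict 500 ms ahead
t = np.arange(0, 60, 0.25)
pos = 10.0 * np.sin(2 * np.pi * t / 4.0)            # 4 s breathing period
pred = ar_predict(pos[:200], order=4, horizon=2)
print(pred, pos[201])                                # prediction vs truth
```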

  7. Evaluation of the accuracy of brain optical properties estimation at different ages using the frequency-domain multi-distance method

    NASA Astrophysics Data System (ADS)

    Dehaes, Mathieu; Grant, P. Ellen; Sliva, Danielle D.; Roche-Labarbe, Nadège; Pienaar, Rudolph; Boas, David A.; Franceschini, Maria Angela; Selb, Juliette

    2011-03-01

    NIRS is safe, non-invasive and offers the possibility to record local hemodynamic parameters at the bedside, avoiding the transportation of neonates and critically ill patients. In this work, we evaluate the accuracy of the frequency-domain multi-distance (FD-MD) method to retrieve brain optical properties from neonate to adult. Realistic measurements are simulated using 3D Monte Carlo modeling of light propagation. Eight different ages were investigated: a term newborn of 38 weeks gestational age, two infants of 6 and 12 months of age, a toddler of 2 years (yr.) of age, two children of 5 and 10 years of age, a teenager of 14 yr. of age, and an adult. Measurements are generated at multiple distances on the right parietal area of the head models and fitted to a homogeneous FD-MD model to estimate the brain optical properties. In the newborn, infant, toddler and 5 yr. old child models, the error was dominated by the head curvature, while it was dominated by the superficial layer in the 10 yr. old child, teenager and adult heads. The influence of the CSF is also evaluated; in this case, the absorption coefficients suffer from an additional error. In all cases, measurements at 5 mm provided the worst estimates because of the limited validity of the diffusion approximation at short source-detector separations.
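    For reference, the homogeneous FD-MD model recovers the absorption and reduced scattering coefficients from the distance-slopes of the phase and of ln(r²·AC). The sketch below implements those standard slope relations and checks them against a forward infinite-medium diffusion calculation; it is a textbook illustration, not the authors' Monte Carlo fitting pipeline:

```python
import numpy as np

def fdmd_optical_properties(s_phase, s_ac, f_mod_hz, n_tissue=1.4):
    """Standard frequency-domain multi-distance slope relations.

    s_phase : slope of phase vs distance, rad/mm (positive)
    s_ac    : slope of ln(r^2 * AC) vs distance, 1/mm (negative)
    """
    v = 299.792458 / n_tissue               # light speed in tissue, mm/ns
    omega = 2 * np.pi * f_mod_hz * 1e-9     # modulation angular frequency, rad/ns
    mua = (omega / (2 * v)) * (s_phase / s_ac - s_ac / s_phase)
    musp = (s_ac**2 - s_phase**2) / (3 * mua)
    return mua, musp

# Forward check with known properties (infinite-medium diffusion wavevector).
mua0, musp0, f_mod = 0.01, 1.0, 110e6       # 1/mm, 1/mm, Hz
v = 299.792458 / 1.4
k = np.sqrt(3 * musp0 * (-mua0 + 1j * 2 * np.pi * f_mod * 1e-9 / v))
print(fdmd_optical_properties(k.real, -k.imag, f_mod))   # ~ (0.01, 1.0)
```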

  8. Monitoring of deep brain temperature in infants using multi-frequency microwave radiometry and thermal modelling.

    PubMed

    Han, J W; Van Leeuwen, G M; Mizushina, S; Van de Kamer, J B; Maruyama, K; Sugiura, T; Azzopardi, D V; Edwards, A D

    2001-07-01

    In this study we present a design for a multi-frequency microwave radiometer aimed at prolonged monitoring of deep brain temperature in newborn infants and suitable for use during hypothermic neural rescue therapy. We identify appropriate hardware to measure brightness temperature and evaluate the accuracy of the measurements. We describe a method to estimate the tissue temperature distribution from measured brightness temperatures which uses the results of numerical simulations of the tissue temperature as well as the propagation of the microwaves in a realistic detailed three-dimensional infant head model. The temperature retrieval method is then used to evaluate how the statistical fluctuations in the measured brightness temperatures limit the confidence interval for the estimated temperature: for an 18 degrees C temperature differential between cooled surface and deep brain we found a standard error in the estimated central brain temperature of 0.75 degrees C. Evaluation of the systematic errors arising from inaccuracies in model parameters showed that realistic deviations in tissue parameters have little impact compared to uncertainty in the thickness of the bolus between the receiving antenna and the infant's head or in the skull thickness. This highlights the need to pay particular attention to these latter parameters in future practical implementation of the technique.

  9. Effects of electrocardiography contamination and comparison of ECG removal methods on upper trapezius electromyography recordings.

    PubMed

    Marker, Ryan J; Maluf, Katrina S

    2014-12-01

    Electromyography (EMG) recordings from the trapezius are often contaminated by the electrocardiography (ECG) signal, making it difficult to distinguish low-level muscle activity from muscular rest. This study investigates the influence of ECG contamination on EMG amplitude and frequency estimations in the upper trapezius during muscular rest and low-level contractions. A new method of ECG contamination removal, filtered template subtraction (FTS), is described and compared to 30 Hz high-pass filter (HPF) and averaged template subtraction (ATS) methods. FTS creates a unique template of each ECG artifact using a low-pass filtered copy of the contaminated signal, which is subtracted from contaminated periods in the original signal. ECG contamination results in an over-estimation of EMG amplitude during rest in the upper trapezius, with negligible effects on amplitude and frequency estimations during low-intensity isometric contractions. FTS and HPF successfully removed ECG contamination from periods of muscular rest, yet introduced errors during muscle contraction. Conversely, ATS failed to fully remove ECG contamination during muscular rest, yet did not introduce errors during muscle contraction. The relative advantages and disadvantages of different ECG contamination removal methods should be considered in the context of the specific motor tasks that require analysis. Copyright © 2014 Elsevier Ltd. All rights reserved.
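    A minimal sketch of the general template-subtraction idea is given below; it is not the authors' FTS implementation, and the filter order, cutoff, window length and R-peak detection thresholds are all assumed values:

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def filtered_template_subtraction(emg, fs, lowpass_hz=30.0, half_win_s=0.25):
    """Subtract, around each detected R-peak, a per-beat template taken from a
    low-pass filtered copy of the contaminated EMG (sketch of the FTS idea)."""
    b, a = butter(4, lowpass_hz / (fs / 2.0), btype="low")
    ecg_like = filtfilt(b, a, emg)           # low-pass copy keeps ECG, attenuates EMG
    peaks, _ = find_peaks(ecg_like, height=3.0 * np.std(ecg_like),
                          distance=int(0.4 * fs))
    clean = emg.copy()
    half = int(half_win_s * fs)
    for p in peaks:
        lo, hi = max(0, p - half), min(emg.size, p + half)
        clean[lo:hi] -= ecg_like[lo:hi]      # subtract the unique per-beat template
    return clean
```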

  10. High-Resolution Time-Frequency Spectrum-Based Lung Function Test from a Smartphone Microphone

    PubMed Central

    Thap, Tharoeun; Chung, Heewon; Jeong, Changwon; Hwang, Ki-Eun; Kim, Hak-Ryul; Yoon, Kwon-Ha; Lee, Jinseok

    2016-01-01

    In this paper, a smartphone-based lung function test, developed to estimate lung function parameters using a high-resolution time-frequency spectrum from a smartphone built-in microphone, is presented. A method for estimating the forced expiratory volume in 1 s divided by forced vital capacity (FEV1/FVC) based on the variable frequency complex demodulation method (VFCDM) is first proposed. We evaluated our proposed method on 26 subjects, including 13 healthy subjects and 13 chronic obstructive pulmonary disease (COPD) patients, by comparison with the parameters clinically obtained from pulmonary function tests (PFTs). For the healthy subjects, we found that the absolute error (AE) and root mean squared error (RMSE) of the FEV1/FVC ratio were 4.49% ± 3.38% and 5.54%, respectively. For the COPD patients, the AE and RMSE were 10.30% ± 10.59% and 14.48%, respectively. For both groups, we compared the results using the continuous wavelet transform (CWT) and short-time Fourier transform (STFT), and found that VFCDM was superior to both. Further, to estimate other parameters, including forced vital capacity (FVC), forced expiratory volume in 1 s (FEV1), and peak expiratory flow (PEF), regression analysis was conducted to establish a linear transformation. However, the parameters FVC, FEV1, and PEF had correlation coefficient r values of 0.323, 0.275, and −0.257, respectively, while FEV1/FVC had an r value of 0.814. These results suggest that only the FEV1/FVC ratio can be accurately estimated from a smartphone built-in microphone. The other parameters, including FVC, FEV1, and PEF, were subjective and dependent on the subject's familiarization with the test and performance of forced exhalation toward the microphone. PMID:27548164

  11. Estimating population diversity with CatchAll

    PubMed Central

    Bunge, John; Woodard, Linda; Böhning, Dankmar; Foster, James A.; Connolly, Sean; Allen, Heather K.

    2012-01-01

    Motivation: The massive data produced by next-generation sequencing require advanced statistical tools. We address estimating the total diversity or species richness in a population. To date, only relatively simple methods have been implemented in available software. There is a need for software employing modern, computationally intensive statistical analyses including error, goodness-of-fit and robustness assessments. Results: We present CatchAll, a fast, easy-to-use, platform-independent program that computes maximum likelihood estimates for finite-mixture models, weighted linear regression-based analyses and coverage-based non-parametric methods, along with outlier diagnostics. Given sample ‘frequency count’ data, CatchAll computes 12 different diversity estimates and applies a model-selection algorithm. CatchAll also derives discounted diversity estimates to adjust for possibly uncertain low-frequency counts. It is accompanied by an Excel-based graphics program. Availability: Free executable downloads for Linux, Windows and Mac OS, with manual and source code, at www.northeastern.edu/catchall. Contact: jab18@cornell.edu PMID:22333246

  12. Comparative Analysis of Methods of Evaluating the Lower Ionosphere Parameters by Tweek Atmospherics

    NASA Astrophysics Data System (ADS)

    Krivonos, A. P.; Shvets, A. V.

    2016-12-01

    Purpose: A comparative analysis of the phase and frequency methods for determining the effective Earth-ionosphere waveguide heights for the basic and higher types of normal waves (modes), and the distance to the source of radiation - lightning - has been made by analyzing pulse signals in the ELF-VLF range - tweek atmospherics (tweeks). Design/methodology/approach: To test the methods in computer simulations, tweek waveforms were synthesized for an Earth-ionosphere waveguide model with an exponential conductivity profile of the lower ionosphere. The calculations were made for a 20-40 dB signal/noise ratio. Findings: The error of the frequency method in determining the effective height of the waveguide for different waveguide modes was less than 0.5 %. The error of the phase method in determining the effective height of the waveguide was less than 0.8 %. Errors in determining the distance to the lightning were less than 1 % for the phase method, and less than 5 % for the frequency method, for source ranges of 1000-3000 km. Conclusions: The analysis shows that the accuracy of the frequency and phase methods is practically the same within distances of 1000-3000 km. For distances less than 1000 km, the phase method gives a more accurate evaluation of the range, so a combination of the two methods can be used to improve estimation of the tweek propagation path parameters.
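    The frequency method rests on the waveguide-mode cutoff relation f_n = n·c/(2h), so an observed modal cutoff frequency gives the effective reflection height directly. A one-function sketch (the example cutoff value is illustrative):

```python
C = 299_792_458.0  # speed of light, m/s

def effective_height_km(f_cutoff_hz, mode_n=1):
    """Effective Earth-ionosphere waveguide height from a tweek mode cutoff:
    f_n = n * c / (2 h)  =>  h = n * c / (2 f_n)."""
    return mode_n * C / (2.0 * f_cutoff_hz) / 1000.0

# A first-mode cutoff near 1.7 kHz corresponds to a night-time height of ~88 km.
print("%.1f km" % effective_height_km(1700.0))
```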

  13. Ringing Artefact Reduction By An Efficient Likelihood Improvement Method

    NASA Astrophysics Data System (ADS)

    Fuderer, Miha

    1989-10-01

    In MR imaging, the extent of the acquired spatial frequencies of the object is necessarily finite. The resulting image shows artefacts caused by "truncation" of its Fourier components. These are known as Gibbs artefacts or ringing artefacts. These artefacts are particularly visible when the time-saving reduced acquisition method is used, say, when scanning only the lowest 70% of the 256 data lines. Filtering the data results in loss of resolution. A method is described that estimates the high-frequency data from the low-frequency data lines, with the likelihood of the image as criterion. It is a computationally very efficient method, since it requires practically only two extra Fourier transforms in addition to the normal reconstruction. The results of this method on MR images of human subjects are promising. Evaluations on a 70% acquisition image show about a 20% decrease of the error energy after processing. "Error energy" is defined as the total power of the difference to a 256-data-line reference image. The elimination of ringing artefacts then appears almost complete.
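    The underlying artefact is easy to reproduce: truncating the Fourier data of a sharp edge yields the characteristic ripple. The short demonstration below keeps only 70% of the Fourier lines (here symmetrically about the centre) and evaluates the error-energy measure; it does not implement the likelihood method itself:

```python
import numpy as np

n = 256
obj = np.zeros(n)
obj[96:160] = 1.0                            # 1-D "object" with sharp edges

# Keep only 70% of the 256 Fourier lines, zero-fill the rest.
spec = np.fft.fftshift(np.fft.fft(obj))
keep = int(0.70 * n)
lo = (n - keep) // 2
trunc = np.zeros_like(spec)
trunc[lo:lo + keep] = spec[lo:lo + keep]
recon = np.fft.ifft(np.fft.ifftshift(trunc)).real

# "Error energy": total power of the difference to the full reference.
print("error energy of 70%% acquisition: %.3f" % np.sum((recon - obj) ** 2))
```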

  14. The modal surface interpolation method for damage localization

    NASA Astrophysics Data System (ADS)

    Pina Limongelli, Maria

    2017-05-01

    The Interpolation Method (IM) has been previously proposed and successfully applied for damage localization in plate-like structures. The method is based on the detection of localized reductions of smoothness in the Operational Deformed Shapes (ODSs) of the structure. The IM can be applied to any type of structure provided the ODSs are estimated accurately in the original and in the damaged configurations. If the latter circumstance fails to occur, for example when the structure is subjected to unknown inputs or the structural responses are strongly corrupted by noise, both false and missing alarms occur when the IM is applied to localize a concentrated damage. In order to overcome these drawbacks, a modification of the method is investigated herein. An ODS is the deformed shape of a structure subjected to a harmonic excitation: at resonances the ODSs are dominated by the relevant mode shapes. The effect of noise at resonance is usually lower than at other frequency values, hence the relevant ODSs are estimated with higher reliability. Several methods have been proposed to reliably estimate mode shapes in the case of unknown input. These two circumstances can be exploited to improve the reliability of the IM. In order to reduce or eliminate the drawbacks related to the estimation of the ODSs in the case of noisy signals, this paper investigates a modified version of the method based on a damage feature calculated from the interpolation error of the mode shapes only, rather than of all the operational shapes in the significant frequency range. The comparison between the results of the IM in its current version (with the interpolation error calculated by summing the contributions of all the operational shapes) and in the newly proposed version (with the interpolation error limited to the mode shapes) is reported.
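    A hedged sketch of the core damage feature: each point of a (mode) shape is interpolated from the remaining points with a spline, and the point-wise interpolation error is taken as the feature. The spline type, shape and damage perturbation below are assumed for illustration:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def interpolation_error(x, shape):
    """Leave-one-out spline interpolation error at each interior point;
    a localized reduction of smoothness shows up as a peak."""
    err = np.zeros_like(shape)
    for i in range(1, len(x) - 1):
        spline = CubicSpline(np.delete(x, i), np.delete(shape, i))
        err[i] = abs(spline(x[i]) - shape[i])
    return err

x = np.linspace(0.0, 1.0, 21)
mode = np.sin(np.pi * x)                     # first mode shape of a simple beam
mode[8] += 0.01                              # small local anomaly mimicking damage
print("feature peaks at x = %.2f" % x[np.argmax(interpolation_error(x, mode))])
```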

  15. An Autonomous Satellite Time Synchronization System Using Remotely Disciplined VC-OCXOs

    PubMed Central

    Gu, Xiaobo; Chang, Qing; Glennon, Eamonn P.; Xu, Baoda; Dempseter, Andrew G.; Wang, Dun; Wu, Jiapeng

    2015-01-01

    An autonomous remote clock control system is proposed to provide time synchronization and frequency syntonization for satellite-to-satellite or ground-to-satellite time transfer, with the system comprising on-board voltage-controlled oven-controlled crystal oscillators (VC-OCXOs) that are disciplined to a remote master atomic clock or oscillator. The synchronization loop aims to provide autonomous operation over extended periods, to be widely applicable to a variety of scenarios, and to be robust. A new architecture comprising frequency division duplex (FDD), synchronous time division duplex (STDD) and code division multiple access (CDMA) with a centralized topology is employed. This new design utilizes dual one-way ranging methods to precisely measure the clock error, adopts least-squares (LS) methods to predict the clock error and employs a third-order phase-locked loop (PLL) to generate the voltage control signal. A general functional model for this system is proposed and the error sources and delays that affect the time synchronization are discussed. Related algorithms for estimating and correcting these errors are also proposed. The performance of the proposed system is simulated and guidance for selecting the clock is provided. PMID:26213929
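    As a sketch of the least-squares prediction step: the clock error measured by dual one-way ranging can be fitted with a low-order polynomial (offset, drift, ageing) and extrapolated ahead of time. The model order, noise level and prediction horizon below are assumed:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(0.0, 600.0, 10.0)                        # measurement epochs, s
true_err = 2e-6 + 1e-9 * t + 5e-13 * t**2              # offset + drift + ageing, s
meas = true_err + 2e-10 * rng.standard_normal(t.size)  # ranging measurement noise

# Least-squares fit of a quadratic clock model, then prediction 60 s ahead.
coeffs = np.polyfit(t, meas, deg=2)
t_next = t[-1] + 60.0
print("predicted clock error: %.4e s" % np.polyval(coeffs, t_next))
print("true clock error:      %.4e s" % (2e-6 + 1e-9 * t_next + 5e-13 * t_next**2))
```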

  16. Associations between communication climate and the frequency of medical error reporting among pharmacists within an inpatient setting.

    PubMed

    Patterson, Mark E; Pace, Heather A; Fincham, Jack E

    2013-09-01

    Although error-reporting systems enable hospitals to accurately track safety climate through the identification of adverse events, these systems may be underused within a work climate of poor communication. The objective of this analysis is to identify the extent to which perceived communication climate among hospital pharmacists impacts medical error reporting rates. This cross-sectional study used survey responses from more than 5000 pharmacists responding to the 2010 Hospital Survey on Patient Safety Culture (HSOPSC). Two composite scores were constructed for "communication openness" and "feedback and communication about error," respectively. Error reporting frequency was defined from the survey question, "In the past 12 months, how many event reports have you filled out and submitted?" Multivariable logistic regressions were used to estimate the likelihood of medical error reporting conditional upon communication openness or feedback levels, controlling for pharmacist years of experience, hospital geographic region, and ownership status. Pharmacists with higher communication openness scores were 40% more likely than those with lower scores to have filed or submitted a medical error report in the past 12 months (OR, 1.4; 95% CI, 1.1-1.7; P = 0.004). In contrast, pharmacists with higher communication feedback scores were no more likely than those with lower scores to have filed or submitted a medical error report in the past 12 months (OR, 1.0; 95% CI, 0.8-1.3; P = 0.97). Hospital work climates that encourage pharmacists to communicate freely about problems related to patient safety are conducive to medical error reporting. The presence of feedback infrastructures about error may not be sufficient to induce error-reporting behavior.

  17. Use of the Method of Triads in the Validation of Sodium and Potassium Intake in the Brazilian Longitudinal Study of Adult Health (ELSA-Brasil)

    PubMed Central

    Pereira, Taísa Sabrina Silva; Cade, Nágela Valadão; Mill, José Geraldo; Sichieri, Rosely; Molina, Maria del Carmen Bisi

    2016-01-01

    Introduction Biomarkers are a good choice for use in the validation of food frequency questionnaires owing to the independence of their random errors. Objective To assess the validity of the potassium and sodium intake estimated using the Food Frequency Questionnaire ELSA-Brasil. Subjects/Methods A subsample of participants in the ELSA-Brasil cohort was included in this study in 2009. Sodium and potassium intake were estimated using three methods: a semi-quantitative food frequency questionnaire, 12-hour nocturnal urinary excretion and three 24-hour food records. Correlation coefficients were calculated between the methods, and the validity coefficient was calculated using the method of triads. The 95% confidence intervals for the validity coefficient were estimated using bootstrap sampling. Exact and adjacent agreement and disagreement of the estimated sodium and potassium intake quintiles were compared among the three methods. Results The sample consisted of 246 participants, aged 53±8 years, 52% women. Validity coefficients for sodium were weak (ρ food frequency questionnaire-actual intake = 0.37 and ρ biomarker-actual intake = 0.21) and moderate (ρ food records-actual intake = 0.56). The validity coefficients were higher for potassium (ρ food frequency questionnaire-actual intake = 0.60; ρ biomarker-actual intake = 0.42; ρ food records-actual intake = 0.79). Conclusions The Food Frequency Questionnaire ELSA-Brasil showed good validity in estimating potassium intake in epidemiological studies. For sodium, validity was weak, likely due to the non-quantification of salt added to prepared food. PMID:28030625

  18. Use of the Method of Triads in the Validation of Sodium and Potassium Intake in the Brazilian Longitudinal Study of Adult Health (ELSA-Brasil).

    PubMed

    Pereira, Taísa Sabrina Silva; Cade, Nágela Valadão; Mill, José Geraldo; Sichieri, Rosely; Molina, Maria Del Carmen Bisi

    2016-01-01

    Biomarkers are a good choice for use in the validation of food frequency questionnaires owing to the independence of their random errors. The aim was to assess the validity of the potassium and sodium intake estimated using the Food Frequency Questionnaire ELSA-Brasil. A subsample of participants in the ELSA-Brasil cohort was included in this study in 2009. Sodium and potassium intake were estimated using three methods: a semi-quantitative food frequency questionnaire, 12-hour nocturnal urinary excretion and three 24-hour food records. Correlation coefficients were calculated between the methods, and the validity coefficient was calculated using the method of triads. The 95% confidence intervals for the validity coefficient were estimated using bootstrap sampling. Exact and adjacent agreement and disagreement of the estimated sodium and potassium intake quintiles were compared among the three methods. The sample consisted of 246 participants, aged 53±8 years, 52% women. Validity coefficients for sodium were weak (ρ food frequency questionnaire-actual intake = 0.37 and ρ biomarker-actual intake = 0.21) and moderate (ρ food records-actual intake = 0.56). The validity coefficients were higher for potassium (ρ food frequency questionnaire-actual intake = 0.60; ρ biomarker-actual intake = 0.42; ρ food records-actual intake = 0.79). In conclusion, the Food Frequency Questionnaire ELSA-Brasil showed good validity in estimating potassium intake in epidemiological studies. For sodium, validity was weak, likely due to the non-quantification of salt added to prepared food.

  19. Direct comparisons of Illumina vs. Roche 454 sequencing technologies on the same microbial community DNA sample.

    PubMed

    Luo, Chengwei; Tsementzi, Despina; Kyrpides, Nikos; Read, Timothy; Konstantinidis, Konstantinos T

    2012-01-01

    Next-generation sequencing (NGS) is commonly used in metagenomic studies of complex microbial communities, but whether different NGS platforms recover the same diversity from a sample, and whether their assembled sequences are of comparable quality, remains unclear. We compared the two most frequently used platforms, the Roche 454 FLX Titanium and the Illumina Genome Analyzer (GA) II, on the same DNA sample obtained from a complex freshwater planktonic community. Despite the substantial differences in read length and sequencing protocols, the platforms provided a comparable view of the community sampled. For instance, derived assemblies overlapped in ~90% of their total sequences, and in situ abundances of genes and genotypes (estimated based on sequence coverage) correlated highly between the two platforms (R² > 0.9). Evaluation of base-call error, frameshift frequency, and contig length suggested that Illumina offered equivalent, if not better, assemblies than Roche 454. The results from metagenomic samples were further validated against DNA samples of eighteen isolate genomes, which showed a range of genome sizes and G+C% content. We also provide quantitative estimates of the errors in gene and contig sequences assembled from datasets characterized by different levels of complexity and G+C% content. For instance, we noted that homopolymer-associated, single-base errors affected ~1% of the protein sequences recovered in Illumina contigs of 10× coverage and 50% G+C; this frequency increased to ~3% when non-homopolymer errors were also considered. Collectively, our results should serve as a useful practical guide for choosing proper sampling strategies and data processing protocols for future metagenomic studies.

  20. Dealing with dietary measurement error in nutritional cohort studies.

    PubMed

    Freedman, Laurence S; Schatzkin, Arthur; Midthune, Douglas; Kipnis, Victor

    2011-07-20

    Dietary measurement error creates serious challenges to reliably discovering new diet-disease associations in nutritional cohort studies. Such error causes substantial underestimation of relative risks and reduction of statistical power for detecting associations. On the basis of data from the Observing Protein and Energy Nutrition Study, we recommend the following approaches to deal with these problems. Regarding data analysis of cohort studies using food-frequency questionnaires, we recommend 1) using energy adjustment for relative risk estimation; 2) reporting estimates adjusted for measurement error along with the usual relative risk estimates, whenever possible (this requires data from a relevant, preferably internal, validation study in which participants report intakes using both the main instrument and a more detailed reference instrument such as a 24-hour recall or multiple-day food record); 3) performing statistical adjustment of relative risks, based on such validation data, if they exist, using univariate (only for energy-adjusted intakes such as densities or residuals) or multivariate regression calibration. We note that whereas unadjusted relative risk estimates are biased toward the null value, statistical significance tests of unadjusted relative risk estimates are approximately valid. Regarding study design, we recommend increasing the sample size to remedy loss of power; however, it is important to understand that this will often be an incomplete solution because the attenuated signal may be too small to distinguish from unmeasured confounding in the model relating disease to reported intake. Future work should be devoted to alleviating the problem of signal attenuation, possibly through the use of improved self-report instruments or by combining dietary biomarkers with self-report instruments.
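    A sketch of the univariate regression-calibration adjustment recommended above: the attenuation factor λ is estimated from validation data pairing the main instrument (an FFQ) with a reference instrument, and the attenuated log relative risk is divided by λ. The data here are synthetic, not from the Observing Protein and Energy Nutrition Study:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
true_intake = rng.normal(0.0, 1.0, n)                # energy-adjusted true intake
ffq = 0.4 * true_intake + rng.normal(0.0, 1.0, n)    # error-prone FFQ report
reference = true_intake + rng.normal(0.0, 0.3, n)    # e.g. a 24-hour recall

# Attenuation factor: slope of the regression of the reference on the FFQ.
lam = np.cov(reference, ffq)[0, 1] / np.var(ffq, ddof=1)

beta_observed = 0.08                   # attenuated log relative risk from the cohort
beta_corrected = beta_observed / lam   # regression-calibration correction
print("lambda = %.2f, corrected log-RR = %.3f" % (lam, beta_corrected))
```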

  1. Quantifying and correcting motion artifacts in MRI

    NASA Astrophysics Data System (ADS)

    Bones, Philip J.; Maclaren, Julian R.; Millane, Rick P.; Watts, Richard

    2006-08-01

    Patient motion during magnetic resonance imaging (MRI) can produce significant artifacts in a reconstructed image. Since measurements are made in the spatial frequency domain ('k-space'), rigid-body translational motion results in phase errors in the data samples while rotation causes location errors. A method is presented to detect and correct these errors via a modified sampling strategy, thereby achieving more accurate image reconstruction. The strategy involves sampling vertical and horizontal strips alternately in k-space and employs phase correlation within the overlapping segments to estimate translational motion. An extension, also based on correlation, is employed to estimate rotational motion. Results from simulations with computer-generated phantoms suggest that the algorithm is robust up to realistic noise levels. The work is being extended to physical phantoms. Provided that a reference image is available and the object is of limited extent, it is shown that a measure related to the amount of energy outside the support can be used to objectively compare the severity of motion-induced artifacts.
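    The k-space property exploited here is the Fourier shift theorem: a rigid translation multiplies each sample by a linear phase, so the shift can be recovered by phase correlation. A toy 1-D sketch of that estimation step (not the strip-sampling strategy itself):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.standard_normal(256)                    # 1-D "object" line
true_shift = 5

k_ref = np.fft.fft(x)                           # k-space before motion
k_mov = np.fft.fft(np.roll(x, true_shift))      # k-space after translation

# Phase correlation: the inverse FFT of the normalized cross-power
# spectrum peaks at the translation.
cross = k_mov * np.conj(k_ref)
corr = np.fft.ifft(cross / np.abs(cross)).real
shift = int(np.argmax(corr))
print("estimated translation:", shift if shift <= x.size // 2 else shift - x.size)
```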

  2. What do the experts know? Calibration, precision, and the wisdom of crowds among forensic handwriting experts.

    PubMed

    Martire, Kristy A; Growns, Bethany; Navarro, Danielle J

    2018-04-17

    Forensic handwriting examiners currently testify to the origin of questioned handwriting for legal purposes. However, forensic scientists are increasingly being encouraged to assign probabilities to their observations in the form of a likelihood ratio. This study is the first to examine whether handwriting experts are able to estimate the frequency of US handwriting features more accurately than novices. The results indicate that the absolute error for experts was lower than for novices, but the size of the effect is modest, and the overall error rate even for experts is large enough to raise questions about whether their estimates can be sufficiently trustworthy for presentation in court. When errors are separated into effects caused by miscalibration and those caused by imprecision, we find systematic differences between individuals. Finally, we consider several ways of aggregating predictions from multiple experts, suggesting that quite substantial improvements in expert predictions are possible when a suitable aggregation method is used.

  3. Use of global positioning system measurements to determine geocentric coordinates and variations in Earth orientation

    NASA Technical Reports Server (NTRS)

    Malla, R. P.; Wu, S.-C.; Lichten, S. M.

    1993-01-01

    Geocentric tracking station coordinates and short-period Earth-orientation variations can be measured with Global Positioning System (GPS) measurements. Unless calibrated, geocentric coordinate errors and changes in Earth orientation can lead to significant deep-space tracking errors. Ground-based GPS estimates of daily and subdaily changes in Earth orientation presently show centimeter-level precision. Comparison between GPS-estimated Earth-rotation variations, expressed as the difference between Universal Time 1 and Coordinated Universal Time (UT1-UTC), and those calculated from ocean tide models suggests that observed subdaily variations in Earth rotation are dominated by oceanic tidal effects. Preliminary GPS estimates of the geocenter location (from a 3-week experiment) agree with independent satellite laser-ranging estimates to better than 10 cm. Covariance analysis predicts that the temporal resolution of GPS estimates of Earth orientation and geocenter location improves significantly when data collected from low Earth-orbiting satellites are combined with data from ground sites. The low-Earth GPS tracking data enhance the accuracy and resolution for measuring high-frequency global geodynamical signals over time scales of less than 1 day.

  4. Transfer Function Identification Using Orthogonal Fourier Transform Modeling Functions

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    2013-01-01

    A method for transfer function identification, including both model structure determination and parameter estimation, was developed and demonstrated. The approach uses orthogonal modeling functions generated from frequency domain data obtained by Fourier transformation of time series data. The method was applied to simulation data to identify continuous-time transfer function models and unsteady aerodynamic models. Model fit error, estimated model parameters, and the associated uncertainties were used to show the effectiveness of the method for identifying accurate transfer function models from noisy data.
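    The sketch below is not the orthogonal-modeling-function algorithm, but the common spectral baseline for the same task: estimating a frequency response from Fourier-transformed input/output records with the H1 = S_uy/S_uu estimator. The second-order test system and noise level are assumed:

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(4)
fs = 100.0
u = rng.standard_normal(20000)                      # input time series
b, a = signal.butter(2, 10.0, fs=fs)                # "unknown" 2nd-order system
y = signal.lfilter(b, a, u) + 0.05 * rng.standard_normal(u.size)

# H1 estimator from cross- and auto-spectral densities.
f, s_uy = signal.csd(u, y, fs=fs, nperseg=1024)
_, s_uu = signal.welch(u, fs=fs, nperseg=1024)
h1 = s_uy / s_uu

_, h_true = signal.freqz(b, a, worN=f, fs=fs)       # true frequency response
print("max |H1 - H| below 20 Hz: %.3f" % np.max(np.abs(h1 - h_true)[f < 20.0]))
```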

  5. Certainty Equivalence M-MRAC for Systems with Unmatched Uncertainties

    NASA Technical Reports Server (NTRS)

    Stepanyan, Vahram; Krishnakumar, Kalmanje

    2012-01-01

    The paper presents a certainty equivalence state feedback indirect adaptive control design method for the systems of any relative degree with unmatched uncertainties. The approach is based on the parameter identification (estimation) model, which is completely separated from the control design and is capable of producing parameter estimates as fast as the computing power allows without generating high frequency oscillations. It is shown that the system's input and output tracking errors can be systematically decreased by the proper choice of the design parameters.

  6. Methods for estimating selected low-flow frequency statistics and harmonic mean flows for streams in Iowa

    USGS Publications Warehouse

    Eash, David A.; Barnes, Kimberlee K.

    2017-01-01

    A statewide study was conducted to develop regression equations for estimating six selected low-flow frequency statistics and harmonic mean flows for ungaged stream sites in Iowa. The estimation equations developed for the six low-flow frequency statistics include: the annual 1-, 7-, and 30-day mean low flows for a recurrence interval of 10 years, the annual 30-day mean low flow for a recurrence interval of 5 years, and the seasonal (October 1 through December 31) 1- and 7-day mean low flows for a recurrence interval of 10 years. Estimation equations also were developed for the harmonic-mean-flow statistic. Estimates of these seven selected statistics are provided for 208 U.S. Geological Survey continuous-record streamgages using data through September 30, 2006. The study area comprises streamgages located within Iowa and 50 miles beyond the State's borders. Because trend analyses indicated statistically significant positive trends when considering the entire period of record for the majority of the streamgages, the longest, most recent period of record without a significant trend was determined for each streamgage for use in the study. The median number of years of record used to compute each of these seven selected statistics was 35. Geographic information system software was used to measure 54 selected basin characteristics for each streamgage. Following the removal of two streamgages from the initial data set, data collected for 206 streamgages were compiled to investigate three approaches for regionalization of the seven selected statistics. Regionalization, a process using statistical regression analysis, provides a relation for efficiently transferring information from a group of streamgages in a region to ungaged sites in the region. The three regionalization approaches tested included statewide, regional, and region-of-influence regressions. For the regional regression, the study area was divided into three low-flow regions on the basis of hydrologic characteristics, landform regions, and soil regions. A comparison of root mean square errors and average standard errors of prediction for the statewide, regional, and region-of-influence regressions determined that the regional regression provided the best estimates of the seven selected statistics at ungaged sites in Iowa. Because a significant number of streams in Iowa reach zero flow as their minimum flow during low-flow years, four different types of regression analyses were used: left-censored, logistic, generalized-least-squares, and weighted-least-squares regression. A total of 192 streamgages were included in the development of 27 regression equations for the three low-flow regions. For the northeast and northwest regions, a censoring threshold was used to develop 12 left-censored regression equations to estimate the 6 low-flow frequency statistics for each region. For the southern region a total of 12 regression equations were developed; 6 logistic regression equations were developed to estimate the probability of zero flow for the 6 low-flow frequency statistics and 6 generalized least-squares regression equations were developed to estimate the 6 low-flow frequency statistics, if nonzero flow is estimated first by use of the logistic equations. A weighted-least-squares regression equation was developed for each region to estimate the harmonic-mean-flow statistic. 
Average standard errors of estimate for the left-censored equations for the northeast region range from 64.7 to 88.1 percent and for the northwest region range from 85.8 to 111.8 percent. Misclassification percentages for the logistic equations for the southern region range from 5.6 to 14.0 percent. Average standard errors of prediction for generalized least-squares equations for the southern region range from 71.7 to 98.9 percent and pseudo coefficients of determination for the generalized-least-squares equations range from 87.7 to 91.8 percent. Average standard errors of prediction for weighted-least-squares equations developed for estimating the harmonic-mean-flow statistic for each of the three regions range from 66.4 to 80.4 percent. The regression equations are applicable only to stream sites in Iowa with low flows not significantly affected by regulation, diversion, or urbanization and with basin characteristics within the range of those used to develop the equations. If the equations are used at ungaged sites on regulated streams, or on streams affected by water-supply and agricultural withdrawals, then the estimates will need to be adjusted by the amount of regulation or withdrawal to estimate the actual flow conditions if that is of interest. Caution is advised when applying the equations for basins with characteristics near the applicable limits of the equations and for basins located in karst topography. A test of two drainage-area ratio methods using 31 pairs of streamgages, for the annual 7-day mean low-flow statistic for a recurrence interval of 10 years, indicates a weighted drainage-area ratio method provides better estimates than regional regression equations for an ungaged site on a gaged stream in Iowa when the drainage-area ratio is between 0.5 and 1.4. These regression equations will be implemented within the U.S. Geological Survey StreamStats web-based geographic-information-system tool. StreamStats allows users to click on any ungaged site on a river and compute estimates of the seven selected statistics; in addition, 90-percent prediction intervals and the measured basin characteristics for the ungaged sites also are provided. StreamStats also allows users to click on any streamgage in Iowa and estimates computed for these seven selected statistics are provided for the streamgage.
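    A sketch of the drainage-area ratio idea tested in the study: a statistic is transferred from a gaged site to an ungaged site on the same stream by scaling with the ratio of drainage areas. The exponent below is a placeholder (the report's weighted variant is more elaborate); the 0.5-1.4 applicability range is the one quoted above:

```python
def drainage_area_ratio_estimate(q_gaged, area_ungaged, area_gaged, exponent=1.0):
    """Transfer a low-flow statistic to an ungaged site on the same stream:
    Q_u = Q_g * (A_u / A_g) ** b, with b = 1.0 as an illustrative placeholder."""
    ratio = area_ungaged / area_gaged
    if not 0.5 <= ratio <= 1.4:
        raise ValueError("area ratio %.2f outside the 0.5-1.4 range" % ratio)
    return q_gaged * ratio ** exponent

# Example: annual 7-day, 10-year low flow of 12.0 ft^3/s at the gaged site.
print("%.1f ft^3/s" % drainage_area_ratio_estimate(12.0, 310.0, 345.0))
```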

  7. Analysis of Wind Tunnel Lateral Oscillatory Data of the F-16XL Aircraft

    NASA Technical Reports Server (NTRS)

    Klein, Vladislav; Murphy, Patrick C.; Szyba, Nathan M.

    2004-01-01

    Static and dynamic wind tunnel tests were performed on an 18% scale model of the F-16XL aircraft. These tests were performed over a wide range of angles of attack and sideslip with oscillation amplitudes from 5 deg. to 30 deg. and reduced frequencies from 0.073 to 0.269. Harmonic analysis was used to estimate Fourier coefficients and in-phase and out-of-phase components. For frequency dependent data from rolling oscillations, a two-step regression method was used to obtain unsteady models (indicial functions), and derivatives due to sideslip angle, roll rate and yaw rate from in-phase and out-of-phase components. Frequency dependence was found for angles of attack between 20 deg. and 50 deg. Reduced values of coefficient of determination and increased values of fit error were found for angles of attack between 35 deg. and 45 deg. An attempt to estimate model parameters from yaw oscillations failed, probably due to the low number of test cases at different frequencies.

  8. Local regression type methods applied to the study of geophysics and high frequency financial data

    NASA Astrophysics Data System (ADS)

    Mariani, M. C.; Basu, K.

    2014-09-01

    In this work we applied locally weighted scatterplot smoothing techniques (Lowess/Loess) to geophysical and high-frequency financial data. We first analyze and apply this technique to California earthquake geological data. A spatial analysis was performed to show that the estimation of the earthquake magnitude at a fixed location is accurate to within a relative error of 0.01%. We also applied the same method to a high-frequency data set arising in the financial sector and obtained similarly satisfactory results. The application of this approach to the two different data sets demonstrates that the overall method is accurate and efficient, and that the Lowess approach is much more desirable than the Loess method. Previous works studied time series analysis; in this paper our local regression models perform a spatial analysis of the geophysical data, providing different information. For the high-frequency data, our models estimate the curve of best fit where the data depend on time.
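    A minimal Lowess sketch with statsmodels on synthetic data; the smoothing span frac is an assumed choice, and the example is generic rather than the earthquake or financial analysis itself:

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(5)
x = np.sort(rng.uniform(0.0, 10.0, 400))
y = np.sin(x) + 0.3 * rng.standard_normal(x.size)    # noisy observations

# Locally weighted regression; frac = fraction of points in each local fit.
fit = lowess(y, x, frac=0.2, return_sorted=True)
x_fit, y_fit = fit[:, 0], fit[:, 1]
print("max abs deviation from sin(x): %.3f" % np.max(np.abs(y_fit - np.sin(x_fit))))
```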

  9. Improving regression-model-based streamwater constituent load estimates derived from serially correlated data

    USGS Publications Warehouse

    Aulenbach, Brent T.

    2013-01-01

    A regression-model-based approach is a commonly used, efficient method for estimating streamwater constituent load when there is a relationship between streamwater constituent concentration and continuous variables such as streamwater discharge, season and time. A subsetting experiment using a 30-year dataset of daily suspended sediment observations from the Mississippi River at Thebes, Illinois, was performed to determine optimal sampling frequency, model calibration period length, and regression model methodology, as well as to determine the effect of serial correlation of model residuals on load estimate precision. Two regression-based methods were used to estimate streamwater loads, the Adjusted Maximum Likelihood Estimator (AMLE) and the composite method, a hybrid load estimation approach. While both methods accurately and precisely estimated loads at the model's calibration period time scale, precision was progressively worse at shorter reporting periods, from annual to monthly. Serial correlation in model residuals caused observed AMLE precision to be significantly worse than the model-calculated standard errors of prediction. The composite method effectively improved upon AMLE loads for shorter reporting periods, but required a sampling interval of 15 days or shorter when the serial correlations in the observed load residuals were greater than 0.15. AMLE precision was better at shorter sampling intervals and when using the shortest model calibration periods, such that the regression models better fit the temporal changes in the concentration-discharge relationship. The models with the largest errors typically had poor high-flow sampling coverage, resulting in unrepresentative models. Increasing sampling frequency and/or targeted high-flow sampling are more efficient approaches to ensure sufficient sampling and to avoid poorly performing models than increasing calibration period length.

  10. Field‐readable alphanumeric flags are valuable markers for shorebirds: use of double‐marking to identify cases of misidentification

    USGS Publications Warehouse

    Roche, Erin A.; Dovichin, Colin M.; Arnold, Todd W.

    2014-01-01

    Implicit assumptions for most mark-recapture studies are that individuals do not lose their markers and all observed markers are correctly recorded. If these assumptions are violated, e.g., due to loss or extreme wear of markers, estimates of population size and vital rates will be biased. Double-marking experiments have been widely used to estimate rates of marker loss and adjust for associated bias, and we extended this approach to estimate rates of recording errors. We double-marked 309 Piping Plovers (Charadrius melodus) with unique combinations of color bands and alphanumeric flags and used multi-state mark recapture models to estimate the frequency with which plovers were misidentified. Observers were twice as likely to read and report an invalid color-band combination (2.4% of the time) as an invalid alphanumeric code (1.0%). Observers failed to read matching band combinations or alphanumeric flag codes 4.5% of the time. Unlike previous band resighting studies, use of two resightable markers allowed us to identify when resighting errors resulted in reports of combinations or codes that were valid, but still incorrect; our results suggest this may be a largely unappreciated problem in mark-resight studies. Field-readable alphanumeric flags offer a promising auxiliary marker for identifying and potentially adjusting for false-positive resighting errors that may otherwise bias demographic estimates.

  11. An improved empirical model for diversity gain on Earth-space propagation paths

    NASA Technical Reports Server (NTRS)

    Hodge, D. B.

    1981-01-01

    An empirical model was generated to estimate diversity gain on Earth-space propagation paths as a function of Earth terminal separation distance, link frequency, elevation angle, and angle between the baseline and the path azimuth. The resulting model reproduces the entire experimental data set with an RMS error of 0.73 dB.

  12. Testing Boundary Conditions for the Conjunction Fallacy: Effects of Response Mode, Conceptual Focus, and Problem Type

    ERIC Educational Resources Information Center

    Wedell, Douglas H.; Moro, Rodrigo

    2008-01-01

    Two experiments used within-subject designs to examine how conjunction errors depend on the use of (1) choice versus estimation tasks, (2) probability versus frequency language, and (3) conjunctions of two likely events versus conjunctions of likely and unlikely events. All problems included a three-option format verified to minimize…

  13. The Effects of Spatial Diversity and Imperfect Channel Estimation on Wideband MC-DS-CDMA and MC-CDMA

    DTIC Science & Technology

    2009-10-01

    In our previous work, we compared the theoretical bit error rates of multi-carrier direct sequence code division multiple access (MC-DS-CDMA) and ... consider only those cases where MC-CDMA has higher frequency diversity than MC-DS-CDMA. Since increases in diversity yield diminishing gains, we conclude

  14. Error Analysis of Wind Measurements for the University of Illinois Sodium Doppler Temperature System

    NASA Technical Reports Server (NTRS)

    Pfenninger, W. Matthew; Papen, George C.

    1992-01-01

    Four-frequency lidar measurements of temperature and wind velocity require accurate frequency tuning to an absolute reference and long-term frequency stability. We quantify frequency tuning errors for the Illinois sodium system, which uses a sodium vapor cell to measure absolute frequencies and a reference interferometer to measure relative frequencies. To determine laser tuning errors, we monitor the vapor cell and interferometer during lidar data acquisition and analyze the two signals for variations as functions of time. Both the sodium cell and the interferometer are the same as those used to frequency tune the laser. By quantifying the frequency variations of the laser during data acquisition, an error analysis of temperature and wind measurements can be calculated. These error bounds determine the confidence in the calculated temperatures and wind velocities.

  15. Bayesian operational modal analysis of Jiangyin Yangtze River Bridge

    NASA Astrophysics Data System (ADS)

    Brownjohn, James Mark William; Au, Siu-Kui; Zhu, Yichen; Sun, Zhen; Li, Binbin; Bassitt, James; Hudson, Emma; Sun, Hongbin

    2018-09-01

    Vibration testing of long-span bridges is becoming a commissioning requirement, yet such exercises represent the extreme of experimental capability, with challenges for instrumentation (due to frequency range, resolution and km-order separation of sensors) and system identification (because of the extremely low frequencies). The challenge with instrumentation for modal analysis is managing synchronous data acquisition from sensors distributed widely apart inside and outside the structure. The ideal solution is precisely synchronised autonomous recorders that do not need cables, GPS or wireless communication. The challenge with system identification is to maximise the reliability of modal parameters through experimental design and subsequently to identify the parameters in terms of mean values and standard errors. The challenge is particularly severe for modes with the low frequency and damping typical of long-span bridges. One solution is to apply 'third generation' operational modal analysis procedures using Bayesian approaches in both the planning and analysis stages. The paper presents an exercise on the Jiangyin Yangtze River Bridge, a suspension bridge with a 1385 m main span. The exercise comprised planning of a test campaign to optimise the reliability of operational modal analysis, the deployment of a set of independent data acquisition units synchronised using precision oven-controlled crystal oscillators, and the subsequent identification of a set of modal parameters in terms of mean and variance errors. Although the bridge has had structural health monitoring technology installed since it was completed, this was the first full modal survey, aimed at identifying important features of the modal behaviour rather than providing fine resolution of mode shapes through the whole structure. Therefore, measurements were made in only the (south) tower, while torsional behaviour was identified by a single measurement using a pair of recorders across the carriageway. The modal survey revealed a first lateral symmetric mode with natural frequency 0.0536 Hz with standard error ±3.6% and damping ratio 4.4% with standard error ±88%. The first vertical mode is antisymmetric, with frequency 0.11 Hz ± 1.2% and damping ratio 4.9% ± 41%. A significant and novel element of the exercise was the planning of the measurement setups and their necessary duration, linked to prior estimation of the precision of the frequency and damping estimates. The second novelty is the use of multi-sensor precision synchronised acquisition without an external time reference on a structure of this scale. The challenges of ambient vibration testing and modal identification in a complex environment are addressed, leveraging advances in practical implementation and scientific understanding of the problem.

  16. Flood-Frequency Estimates for Streams on Kaua`i, O`ahu, Moloka`i, Maui, and Hawai`i, State of Hawai`i

    USGS Publications Warehouse

    Oki, Delwyn S.; Rosa, Sarah N.; Yeung, Chiu W.

    2010-01-01

    This study provides an updated analysis of the magnitude and frequency of peak stream discharges in Hawai`i. Annual peak-discharge data collected by the U.S. Geological Survey during and before water year 2008 (ending September 30, 2008) at stream-gaging stations were analyzed. The existing generalized-skew value for the State of Hawai`i was retained, although three methods were used to evaluate whether an update was needed. Regional regression equations were developed for peak discharges with 2-, 5-, 10-, 25-, 50-, 100-, and 500-year recurrence intervals for unregulated streams (those for which peak discharges are not affected to a large extent by upstream reservoirs, dams, diversions, or other structures) in areas with less than 20 percent combined medium- and high-intensity development on Kaua`i, O`ahu, Moloka`i, Maui, and Hawai`i. The generalized-least-squares (GLS) regression equations relate peak stream discharge to quantified basin characteristics (for example, drainage-basin area and mean annual rainfall) that were determined using geographic information system (GIS) methods. Each of the islands of Kaua`i, O`ahu, Moloka`i, Maui, and Hawai`i was divided into two regions, generally corresponding to a wet region and a dry region. Unique peak-discharge regression equations were developed for each region. The regression equations developed for this study have standard errors of prediction ranging from 16 to 620 percent. Standard errors of prediction are greatest for regression equations developed for leeward Moloka`i and southern Hawai`i. In general, estimated 100-year peak discharges from this study are lower than those from previous studies, which may reflect the longer periods of record used in this study. Each regression equation is valid within the range of values of the explanatory variables used to develop the equation. The regression equations were developed using peak-discharge data from streams that are mainly unregulated, and they should not be used to estimate peak discharges in regulated streams. Use of a regression equation beyond its limits will produce peak-discharge estimates with unknown error and should therefore be avoided. Improved estimates of the magnitude and frequency of peak discharges in Hawai`i will require continued operation of existing stream-gaging stations and operation of additional gaging stations for areas such as Moloka`i and Hawai`i, where limited stream-gaging data are available.

  17. Neural Correlates of User-initiated Motor Success and Failure - A Brain-Computer Interface Perspective.

    PubMed

    Yazmir, Boris; Reiner, Miriam

    2018-05-15

    Any motor action is, by nature, potentially accompanied by human errors. In order to facilitate development of error-tailored Brain-Computer Interface (BCI) correction systems, we focused on internal, human-initiated errors, and investigated EEG correlates of user outcome successes and errors during a continuous 3D virtual tennis game against a computer player. We used a multisensory, 3D, highly immersive environment. Missing and repelling the tennis ball were considered, as 'error' (miss) and 'success' (repel). Unlike most previous studies, where the environment "encouraged" the participant to perform a mistake, here errors happened naturally, resulting from motor-perceptual-cognitive processes of incorrect estimation of the ball kinematics, and can be regarded as user internal, self-initiated errors. Results show distinct and well-defined Event-Related Potentials (ERPs), embedded in the ongoing EEG, that differ across conditions by waveforms, scalp signal distribution maps, source estimation results (sLORETA) and time-frequency patterns, establishing a series of typical features that allow valid discrimination between user internal outcome success and error. The significant delay in latency between positive peaks of error- and success-related ERPs, suggests a cross-talk between top-down and bottom-up processing, represented by an outcome recognition process, in the context of the game world. Success-related ERPs had a central scalp distribution, while error-related ERPs were centro-parietal. The unique characteristics and sharp differences between EEG correlates of error/success provide the crucial components for an improved BCI system. The features of the EEG waveform can be used to detect user action outcome, to be fed into the BCI correction system. Copyright © 2016 IBRO. Published by Elsevier Ltd. All rights reserved.

  18. The MusIC method: a fast and quasi-optimal solution to the muscle forces estimation problem.

    PubMed

    Muller, A; Pontonnier, C; Dumont, G

    2018-02-01

    The present paper presents a fast and quasi-optimal method of muscle force estimation: the MusIC method. It consists of interpolating a first estimate from a database generated offline with a classical optimization problem, and then correcting it to respect the motion dynamics. Three different cost functions - two polynomial criteria and a min/max criterion - were tested on a planar musculoskeletal model. The MusIC method provides a computation frequency approximately 10 times higher than a classical optimization problem, with a relative mean error of 4% on cost function evaluation.
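    For context, the classical optimization problem that MusIC approximates solves, at each time step, for muscle forces that reproduce the joint torque while minimizing a cost. A toy single-joint sketch with scipy; the moment arms, maximal forces and squared-activation criterion are assumed, not the paper's model:

```python
import numpy as np
from scipy.optimize import minimize

moment_arms = np.array([0.04, 0.03, 0.05])    # m (assumed)
f_max = np.array([800.0, 600.0, 1000.0])      # maximal muscle forces, N (assumed)
torque_target = 30.0                          # joint torque to reproduce, N*m

cost = lambda f: np.sum((f / f_max) ** 2)     # a polynomial (squared-activation) criterion
constraints = {"type": "eq", "fun": lambda f: moment_arms @ f - torque_target}
bounds = [(0.0, fm) for fm in f_max]

res = minimize(cost, x0=f_max / 2.0, bounds=bounds, constraints=constraints)
print("muscle forces (N):", np.round(res.x, 1))
```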

  19. Techniques for estimating flood-peak discharges of rural, unregulated streams in Ohio

    USGS Publications Warehouse

    Koltun, G.F.

    2003-01-01

    Regional equations for estimating 2-, 5-, 10-, 25-, 50-, 100-, and 500-year flood-peak discharges at ungaged sites on rural, unregulated streams in Ohio were developed by means of ordinary and generalized least-squares (GLS) regression techniques. One-variable, simple equations and three-variable, full-model equations were developed on the basis of selected basin characteristics and flood-frequency estimates determined for 305 streamflow-gaging stations in Ohio and adjacent states. The average standard errors of prediction ranged from about 39 to 49 percent for the simple equations, and from about 34 to 41 percent for the full-model equations. Flood-frequency estimates determined by means of log-Pearson Type III analyses are reported along with weighted flood-frequency estimates, computed as a function of the log-Pearson Type III estimates and the regression estimates. Values of explanatory variables used in the regression models were determined from digital spatial data sets by means of a geographic information system (GIS), with the exception of drainage area, which was determined by digitizing the area within basin boundaries manually delineated on topographic maps. Use of GIS-based explanatory variables represents a major departure in methodology from that described in previous reports on estimating flood-frequency characteristics of Ohio streams. Examples are presented illustrating application of the regression equations to ungaged sites on ungaged and gaged streams. A method is provided to adjust regression estimates for ungaged sites by use of weighted and regression estimates for a gaged site on the same stream. A region-of-influence method, which employs a computer program to estimate flood-frequency characteristics for ungaged sites based on data from gaged sites with similar characteristics, was also tested and compared to the GLS full-model equations. For all recurrence intervals, the GLS full-model equations had superior prediction accuracy relative to the simple equations and therefore are recommended for use.

  20. Analysis of error type and frequency in apraxia of speech among Portuguese speakers.

    PubMed

    Cera, Maysa Luchesi; Minett, Thaís Soares Cianciarullo; Ortiz, Karin Zazo

    2010-01-01

    Most studies characterizing errors in the speech of patients with apraxia involve the English language. The aim was to analyze the types and frequency of errors produced by patients with apraxia of speech whose mother tongue was Brazilian Portuguese. Twenty adults with apraxia of speech caused by stroke were assessed. The types of error committed by the patients were analyzed both quantitatively and qualitatively, and their frequencies compared. We observed the presence of substitution, omission, trial-and-error, repetition, self-correction, anticipation, addition, reiteration and metathesis, in descending order of frequency. Omission errors were among the most common, whereas addition errors were infrequent. These findings differ from those reported for English-speaking patients, probably owing to differences in the methodologies used for classifying error types, the inclusion of speakers with apraxia secondary to aphasia, and differences between the structures of Portuguese and English in terms of syllable-onset complexity and its effect on motor control. The frequencies of omission and addition errors observed differed from the frequencies reported for speakers of English.

  1. Automation of workplace lifting hazard assessment for musculoskeletal injury prevention.

    PubMed

    Spector, June T; Lieblich, Max; Bao, Stephen; McQuade, Kevin; Hughes, Margaret

    2014-01-01

    Existing methods for practically evaluating musculoskeletal exposures such as posture and repetition in workplace settings have limitations. We aimed to automate the estimation of parameters in the revised United States National Institute for Occupational Safety and Health (NIOSH) lifting equation, a standard manual observational tool used to evaluate back injury risk related to lifting in workplace settings, using depth camera (Microsoft Kinect) and skeleton algorithm technology. A large dataset (approximately 22,000 frames, derived from six subjects) of simultaneous lifting and other motions recorded in a laboratory setting using the Kinect (Microsoft Corporation, Redmond, Washington, United States) and a standard optical motion capture system (Qualysis, Qualysis Motion Capture Systems, Qualysis AB, Sweden) was assembled. Error-correction regression models were developed to improve the accuracy of NIOSH lifting equation parameters estimated from the Kinect skeleton. Kinect-Qualysis errors were modelled using gradient boosted regression trees with a Huber loss function. Models were trained on data from all but one subject and tested on the excluded subject. Finally, models were tested on three lifting trials performed by subjects not involved in the generation of the model-building dataset. Error-correction appears to produce estimates for NIOSH lifting equation parameters that are more accurate than those derived from the Microsoft Kinect algorithm alone. Our error-correction models substantially decreased the variance of parameter errors. In general, the Kinect underestimated parameters, and modelling reduced this bias, particularly for more biased estimates. Use of the raw Kinect skeleton model tended to result in falsely high safe recommended weight limits of loads, whereas error-corrected models gave more conservative, protective estimates. Our results suggest that it may be possible to produce reasonable estimates of posture and temporal elements of tasks such as task frequency in an automated fashion, although these findings should be confirmed in a larger study. Further work is needed to incorporate force assessments and address workplace feasibility challenges. We anticipate that this approach could ultimately be used to perform large-scale musculoskeletal exposure assessment not only for research but also to provide real-time feedback to workers and employers during work method improvement activities and employee training.
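
    As a rough illustration of the error-correction idea, the sketch below fits gradient boosted trees with a Huber loss to the residual between a reference value and a simulated "Kinect" reading. The data, model settings, and the single input feature are stand-ins, not the study's dataset or feature set.

    ```python
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(0)

    # Hypothetical stand-in data: a Kinect-derived lifting parameter with a
    # systematic underestimate plus noise, against motion-capture truth.
    truth = rng.uniform(25, 63, size=2000)                 # reference values
    kinect = 0.85 * truth - 2.0 + rng.normal(0, 2.5, size=2000)

    # Model the Kinect error as a function of the Kinect reading itself,
    # using gradient boosted regression trees with a Huber loss.
    model = GradientBoostingRegressor(loss="huber", n_estimators=200,
                                      max_depth=3, learning_rate=0.05)
    model.fit(kinect.reshape(-1, 1), truth - kinect)       # learn the residual

    corrected = kinect + model.predict(kinect.reshape(-1, 1))
    print("raw RMSE:      ", np.sqrt(np.mean((kinect - truth) ** 2)))
    print("corrected RMSE:", np.sqrt(np.mean((corrected - truth) ** 2)))
    ```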

  2. Numerical simulation and analysis for low-frequency rock physics measurements

    NASA Astrophysics Data System (ADS)

    Dong, Chunhui; Tang, Genyang; Wang, Shangxu; He, Yanxiao

    2017-10-01

    In recent years, several experimental methods have been introduced to measure the elastic parameters of rocks in the relatively low-frequency range, such as differential acoustic resonance spectroscopy (DARS) and stress-strain measurement. It is necessary to verify the validity and feasibility of the applied measurement method and to quantify the sources and levels of measurement error. Relying solely on the laboratory measurements, however, we cannot evaluate the complete wavefield variation in the apparatus. Numerical simulations of elastic wave propagation, on the other hand, are used to model the wavefield distribution and physical processes in the measurement systems, and to verify the measurement theory and analyze the measurement results. In this paper we provide a numerical simulation method to investigate the acoustic waveform response of the DARS system and the quasi-static responses of the stress-strain system, both of which use axisymmetric apparatus. We applied this method to parameterize the properties of the rock samples, the sample locations and the sensor (hydrophone and strain gauges) locations and simulate the measurement results, i.e. resonance frequencies and axial and radial strains on the sample surface, from the modeled wavefield following the physical experiments. Rock physical parameters were estimated by inversion or direct processing of these data, and showed a perfect match with the true values, thus verifying the validity of the experimental measurements. Error analysis was also conducted for the DARS system with 18 numerical samples, and the sources and levels of error are discussed. In particular, we propose an inversion method for estimating both density and compressibility of these samples. The modeled results also showed fairly good agreement with the real experiment results, justifying the effectiveness and feasibility of our modeling method.

  3. Rain attenuation statistics over millimeter wave bands in South Korea

    NASA Astrophysics Data System (ADS)

    Shrestha, Sujan; Choi, Dong-You

    2017-01-01

    Rain-induced degradations are significant for terrestrial microwave links operating at frequencies above 10 GHz. This paper presents analyses of rain attenuation and rainfall data collected over three years (2013-2015) on a 3.2 km experimental link operating at 38 GHz and a 0.1 km link at 75 GHz. The shorter distance was used for the 75 GHz link in order to better record propagation effects such as rain-induced attenuation. An OTT Parsivel disdrometer was used to build the rain-rate database, which shows a rain rate of about 50 mm/h at 0.01% of the time; the corresponding attenuation values for vertical polarization are 20.89 dB at 38 GHz and 28.55 dB at 75 GHz. Prediction models, namely ITU-R P.530-16, Da Silva Mello, Moupfouma, Abdulrahman, Lin, and a differential-equation approach, are analyzed. This study helps to identify the most suitable rain attenuation model for the higher microwave bands. Applying ITU-R P.530-16 yielded relative error margins of about 3%, 38% and 42% at 38 GHz, and 80%, 70% and 61% at 75 GHz, for 0.1%, 0.01% and 0.001% of the time under vertical polarization. Interestingly, ITU-R P.530-16 gives estimates relatively close to the measured rain attenuation at 75 GHz with comparatively low error, and both the Abdulrahman model and ITU-R P.530-16 give the better estimates of the measured rain attenuation on the 38 GHz link. The performance of the prominent rain attenuation models is judged with the error metrics recommended by ITU-R P.311-15. Furthermore, the efficacy of frequency scaling of rain attenuation between the links is discussed. This study should be useful for making good rain attenuation predictions for terrestrial links operating at higher frequencies.
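
    Most of the models compared above build on the power-law relation between specific attenuation and rain rate, gamma = k * R^alpha (dB/km), scaled by an effective path length. A sketch of that calculation follows; the coefficients k and alpha below are placeholders, not the tabulated ITU-R P.838 values for 38 or 75 GHz.

    ```python
    def rain_attenuation_db(rate_mm_per_h, k, alpha, path_km, reduction=1.0):
        """Path attenuation A = k * R^alpha * d * r (ITU-R power-law form).

        k and alpha are frequency- and polarization-dependent regression
        coefficients (ITU-R P.838); `reduction` is an effective-path factor.
        """
        gamma = k * rate_mm_per_h ** alpha       # specific attenuation, dB/km
        return gamma * path_km * reduction

    # Illustrative only: placeholder coefficients, not P.838 tabulated values.
    print(rain_attenuation_db(50.0, k=0.4, alpha=1.0, path_km=3.2))
    ```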

  4. SPECIAL SESSION: (H21) on Global Precipitation Mission for Hydrology and Hydrometeorology. Sampling-Error Considerations for GPM-Era Rainfall Products

    NASA Technical Reports Server (NTRS)

    Bell, Thomas L.; Lau, William K. M. (Technical Monitor)

    2002-01-01

    The proposed Global Precipitation Mission (GPM) builds on the success of the Tropical Rainfall Measuring Mission (TRMM), offering a constellation of microwave-sensor-equipped smaller satellites in addition to a larger, multiply-instrumented "mother" satellite that will include an improved precipitation radar system to which the precipitation estimates of the smaller satellites can be tuned. Coverage by the satellites will be nearly global rather than being confined as TRMM was to lower latitudes. It is hoped that the satellite constellation can provide observations at most places on the earth at least once every three hours, though practical considerations may force some compromises. The GPM system offers the possibility of providing precipitation maps with much better time resolution than the monthly averages around which TRMM was planned, and therefore opens up new possibilities for hydrology and data assimilation into models. In this talk, methods that were developed for estimating sampling error in the rainfall averages that TRMM is providing will be used to estimate sampling error levels for GPM-era configurations. Possible impacts on GPM products of compromises in the sampling frequency will be discussed.

  5. How important is mode-coupling in global surface wave tomography?

    NASA Astrophysics Data System (ADS)

    Mikesell, Dylan; Nolet, Guust; Voronin, Sergey; Ritsema, Jeroen; Van Heijst, Hendrik-Jan

    2016-04-01

    To investigate the influence of mode coupling for fundamental mode Rayleigh waves with periods between 64 and 174 s, we analysed 3,505,902 phase measurements obtained along minor arc trajectories as well as 2,163,474 phases along major arcs. This is a selection of five frequency bands from the data set of Van Heijst and Woodhouse, extended with more recent earthquakes, that served to define upper mantle S velocity in model S40RTS. Since accurate estimation of the misfits (as represented by χ2) is essential, we used the method of Voronin et al. (GJI 199:276, 2014) to obtain objective estimates of the standard errors in this data set. We adapted Voronin's method slightly to prevent systematic errors along clusters of raypaths from being absorbed into source corrections. This was done by simultaneously analysing multiple clusters of raypaths originating from the same group of earthquakes but traveling in different directions. For the minor arc data, phase errors at the one sigma level range from 0.26 rad at a period of 174 s to 0.89 rad at 64 s. For the major arcs, these errors are roughly twice as high (0.40 and 2.09 rad, respectively). In the subsequent inversion we removed any outliers that could not be fitted at the 3 sigma level in an almost undamped inversion. Using these error estimates and the theory of finite-frequency tomography to include the effects of scattering, we solved for models with χ2 = N (the number of data) both including and excluding the effect of mode coupling between Love and Rayleigh waves. We shall present some dramatic differences between the two models, notably near ocean-continent boundaries (e.g. California) where mode conversions are likely to be largest. But a sharpening of other features, such as cratons and high-velocity blobs in the oceanic domain, is also observed when mode coupling is taken into account. An investigation of the influence of coupling on azimuthal anisotropy is still under way at the time of writing of this abstract, but the results will be included in the presentation.

  6. Deep-space navigation with differenced data types. Part 3: An expanded information content and sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Estefan, J. A.; Thurman, S. W.

    1992-01-01

    An approximate six-parameter analytic model for Earth-based differential range measurements is presented and is used to derive a representative analytic approximation for differenced Doppler measurements. The analytical models are tasked to investigate the ability of these data types to estimate spacecraft geocentric angular motion, Deep Space Network station oscillator (clock/frequency) offsets, and signal-path calibration errors over a period of a few days, in the presence of systematic station location and transmission media calibration errors. Quantitative results indicate that a few differenced Doppler plus ranging passes yield angular position estimates with a precision on the order of 0.1 to 0.4 micro-rad, and angular rate precision on the order of 10 to 25 x 10(exp -12) rad/sec, assuming no a priori information on the coordinate parameters. Sensitivity analyses suggest that troposphere zenith delay calibration error is the dominant systematic error source in most of the tracking scenarios investigated; as expected, the differenced Doppler data were found to be much more sensitive to troposphere calibration errors than differenced range. By comparison, results computed using wideband and narrowband (delta) VLBI under similar circumstances yielded angular precisions of 0.07 to 0.4 micro-rad, and angular rate precisions of 0.5 to 1.0 x 10(exp -12) rad/sec.

  7. Deep-space navigation with differenced data types. Part 3: An expanded information content and sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Estefan, J. A.; Thurman, S. W.

    1992-01-01

    An approximate six-parameter analytic model for Earth-based differenced range measurements is presented and is used to derive a representative analytic approximation for differenced Doppler measurements. The analytical models are tasked to investigate the ability of these data types to estimate spacecraft geocentric angular motion, Deep Space Network station oscillator (clock/frequency) offsets, and signal-path calibration errors over a period of a few days, in the presence of systematic station location and transmission media calibration errors. Quantitative results indicate that a few differenced Doppler plus ranging passes yield angular position estimates with a precision on the order of 0.1 to 0.4 microrad, and angular rate precision on the order of 10 to 25 x 10(exp -12) rad/sec, assuming no a priori information on the coordinate parameters. Sensitivity analyses suggest that troposphere zenith delay calibration error is the dominant systematic error source in most of the tracking scenarios investigated; as expected, the differenced Doppler data were found to be much more sensitive to troposphere calibration errors than differenced range. By comparison, results computed using wideband and narrowband (delta)VLBI under similar circumstances yielded angular precisions of 0.07 to 0.4 microrad, and angular rate precisions of 0.5 to 1.0 x 10(exp -12) rad/sec.

  8. Effects of model error on control of large flexible space antenna with comparisons of decoupled and linear quadratic regulator control procedures

    NASA Technical Reports Server (NTRS)

    Hamer, H. A.; Johnson, K. G.

    1986-01-01

    An analysis was performed to determine the effects of model error on the control of a large flexible space antenna. Control was achieved by employing two three-axis control-moment gyros (CMG's) located on the antenna column. State variables were estimated by including an observer in the control loop that used attitude and attitude-rate sensors on the column. Errors were assumed to exist in the individual model parameters: modal frequency, modal damping, mode slope (control-influence coefficients), and moment of inertia. Their effects on control-system performance were analyzed either for (1) nulling initial disturbances in the rigid-body modes, or (2) nulling initial disturbances in the first three flexible modes. The study includes the effects on stability, time to null, and control requirements (defined as maximum torque and total momentum), as well as on the accuracy of obtaining initial estimates of the disturbances. The effects on the transients of the undisturbed modes are also included. The results, which are compared for decoupled and linear quadratic regulator (LQR) control procedures, are shown in tabular form, parametric plots, and as sample time histories of modal-amplitude and control responses. Results of the analysis showed that the effects of model errors on the control-system performance were generally comparable for both control procedures. The effect of mode-slope error was the most serious of all model errors.
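
    For the LQR branch of the comparison, the gain follows from the continuous algebraic Riccati equation. The sketch below sets up a single toy flexible mode, solves for the gain, and perturbs the control-influence (mode-slope) term to mimic a model error; all numbers are illustrative, not the antenna model from the study.

    ```python
    import numpy as np
    from scipy.linalg import solve_continuous_are

    # Toy one-mode flexible structure: states [modal amplitude, modal rate];
    # the natural frequency w and damping z are placeholders.
    w, z = 1.5, 0.005
    A = np.array([[0.0, 1.0], [-w**2, -2*z*w]])
    B = np.array([[0.0], [1.0]])            # control-influence column (assumed)

    Q = np.diag([10.0, 1.0])                # state weighting
    R = np.array([[0.1]])                   # control-effort weighting

    P = solve_continuous_are(A, B, Q, R)
    K = np.linalg.solve(R, B.T @ P)         # LQR gain: u = -K x
    print("LQR gain:", K)

    # A mode-slope (control-influence) error can be emulated by perturbing B
    # before closing the loop, then checking the closed-loop eigenvalues:
    B_err = 1.2 * B                         # 20% mode-slope error
    print("closed-loop eigs:", np.linalg.eigvals(A - B_err @ K))
    ```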

  9. Bowhead whale localization using asynchronous hydrophones in the Chukchi Sea.

    PubMed

    Warner, Graham A; Dosso, Stan E; Hannay, David E; Dettmer, Jan

    2016-07-01

    This paper estimates bowhead whale locations and uncertainties using non-linear Bayesian inversion of their modally-dispersed calls recorded on asynchronous recorders in the Chukchi Sea, Alaska. Bowhead calls were recorded on a cluster of 7 asynchronous ocean-bottom hydrophones that were separated by 0.5-9.2 km. A warping time-frequency analysis is used to extract relative mode arrival times as a function of frequency for nine frequency-modulated whale calls that dispersed in the shallow water environment. Each call was recorded on multiple hydrophones and the mode arrival times are inverted for: the whale location in the horizontal plane, source instantaneous frequency (IF), water sound-speed profile, seabed geoacoustic parameters, relative recorder clock drifts, and residual error standard deviations, all with estimated uncertainties. A simulation study shows that accurate prior environmental knowledge is not required for accurate localization as long as the inversion treats the environment as unknown. Joint inversion of multiple recorded calls is shown to substantially reduce uncertainties in location, source IF, and relative clock drift. Whale location uncertainties are estimated to be 30-160 m and relative clock drift uncertainties are 3-26 ms.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kolaczkowski, A.M.; Lambright, J.A.; Ferrell, W.L.

    This document contains the internal event initiated accident sequence analyses for Peach Bottom, Unit 2; one of the reference plants being examined as part of the NUREG-1150 effort by the Nuclear Regulatory Commission. NUREG-1150 will document the risk of a selected group of nuclear power plants. As part of that work, this report contains the overall core damage frequency estimate for Peach Bottom, Unit 2, and the accompanying plant damage state frequencies. Sensitivity and uncertainty analyses provided additional insights regarding the dominant contributors to the Peach Bottom core damage frequency estimate. The mean core damage frequency at Peach Bottom was calculated to be 8.2E-6. Station blackout type accidents (loss of all ac power) were found to dominate the overall results. Anticipated Transient Without Scram accidents were also found to be non-negligible contributors. The numerical results are largely driven by common mode failure probability estimates and, to some extent, human error. Because of significant data and analysis uncertainties in these two areas (important, for instance, to the most dominant scenario in this study), it is recommended that the results of the uncertainty and sensitivity analyses be considered before any actions are taken based on this analysis.
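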

  11. Variation of ultrasound image lateral spectrum with assumed speed of sound and true scatterer density.

    PubMed

    Gyöngy, Miklós; Kollár, Sára

    2015-02-01

    One method of estimating sound speed in diagnostic ultrasound imaging consists of choosing the speed of sound that generates the sharpest image, as evaluated by the lateral frequency spectrum of the squared B-mode image. In the current work, simulated and experimental data on a typical (47 mm aperture, 3.3-10.0 MHz response) linear array transducer are used to investigate the accuracy of this method. A range of candidate speeds of sound (1240-1740 m/s) was used, with a true speed of sound of 1490 m/s in simulations and 1488 m/s in experiments. Simulations of single point scatterers and two interfering point scatterers at various locations with respect to each other gave estimate errors of 0.0-2.0%. Simulations and experiments of scatterer distributions with a mean scatterer spacing of at least 0.5 mm gave estimate errors of 0.1-4.0%. In the case of lower scatterer spacing, the speed of sound estimates become unreliable due to a decrease in contrast of the sharpness measure between different candidate speeds of sound. This suggests that in estimating speed of sound in tissue, the region of interest should be dominated by a few, sparsely spaced scatterers. Conversely, the decreasing sensitivity of the sharpness measure to speed of sound errors for higher scatterer concentrations suggests a potential method for estimating mean scatterer spacing. Copyright © 2014 Elsevier B.V. All rights reserved.
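
    A simplified stand-in for the sharpness criterion is sketched below: for each candidate speed of sound, beamform the image and score the lateral frequency content of the squared B-mode; the candidate with the highest score wins. The spectral-centroid score and the `beamform` function are assumptions for illustration, not the paper's exact measure.

    ```python
    import numpy as np

    def lateral_sharpness(bmode):
        """Sharpness score from the lateral spectrum of the squared image.

        bmode: 2-D array, axial samples x lateral lines. Returns the
        spectral centroid of the mean lateral spectrum (higher = sharper).
        """
        spec = np.abs(np.fft.rfft(bmode ** 2, axis=1)).mean(axis=0)
        spec /= spec.sum()
        k = np.arange(spec.size)
        return (k * spec).sum()

    # `beamform(rf_data, c)` is assumed to exist and return a B-mode image
    # reconstructed with candidate sound speed c (m/s):
    # candidates = np.arange(1240.0, 1741.0, 10.0)
    # best_c = max(candidates,
    #              key=lambda c: lateral_sharpness(beamform(rf_data, c)))
    ```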

  12. Contribution to the theory of propeller vibrations

    NASA Technical Reports Server (NTRS)

    Liebers, F

    1930-01-01

    This report presents a calculation of the torsional frequencies of revolving bars with allowance for the air forces, and a calculation of the flexural or bending frequencies of revolving straight or tapered bars in terms of the angular velocity of revolution. The calculation is made on the basis of Rayleigh's principle of variation. There is also a discussion of error estimation and the accuracy of the results. The author then applies the theory to screw propellers for airplanes and discusses the liability of propellers to damage through vibrations due to lack of uniform loading.

  13. A Dual Frequency Carrier Phase Error Difference Checking Algorithm for the GNSS Compass.

    PubMed

    Liu, Shuo; Zhang, Lei; Li, Jian

    2016-11-24

    The performance of the Global Navigation Satellite System (GNSS) compass is related to the quality of the carrier phase measurements, so handling carrier phase error properly is important for improving GNSS compass accuracy. In this work, we propose a dual frequency carrier phase error difference checking algorithm for the GNSS compass. The algorithm aims at eliminating large carrier phase errors in dual frequency double differenced carrier phase measurements according to the error difference between the two frequencies. The advantage of the proposed algorithm is that it needs no additional environment information and performs well against multiple large errors compared with previous research. The core of the proposed algorithm is removing the geometrical distance from the dual frequency carrier phase measurements, after which the carrier phase error is separated and detectable. We generate the Double Differenced Geometry-Free (DDGF) measurement according to the fact that carrier phase measurements at different frequencies contain the same geometrical distance. We then propose the DDGF detection to detect a large carrier phase error difference between the two frequencies, and analyze its theoretical performance. An open sky test, a man-made multipath test and an urban vehicle test were carried out to evaluate the performance of the proposed algorithm. The results show that the proposed DDGF detection is able to detect large errors in dual frequency carrier phase measurements by checking the error difference between the two frequencies. After the DDGF detection, the accuracy of the baseline vector in the GNSS compass is improved.
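
    The geometry-free idea can be sketched in a few lines: scaling each double-differenced carrier phase by its wavelength and subtracting cancels the shared geometric distance, so what remains is dominated by carrier phase error (plus a constant ambiguity bias, removed here with a median). The frequencies, units, and detection threshold are illustrative assumptions.

    ```python
    import numpy as np

    C = 299792458.0
    F1, F2 = 1575.42e6, 1227.60e6            # assumed GPS L1/L2 frequencies
    L1_WAVE, L2_WAVE = C / F1, C / F2

    def ddgf(dd_phase_l1_cycles, dd_phase_l2_cycles):
        """Double-differenced geometry-free combination, in metres.

        Both frequencies share the same geometric range, so the combination
        cancels geometry and leaves carrier phase error plus a constant
        ambiguity bias (and a small residual ionospheric term).
        """
        return dd_phase_l1_cycles * L1_WAVE - dd_phase_l2_cycles * L2_WAVE

    def flag_large_errors(dd_l1, dd_l2, threshold_m=0.05):
        """Flag epochs whose DDGF residual exceeds a placeholder threshold;
        the median removes the constant ambiguity bias."""
        g = ddgf(np.asarray(dd_l1), np.asarray(dd_l2))
        return np.abs(g - np.median(g)) > threshold_m
    ```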

  14. Improved efficiency of maximum likelihood analysis of time series with temporally correlated errors

    USGS Publications Warehouse

    Langbein, John O.

    2017-01-01

    Most time series of geophysical phenomena have temporally correlated errors. From these measurements, various parameters are estimated. For instance, from geodetic measurements of positions, the rates and changes in rates are often estimated and are used to model tectonic processes. Along with the estimates of the size of the parameters, the error in these parameters needs to be assessed. If temporal correlations are not taken into account, or each observation is assumed to be independent, it is likely that any estimate of the error of these parameters will be too low and the estimated value of the parameter will be biased. Inclusion of better estimates of uncertainties is limited by several factors, including selection of the correct model for the background noise and the computational requirements to estimate the parameters of the selected noise model for cases where there are numerous observations. Here, I address the second problem of computational efficiency using maximum likelihood estimates (MLE). Most geophysical time series have background noise processes that can be represented as a combination of white and power-law noise, 1/f^α, with frequency f. With missing data, standard spectral techniques involving FFTs are not appropriate. Instead, time domain techniques involving construction and inversion of large data covariance matrices are employed. Bos et al. (J Geod, 2013. doi:10.1007/s00190-012-0605-0) demonstrate one technique that substantially increases the efficiency of the MLE methods, yet is only an approximate solution for power-law indices >1.0 since they require the data covariance matrix to be Toeplitz. That restriction can be removed by simply forming a data filter that adds noise processes rather than combining them in quadrature. Consequently, the inversion of the data covariance matrix is simplified yet provides robust results for a wider range of power-law indices.

  15. Improved efficiency of maximum likelihood analysis of time series with temporally correlated errors

    NASA Astrophysics Data System (ADS)

    Langbein, John

    2017-08-01

    Most time series of geophysical phenomena have temporally correlated errors. From these measurements, various parameters are estimated. For instance, from geodetic measurements of positions, the rates and changes in rates are often estimated and are used to model tectonic processes. Along with the estimates of the size of the parameters, the error in these parameters needs to be assessed. If temporal correlations are not taken into account, or each observation is assumed to be independent, it is likely that any estimate of the error of these parameters will be too low and the estimated value of the parameter will be biased. Inclusion of better estimates of uncertainties is limited by several factors, including selection of the correct model for the background noise and the computational requirements to estimate the parameters of the selected noise model for cases where there are numerous observations. Here, I address the second problem of computational efficiency using maximum likelihood estimates (MLE). Most geophysical time series have background noise processes that can be represented as a combination of white and power-law noise, 1/f^α, with frequency f. With missing data, standard spectral techniques involving FFTs are not appropriate. Instead, time domain techniques involving construction and inversion of large data covariance matrices are employed. Bos et al. (J Geod, 2013. doi: 10.1007/s00190-012-0605-0) demonstrate one technique that substantially increases the efficiency of the MLE methods, yet is only an approximate solution for power-law indices >1.0 since they require the data covariance matrix to be Toeplitz. That restriction can be removed by simply forming a data filter that adds noise processes rather than combining them in quadrature. Consequently, the inversion of the data covariance matrix is simplified yet provides robust results for a wider range of power-law indices.
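
    A sketch of the filter-based view described above: power-law noise is white noise passed through a fractional-differencing filter, the implied data covariance follows from the filter matrix, and a combined white-plus-power-law model is formed by adding the impulse responses into one filter rather than adding covariances in quadrature. The amplitudes and spectral index are placeholders.

    ```python
    import numpy as np
    from scipy.linalg import toeplitz

    def powerlaw_filter(alpha, n):
        """Fractional-differencing impulse response (Hosking/Kasdin recursion):
        filtering unit white noise with h yields a 1/f^alpha power spectrum."""
        h = np.empty(n)
        h[0] = 1.0
        for k in range(1, n):
            h[k] = h[k - 1] * (k - 1.0 + alpha / 2.0) / k
        return h

    n, alpha = 500, 1.4                      # placeholder length and index
    h = powerlaw_filter(alpha, n)

    # Lower-triangular filter matrix F maps white innovations w to noise
    # x = F w, so the implied unit-amplitude data covariance is F F^T.
    F = toeplitz(h, np.zeros(n))
    cov_powerlaw = F @ F.T

    # Combined white + power-law model as a single filter: add the impulse
    # responses (a delta for the white part) instead of adding covariances
    # in quadrature; the amplitudes are placeholders.
    sigma_white, sigma_pl = 1.0, 0.5
    h_combined = sigma_pl * h.copy()
    h_combined[0] += sigma_white
    ```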

  16. NEW STUDIES OF URBAN FLOOD FREQUENCY IN THE SOUTHEASTERN UNITED STATES.

    USGS Publications Warehouse

    Sauer, Vernon B.

    1986-01-01

    Five reports dealing with flood magnitude and frequency in urban areas in the southeastern United States have been published during the past 2 years by the U. S. Geological Survey (USGS). These reports are based on data collected in Tampa and Tallahassee, Florida; Atlanta, Georgia; and several cities in Alabama and Tennessee. Each report contains regression equations useful for estimating flood peaks for selected recurrence intervals at ungauged urban sites. A nationwide study of urban flood characteristics by the USGS published in 1983 contains equations for estimating urban peak discharges for ungauged sites. At the time that the nationwide study was conducted, data from only 35 sites in the southeastern United States were available. The five new reports contain data for 88 additional sites. These new data show that the seven-parameter estimating equations developed in the nationwide study are unbiased and have prediction errors less than those described in the nationwide report.

  17. Model-based spectral estimation of Doppler signals using parallel genetic algorithms.

    PubMed

    Solano González, J; Rodríguez Vázquez, K; García Nocetti, D F

    2000-05-01

    Conventional spectral analysis methods use a fast Fourier transform (FFT) on consecutive or overlapping windowed data segments. For Doppler ultrasound signals, this approach suffers from inadequate frequency resolution due to the segment duration and the non-stationary character of the signals. Parametric or model-based estimators can give significant improvements in time-frequency resolution at the expense of higher computational complexity. This work describes an approach which implements, in real time, a parametric spectral estimation method using genetic algorithms (GAs) to find the optimum set of parameters for the adaptive filter that minimises the error function. The aim is to reduce the computational complexity of the conventional algorithm by exploiting the simplicity and parallelism of GAs. This allows the implementation of higher order filters, increasing the spectral resolution, and opens a greater scope for using more complex methods.
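
    A toy version of the idea, fitting the coefficients of a low-order adaptive (AR) filter by minimising the squared prediction error with a small real-coded GA; the signal, GA settings, and filter order are all illustrative stand-ins, not the paper's design.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Synthetic stand-in "Doppler" signal from a known AR(2) process.
    true_a = np.array([1.2, -0.6])
    x = np.zeros(512)
    for t in range(2, x.size):
        x[t] = true_a @ x[t-2:t][::-1] + rng.normal(0, 0.1)

    def prediction_error(a):
        """Sum of squared one-step prediction errors of an AR(2) filter."""
        pred = a[0] * x[1:-1] + a[1] * x[:-2]
        return np.sum((x[2:] - pred) ** 2)

    # A tiny real-coded GA: truncation selection, blend crossover,
    # Gaussian mutation, and elitism; all settings are illustrative.
    pop = rng.uniform(-2, 2, size=(60, 2))
    for gen in range(80):
        fitness = np.array([prediction_error(ind) for ind in pop])
        parents = pop[np.argsort(fitness)[:20]]          # keep the best third
        ma = parents[rng.integers(0, 20, 60)]
        pa = parents[rng.integers(0, 20, 60)]
        w = rng.random((60, 1))
        pop = w * ma + (1 - w) * pa + rng.normal(0, 0.05, (60, 2))
        pop[0] = parents[0]                              # elitism

    print("estimated AR coefficients:", pop[0], "true:", true_a)
    ```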

  18. Large-scale frequency- and time-domain quantum entanglement over the optical frequency comb (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Pfister, Olivier

    2017-05-01

    When it comes to practical quantum computing, the two main challenges are circumventing decoherence (devastating quantum errors due to interactions with the environmental bath) and achieving scalability (as many qubits as needed for a real-life, game-changing computation). We show that using, in lieu of qubits, the "qumodes" represented by the resonant fields of the quantum optical frequency comb of an optical parametric oscillator allows one to create bona fide, large scale quantum computing processors, pre-entangled in a cluster state. We detail our recent demonstration of 60-qumode entanglement (out of an estimated 3000) and present an extension to combining this frequency-tagged with time-tagged entanglement, in order to generate an arbitrarily large, universal quantum computing processor.

  19. On Short-Time Estimation of Vocal Tract Length from Formant Frequencies

    PubMed Central

    Lammert, Adam C.; Narayanan, Shrikanth S.

    2015-01-01

    Vocal tract length is highly variable across speakers and determines many aspects of the acoustic speech signal, making it an essential parameter to consider for explaining behavioral variability. A method for accurate estimation of vocal tract length from formant frequencies would afford normalization of interspeaker variability and facilitate acoustic comparisons across speakers. A framework for considering estimation methods is developed from the basic principles of vocal tract acoustics, and an estimation method is proposed that follows naturally from this framework. The proposed method is evaluated using acoustic characteristics of simulated vocal tracts ranging from 14 to 19 cm in length, as well as real-time magnetic resonance imaging data with synchronous audio from five speakers whose vocal tracts range from 14.5 to 18.0 cm in length. Evaluations show improvements in accuracy over previously proposed methods, with 0.631 and 1.277 cm root mean square error on simulated and human speech data, respectively. Empirical results show that the effectiveness of the proposed method is based on emphasizing higher formant frequencies, which seem less affected by speech articulation. Theoretical predictions of formant sensitivity reinforce this empirical finding. Moreover, theoretical insights are explained regarding the reason for differences in formant sensitivity. PMID:26177102
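
    Under the uniform-tube model that underlies such estimators, each formant yields a length estimate via Fn = (2n - 1)c / 4L, and the per-formant estimates can be averaged with more weight on the higher formants. The weighting scheme below is an illustrative choice, not the published method.

    ```python
    import numpy as np

    C = 35000.0    # approximate speed of sound in the vocal tract, cm/s

    def vtl_from_formants(formants_hz, weights=None):
        """Estimate vocal tract length from formants via the uniform-tube
        (quarter-wavelength) model: Fn = (2n - 1) * c / (4 * L).

        Weighting higher formants more heavily reflects the finding that
        they are less affected by articulation; the default weights here
        are an illustrative choice.
        """
        f = np.asarray(formants_hz, dtype=float)
        n = np.arange(1, f.size + 1)
        lengths = (2 * n - 1) * C / (4.0 * f)   # per-formant length estimates
        if weights is None:
            weights = n.astype(float)           # emphasize higher formants
        return np.average(lengths, weights=weights)

    # e.g. F1..F4 of a neutral vowel from a uniform 17.5 cm tube:
    print(vtl_from_formants([500.0, 1500.0, 2500.0, 3500.0]))  # ~17.5 cm
    ```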

  20. Improving patient safety through quality assurance.

    PubMed

    Raab, Stephen S

    2006-05-01

    Anatomic pathology laboratories use several quality assurance tools to detect errors and to improve patient safety. To review some of the anatomic pathology laboratory patient safety quality assurance practices. Different standards and measures in anatomic pathology quality assurance and patient safety were reviewed. Frequency of anatomic pathology laboratory error, variability in the use of specific quality assurance practices, and use of data for error reduction initiatives. Anatomic pathology error frequencies vary according to the detection method used. Based on secondary review, a College of American Pathologists Q-Probes study showed that the mean laboratory error frequency was 6.7%. A College of American Pathologists Q-Tracks study measuring frozen section discrepancy found that laboratories improved the longer they monitored and shared data. There is a lack of standardization across laboratories even for governmentally mandated quality assurance practices, such as cytologic-histologic correlation. The National Institutes of Health funded a consortium of laboratories to benchmark laboratory error frequencies, perform root cause analysis, and design error reduction initiatives, using quality assurance data. Based on the cytologic-histologic correlation process, these laboratories found an aggregate nongynecologic error frequency of 10.8%. Based on gynecologic error data, the laboratory at my institution used Toyota production system processes to lower gynecologic error frequencies and to improve Papanicolaou test metrics. Laboratory quality assurance practices have been used to track error rates, and laboratories are starting to use these data for error reduction initiatives.

  1. Automatic assessment of average diaphragm motion trajectory from 4DCT images through machine learning.

    PubMed

    Li, Guang; Wei, Jie; Huang, Hailiang; Gaebler, Carl Philipp; Yuan, Amy; Deasy, Joseph O

    2015-12-01

    To automatically estimate the average diaphragm motion trajectory (ADMT) based on four-dimensional computed tomography (4DCT), facilitating clinical assessment of respiratory motion and motion variation and retrospective motion study, we have developed an effective motion extraction approach and a machine-learning-based algorithm to estimate the ADMT. Eleven patients with 22 sets of 4DCT images (4DCT1 at simulation and 4DCT2 at treatment) were studied. After automatically segmenting the lungs, the differential volume-per-slice (dVPS) curves of the left and right lungs were calculated as a function of slice number for each phase with respect to full exhalation. After a 5-slice moving average was performed, the discrete cosine transform (DCT) was applied to analyze the dVPS curves in the frequency domain. The dimensionality of the spectrum data was reduced by using the several lowest frequency coefficients (f_v) to account for most of the spectrum energy (Σ f_v^2). The multiple linear regression (MLR) method was then applied to determine the weights of these frequencies by fitting the ground truth, the measured ADMT, represented by three pivot points of the diaphragm on each side. The 'leave-one-out' cross validation method was employed to analyze the statistical performance of the prediction results in three image sets: 4DCT1, 4DCT2, and 4DCT1 + 4DCT2. The seven lowest frequencies in the DCT domain were found to be sufficient to approximate the patient dVPS curves (R = 91%-96% in the MLR fitting). The mean error in the predicted ADMT using the leave-one-out method was 0.3 ± 1.9 mm for the left-side diaphragm and 0.0 ± 1.4 mm for the right-side diaphragm. The prediction error is lower in 4DCT2 than in 4DCT1, and is lowest in 4DCT1 and 4DCT2 combined. This frequency-analysis-based machine learning technique predicts the ADMT automatically with an acceptable error (0.2 ± 1.6 mm). The volumetric approach is not affected by the presence of lung tumors, providing an automatic, robust tool to evaluate diaphragm motion.
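
    The feature pipeline described above (5-slice moving average, DCT, seven lowest coefficients, multiple linear regression) can be sketched as follows; the curves and target values are random stand-ins for real dVPS data and measured ADMT.

    ```python
    import numpy as np
    from scipy.fft import dct
    from sklearn.linear_model import LinearRegression

    N_COEFF = 7          # number of low DCT frequencies retained (as above)

    def dvps_features(dvps_curve):
        """Smooth a differential volume-per-slice curve with a 5-slice
        moving average, then keep the lowest DCT coefficients as features."""
        smoothed = np.convolve(dvps_curve, np.ones(5) / 5.0, mode="valid")
        return dct(smoothed, norm="ortho")[:N_COEFF]

    # Hypothetical training data: one dVPS curve per phase/lung and the
    # measured diaphragm displacement (mm) for that curve as ground truth.
    rng = np.random.default_rng(1)
    curves = rng.random((40, 120))                 # stand-in dVPS curves
    targets = rng.normal(10.0, 3.0, size=40)       # stand-in ADMT values

    X = np.array([dvps_features(c) for c in curves])
    model = LinearRegression().fit(X, targets)     # MLR over DCT coefficients
    print("fitted R^2:", model.score(X, targets))
    ```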

  2. Improving streamflow estimates through the use of LANDSAT. [Wisconsin and Pecatonica-Sugar River basins

    NASA Technical Reports Server (NTRS)

    Allord, G. J. (Principal Investigator); Scarpace, F. L.

    1981-01-01

    Estimates of low flow and flood frequency in several southwestern Wisconsin basins were improved by determining land cover from LANDSAT imagery. With the use of estimates of land cover in multiple-regression techniques, the standard errors of estimate (SE) for the least annual 7-day low flow at 2- and 10-year recurrence intervals at ungaged sites were each lowered by 9%. The SEs of flood frequency in the 'Driftless Area' of Wisconsin for 10-, 50-, and 100-year recurrence intervals were lowered by 14%. Four of nine basin characteristics determined from satellite imagery were significant variables in the multiple-regression techniques, whereas only 1 of the 12 characteristics determined from topographic maps was significant. The percentages of land cover categories in each basin were determined by merging basin boundaries, digitized from quadrangles, with a classified LANDSAT scene. Both the basin-boundary X-Y polygon coordinates and the satellite coordinates were converted to latitude-longitude for merging compatibility.

  3. Data Quality Control Tools Applied to Seismo-Acoustic Arrays in Korea

    NASA Astrophysics Data System (ADS)

    Park, J.; Hayward, C.; Stump, B. W.

    2017-12-01

    We assess data quality (data gaps, seismometer orientation, timing errors, noise levels, and coherence between co-located sensors) for seismic and infrasound data in South Korea using six seismo-acoustic arrays, BRDAR, CHNAR, KSGAR, KMPAR, TJIAR, and YPDAR, cooperatively operated by Southern Methodist University and the Korea Institute for Geosciences and Mineral Resources. Timing errors associated with seismometers are identified from estimated changes in instrument orientation, calculated from RMS errors between the reference array and each array seismometer using waveforms filtered from 0.1 to 0.35 Hz. Noise levels of seismic and infrasound data are analyzed to investigate local environmental effects and seasonal noise variation. In order to examine the spectral properties of the noise, the waveforms are analyzed using Welch's method (Welch, 1967), which produces a single power spectral estimate from an average of spectra taken at regular intervals over a specific time period. This analysis quantifies the range of noise conditions found at each of the arrays over the given time period. We take advantage of the fact that the infrasound sensors are co-located or closely located to one another, which allows for a direct comparison of sensors, following the method of Ringler et al. (2010). The power level differences between two sensors at the same array in the frequency band of interest are used to monitor temporal changes in data quality and instrument condition. A data quality factor is assigned to stations based on the average values of the temporal changes estimated in the frequency and time domains. These monitoring tools enable us to automatically assess technical issues related to the instruments and data quality at each seismo-acoustic array, as well as to investigate local environmental effects and seasonal variations in both seismic and infrasound data.
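
    A sketch of the two spectral steps named above: a Welch-averaged PSD per channel, and the band-limited power difference between co-located sensors used to monitor instrument condition. The sampling rate, segment length, and band edges are placeholder choices.

    ```python
    import numpy as np
    from scipy.signal import welch

    FS = 100.0    # sampling rate in Hz (placeholder for the array channels)

    def noise_psd(trace, fs=FS):
        """Averaged power spectral estimate via Welch's method: spectra of
        overlapping segments are averaged to reduce estimator variance."""
        freqs, pxx = welch(trace, fs=fs, nperseg=1024, noverlap=512)
        return freqs, 10.0 * np.log10(pxx)     # dB relative to unit amplitude

    def band_power_difference(trace_a, trace_b, fmin, fmax, fs=FS):
        """Mean PSD difference (dB) between two co-located sensors in a
        frequency band, the quantity tracked to monitor data quality."""
        f, pa = noise_psd(trace_a, fs)
        _, pb = noise_psd(trace_b, fs)
        band = (f >= fmin) & (f <= fmax)
        return np.mean(pa[band] - pb[band])
    ```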

  4. Quantitative susceptibility mapping: Report from the 2016 reconstruction challenge.

    PubMed

    Langkammer, Christian; Schweser, Ferdinand; Shmueli, Karin; Kames, Christian; Li, Xu; Guo, Li; Milovic, Carlos; Kim, Jinsuh; Wei, Hongjiang; Bredies, Kristian; Buch, Sagar; Guo, Yihao; Liu, Zhe; Meineke, Jakob; Rauscher, Alexander; Marques, José P; Bilgic, Berkin

    2018-03-01

    The aim of the 2016 quantitative susceptibility mapping (QSM) reconstruction challenge was to test the ability of various QSM algorithms to recover the underlying susceptibility from phase data faithfully. Gradient-echo images of a healthy volunteer were acquired at 3T in a single orientation with 1.06 mm isotropic resolution. A reference susceptibility map was provided, which was computed using the susceptibility tensor imaging algorithm on data acquired at 12 head orientations. Susceptibility maps calculated from the single-orientation data were compared against the reference susceptibility map. Deviations were quantified using the following metrics: root mean squared error (RMSE), structure similarity index (SSIM), high-frequency error norm (HFEN), and the error in selected white and gray matter regions. Twenty-seven submissions were evaluated. Most of the best scoring approaches estimated the spatial frequency content in the ill-conditioned domain of the dipole kernel using compressed sensing strategies. The top 10 maps in each category had similar error metrics but substantially different visual appearance. Because QSM algorithms were optimized to minimize error metrics, the resulting susceptibility maps suffered from over-smoothing and conspicuity loss in fine features such as vessels. As such, the challenge highlighted the need for better numerical image quality criteria. Magn Reson Med 79:1661-1673, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
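
    The error metrics used in the challenge can be sketched directly; RMSE and HFEN are expressed as percentages of the reference norm, and the Laplacian-of-Gaussian width in HFEN is a conventional assumption rather than a value stated above.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_laplace
    from skimage.metrics import structural_similarity

    def rmse_percent(recon, ref):
        """RMSE normalized by the reference norm, in percent."""
        return 100.0 * np.linalg.norm(recon - ref) / np.linalg.norm(ref)

    def hfen_percent(recon, ref, sigma=1.5):
        """High-frequency error norm: compare Laplacian-of-Gaussian filtered
        volumes so that errors in fine structure dominate the score.
        sigma = 1.5 voxels is a conventional choice, assumed here."""
        log_r = gaussian_laplace(recon, sigma)
        log_t = gaussian_laplace(ref, sigma)
        return 100.0 * np.linalg.norm(log_r - log_t) / np.linalg.norm(log_t)

    def ssim_score(recon, ref):
        """Structure similarity index over the susceptibility volume."""
        return structural_similarity(recon, ref,
                                     data_range=ref.max() - ref.min())
    ```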

  5. Lock-in amplifier error prediction and correction in frequency sweep measurements.

    PubMed

    Sonnaillon, Maximiliano Osvaldo; Bonetto, Fabian Jose

    2007-01-01

    This article proposes an analytical algorithm for predicting errors in lock-in amplifiers (LIAs) working with time-varying reference frequency. Furthermore, a simple method for correcting such errors is presented. The reference frequency can be swept in order to measure the frequency response of a system within a given spectrum. The continuous variation of the reference frequency produces a measurement error that depends on three factors: the sweep speed, the LIA low-pass filters, and the frequency response of the measured system. The proposed error prediction algorithm is based on the final value theorem of the Laplace transform. The correction method uses a double-sweep measurement. A mathematical analysis is presented and validated with computational simulations and experimental measurements.

  6. Structural Validation of a French Food Frequency Questionnaire of 94 Items.

    PubMed

    Gazan, Rozenn; Vieux, Florent; Darmon, Nicole; Maillot, Matthieu

    2017-01-01

    Food frequency questionnaires (FFQs) are used to estimate the usual food and nutrient intakes over a period of time. Such estimates can suffer from measurement errors, either due to bias induced by respondents' answers or to errors induced by the structure of the questionnaire (e.g., using a limited number of food items and an aggregated food database with average portion sizes). The "structural validation" presented in this study aims to isolate and quantify the impact of the inherent structure of an FFQ on the estimation of food and nutrient intakes, independently of respondents' perception of the questionnaire. A semi-quantitative FFQ (n = 94 items, including 50 items with questions on portion sizes) and an associated aggregated food composition database (named the item-composition database) were developed, based on the self-reported weekly dietary records of 1918 adults (18-79 years old) in the French Individual and National Dietary Survey 2 (INCA2), and the French CIQUAL 2013 food-composition database of all the foods (n = 1342 foods) declared as consumed in the population. Reference intakes of foods ("REF_FOOD") and nutrients ("REF_NUT") were calculated for each adult using the food-composition database and the amounts of foods self-reported in his/her dietary record. Then, answers to the FFQ were simulated for each adult based on his/her self-reported dietary record. "FFQ_FOOD" and "FFQ_NUT" intakes were estimated using the simulated answers and the item-composition database. Measurement errors (in %), Spearman correlations and cross-classification were used to compare "REF_FOOD" with "FFQ_FOOD" and "REF_NUT" with "FFQ_NUT". Compared to "REF_NUT", "FFQ_NUT" total quantity and total energy intake were underestimated on average by 198 g/day and 666 kJ/day, respectively. "FFQ_FOOD" intakes were well estimated for starches, underestimated for most of the subgroups, and overestimated for some subgroups, in particular vegetables. Underestimation was mainly due to the use of portion sizes, leading to an underestimation of most nutrients, except free sugars, which were overestimated. This "structural validation", which simulates answers to an FFQ based on a reference dietary survey, is innovative and pragmatic and allows quantification of the error induced by the simplification of the collection method.

  7. Bias analysis applied to Agricultural Health Study publications to estimate non-random sources of uncertainty.

    PubMed

    Lash, Timothy L

    2007-11-26

    The associations of pesticide exposure with disease outcomes are estimated without the benefit of a randomized design. For this reason and others, these studies are susceptible to systematic errors. I analyzed studies of the associations between alachlor and glyphosate exposure and cancer incidence, both derived from the Agricultural Health Study cohort, to quantify the bias and uncertainty potentially attributable to systematic error. For each study, I identified the prominent result and important sources of systematic error that might affect it. I assigned probability distributions to the bias parameters that allow quantification of the bias, drew a value at random from each assigned distribution, and calculated the estimate of effect adjusted for the biases. By repeating the draw and adjustment process over multiple iterations, I generated a frequency distribution of adjusted results, from which I obtained a point estimate and simulation interval. These methods were applied without access to the primary record-level dataset. The conventional estimates of effect associating alachlor and glyphosate exposure with cancer incidence were likely biased away from the null and understated the uncertainty by quantifying only random error. For example, the conventional p-value for a test of trend in the alachlor study equaled 0.02, whereas fewer than 20% of the bias analysis iterations yielded a p-value of 0.02 or lower. Similarly, the conventional fully-adjusted result associating glyphosate exposure with multiple myeloma equaled 2.6 with 95% confidence interval of 0.7 to 9.4. The frequency distribution generated by the bias analysis yielded a median hazard ratio equal to 1.5 with 95% simulation interval of 0.4 to 8.9, which was 66% wider than the conventional interval. Bias analysis provides a more complete picture of true uncertainty than conventional frequentist statistical analysis accompanied by a qualitative description of study limitations. The latter approach is likely to lead to overconfidence regarding the potential for causal associations, whereas the former safeguards against such overinterpretations. Furthermore, such analyses, once programmed, allow rapid implementation of alternative assignments of probability distributions to the bias parameters, and so elevate the plane of discussion regarding study bias from characterizing studies as "valid" or "invalid" to a critical and quantitative discussion of sources of uncertainty.
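
    The draw-adjust-repeat procedure can be sketched with a single multiplicative bias parameter; the conventional hazard ratio, standard error, and bias distribution below are hypothetical stand-ins chosen only to show the mechanics of producing a median and simulation interval.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    N_ITER = 100_000

    # Conventional result to be adjusted (hypothetical numbers): hazard
    # ratio and its standard error on the log scale.
    hr_conventional, se_log_hr = 2.6, 0.66

    # Assign a probability distribution to one bias parameter, e.g. a
    # multiplicative confounding bias on the ratio scale (assumed lognormal).
    bias_factor = rng.lognormal(mean=np.log(1.3), sigma=0.2, size=N_ITER)

    # Adjust for the bias at each iteration, then add back random error.
    hr_adjusted = hr_conventional / bias_factor
    log_hr_total = np.log(hr_adjusted) + rng.normal(0.0, se_log_hr, N_ITER)
    hr_total = np.exp(log_hr_total)

    median = np.median(hr_total)
    lo, hi = np.percentile(hr_total, [2.5, 97.5])
    print(f"median HR {median:.2f}, 95% simulation interval ({lo:.1f}, {hi:.1f})")
    ```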

  8. Incorrect evaluation of the frequency of malnutrition and of its screening in hospitalized children by health care professionals.

    PubMed

    Restier, Lioara; Duclos, Antoine; Jarri, Laura; Touzet, Sandrine; Denis, Angelique; Occelli, Pauline; Kassai-Koupai, Behrouz; Lachaux, Alain; Loras-Duclaux, Irene; Colin, Cyrille; Peretti, Noel

    2015-10-01

    Malnutrition screening is essential to detect and treat patients with stunting or wasting. The aim was to evaluate health care professionals' subjective perception of the frequency of malnutrition and of its assessment. In a paediatric university hospital, a cross-sectional survey of health care professionals was conducted using a Likert-scale approach, and the responses were compared with objective measurements, on a given day, of the frequency of malnutrition and of its screening. 279 health care professionals participated. The malnutrition rate, estimated versus measured, was 16.8% and 34.8%, respectively. Conversely, the estimated frequency of malnutrition screening versus the measured frequency was 80.6% versus 43.1%. Furthermore, the perception of health care professionals did not differ depending on their professional category or speciality. In conclusion, health care staff underestimate the prevalence of malnutrition in children by half and overestimate the frequency of appropriate screening practices for the detection of malnutrition. This unreliable perception may disrupt both the screening and the management of malnourished children. There is an urgent need to find the reasons behind these errors of subjective perception in order to develop appropriate educational training to remedy the situation. © 2015 John Wiley & Sons, Ltd.

  9. Method of estimating flood-frequency parameters for streams in Idaho

    USGS Publications Warehouse

    Kjelstrom, L.C.; Moffatt, R.L.

    1981-01-01

    Skew coefficients for the log-Pearson Type III distribution are generalized on the basis of some similarity of floods in the Snake River basin and other parts of Idaho. Generalized skew coefficients aid in shaping flood-frequency curves because skew coefficients computed from gaging stations having relatively short periods of peak-flow record can be unreliable. Generalized skew coefficients can be obtained for a gaging station from one of three maps in this report. The map to be used depends on whether (1) snowmelt floods are dominant (generally when more than 20 percent of the drainage area is above 6,000 feet altitude), (2) rainstorm floods are dominant (generally when the mean altitude is less than 3,000 feet), or (3) either snowmelt or rainstorm floods can be the annual maximum discharge. For the latter case, frequency curves constructed using separate arrays of each type of runoff can be combined into one curve, which, for some stations, is significantly different from the frequency curve constructed using only annual maximum discharges. For 269 gaging stations, flood-frequency curves that include the generalized skew coefficients in the computation of the log-Pearson Type III equation tend to fit the data better than previous analyses. Frequency curves for ungaged sites can be derived by estimating three statistics of the log-Pearson Type III distribution. The mean and standard deviation of the logarithms of annual maximum discharges are estimated by regression equations that use basin characteristics as independent variables. Skew coefficient estimates are the generalized skews. The log-Pearson Type III equation is then applied with the three estimated statistics to compute the discharge at selected exceedance probabilities. Standard errors at the 2-percent exceedance probability range from 41 to 90 percent. (USGS)
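
    Once the three statistics are in hand, the log-Pearson Type III quantile computation is mechanical: look up the frequency factor for the chosen exceedance probability and apply it in log space. The station statistics in the example are hypothetical.

    ```python
    from scipy.stats import pearson3

    def lp3_quantile(mean_log10, std_log10, skew, return_period_years):
        """Log-Pearson Type III flood quantile from the three fitted statistics.

        The frequency factor K is taken from the standardized Pearson III
        distribution at the chosen annual exceedance probability.
        """
        aep = 1.0 / return_period_years               # annual exceedance prob.
        k = pearson3(skew).ppf(1.0 - aep)             # frequency factor
        return 10.0 ** (mean_log10 + k * std_log10)

    # Hypothetical station statistics: mean/std of log10(peak cfs) from the
    # regression equations and a generalized skew read from the report maps.
    for t in (2, 10, 50, 100):
        print(t, round(lp3_quantile(3.2, 0.25, -0.1, t)))
    ```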

  10. Error Analysis of Clay-Rock Water Content Estimation with Broadband High-Frequency Electromagnetic Sensors—Air Gap Effect

    PubMed Central

    Bore, Thierry; Wagner, Norman; Delepine Lesoille, Sylvie; Taillade, Frederic; Six, Gonzague; Daout, Franck; Placko, Dominique

    2016-01-01

    Broadband electromagnetic frequency or time domain sensor techniques present high potential for quantitative water content monitoring in porous media. Prior to in situ application, the impact of the relationship between the broadband electromagnetic properties of the porous material (clay-rock) and the water content on the frequency or time domain sensor response must be understood. For this purpose, dielectric properties of intact clay-rock samples, experimentally determined in the frequency range from 1 MHz to 10 GHz, were used as input data in 3-D numerical frequency domain finite element field calculations to model the one-port broadband frequency or time domain transfer function for a three-rod sensor embedded in the clay-rock. The sensor response in terms of the reflection factor was analyzed in the time domain with classical travel time analysis in combination with an empirical model according to the Topp equation, as well as the theoretical Lichtenecker and Rother model (LRM), to estimate the volumetric water content. The mixture equation, considering the appropriate porosity of the investigated material, provides a practical and efficient approach for water content estimation based on classical travel time analysis with the onset method. The inflection method is not recommended for water content estimation in electrically dispersive and absorptive material. Moreover, the results clearly indicate that effects due to coupling of the sensor to the material cannot be neglected. Coupling problems caused by an air gap lead to dramatic effects on water content estimation, even for submillimeter gaps. Thus, the quantitative determination of the in situ water content requires careful sensor installation in order to achieve perfect probe-to-clay-rock coupling. PMID:27096865

  11. Error Analysis of Clay-Rock Water Content Estimation with Broadband High-Frequency Electromagnetic Sensors--Air Gap Effect.

    PubMed

    Bore, Thierry; Wagner, Norman; Lesoille, Sylvie Delepine; Taillade, Frederic; Six, Gonzague; Daout, Franck; Placko, Dominique

    2016-04-18

    Broadband electromagnetic frequency or time domain sensor techniques present high potential for quantitative water content monitoring in porous media. Prior to in situ application, the impact of the relationship between the broadband electromagnetic properties of the porous material (clay-rock) and the water content on the frequency or time domain sensor response must be understood. For this purpose, dielectric properties of intact clay-rock samples, experimentally determined in the frequency range from 1 MHz to 10 GHz, were used as input data in 3-D numerical frequency domain finite element field calculations to model the one-port broadband frequency or time domain transfer function for a three-rod sensor embedded in the clay-rock. The sensor response in terms of the reflection factor was analyzed in the time domain with classical travel time analysis in combination with an empirical model according to the Topp equation, as well as the theoretical Lichtenecker and Rother model (LRM), to estimate the volumetric water content. The mixture equation, considering the appropriate porosity of the investigated material, provides a practical and efficient approach for water content estimation based on classical travel time analysis with the onset method. The inflection method is not recommended for water content estimation in electrically dispersive and absorptive material. Moreover, the results clearly indicate that effects due to coupling of the sensor to the material cannot be neglected. Coupling problems caused by an air gap lead to dramatic effects on water content estimation, even for submillimeter gaps. Thus, the quantitative determination of the in situ water content requires careful sensor installation in order to achieve perfect probe-to-clay-rock coupling.
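
    The travel-time route to water content mentioned in both records can be sketched in two steps: convert a picked two-way travel time to apparent permittivity, then apply the empirical Topp polynomial. The probe length and travel time below are placeholder values.

    ```python
    C = 299792458.0   # free-space speed of light, m/s

    def apparent_permittivity(two_way_travel_time_s, probe_length_m):
        """Apparent relative permittivity from the two-way travel time along
        a probe of known length (onset-method travel-time picking assumed)."""
        v = 2.0 * probe_length_m / two_way_travel_time_s
        return (C / v) ** 2

    def topp_water_content(eps_a):
        """Topp et al. (1980) empirical polynomial: volumetric water content
        as a function of apparent permittivity."""
        return (-5.3e-2 + 2.92e-2 * eps_a
                - 5.5e-4 * eps_a ** 2 + 4.3e-6 * eps_a ** 3)

    # Example with placeholder numbers: a 0.2 m probe and 4.2 ns travel time.
    eps = apparent_permittivity(4.2e-9, 0.2)
    print(f"eps_a = {eps:.1f}, theta = {topp_water_content(eps):.3f}")
    ```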

  12. Energy-efficient quantum frequency estimation

    NASA Astrophysics Data System (ADS)

    Liuzzo-Scorpo, Pietro; Correa, Luis A.; Pollock, Felix A.; Górecka, Agnieszka; Modi, Kavan; Adesso, Gerardo

    2018-06-01

    The problem of estimating the frequency of a two-level atom in a noisy environment is studied. Our interest is to minimise both the energetic cost of the protocol and the statistical uncertainty of the estimate. In particular, we prepare a probe in a 'GHZ-diagonal' state by means of a sequence of qubit gates applied on an ensemble of n atoms in thermal equilibrium. Noise is introduced via a phenomenological time-non-local quantum master equation, which gives rise to a phase-covariant dissipative dynamics. After an interval of free evolution, the n-atom probe is globally measured at an interrogation time chosen to minimise the error bars of the final estimate. We model explicitly a measurement scheme which becomes optimal in a suitable parameter range, and are thus able to calculate the total energetic expenditure of the protocol. Interestingly, we observe that scaling up our multipartite entangled probes offers no precision enhancement when the total available energy E is limited. This is in stark contrast with standard frequency estimation, where larger probes (more sensitive but also more 'expensive' to prepare) are always preferred. Replacing E by the resource that places the most stringent limitation on each specific experimental setup would thus help to formulate more realistic metrological prescriptions.

  13. An estimator-predictor approach to PLL loop filter design

    NASA Technical Reports Server (NTRS)

    Statman, J. I.; Hurd, W. J.

    1986-01-01

    An approach to the design of digital phase locked loops (DPLLs), using estimation theory concepts in the selection of a loop filter, is presented. The key concept is that the DPLL closed-loop transfer function is decomposed into an estimator and a predictor. The estimator provides recursive estimates of phase, frequency, and higher order derivatives, while the predictor compensates for the transport lag inherent in the loop. This decomposition results in a straightforward loop filter design procedure, enabling use of techniques from optimal and sub-optimal estimation theory. A design example for a particular choice of estimator is presented, followed by analysis of the associated bandwidth, gain margin, and steady state errors caused by unmodeled dynamics. This approach is under consideration for the design of the Deep Space Network (DSN) Advanced Receiver Carrier DPLL.
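
    The decomposition can be pictured with a minimal sketch, not the DSN design itself: an alpha-beta style estimator recursively tracks phase and frequency from the phase-detector output, and a predictor extrapolates the phase estimate across the transport lag before it drives the oscillator. The gains and the two-interval lag below are illustrative choices.

        def make_loop_filter(alpha=0.2, beta=0.02, dt=1.0, lag_steps=2):
            """Estimator-predictor loop filter sketch (illustrative gains)."""
            state = {"phase": 0.0, "freq": 0.0}

            def update(measured_phase):
                # Estimator: recursive phase and frequency estimates from the
                # phase-detector output
                predicted = state["phase"] + state["freq"] * dt
                residual = measured_phase - predicted
                state["phase"] = predicted + alpha * residual
                state["freq"] += (beta / dt) * residual
                # Predictor: extrapolate across the loop transport lag
                return state["phase"] + state["freq"] * dt * lag_steps

            return update

        lf = make_loop_filter()
        ramp = [lf(0.01 * k) for k in range(200)]  # track a frequency ramp
        print(f"final tracking output: {ramp[-1]:.3f}")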

  14. Ionospheric Impacts on UHF Space Surveillance

    NASA Astrophysics Data System (ADS)

    Jones, J. C.

    2017-12-01

    Earth's atmosphere contains regions of ionized plasma produced by highly energetic solar radiation. This region of ionization is called the ionosphere, and it varies significantly with altitude, latitude, local solar time, season, and solar cycle. Significant ionization begins at about 100 km (E layer), with a peak in the ionization at about 300 km (F2 layer). Above the F2 layer, the atmosphere is mostly ionized, but the ion and electron densities are low because few neutral molecules are available for ionization, so the density decreases exponentially with height to well over 1000 km. The gradients of these variations in the ionosphere play a significant role in radio wave propagation: they induce variations in the index of refraction and cause some radio waves to refract. The amount of refraction depends on the magnitude and direction of the electron density gradient and the frequency of the radio wave. The refraction is significant at HF frequencies (3-30 MHz), with decreasing effects toward the UHF (300-3000 MHz) range. UHF is commonly used for tracking of space objects in low Earth orbit (LEO). While ionospheric refraction is small at UHF frequencies, it can cause errors in the range, azimuth angle, and elevation angle estimates of ground-based radars tracking space objects, and these measurement errors can in turn significantly degrade precise orbit determination. For radio waves transiting the ionosphere, it is important to understand and account for these effects. Using a sophisticated radio wave propagation tool suite and an empirical ionospheric model, we calculate the errors induced by the ionosphere in a simulation of a notional space surveillance radar tracking objects in LEO. These errors are analyzed to determine daily, monthly, annual, and solar cycle trends. Corrections to surveillance radar measurements can be adapted from our simulation capability.
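
    The frequency scaling behind this can be made concrete with the standard first-order expression for the ionospheric group-delay range error, ΔR = 40.3·TEC/f² (ΔR in meters, TEC in electrons/m², f in Hz); the sketch below uses an illustrative slant TEC, not a value from the study, and shows why a kilometer-scale HF error becomes a meter-scale UHF correction.

        # First-order ionospheric range error: delta_R = 40.3 * TEC / f^2
        TECU = 1.0e16  # one TEC unit, electrons/m^2

        def range_error_m(tec, freq_hz):
            return 40.3 * tec / freq_hz**2

        tec = 10 * TECU  # illustrative moderate slant TEC
        for f_mhz in (30.0, 435.0, 1215.0):  # HF versus two UHF-range frequencies
            print(f"{f_mhz:7.1f} MHz -> {range_error_m(tec, f_mhz * 1e6):9.2f} m")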

  15. A systematic uncertainty analysis for liner impedance eduction technology

    NASA Astrophysics Data System (ADS)

    Zhou, Lin; Bodén, Hans

    2015-11-01

    The so-called impedance eduction technology is widely used for obtaining the acoustic properties of liners used in aircraft engines. The measurement uncertainties of this technology are still not well understood, even though such understanding is essential for data quality assessment and model validation. A systematic framework based on multivariate analysis is presented in this paper to provide 95 percent confidence interval uncertainty estimates in the process of impedance eduction. The analysis uses a straightforward single-mode method based on transmission coefficients and involving the classic Ingard-Myers boundary condition. The multivariate technique makes it possible to obtain an uncertainty analysis for the possibly correlated real and imaginary parts of the complex quantities. The results show that the errors in the educed impedance at low frequencies depend mainly on the variability of the transmission coefficients, while the accuracy of the mean Mach number is the most important source of error at high frequencies. The effects of the Mach numbers used in the wave dispersion equation and in the Ingard-Myers boundary condition have been separated for comparison of the outcome of impedance eduction. A local Mach number based on friction velocity is suggested as a way to reduce the inconsistencies found when estimating impedance using upstream and downstream acoustic excitation.

  16. H∞ filter design for nonlinear systems with quantised measurements in finite frequency domain

    NASA Astrophysics Data System (ADS)

    El Hellani, D.; El Hajjaji, A.; Ceschi, R.

    2017-04-01

    This paper deals with the problem of finite frequency (FF) H∞ full-order fuzzy filter design for nonlinear discrete-time systems with quantised measurements, described by Takagi-Sugeno models. The measured signals are assumed to be quantised with a logarithmic quantiser. Using a fuzzy-basis-dependent Lyapunov function, the finite frequency ℓ2 gain definition, the generalised S-procedure, and Finsler's lemma, a set of sufficient conditions are established in terms of matrix inequalities, ensuring that the filtering error system is stable and the H∞ attenuation level, from disturbance to the estimation error, is smaller than a given value over a prescribed finite frequency domain of the external disturbances. With the aid of Finsler's lemma, a large number of slack variables are introduced to the design conditions, which provides extra degrees of freedom in optimising the guaranteed H∞ performance. This directly leads to performance improvement and reduction of conservatism. Finally, we give a simulation example to demonstrate the efficiency of the proposed design method, and we show that a lower H∞ attenuation level can be obtained by our developed approach in comparison with another result in the literature.

  17. Multi-spacecraft coherent Doppler and ranging for interplanetary-navigation

    NASA Technical Reports Server (NTRS)

    Pollmeier, Vincent M.

    1995-01-01

    Future plans for planetary exploration currently include using multiple spacecraft to simultaneously explore one planet. This previously unencountered situation places new demands on the tracking systems used to support navigation. One possible solution to the problem of heavy ground resource conflicts is the use of multi-spacecraft coherent radio metric data, also known as bent-pipe data. Analysis of the information content of these data types shows that the information content of multi-spacecraft Doppler depends only on the frequency of the final downlink leg and is independent of the frequencies used on the other legs. Numerical analysis shows that coherent bent-pipe data can provide significantly better capability to estimate the location of a lander on the surface of Mars than can direct lander-to-Earth radio metric data. However, this is complicated by difficulties in separating the effect of a lander position error from that of an orbiter position error for single passes of data.

  18. Reverberant acoustic energy in auditoria that comprise systems of coupled rooms

    NASA Astrophysics Data System (ADS)

    Summers, Jason E.

    2003-11-01

    A frequency-dependent model for reverberant energy in coupled rooms is developed and compared with measurements for a 1:10 scale model and for Bass Hall, Ft. Worth, TX. At high frequencies, prior statistical-acoustics models are improved by geometrical-acoustics corrections for decay within sub-rooms and for energy transfer between sub-rooms. Comparisons of computational geometrical-acoustics predictions based on beam-axis tracing with scale model measurements indicate errors resulting from tail-correction assuming constant quadratic growth of reflection density; using ray tracing in the late part corrects this error. For mid-frequencies, the models are modified to account for wave effects at coupling apertures by including power transmission coefficients. Similarly, statistical-acoustics models are improved through more accurate estimates of power transmission. Scale model measurements are in accord with the predicted behavior. The edge-diffraction model is adapted to study transmission through apertures. Multiple-order scattering is theoretically and experimentally shown to be inaccurate due to neglect of slope diffraction. At low frequencies, perturbation models qualitatively explain the scale model measurements. Measurements confirm the relation of coupling strength to the unperturbed pressure distribution on coupling surfaces. Measurements in Bass Hall exhibit effects of the coupled stage house. High-frequency predictions of the statistical-acoustics and geometrical-acoustics models, and the predicted effects of coupling apertures, all agree with measurements.

  19. Estimation of peak-discharge frequency of urban streams in Jefferson County, Kentucky

    USGS Publications Warehouse

    Martin, Gary R.; Ruhl, Kevin J.; Moore, Brian L.; Rose, Martin F.

    1997-01-01

    An investigation of flood-hydrograph characteristics for streams in urban Jefferson County, Kentucky, was made to obtain hydrologic information needed for water-resources management. Equations for estimating peak-discharge frequencies for ungaged streams in the county were developed by combining (1) long-term annual peak-discharge data and rainfall-runoff data collected from 1991 to 1995 in 13 urban basins and (2) long-term annual peak-discharge data in four rural basins located in hydrologically similar areas of neighboring counties. The basins ranged in size from 1.36 to 64.0 square miles. The U.S. Geological Survey Rainfall-Runoff Model (RRM) was calibrated for each of the urban basins. The calibrated models were used with long-term historical rainfall and pan-evaporation data to simulate 79 years of annual peak-discharge data. Peak-discharge frequencies were estimated by fitting the logarithms of the annual peak discharges to a Pearson Type III frequency distribution. The simulated peak-discharge frequencies were adjusted for improved reliability by application of bias-correction factors derived from peak-discharge frequencies based on local, observed annual peak discharges. The three-parameter and the preferred seven-parameter nationwide urban-peak-discharge regression equations previously developed by USGS investigators provided biased (high) estimates for the urban basins studied. Generalized-least-squares regression procedures were used to relate peak-discharge frequency to selected basin characteristics. Regression equations were developed to estimate peak-discharge frequency by adjusting peak-discharge-frequency estimates made by use of the three-parameter nationwide urban regression equations. The regression equations are presented in equivalent forms as functions of contributing drainage area, main-channel slope, and basin development factor, which is an index for measuring the efficiency of the basin drainage system. Estimates of peak discharges for streams in the county can be made for the 2-, 5-, 10-, 25-, 50-, and 100-year recurrence intervals by use of the regression equations. The average standard errors of prediction of the regression equations range from ±34 to ±45 percent. The regression equations are applicable to ungaged streams in the county having a specific range of basin characteristics.
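
    The frequency-fitting step can be sketched in a few lines: fit a Pearson Type III distribution to the logarithms of an annual peak series and read off the T-year quantiles. The sketch uses synthetic peaks and a generic maximum-likelihood fit, not the weighted-moments procedure of USGS practice.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        peaks_cfs = rng.lognormal(mean=8.0, sigma=0.6, size=79)  # synthetic peaks

        log_q = np.log10(peaks_cfs)
        skew, loc, scale = stats.pearson3.fit(log_q)  # log-Pearson Type III fit

        for T in (2, 5, 10, 25, 50, 100):
            q_t = 10 ** stats.pearson3.ppf(1 - 1 / T, skew, loc=loc, scale=scale)
            print(f"{T:3d}-year peak discharge ~ {q_t:,.0f} cfs")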

  20. A comparison of linear respiratory system models based on parameter estimates from PRN forced oscillation data.

    PubMed

    Diong, B; Grainger, J; Goldman, M; Nazeran, H

    2009-01-01

    The forced oscillation technique offers some advantages over spirometry for assessing pulmonary function. It requires only passive patient cooperation, and it provides data in a form (frequency-dependent impedance) that is very amenable to engineering analysis. In particular, the data can be used to obtain parameter estimates for electric-circuit-based models of the respiratory system, which can in turn aid the detection and diagnosis of various diseases and pathologies. In this study, we compare the least-squares error performance of the RIC, extended RIC, augmented RIC, augmented RIC+I(p), DuBois, Nagels, and Mead models in fitting 3 sets of impedance data. These data were obtained by pseudorandom noise forced oscillation of healthy subjects, mild asthmatics, and more severe asthmatics. We found that the aRIC+I(p) and DuBois models yielded the lowest fitting errors (for the healthy subjects group and the 2 asthmatic patient groups, respectively) without also producing unphysiologically large component estimates.
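
    As an illustration of the kind of least-squares fitting compared in the paper, the sketch below fits the simplest model in the family, the series RIC model Z(ω) = R + j(ωI − 1/(ωC)), to placeholder impedance data; the richer models add components to the residual function in the same way. All numbers are illustrative, not patient data.

        import numpy as np
        from scipy.optimize import least_squares

        f_hz = np.array([4.0, 8.0, 12.0, 16.0, 20.0, 24.0])
        z_meas = np.array([3.2 - 1.9j, 2.9 - 0.7j, 2.8 - 0.1j,   # placeholder
                           2.8 + 0.4j, 2.9 + 0.8j, 3.0 + 1.1j])  # impedance data

        def residuals(p):
            R, I, C = p
            w = 2 * np.pi * f_hz
            z_model = R + 1j * (w * I - 1.0 / (w * C))
            err = z_model - z_meas
            return np.concatenate([err.real, err.imag])  # real-valued LSQ

        fit = least_squares(residuals, x0=[3.0, 0.01, 0.02], bounds=(1e-6, np.inf))
        R, I, C = fit.x
        print(f"R={R:.2f}, I={I:.4f}, C={C:.4f}, cost={fit.cost:.3g}")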

  1. Rapid Non-Gaussian Uncertainty Quantification of Seismic Velocity Models and Images

    NASA Astrophysics Data System (ADS)

    Ely, G.; Malcolm, A. E.; Poliannikov, O. V.

    2017-12-01

    Conventional seismic imaging typically provides a single estimate of the subsurface without any error bounds. Noise in the observed raw traces as well as the uncertainty of the velocity model directly impact the uncertainty of the final seismic image and its resulting interpretation. We present a Bayesian inference framework to quantify uncertainty in both the velocity model and seismic images, given noise statistics of the observed data. To estimate velocity model uncertainty, we combine the field expansion method, a fast frequency domain wave equation solver, with the adaptive Metropolis-Hastings algorithm. The speed of the field expansion method and its reduced parameterization allow us to perform the tens or hundreds of thousands of forward solves needed for non-parametric posterior estimation. We then migrate the observed data with the distribution of velocity models to generate uncertainty estimates of the resulting subsurface image. This procedure allows us both to create qualitative descriptions of seismic image uncertainty and to put error bounds on quantities of interest, such as the dip angle of a subduction slab or the thickness of a stratigraphic layer.

  2. Estimating error statistics for Chambon-la-Forêt observatory definitive data

    NASA Astrophysics Data System (ADS)

    Lesur, Vincent; Heumez, Benoît; Telali, Abdelkader; Lalanne, Xavier; Soloviev, Anatoly

    2017-08-01

    We propose a new algorithm for calibrating definitive observatory data with the goal of providing users with estimates of the data error standard deviations (SDs). The algorithm has been implemented and tested using Chambon-la-Forêt observatory (CLF) data. The calibration process uses all available data. It is set up as a large, weakly non-linear inverse problem that ultimately provides estimates of baseline values in three orthogonal directions, together with their expected standard deviations. For this inverse problem, absolute data error statistics are estimated from two series of absolute measurements made within a day. Similarly, variometer data error statistics are derived by comparing variometer data time series between different pairs of instruments over a few years. The comparisons of these time series led us to use an autoregressive process of order 1 (AR1 process) as a prior for the baselines; therefore the obtained baselines do not vary smoothly in time. They have relatively small SDs, well below 300 pT when absolute data are recorded twice a week, i.e. within the daily-to-weekly measurement frequency recommended by INTERMAGNET. The algorithm was tested against the process traditionally used to derive baselines at CLF observatory, suggesting that the statistics are less favourable when the latter process is used. Finally, two sets of definitive data were calibrated using the new algorithm. Their comparison shows that the definitive data SDs are less than 400 pT and may be slightly overestimated by our process: an indication that more work is required to obtain proper estimates of absolute data error statistics. For magnetic field modelling, the results show that even at isolated sites like CLF observatory, there are very localised signals over a large span of temporal frequencies that can be as large as 1 nT. The SDs reported here encompass signals with spatial scales of a few hundred metres and periods of less than a day.

  3. Analysis on optical heterodyne frequency error of full-field heterodyne interferometer

    NASA Astrophysics Data System (ADS)

    Li, Yang; Zhang, Wenxi; Wu, Zhou; Lv, Xiaoyu; Kong, Xinxin; Guo, Xiaoli

    2017-06-01

    Full-field heterodyne interferometric measurement is becoming more practical because low-frequency heterodyne acousto-optical modulators can replace complex electro-mechanical scanning devices. The optical element surface can be acquired directly by synchronously detecting the received signal phase at each pixel, because standard matrix detectors such as CCD and CMOS cameras can be used in the heterodyne interferometer. Instead of the traditional four-step phase-shifting calculation, a Fourier spectral analysis method is used for phase extraction, which brings lower sensitivity to sources of uncertainty and higher measurement accuracy. In this paper, two types of full-field heterodyne interferometer are described, and their advantages and disadvantages are specified. A heterodyne interferometer has to combine two beams of different frequency to produce interference, which introduces a variety of optical heterodyne frequency errors. Frequency mixing error and beat frequency error are two inescapable kinds of heterodyne frequency error. In this paper, the effects of frequency mixing error on surface measurement are derived, and the relationship between phase extraction accuracy and these errors is calculated. The tolerance of the extinction ratio of the polarization splitting prism and the signal-to-noise ratio of stray light is given. The phase extraction error of the Fourier analysis caused by beat frequency shifting is derived and calculated. We also propose an improved phase extraction method based on spectrum correction: an amplitude-ratio spectrum correction algorithm using a Hanning window corrects the phase extracted from the heterodyne signal. The simulation results show that this method can effectively suppress the degradation of phase extraction caused by beat frequency error and reduce the measurement uncertainty of the full-field heterodyne interferometer.
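
    A minimal sketch of the Fourier-analysis phase extraction at one pixel, assuming the beat frequency falls exactly on an FFT bin (the paper's spectrum-correction refinement addresses precisely the case where a beat frequency shift breaks this alignment). All values are illustrative.

        import numpy as np

        fs, f_beat, n = 10_000.0, 250.0, 1000   # f_beat lands exactly on bin 25
        rng = np.random.default_rng(0)
        t = np.arange(n) / fs
        true_phase = 0.7
        sig = (np.cos(2 * np.pi * f_beat * t + true_phase)
               + 0.05 * rng.standard_normal(n))

        spectrum = np.fft.rfft(sig)
        k = int(round(f_beat * n / fs))          # index of the beat-frequency bin
        print(f"estimated phase: {np.angle(spectrum[k]):.3f} rad "
              f"(true {true_phase})")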

  4. Evaluation of Methods Used for Estimating Selected Streamflow Statistics, and Flood Frequency and Magnitude, for Small Basins in North Coastal California

    USGS Publications Warehouse

    Mann, Michael P.; Rizzardo, Jule; Satkowski, Richard

    2004-01-01

    Accurate streamflow statistics are essential to water resource agencies involved in both science and decision-making. When long-term streamflow data are lacking at a site, estimation techniques are often employed to generate streamflow statistics. However, procedures for accurately estimating streamflow statistics often are lacking, and when estimation procedures are developed, they often are not evaluated properly before being applied. Use of unevaluated or underevaluated flow-statistic estimation techniques can result in improper water-resources decision-making. The California State Water Resources Control Board (SWRCB) uses two key techniques, a modified rational equation and drainage-basin area-ratio transfer, to estimate streamflow statistics at ungaged locations. These techniques have been implemented to varying degrees but have not been formally evaluated. For estimating peak flows at the 2-, 5-, 10-, 25-, 50-, and 100-year recurrence intervals, the SWRCB uses the U.S. Geological Survey's (USGS) regional peak-flow equations. In this study, done cooperatively by the USGS and SWRCB, the SWRCB estimated several flow statistics at 40 USGS streamflow gaging stations in the north coast region of California. The SWRCB estimates were made without reference to USGS flow data. The USGS used the streamflow data provided by the 40 stations to generate flow statistics against which the SWRCB estimates could be compared for accuracy. While some SWRCB estimates compared favorably with USGS statistics, results were subject to varying degrees of error over the region. Flow-based estimation techniques generally performed better than rain-based methods, especially for estimation of December 15 to March 31 mean daily flows. The USGS peak-flow equations also performed well, but tended to underestimate peak flows. The USGS equations performed within reported error bounds, but will require updating in the future as peak-flow data sets grow larger. Little correlation was discovered between estimation errors and geographic locations or various basin characteristics. However, for 25-percentile-year mean-daily-flow estimates for December 15 to March 31, the greatest estimation errors were at east San Francisco Bay area stations with mean annual precipitation less than or equal to 30 inches and estimated 2-year/24-hour rainfall intensity less than 3 inches.

  5. Influence of sampling frequency and load calculation methods on quantification of annual river nutrient and suspended solids loads.

    PubMed

    Elwan, Ahmed; Singh, Ranvir; Patterson, Maree; Roygard, Jon; Horne, Dave; Clothier, Brent; Jones, Geoffrey

    2018-01-11

    Better management of water quality in streams, rivers, and lakes requires precise and accurate estimates of different contaminant loads. We assessed four sampling frequencies (2-day, weekly, fortnightly, and monthly) and five load calculation methods (global mean (GM), rating curve (RC), ratio estimator (RE), flow-stratified (FS), and flow-weighted (FW)) to quantify loads of nitrate-nitrogen (NO₃⁻-N), soluble inorganic nitrogen (SIN), total nitrogen (TN), dissolved reactive phosphorus (DRP), total phosphorus (TP), and total suspended solids (TSS) in the Manawatu River, New Zealand. The estimated annual river loads were compared to the reference 'true' loads, calculated using daily measurements of flow and water quality from May 2010 to April 2011, to quantify bias (i.e. accuracy) and root mean square error, RMSE (i.e. accuracy and precision). The GM method resulted in relatively higher RMSE values and a consistent negative bias (i.e. underestimation) in estimates of annual river loads across all sampling frequencies. The RC method resulted in the lowest RMSE for TN, TP, and TSS at the monthly sampling frequency, yet RC highly overestimated the loads of parameters that showed a dilution effect, such as NO₃⁻-N and SIN. The FW and RE methods gave similar results, and there was no essential improvement in using RE over FW. In general, FW and RE performed better than FS in terms of bias, but FS performed slightly better than FW and RE in terms of RMSE for most of the water quality parameters (DRP, TP, TN, and TSS) at a monthly sampling frequency. We found no significant decrease in RMSE values for estimates of NO₃⁻-N, SIN, TN, and DRP loads when the sampling frequency was increased from monthly to fortnightly. The bias and RMSE values in estimates of TP and TSS loads (estimated by FW, RE, and FS), however, showed a significant decrease in the case of weekly or 2-day sampling. This suggests potential for a higher sampling frequency during flow peaks to obtain more precise and accurate estimates of annual river loads of TP and TSS in the study river and under similar conditions elsewhere.
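
    As a concrete example of one of the estimators compared, the sketch below implements the flow-weighted (FW) method: the flow-weighted mean concentration of the grab samples is scaled by the continuous flow record. All numbers are placeholders, not data from the Manawatu River.

        import numpy as np

        c_sample = np.array([1.2, 0.8, 2.5, 1.1])     # mg/L at sampling dates
        q_sample = np.array([14.0, 9.0, 55.0, 12.0])  # m^3/s at the same dates
        q_daily = np.full(365, 18.0)                  # continuous daily flow, m^3/s

        fw_conc = np.sum(c_sample * q_sample) / np.sum(q_sample)  # mg/L = g/m^3
        annual_load_t = fw_conc * q_daily.mean() * 86_400 * 365 / 1e6  # tonnes/yr
        print(f"FW annual load ~ {annual_load_t:.0f} t/yr")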

  6. Simulation Study of a Follow-on Gravity Mission to GRACE

    NASA Technical Reports Server (NTRS)

    Loomis, Bryant D.; Nerem, R. S.; Luthcke, Scott B.

    2012-01-01

    The gravity recovery and climate experiment (GRACE) has been providing monthly estimates of the Earth's time-variable gravity field since its launch in March 2002. The GRACE gravity estimates are used to study temporal mass variations on global and regional scales, which are largely caused by a redistribution of water mass in the Earth system. The accuracy of the GRACE gravity fields is primarily limited by the satellite-to-satellite range-rate measurement noise, accelerometer errors, attitude errors, orbit errors, and temporal aliasing caused by unmodeled high-frequency variations in the gravity signal. Recent work by Ball Aerospace and Technologies Corp., Boulder, CO, has resulted in the successful development of an interferometric laser ranging system to specifically address the limitations of the K-band microwave ranging system that provides the satellite-to-satellite measurements for the GRACE mission. Full numerical simulations are performed for several possible configurations of a GRACE Follow-On (GFO) mission to determine whether a future satellite gravity recovery mission equipped with a laser ranging system will provide better estimates of time-variable gravity, thus benefiting many areas of Earth systems research. The laser ranging system improves the range-rate measurement precision to approximately 0.6 nm/s, as compared to approximately 0.2 µm/s for the GRACE K-band microwave ranging instrument. Four different mission scenarios are simulated to investigate the effect of the better instrument at two different altitudes. The first pair of simulated missions is flown at GRACE altitude (approx. 480 km) assuming on-board accelerometers with the same noise characteristics as those currently used for GRACE. The second pair of missions is flown at an altitude of approx. 250 km, which requires a drag-free system to prevent satellite re-entry. In addition to allowing a lower satellite altitude, the drag-free system also reduces the errors associated with the accelerometer. All simulated mission scenarios assume a two-satellite co-orbiting pair similar to GRACE in a near-polar, near-circular orbit. A method for local time-variable gravity recovery through mass concentration blocks (mascons) is used to form simulated gravity estimates for Greenland and the Amazon region for three GFO configurations and GRACE. Simulation results show that the increased precision of the laser does not improve gravity estimation when flown with on-board accelerometers at the same altitude and spacecraft separation as GRACE, even when time-varying background models are not included. This study also shows that only modest improvement is realized for the best-case scenario (laser, low altitude, drag-free) as compared to GRACE, due to temporal aliasing errors. These errors are caused by high-frequency variations in the hydrology signal and imperfections in the atmospheric, oceanographic, and tidal models which are used to remove unwanted signal. This work concludes that applying the updated technologies alone will not immediately advance the accuracy of the gravity estimates. If the scientific objectives of a GFO mission require more accurate gravity estimates, then future work should focus on improvements in the geophysical models and on ways in which the mission design or data processing could reduce the effects of temporal aliasing.

  7. A code-aided carrier synchronization algorithm based on improved nonbinary low-density parity-check codes

    NASA Astrophysics Data System (ADS)

    Bai, Cheng-lin; Cheng, Zhi-hui

    2016-09-01

    In order to further improve the carrier synchronization estimation range and accuracy at low signal-to-noise ratio (SNR), this paper proposes a code-aided carrier synchronization algorithm based on improved nonbinary low-density parity-check (NB-LDPC) codes to study the polarization-division-multiplexing coherent optical orthogonal frequency division multiplexing (PDM-CO-OFDM) system performance in the cases of quadrature phase shift keying (QPSK) and 16 quadrature amplitude modulation (16-QAM) modes. The simulation results indicate that this algorithm can enlarge the frequency and phase offset estimation ranges and greatly enhance the accuracy of the system, and that the bit error rate (BER) performance of the system is improved effectively compared with that of a system employing the traditional NB-LDPC code-aided carrier synchronization algorithm.

  8. Tutorial: Asteroseismic Stellar Modelling with AIMS

    NASA Astrophysics Data System (ADS)

    Lund, Mikkel N.; Reese, Daniel R.

    The goal of AIMS (Asteroseismic Inference on a Massive Scale) is to estimate stellar parameters and credible intervals/error bars in a Bayesian manner from a set of asteroseismic frequency data and so-called classical constraints. To achieve reliable parameter estimates and computational efficiency, it searches through a grid of pre-computed models using an MCMC algorithm; interpolation within the grid of models is performed by first tessellating the grid using a Delaunay triangulation and then doing a linear barycentric interpolation on matching simplexes. Inputs for the modelling consist of individual frequencies from peak-bagging, which can be complemented with classical spectroscopic constraints. AIMS is mostly written in Python with a modular structure to facilitate contributions from the community; only a few computationally intensive parts have been rewritten in Fortran in order to speed up calculations.
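
    The interpolation scheme described (Delaunay tessellation followed by linear barycentric interpolation on the matching simplex) is what scipy's LinearNDInterpolator implements; the sketch below applies it to a toy two-parameter grid, with made-up grid points and values standing in for a real stellar model grid.

        import numpy as np
        from scipy.interpolate import LinearNDInterpolator

        # toy (mass, metallicity) grid with a made-up seismic observable
        points = np.array([[1.0, 0.01], [1.2, 0.01], [1.0, 0.02], [1.2, 0.02]])
        values = np.array([3100.0, 2800.0, 3050.0, 2750.0])

        interp = LinearNDInterpolator(points, values)  # builds the Delaunay mesh
        print(interp(1.1, 0.015))  # barycentric interpolation inside a simplex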

  9. Correlated errors in geodetic time series: Implications for time-dependent deformation

    USGS Publications Warehouse

    Langbein, J.; Johnson, H.

    1997-01-01

    Analysis of frequent trilateration observations from the two-color electronic distance measuring networks in California demonstrates that the noise power spectra are dominated by white noise at higher frequencies and power-law behavior at lower frequencies. In contrast, Earth scientists typically have assumed that only white noise is present in a geodetic time series, since a combination of infrequent measurements and low precision usually precludes identifying the time-correlated signature in such data. After removing a linear trend from the two-color data, it becomes evident that there are primarily two recognizable types of time-correlated noise present in the residuals. The first type is a seasonal variation in displacement, which is probably a result of measuring to shallow surface monuments installed in clayey soil that responds to seasonally occurring rainfall; this noise is significant only for a small fraction of the sites analyzed. The second type of correlated noise becomes evident only after spectral analysis of line length changes and shows a functional relation at long periods between power and frequency of the form P(f) ∝ f^(-α), where f is frequency and α ≈ 2. With α = 2, this type of correlated noise is termed random-walk noise, and its source is mainly thought to be small random motions of geodetic monuments with respect to the Earth's crust, though other sources are possible. Because the line length changes in the two-color networks are measured at irregular intervals, power spectral techniques cannot reliably estimate the level of 1/f^α noise. Rather, we also use here a maximum likelihood estimation technique which assumes that there are only two sources of noise in the residual time series (white noise and random-walk noise) and estimates the amount of each. From this analysis we find that the random-walk noise level averages about 1.3 mm/√yr and that our estimates of the white noise component confirm theoretical limitations of the measurement technique. In addition, the seasonal noise can be as large as 3 mm in amplitude but typically is less than 0.5 mm. Because of the presence of random-walk noise in these time series, modeling and interpretation of the geodetic data must account for this source of error. By way of example we show that estimating the time-varying strain tensor (a form of spatial averaging) from geodetic data having both random-walk and white noise error components results in seemingly significant variations in the rate of strain accumulation; spatial averaging does reduce the size of both noise components but not their relative influence on the resulting strain accumulation model. Copyright 1997 by the American Geophysical Union.

  10. 47 CFR 87.145 - Acceptability of transmitters for licensing.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... square error which assumes zero error for the received ground earth station signal and includes the AES transmit/receive frequency reference error and the AES automatic frequency control residual errors.) The...

  11. 47 CFR 87.145 - Acceptability of transmitters for licensing.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... square error which assumes zero error for the received ground earth station signal and includes the AES transmit/receive frequency reference error and the AES automatic frequency control residual errors.) The...

  12. 47 CFR 87.145 - Acceptability of transmitters for licensing.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... square error which assumes zero error for the received ground earth station signal and includes the AES transmit/receive frequency reference error and the AES automatic frequency control residual errors.) The...

  13. 47 CFR 87.145 - Acceptability of transmitters for licensing.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... square error which assumes zero error for the received ground earth station signal and includes the AES transmit/receive frequency reference error and the AES automatic frequency control residual errors.) The...

  14. A review of the application of nonattenuating frequency radars for estimating rain attenuation and space-diversity performance

    NASA Technical Reports Server (NTRS)

    Goldhirsh, J.

    1979-01-01

    Cumulative rain fade statistics are used by space communications engineers to establish transmitter power and receiver sensitivities for systems operating under various geometries, climates, and radio frequencies. Space-diversity performance criteria are also of interest. This work reviews the many elements involved in employing single nonattenuating-frequency radars to arrive at the desired information. The elements examined include radar techniques and requirements, phenomenological assumptions, path attenuation formulations and procedures, as well as error budgeting and calibration analysis. Included are the pertinent results of previous investigators who have used radar for rain-attenuation modeling. Suggestions are made for improving present methods.

  15. Full Bayes Poisson gamma, Poisson lognormal, and zero inflated random effects models: Comparing the precision of crash frequency estimates.

    PubMed

    Aguero-Valverde, Jonathan

    2013-01-01

    In recent years, complex statistical modeling approaches have been proposed to handle the unobserved heterogeneity and the excess of zeros frequently found in crash data, including random effects and zero-inflated models. This research compares random effects, zero-inflated, and zero-inflated random effects models using a full Bayes hierarchical approach. The models are compared not just in terms of goodness-of-fit measures but also in terms of the precision of posterior crash frequency estimates, since the precision of these estimates is vital for the ranking of sites for engineering improvement. Fixed-over-time random effects models are also compared to independent-over-time random effects models. For the crash dataset analyzed, it was found that once random effects are included in the zero-inflated models, the probability of being in the zero state is drastically reduced, and the zero-inflated models degenerate to their non-zero-inflated counterparts. Also, by fixing the random effects over time, the fit of the models and the precision of the crash frequency estimates are significantly increased. It was found that the rankings of the fixed-over-time random effects models are very consistent among themselves. In addition, the results show that by fixing the random effects over time, the standard errors of the crash frequency estimates are significantly reduced for the majority of the segments at the top of the ranking. Copyright © 2012 Elsevier Ltd. All rights reserved.

  16. The measurement of linear frequency drift in oscillators

    NASA Astrophysics Data System (ADS)

    Barnes, J. A.

    1985-04-01

    A linear drift in frequency is an important element in most stochastic models of oscillator performance. Quartz crystal oscillators often have drifts in excess of a part in ten to the tenth power per day. Even commercial cesium beam devices often show drifts of a few parts in ten to the thirteenth per year. There are many ways to estimate the drift rate from data samples (e.g., regress the phase on a quadratic; regress the frequency on a linear function of time; compute the simple mean of the first difference of frequency; use Kalman filters with a drift term as one element in the state vector; and others). Although most of these estimators are unbiased, they vary in efficiency (i.e., in confidence intervals). Further, the estimation of confidence intervals using the standard analysis of variance (typically associated with the specific estimating technique) can give amazingly optimistic results. The source of these problems is not an error in, say, the regression techniques, but rather the problems arise from correlations within the residuals. That is, the oscillator model is often not consistent with constraints of the analysis technique or, in other words, some specific analysis techniques are often inappropriate for the task at hand. The appropriateness of a specific analysis technique is critically dependent on the oscillator model and can often be checked with a simple whiteness test on the residuals.
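
    Two of the estimators listed can be contrasted in a few lines, assuming a synthetic frequency series with white frequency noise (all values illustrative): regressing frequency on time and taking the simple mean of the first difference of frequency are both unbiased, but their confidence intervals differ, and, as the paper cautions, the usual variance formulas mislead when the residuals are correlated.

        import numpy as np

        rng = np.random.default_rng(0)
        days = np.arange(200.0)
        true_drift = 1e-13  # fractional frequency per day
        y = true_drift * days + 5e-13 * rng.standard_normal(days.size)

        drift_regression = np.polyfit(days, y, 1)[0]  # slope of linear fit
        drift_first_diff = np.mean(np.diff(y))        # mean first difference
        print(f"regression: {drift_regression:.2e}/day, "
              f"first difference: {drift_first_diff:.2e}/day")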

  17. Empirical Validation of Pooled Whole Genome Population Re-Sequencing in Drosophila melanogaster

    PubMed Central

    Zhu, Yuan; Bergland, Alan O.; González, Josefa; Petrov, Dmitri A.

    2012-01-01

    The sequencing of pooled non-barcoded individuals is an inexpensive and efficient means of assessing genome-wide population allele frequencies, yet its accuracy has not been thoroughly tested. We assessed the accuracy of this approach on whole, complex eukaryotic genomes by resequencing pools of largely isogenic, individually sequenced Drosophila melanogaster strains. We called SNPs in the pooled data and estimated false positive and false negative rates using the SNPs called in the individual strains as a reference. We also estimated allele frequencies of the SNPs using the “pooled” data and compared them with “true” frequencies taken from the estimates in the individual strains. We demonstrate that pooled sequencing provides a faithful estimate of population allele frequency, with the error well approximated by binomial sampling, and is a reliable means of novel SNP discovery with low false positive rates. However, a sufficient number of strains should be used in the pooling, because variation in the amount of DNA derived from individual strains is a substantial source of noise when the number of pooled strains is low. Our results and analysis confirm that pooled sequencing is a very powerful and cost-effective technique for assessing patterns of sequence variation in populations on genome-wide scales, and is applicable to any dataset where sequencing individuals or individual cells is impossible, difficult, time consuming, or expensive. PMID:22848651
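
    The binomial error model the authors find to approximate pooled estimates well gives a quick feel for the required read depth: the standard error of an allele frequency p estimated from d reads is sqrt(p(1-p)/d). The sketch below uses illustrative numbers, not values from the study.

        import numpy as np

        def pooled_se(p, depth):
            """Binomial standard error of an allele frequency at a read depth."""
            return np.sqrt(p * (1.0 - p) / depth)

        for depth in (20, 50, 200):
            print(f"depth {depth:4d}: SE at p=0.2 is {pooled_se(0.2, depth):.3f}")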

  18. Rain rate range profiling from a spaceborne radar

    NASA Technical Reports Server (NTRS)

    Meneghini, R.

    1980-01-01

    At certain frequencies and incidence angles, the relative invariance of the surface scattering properties over land can be used to estimate the total attenuation and the integrated rain from a spaceborne attenuating-wavelength radar. The technique is generalized so that rain rate profiles along the radar beam can be estimated, i.e., rain rate can be determined at each range bin. This is done by modifying the standard algorithm for an attenuating-wavelength radar to include in it the measurement of the total attenuation. Simple error analyses of the estimates show that this type of profiling is possible if the total attenuation can be measured with a modest degree of accuracy.

  19. Preliminary GPS orbit determination results for the Extreme Ultraviolet Explorer

    NASA Technical Reports Server (NTRS)

    Gold, Kenn; Bertiger, Willy; Wu, Sien; Yunck, Tom

    1993-01-01

    A single-frequency Motorola Global Positioning System (GPS) receiver was launched with the Extreme Ultraviolet Explorer mission in June 1992. The receiver utilizes dual GPS antennas placed on opposite sides of the satellite to obtain full GPS coverage as it rotates during its primary scanning mission. A data set from this GPS experiment has been processed at the Jet Propulsion Laboratory with the GIPSY-OASIS 2 software package. The single-frequency, dual antenna approach and the low altitude (approximately 500 km) orbit of the satellite create special problems for the GPS orbit determination analysis. The low orbit implies that the dynamics of the spacecraft will be difficult to model, and that atmospheric drag will be an important error source. A reduced-dynamic solution technique was investigated in which ad hoc accelerations were estimated at each time step to absorb dynamic model error. In addition, a single-frequency ionospheric correction was investigated, and a cycle-slip detector was written. Orbit accuracy is currently better than 5 m. Further optimization should improve this to about 1 m.

  20. Photonics-based microwave frequency measurement using a double-sideband suppressed-carrier modulation and an InP integrated ring-assisted Mach-Zehnder interferometer filter.

    PubMed

    Fandiño, Javier S; Muñoz, Pascual

    2013-11-01

    A photonic system capable of estimating the unknown frequency of a CW microwave tone is presented. The core of the system is a complementary optical filter monolithically integrated in InP, consisting of a ring-assisted Mach-Zehnder interferometer with a second-order elliptic response. By simultaneously measuring the different optical powers produced by a double-sideband suppressed-carrier modulation at the outputs of the photonic integrated circuit, an amplitude comparison function that depends on the input tone frequency is obtained. Using this technique, a frequency measurement range of 10 GHz (5-15 GHz) with a root mean square value of frequency error lower than 200 MHz is experimentally demonstrated. Moreover, simulations showing the impact of a residual optical carrier on system performance are also provided.

  1. A circadian rhythm in skill-based errors in aviation maintenance.

    PubMed

    Hobbs, Alan; Williamson, Ann; Van Dongen, Hans P A

    2010-07-01

    In workplaces where activity continues around the clock, human error has been observed to exhibit a circadian rhythm, with a characteristic peak in the early hours of the morning. Errors are commonly distinguished by the nature of the underlying cognitive failure, particularly the level of intentionality involved in the erroneous action. The Skill-Rule-Knowledge (SRK) framework of Rasmussen is used widely in the study of industrial errors and accidents. The SRK framework describes three fundamental types of error, according to whether behavior is under the control of practiced sensori-motor skill routines with minimal conscious awareness; is guided by implicit or explicit rules or expertise; or where the planning of actions requires the conscious application of domain knowledge. Up to now, examinations of circadian patterns of industrial errors have not distinguished between different types of error. Consequently, it is not clear whether all types of error exhibit the same circadian rhythm. A survey was distributed to aircraft maintenance personnel in Australia. Personnel were invited to anonymously report a safety incident and were prompted to describe, in detail, the human involvement (if any) that contributed to it. A total of 402 airline maintenance personnel reported an incident, providing 369 descriptions of human error in which the time of the incident was reported and sufficient detail was available to analyze the error. Errors were categorized using a modified version of the SRK framework, in which errors are categorized as skill-based, rule-based, or knowledge-based, or as procedure violations. An independent check confirmed that the SRK framework had been applied with sufficient consistency and reliability. Skill-based errors were the most common form of error, followed by procedure violations, rule-based errors, and knowledge-based errors. The frequency of errors was adjusted for the estimated proportion of workers present at work at each hour of the day, and the 24-h pattern of each error type was examined. Skill-based errors exhibited a significant circadian rhythm, being most prevalent in the early hours of the morning. Variation in the frequency of rule-based errors, knowledge-based errors, and procedure violations over the 24 h did not reach statistical significance. The results suggest that during the early hours of the morning, maintenance technicians are at heightened risk of "absent minded" errors involving failures to execute action plans as intended.

  2. Self-calibration method without joint iteration for distributed small satellite SAR systems

    NASA Astrophysics Data System (ADS)

    Xu, Qing; Liao, Guisheng; Liu, Aifei; Zhang, Juan

    2013-12-01

    The performance of distributed small satellite synthetic aperture radar systems degrades significantly due to unavoidable array errors, including gain, phase, and position errors, in real operating scenarios. In the conventional method proposed in (IEEE T Aero. Elec. Sys. 42:436-451, 2006), the spectrum components within one Doppler bin are considered as calibration sources. However, it is found in this article that the gain error estimation and the position error estimation in the conventional method can interact with each other. The conventional method may converge to suboptimal solutions for large position errors, since it requires joint iteration between gain-phase error estimation and position error estimation. In addition, it is also found that phase errors can be estimated well, regardless of position errors, when the zero Doppler bin is chosen. In this article, we propose a method obtained by modifying the conventional one, based on these two observations. In the modified method, gain errors are first estimated and compensated, which eliminates the interaction between gain error estimation and position error estimation. Then, by using the zero Doppler bin data, the phase error estimation can be performed well independently of position errors. Finally, position errors are estimated based on a Taylor-series expansion. Meanwhile, the joint iteration between gain-phase error estimation and position error estimation is not required, so the problem of suboptimal convergence that occurs in the conventional method is avoided at low computational cost. The modified method has the merits of faster convergence and lower estimation error compared to the conventional one. Theoretical analysis and computer simulation results verify the effectiveness of the modified method.

  3. Wave front sensing for next generation earth observation telescope

    NASA Astrophysics Data System (ADS)

    Delvit, J.-M.; Thiebaut, C.; Latry, C.; Blanchet, G.

    2017-09-01

    High resolution observation systems are highly dependent on optics quality and are usually designed to be nearly diffraction limited. Such performance makes it possible to set the Nyquist frequency closer to the cutoff frequency or, equivalently, to minimize the pupil diameter for a given ground sampling distance target. Up to now, defocus is the only aberration that is allowed to evolve slowly and that may be corrected in flight, using an open-loop correction based upon ground estimation and upload of a refocusing command. For instance, the Pleiades satellites' defocus is assessed from star acquisitions, and refocusing is done with thermal actuation of the M2 mirror. Next generation systems under study at CNES should include active optics in order to accommodate evolving aberrations not limited to defocus, due, for instance, to variable in-orbit thermal conditions. Active optics relies on aberration estimation through an onboard wave front sensor (WFS). One option is to use a Shack-Hartmann sensor, which can operate on extended scenes (unknown landscapes). A wave-front computation algorithm should then be implemented on board the satellite to provide the wave-front error measurement for the control loop. In the worst-case scenario, this measurement should be computed before each image acquisition. A robust and fast algorithm for estimating the shift between Shack-Hartmann images is then needed to fulfill this requirement. A fast gradient-based algorithm using optical flows with a Lucas-Kanade method has been studied and implemented on an electronic device developed by CNES. Measurement accuracy depends on the wave front error (WFE), the landscape frequency content, the number of searched aberrations, the a priori knowledge of high order aberrations, and the characteristics of the sensor. CNES has carried out a full-scale sensitivity analysis of the whole parameter set with its internally developed algorithm.
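
    A minimal sketch of the gradient-based shift estimation at the heart of such an algorithm, assuming a pure global translation and a single iteration (a flight implementation would iterate with warping and handle larger shifts): solve the optical-flow normal equations Ix·dx + Iy·dy = -(I2 - I1) in the least-squares sense over all pixels. The toy scene below is illustrative.

        import numpy as np

        def estimate_shift(im1, im2):
            """Single-step Lucas-Kanade style global shift estimate (dx, dy)."""
            gy, gx = np.gradient(im1)               # image gradients, per pixel
            it = im2 - im1                          # temporal difference
            A = np.stack([gx.ravel(), gy.ravel()], axis=1)
            shift, *_ = np.linalg.lstsq(A, -it.ravel(), rcond=None)
            return shift

        # toy scene shifted by ~0.3 pixel along x
        x = np.linspace(0, 4 * np.pi, 64)
        dx = 0.3 * (x[1] - x[0])
        scene = np.sin(x)[None, :] * np.cos(x)[:, None]
        shifted = np.sin(x - dx)[None, :] * np.cos(x)[:, None]
        print(estimate_shift(scene, shifted))  # ~ [0.3, 0.0]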

  4. Validation of Satellite Precipitation (trmm 3B43) in Ecuadorian Coastal Plains, Andean Highlands and Amazonian Rainforest

    NASA Astrophysics Data System (ADS)

    Ballari, D.; Castro, E.; Campozano, L.

    2016-06-01

    Precipitation monitoring is of utmost importance for water resource management. However, in regions of complex terrain such as Ecuador, the high spatio-temporal precipitation variability and the scarcity of rain gauges make it difficult to obtain accurate estimates of precipitation. Remotely sensed precipitation estimates, such as the Multi-satellite Precipitation Analysis TRMM, can cope with this problem after a validation process, which must be representative in space and time. In this work we validate monthly estimates from TRMM 3B43 satellite precipitation (0.25° x 0.25° resolution) using ground data from 14 rain gauges in Ecuador. The stations are located in the 3 most differentiated regions of the country: the Pacific coastal plains, the Andean highlands, and the Amazon rainforest. Time series of imagery and rain gauge data between 1998 and 2010 were compared using statistical error metrics such as bias, root mean square error, and Pearson correlation, and with detection indexes such as probability of detection, equitable threat score, false alarm rate, and frequency bias index. The results showed that precipitation seasonality is well represented and that TRMM 3B43 acceptably estimates the monthly precipitation in the three regions of the country. According to both statistical error metrics and detection indexes, the coastal and Amazon regions are estimated better quantitatively than the Andean highlands. Additionally, it was found that estimates are better for light precipitation rates. The present validation of TRMM 3B43 provides important results to support further studies on calibration and bias correction of precipitation in ungauged watersheds.
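
    The two families of metrics used in the comparison are easy to compute side by side; the sketch below does so for a placeholder pair of monthly gauge and TRMM 3B43 series, with an illustrative 10 mm/month rain/no-rain threshold for the detection indexes (the study's actual threshold is not stated here).

        import numpy as np

        gauge = np.array([120.0, 80.0, 0.0, 15.0, 220.0, 60.0])  # mm/month
        trmm = np.array([100.0, 95.0, 5.0, 0.0, 180.0, 75.0])    # placeholder

        bias = np.mean(trmm - gauge)
        rmse = np.sqrt(np.mean((trmm - gauge) ** 2))
        r = np.corrcoef(gauge, trmm)[0, 1]

        thr = 10.0                        # rain/no-rain threshold, illustrative
        obs, est = gauge >= thr, trmm >= thr
        hits = np.sum(obs & est)
        misses = np.sum(obs & ~est)
        false_alarms = np.sum(~obs & est)
        pod = hits / (hits + misses)                   # probability of detection
        far = false_alarms / (hits + false_alarms)     # false alarm rate
        fbi = (hits + false_alarms) / (hits + misses)  # frequency bias index

        print(f"bias={bias:.1f} mm, RMSE={rmse:.1f} mm, r={r:.2f}")
        print(f"POD={pod:.2f}, FAR={far:.2f}, FBI={fbi:.2f}")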

  5. The experimental results of a self tuning adaptive controller using online frequency identification. [for Galileo spacecraft

    NASA Technical Reports Server (NTRS)

    Chiang, W.-W.; Cannon, R. H., Jr.

    1985-01-01

    A fourth-order laboratory dynamic system featuring very low structural damping and a noncolocated actuator-sensor pair has been used to test a novel real-time adaptive controller, implemented in a minicomputer, which consists of a state estimator, a set of state feedback gains, and a frequency-locked loop for real-time parameter identification. The adaptation algorithm employed can correct controller error and stabilize the system for more than 50 percent variation in the plant's natural frequency, compared with a 10 percent stability margin in frequency variation for a fixed-gain controller having the same performance at the nominal plant condition. The very rapid convergence achievable by this adaptive system is demonstrated experimentally and proven with simple root-locus methods.

  6. Gait analysis--precise, rapid, automatic, 3-D position and orientation kinematics and dynamics.

    PubMed

    Mann, R W; Antonsson, E K

    1983-01-01

    A fully automatic optoelectronic photogrammetric technique is presented for measuring the spatial kinematics of human motion (both position and orientation) and estimating the inertial (net) dynamics. Calibration and verification showed that in a two-meter cube viewing volume, the system achieves one millimeter of accuracy and resolution in translation and 20 milliradians in rotation. Since double differentiation of generalized position data to determine accelerations amplifies noise, the frequency domain characteristics of the system were investigated. It was found that the noise and all other errors in the kinematic data contribute less than five percent error to the resulting dynamics.

  7. Task-oriented comparison of power spectral density estimation methods for quantifying acoustic attenuation in diagnostic ultrasound using a reference phantom method.

    PubMed

    Rosado-Mendez, Ivan M; Nam, Kibo; Hall, Timothy J; Zagzebski, James A

    2013-07-01

    Reported here is a phantom-based comparison of methods for determining the power spectral density (PSD) of ultrasound backscattered signals. Those power spectral density values are then used to estimate parameters describing α(f), the frequency dependence of the acoustic attenuation coefficient. Phantoms were scanned with a clinical system equipped with a research interface to obtain radiofrequency echo data. Attenuation, modeled as a power law α(f) = α0 f^β, was estimated using a reference phantom method. The power spectral density was estimated using the short-time Fourier transform (STFT), Welch's periodogram, and Thomson's multitaper technique, and performance was analyzed when limiting the size of the parameter-estimation region. Errors were quantified by the bias and standard deviation of the α0 and β estimates, and by the overall power-law fit error (FE). For parameter estimation regions larger than ~34 pulse lengths (~1 cm for this experiment), an overall power-law FE of 4% was achieved with all spectral estimation methods. With smaller parameter estimation regions, as in parametric image formation, the bias and standard deviation of the α0 and β estimates depended on the size of the parameter estimation region. Here, the multitaper method reduced the standard deviation of the α0 and β estimates compared with those using the other techniques. The results provide guidance for choosing methods for estimating the power spectral density in quantitative ultrasound methods.
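
    Two of the three PSD estimators compared can be sketched directly with scipy, assuming a placeholder gated RF segment (a real attenuation comparison would normalize such spectra against reference-phantom spectra at each depth): Welch's averaged periodogram, and a basic Thomson multitaper estimate averaging periodograms over orthogonal DPSS tapers.

        import numpy as np
        from scipy.signal import welch
        from scipy.signal.windows import dpss

        fs = 40e6                              # sampling rate, Hz
        rng = np.random.default_rng(0)
        rf = rng.standard_normal(2048)         # placeholder backscatter segment

        # Welch: averaged, overlapped, windowed periodograms
        f_w, psd_welch = welch(rf, fs=fs, nperseg=256)

        # Multitaper: average periodograms over K orthogonal DPSS tapers
        tapers = dpss(rf.size, NW=4, Kmax=7)   # (7, 2048) unit-energy tapers
        spectra = np.abs(np.fft.rfft(tapers * rf, axis=1)) ** 2
        psd_mt = spectra.mean(axis=0) / fs
        print(psd_welch.shape, psd_mt.shape)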

  8. Estimation of automobile-driver describing function from highway tests using the double steering wheel

    NASA Technical Reports Server (NTRS)

    Delp, P.; Crossman, E. R. F. W.; Szostak, H.

    1972-01-01

    The automobile-driver describing function for lateral position control was estimated for three subjects from frequency response analysis of straight road test results. The measurement procedure employed an instrumented full size sedan with known steering response characteristics, and equipped with a lateral lane position measuring device based on video detection of white stripe lane markings. Forcing functions were inserted through a servo driven double steering wheel coupling the driver to the steering system proper. Random appearing, Gaussian, and transient time functions were used. The quasi-linear models fitted to the random appearing input frequency response characterized the driver as compensating for lateral position error in a proportional, derivative, and integral manner. Similar parameters were fitted to the Gabor transformed frequency response of the driver to transient functions. A fourth term corresponding to response to lateral acceleration was determined by matching the time response histories of the model to the experimental results. The time histories show evidence of pulse-like nonlinear behavior during extended response to step transients which appear as high frequency remnant power.

  9. Ball bearing vibrations amplitude modeling and test comparisons

    NASA Technical Reports Server (NTRS)

    Hightower, Richard A., III; Bailey, Dave

    1995-01-01

    Bearings generate disturbances that, when combined with structural gains of a momentum wheel, contribute to induced vibration in the wheel. The frequencies generated by a ball bearing are defined by the bearing's geometry and defects. The amplitudes at these frequencies are dependent upon the actual geometry variations from perfection; therefore, a geometrically perfect bearing will produce no amplitudes at the kinematic frequencies that the design generates. Because perfect geometry can only be approached, emitted vibrations do occur. The most significant vibration is at the spin frequency and can be balanced out in the build process. Other frequencies' amplitudes, however, cannot be balanced out. Momentum wheels are usually the single largest source of vibrations in a spacecraft and can contribute to pointing inaccuracies if emitted vibrations ring the structure or are in the high-gain bandwidth of a sensitive pointing control loop. It is therefore important to be able to provide an a priori knowledge of possible amplitudes that are singular in source or are a result of interacting defects that do not reveal themselves in normal frequency prediction equations. This paper will describe the computer model that provides for the incorporation of bearing geometry errors and then develops an estimation of actual amplitudes and frequencies. Test results were correlated with the model. A momentum wheel was producing an unacceptable 74 Hz amplitude. The model was used to simulate geometry errors and proved successful in identifying a cause that was verified when the parts were inspected.

  10. Speech Enhancement Using Gaussian Scale Mixture Models

    PubMed Central

    Hao, Jiucang; Lee, Te-Won; Sejnowski, Terrence J.

    2011-01-01

    This paper presents a novel probabilistic approach to speech enhancement. Instead of a deterministic logarithmic relationship, we assume a probabilistic relationship between the frequency coefficients and the log-spectra. The speech model in the log-spectral domain is a Gaussian mixture model (GMM). The frequency coefficients obey a zero-mean Gaussian whose covariance equals the exponential of the log-spectra. This results in a Gaussian scale mixture model (GSMM) for the speech signal in the frequency domain, since the log-spectra can be regarded as scaling factors. The probabilistic relation between frequency coefficients and log-spectra allows these to be treated as two random variables, both to be estimated from the noisy signals. Expectation-maximization (EM) was used to train the GSMM and Bayesian inference was used to compute the posterior signal distribution. Because exact inference of this full probabilistic model is computationally intractable, we developed two approaches to enhance the efficiency: the Laplace method and a variational approximation. The proposed methods were applied to enhance speech corrupted by Gaussian noise and speech-shaped noise (SSN). For both approximations, signals reconstructed from the estimated frequency coefficients provided a higher signal-to-noise ratio (SNR), and those reconstructed from the estimated log-spectra produced a lower word recognition error rate because the log-spectra fit the inputs to the recognizer better. Our algorithms effectively reduced the SSN, which algorithms based on spectral analysis were not able to suppress. PMID:21359139
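
    A minimal generative sketch of the GSMM described above, for a single frequency bin: log-spectra are drawn from a GMM, and each frequency coefficient is drawn from a zero-mean Gaussian whose variance is the exponential of the drawn log-spectrum. The mixture parameters are arbitrary illustrations, not trained values.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # GMM prior on the log-spectrum s (one frequency bin, 3 components).
    weights = np.array([0.5, 0.3, 0.2])
    means   = np.array([-2.0, 0.0, 2.0])
    stds    = np.array([0.5, 0.7, 0.4])

    # Draw log-spectra from the GMM, then coefficients from N(0, exp(s)):
    # a Gaussian scale mixture, with exp(s) acting as the scaling factor.
    k = rng.choice(len(weights), size=10000, p=weights)
    s = rng.normal(means[k], stds[k])
    x = rng.normal(0.0, np.sqrt(np.exp(s)))
    ```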

  11. Piezoelectric Instruments of High Natural Frequency Vibration Characteristics and Protection Against Interference by Mass Forces

    NASA Technical Reports Server (NTRS)

    Gohlka, Werner

    1943-01-01

    The exploration of the processes accompanying engine combustion demands quick-responding pressure-recording instruments, among which the piezoelectric type has found widespread use because of its especially propitious properties as a vibration-recording instrument for high frequencies. Lacking appropriate test methods, the potential errors of piezoelectric recorders in dynamic measurements could, up to now, only be estimated. In the present report a test method is described by means of which the resonance curves of the piezoelectric pickup can be determined; hence an instrumental appraisal of the vibration characteristics of piezoelectric recorders is obtainable.

  12. A Monte Carlo Study of Levene's Test of Homogeneity of Variance: Empirical Frequencies of Type I Error in Normal Distributions.

    ERIC Educational Resources Information Center

    Neel, John H.; Stallings, William M.

    An influential statistics text recommends the Levene test for homogeneity of variance. A recent note suggests that Levene's test is upwardly biased for small samples. Another report shows inflated alpha estimates and low power. Neither study utilized more than two sample sizes. This Monte Carlo study involved sampling from a normal population for…
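
    The study design can be reproduced in miniature with SciPy's implementation of Levene's test: sampling repeatedly from a single normal population and counting rejections gives an empirical Type I error rate to compare against the nominal alpha. The sample size and replication count below are arbitrary choices.

    ```python
    import numpy as np
    from scipy.stats import levene

    rng = np.random.default_rng(42)
    n_rep, n, alpha = 5000, 10, 0.05

    # Both groups come from the same normal population, so every rejection
    # is a Type I error; the empirical rate should be close to alpha.
    rejections = 0
    for _ in range(n_rep):
        a = rng.standard_normal(n)
        b = rng.standard_normal(n)
        _, p = levene(a, b)
        rejections += p < alpha

    print(f"empirical Type I error: {rejections / n_rep:.3f}")
    ```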

  13. Polynomial Phase Estimation Based on Adaptive Short-Time Fourier Transform

    PubMed Central

    Jing, Fulong; Zhang, Chunjie; Si, Weijian; Wang, Yu; Jiao, Shuhong

    2018-01-01

    Polynomial phase signals (PPSs) have numerous applications in many fields, including radar, sonar, geophysics, and radio communication systems. Therefore, estimation of PPS coefficients is very important. In this paper, a novel approach to PPS parameter estimation based on the adaptive short-time Fourier transform (ASTFT), called the PPS-ASTFT estimator, is proposed. Using the PPS-ASTFT estimator, both one-dimensional and multi-dimensional searches and the error propagation problems that widely exist in the PPS field are avoided. In the proposed algorithm, the instantaneous frequency (IF) is estimated by the S-transform (ST), which can preserve information on signal phase and provide a variable resolution similar to the wavelet transform (WT). The width of the ASTFT analysis window is set equal to the local stationary length, which is measured by the instantaneous frequency gradient (IFG). The IFG is calculated by principal component analysis (PCA), which is robust to noise. Moreover, to improve estimation accuracy, a refinement strategy is presented to estimate signal parameters. Since the PPS-ASTFT avoids parameter search, the proposed algorithm can be computed in a reasonable amount of time. The estimation performance, computational cost, and implementation of the PPS-ASTFT are also analyzed. The conducted numerical simulations support our theoretical results and demonstrate an excellent statistical performance of the proposed algorithm. PMID:29438317
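
    A stripped-down illustration of the STFT-ridge idea behind such estimators, using a fixed analysis window; the paper's adaptive window selection, S-transform IF estimate, and refinement step are not reproduced here.

    ```python
    import numpy as np
    from scipy.signal import stft

    # Second-order PPS (linear FM): phase = 2*pi*(a1*t + a2*t^2), so the
    # instantaneous frequency is f(t) = a1 + 2*a2*t.
    fs = 1000.0
    t = np.arange(1000) / fs
    a1, a2 = 100.0, 150.0
    x = np.exp(2j * np.pi * (a1 * t + a2 * t**2))

    f, tt, Z = stft(x, fs=fs, nperseg=128, return_onesided=False)
    ridge = f[np.argmax(np.abs(Z), axis=0)]       # coarse IF estimate

    # Least-squares fit of f(t) = a1 + 2*a2*t over interior frames.
    sl = slice(2, -2)                             # avoid zero-padded edges
    A = np.column_stack([np.ones(tt[sl].size), 2 * tt[sl]])
    coef, *_ = np.linalg.lstsq(A, ridge[sl], rcond=None)
    print(coef)                                   # approximately [a1, a2]
    ```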

  14. Polynomial Phase Estimation Based on Adaptive Short-Time Fourier Transform.

    PubMed

    Jing, Fulong; Zhang, Chunjie; Si, Weijian; Wang, Yu; Jiao, Shuhong

    2018-02-13

    Polynomial phase signals (PPSs) have numerous applications in many fields, including radar, sonar, geophysics, and radio communication systems. Therefore, estimation of PPS coefficients is very important. In this paper, a novel approach to PPS parameter estimation based on the adaptive short-time Fourier transform (ASTFT), called the PPS-ASTFT estimator, is proposed. Using the PPS-ASTFT estimator, both one-dimensional and multi-dimensional searches and the error propagation problems that widely exist in the PPS field are avoided. In the proposed algorithm, the instantaneous frequency (IF) is estimated by the S-transform (ST), which can preserve information on signal phase and provide a variable resolution similar to the wavelet transform (WT). The width of the ASTFT analysis window is set equal to the local stationary length, which is measured by the instantaneous frequency gradient (IFG). The IFG is calculated by principal component analysis (PCA), which is robust to noise. Moreover, to improve estimation accuracy, a refinement strategy is presented to estimate signal parameters. Since the PPS-ASTFT avoids parameter search, the proposed algorithm can be computed in a reasonable amount of time. The estimation performance, computational cost, and implementation of the PPS-ASTFT are also analyzed. The conducted numerical simulations support our theoretical results and demonstrate an excellent statistical performance of the proposed algorithm.

  15. Effects of errors and gaps in spatial data sets on assessment of conservation progress.

    PubMed

    Visconti, P; Di Marco, M; Álvarez-Romero, J G; Januchowski-Hartley, S R; Pressey, R L; Weeks, R; Rondinini, C

    2013-10-01

    Data on the location and extent of protected areas, ecosystems, and species' distributions are essential for determining gaps in biodiversity protection and identifying future conservation priorities. However, these data sets always come with errors in the maps and associated metadata. Errors are often overlooked in conservation studies, despite their potential negative effects on the reported extent of protection of species and ecosystems. We used 3 case studies to illustrate the implications of 3 sources of errors in reporting progress toward conservation objectives: protected areas with unknown boundaries that are replaced by buffered centroids, propagation of multiple errors in spatial data, and incomplete protected-area data sets. As of 2010, the frequency of protected areas with unknown boundaries in the World Database on Protected Areas (WDPA) caused the estimated extent of protection of 37.1% of the terrestrial Neotropical mammals to be overestimated by an average 402.8% and of 62.6% of species to be underestimated by an average 10.9%. Estimated level of protection of the world's coral reefs was 25% higher when using recent finer-resolution data on coral reefs as opposed to globally available coarse-resolution data. Accounting for additional data sets not yet incorporated into WDPA contributed up to 6.7% of additional protection to marine ecosystems in the Philippines. We suggest ways for data providers to reduce the errors in spatial and ancillary data and ways for data users to mitigate the effects of these errors on biodiversity assessments. © 2013 Society for Conservation Biology.

  16. Portable bioimpedance monitor evaluation for continuous impedance measurements. Towards wearable plethysmography applications.

    PubMed

    Ferreira, J; Seoane, F; Lindecrantz, K

    2013-01-01

    Personalised Health Systems (PHS), which could improve patients' quality of life and reduce health care costs for society, among other benefits, are emerging. The purpose of this paper is to study the capabilities of the System-on-Chip Impedance Network Analyser AD5933 in performing high-speed, single-frequency, continuous bioimpedance measurements. From a theoretical analysis, the minimum continuous impedance estimation time was determined, and the AD5933 with a custom 4-electrode Analog Front-End (AFE) was used to experimentally determine the maximum continuous impedance estimation frequency as well as the system impedance estimation error when measuring a 2R1C electrical circuit model. Transthoracic Electrical Bioimpedance (TEB) measurements in a healthy subject were obtained using 3M gel electrodes in a tetrapolar lateral spot electrode configuration. The obtained TEB raw signal was filtered in MATLAB to obtain the respiration and cardiogenic signals, and from the cardiogenic signal the impedance derivative signal (dZ/dt) was also calculated. The results show that the maximum continuous impedance estimation rate was approximately 550 measurements per second, with a magnitude estimation error below 1% on 2R1C-parallel bridge measurements. The displayed respiration and cardiac signals exhibited good performance, and they could be used to obtain valuable information in some plethysmography monitoring applications. The obtained results suggest that the AD5933-based monitor could be used for the implementation of a portable and wearable bioimpedance plethysmograph for applications such as Impedance Cardiography. These results, combined with the research done in functional garments and textile electrodes, might enable the implementation of PHS applications in a relatively short time from now.
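
    For reference, the 2R1C test circuit mentioned above has a closed-form impedance. The sketch below assumes one common 2R1C topology (R0 in series with a parallel R1-C1 branch) and illustrative component values, since the abstract does not list them.

    ```python
    import numpy as np

    def z_2r1c(f, r0, r1, c1):
        """Impedance of a 2R1C model: R0 in series with (R1 || C1).
        One common 2R1C topology; component values are assumptions."""
        w = 2 * np.pi * f
        return r0 + r1 / (1 + 1j * w * r1 * c1)

    f = np.logspace(2, 6, 200)            # 100 Hz .. 1 MHz
    z = z_2r1c(f, r0=50.0, r1=100.0, c1=10e-9)
    print(abs(z[0]), abs(z[-1]))          # ~R0+R1 at low f, ~R0 at high f
    ```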

  17. 3-D decoupled inversion of complex conductivity data in the real number domain

    NASA Astrophysics Data System (ADS)

    Johnson, Timothy C.; Thomle, Jonathan

    2018-01-01

    Complex conductivity imaging (also called induced polarization imaging or spectral induced polarization imaging when conducted at multiple frequencies) involves estimating the frequency-dependent complex electrical conductivity distribution of the subsurface. The superior diagnostic capabilities provided by complex conductivity spectra have driven advancements in mechanistic understanding of complex conductivity as well as modelling and inversion approaches over the past several decades. In this work, we demonstrate the theory and application for an approach to 3-D modelling and inversion of complex conductivity data in the real number domain. Beginning from first principles, we demonstrate how the equations for the real and imaginary components of the complex potential may be decoupled. This leads to a description of the real and imaginary source current terms, and a corresponding assessment of error arising from an assumption necessary to complete the decoupled modelling. We show that for most earth materials, which exhibit relatively small phases (e.g. less than 0.2 radians) in complex conductivity, these errors become insignificant. For higher phase materials, the errors may be quantified and corrected through an iterative procedure. We demonstrate the accuracy of numerical forward solutions by direct comparison to corresponding analytic solutions. We demonstrate the inversion using both synthetic and field examples with data collected over a waste infiltration trench, at frequencies ranging from 0.5 to 7.5 Hz.
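
    The quoted 0.2 radian bound can be sanity-checked with a generic small-angle argument; this is not the paper's specific decoupling error term, only an order-of-magnitude check. Replacing a complex conductivity's magnitude by its real part errs by 1 - cos(phi), roughly phi^2/2 for small phase angles.

    ```python
    import numpy as np

    # Relative magnitude error of using |sigma*|cos(phi) in place of
    # |sigma*|: about phi^2/2 for small phases.
    phi = np.array([0.05, 0.1, 0.2])      # rad
    print(1 - np.cos(phi))                # [0.00125, 0.005, 0.0199] -> <2%
    ```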

  18. In-flight performance analysis of MEMS GPS receiver and its application to precise orbit determination of APOD-A satellite

    NASA Astrophysics Data System (ADS)

    Gu, Defeng; Liu, Ye; Yi, Bin; Cao, Jianfeng; Li, Xie

    2017-12-01

    An experimental satellite mission termed atmospheric density detection and precise orbit determination (APOD) was developed by China and launched on 20 September 2015. The micro-electro-mechanical system (MEMS) GPS receiver provides the basis for precise orbit determination (POD) within the range of a few decimetres. The in-flight performance of the MEMS GPS receiver was assessed. The average number of tracked GPS satellites is 10.7. However, only 5.1 GPS satellites on average are available for dual-frequency navigation because of the loss of many L2 observations at low elevations. The variations in the multipath error for C1 and P2 were estimated, and the maximum multipath error reached up to 0.8 m. The average code noises are 0.28 m (C1) and 0.69 m (P2). Using the MEMS GPS receiver, the orbit of the APOD nanosatellite (APOD-A) was precisely determined. Two types of orbit solutions are proposed: a dual-frequency solution and a single-frequency solution. The antenna phase center variations (PCVs) and code residual variations (CRVs) were estimated, and the maximum value of the PCVs is 4.0 cm. After correcting the antenna PCVs and CRVs, the final orbit precisions for the dual-frequency and single-frequency solutions were 7.71 cm and 12.91 cm, respectively, validated using satellite laser ranging (SLR) data; these represent improvements of 3.35 cm and 25.25 cm, respectively. The average RMS of the 6-h overlap differences in the dual-frequency solution between two consecutive days in three dimensions (3D) is 4.59 cm. The MEMS GPS receiver is a Chinese indigenous onboard receiver that was successfully used in the POD of a nanosatellite. This study has important reference value for improving the MEMS GPS receiver and its application to other low Earth orbit (LEO) nanosatellites.

  19. Increasing sensitivity in the measurement of heart rate variability: the method of non-stationary RR time-frequency analysis.

    PubMed

    Melkonian, D; Korner, A; Meares, R; Bahramali, H

    2012-10-01

    A novel method for the time-frequency analysis of non-stationary heart rate variability (HRV) is developed that introduces the fragmentary spectrum as a measure bringing together the frequency content, timing, and duration of HRV segments. The fragmentary spectrum is calculated by the similar basis function algorithm. This numerical tool for time-to-frequency and frequency-to-time Fourier transformations accepts both uniform and non-uniform sampling intervals, and is applicable to signal segments of arbitrary length. Once the fragmentary spectrum is calculated, the inverse transform recovers the original signal and reveals the accuracy of the spectral estimates. Numerical experiments show that discontinuities at the boundaries of the succession of inter-beat intervals can cause unacceptable distortions of the spectral estimates. We have developed a measure that we call the "RR deltagram" as a form of the HRV data that minimises spectral errors. The analysis of experimental HRV data from real-life and controlled-breathing conditions suggests transient oscillatory components as functionally meaningful elements of the highly complex and irregular patterns of HRV. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
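
    SciPy does not ship the similar basis function algorithm, but the non-uniform-sampling issue the method addresses can be illustrated with the standard Lomb-Scargle periodogram, which likewise accepts irregular beat times. The synthetic RR series below is an assumption for illustration only.

    ```python
    import numpy as np
    from scipy.signal import lombscargle

    # Synthetic non-uniform RR series: mean 1.0 s beats with a 0.25 Hz
    # (respiratory-band) modulation of +/- 50 ms.
    n = 300
    t0 = np.arange(n) * 1.0                      # approximate beat times
    rr = 1.0 + 0.05 * np.sin(2 * np.pi * 0.25 * t0)
    t = np.cumsum(rr)                            # actual (irregular) beat times

    # Lomb-Scargle handles the irregular sampling directly (angular freqs).
    f = np.linspace(0.01, 0.5, 500)              # Hz, HRV band of interest
    p = lombscargle(t, rr - rr.mean(), 2 * np.pi * f)
    print(f[np.argmax(p)])                       # peak near 0.25 Hz
    ```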

  20. Phobos laser ranging: Numerical Geodesy experiments for Martian system science

    NASA Astrophysics Data System (ADS)

    Dirkx, D.; Vermeersen, L. L. A.; Noomen, R.; Visser, P. N. A. M.

    2014-09-01

    Laser ranging is emerging as a technology for use over (inter)planetary distances, having the advantage of high (mm-cm) precision and accuracy and low mass and power consumption. We have performed numerical simulations to assess the science return, in terms of geodetic observables, of a hypothetical Phobos lander performing active two-way laser ranging with Earth-based stations. We focus our analysis on the estimation of Phobos and Mars gravitational, tidal and rotational parameters. We explicitly include systematic error sources in addition to uncorrelated random observation errors. This is achieved through the use of consider covariance parameters, specifically the ground station position and observation biases. Uncertainties for the consider parameters are set at 5 mm and at 1 mm for the Gaussian uncorrelated observation noise (for an observation integration time of 60 s). We perform the analysis for a mission duration up to 5 years. It is shown that a Phobos Laser Ranging (PLR) mission can contribute to a better understanding of the Martian system, opening the possibility for improved determination of a variety of physical parameters of Mars and Phobos. The simulations show that the mission concept is especially suited for estimating Mars tidal deformation parameters, estimating degree-2 Love numbers with absolute uncertainties at the 10^-2 to 10^-4 level after 1 and 4 years, respectively, and providing separate estimates for the Martian quality factors at Sun- and Phobos-forced frequencies. The estimation of Phobos libration amplitudes and gravity field coefficients provides an estimate of Phobos' relative equatorial and polar moments of inertia with absolute uncertainties of 10^-4 and 10^-7, respectively, after 1 year. The observation of Phobos tidal deformation will be able to differentiate between a rubble-pile and a monolithic interior within 2 years. For all parameters, systematic errors have a much stronger influence (per unit uncertainty) than the uncorrelated Gaussian observation noise. This indicates the need for the inclusion of systematic errors in simulation studies and special attention to the mitigation of these errors in mission and system design.
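
    The consider-covariance machinery mentioned above is standard batch least-squares theory from orbit determination texts; a minimal sketch with random design matrices, using the paper's 1 mm observation noise and 5 mm consider uncertainties as inputs:

    ```python
    import numpy as np

    # Consider covariance: unestimated-but-uncertain "consider" parameters
    # (e.g. station coordinates, biases) inflate the formal covariance.
    rng = np.random.default_rng(0)
    m, n, q = 200, 3, 2                  # observations, estimated, consider
    Hx = rng.standard_normal((m, n))     # partials w.r.t. estimated params
    Hc = rng.standard_normal((m, q))     # partials w.r.t. consider params

    sigma_obs = 1e-3                     # 1 mm Gaussian noise
    W = np.eye(m) / sigma_obs**2
    Pcc = (5e-3)**2 * np.eye(q)          # 5 mm consider uncertainty

    P = np.linalg.inv(Hx.T @ W @ Hx)     # formal (noise-only) covariance
    S = -P @ Hx.T @ W @ Hc               # sensitivity to consider params
    P_consider = P + S @ Pcc @ S.T       # consider covariance

    print(np.sqrt(np.diag(P)), np.sqrt(np.diag(P_consider)))
    ```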

  1. Association of resident fatigue and distress with perceived medical errors.

    PubMed

    West, Colin P; Tan, Angelina D; Habermann, Thomas M; Sloan, Jeff A; Shanafelt, Tait D

    2009-09-23

    Fatigue and distress have been separately shown to be associated with medical errors. The contribution of each factor when assessed simultaneously is unknown. To determine the association of fatigue and distress with self-perceived major medical errors among resident physicians using validated metrics. Prospective longitudinal cohort study of categorical and preliminary internal medicine residents at Mayo Clinic, Rochester, Minnesota. Data were provided by 380 of 430 eligible residents (88.3%). Participants began training from 2003 to 2008 and completed surveys quarterly through February 2009. Surveys included self-assessment of medical errors, linear analog self-assessment of overall quality of life (QOL) and fatigue, the Maslach Burnout Inventory, the PRIME-MD depression screening instrument, and the Epworth Sleepiness Scale. Frequency of self-perceived, self-defined major medical errors was recorded. Associations of fatigue, QOL, burnout, and symptoms of depression with a subsequently reported major medical error were determined using generalized estimating equations for repeated measures. The mean response rate to individual surveys was 67.5%. Of the 356 participants providing error data (93.7%), 139 (39%) reported making at least 1 major medical error during the study period. In univariate analyses, there was an association of subsequent self-reported error with the Epworth Sleepiness Scale score (odds ratio [OR], 1.10 per unit increase; 95% confidence interval [CI], 1.03-1.16; P = .002) and fatigue score (OR, 1.14 per unit increase; 95% CI, 1.08-1.21; P < .001). Subsequent error was also associated with burnout (ORs per 1-unit change: depersonalization OR, 1.09; 95% CI, 1.05-1.12; P < .001; emotional exhaustion OR, 1.06; 95% CI, 1.04-1.08; P < .001; lower personal accomplishment OR, 0.94; 95% CI, 0.92-0.97; P < .001), a positive depression screen (OR, 2.56; 95% CI, 1.76-3.72; P < .001), and overall QOL (OR, 0.84 per unit increase; 95% CI, 0.79-0.91; P < .001). Fatigue and distress variables remained statistically significant when modeled together with little change in the point estimates of effect. Sleepiness and distress, when modeled together, showed little change in point estimates of effect, but sleepiness no longer had a statistically significant association with errors when adjusted for burnout or depression. Among internal medicine residents, higher levels of fatigue and distress are independently associated with self-perceived medical errors.

  2. Prevalence and predictors of antibiotic prescription errors in an emergency department, Central Saudi Arabia.

    PubMed

    Alanazi, Menyfah Q; Al-Jeraisy, Majed I; Salam, Mahmoud

    2015-01-01

    Inappropriate antibiotic (ATB) prescriptions are a threat to patients, leading to adverse drug reactions, bacterial resistance, and, subsequently, elevated hospital costs. Our aim was to evaluate ATB prescriptions in an emergency department of a tertiary care facility. A cross-sectional study was conducted by reviewing charts of patients complaining of infections. Patient characteristics (age, sex, weight, allergy, infection type) and prescription characteristics (class, dose, frequency, duration) were evaluated for appropriateness based on the AHFS Drug Information and the Drug Information Handbook. Descriptive and analytic statistics were applied. The sample, with an equal sex distribution, consisted of 5,752 cases: adults (≥15 years) = 61% and pediatrics (<15 years) = 39%. Around 55% complained of respiratory tract infections, 25% of urinary tract infections (UTIs), and 20% of other infections. Broad-spectrum ATBs were prescribed in 76% of the cases. Before the prescription, 82% of pediatric patients had their weight taken, while 18% had their weight estimated. Allergy checking was done in only 8% of cases. The prevalence of inappropriate ATB prescriptions with at least one type of error was 46.2% (pediatrics = 58% and adults = 39%). Errors were in ATB selection (2%), dosage (22%), frequency (4%), and duration (29%). Dosage and duration errors were significantly predominant among pediatrics (P<0.001 and P<0.0001, respectively). Selection error was higher among adults (P=0.001). Age stratification and binary logistic regression were applied. Significant predictors of inappropriate prescriptions were: 1) cephalosporin prescriptions (adults: P<0.001, adjusted odds ratio [adj OR] = 3.31; pediatrics: P<0.001, adj OR = 4.12) compared to penicillin; 2) UTIs (adults: P<0.001, adj OR = 2.78; pediatrics: P=0.039, adj OR = 0.73) compared to respiratory tract infections; 3) obtaining weight for pediatric patients before the ATB prescription (P<0.001, adj OR = 1.83) compared to those whose weight was estimated; and 4) broad-spectrum ATBs in adults (P=0.002, adj OR = 0.67). The prevalence of ATB prescription errors in this emergency department was generally high and was particularly common with cephalosporins, narrow-spectrum ATBs, and UTIs.

  3. Standardized error severity score (ESS) ratings to quantify risk associated with child restraint system (CRS) and booster seat misuse.

    PubMed

    Rudin-Brown, Christina M; Kramer, Chelsea; Langerak, Robin; Scipione, Andrea; Kelsey, Shelley

    2017-11-17

    Although numerous research studies have reported high levels of error and misuse of child restraint systems (CRS) and booster seats in experimental and real-world scenarios, conclusions are limited because they provide little information regarding which installation issues pose the highest risk and thus should be targeted for change. Beneficial to legislating bodies and researchers alike would be a standardized, globally relevant assessment of the potential injury risk associated with more common forms of CRS and booster seat misuse, which could be applied with observed error frequency (for example, in car seat clinics or during prototype user testing) to better identify and characterize the installation issues of greatest risk to safety. A group of 8 leading world experts in CRS and injury biomechanics, who were members of an international child safety project, estimated the potential injury severity associated with common forms of CRS and booster seat misuse. These injury risk error severity score (ESS) ratings were compiled and compared to scores from previous research that had used a similar procedure but with fewer respondents. To illustrate their application, and as part of a larger study examining CRS and booster seat labeling requirements, the new standardized ESS ratings were applied to objective installation performance data from 26 adult participants who installed a convertible (rear- vs. forward-facing) CRS and booster seat in a vehicle, and a child test dummy in the CRS and booster seat, using labels that only just met minimal regulatory requirements. The outcome measure, the risk priority number (RPN), represented the composite scores of injury risk and observed installation error frequency. Variability within the sample of ESS ratings in the present study was smaller than that generated in previous studies, indicating better agreement among experts on what constituted injury risk. Application of the new standardized ESS ratings to installation performance data revealed several areas of misuse of the CRS/booster seat associated with high potential injury risk. Collectively, the findings indicate that standardized ESS ratings are useful for estimating the injury risk potential associated with real-world CRS and booster seat installation errors.

  4. Experimental Evaluation of UWB Indoor Positioning for Sport Postures

    PubMed Central

    Defraye, Jense; Steendam, Heidi; Gerlo, Joeri; De Clercq, Dirk; De Poorter, Eli

    2018-01-01

    Radio frequency (RF)-based indoor positioning systems (IPSs) use wireless technologies (including Wi-Fi, Zigbee, Bluetooth, and ultra-wide band (UWB)) to estimate the location of persons in areas where no Global Positioning System (GPS) reception is available, for example in indoor stadiums or sports halls. Of these RF technologies, UWB is considered one of the most accurate because it can provide positioning estimates with centimeter-level accuracy. However, it is not yet known whether UWB can also offer such accurate position estimates during strenuous dynamic activities in which moves are characterized by fast changes in direction and velocity. To answer this question, this paper investigates the capabilities of UWB indoor localization systems for tracking athletes during their complex (and most of the time unpredictable) movements. To this end, we analyze the impact of on-body tag placement locations and human movement patterns on localization accuracy and communication reliability. Moreover, two localization algorithms (a particle filter and a Kalman filter) with different optimizations (bias removal, non-line-of-sight (NLoS) detection, and path determination) are implemented. It is shown that although the optimal choice of optimization depends on the type of movement pattern, some of the improvements can reduce the localization error by up to 31%. Overall, depending on the selected optimization and on-body tag placement, our algorithms show good results in terms of positioning accuracy, with average errors in position estimates of 20 cm. This makes UWB a suitable approach for tracking dynamic athletic activities. PMID:29315267
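
    As an illustration of the filtering step, a minimal constant-velocity Kalman filter for 2-D position fixes is sketched below. The noise covariances and measurements are illustrative assumptions, and the paper's bias-removal and NLoS extensions are not included.

    ```python
    import numpy as np

    # Constant-velocity Kalman filter for noisy 2-D position fixes.
    dt = 0.1
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], float)        # state: [x, y, vx, vy]
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], float)        # position is observed
    Q = 0.05 * np.eye(4)                       # process noise (tuning choice)
    R = 0.04 * np.eye(2)                       # ~20 cm measurement std

    x, P = np.zeros(4), np.eye(4)

    def kf_step(x, P, z):
        x = F @ x                              # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                    # update
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(4) - K @ H) @ P
        return x, P

    for z in np.array([[0.0, 0.0], [0.1, 0.05], [0.22, 0.11]]):
        x, P = kf_step(x, P, z)
    print(x[:2])                               # smoothed position estimate
    ```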

  5. The inverse problem of estimating the gravitational time dilation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gusev, A. V., E-mail: avg@sai.msu.ru; Litvinov, D. A.; Rudenko, V. N.

    2016-11-15

    Precise testing of the gravitational time dilation effect suggests comparing the clocks at points with different gravitational potentials. Such a configuration arises when radio frequency standards are installed at orbital and ground stations. The ground-based standard is accessible directly, while the spaceborne one is accessible only via the electromagnetic signal exchange. Reconstructing the current frequency of the spaceborne standard is an ill-posed inverse problem whose solution depends significantly on the characteristics of the stochastic electromagnetic background. The solution for Gaussian noise is known, but the nature of the standards themselves is associated with nonstationary fluctuations of a wide class of distributions. A solution is proposed for a background of flicker fluctuations with a spectrum (1/f)^γ, where 1 < γ < 3, and stationary increments. The results include formulas for the error in reconstructing the frequency of the spaceborne standard and numerical estimates for the accuracy of measuring the relativistic redshift effect.

  6. Impact of number of co-existing rotors and inter-electrode distance on accuracy of rotor localization

    PubMed Central

    Aronis, Konstantinos N.; Ashikaga, Hiroshi

    2018-01-01

    Background: Conflicting evidence exists on the efficacy of focal impulse and rotor modulation on atrial fibrillation ablation. A potential explanation is inaccurate rotor localization arising from the coexistence of multiple rotors and a relatively large (9-11 mm) inter-electrode distance (IED) of the multi-electrode basket catheter. Methods and results: We studied a numerical model of cardiac action potential to reproduce one through seven rotors in a two-dimensional lattice. We estimated rotor location using phase singularity, Shannon entropy, and dominant frequency. We then spatially downsampled the time series to create IEDs of 2-30 mm. The error of rotor localization was measured with reference to the dynamics of phase singularity at the original spatial resolution (IED = 1 mm). IED has a significant impact on the error using all the methods. When only one rotor is present, the error increases exponentially as a function of IED. At the clinical IED of 10 mm, the error is 3.8 mm (phase singularity), 3.7 mm (dominant frequency), and 11.8 mm (Shannon entropy). When more than one rotor is present, the error of rotor localization increases 10-fold. The error based on the phase singularity method at the clinical IED of 10 mm ranges from 30.0 mm (two rotors) to 96.1 mm (five rotors). Conclusions: The magnitude of error of rotor localization using a clinically available basket catheter in the presence of multiple rotors might be high enough to impact the accuracy of targeting during AF ablation. Improvement of catheter design and development of high-density mapping catheters may improve clinical outcomes of FIRM-guided AF ablation. PMID:28988690
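
    Of the three mapping methods compared, dominant frequency is the simplest to sketch: take the largest spectral peak of each site's time series. The implementation below is a generic illustration, not the study's code.

    ```python
    import numpy as np

    def dominant_frequency_map(signals, fs):
        """Dominant frequency per site: frequency of the largest spectral
        peak of each (mean-removed) time series."""
        n = signals.shape[-1]
        spec = np.abs(np.fft.rfft(
            signals - signals.mean(axis=-1, keepdims=True), axis=-1)) ** 2
        freqs = np.fft.rfftfreq(n, d=1 / fs)
        return freqs[np.argmax(spec[..., 1:], axis=-1) + 1]  # skip DC bin

    fs = 500.0
    t = np.arange(2048) / fs
    lattice = np.sin(2 * np.pi * 8.0 * t)[None, :] * np.ones((16, 1))
    print(dominant_frequency_map(lattice, fs))   # ~8 Hz at every site
    ```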

  7. Impact of number of co-existing rotors and inter-electrode distance on accuracy of rotor localization.

    PubMed

    Aronis, Konstantinos N; Ashikaga, Hiroshi

    Conflicting evidence exists on the efficacy of focal impulse and rotor modulation on atrial fibrillation ablation. A potential explanation is inaccurate rotor localization arising from the coexistence of multiple rotors and a relatively large (9-11 mm) inter-electrode distance (IED) of the multi-electrode basket catheter. We studied a numerical model of cardiac action potential to reproduce one through seven rotors in a two-dimensional lattice. We estimated rotor location using phase singularity, Shannon entropy, and dominant frequency. We then spatially downsampled the time series to create IEDs of 2-30 mm. The error of rotor localization was measured with reference to the dynamics of phase singularity at the original spatial resolution (IED = 1 mm). IED has a significant impact on the error using all the methods. When only one rotor is present, the error increases exponentially as a function of IED. At the clinical IED of 10 mm, the error is 3.8 mm (phase singularity), 3.7 mm (dominant frequency), and 11.8 mm (Shannon entropy). When more than one rotor is present, the error of rotor localization increases 10-fold. The error based on the phase singularity method at the clinical IED of 10 mm ranges from 30.0 mm (two rotors) to 96.1 mm (five rotors). The magnitude of error of rotor localization using a clinically available basket catheter in the presence of multiple rotors might be high enough to impact the accuracy of targeting during AF ablation. Improvement of catheter design and development of high-density mapping catheters may improve clinical outcomes of FIRM-guided AF ablation. Copyright © 2017 Elsevier Inc. All rights reserved.

  8. A visual detection model for DCT coefficient quantization

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.; Watson, Andrew B.

    1994-01-01

    The discrete cosine transform (DCT) is widely used in image compression and is part of the JPEG and MPEG compression standards. The degree of compression and the amount of distortion in the decompressed image are controlled by the quantization of the transform coefficients. The standards do not specify how the DCT coefficients should be quantized. One approach is to set the quantization level for each coefficient so that the quantization error is near the threshold of visibility. Results from previous work are combined to form the current best detection model for DCT coefficient quantization noise. This model predicts sensitivity as a function of display parameters, enabling quantization matrices to be designed for display situations varying in luminance, veiling light, and spatial-frequency-related conditions (pixel size, viewing distance, and aspect ratio). It also allows arbitrary color space directions for the representation of color. A model-based method of optimizing the quantization matrix for an individual image was developed. The model described above provides visual thresholds for each DCT frequency. These thresholds are adjusted within each block for visual light adaptation and contrast masking. For a given quantization matrix, the DCT quantization errors are scaled by the adjusted thresholds to yield perceptual errors. These errors are pooled nonlinearly over the image to yield the total perceptual error. With this model one may estimate the quantization matrix for a particular image that yields the minimum bit rate for a given total perceptual error, or the minimum perceptual error for a given bit rate. Custom matrices for a number of images show clear improvement over image-independent matrices. Custom matrices are compatible with the JPEG standard, which requires transmission of the quantization matrix.
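
    The basic quantize-and-reconstruct step that the model evaluates perceptually can be sketched with SciPy's DCT; the quantization matrix below is a hypothetical stand-in for a threshold-derived matrix, not one produced by the model.

    ```python
    import numpy as np
    from scipy.fft import dctn, idctn

    rng = np.random.default_rng(0)
    block = rng.integers(0, 256, (8, 8)).astype(float) - 128  # one 8x8 block

    # Hypothetical quantization matrix: coarser at higher DCT frequencies,
    # mimicking how perceptual thresholds rise with spatial frequency.
    u, v = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
    Qm = 8.0 + 4.0 * (u + v)

    coeffs = dctn(block, norm="ortho")
    quantized = np.round(coeffs / Qm)          # quantization introduces error
    recon = idctn(quantized * Qm, norm="ortho")

    print(np.abs(block - recon).max())         # worst-case pixel error
    ```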

  9. Drought Persistence in Models and Observations

    NASA Astrophysics Data System (ADS)

    Moon, Heewon; Gudmundsson, Lukas; Seneviratne, Sonia

    2017-04-01

    Many regions of the world experienced drought events that persisted for several years and caused substantial economic and ecological impacts in the 20th century. However, it remains unclear whether there are significant trends in the frequency or severity of these prolonged drought events. In particular, an important issue is linked to systematic biases in the representation of persistent drought events in climate models, which impedes analysis related to the detection and attribution of drought trends. This study assesses drought persistence errors in global climate model (GCM) simulations from the 5th phase of the Coupled Model Intercomparison Project (CMIP5) over the period 1901-2010. The model simulations are compared with five gridded observational data products. The analysis focuses on two aspects: the identification of systematic biases in the models and the partitioning of the spread of the drought-persistence error into four possible sources of uncertainty: model uncertainty, observation uncertainty, internal climate variability, and the estimation error of drought persistence. We use monthly and yearly dry-to-dry transition probabilities as estimates of drought persistence, with drought conditions defined as negative precipitation anomalies. For both time scales we find that most model simulations consistently underestimate drought persistence except in a few regions such as India and eastern South America. Partitioning the spread of the drought-persistence error shows that at the monthly time scale model uncertainty and observation uncertainty are dominant, while the contribution from internal variability plays a minor role in most cases. At the yearly scale, the spread of the drought-persistence error is dominated by the estimation error, indicating that the partitioning is not statistically significant owing to the limited number of considered time steps. These findings reveal systematic errors in the representation of drought persistence in current climate models and highlight the main contributors to the uncertainty of the drought-persistence error. Future analyses will focus on investigating the temporal propagation of drought persistence to better understand the causes of the identified errors in the representation of drought persistence in state-of-the-art climate models.
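
    The persistence estimate used here, the dry-to-dry transition probability, is straightforward to compute from a precipitation-anomaly series; a minimal sketch:

    ```python
    import numpy as np

    def dry_to_dry_probability(precip_anomaly):
        """Empirical P(dry_t | dry_{t-1}), with 'dry' defined as a
        negative precipitation anomaly."""
        dry = np.asarray(precip_anomaly) < 0
        prev, curr = dry[:-1], dry[1:]
        n_dry = prev.sum()
        return (prev & curr).sum() / n_dry if n_dry else np.nan

    rng = np.random.default_rng(3)
    # White-noise anomalies have no persistence, so the probability ~0.5.
    print(dry_to_dry_probability(rng.standard_normal(1200)))
    ```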

  10. Potential accuracy of methods of laser Doppler anemometry in the single-particle scattering mode

    NASA Astrophysics Data System (ADS)

    Sobolev, V. S.; Kashcheeva, G. A.

    2017-05-01

    The potential accuracy of methods of laser Doppler anemometry is determined for the single-particle scattering mode, where the only disturbing factor is shot noise generated by the optical signal itself. The problem is solved by means of computer simulations with the maximum likelihood method. The initial parameters of the simulations are chosen to be the number of real or virtual interference fringes in the measurement volume of the anemometer, the signal discretization frequency, and some typical values of the signal-to-shot-noise ratio. The parameters to be estimated are the Doppler frequency as the basic parameter carrying information about the process velocity, the signal amplitude containing information about the size and concentration of scattering particles, and the instant when the particles arrive at the center of the measurement volume of the anemometer, which is needed for reconstruction of the examined flow velocity as a function of time. The estimates obtained in this study show that shot noise produces a minor effect (0.004-0.04%) on the frequency determination accuracy in the entire range of chosen values of the initial parameters. For the signal amplitude and the instant when the particles arrive at the center of the measurement volume of the anemometer, the errors induced by shot noise are in the interval of 0.2-3.5%; if the number of interference fringes is sufficiently large (more than 20), the errors do not exceed 0.2% regardless of the shot noise level.
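
    For a single tone in white Gaussian noise, the maximum-likelihood frequency estimate coincides with the periodogram peak, which gives a feel for the simulation setup. The study models Gaussian-windowed Doppler bursts; the plain tone below is a simplification.

    ```python
    import numpy as np

    # ML frequency estimate for a tone in white noise: the periodogram
    # peak; zero-padding the FFT refines the search grid.
    fs, n = 100e6, 1024
    t = np.arange(n) / fs
    f0 = 12.34e6
    rng = np.random.default_rng(0)
    x = np.sin(2 * np.pi * f0 * t) + 0.3 * rng.standard_normal(n)

    nfft = 16 * n
    spec = np.abs(np.fft.rfft(x, nfft))
    f = np.fft.rfftfreq(nfft, 1 / fs)
    print(f[np.argmax(spec)])            # close to f0
    ```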

  11. Field evaluation of distance-estimation error during wetland-dependent bird surveys

    USGS Publications Warehouse

    Nadeau, Christopher P.; Conway, Courtney J.

    2012-01-01

    Context: The most common methods to estimate detection probability during avian point-count surveys involve recording a distance between the survey point and individual birds detected during the survey period. Accurately measuring or estimating distance is an important assumption of these methods; however, this assumption is rarely tested in the context of aural avian point-count surveys. Aims: We expand on recent bird-simulation studies to document the error associated with estimating distance to calling birds in a wetland ecosystem. Methods: We used two approaches to estimate the error associated with five surveyors' distance estimates between the survey point and calling birds, and to determine the factors that affect a surveyor's ability to estimate distance. Key results: We observed biased and imprecise distance estimates when estimating distance to simulated birds in a point-count scenario (x̄_error = -9 m, s.d._error = 47 m) and when estimating distances to real birds during field trials (x̄_error = 39 m, s.d._error = 79 m). The amount of bias and precision in distance estimates differed among surveyors; surveyors with more training and experience were less biased and more precise when estimating distance to both real and simulated birds. Three environmental factors were important in explaining the error associated with distance estimates, including the measured distance from the bird to the surveyor, the volume of the call and the species of bird. Surveyors tended to make large overestimations to birds close to the survey point, which is an especially serious error in distance sampling. Conclusions: Our results suggest that distance-estimation error is prevalent, but surveyor training may be the easiest way to reduce distance-estimation error. Implications: The present study has demonstrated how relatively simple field trials can be used to estimate the error associated with distance estimates used to estimate detection probability during avian point-count surveys. Evaluating distance-estimation errors will allow investigators to better evaluate the accuracy of avian density and trend estimates. Moreover, investigators who evaluate distance-estimation errors could employ recently developed models to incorporate distance-estimation error into analyses. We encourage further development of such models, including the inclusion of such models into distance-analysis software.

  12. Estimating selected low-flow frequency statistics and harmonic-mean flows for ungaged, unregulated streams in Indiana

    USGS Publications Warehouse

    Martin, Gary R.; Fowler, Kathleen K.; Arihood, Leslie D.

    2016-09-06

    Information on low-flow characteristics of streams is essential for the management of water resources. This report provides equations for estimating the 1-, 7-, and 30-day mean low flows for a recurrence interval of 10 years and the harmonic-mean flow at ungaged, unregulated stream sites in Indiana. These equations were developed using the low-flow statistics and basin characteristics for 108 continuous-record streamgages in Indiana with at least 10 years of daily mean streamflow data through the 2011 climate year (April 1 through March 31). The equations were developed in cooperation with the Indiana Department of Environmental Management. Regression techniques were used to develop the equations for estimating low-flow frequency statistics and the harmonic-mean flows on the basis of drainage-basin characteristics. A geographic information system was used to measure basin characteristics for selected streamgages. A final set of 25 basin characteristics measured at all the streamgages was evaluated to choose the best predictors of the low-flow statistics. Logistic-regression equations applicable statewide are presented for estimating the probability that selected low-flow frequency statistics equal zero. These equations use the explanatory variables total drainage area, average transmissivity of the full thickness of the unconsolidated deposits within 1,000 feet of the stream network, and latitude of the basin outlet. The percentage of the streamgage low-flow statistics correctly classified as zero or nonzero using the logistic-regression equations ranged from 86.1 to 88.9 percent. Generalized-least-squares regression equations applicable statewide for estimating nonzero low-flow frequency statistics use total drainage area, the average hydraulic conductivity of the top 70 feet of unconsolidated deposits, the slope of the basin, and the index of permeability and thickness of the Quaternary surficial sediments as explanatory variables. The average standard error of prediction of these regression equations ranges from 55.7 to 61.5 percent. Regional weighted-least-squares regression equations were developed for estimating the harmonic-mean flows by dividing the State into three low-flow regions. The Northern region uses total drainage area and the average transmissivity of the entire thickness of unconsolidated deposits as explanatory variables. The Central region uses total drainage area, the average hydraulic conductivity of the entire thickness of unconsolidated deposits, and the index of permeability and thickness of the Quaternary surficial sediments. The Southern region uses total drainage area and the percent of the basin covered by forest. The average standard error of prediction for these equations ranges from 39.3 to 66.7 percent. The regional regression equations are applicable only to stream sites with low flows unaffected by regulation and to stream sites with drainage-basin characteristic values within specified limits. Caution is advised when applying the equations for basins with characteristics near the applicable limits, for basins with karst drainage features, and for urbanized basins. Extrapolations near and beyond the applicable basin-characteristic limits will have unknown errors that may be large. Equations are presented for use in estimating the 90-percent prediction interval of the low-flow statistics estimated by use of the regression equations at a given stream site. The regression equations are to be incorporated into the U.S. Geological Survey StreamStats Web-based application for Indiana. StreamStats allows users to select a stream site on a map and automatically measure the needed basin characteristics and compute the estimated low-flow statistics and associated prediction intervals.
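
    The first-stage logistic regression described above can be sketched as follows. The three explanatory variables follow the abstract, but all numeric values are synthetic placeholders rather than the report's data.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Logistic regression for P(low-flow statistic equals zero) from basin
    # characteristics; features follow the abstract, values are synthetic.
    rng = np.random.default_rng(0)
    n = 108                                    # streamgages used in the study
    area  = rng.lognormal(4.0, 1.0, n)         # total drainage area
    trans = rng.lognormal(6.0, 0.5, n)         # average transmissivity
    lat   = rng.uniform(37.8, 41.8, n)         # latitude of basin outlet
    X = np.column_stack([np.log(area), np.log(trans), lat])
    y = (rng.random(n) < 0.3).astype(int)      # 1 = statistic equals zero

    model = LogisticRegression(max_iter=1000).fit(X, y)
    print(model.predict_proba(X[:1])[0, 1])    # P(zero low flow), one basin
    ```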

  13. Density scaling on n = 1 error field penetration in ohmically heated discharges in EAST

    NASA Astrophysics Data System (ADS)

    Wang, Hui-Hui; Sun, You-Wen; Shi, Tong-Hui; Zang, Qing; Liu, Yue-Qiang; Yang, Xu; Gu, Shuai; He, Kai-Yang; Gu, Xiang; Qian, Jin-Ping; Shen, Biao; Luo, Zheng-Ping; Chu, Nan; Jia, Man-Ni; Sheng, Zhi-Cai; Liu, Hai-Qing; Gong, Xian-Zu; Wan, Bao-Nian; Contributors, EAST

    2018-05-01

    Density scaling of error field penetration in EAST is investigated with different n = 1 magnetic perturbation coil configurations in ohmically heated discharges. The density scalings of the error field penetration thresholds under two magnetic perturbation spectra are b_r ∝ n_e^0.5 and b_r ∝ n_e^0.6, where b_r is the error field and n_e is the line-averaged electron density. One difficulty in understanding the density scaling is that key parameters other than density in determining the field penetration process may also change when the plasma density changes. Therefore, they should be determined from experiments. The estimated theoretical analysis (b_r ∝ n_e^0.54 in the lower density region and b_r ∝ n_e^0.40 in the higher density region), using the density dependence of viscosity diffusion time, electron temperature, and mode frequency measured from the experiments, is consistent with the observed scaling. One of the key points in reproducing the observed scaling in EAST is that the viscosity diffusion time estimated from the energy confinement time is almost constant. It means that the plasma confinement lies in the saturated ohmic confinement regime rather than the linear Neo-Alcator regime that caused the weak density dependence in previous theoretical studies.
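
    The scaling exponents quoted above come from log-log fits of the threshold field against density; a minimal sketch of such a fit on synthetic values:

    ```python
    import numpy as np

    # Log-log fit of penetration threshold against line-averaged density:
    # the slope estimates the exponent alpha in b_r ∝ n_e^alpha.
    rng = np.random.default_rng(1)
    n_e = np.array([1.0, 1.5, 2.0, 3.0, 4.5])            # 10^19 m^-3, synthetic
    b_r = 0.8 * n_e**0.5 * np.exp(0.03 * rng.standard_normal(n_e.size))

    alpha, log_c = np.polyfit(np.log(n_e), np.log(b_r), 1)
    print(alpha)                                         # close to 0.5
    ```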

  14. Calibration of the Advanced LIGO detectors for the discovery of the binary black-hole merger GW150914

    NASA Astrophysics Data System (ADS)

    Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M. R.; Ackley, K.; Adams, C.; Addesso, P.; Adhikari, R. X.; Adya, V. B.; Affeldt, C.; Aggarwal, N.; Aguiar, O. D.; Ain, A.; Ajith, P.; Allen, B.; Altin, P. A.; Amariutei, D. V.; Anderson, S. B.; Anderson, W. G.; Arai, K.; Araya, M. C.; Arceneaux, C. C.; Areeda, J. S.; Arun, K. G.; Ashton, G.; Ast, M.; Aston, S. M.; Aufmuth, P.; Aulbert, C.; Babak, S.; Baker, P. T.; Ballmer, S. W.; Barayoga, J. C.; Barclay, S. E.; Barish, B. C.; Barker, D.; Barr, B.; Barsotti, L.; Bartlett, J.; Bartos, I.; Bassiri, R.; Batch, J. C.; Baune, C.; Behnke, B.; Bell, A. S.; Bell, C. J.; Berger, B. K.; Bergman, J.; Bergmann, G.; Berry, C. P. L.; Betzwieser, J.; Bhagwat, S.; Bhandare, R.; Bilenko, I. A.; Billingsley, G.; Birch, J.; Birney, R.; Biscans, S.; Bisht, A.; Biwer, C.; Blackburn, J. K.; Blair, C. D.; Blair, D.; Blair, R. M.; Bock, O.; Bodiya, T. P.; Bogan, C.; Bohe, A.; Bojtos, P.; Bond, C.; Bork, R.; Bose, S.; Brady, P. R.; Braginsky, V. B.; Brau, J. E.; Brinkmann, M.; Brockill, P.; Brooks, A. F.; Brown, D. A.; Brown, D. D.; Brown, N. M.; Buchanan, C. C.; Buikema, A.; Buonanno, A.; Byer, R. L.; Cadonati, L.; Cahillane, C.; Calderón Bustillo, J.; Callister, T.; Camp, J. B.; Cannon, K. C.; Cao, J.; Capano, C. D.; Caride, S.; Caudill, S.; Cavaglià, M.; Cepeda, C.; Chakraborty, R.; Chalermsongsak, T.; Chamberlin, S. J.; Chan, M.; Chao, S.; Charlton, P.; Chen, H. Y.; Chen, Y.; Cheng, C.; Cho, H. S.; Cho, M.; Chow, J. H.; Christensen, N.; Chu, Q.; Chung, S.; Ciani, G.; Clara, F.; Clark, J. A.; Collette, C. G.; Cominsky, L.; Constancio, M.; Cook, D.; Corbitt, T. R.; Cornish, N.; Corsi, A.; Costa, C. A.; Coughlin, M. W.; Coughlin, S. B.; Countryman, S. T.; Couvares, P.; Coward, D. M.; Cowart, M. J.; Coyne, D. C.; Coyne, R.; Craig, K.; Creighton, J. D. E.; Cripe, J.; Crowder, S. G.; Cumming, A.; Cunningham, L.; Dal Canton, T.; Danilishin, S. L.; Danzmann, K.; Darman, N. S.; Dave, I.; Daveloza, H. P.; Davies, G. S.; Daw, E. J.; DeBra, D.; Del Pozzo, W.; Denker, T.; Dent, T.; Dergachev, V.; DeRosa, R.; DeSalvo, R.; Dhurandhar, S.; Díaz, M. C.; Di Palma, I.; Dojcinoski, G.; Donovan, F.; Dooley, K. L.; Doravari, S.; Douglas, R.; Downes, T. P.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Du, Z.; Dwyer, S. E.; Edo, T. B.; Edwards, M. C.; Effler, A.; Eggenstein, H.-B.; Ehrens, P.; Eichholz, J.; Eikenberry, S. S.; Engels, W.; Essick, R. C.; Etzel, T.; Evans, M.; Evans, T. M.; Everett, R.; Factourovich, M.; Fair, H.; Fairhurst, S.; Fan, X.; Fang, Q.; Farr, B.; Farr, W. M.; Favata, M.; Fays, M.; Fehrmann, H.; Fejer, M. M.; Ferreira, E. C.; Fisher, R. P.; Fletcher, M.; Frei, Z.; Freise, A.; Frey, R.; Fricke, T. T.; Fritschel, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Gabbard, H. A. G.; Gair, J. R.; Gaonkar, S. G.; Gaur, G.; Gehrels, N.; George, J.; Gergely, L.; Ghosh, A.; Giaime, J. A.; Giardina, K. D.; Gill, K.; Glaefke, A.; Goetz, E.; Goetz, R.; Gondan, L.; González, G.; Gopakumar, A.; Gordon, N. A.; Gorodetsky, M. L.; Gossan, S. E.; Graef, C.; Graff, P. B.; Grant, A.; Gras, S.; Gray, C.; Green, A. C.; Grote, H.; Grunewald, S.; Guo, X.; Gupta, A.; Gupta, M. K.; Gushwa, K. E.; Gustafson, E. K.; Gustafson, R.; Hacker, J. J.; Hall, B. R.; Hall, E. D.; Hammond, G.; Haney, M.; Hanke, M. M.; Hanks, J.; Hanna, C.; Hannam, M. D.; Hanson, J.; Hardwick, T.; Harry, G. M.; Harry, I. W.; Hart, M. J.; Hartman, M. T.; Haster, C.-J.; Haughian, K.; Heintze, M. C.; Hendry, M.; Heng, I. S.; Hennig, J.; Heptonstall, A. W.; Heurs, M.; Hild, S.; Hoak, D.; Hodge, K. A.; Hollitt, S. 
E.; Holt, K.; Holz, D. E.; Hopkins, P.; Hosken, D. J.; Hough, J.; Houston, E. A.; Howell, E. J.; Hu, Y. M.; Huang, S.; Huerta, E. A.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh-Dinh, T.; Idrisy, A.; Indik, N.; Ingram, D. R.; Inta, R.; Isa, H. N.; Isi, M.; Islas, G.; Isogai, T.; Iyer, B. R.; Izumi, K.; Jang, H.; Jani, K.; Jawahar, S.; Jiménez-Forteza, F.; Johnson, W. W.; Jones, D. I.; Jones, R.; Ju, L.; Haris, K.; Kalaghatgi, C. V.; Kalogera, V.; Kandhasamy, S.; Kang, G.; Kanner, J. B.; Karki, S.; Kasprzack, M.; Katsavounidis, E.; Katzman, W.; Kaufer, S.; Kaur, T.; Kawabe, K.; Kawazoe, F.; Kehl, M. S.; Keitel, D.; Kelley, D. B.; Kells, W.; Kennedy, R.; Key, J. S.; Khalaidovski, A.; Khalili, F. Y.; Khan, S.; Khan, Z.; Khazanov, E. A.; Kijbunchoo, N.; Kim, C.; Kim, J.; Kim, K.; Kim, N.; Kim, N.; Kim, Y.-M.; King, E. J.; King, P. J.; Kinzel, D. L.; Kissel, J. S.; Kleybolte, L.; Klimenko, S.; Koehlenbeck, S. M.; Kokeyama, K.; Kondrashov, V.; Kontos, A.; Korobko, M.; Korth, W. Z.; Kozak, D. B.; Kringel, V.; Krueger, C.; Kuehn, G.; Kumar, P.; Kuo, L.; Lackey, B. D.; Landry, M.; Lange, J.; Lantz, B.; Lasky, P. D.; Lazzarini, A.; Lazzaro, C.; Leaci, P.; Leavey, S.; Lebigot, E. O.; Lee, C. H.; Lee, H. K.; Lee, H. M.; Lee, K.; Lenon, A.; Leong, J. R.; Levin, Y.; Levine, B. M.; Li, T. G. F.; Libson, A.; Littenberg, T. B.; Lockerbie, N. A.; Logue, J.; Lombardi, A. L.; Lord, J. E.; Lormand, M.; Lough, J. D.; Lück, H.; Lundgren, A. P.; Luo, J.; Lynch, R.; Ma, Y.; MacDonald, T.; Machenschalk, B.; MacInnis, M.; Macleod, D. M.; Magaña-Sandoval, F.; Magee, R. M.; Mageswaran, M.; Mandel, I.; Mandic, V.; Mangano, V.; Mansell, G. L.; Manske, M.; Márka, S.; Márka, Z.; Markosyan, A. S.; Maros, E.; Martin, I. W.; Martin, R. M.; Martynov, D. V.; Marx, J. N.; Mason, K.; Massinger, T. J.; Masso-Reid, M.; Matichard, F.; Matone, L.; Mavalvala, N.; Mazumder, N.; Mazzolo, G.; McCarthy, R.; McClelland, D. E.; McCormick, S.; McGuire, S. C.; McIntyre, G.; McIver, J.; McManus, D. J.; McWilliams, S. T.; Meadors, G. D.; Melatos, A.; Mendell, G.; Mendoza-Gandara, D.; Mercer, R. A.; Merilh, E.; Meshkov, S.; Messenger, C.; Messick, C.; Meyers, P. M.; Miao, H.; Middleton, H.; Mikhailov, E. E.; Mukund, K. N.; Miller, J.; Millhouse, M.; Ming, J.; Mirshekari, S.; Mishra, C.; Mitra, S.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Mohapatra, S. R. P.; Moore, B. C.; Moore, C. J.; Moraru, D.; Moreno, G.; Morriss, S. R.; Mossavi, K.; Mow-Lowry, C. M.; Mueller, C. L.; Mueller, G.; Muir, A. W.; Mukherjee, Arunava; Mukherjee, D.; Mukherjee, S.; Mullavey, A.; Munch, J.; Murphy, D. J.; Murray, P. G.; Mytidis, A.; Nayak, R. K.; Necula, V.; Nedkova, K.; Neunzert, A.; Newton, G.; Nguyen, T. T.; Nielsen, A. B.; Nitz, A.; Nolting, D.; Normandin, M. E. N.; Nuttall, L. K.; Oberling, J.; Ochsner, E.; O'Dell, J.; Oelker, E.; Ogin, G. H.; Oh, J. J.; Oh, S. H.; Ohme, F.; Oliver, M.; Oppermann, P.; Oram, Richard J.; O'Reilly, B.; O'Shaughnessy, R.; Ott, C. D.; Ottaway, D. J.; Ottens, R. S.; Overmier, H.; Owen, B. J.; Pai, A.; Pai, S. A.; Palamos, J. R.; Palashov, O.; Pal-Singh, A.; Pan, H.; Pankow, C.; Pannarale, F.; Pant, B. C.; Papa, M. A.; Paris, H. R.; Parker, W.; Pascucci, D.; Patrick, Z.; Pearlstone, B. L.; Pedraza, M.; Pekowsky, L.; Pele, A.; Penn, S.; Pereira, R.; Perreca, A.; Phelps, M.; Pierro, V.; Pinto, I. M.; Pitkin, M.; Post, A.; Powell, J.; Prasad, J.; Predoi, V.; Premachandra, S. S.; Prestegard, T.; Price, L. R.; Principe, M.; Privitera, S.; Prokhorov, L.; Puncken, O.; Pürrer, M.; Qi, H.; Qin, J.; Quetschke, V.; Quintero, E. 
A.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. S.; Radkins, H.; Raffai, P.; Raja, S.; Rakhmanov, M.; Raymond, V.; Read, J.; Reed, C. M.; Reid, S.; Reitze, D. H.; Rew, H.; Riles, K.; Robertson, N. A.; Robie, R.; Rollins, J. G.; Roma, V. J.; Romanov, G.; Romie, J. H.; Rowan, S.; Rüdiger, A.; Ryan, K.; Sachdev, S.; Sadecki, T.; Sadeghian, L.; Saleem, M.; Salemi, F.; Samajdar, A.; Sammut, L.; Sanchez, E. J.; Sandberg, V.; Sandeen, B.; Sanders, J. R.; Sathyaprakash, B. S.; Saulson, P. R.; Sauter, O.; Savage, R. L.; Sawadsky, A.; Schale, P.; Schilling, R.; Schmidt, J.; Schmidt, P.; Schnabel, R.; Schofield, R. M. S.; Schönbeck, A.; Schreiber, E.; Schuette, D.; Schutz, B. F.; Scott, J.; Scott, S. M.; Sellers, D.; Sergeev, A.; Serna, G.; Sevigny, A.; Shaddock, D. A.; Shahriar, M. S.; Shaltev, M.; Shao, Z.; Shapiro, B.; Shawhan, P.; Sheperd, A.; Shoemaker, D. H.; Shoemaker, D. M.; Siemens, X.; Sigg, D.; Silva, A. D.; Simakov, D.; Singer, A.; Singer, L. P.; Singh, A.; Singh, R.; Sintes, A. M.; Slagmolen, B. J. J.; Smith, J. R.; Smith, N. D.; Smith, R. J. E.; Son, E. J.; Sorazu, B.; Souradeep, T.; Srivastava, A. K.; Staley, A.; Steinke, M.; Steinlechner, J.; Steinlechner, S.; Steinmeyer, D.; Stephens, B. C.; Stone, R.; Strain, K. A.; Strauss, N. A.; Strigin, S.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sun, L.; Sutton, P. J.; Szczepańczyk, M. J.; Talukder, D.; Tanner, D. B.; Tápai, M.; Tarabrin, S. P.; Taracchini, A.; Taylor, R.; Theeg, T.; Thirugnanasambandam, M. P.; Thomas, E. G.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thorne, K. S.; Thrane, E.; Tiwari, V.; Tokmakov, K. V.; Tomlinson, C.; Torres, C. V.; Torrie, C. I.; Töyrä, D.; Traylor, G.; Trifirò, D.; Tse, M.; Tuyenbayev, D.; Ugolini, D.; Unnikrishnan, C. S.; Urban, A. L.; Usman, S. A.; Vahlbruch, H.; Vajente, G.; Valdes, G.; Vander-Hyde, D. C.; van Veggel, A. A.; Vass, S.; Vaulin, R.; Vecchio, A.; Veitch, J.; Veitch, P. J.; Venkateswara, K.; Vinciguerra, S.; Vine, D. J.; Vitale, S.; Vo, T.; Vorvick, C.; Vousden, W. D.; Vyatchanin, S. P.; Wade, A. R.; Wade, L. E.; Wade, M.; Walker, M.; Wallace, L.; Walsh, S.; Wang, H.; Wang, M.; Wang, X.; Wang, Y.; Ward, R. L.; Warner, J.; Weaver, B.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Welborn, T.; Wen, L.; Weßels, P.; Westphal, T.; Wette, K.; Whelan, J. T.; White, D. J.; Whiting, B. F.; Williams, R. D.; Williamson, A. R.; Willis, J. L.; Willke, B.; Wimmer, M. H.; Winkler, W.; Wipf, C. C.; Wittel, H.; Woan, G.; Worden, J.; Wright, J. L.; Wu, G.; Yablon, J.; Yam, W.; Yamamoto, H.; Yancey, C. C.; Yap, M. J.; Yu, H.; Zanolin, M.; Zevin, M.; Zhang, F.; Zhang, L.; Zhang, M.; Zhang, Y.; Zhao, C.; Zhou, M.; Zhou, Z.; Zhu, X. J.; Zucker, M. E.; Zuraw, S. E.; Zweizig, J.; LIGO Scientific Collaboration

    2017-03-01

    In Advanced LIGO, detection and astrophysical source parameter estimation of the binary black hole merger GW150914 requires a calibrated estimate of the gravitational-wave strain sensed by the detectors. Producing an estimate from each detector's differential arm length control loop readout signals requires applying time domain filters, which are designed from a frequency domain model of the detector's gravitational-wave response. The gravitational-wave response model is determined by the detector's opto-mechanical response and the properties of its feedback control system. The measurements used to validate the model and characterize its uncertainty are derived primarily from a dedicated photon radiation pressure actuator, with cross-checks provided by optical and radio frequency references. We describe how the gravitational-wave readout signal is calibrated into equivalent gravitational-wave-induced strain and how the statistical uncertainties and systematic errors are assessed. Detector data collected over 38 calendar days, from September 12 to October 20, 2015, contain the event GW150914 and approximately 16 days of coincident data used to estimate the event false alarm probability. The calibration uncertainty is less than 10% in magnitude and 10° in phase across the relevant frequency band, 20 Hz to 1 kHz.
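
    The record above describes designing time-domain calibration filters from a frequency-domain response model. As a rough illustration of that general step (and not of the actual LIGO calibration pipeline), the sketch below derives FIR filter taps from a tabulated complex response by frequency sampling; the response model and all parameters are hypothetical.

    ```python
    import numpy as np

    def fir_from_frequency_response(freqs_hz, response, fs_hz, n_taps):
        """Frequency-sampling FIR design: interpolate a complex frequency-domain
        model onto an FFT grid, inverse-transform, and window the result.
        Generic sketch only, not the LIGO calibration pipeline itself."""
        n_fft = 2 * (n_taps - 1)
        grid = np.fft.rfftfreq(n_fft, d=1.0 / fs_hz)
        # Interpolate magnitude and unwrapped phase separately onto the grid.
        mag = np.interp(grid, freqs_hz, np.abs(response))
        phase = np.interp(grid, freqs_hz, np.unwrap(np.angle(response)))
        h = np.fft.irfft(mag * np.exp(1j * phase), n=n_fft)
        h = np.roll(h, n_taps // 2)[:n_taps]   # center the impulse response
        return h * np.hamming(n_taps)          # taper to reduce ringing

    # Hypothetical response model: flat magnitude with a gentle phase slope.
    f = np.linspace(20.0, 1000.0, 200)
    H = np.exp(-1j * 2 * np.pi * f * 1e-3)
    taps = fir_from_frequency_response(f, H, fs_hz=16384.0, n_taps=257)
    ```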

  15. Evaluation of a method of estimating low-flow frequencies from base-flow measurements at Indiana streams

    USGS Publications Warehouse

    Wilson, John Thomas

    2000-01-01

    A mathematical technique of estimating low-flow frequencies from base-flow measurements was evaluated by using data for streams in Indiana. Low-flow frequencies at low-flow partial-record stations were estimated by relating base-flow measurements to concurrent daily flows at nearby streamflow-gaging stations (index stations) for which low-flow frequency curves had been developed. A network of long-term streamflow-gaging stations in Indiana provided a sample of sites with observed low-flow frequencies. Observed values of 7-day, 10-year low flow and 7-day, 2-year low flow were compared to predicted values to evaluate the accuracy of the method. Five test cases were used to evaluate the method under a variety of conditions in which the location of the index station and its drainage area varied relative to the partial-record station. A total of 141 pairs of streamflow-gaging stations were used in the five test cases. Four of the test cases used one index station; the fifth test case used two index stations. The number of base-flow measurements was varied for each test case to see if the accuracy of the method was affected by the number of measurements used. The most accurate and least variable results were produced when two index stations on the same stream as, or on tributaries of, the partial-record station were used. All but one value of the predicted 7-day, 10-year low flow were within 15 percent of the values observed for the long-term continuous record, and all of the predicted values of the 7-day, 2-year low flow were within 15 percent of the observed values. This apparent accuracy, to some extent, may be a result of the small sample set of 15. Of the four test cases that used one index station, the most accurate and least variable results were produced in the test case where the index station and partial-record station were on the same stream or on streams tributary to each other and where the index station had a larger drainage area than the partial-record station. In that test case, the method tended to overpredict, based on the median relative error. In 23 of 28 test pairs, the predicted 7-day, 10-year low flow was within 15 percent of the observed value; in 26 of 28 test pairs, the predicted 7-day, 2-year low flow was within 15 percent of the observed value. When the index station and partial-record station were on the same stream or streams tributary to each other and the index station had a smaller drainage area than the partial-record station, the method tended to underpredict the low-flow frequencies. Nineteen of 28 predicted values of the 7-day, 10-year low flow were within 15 percent of the observed values. Twenty-five of 28 predicted values of the 7-day, 2-year low flow were within 15 percent of the observed values. When the index station and the partial-record station were on different streams, the method tended to underpredict regardless of whether the index station had a larger or smaller drainage area than that of the partial-record station. Also, the variability of the relative error of estimate was greatest for the test cases that used index stations and partial-record stations from different streams. This variability, in part, may be caused by using more streamflow-gaging stations with small low-flow frequencies in these test cases. A small difference in the predicted and observed values can equate to a large relative error when dealing with stations that have small low-flow frequencies.
In the test cases that used one index station, the method tended to predict smaller low-flow frequencies as the number of base-flow measurements was reduced from 20 to 5. Overall, the average relative error of estimate and the variability of the predicted values increased as the number of base-flow measurements was reduced.

  16. Effect of phase errors in stepped-frequency radar systems

    NASA Astrophysics Data System (ADS)

    Vanbrundt, H. E.

    1988-04-01

    Stepped-frequency waveforms are being considered for inverse synthetic aperture radar (ISAR) imaging from ship and airborne platforms and for detailed radar cross section (RCS) measurements of ships and aircraft. These waveforms make it possible to achieve resolutions of 1.0 foot by using existing radar designs and processing technology. One problem not yet fully resolved in using stepped-frequency waveforms for ISAR imaging is the deterioration in signal level caused by random frequency error. Random frequency error of the stepped-frequency source results in reduced peak responses and increased null responses. The resulting reduced signal-to-noise ratio is range dependent. Two of the major concerns addressed in this report are radar range limitations for ISAR and the error in calibration for RCS measurements caused by differences in range between a passive reflector used as an RCS reference and the target to be measured. In addressing these concerns, NOSC developed an analysis to assess the tolerable frequency error in terms of the resulting loss in signal power and signal-to-phase-noise ratio.

  17. Distribution of rain height over subtropical region: Durban, South Africa for satellite communication systems

    NASA Astrophysics Data System (ADS)

    Olurotimi, E. O.; Sokoya, O.; Ojo, J. S.; Owolawi, P. A.

    2018-03-01

    Rain height is one of the significant parameters for prediction of rain attenuation on Earth-space telecommunication links, especially those operating at frequencies above 10 GHz. This study examines the three-parameter Dagum distribution of rain height over Durban, South Africa. Five years of data were used to study the monthly, seasonal, and annual variations using parameters estimated by maximum likelihood. The performance of the distribution was assessed using statistical goodness-of-fit measures. The three-parameter Dagum distribution proves appropriate for modeling rain height over Durban, with a root mean square error of 0.26. The shape and scale parameters of the distribution also show wide variation. The rain height exceeded for 0.01% of the time indicates a high probability of rain attenuation at higher frequencies.
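
    Since the abstract fits a three-parameter Dagum distribution by maximum likelihood, a minimal sketch of such a fit is given below, writing out the standard Dagum pdf explicitly; the synthetic rain-height sample and starting values are assumptions for illustration, not the study's data.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def dagum_neg_log_likelihood(params, x):
        """Negative log-likelihood of the three-parameter Dagum distribution,
        pdf f(x) = (a*p/x) * (x/b)**(a*p) / ((x/b)**a + 1)**(p + 1)."""
        a, b, p = params
        if min(a, b, p) <= 0:
            return np.inf
        z = x / b
        logpdf = (np.log(a * p) - np.log(x)
                  + a * p * np.log(z)
                  - (p + 1) * np.log(z**a + 1.0))
        return -np.sum(logpdf)

    # Hypothetical rain-height sample in km; real data would come from
    # radiosonde or radar bright-band observations.
    rng = np.random.default_rng(0)
    sample = rng.gamma(shape=9.0, scale=0.5, size=500)

    fit = minimize(dagum_neg_log_likelihood, x0=[2.0, 4.0, 1.0],
                   args=(sample,), method="Nelder-Mead")
    a_hat, b_hat, p_hat = fit.x
    ```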

  18. Spatial frequency spectrum of the x-ray scatter distribution in CBCT projections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bootsma, G. J.; Verhaegen, F.; Department of Oncology, Medical Physics Unit, McGill University, Montreal, Quebec H3G 1A4

    2013-11-15

    Purpose: X-ray scatter is a source of significant image quality loss in cone-beam computed tomography (CBCT). The use of Monte Carlo (MC) simulations separating primary and scattered photons has allowed the structure and nature of the scatter distribution in CBCT to become better elucidated. This work seeks to quantify the structure and determine a suitable basis function for the scatter distribution by examining its spectral components using Fourier analysis. Methods: The scatter distribution projection data were simulated using a CBCT MC model based on the EGSnrc code. CBCT projection data, with separated primary and scatter signal, were generated for a 30.6 cm diameter water cylinder [single angle projection with varying axis-to-detector distance (ADD) and bowtie filters] and two anthropomorphic phantoms (head and pelvis, 360 projections sampled every 1°, with and without a compensator). The Fourier transform of the resulting scatter distributions was computed and analyzed both qualitatively and quantitatively. A novel metric called the scatter frequency width (SFW) is introduced to determine the scatter distribution's frequency content. The frequency content results are used to determine a set of basis functions, consisting of low-frequency sine and cosine functions, to fit and denoise the scatter distribution generated from MC simulations using a reduced number of photons and projections. The signal recovery is implemented using Fourier filtering (low-pass Butterworth filter) and interpolation. Estimates of the scatter distribution are used to correct and reconstruct simulated projections. Results: The spatial and angular frequencies are contained within a maximum frequency of 0.1 cm⁻¹ and 7/(2π) rad⁻¹ for the imaging scenarios examined, with these values varying depending on the object and imaging setup (e.g., ADD and compensator). These data indicate spatial and angular sampling every 5 cm and π/7 rad (∼25°) can be used to properly capture the scatter distribution, with reduced sampling possible depending on the imaging scenario. Using a low-pass Butterworth filter, tuned with the SFW values, to denoise the scatter projection data generated from MC simulations using 10⁶ photons resulted in an error reduction of greater than 85% for estimating the scatter in single and multiple projections. Analysis showed that the use of a compensator helped reduce the error in estimating the scatter distribution from limited photon simulations by more than 37% when compared to the case without a compensator for the head and pelvis phantoms. Reconstructions of simulated head phantom projections corrected by the filtered and interpolated scatter estimates showed improvements in overall image quality. Conclusions: The spatial frequency content of the scatter distribution in CBCT is found to be contained within the low frequency domain. The frequency content is modulated by both object and imaging parameters (ADD and compensator). The low-frequency nature of the scatter distribution allows a limited set of sine and cosine basis functions to be used to accurately represent the scatter signal in the presence of noise and reduced data sampling, decreasing MC-based scatter estimation time. Compensator-induced modulation of the scatter distribution reduces the frequency content and improves the fitting results.

  19. Bias in the Wagner-Nelson estimate of the fraction of drug absorbed.

    PubMed

    Wang, Yibin; Nedelman, Jerry

    2002-04-01

    To examine and quantify bias in the Wagner-Nelson estimate of the fraction of drug absorbed resulting from the estimation error of the elimination rate constant (k), measurement error of the drug concentration, and the truncation error in the area under the curve (AUC). Bias in the Wagner-Nelson estimate was derived as a function of post-dosing time (t), k, the ratio of the absorption rate constant to k (r), and the coefficient of variation for estimates of k (CVk) or for the observed concentration (CVc), by assuming a one-compartment model and using an independent estimate of k. The derived functions were used for evaluating the bias with r = 0.5, 3, or 6; k = 0.1 or 0.2; CVk = 0.2 or 0.4; and CVc = 0.2 or 0.4; for t = 0 to 30 or 60. Estimation error of k resulted in an upward bias in the Wagner-Nelson estimate that could lead to the estimate of the fraction absorbed being greater than unity. The bias resulting from the estimation error of k inflates the fraction-of-absorption vs. time profiles mainly in the early post-dosing period. The magnitude of the bias in the Wagner-Nelson estimate resulting from estimation error of k was mainly determined by CVk. The bias in the Wagner-Nelson estimate resulting from estimation error in k can be dramatically reduced by use of the mean of several independent estimates of k, as in studies for development of an in vivo-in vitro correlation. The truncation error in the area under the curve can introduce a negative bias in the Wagner-Nelson estimate. This can partially offset the bias resulting from estimation error of k in the early post-dosing period. Measurement error of concentration does not introduce bias in the Wagner-Nelson estimate. Estimation error of k results in an upward bias in the Wagner-Nelson estimate, mainly in the early drug absorption phase. The truncation error in AUC can result in a downward bias, which may partially offset the upward bias due to estimation error of k in the early absorption phase. Measurement error of concentration does not introduce bias. The joint effect of estimation error of k and truncation error in AUC can result in a non-monotonic fraction-of-drug-absorbed vs. time profile. However, only estimation error of k can lead to the Wagner-Nelson estimate of fraction of drug absorbed greater than unity.
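
    The Wagner-Nelson fraction absorbed for a one-compartment model is F(t) = (C(t) + k·AUC(0,t)) / (k·AUC(0,∞)), so the sensitivity to errors in k can be explored numerically. The sketch below, with a hypothetical concentration profile, illustrates the definitions used in the abstract; it is not the authors' code.

    ```python
    import numpy as np

    def wagner_nelson_fraction_absorbed(t, conc, k):
        """Wagner-Nelson estimate of the cumulative fraction absorbed:
        F(t) = (C(t) + k*AUC_0..t) / (k*AUC_0..inf), with AUC_0..inf
        approximated by AUC_0..tlast + C_last/k (tail extrapolation)."""
        auc_t = np.concatenate(
            ([0.0], np.cumsum(np.diff(t) * (conc[1:] + conc[:-1]) / 2)))
        auc_inf = auc_t[-1] + conc[-1] / k      # truncation correction
        return (conc + k * auc_t) / (k * auc_inf)

    # Hypothetical profile: first-order absorption (ka = 0.5/h), k = 0.1/h.
    t = np.linspace(0.0, 30.0, 61)
    ka, k_true = 0.5, 0.1
    conc = ka / (ka - k_true) * (np.exp(-k_true * t) - np.exp(-ka * t))

    f_true = wagner_nelson_fraction_absorbed(t, conc, k_true)
    f_biased = wagner_nelson_fraction_absorbed(t, conc, k_true * 1.4)
    # Comparing f_true and f_biased shows how a 40% error in k distorts
    # the early-time fraction-absorbed profile, as discussed above.
    ```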

  20. Headaches associated with refractive errors: myth or reality?

    PubMed

    Gil-Gouveia, R; Martins, I P

    2002-04-01

    Headache and refractive errors are very common conditions in the general population, and those with headache often attribute their pain to a visual problem. The International Headache Society (IHS) criteria for the classification of headache include an entity of headache associated with refractive errors (HARE), but indicate that its importance is widely overestimated. To compare overall headache frequency and HARE frequency in healthy subjects with uncorrected or miscorrected refractive errors and a control group. We interviewed 105 individuals with uncorrected refractive errors and a control group of 71 subjects (with properly corrected or without refractive errors) regarding their headache history. We compared the occurrence of headache and its diagnosis in both groups and assessed its relation to their habits of visual effort and type of refractive errors. Headache frequency was similar in both subjects and controls. Headache associated with refractive errors was the only headache type significantly more common in subjects with refractive errors than in controls (6.7% versus 0%). It was associated with hyperopia and was unrelated to visual effort or to the severity of visual error. With adequate correction, 72.5% of the subjects with headache and refractive error reported improvement in their headaches, and 38% had complete remission of headache. Regardless of the type of headache present, headache frequency was significantly reduced in these subjects (t = 2.34, P = .02). Headache associated with refractive errors was rarely identified in individuals with refractive errors. In those with chronic headache, proper correction of refractive errors significantly improved headache complaints and did so primarily by decreasing the frequency of headache episodes.

  1. Quantifying Uncertainty in Instantaneous Orbital Data Products of TRMM over Indian Subcontinent

    NASA Astrophysics Data System (ADS)

    Jayaluxmi, I.; Nagesh, D.

    2013-12-01

    In the last 20 years, microwave radiometers have taken satellite images of Earth's weather, proving to be a valuable tool for quantitative estimation of precipitation from space. However, along with the widespread acceptance of microwave-based precipitation products, it has also been recognized that they contain large uncertainties. While most uncertainty evaluation studies focus on the accuracy of rainfall accumulated over time (e.g., season/year), evaluations of instantaneous rainfall intensities from satellite orbital data products are relatively rare. These instantaneous products are known to potentially cause large uncertainties during real-time flood forecasting studies at the watershed scale, especially over land regions, where the highly varying land surface emissivity offers a myriad of complications hindering accurate rainfall estimation. The error components of orbital data products also tend to interact nonlinearly with hydrologic modeling uncertainty. Keeping these in mind, the present study fosters the development of uncertainty analysis using instantaneous satellite orbital data products (version 7 of 1B11, 2A25, 2A23) derived from the passive and active sensors onboard the Tropical Rainfall Measuring Mission (TRMM) satellite, namely the TRMM microwave imager (TMI) and Precipitation Radar (PR). The study utilizes 11 years of orbital data from 2002 to 2012 over the Indian subcontinent and examines the influence of various error sources on the convective and stratiform precipitation types. Analysis conducted over the land regions of India investigates three sources of uncertainty in detail: 1) errors due to improper delineation of the rainfall signature within the microwave footprint (rain/no-rain classification), 2) uncertainty in the transfer function linking rainfall with TMI low-frequency channels, and 3) sampling errors owing to the narrow swath and infrequent visits of the TRMM sensors. Case study results obtained during the Indian summer monsoon months of June-September are presented using contingency table statistics, performance diagrams, scatter plots and probability density functions. Our study demonstrates that copula theory can be used efficiently to represent the highly nonlinear dependency structure of rainfall with respect to the TMI low-frequency channels of 19, 21, and 37 GHz. This questions the exclusive usage of the high-frequency 85 GHz channel in TMI overland rainfall retrieval algorithms. Further, the PR sampling errors, revealed using a statistical bootstrap technique, were found to be below 30% in relative terms (for 2-degree grids) over India, with magnitudes biased towards the stratiform rainfall type and dependent on the sampling technique employed. These findings clearly document that proper characterization of the error structure of TMI and PR has wider implications for decision making prior to incorporating the resulting orbital products into basin-scale hydrologic modeling.

  2. Frequency-scanning interferometry using a time-varying Kalman filter for dynamic tracking measurements.

    PubMed

    Jia, Xingyu; Liu, Zhigang; Tao, Long; Deng, Zhongwen

    2017-10-16

    Frequency scanning interferometry (FSI) with a single external cavity diode laser (ECDL) and time-invariant Kalman filtering is an effective technique for measuring the distance of a dynamic target. However, due to the hysteresis of the piezoelectric ceramic transducer (PZT) actuator in the ECDL, the optical frequency sweeps of the ECDL exhibit different behaviors depending on whether the frequency is increasing or decreasing. Consequently, the model parameters of the Kalman filter vary from iteration to iteration, which produces state estimation errors with time-invariant filtering. To address this, a time-varying Kalman filter is proposed to model the instantaneous movement of a target relative to the different optical frequency tuning durations of the ECDL. The combination of the FSI method with the time-varying Kalman filter was theoretically analyzed, and the simulation and experimental results show that the proposed method greatly improves the performance of dynamic FSI measurements.
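
    A minimal sketch of the idea of a time-varying Kalman filter, in which the model matrices are rebuilt at every iteration (here alternating between two assumed sweep durations), is shown below; the state model and all numerical values are illustrative, not taken from the paper.

    ```python
    import numpy as np

    def kalman_step(x, P, z, F, Q, H, R):
        """One predict/update cycle of a Kalman filter whose matrices F, Q,
        H, R are passed in at every call, so they may change between
        iterations (e.g., with the up/down sweep direction of the ECDL)."""
        x_pred = F @ x
        P_pred = F @ P @ F.T + Q
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)
        x_new = x_pred + K @ (z - H @ x_pred)
        P_new = (np.eye(len(x)) - K @ H) @ P_pred
        return x_new, P_new

    # Hypothetical 2-state model: [distance, velocity], distance measured.
    dt_up, dt_down = 1e-3, 2e-3    # assumed sweep durations (illustrative)
    H = np.array([[1.0, 0.0]])
    R = np.array([[1e-6]])
    x, P = np.array([0.0, 0.0]), np.eye(2)
    for i, z in enumerate(np.array([[0.01], [0.02], [0.03]])):
        dt = dt_up if i % 2 == 0 else dt_down   # time-varying transition
        F = np.array([[1.0, dt], [0.0, 1.0]])
        Q = 1e-8 * np.array([[dt**4 / 4, dt**3 / 2], [dt**3 / 2, dt**2]])
        x, P = kalman_step(x, P, z, F, Q, H, R)
    ```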

  3. Instantaneous and Frequency-Warped Signal Processing Techniques for Auditory Source Separation.

    NASA Astrophysics Data System (ADS)

    Wang, Avery Li-Chun

    This thesis summarizes several contributions to the areas of signal processing and auditory source separation. The philosophy of Frequency-Warped Signal Processing is introduced as a means for separating the AM and FM contributions to the bandwidth of a complex-valued, frequency-varying sinusoid p(n), transforming it into a signal with slowly-varying parameters. This transformation facilitates the removal of p(n) from an additive mixture while minimizing the amount of damage done to other signal components. The average winding rate of a complex-valued phasor is explored as an estimate of the instantaneous frequency. Theorems are provided showing the robustness of this measure. To implement frequency tracking, a Frequency-Locked Loop algorithm is introduced which uses the complex winding error to update its frequency estimate. The input signal is dynamically demodulated and filtered to extract the envelope. This envelope may then be remodulated to reconstruct the target partial, which may be subtracted from the original signal mixture to yield a new, quickly-adapting form of notch filtering. Enhancements to the basic tracker are made which, under certain conditions, attain the Cramér-Rao bound for the instantaneous frequency estimate. To improve tracking, the novel idea of Harmonic-Locked Loop tracking, using N harmonically constrained trackers, is introduced for tracking signals, such as voices and certain musical instruments. The estimated fundamental frequency is computed from a maximum-likelihood weighting of the N tracking estimates, making it highly robust. The result is that harmonic signals, such as voices, can be isolated from complex mixtures in the presence of other spectrally overlapping signals. Additionally, since phase information is preserved, the resynthesized harmonic signals may be removed from the original mixtures with relatively little damage to the residual signal. Finally, a new methodology is given for designing linear-phase FIR filters which require a small fraction of the computational power of conventional FIR implementations. This design strategy is based on truncated and stabilized IIR filters. These signal-processing methods have been applied to the problem of auditory source separation, resulting in voice separation from complex music that is significantly better than previous results at far lower computational cost.

  4. The Application of GIM in Precise Orbit Determination for LEO Satellites with Single-Frequency GPS Measurements

    NASA Astrophysics Data System (ADS)

    Peng, Dong-ju; Wu, Bin

    2012-10-01

    With the precise GPS ephemeris and clock error available, the ionospheric delay is left as the dominant error source in single-frequency GPS data. Thus, the removal of ionospheric effects is a major prerequisite for an improved orbit reconstruction of LEO satellites based on single-frequency GPS data. In this paper, the use of Global Ionospheric Maps (GIM) in kinematic and dynamic orbit determination for LEO satellites with single-frequency GPS pseudorange measurements is discussed first, and then estimating the ionospheric scale factor to remove the ionospheric effects from the C/A-code pseudorange measurements for both kinematic and dynamic orbit determination is addressed. As the ionospheric delay of space-borne GPS signals is strongly dependent on the orbit altitudes of LEO satellites, we select real C/A-code pseudorange measurement data of the CHAMP, GRACE, TerraSAR-X and SAC-C satellites, with altitudes between 300 km and 800 km, as sample data in this paper. It is demonstrated that the approach of eliminating ionospheric effects in C/A-code pseudorange measurements by estimating the ionospheric scale factor is highly effective. Employing this approach, the accuracy of both kinematic and dynamic orbits can be improved notably. Among those five LEO satellites, CHAMP, with the lowest orbit altitude, has the most remarkable improvements in orbit accuracy, which are 55.6% and 47.6% for kinematic and dynamic orbits, respectively. SAC-C, with the highest orbit altitude, has the least improvement in orbit accuracy, at 47.8% and 38.2%, respectively.

  5. Modeling Pharmacological Clock and Memory Patterns of Interval Timing in a Striatal Beat-Frequency Model with Realistic, Noisy Neurons

    PubMed Central

    Oprisan, Sorinel A.; Buhusi, Catalin V.

    2011-01-01

    In most species, the capability of perceiving and using the passage of time in the seconds-to-minutes range (interval timing) is not only accurate but also scalar: errors in time estimation are linearly related to the estimated duration. The ubiquity of scalar timing extends over behavioral, lesion, and pharmacological manipulations. For example, in mammals, dopaminergic drugs induce an immediate, scalar change in the perceived time (clock pattern), whereas cholinergic drugs induce a gradual, scalar change in perceived time (memory pattern). How do these properties emerge from unreliable, noisy neurons firing in the milliseconds range? Neurobiological information relative to the brain circuits involved in interval timing provides support for a striatal beat frequency (SBF) model, in which time is coded by the coincidental activation of striatal spiny neurons by cortical neural oscillators. While biologically plausible, the impracticality of perfect oscillators, or their lack thereof, questions this mechanism in a brain with noisy neurons. We explored the computational mechanisms required for the clock and memory patterns in an SBF model with biophysically realistic and noisy Morris–Lecar neurons (SBF–ML). Under the assumption that dopaminergic drugs modulate the firing frequency of cortical oscillators, and that cholinergic drugs modulate the memory representation of the criterion time, we show that our SBF–ML model can reproduce the pharmacological clock and memory patterns observed in the literature. Numerical results also indicate that parameter variability (noise) – which is ubiquitous in the form of small fluctuations in the intrinsic frequencies of neural oscillators within and between trials, and in the errors in recording/retrieving stored information related to criterion time – seems to be critical for the time-scale invariance of the clock and memory patterns. PMID:21977014

  6. Significantly Improving Regional Seismic Amplitude Tomography at Higher Frequencies by Determining S -Wave Bandwidth

    DOE PAGES

    Fisk, Mark D.; Pasyanos, Michael E.

    2016-05-03

    Characterizing regional seismic signals continues to be a difficult problem due to their variability. Calibration of these signals is very important to many aspects of monitoring underground nuclear explosions, including detecting seismic signals, discriminating explosions from earthquakes, and reliably estimating magnitude and yield. Amplitude tomography, which simultaneously inverts for source, propagation, and site effects, is a leading method of calibrating these signals. A major issue in amplitude tomography is the data quality of the input amplitude measurements. Pre-event and prephase signal-to-noise ratio (SNR) tests are typically used but can frequently include bad signals and exclude good signals. The deficiencies of SNR criteria, which are demonstrated here, lead to large calibration errors. To ameliorate these issues, we introduce a semi-automated approach to assess the bandwidth of a spectrum where it behaves physically. We determine the maximum frequency (denoted Fmax) where it deviates from this behavior due to inflections at which noise or spurious signals start to bias the spectra away from the expected decay. We compare two amplitude tomography runs using the SNR and new Fmax criteria and show significant improvements to the stability and accuracy of the tomography output for frequency bands higher than 2 Hz by using our assessments of valid S-wave bandwidth. We compare Q estimates, P/S residuals, and some detailed results to explain the improvements. Lastly, for frequency bands higher than 4 Hz, needed for effective P/S discrimination of explosions from earthquakes, the new bandwidth criteria sufficiently fix the instabilities and errors so that the residuals and calibration terms are useful for application.
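
    A simplified sketch of the bandwidth idea, determining an Fmax where a spectrum departs from a fitted power-law decay, follows; the fitting band, tolerance, and synthetic spectrum are assumptions, and the authors' semi-automated procedure is surely more elaborate.

    ```python
    import numpy as np

    def estimate_fmax(freqs, spectrum, fit_band=(1.0, 2.0), tol_db=3.0):
        """Fit a power-law decay (linear in log-log) over a band where the
        signal is clearly physical, then return the first frequency at
        which the spectrum deviates from that decay by more than tol_db."""
        logf, logs = np.log10(freqs), 20 * np.log10(spectrum)
        band = (freqs >= fit_band[0]) & (freqs <= fit_band[1])
        slope, intercept = np.polyfit(logf[band], logs[band], 1)
        predicted = slope * logf + intercept
        above = freqs > fit_band[1]
        deviates = above & (np.abs(logs - predicted) > tol_db)
        return freqs[deviates][0] if deviates.any() else freqs[-1]

    # Synthetic spectrum: f^-2 decay sitting on a flat noise floor.
    f = np.linspace(0.5, 16.0, 512)
    s = 1.0 / f**2 + 2e-3
    fmax = estimate_fmax(f, s)   # frequency where the noise floor dominates
    ```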

  7. An automatic frequency control loop using overlapping DFTs (Discrete Fourier Transforms)

    NASA Technical Reports Server (NTRS)

    Aguirre, S.

    1988-01-01

    An automatic frequency control (AFC) loop is introduced and analyzed in detail. The new scheme is a generalization of the well-known cross-product AFC loop that uses running overlapping discrete Fourier transforms (DFTs) to create a discriminator curve. Linear analysis is included and supported with computer simulations. The algorithm is tested in a low carrier-to-noise ratio (CNR) dynamic environment, and the probability of loss of lock is estimated via computer simulations. The algorithm discussed is a suboptimum tracking scheme with a larger frequency error variance compared to an optimum strategy, but it offers simplicity of implementation and a very low operating threshold CNR. This technique can be applied during the carrier acquisition and re-acquisition process in the Advanced Receiver.
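
    A minimal sketch of a cross-product frequency discriminator built on overlapping DFTs follows; it is a generic illustration of the principle, not the Advanced Receiver implementation, and the signal parameters are hypothetical.

    ```python
    import numpy as np

    def cross_product_freq_error(x, fs, f_nco, n_dft=64, hop=16):
        """Cross-product frequency discriminator using overlapping DFTs:
        the phase advance between successive DFT outputs at the NCO bin is
        proportional to the residual frequency error."""
        n = np.arange(len(x))
        baseband = x * np.exp(-2j * np.pi * f_nco * n / fs)  # mix by NCO
        starts = range(0, len(x) - n_dft, hop)
        bins = [np.sum(baseband[s:s + n_dft]) for s in starts]  # DC bins
        # Average phase increment between overlapping windows.
        cross = np.mean([b1 * np.conj(b0) for b0, b1 in zip(bins, bins[1:])])
        return np.angle(cross) * fs / (2 * np.pi * hop)

    fs = 1000.0
    t = np.arange(2048) / fs
    x = np.exp(2j * np.pi * 12.5 * t)          # hypothetical carrier
    err = cross_product_freq_error(x, fs, f_nco=10.0)   # ~2.5 Hz residual
    ```

    In a closed loop, the returned error would be scaled by a loop gain and used to update the NCO frequency each iteration, driving the residual toward zero.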

  8. Mesospheric radar wind comparisons at high and middle southern latitudes

    NASA Astrophysics Data System (ADS)

    Reid, Iain M.; McIntosh, Daniel L.; Murphy, Damian J.; Vincent, Robert A.

    2018-05-01

    We compare hourly averaged neutral winds derived from two meteor radars operating at 33.2 and 55 MHz to estimate the errors in these measurements. We then compare the meteor radar winds with those from a medium-frequency partial reflection radar operating at 1.94 MHz. These three radars are located at Davis Station, Antarctica. We then consider a middle-latitude 55 MHz meteor radar wind comparison with a 1.98 MHz medium-frequency partial reflection radar to determine how representative the Davis results are. At both sites, the medium-frequency radar winds are clearly underestimated, and the underestimation increases from 80 km to the maximum height of 98 km. Correction factors are suggested for these results.

  9. An adaptive filter method for spacecraft using gravity assist

    NASA Astrophysics Data System (ADS)

    Ning, Xiaolin; Huang, Panpan; Fang, Jiancheng; Liu, Gang; Ge, Shuzhi Sam

    2015-04-01

    Celestial navigation (CeleNav) has been successfully used during gravity assist (GA) flybys for orbit determination in many deep space missions. Due to spacecraft attitude errors, ephemeris errors, the camera center-finding bias, and the frequency of the images before and after the GA flyby, the statistics of the measurement noise cannot be accurately determined and have time-varying characteristics, which may introduce large estimation errors and even cause filter divergence. In this paper, an unscented Kalman filter (UKF) with adaptive measurement noise covariance, called ARUKF, is proposed to deal with this problem. ARUKF scales the measurement noise covariance according to the changes in the innovation and residual sequences. Simulations demonstrate that ARUKF is robust to an inaccurate initial measurement noise covariance matrix and to time-varying measurement noise. The impact factors in the ARUKF are also investigated.
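
    One common way to adapt the measurement noise covariance from the innovation sequence is sketched below; this is a generic innovation-based recipe offered for illustration, not necessarily the exact ARUKF update rule.

    ```python
    import numpy as np

    def adapt_measurement_noise(R_prev, innovations, H, P_pred, alpha=0.3):
        """Innovation-based adaptive estimate of the measurement noise
        covariance: R ~ Cov(innovations) - H P_pred H^T, blended with the
        previous value by a forgetting factor alpha."""
        d = np.atleast_2d(innovations)          # rows = recent innovations
        C = (d.T @ d) / d.shape[0]              # empirical innovation covariance
        R_new = C - H @ P_pred @ H.T
        # Guard against negative estimates before blending.
        R_new = np.maximum(R_new, 1e-12 * np.eye(R_new.shape[0]))
        return (1 - alpha) * R_prev + alpha * R_new

    # Hypothetical scalar-measurement example.
    H = np.array([[1.0, 0.0]])
    P_pred = np.eye(2)
    R = adapt_measurement_noise(np.array([[0.5]]),
                                [[0.1], [0.3], [-0.2]], H, P_pred)
    ```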

  10. Direct measurement of the poliovirus RNA polymerase error frequency in vitro

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ward, C.D.; Stokes, M.A.M.; Flanegan, J.B.

    1988-02-01

    The fidelity of RNA replication by the poliovirus RNA-dependent RNA polymerase was examined by copying homopolymeric RNA templates in vitro. The poliovirus RNA polymerase was extensively purified and used to copy poly(A), poly(C), or poly(I) templates with equimolar concentrations of noncomplementary and complementary ribonucleotides. The error frequency was expressed as the amount of a noncomplementary nucleotide incorporated divided by the total amount of complementary and noncomplementary nucleotide incorporated. The polymerase error frequencies were very high and depended on the specific reaction conditions. The activity of the polymerase on poly(U) and poly(G) was too low to measure error frequencies on these templates. A fivefold increase in the error frequency was observed when the reaction conditions were changed from 3.0 mM Mg²⁺ (pH 7.0) to 7.0 mM Mg²⁺ (pH 8.0). This increase in the error frequency correlates with an eightfold increase in the elongation rate that was observed under the same conditions in a previous study.
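
    The error frequency defined above is a simple ratio; a tiny sketch with hypothetical incorporation amounts makes the definition concrete.

    ```python
    def error_frequency(noncomplementary_pmol, complementary_pmol):
        """Misincorporation frequency as defined above: noncomplementary
        incorporation divided by total incorporation."""
        return noncomplementary_pmol / (noncomplementary_pmol + complementary_pmol)

    # Hypothetical incorporation amounts (pmol) for a poly(A) template assay.
    freq = error_frequency(0.7, 1000.0)   # ~7e-4
    ```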

  11. Full-order observer for direct torque control of induction motor based on constant V/F control technique.

    PubMed

    Pimkumwong, Narongrit; Wang, Ming-Shyan

    2018-02-01

    This paper presents an alternative control method for the three-phase induction motor: direct torque control based on the constant-voltage-per-frequency control technique. This method uses the magnitudes of the stator flux and torque errors to generate the stator voltage and phase angle references for controlling the induction motor with the constant-voltage-per-frequency method. Instead of hysteresis comparators and an optimum switching table, PI controllers and the space vector modulation technique are used to reduce torque and stator-flux ripples and achieve a constant switching frequency. Moreover, coordinate transformations are not required. To implement this control method, a full-order observer is used to estimate the stator flux and overcome the drift and saturation problems of a pure integrator. The feedback gains are designed in a simple manner to improve the convergence of the stator flux estimation, especially in the low-speed range. Furthermore, the necessary conditions on the feedback gain design to maintain stability are introduced. The simulation and experimental results show accurate and stable operation of the introduced estimator and good dynamic response of the proposed control method.

  12. Error Type and Lexical Frequency Effects: Error Detection in Swedish Children with Language Impairment

    ERIC Educational Resources Information Center

    Hallin, Anna Eva; Reuterskiöld, Christina

    2017-01-01

    Purpose: The first aim of this study was to investigate if Swedish-speaking school-age children with language impairment (LI) show specific morphosyntactic vulnerabilities in error detection. The second aim was to investigate the effects of lexical frequency on error detection, an overlooked aspect of previous error detection studies. Method:…

  13. Gravity gradient preprocessing at the GOCE HPF

    NASA Astrophysics Data System (ADS)

    Bouman, J.; Rispens, S.; Gruber, T.; Schrama, E.; Visser, P.; Tscherning, C. C.; Veicherts, M.

    2009-04-01

    One of the products derived from the GOCE observations is the gravity gradients. These gravity gradients are provided in the Gradiometer Reference Frame (GRF) and are calibrated in-flight using satellite shaking and star sensor data. In order to use these gravity gradients for application in Earth sciences and gravity field analysis, additional pre-processing needs to be done, including corrections for temporal gravity field signals to isolate the static gravity field part, screening for outliers, calibration by comparison with existing external gravity field information, and error assessment. The temporal gravity gradient corrections consist of tidal and non-tidal corrections. These are all generally below the gravity gradient error level, which is predicted to show a 1/f behaviour for low frequencies. In the outlier detection the 1/f error is compensated for by subtracting a local median from the data, while the data error is assessed using the median absolute deviation. The local median acts as a high-pass filter and is robust, as is the median absolute deviation. Three different methods have been implemented for the calibration of the gravity gradients. All three methods use a high-pass filter to compensate for the 1/f gravity gradient error. The baseline method uses state-of-the-art global gravity field models, and the most accurate results are obtained if star sensor misalignments are estimated along with the calibration parameters. A second calibration method uses GOCE GPS data to estimate a low degree gravity field model as well as gravity gradient scale factors. Both methods allow gravity gradient scale factors to be estimated down to the 10⁻³ level. The third calibration method uses highly accurate terrestrial gravity data in selected regions to validate the gravity gradient scale factors, focussing on the measurement band. Gravity gradient scale factors may be estimated down to the 10⁻² level with this method.
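
    As an illustration of the high-pass-then-fit idea behind these calibration methods, the sketch below estimates a single gravity gradient scale factor by least squares between filtered measured and model series; the cutoff frequency and synthetic data are assumptions for illustration.

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt

    def gradient_scale_factor(measured, model, fs, f_cut=5e-3):
        """Estimate a gravity gradient scale factor by least squares between
        high-pass filtered measured and model gradient time series; the
        high-pass filter suppresses the 1/f error at low frequencies."""
        b, a = butter(2, f_cut / (fs / 2), btype="highpass")
        gm = filtfilt(b, a, measured)
        gr = filtfilt(b, a, model)
        return np.dot(gm, gr) / np.dot(gr, gr)

    # Synthetic test: true scale 1.002 plus low-frequency drift and noise.
    rng = np.random.default_rng(5)
    t = np.arange(20000)
    model = np.sin(2 * np.pi * t / 300.0)
    drift = 1e-5 * np.cumsum(rng.standard_normal(t.size))
    measured = 1.002 * model + drift
    scale = gradient_scale_factor(measured, model, fs=1.0)   # ~1.002
    ```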

  14. Preprocessing of gravity gradients at the GOCE high-level processing facility

    NASA Astrophysics Data System (ADS)

    Bouman, Johannes; Rispens, Sietse; Gruber, Thomas; Koop, Radboud; Schrama, Ernst; Visser, Pieter; Tscherning, Carl Christian; Veicherts, Martin

    2009-07-01

    One of the products derived from the gravity field and steady-state ocean circulation explorer (GOCE) observations is the gravity gradients. These gravity gradients are provided in the gradiometer reference frame (GRF) and are calibrated in-flight using satellite shaking and star sensor data. To use these gravity gradients for application in Earth sciences and gravity field analysis, additional preprocessing needs to be done, including corrections for temporal gravity field signals to isolate the static gravity field part, screening for outliers, calibration by comparison with existing external gravity field information, and error assessment. The temporal gravity gradient corrections consist of tidal and nontidal corrections. These are all generally below the gravity gradient error level, which is predicted to show a 1/f behaviour for low frequencies. In the outlier detection, the 1/f error is compensated for by subtracting a local median from the data, while the data error is assessed using the median absolute deviation. The local median acts as a high-pass filter and is robust, as is the median absolute deviation. Three different methods have been implemented for the calibration of the gravity gradients. All three methods use a high-pass filter to compensate for the 1/f gravity gradient error. The baseline method uses state-of-the-art global gravity field models, and the most accurate results are obtained if star sensor misalignments are estimated along with the calibration parameters. A second calibration method uses GOCE GPS data to estimate a low-degree gravity field model as well as gravity gradient scale factors. Both methods allow gravity gradient scale factors to be estimated down to the 10⁻³ level. The third calibration method uses highly accurate terrestrial gravity data in selected regions to validate the gravity gradient scale factors, focussing on the measurement band. Gravity gradient scale factors may be estimated down to the 10⁻² level with this method.

  15. Preliminary Results of a Magnetotelluric Survey in the Center of Hawaii Island

    NASA Astrophysics Data System (ADS)

    Lienert, B. R.; Thomas, D. M.; Wallin, E.

    2014-12-01

    From 2013 up to the present we have been recording magnetotelluric (MT) data at 25 sites in a 35x25 km region (elev. 1943 m) on the saddle between the active volcano of Mauna Loa (4169 m) and the dormant volcano of Mauna Kea (4205 m) on Hawai'i Island. The MT data, particularly the electric fields, are frequently contaminated by spurious components that are not due to the plane-wave magnetic signals required for derivation of the MT impedance tensor. We therefore developed interactive graphical software (MTPlot) to plot and analyze the MT signals in the field. MTPlot allows us to quickly examine records in both the time and frequency domains in order to judge their quality. It also transforms the data into estimates of apparent resistivity and their errors in the frequency range 0.001-500 Hz. This has proved very useful for selecting suitable records for subsequent analysis. We then use multi-taper remote-reference processing to obtain our final apparent resistivity estimates and their errors. We present preliminary results of one- and two-dimensional modeling of these estimates to obtain the three-dimensional distribution of subsurface resistivities down to depths of 5 km. The results are compared to temperatures and properties of cores obtained when we drilled a research hole to a depth of 1760 m in this same region. We shall discuss how our results relate to the extent of the fresh-water and geothermal energy reservoirs that we discovered during drilling.

  16. How well can we measure the vertical wind speed? Implications for fluxes of energy and mass

    Treesearch

    John Kochendorfer; Tilden P. Meyers; John Frank; William J. Massman; Mark W. Heuer

    2012-01-01

    Sonic anemometers are capable of measuring the wind speed in all three dimensions at high frequencies (10-50 Hz), and are relied upon to estimate eddy-covariance-based fluxes of mass and energy over a wide variety of surfaces and ecosystems. In this study, wind-velocity measurement errors from a three-dimensional sonic anemometer with a nonorthogonal transducer...

  17. Application of an assessment protocol to extensive species and total basal area per acre datasets for the eastern coterminous United States

    Treesearch

    Rachel Riemann; Ty Wilson; Andrew Lister

    2012-01-01

    We recently developed an assessment protocol that provides information on the magnitude, location, frequency and type of error in geospatial datasets of continuous variables (Riemann et al. 2010). The protocol consists of a suite of assessment metrics which include an examination of data distributions and area estimates, at several scales, examining each in the form...

  18. Estimating Climatological Bias Errors for the Global Precipitation Climatology Project (GPCP)

    NASA Technical Reports Server (NTRS)

    Adler, Robert; Gu, Guojun; Huffman, George

    2012-01-01

    A procedure is described to estimate bias errors for mean precipitation by using multiple estimates from different algorithms, satellite sources, and merged products. The Global Precipitation Climatology Project (GPCP) monthly product is used as a base precipitation estimate, with other input products included when they are within +/- 50% of the GPCP estimates on a zonal-mean basis (ocean and land separately). The standard deviation s of the included products is then taken to be the estimated systematic, or bias, error. The results allow one to examine monthly climatologies and the annual climatology, producing maps of estimated bias errors, zonal-mean errors, and estimated errors over large areas such as ocean and land for both the tropics and the globe. For ocean areas, where there is the largest question as to absolute magnitude of precipitation, the analysis shows spatial variations in the estimated bias errors, indicating areas where one should have more or less confidence in the mean precipitation estimates. In the tropics, relative bias error estimates (s/m, where m is the mean precipitation) over the eastern Pacific Ocean are as large as 20%, as compared with 10%-15% in the western Pacific part of the ITCZ. An examination of latitudinal differences over ocean clearly shows an increase in estimated bias error at higher latitudes, reaching up to 50%. Over land, the error estimates also locate regions of potential problems in the tropics and larger cold-season errors at high latitudes that are due to snow. An empirical technique to area average the gridded errors (s) is described that allows one to make error estimates for arbitrary areas and for the tropics and the globe (land and ocean separately, and combined). Over the tropics this calculation leads to a relative error estimate for tropical land and ocean combined of 7%, which is considered to be an upper bound because of the lack of sign-of-the-error canceling when integrating over different areas with a different number of input products. For the globe the calculated relative error estimate from this study is about 9%, which is also probably a slight overestimate. These tropical and global estimated bias errors provide one estimate of the current state of knowledge of the planet's mean precipitation.
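
    A simplified sketch of the inclusion-and-spread procedure described above follows; the ±50% test is applied here to whole zonal-mean profiles rather than separately over ocean and land, and the products are synthetic stand-ins.

    ```python
    import numpy as np

    def estimated_bias_error(base, others, tolerance=0.5):
        """Keep the products whose zonal means fall within +/-50% of the
        base product's zonal means, then use the standard deviation of the
        included products (plus the base) as the estimated bias error.
        Arrays are (lat, lon) climatologies; simplified sketch only."""
        zonal_base = base.mean(axis=1, keepdims=True)
        included = [base]
        for prod in others:
            zonal = prod.mean(axis=1, keepdims=True)
            if np.all(np.abs(zonal - zonal_base) <= tolerance * zonal_base):
                included.append(prod)
        stack = np.stack(included)
        return stack.std(axis=0), stack.mean(axis=0)

    # Hypothetical products on a coarse 4 x 8 grid (mm/day).
    rng = np.random.default_rng(1)
    gpcp = 3.0 + rng.random((4, 8))
    products = [gpcp * f for f in (0.9, 1.1, 1.6)]  # the 1.6 copy is excluded
    s, m = estimated_bias_error(gpcp, products)
    relative_error = s / m                           # s/m as in the abstract
    ```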

  19. Evaluation and Comparison of the Processing Methods of Airborne Gravimetry Concerning the Errors Effects on Downward Continuation Results: Case Studies in Louisiana (USA) and the Tibetan Plateau (China).

    PubMed

    Zhao, Qilong; Strykowski, Gabriel; Li, Jiancheng; Pan, Xiong; Xu, Xinyu

    2017-05-25

    Gravity data gaps in mountainous areas are nowadays often filled in with data from airborne gravity surveys. Because of the errors caused by the airborne gravimeter sensors, and because of rough flight conditions, such errors cannot be completely eliminated. The precision of the gravity disturbances generated by airborne gravimetry is around 3-5 mgal. A major obstacle in using airborne gravimetry is the error caused by the downward continuation. To improve the results, external high-accuracy gravity information, e.g., from surface data, can be used for high-frequency correction, while satellite information can be applied for low-frequency correction. Surface data may be used to reduce the systematic errors, while regularization methods can reduce the random errors in downward continuation. Airborne gravity surveys are sometimes conducted in mountainous areas, and the most extreme area of the world for this type of survey is the Tibetan Plateau. Since no high-accuracy surface gravity data are available for this area, the above error minimization method involving external gravity data cannot be used. We propose a semi-parametric downward continuation method in combination with regularization to suppress the systematic error effect and the random error effect in the Tibetan Plateau, i.e., without the use of external high-accuracy gravity data. We use a Louisiana airborne gravity dataset from the USA National Oceanic and Atmospheric Administration (NOAA) to demonstrate that the new method works effectively. Furthermore, for the Tibetan Plateau we show that the numerical experiment is also successfully conducted using synthetic Earth Gravitational Model 2008 (EGM08)-derived gravity data contaminated with synthetic errors. The estimated systematic errors generated by the method are close to the simulated values. In addition, we study the relationship between the downward continuation altitudes and the error effect. The analysis results show that the proposed semi-parametric method combined with regularization is efficient in addressing such modelling problems.
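
    The abstract combines a semi-parametric model with regularization; as a generic illustration of the regularization ingredient alone, the sketch below applies Tikhonov-regularized least squares to a toy 1-D downward-continuation problem. The operator, noise level, and damping parameter are all assumptions, not the authors' setup.

    ```python
    import numpy as np

    def tikhonov_downward_continuation(A, y, lam):
        """Regularized least squares x = argmin ||A x - y||^2 + lam ||x||^2,
        solved via the normal equations. A maps surface gravity values to
        flight-altitude observations; lam damps the noise amplified by the
        ill-posed downward continuation. Generic sketch only."""
        n = A.shape[1]
        return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

    # Hypothetical 1-D upward-continuation operator (Gaussian smoothing).
    n, h = 50, 5.0
    idx = np.arange(n)
    A = np.exp(-((idx[:, None] - idx[None, :]) ** 2) / (2 * h**2))
    A /= A.sum(axis=1, keepdims=True)

    rng = np.random.default_rng(2)
    x_true = np.sin(idx / 5.0)
    y = A @ x_true + 0.01 * rng.standard_normal(n)   # noisy airborne data
    x_est = tikhonov_downward_continuation(A, y, lam=1e-3)
    ```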

  20. Evaluation and Comparison of the Processing Methods of Airborne Gravimetry Concerning the Errors Effects on Downward Continuation Results: Case Studies in Louisiana (USA) and the Tibetan Plateau (China)

    PubMed Central

    Zhao, Qilong; Strykowski, Gabriel; Li, Jiancheng; Pan, Xiong; Xu, Xinyu

    2017-01-01

    Gravity data gaps in mountainous areas are nowadays often filled in with data from airborne gravity surveys. Because of the errors caused by the airborne gravimeter sensors, and because of rough flight conditions, such errors cannot be completely eliminated. The precision of the gravity disturbances generated by airborne gravimetry is around 3–5 mgal. A major obstacle in using airborne gravimetry is the error caused by the downward continuation. To improve the results, external high-accuracy gravity information, e.g., from surface data, can be used for high-frequency correction, while satellite information can be applied for low-frequency correction. Surface data may be used to reduce the systematic errors, while regularization methods can reduce the random errors in downward continuation. Airborne gravity surveys are sometimes conducted in mountainous areas, and the most extreme area of the world for this type of survey is the Tibetan Plateau. Since no high-accuracy surface gravity data are available for this area, the above error minimization method involving external gravity data cannot be used. We propose a semi-parametric downward continuation method in combination with regularization to suppress the systematic error effect and the random error effect in the Tibetan Plateau, i.e., without the use of external high-accuracy gravity data. We use a Louisiana airborne gravity dataset from the USA National Oceanic and Atmospheric Administration (NOAA) to demonstrate that the new method works effectively. Furthermore, for the Tibetan Plateau we show that the numerical experiment is also successfully conducted using synthetic Earth Gravitational Model 2008 (EGM08)-derived gravity data contaminated with synthetic errors. The estimated systematic errors generated by the method are close to the simulated values. In addition, we study the relationship between the downward continuation altitudes and the error effect. The analysis results show that the proposed semi-parametric method combined with regularization is efficient in addressing such modelling problems. PMID:28587086

  1. Evaluation and Comparison of the Processing Methods of Airborne Gravimetry Concerning the Errors Effects on Downward Continuation Results: Case Studies in Louisiana (USA) and the Tibetan Plateau (China)

    NASA Astrophysics Data System (ADS)

    Zhao, Q.

    2017-12-01

    Gravity data gaps in mountainous areas are nowadays often filled in with data from airborne gravity surveys. Because of the errors caused by the airborne gravimeter sensors, and because of rough flight conditions, such errors cannot be completely eliminated. The precision of the gravity disturbances generated by airborne gravimetry is around 3-5 mgal. A major obstacle in using airborne gravimetry is the error caused by the downward continuation. To improve the results, external high-accuracy gravity information, e.g., from surface data, can be used for high-frequency correction, while satellite information can be applied for low-frequency correction. Surface data may be used to reduce the systematic errors, while regularization methods can reduce the random errors in downward continuation. Airborne gravity surveys are sometimes conducted in mountainous areas, and the most extreme area of the world for this type of survey is the Tibetan Plateau. Since no high-accuracy surface gravity data are available for this area, the above error minimization method involving external gravity data cannot be used. We propose a semi-parametric downward continuation method in combination with regularization to suppress the systematic error effect and the random error effect in the Tibetan Plateau, i.e., without the use of external high-accuracy gravity data. We use a Louisiana airborne gravity dataset from the USA National Oceanic and Atmospheric Administration (NOAA) to demonstrate that the new method works effectively. Furthermore, for the Tibetan Plateau we show that the numerical experiment is also successfully conducted using synthetic Earth Gravitational Model 2008 (EGM08)-derived gravity data contaminated with synthetic errors. The estimated systematic errors generated by the method are close to the simulated values. In addition, we study the relationship between the downward continuation altitudes and the error effect. The analysis results show that the proposed semi-parametric method combined with regularization is efficient in addressing such modelling problems.

  2. Using transfer functions to quantify El Niño Southern Oscillation dynamics in data and models.

    PubMed

    MacMartin, Douglas G; Tziperman, Eli

    2014-09-08

    Transfer function tools commonly used in engineering control analysis can be used to better understand the dynamics of El Niño Southern Oscillation (ENSO), compare data with models and identify systematic model errors. The transfer function describes the frequency-dependent input-output relationship between any pair of causally related variables, and can be estimated from time series. This can be used first to assess whether the underlying relationship is or is not frequency dependent, and if so, to diagnose the underlying differential equations that relate the variables, and hence describe the dynamics of individual subsystem processes relevant to ENSO. Estimating process parameters allows the identification of compensating model errors that may lead to a seemingly realistic simulation in spite of incorrect model physics. This tool is applied here to the TAO array ocean data, the GFDL-CM2.1 and CCSM4 general circulation models, and to the Cane-Zebiak ENSO model. The delayed oscillator description is used to motivate a few relevant processes involved in the dynamics, although any other ENSO mechanism could be used instead. We identify several differences in the processes between the models and data that may be useful for model improvement. The transfer function methodology is also useful in understanding the dynamics and evaluating models of other climate processes.
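
    A minimal sketch of estimating a transfer function from two time series with Welch cross- and auto-spectra (the H1 estimator) follows; the choice of input/output variables and all parameters are hypothetical, not the paper's configuration.

    ```python
    import numpy as np
    from scipy.signal import csd, welch

    def transfer_function_estimate(u, y, fs, nperseg=256):
        """Empirical transfer function H(f) = P_uy(f) / P_uu(f) between an
        input u and output y, estimated with Welch cross- and auto-spectra.
        For ENSO one might take u = wind stress and y = an SST anomaly
        index (hypothetical variable choice for illustration)."""
        f, p_uy = csd(u, y, fs=fs, nperseg=nperseg)
        _, p_uu = welch(u, fs=fs, nperseg=nperseg)
        return f, p_uy / p_uu

    # Synthetic test: y is u passed through a one-pole low-pass system.
    rng = np.random.default_rng(3)
    u = rng.standard_normal(4096)
    y = np.zeros_like(u)
    for i in range(1, len(u)):
        y[i] = 0.9 * y[i - 1] + 0.1 * u[i]   # discrete first-order system
    f, H = transfer_function_estimate(u, y, fs=12.0)  # e.g., monthly data
    ```

    Frequency dependence of |H(f)| and its phase is what reveals the underlying differential equation relating the two variables, per the argument above.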

  3. Selective DNA Pooling for Determination of Linkage between a Molecular Marker and a Quantitative Trait Locus

    PubMed Central

    Darvasi, A.; Soller, M.

    1994-01-01

    Selective genotyping is a method to reduce costs in marker-quantitative trait locus (QTL) linkage determination by genotyping only those individuals with extreme, and hence most informative, quantitative trait values. The DNA pooling strategy (termed "selective DNA pooling") takes this one step further by pooling DNA from the selected individuals at each of the two phenotypic extremes, and basing the test for linkage on marker allele frequencies as estimated from the pooled samples only. This can reduce the genotyping costs of marker-QTL linkage determination by up to two orders of magnitude. Theoretical analysis of selective DNA pooling shows that for experiments involving backcross, F(2) and half-sib designs, the power of selective DNA pooling for detecting genes with large effect can be the same as that obtained by individual selective genotyping. Power for detecting genes with small effect, however, was found to decrease strongly with increase in the technical error of estimating allele frequencies in the pooled samples. The effect of technical error, however, can be markedly reduced by replication of technical procedures. It is also shown that a proportion selected of 0.1 at each tail will be appropriate for a wide range of experimental conditions. PMID:7896115
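
    The linkage test reduces to comparing pooled allele-frequency estimates between the two extreme pools, with a technical error variance added to the binomial sampling variance. A minimal sketch under those assumptions (not the authors' exact derivation) is given below.

    ```python
    import numpy as np

    def pooling_test_statistic(freq_high, freq_low, n_high, n_low,
                               var_technical=0.0):
        """Z-statistic for the difference in marker allele frequency between
        the two phenotypic-extreme pools: binomial sampling variance per
        pool (2n chromosomes for n diploids) plus an additive technical
        error variance per pool, reducible by replicating the technical
        procedure."""
        p_bar = (freq_high + freq_low) / 2.0
        var = p_bar * (1 - p_bar) * (1.0 / (2 * n_high) + 1.0 / (2 * n_low))
        var += 2.0 * var_technical            # one technical term per pool
        return (freq_high - freq_low) / np.sqrt(var)

    # Hypothetical pools of 100 individuals each.
    z = pooling_test_statistic(0.62, 0.48, n_high=100, n_low=100,
                               var_technical=0.001)
    ```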

  4. Energy and Quality-Aware Multimedia Signal Processing

    NASA Astrophysics Data System (ADS)

    Emre, Yunus

    Today's mobile devices have to support computation-intensive multimedia applications with a limited energy budget. In this dissertation, we present architecture-level and algorithm-level techniques that reduce the energy consumption of these devices with minimal impact on system quality. First, we present novel techniques to mitigate the effects of SRAM memory failures in JPEG2000 implementations operating at scaled voltages. We investigate error control coding schemes and propose an unequal error protection scheme tailored for JPEG2000 that reduces overhead without affecting the performance. Furthermore, we propose algorithm-specific techniques for error compensation that exploit the fact that in JPEG2000 the discrete wavelet transform outputs have larger values for low frequency subband coefficients and smaller values for high frequency subband coefficients. Next, we present the use of voltage overscaling to reduce the data-path power consumption of JPEG codecs. We propose an algorithm-specific technique which exploits the characteristics of the quantized coefficients after zig-zag scan to mitigate errors introduced by aggressive voltage scaling. Third, we investigate the effect of reducing dynamic range for datapath energy reduction. We analyze the effect of truncation error and propose a scheme that estimates the mean value of the truncation error during the pre-computation stage and compensates for this error. Such a scheme is very effective for reducing the noise power in applications that are dominated by additions and multiplications, such as FIR filtering and transform computation. We also present a novel sum of absolute difference (SAD) scheme that is based on most significant bit truncation. The proposed scheme exploits the fact that most of the absolute difference (AD) calculations result in small values, and most of the large AD values do not contribute to the SAD values of the blocks that are selected. Such a scheme is highly effective in reducing the energy consumption of the motion estimation and intra-prediction kernels in video codecs. Finally, we present several hybrid energy-saving techniques based on combinations of voltage scaling, computation reduction and dynamic range reduction that further reduce the energy consumption while keeping the performance degradation very low. For instance, a combination of computation reduction and dynamic range reduction for the Discrete Cosine Transform shows, on average, a 33% to 46% reduction in energy consumption while incurring only 0.5 dB to 1.5 dB loss in PSNR.
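
    A sketch of a bit-truncated SAD, the kind of dynamic-range reduction described above, follows; the bit widths and data are illustrative, and the dissertation's actual scheme may differ in detail.

    ```python
    import numpy as np

    def msb_truncated_sad(block_a, block_b, keep_bits=4, word_bits=8):
        """Sum of absolute differences computed on pixels truncated to
        their top keep_bits: dropping the low-order bits shrinks the
        datapath (and its energy) while mostly preserving the ranking of
        candidate blocks, since most absolute differences are small."""
        shift = word_bits - keep_bits
        a = block_a.astype(np.int32) >> shift
        b = block_b.astype(np.int32) >> shift
        return int(np.abs(a - b).sum()) << shift   # rescale to full range

    rng = np.random.default_rng(4)
    ref = rng.integers(0, 256, size=(16, 16), dtype=np.uint8)
    cand = np.clip(ref.astype(int) + rng.integers(-8, 9, (16, 16)), 0, 255)
    sad_approx = msb_truncated_sad(ref, cand.astype(np.uint8))
    ```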

  5. Impact of transport and modelling errors on the estimation of methane sources and sinks by inverse modelling

    NASA Astrophysics Data System (ADS)

    Locatelli, Robin; Bousquet, Philippe; Chevallier, Frédéric

    2013-04-01

    Since the nineties, inverse modelling by assimilating atmospheric measurements into a chemical transport model (CTM) has been used to derive sources and sinks of atmospheric trace gases. More recently, the high global warming potential of methane (CH4) and unexplained variations of its atmospheric mixing ratio have caught the attention of several research groups. Indeed, the diversity and the variability of methane sources induce high uncertainty in the present and future evolution of the CH4 budget. With the increasing amount of measurement data available to constrain inversions (satellite data, high frequency surface and tall tower observations, FTIR spectrometry, etc.), the main limiting factor is about to become the representation of atmospheric transport in CTMs. Indeed, errors in transport modelling directly convert into flux changes when perfect transport is assumed in atmospheric inversions. Hence, we propose an inter-model comparison in order to quantify the impact of transport and modelling errors on the CH4 fluxes estimated within a variational inversion framework. Several inversion experiments are conducted using the same set-up (prior emissions, measurement and prior errors, OH field, initial conditions) of the variational system PYVAR, developed at LSCE (Laboratoire des Sciences du Climat et de l'Environnement, France). Nine different models (ACTM, IFS, IMPACT, IMPACT1x1, MOZART, PCTM, TM5, TM51x1 and TOMCAT) used in the TRANSCOM-CH4 experiment (Patra et al., 2011) provide synthetic measurement data at up to 280 surface sites to constrain the inversions performed using the PYVAR system. Only the CTM (and the meteorological fields driving it) used to create the pseudo-observations varies among inversions. Consequently, the comparison of the nine inverted methane fluxes obtained for 2005 gives a good estimate of the order of magnitude of the impact of transport and modelling errors on the estimated fluxes with current and future networks. It is shown that transport and modelling errors lead to a discrepancy of 27 TgCH4 per year at the global scale, representing 5% of the total methane emissions for 2005. At the continental scale, transport and modelling errors have larger impacts in proportion to the area of the regions, ranging from 36 TgCH4 in North America to 7 TgCH4 in Boreal Eurasia, with a percentage range from 23% to 48%. Thus, the contribution of transport and modelling errors to the mismatch between measurements and simulated methane concentrations is large given the present questions on the methane budget. Moreover, diagnostics of the error statistics included in our inversions have been computed. They show that the errors contained in the measurement error covariance matrix are underestimated in current inversions, suggesting that transport and modelling errors should be represented more properly in future inversions.

  6. Radar error statistics for the space shuttle

    NASA Technical Reports Server (NTRS)

    Lear, W. M.

    1979-01-01

    C-band and S-band radar error statistics recommended for use with the ground-tracking programs that process space shuttle tracking data are presented. The statistics are divided into two parts: bias error statistics, denoted by the subscript B, and high frequency error statistics, denoted by the subscript q. Bias errors may range from slowly varying to constant. High frequency random errors (noise) are rapidly varying and may or may not be correlated from sample to sample. Bias errors were mainly due to hardware defects and to errors in the correction for atmospheric refraction effects. High frequency noise was mainly due to hardware and to atmospheric scintillation. Three types of atmospheric scintillation were identified: horizontal, vertical, and line of sight. This was the first time that horizontal and line-of-sight scintillations were identified.
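
    One hedged way to visualize the report's two error classes (a slowly varying bias and rapidly varying noise) is a moving-average split of tracking residuals. This is an illustrative decomposition, not the report's procedure, and the window length is an arbitrary assumption:

        import numpy as np

        def split_bias_noise(residuals, window=51):
            # A moving average approximates the slowly varying bias component;
            # the remainder is treated as high-frequency noise (sigma_q).
            bias = np.convolve(residuals, np.ones(window) / window, mode="same")
            return bias, residuals - bias

        t = np.arange(2000, dtype=float)
        resid = 0.5 + 1e-4 * t + np.random.default_rng(1).normal(0, 0.2, t.size)
        bias, noise = split_bias_noise(resid)
        print(f"bias range {bias.min():.2f}..{bias.max():.2f}, "
              f"noise sigma ~ {noise.std():.3f}")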

  7. Estimating Ocean Currents from Automatic Identification System Based Ship Drift Measurements

    NASA Astrophysics Data System (ADS)

    Jakub, Thomas D.

    Ship drift is a technique that has been used over the last century and a half to estimate ocean currents. Shortcomings of the ship drift technique include obtaining the data from multiple ships, the time delay in getting those ship positions to a data center for processing, and the limited resolution imposed by the amount of time between position measurements. These shortcomings can be overcome through the use of the Automatic Identification System (AIS). AIS enables more precise ocean current estimates, the option of finer resolution, and more timely estimates. In this work, a demonstration of the use of AIS to compute ocean currents is performed. A corresponding error and sensitivity analysis is performed to help identify the conditions under which errors will be smaller. A case study in San Francisco Bay with constant AIS message updates was compared against high frequency radar and demonstrated ocean current magnitude residuals of 19 cm/s for ship tracks in a high signal-to-noise environment. These ship tracks were only minutes long compared to the 12- to 24-hour ship tracks normally used. The Gulf of Mexico case study demonstrated the ability to estimate ocean currents over longer baselines and identified the dependency of the estimates on the accuracy of time measurements. Ultimately, AIS measurements combined with the ship drift technique can provide another method of estimating ocean currents, particularly when other measurement techniques are not available.
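
    The core ship-drift computation, current = (observed displacement - dead-reckoned displacement) / elapsed time, can be sketched as follows. This is a minimal sketch; the flat-earth conversion and the example coordinates are assumptions for short tracks, not values from the study:

        import numpy as np

        def drift_current(lat0, lon0, lat1, lon1, sog_mps, cog_deg, dt_s):
            # lat/lon in degrees, speed over ground in m/s, course over
            # ground in degrees from north, dt in seconds. A local flat-earth
            # approximation is assumed (short tracks only).
            R = 6371000.0
            dn = np.radians(lat1 - lat0) * R                    # north, m
            de = np.radians(lon1 - lon0) * R * np.cos(np.radians(lat0))
            # Dead-reckoned displacement from the ship's own speed/course.
            dn_dr = sog_mps * dt_s * np.cos(np.radians(cog_deg))
            de_dr = sog_mps * dt_s * np.sin(np.radians(cog_deg))
            return (de - de_dr) / dt_s, (dn - dn_dr) / dt_s     # east, north

        # Two AIS fixes 30 s apart (hypothetical San Francisco Bay track).
        print(drift_current(37.80, -122.40, 37.8009, -122.3990, 5.0, 45.0, 30.0))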

  8. Factors associated with reporting of medication errors by Israeli nurses.

    PubMed

    Kagan, Ilya; Barnoy, Sivia

    2008-01-01

    This study investigated medication error reporting among Israeli nurses, the relationship between nurses' personal views about error reporting, and the impact of the safety culture of the ward and hospital on this reporting. Nurses (n = 201) completed a questionnaire related to different aspects of error reporting (frequency, organizational norms of dealing with errors, and personal views on reporting). The higher the error frequency, the more errors went unreported. If the ward nurse manager corrected errors on the ward, error self-reporting decreased significantly. Ward nurse managers have to provide good role models.

  9. Analysis of frequency mixing error on heterodyne interferometric ellipsometry

    NASA Astrophysics Data System (ADS)

    Deng, Yuan-long; Li, Xue-jin; Wu, Yu-bin; Hu, Ju-guang; Yao, Jian-quan

    2007-11-01

    A heterodyne interferometric ellipsometer, with no moving parts and a transverse Zeeman laser, is demonstrated. The modified Mach-Zehnder interferometer, characterized by a separate-frequency, common-path configuration, is designed and theoretically analyzed. The experimental data show a fluctuation mainly resulting from the frequency mixing error, which is caused by the imperfection of the polarizing beam splitters (PBS), the elliptical polarization and the non-orthogonality of the light beams. The mechanism producing the frequency mixing error and its influence on the measurement are analyzed with the Jones matrix method; the calculation indicates that it results in an error of up to several nanometres in the thickness measurement of thin films. The non-orthogonality contributes nothing to the phase difference error when it is relatively small; the elliptical polarization and the imperfection of the PBS have a major effect on the error.
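
    A first-order picture of the frequency mixing error can be reproduced numerically: leakage through an imperfect PBS adds a phase-free component at the same beat frequency, pulling the demodulated phase. The leakage fraction eps below is an assumed value, and the lock-in demodulation is only a stand-in for the paper's Jones-matrix analysis:

        import numpy as np

        # Beat signal at df carries the measurand phase phi; PBS leakage adds
        # a phase-free component at the same beat frequency (frequency mixing).
        df, phi, eps = 2.0e5, 0.3, 0.02    # Hz, rad, assumed leakage fraction
        t = np.linspace(0.0, 5.0 / df, 50001)
        mixed = np.cos(2 * np.pi * df * t + phi) + eps * np.cos(2 * np.pi * df * t)

        # Lock-in style demodulation of the detected beat signal.
        i_comp = 2.0 * np.mean(mixed * np.cos(2 * np.pi * df * t))
        q_comp = -2.0 * np.mean(mixed * np.sin(2 * np.pi * df * t))
        phase = np.arctan2(q_comp, i_comp)
        print(f"recovered {phase:.4f} rad vs true {phi} rad; "
              f"first-order error ~ {-eps * np.sin(phi):.4f} rad")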

  10. Integrating chronological uncertainties for annually laminated lake sediments using layer counting, independent chronologies and Bayesian age modelling (Lake Ohau, South Island, New Zealand)

    NASA Astrophysics Data System (ADS)

    Vandergoes, Marcus J.; Howarth, Jamie D.; Dunbar, Gavin B.; Turnbull, Jocelyn C.; Roop, Heidi A.; Levy, Richard H.; Li, Xun; Prior, Christine; Norris, Margaret; Keller, Liz D.; Baisden, W. Troy; Ditchburn, Robert; Fitzsimons, Sean J.; Bronk Ramsey, Christopher

    2018-05-01

    Annually resolved (varved) lake sequences are important palaeoenvironmental archives as they offer a direct incremental dating technique for high-frequency reconstruction of environmental and climate change. Despite the importance of these records, establishing a robust chronology and quantifying its precision and accuracy (estimations of error) remains an essential but challenging component of their development. We outline an approach for building reliable independent chronologies, testing the accuracy of layer counts and integrating all chronological uncertainties to provide quantitative age and error estimates for varved lake sequences. The approach incorporates (1) layer counts and estimates of counting precision; (2) radiometric and biostratigraphic dating techniques to derive an independent chronology; and (3) the application of Bayesian age modelling to produce an integrated age model. This approach is applied to a case study of an annually resolved sediment record from Lake Ohau, New Zealand. The most robust age model provides an average error of 72 years across the whole depth range. This represents a fractional uncertainty of ∼5%, higher than the <3% quoted for most published varve records. However, the age model and reported uncertainty represent the best fit between layer counts and independent chronology, and the uncertainties account for both layer counting precision and the chronological accuracy of the layer counts. This integrated approach provides a more representative estimate of age uncertainty and therefore represents a statistically more robust chronology.

  11. Adjusted peak-flow frequency estimates for selected streamflow-gaging stations in or near Montana based on data through water year 2011: Chapter D in Montana StreamStats

    USGS Publications Warehouse

    Sando, Steven K.; Sando, Roy; McCarthy, Peter M.; Dutton, DeAnn M.

    2016-04-05

    The climatic conditions of the specific time period during which peak-flow data were collected at a given streamflow-gaging station (hereinafter referred to as gaging station) can substantially affect how well the peak-flow frequency (hereinafter referred to as frequency) results represent long-term hydrologic conditions. Differences in the timing of the periods of record can result in substantial inconsistencies in frequency estimates for hydrologically similar gaging stations. Potential for inconsistency increases with decreasing peak-flow record length. The representativeness of the frequency estimates for a short-term gaging station can be adjusted by various methods including weighting the at-site results in association with frequency estimates from regional regression equations (RREs) by using the Weighted Independent Estimates (WIE) program. Also, for gaging stations that cannot be adjusted by using the WIE program because of regulation or drainage areas too large for application of RREs, frequency estimates might be improved by using record extension procedures, including a mixed-station analysis using the maintenance of variance type I (MOVE.1) procedure. The U.S. Geological Survey, in cooperation with the Montana Department of Transportation and the Montana Department of Natural Resources and Conservation, completed a study to provide adjusted frequency estimates for selected gaging stations through water year 2011. The purpose of Chapter D of this Scientific Investigations Report is to present adjusted frequency estimates for 504 selected streamflow-gaging stations in or near Montana based on data through water year 2011. Estimates of peak-flow magnitudes for the 66.7-, 50-, 42.9-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probabilities are reported. These annual exceedance probabilities correspond to the 1.5-, 2-, 2.33-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-year recurrence intervals, respectively. The at-site frequency estimates were adjusted by weighting with frequency estimates from RREs using the WIE program for 438 selected gaging stations in Montana. These 438 selected gaging stations (1) had periods of record less than or equal to 40 years, (2) represented unregulated or minor regulation conditions, and (3) had drainage areas less than about 2,750 square miles. The weighted-average frequency estimates obtained by weighting with RREs generally are considered to provide improved frequency estimates. In some cases, there are substantial differences among the at-site frequency estimates, the regression-equation frequency estimates, and the weighted-average frequency estimates. In these cases, thoughtful consideration should be applied when selecting the appropriate frequency estimate. Some factors that might be considered when selecting the appropriate frequency estimate include (1) whether the specific gaging station has peak-flow characteristics that distinguish it from most other gaging stations used in developing the RREs for the hydrologic region; and (2) the length of the peak-flow record and the general climatic characteristics during the period when the peak-flow data were collected.
For critical structure-design applications, a conservative approach would be to select the higher of the at-site frequency estimate and the weighted-average frequency estimate. The mixed-station MOVE.1 procedure generally was applied in cases where three or more gaging stations were located on the same large river and some of the gaging stations could not be adjusted using the weighted-average method because of regulation or drainage areas too large for application of RREs. The mixed-station MOVE.1 procedure was applied to 66 selected gaging stations on 19 large rivers. The general approach for using mixed-station record extension procedures to adjust at-site frequencies involved (1) determining appropriate base periods for the gaging stations on the large rivers, (2) synthesizing peak-flow data for the gaging stations with incomplete peak-flow records during the base periods by using the mixed-station MOVE.1 procedure, and (3) conducting frequency analysis on the combined recorded and synthesized peak-flow data for each gaging station. Frequency estimates for the combined recorded and synthesized datasets for 66 gaging stations with incomplete peak-flow records during the base periods are presented. The uncertainties in the mixed-station record extension results are difficult to directly quantify; thus, it is important to understand the intended use of the estimated frequencies based on analysis of the combined recorded and synthesized datasets. The estimated frequencies are considered general estimates of frequency relations among gaging stations on the same stream channel that might be expected if the gaging stations had been gaged during the same long-term base period. However, because the mixed-station record extension procedures involve secondary statistical analysis with accompanying errors, the uncertainty of the frequency estimates is larger than would be obtained by collecting systematic records for the same number of years in the base period.
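
    The WIE-style weighting step can be illustrated with a small sketch in which at-site and regression estimates are combined with weights inversely proportional to their variances in log space (the standard form of this weighting; the discharges and variances below are hypothetical, not values from the report):

        import numpy as np

        def weighted_estimate(q_site, var_site, q_reg, var_reg):
            # Combine at-site and regression estimates in log10 space with
            # weights inversely proportional to their variances.
            w_site = var_reg / (var_site + var_reg)
            log_q = w_site * np.log10(q_site) + (1 - w_site) * np.log10(q_reg)
            var_w = var_site * var_reg / (var_site + var_reg)
            return 10.0 ** log_q, var_w

        # Hypothetical 1-percent AEP estimates for a short-record gaging station.
        q, var = weighted_estimate(q_site=4200.0, var_site=0.040,
                                   q_reg=3100.0, var_reg=0.025)
        print(f"weighted estimate {q:.0f} cfs, variance of log10 {var:.4f}")

    The weighted variance is always smaller than either input variance, which is why the weighted-average estimates are generally considered improved.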

  12. Effect of inter-tissue inductive coupling on multi-frequency imaging of intracranial hemorrhage by magnetic induction tomography

    NASA Astrophysics Data System (ADS)

    Xiao, Zhili; Tan, Chao; Dong, Feng

    2017-08-01

    Magnetic induction tomography (MIT) is a promising technique for continuous monitoring of intracranial hemorrhage due to its contactless nature, low cost and capacity to penetrate the high-resistivity skull. The inter-tissue inductive coupling increases with frequency, which may lead to errors in multi-frequency imaging at high frequency. The effect of inter-tissue inductive coupling was investigated to improve the multi-frequency imaging of hemorrhage. An analytical model of inter-tissue inductive coupling based on the equivalent circuit was established. A set of new multi-frequency decomposition equations separating the phase shift of hemorrhage from other brain tissues was derived by employing the coupling information to improve the multi-frequency imaging of intracranial hemorrhage. The decomposition error and imaging error are both decreased after considering the inter-tissue inductive coupling information. The study reveals that the introduction of inter-tissue inductive coupling can reduce the errors of multi-frequency imaging, promoting the development of intracranial hemorrhage monitoring by multi-frequency MIT.

  13. Robust logistic regression to narrow down the winner's curse for rare and recessive susceptibility variants.

    PubMed

    Kesselmeier, Miriam; Lorenzo Bermejo, Justo

    2017-11-01

    Logistic regression is the most common technique used for genetic case-control association studies. A disadvantage of standard maximum likelihood estimators of the genotype relative risk (GRR) is their strong dependence on outlier subjects, for example, patients diagnosed at unusually young age. Robust methods are available to constrain outlier influence, but they are scarcely used in genetic studies. This article provides a non-intimidating introduction to robust logistic regression, and investigates its benefits and limitations in genetic association studies. We applied the bounded Huber function and extended the R package 'robustbase' with the re-descending Hampel function to down-weight outlier influence. Computer simulations were carried out to assess the type I error rate, mean squared error (MSE) and statistical power according to major characteristics of the genetic study and investigated markers. Simulations were complemented with the analysis of real data. Both standard and robust estimation controlled type I error rates. Standard logistic regression showed the highest power, but standard GRR estimates also showed the largest bias and MSE, in particular for associated rare and recessive variants. For illustration, a recessive variant with a true GRR=6.32 and a minor allele frequency=0.05 investigated in a 1000 case/1000 control study by standard logistic regression resulted in power=0.60 and MSE=16.5. The corresponding figures for Huber-based estimation were power=0.51 and MSE=0.53. Overall, Hampel- and Huber-based GRR estimates did not differ much. Robust logistic regression may represent a valuable alternative to standard maximum likelihood estimation when the focus lies on risk prediction rather than identification of susceptibility variants. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
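
    The down-weighting idea can be sketched outside R as a crude iteratively reweighted fit with Huber-type weights on Pearson residuals. This is a sketch only; it mimics the flavor of the 'robustbase' estimators but is not the authors' implementation, and the tuning constant c=1.345 is the conventional Huber default, assumed here:

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def huber_weighted_logistic(X, y, c=1.345, n_iter=5):
            # Iteratively down-weight observations with large Pearson residuals.
            w = np.ones(len(y))
            model = LogisticRegression()
            for _ in range(n_iter):
                model.fit(X, y, sample_weight=w)
                p = model.predict_proba(X)[:, 1]
                r = (y - p) / np.sqrt(p * (1 - p) + 1e-12)  # Pearson residuals
                w = np.minimum(1.0, c / np.maximum(np.abs(r), 1e-12))
            return model

        rng = np.random.default_rng(2)
        X = rng.normal(size=(500, 1))
        y = (rng.random(500) < 1.0 / (1.0 + np.exp(-2.0 * X[:, 0]))).astype(int)
        print(huber_weighted_logistic(X, y).coef_)  # near the true slope of 2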

  14. Pulsed photoacoustic flow imaging with a handheld system

    NASA Astrophysics Data System (ADS)

    van den Berg, Pim J.; Daoudi, Khalid; Steenbergen, Wiendelt

    2016-02-01

    Flow imaging is an important technique in a range of disease areas, but estimating low flow speeds, especially near the walls of blood vessels, remains challenging. Pulsed photoacoustic flow imaging can be an alternative, since with photoacoustic imaging there is little signal contamination from background tissue. We propose flow imaging using a clinical photoacoustic system that is both handheld and portable. The system integrates a linear array with 7.5 MHz central frequency in combination with a high-repetition-rate diode laser to allow high-speed photoacoustic imaging, ideal for this application. This work shows the flow imaging performance of the system in vitro using microparticles. Both two-dimensional (2-D) flow images and quantitative flow velocities from 12 to 75 mm/s were obtained. In a transparent bulk medium, flow estimation showed standard errors of ~7% of the estimated speed; in the presence of tissue-realistic optical scattering, the error increased to 40% due to limited signal-to-noise ratio. In the future, photoacoustic flow imaging can potentially be performed in vivo using fluorophore-filled vesicles or with an improved setup on whole blood.
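
    A common way to turn successive photoacoustic frames into a speed estimate is cross-correlation of A-lines. The sketch below uses an integer-pixel correlation with an assumed frame interval and pixel size, and is not the authors' processing chain:

        import numpy as np

        def flow_speed(sig_t0, sig_t1, dt_s, pixel_m):
            # Find the shift that best aligns two successive A-lines
            # (integer-pixel cross-correlation; sub-pixel refinement and
            # angle correction are omitted for brevity).
            corr = np.correlate(sig_t1 - sig_t1.mean(),
                                sig_t0 - sig_t0.mean(), "full")
            shift = corr.argmax() - (len(sig_t0) - 1)
            return shift * pixel_m / dt_s

        rng = np.random.default_rng(3)
        line = rng.normal(size=256)
        shifted = np.roll(line, 3)       # simulate particles moving 3 pixels
        # Assumed 1 kHz frame rate and 25 um pixels: ~0.075 m/s (75 mm/s).
        print(flow_speed(line, shifted, dt_s=1e-3, pixel_m=25e-6))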

  15. Effect of temporal sampling and timing for soil moisture measurements at field scale

    NASA Astrophysics Data System (ADS)

    Snapir, B.; Hobbs, S.

    2012-04-01

    Estimating soil moisture at field scale is valuable for various applications such as irrigation scheduling in cultivated watersheds, flood and drought prediction, waterborne disease spread assessment, or even determination of mobility with lightweight vehicles. Synthetic aperture radar on satellites in low Earth orbit can provide fine resolution images with a repeat time of a few days. For an Earth observing satellite, the choice of the orbit is driven in particular by the frequency of measurements required to meet a certain accuracy in retrieving the parameters of interest. For a given target, having only one image every week may not capture the full dynamic range of soil moisture, which can change significantly within a day when rainfall occurs. Hence this study focuses on the effect of temporal sampling and timing of measurements in terms of error on the retrieved signal. All the analyses are based on in situ measurements of soil moisture (acquired every 30 min) from the OzNet Hydrological Monitoring Network in Australia for different fields over several years. The first study concerns sampling frequency. Measurements at different frequencies were simulated by sub-sampling the original data. Linear interpolation was used to estimate the missing intermediate values, and this time series was then compared to the original. The difference between these two signals was computed for different levels of sub-sampling. Results show that the error increases linearly when the interval is less than 1 day. For intervals longer than a day, a sinusoidal component appears on top of the linear growth due to the diurnal variation of surface soil moisture. Thus, for example, the error with measurements every 4.5 days can be slightly less than the error with measurements every 2 days. Next, for a given sampling interval, this study evaluated the effect of the time of day at which measurements are made. Of course, when measurements are very frequent the time of acquisition does not matter, but when few measurements are available (sampling interval > 1 day), the time of acquisition can be important. It is shown that with daily measurements the error can double depending on the time of acquisition. This result is very sensitive to the phase of the sinusoidal variation of soil moisture. For example, in autumn for a given field with soil moisture ranging from 7.08% to 11.44% (mean and standard deviation being respectively 8.68% and 0.74%), daily measurements at 2 pm lead to a mean error of 0.47% v/v, while daily measurements at 9 am/pm produce a mean error of 0.24% v/v. The minimum of the sinusoid occurs every afternoon around 2 pm; after interpolation, measurements acquired at this time underestimate soil moisture, whereas measurements around 9 am/pm correspond to nodes of the sinusoid and hence represent the average soil moisture. These results concerning the frequency and the timing of measurements can potentially drive the schedule of satellite image acquisition over some fields.
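
    The sub-sample / interpolate / compare procedure is easy to reproduce on synthetic data (the diurnal sinusoid plus random walk below is an assumed stand-in for the OzNet series, not the actual data):

        import numpy as np

        # Synthetic 30-min soil moisture (%) with a diurnal cycle plus drift.
        t_h = np.arange(0, 24 * 60, 0.5)          # hours: 60 days at 30 min
        rng = np.random.default_rng(4)
        sm = (9.0 + 0.7 * np.sin(2 * np.pi * t_h / 24)
              + np.cumsum(rng.normal(0, 0.01, t_h.size)))

        for interval_h in (6, 24, 48, 108):       # 108 h = 4.5 days
            idx = np.arange(0, t_h.size, int(interval_h / 0.5))
            interp = np.interp(t_h, t_h[idx], sm[idx])  # linear interpolation
            rmse = np.sqrt(np.mean((interp - sm) ** 2))
            print(f"{interval_h:>4} h sampling: RMSE = {rmse:.3f} % v/v")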

  16. Investigating the performance of reconstruction methods used in structured illumination microscopy as a function of the illumination pattern's modulation frequency

    NASA Astrophysics Data System (ADS)

    Shabani, H.; Sánchez-Ortiga, E.; Preza, C.

    2016-03-01

    Surpassing the resolution of optical microscopy defined by the Abbe diffraction limit, while simultaneously achieving optical sectioning, is a challenging problem, particularly for live cell imaging of thick samples. Among a few developing techniques, structured illumination microscopy (SIM) addresses this challenge by imposing higher frequency information into the observable frequency band confined by the optical transfer function (OTF) of a conventional microscope, either doubling the spatial resolution or filling the missing cone, depending on the spatial frequency of the pattern, when the patterned illumination is two-dimensional. Standard reconstruction methods for SIM decompose the low and high frequency components from the recorded low-resolution images and then combine them to reach a high-resolution image. In contrast, model-based approaches rely on iterative optimization to minimize the error between estimated and forward images. In this paper, we study the performance of both groups of methods by simulating fluorescence microscopy images from different types of objects (ranging from simulated two-point sources to extended objects). These simulations are used to investigate the methods' effectiveness at restoring objects with various types of power spectrum when the modulation frequency of the patterned illumination is changed from zero to the incoherent cut-off frequency of the imaging system. Our results show that increasing the amount of imposed information by using a higher modulation frequency of the illumination pattern does not always yield a better restoration performance, which was found to depend on the underlying object. Results from model-based restoration show performance improvement, quantified by an up to 62% drop in the mean square error compared to standard reconstruction, with increasing modulation frequency. However, we found cases for which results obtained with standard reconstruction methods do not follow the same trend.

  17. Low-frequency blood pressure oscillations and inotrope treatment failure in premature infants.

    PubMed

    Vesoulis, Zachary A; Hao, Jessica; McPherson, Christopher; El Ters, Nathalie M; Mathur, Amit M

    2017-07-01

    The underlying mechanism as to why some hypotensive preterm infants do not respond to inotropic medications remains unclear. For these infants, we hypothesize that impaired vasomotor function is a significant factor and is manifested through a decrease in low-frequency blood pressure variability across regulatory components of vascular tone. Infants born ≤28 wk estimated gestational age underwent prospective recording of mean arterial blood pressure for 72 h after birth. After error correction, root-mean-square spectral power was calculated for each valid 10-min data frame across each of four frequency bands (B1, 0.005-0.0095 Hz; B2, 0.0095-0.02 Hz; B3, 0.02-0.06 Hz; and B4, 0.06-0.16 Hz) corresponding to different components of vasomotion control. Forty infants (twenty-nine normotensive controls and eleven inotrope-exposed) were included, with a mean ± SD estimated gestational age of 25.2 ± 1.6 wk and birth weight of 790 ± 211 g. 9.7 of 11.8 million (82%) data points were error-free and used for analysis. Spectral power across all frequency bands increased with time, although the magnitude was 20% less in the inotrope-exposed infants. A statistically significant increase in spectral power in response to inotrope initiation was noted across all frequency bands. Infants with a robust blood pressure response to inotropes had a greater increase compared with those who had limited or no blood pressure response. In this study, hypotensive infants who require inotropes have decreased low-frequency variability at baseline compared with normotensive infants, which increases after inotrope initiation. Low-frequency spectral power does not change for those with inotrope treatment failure, suggesting dysfunctional regulation of vascular tone as a potential mechanism of treatment failure. NEW & NOTEWORTHY In this study, we examine patterns of low-frequency oscillations in blood pressure variability across regulatory components of vascular tone in normotensive and hypotensive infants exposed to inotropic medications. We found that hypotensive infants who require inotropes have decreased low-frequency variability at baseline, which increases after inotrope initiation. Low-frequency spectral power does not change for those with inotrope treatment failure, suggesting dysfunctional regulation of vascular tone as a potential mechanism of treatment failure. Copyright © 2017 the American Physiological Society.
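
    Band-limited RMS spectral power of the kind used here can be computed from a Welch periodogram. The sketch below assumes a 1 Hz mean-arterial-pressure series, which is an assumption about the waveform rate, not a detail from the paper:

        import numpy as np
        from scipy.signal import welch

        BANDS = {"B1": (0.005, 0.0095), "B2": (0.0095, 0.02),
                 "B3": (0.02, 0.06), "B4": (0.06, 0.16)}  # Hz, as in the study

        def band_rms_power(map_mmhg, fs=1.0):
            # Welch PSD of one 10-min MAP frame; fs = 1 Hz is assumed.
            f, psd = welch(map_mmhg, fs=fs, nperseg=min(len(map_mmhg), 512))
            df = f[1] - f[0]
            return {name: float(np.sqrt(psd[(f >= lo) & (f < hi)].sum() * df))
                    for name, (lo, hi) in BANDS.items()}

        frame = 45 + np.random.default_rng(5).normal(0, 2, 600)  # 10 min at 1 Hz
        print(band_rms_power(frame))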

  18. Reduction of low frequency error for SED36 and APS based HYDRA star trackers

    NASA Astrophysics Data System (ADS)

    Ouaknine, Julien; Blarre, Ludovic; Oddos-Marcel, Lionel; Montel, Johan; Julio, Jean-Marc

    2017-11-01

    In the framework of the CNES Pleiades satellite programme, a reduction of the star tracker low frequency error, which is the most penalizing error for satellite attitude control, was performed. For that purpose, the SED36 star tracker was developed, with a design based on the flight-qualified SED16/26. In this paper, the main features of the SED36 are first presented. Then, the process for reducing the low frequency error is described, particularly the optimization of the optical distortion calibration. The result is an attitude low frequency error of 1.1" at 3 sigma along the transverse axes. The implementation of these improvements in HYDRA, the new multi-head APS star tracker developed by SODERN, is finally presented.

  19. Spectral estimation for characterization of acoustic aberration.

    PubMed

    Varslot, Trond; Angelsen, Bjørn; Waag, Robert C

    2004-07-01

    Spectral estimation based on acoustic backscatter from a motionless stochastic medium is described for characterization of aberration in ultrasonic imaging. The underlying assumptions for the estimation are: the correlation length of the medium is short compared to the length of the transmitted acoustic pulse, an isoplanatic region of sufficient size exists around the focal point, and the backscatter can be modeled as an ergodic stochastic process. The motivation for this work is ultrasonic imaging with aberration correction. Measurements were performed using a two-dimensional array system with 80 x 80 transducer elements and an element pitch of 0.6 mm. The f-number for the measurements was 1.2 and the center frequency was 3.0 MHz with a 53% bandwidth. The relative phase of the aberration was extracted from estimated cross spectra using a robust least-mean-square-error method based on an orthogonal expansion of the phase differences of neighboring waveforms as a function of frequency. Estimates of cross-spectrum phase from measurements of random scattering through a tissue-mimicking aberrator have confidence bands approximately +/- 5 degrees wide. Both phase and magnitude are in good agreement with a reference characterization obtained from a point scatterer.
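
    Extracting a relative aberration delay from the cross-spectrum phase of neighboring waveforms can be sketched as follows. Synthetic white-noise backscatter, an assumed 40 MHz sampling rate, and a single bulk delay stand in for the paper's full estimation; the sign follows scipy's conj(X)*Y convention for csd:

        import numpy as np
        from scipy.signal import csd

        fs = 40e6                               # assumed sampling rate, Hz
        rng = np.random.default_rng(6)
        elem_a = rng.normal(size=4096)          # random-scattering proxy
        n_delay = 1                             # 1 sample = 25 ns at 40 MHz
        elem_b = np.roll(elem_a, n_delay) + 0.1 * rng.normal(size=4096)

        # Cross spectrum between neighboring element waveforms; scipy's csd
        # returns conj(X) * Y, so the phase slope over frequency is -2*pi*tau.
        f, pxy = csd(elem_a, elem_b, fs=fs, nperseg=512)
        band = (f > 2e6) & (f < 4e6)            # around the 3 MHz center
        slope = np.polyfit(f[band], np.unwrap(np.angle(pxy[band])), 1)[0]
        print(f"recovered delay ~ {-slope / (2 * np.pi) * 1e9:.1f} ns (true 25 ns)")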

  20. Estimation of the two-dimensional presampled modulation transfer function of digital radiography devices using one-dimensional test objects

    PubMed Central

    Wells, Jered R.; Dobbins, James T.

    2012-01-01

    Purpose: The modulation transfer function (MTF) of medical imaging devices is commonly reported in the form of orthogonal one-dimensional (1D) measurements made near the vertical and horizontal axes with a slit or edge test device. A more complete description is found by measuring the two-dimensional (2D) MTF. Some 2D test devices have been proposed, but there are some issues associated with their use: (1) they are not generally available; (2) they may require many images; (3) the results may have diminished accuracy; and (4) their implementation may be particularly cumbersome. This current work proposes the application of commonly available 1D test devices for practical and accurate estimation of the 2D presampled MTF of digital imaging systems. Methods: Theory was developed and applied to ensure adequate fine sampling of the system line spread function for 1D test devices at orientations other than approximately vertical and horizontal. Methods were also derived and tested for slit nonuniformity correction at arbitrary angle. Techniques were validated with experimental measurements at ten angles using an edge test object and three angles using a slit test device on an indirect-detection flat-panel system [GE Revolution XQ/i (GE Healthcare, Waukesha, WI)]. The 2D MTF was estimated through a simple surface fit with interpolation based on Delaunay triangulation of the 1D edge-based MTF measurements. Validation by synthesis was also performed with simulated images from a hypothetical direct-detection flat-panel device. Results: The 2D MTF derived from physical measurements yielded an average relative precision error of 0.26% for frequencies below the cutoff (2.5 mm⁻¹) and approximate circular symmetry at frequencies below 4 mm⁻¹. While slit analysis generally agreed with the results of edge analysis, the two showed subtle differences at frequencies above 4 mm⁻¹. Slit measurement near 45° revealed radial asymmetry in the MTF resulting from the square pixel aperture (0.2 mm × 0.2 mm), a characteristic which was not necessarily appreciated with the orthogonal 1D MTF measurements. In simulation experiments, both slit- and edge-based measurements resolved the radial asymmetries in the 2D MTF. The average absolute relative accuracy error in the 2D MTF between the DC and cutoff (2.5 mm⁻¹) frequencies was 0.13% with average relative precision error of 0.11%. Other simulation results were similar to those derived from physical data. Conclusions: Overall, the general availability, acceptance, accuracy, and ease of implementation of 1D test devices for MTF assessment make this a valuable technique for 2D MTF estimation. PMID:23039654
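
    The 1D edge-method core (edge spread function, differentiate to the line spread function, Fourier transform, normalize) can be sketched in a few lines. This shows only the generic 1D step, not the authors' angled-edge sampling or 2D Delaunay surface fit, and the synthetic sigmoid edge is an assumption:

        import numpy as np

        def mtf_from_edge(esf, pixel_mm=0.2):
            # Differentiate the edge spread function to get the line spread
            # function, taper it, then take the normalized FFT magnitude.
            lsf = np.gradient(esf) * np.hanning(esf.size)
            mtf = np.abs(np.fft.rfft(lsf))
            freqs = np.fft.rfftfreq(esf.size, d=pixel_mm)   # cycles/mm
            return freqs, mtf / mtf[0]

        x = np.linspace(-5, 5, 256)
        esf = 1.0 / (1.0 + np.exp(-x / 0.4))   # synthetic blurred edge profile
        freqs, mtf = mtf_from_edge(esf)
        print(np.interp(1.0, freqs, mtf))      # MTF at 1 cycle/mm

    With the 0.2 mm pixel assumed above, the frequency axis runs to the 2.5 mm⁻¹ cutoff quoted in the record.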

  3. Correcting the Standard Errors of 2-Stage Residual Inclusion Estimators for Mendelian Randomization Studies

    PubMed Central

    Palmer, Tom M; Holmes, Michael V; Keating, Brendan J; Sheehan, Nuala A

    2017-01-01

    Mendelian randomization studies use genotypes as instrumental variables to test for and estimate the causal effects of modifiable risk factors on outcomes. Two-stage residual inclusion (TSRI) estimators have been used when researchers are willing to make parametric assumptions. However, researchers are currently reporting uncorrected or heteroscedasticity-robust standard errors for these estimates. We compared several different forms of the standard error for linear and logistic TSRI estimates in simulations and in real-data examples. Among others, we consider standard errors modified from the approach of Newey (1987), Terza (2016), and bootstrapping. In our simulations Newey, Terza, bootstrap, and corrected 2-stage least squares (in the linear case) standard errors gave the best results in terms of coverage and type I error. In the real-data examples, the Newey standard errors were 0.5% and 2% larger than the unadjusted standard errors for the linear and logistic TSRI estimators, respectively. We show that TSRI estimators with modified standard errors have correct type I error under the null. Researchers should report TSRI estimates with modified standard errors instead of reporting unadjusted or heteroscedasticity-robust standard errors. PMID:29106476
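
    A linear TSRI fit with bootstrap standard errors can be sketched with numpy alone (synthetic data; this illustrates residual inclusion and the bootstrap option, not the Newey or Terza corrections discussed in the article):

        import numpy as np

        def tsri_linear(g, x, y):
            # Stage 1: regress exposure x on instrument g; keep the residual.
            X1 = np.column_stack([np.ones_like(g), g])
            resid = x - X1 @ np.linalg.lstsq(X1, x, rcond=None)[0]
            # Stage 2: include the stage-1 residual alongside x.
            X2 = np.column_stack([np.ones_like(x), x, resid])
            return np.linalg.lstsq(X2, y, rcond=None)[0][1]   # causal slope

        rng = np.random.default_rng(7)
        n = 2000
        g = rng.integers(0, 3, n).astype(float)    # genotype coded 0/1/2
        u = rng.normal(size=n)                     # unmeasured confounder
        x = 0.4 * g + u + rng.normal(size=n)
        y = 0.7 * x + u + rng.normal(size=n)

        est = tsri_linear(g, x, y)
        idx = [rng.integers(0, n, n) for _ in range(500)]  # bootstrap resamples
        se = np.std([tsri_linear(g[i], x[i], y[i]) for i in idx])
        print(f"TSRI estimate {est:.3f} (true 0.7), bootstrap SE {se:.3f}")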

  4. An Assessment of State-of-the-Art Mean Sea Surface and Geoid Models of the Arctic Ocean: Implications for Sea Ice Freeboard Retrieval

    NASA Astrophysics Data System (ADS)

    Skourup, Henriette; Farrell, Sinéad Louise; Hendricks, Stefan; Ricker, Robert; Armitage, Thomas W. K.; Ridout, Andy; Andersen, Ole Baltazar; Haas, Christian; Baker, Steven

    2017-11-01

    State-of-the-art Arctic Ocean mean sea surface (MSS) models and global geoid models (GGMs) are used to support sea ice freeboard estimation from satellite altimeters, as well as in oceanographic studies such as mapping sea level anomalies and mean dynamic ocean topography. However, errors in a given model in the high-frequency domain, primarily due to unresolved gravity features, can result in errors in the estimated along-track freeboard. These errors are exacerbated in areas with a sparse lead distribution in consolidated ice pack conditions. Additionally, model errors can impact ocean geostrophic currents derived from satellite altimeter data, while remaining biases in these models may impact longer-term, multisensor oceanographic time series of sea level change in the Arctic. This study focuses on an assessment of five state-of-the-art Arctic MSS models (UCL13/04 and DTU15/13/10) and a commonly used GGM (EGM2008). We describe errors due to unresolved gravity features, intersatellite biases, and remaining satellite orbit errors, and their impact on the derivation of sea ice freeboard. The latest MSS models, incorporating CryoSat-2 sea surface height measurements, show improved definition of gravity features, such as the Gakkel Ridge. The standard deviation between models ranges from 0.03 to 0.25 m. The impact of remaining MSS/GGM errors on freeboard retrieval can reach several decimeters in parts of the Arctic. While the maximum observed freeboard difference found in the central Arctic was 0.59 m (UCL13 MSS minus EGM2008 GGM), the standard deviation in freeboard differences is 0.03-0.06 m.

  5. Effects of energy chirp on bunch length measurement in linear accelerator beams

    NASA Astrophysics Data System (ADS)

    Sabato, L.; Arpaia, P.; Giribono, A.; Liccardo, A.; Mostacci, A.; Palumbo, L.; Vaccarezza, C.; Variola, A.

    2017-08-01

    The effects of assumptions about bunch properties on the accuracy of the measurement method of the bunch length based on radio frequency deflectors (RFDs) in electron linear accelerators (LINACs) are investigated. In particular, when the electron bunch at the RFD has a non-negligible energy chirp (i.e. a correlation between the longitudinal positions and energies of the particles), the measurement is affected by a deterministic intrinsic error, which is directly related to the RFD phase offset. A case study on this effect in the electron LINAC of a gamma beam source at the Extreme Light Infrastructure-Nuclear Physics (ELI-NP) is reported. The relative error is estimated by using the electron generation and tracking (ELEGANT) code to define the reference measurements of the bunch length. The relative error is shown to increase linearly with the RFD phase offset. In particular, for an offset of 7°, corresponding to a vertical centroid offset at the screen of about 1 mm, the relative error is 4.5%.

  6. Comparison of flood frequency estimates from synthetic and observed data on small drainage areas in Mississippi

    USGS Publications Warehouse

    Colson, B.E.

    1986-01-01

    In 1964 the U.S. Geological Survey in Mississippi expanded the small stream gaging network for collection of rainfall and runoff data to 92 stations. To expedite the availability of flood frequency information, a rainfall-runoff model using available long-term rainfall data was calibrated to synthesize flood peaks. Results obtained from observed annual peak flow data for 51 sites having 16 to 30 years of annual peaks are compared with the synthetic results. Graphical comparison of the 2-, 5-, 10-, 25-, 50-, and 100-year flood discharges indicates good agreement. The root mean square error ranges from 27% to 38% and the synthetic record bias from -9% to -18% in comparison with the observed record. The reduced variance in the synthetic results is attributed to the use of only four long-term rainfall records and to model limitations. The root mean square error and bias are within the accuracy considered to be satisfactory. (Author's abstract)

  7. Evaluation of Satellite and Model Precipitation Products Over Turkey

    NASA Astrophysics Data System (ADS)

    Yilmaz, M. T.; Amjad, M.

    2017-12-01

    Satellite-based remote sensing, gauge stations, and models are the three major platforms for acquiring precipitation datasets. Among them, satellites and models have the advantage of retrieving spatially and temporally continuous and consistent datasets, while the uncertainty estimates of these retrievals are often required for many hydrological studies to understand the source and the magnitude of the uncertainty in hydrological response parameters. In this study, satellite and model precipitation data products are validated over various temporal scales (daily, 3-daily, 7-daily, 10-daily and monthly) using in-situ precipitation observations from a network of 733 gauges from all over Turkey. Tropical Rainfall Measuring Mission (TRMM) Multi-satellite Precipitation Analysis (TMPA) 3B42 version 7 and European Centre for Medium-Range Weather Forecasts (ECMWF) model estimates (daily, 3-daily, 7-daily and 10-daily accumulated forecasts) are used in this study. Retrievals are evaluated for their mean and standard deviation, and their accuracies are evaluated via bias, root mean square error, error standard deviation and correlation coefficient statistics. Intensity vs frequency analysis and contingency table statistics such as percent correct, probability of detection, false alarm ratio and critical success index are determined using daily time series. Both ECMWF forecasts and TRMM observations, on average, overestimate the precipitation compared to gauge estimates; wet biases are 10.26 mm/month and 8.65 mm/month, respectively, for ECMWF and TRMM. RMSE values of ECMWF forecasts and TRMM estimates are 39.69 mm/month and 41.55 mm/month, respectively. Monthly correlations between Gauges-ECMWF, Gauges-TRMM and ECMWF-TRMM are 0.76, 0.73 and 0.81, respectively. The model and the satellite error statistics are further compared against the gauge error statistics based on inverse distance weighting (IDW) analysis. Both the model and satellite data have smaller IDW errors (14.72 mm/month and 10.75 mm/month, respectively) compared to the gauge IDW error (21.58 mm/month). These results show that, on average, ECMWF forecast data have higher skill than TRMM observations. Overall, both ECMWF forecast data and TRMM observations show good potential for catchment-scale hydrological analysis.
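
    The contingency-table scores named above are straightforward to compute from daily series. In the sketch below the 1 mm/day wet-day threshold and the synthetic gauge and satellite series are assumptions, not values from the study:

        import numpy as np

        def contingency_scores(obs_mm, est_mm, thresh=1.0):
            # Percent correct, POD, FAR and CSI for daily rain occurrence,
            # with `thresh` (mm/day) defining a wet day (assumed cutoff).
            obs = obs_mm >= thresh
            est = est_mm >= thresh
            hits = np.sum(obs & est)
            misses = np.sum(obs & ~est)
            false_al = np.sum(~obs & est)
            corr_neg = np.sum(~obs & ~est)
            return {"PC": (hits + corr_neg) / obs.size,
                    "POD": hits / (hits + misses),
                    "FAR": false_al / (hits + false_al),
                    "CSI": hits / (hits + misses + false_al)}

        rng = np.random.default_rng(8)
        gauge = rng.gamma(0.6, 5.0, 365)          # synthetic daily gauge totals
        sat = np.clip(gauge + rng.normal(0, 2, 365), 0, None)
        print(contingency_scores(gauge, sat))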

  8. Portal imaging based definition of the planning target volume during pelvic irradiation for gynecological malignancies.

    PubMed

    Mock, U; Dieckmann, K; Wolff, U; Knocke, T H; Pötter, R

    1999-08-01

    Geometrical accuracy in patient positioning can vary substantially during external radiotherapy. This study estimated the set-up accuracy during pelvic irradiation for gynecological malignancies, for determination of safety margins (planning target volume, PTV). Based on electronic portal imaging devices (EPID), 25 patients undergoing 4-field pelvic irradiation for gynecological malignancies were analyzed with regard to set-up accuracy during the treatment course. Regularly performed EPID images were used in order to assess the systematic and random components of set-up displacements. Anatomical matching of verification and simulation images was followed by measuring corresponding distances between the central axis and anatomical features. Data analysis of set-up errors referred to the x-, y-, and z-axes. Additionally, cumulative frequencies were evaluated. A total of 50 simulation films and 313 verification images were analyzed. For the anterior-posterior (AP) beam direction, mean deviations along the x- and z-axes were 1.5 mm and -1.9 mm, respectively. Moreover, random errors of 4.8 mm (x-axis) and 3.0 mm (z-axis) were determined. Concerning the latero-lateral treatment fields, the systematic errors along the two axes were calculated to be 2.9 mm (y-axis) and -2.0 mm (z-axis), and random errors of 3.8 mm and 3.5 mm were found, respectively. The cumulative frequency of misalignments ≤5 mm showed values of 75% (AP fields) and 72% (latero-lateral fields). With regard to cumulative frequencies ≤10 mm, quantification revealed values of 97% for both beam directions. During external pelvic irradiation for gynecological malignancies, regularly acquired EPID images revealed acceptable set-up inaccuracies. Safety margins (PTV) of 1 cm appear to be sufficient, accounting for more than 95% of all deviations.

  9. Magnitude and Frequency of Floods on Nontidal Streams in Delaware

    USGS Publications Warehouse

    Ries, Kernell G.; Dillow, Jonathan J.A.

    2006-01-01

    Reliable estimates of the magnitude and frequency of annual peak flows are required for the economical and safe design of transportation and water-conveyance structures. This report, done in cooperation with the Delaware Department of Transportation (DelDOT) and the Delaware Geological Survey (DGS), presents methods for estimating the magnitude and frequency of floods on nontidal streams in Delaware at locations where streamgaging stations monitor streamflow continuously and at ungaged sites. Methods are presented for estimating the magnitude of floods for recurrence intervals ranging from 2 through 500 years. These methods are applicable to watersheds exhibiting a full range of urban development conditions. The report also describes StreamStats, a web application that makes it easy to obtain flood-frequency estimates for user-selected locations on Delaware streams. Flood-frequency estimates for ungaged sites are obtained through a process known as regionalization, using statistical regression analysis, where information determined for a group of streamgaging stations within a region forms the basis for estimates for ungaged sites within the region. One hundred and sixteen streamgaging stations in and near Delaware with at least 10 years of non-regulated annual peak-flow data available were used in the regional analysis. Estimates for gaged sites are obtained by combining the station peak-flow statistics (mean, standard deviation, and skew) and peak-flow estimates with regional estimates of skew and flood-frequency magnitudes. Example flood-frequency estimate calculations using the methods presented in the report are given for: (1) ungaged sites, (2) gaged locations, (3) sites upstream or downstream from a gaged location, and (4) sites between gaged locations. Regional regression equations applicable to ungaged sites in the Piedmont and Coastal Plain Physiographic Provinces of Delaware are presented. The equations incorporate drainage area, forest cover, impervious area, basin storage, housing density, soil type A, and mean basin slope as explanatory variables, and have average standard errors of prediction ranging from 28 to 72 percent. Additional regression equations that incorporate drainage area and housing density as explanatory variables are presented for use in defining the effects of urbanization on peak-flow estimates throughout Delaware for the 2-year through 500-year recurrence intervals, along with suggestions for their appropriate use in predicting development-affected peak flows. Additional topics associated with the analyses performed during the study are also discussed, including: (1) the availability and description of more than 30 basin and climatic characteristics considered during the development of the regional regression equations; (2) the treatment of increasing trends in the annual peak-flow series identified at 18 gaged sites, with respect to their relations with maximum 24-hour precipitation and housing density, and their use in the regional analysis; (3) calculation of the 90-percent confidence interval associated with peak-flow estimates from the regional regression equations; and (4) a comparison of flood-frequency estimates at gages used in a previous study, highlighting the effects of various improved analytical techniques.

  10. Detecting an atomic clock frequency anomaly using an adaptive Kalman filter algorithm

    NASA Astrophysics Data System (ADS)

    Song, Huijie; Dong, Shaowu; Wu, Wenjun; Jiang, Meng; Wang, Weixiong

    2018-06-01

    The abnormal frequencies of an atomic clock mainly include frequency jumps and frequency drift jumps. Atomic clock frequency anomaly detection is a key technique in time-keeping. The Kalman filter algorithm, as a linear optimal algorithm, has been widely used in real-time detection of abnormal frequency. In order to obtain an optimal state estimate, the observation model and dynamic model of the Kalman filter algorithm should satisfy Gaussian white noise conditions. The detection performance is degraded if anomalies affect the observation model or dynamic model. The adaptive Kalman filter algorithm, applied here to clock frequency anomaly detection, uses the residuals given by the prediction to build an adaptive factor; the predicted state covariance matrix is then corrected in real time by this adaptive factor. The results show that the model error is reduced and the detection performance is improved. The effectiveness of the algorithm is verified by a frequency jump simulation, a frequency drift jump simulation and measured atomic clock data, using the chi-square test.
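
    An innovation-based adaptive Kalman filter for a two-state (phase, frequency) clock model can be sketched as below. The process and measurement noise levels, the 3-sigma threshold and the covariance inflation rule are assumptions chosen for illustration, not the authors' exact adaptive factor:

        import numpy as np

        def adaptive_kf_flags(z, dt=1.0, q=1e-22, r=1e-20, thresh=3.0):
            # Two-state clock model: x = [phase, frequency]; the predicted
            # state covariance is inflated by an innovation-based factor.
            F = np.array([[1.0, dt], [0.0, 1.0]])
            H = np.array([[1.0, 0.0]])
            Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
            x, P, flags = np.zeros(2), np.eye(2), []
            for zk in z:
                x, P = F @ x, F @ P @ F.T + Q             # prediction
                v = zk - (H @ x).item()                   # innovation
                s = (H @ P @ H.T).item() + r
                ratio = abs(v) / np.sqrt(s)
                if ratio > thresh:                        # adaptive factor
                    P *= (ratio / thresh) ** 2            # inflate covariance
                    s = (H @ P @ H.T).item() + r
                k = (P @ H.T / s).ravel()
                x = x + k * v
                P = (np.eye(2) - np.outer(k, H)) @ P
                flags.append(ratio > thresh)
            return np.array(flags)

        z = np.cumsum(np.full(200, 1e-13))    # phase of a steady 1e-13 offset
        z[120:] += 5e-9                       # injected phase jump at epoch 120
        print(np.where(adaptive_kf_flags(z))[0][:5])   # first flagged epochs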

  11. A function space approach to smoothing with applications to model error estimation for flexible spacecraft control

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.

    1981-01-01

    A function space approach to smoothing is used to obtain a set of model error estimates inherent in a reduced-order model. By establishing knowledge of inevitable deficiencies in the truncated model, the error estimates provide a foundation for updating the model and thereby improving system performance. The function space smoothing solution leads to a specification of a method for computation of the model error estimates and development of model error analysis techniques for comparison between actual and estimated errors. The paper summarizes the model error estimation approach as well as an application arising in the area of modeling for spacecraft attitude control.

  12. Model error estimation for distributed systems described by elliptic equations

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.

    1983-01-01

    A function space approach is used to develop a theory for estimation of the errors inherent in an elliptic partial differential equation model for a distributed parameter system. By establishing knowledge of the inevitable deficiencies in the model, the error estimates provide a foundation for updating the model. The function space solution leads to a specification of a method for computation of the model error estimates and development of model error analysis techniques for comparison between actual and estimated errors. The paper summarizes the model error estimation approach as well as an application arising in the area of modeling for static shape determination of large flexible systems.

  13. Skilled adult readers activate the meanings of high-frequency words using phonology: Evidence from eye tracking.

    PubMed

    Jared, Debra; O'Donnell, Katrina

    2017-02-01

    We examined whether highly skilled adult readers activate the meanings of high-frequency words using phonology when reading sentences for meaning. A homophone-error paradigm was used. Sentences were written to fit 1 member of a homophone pair, and then 2 other versions were created in which the homophone was replaced by its mate or a spelling-control word. The error words were all high-frequency words, and the correct homophones were either higher-frequency words or low-frequency words; that is, the homophone errors were either the subordinate or the dominant member of the pair. Participants read the sentences while their eye movements were tracked. When the high-frequency homophone error words were the subordinate member of the homophone pair, participants had shorter immediate eye-fixation latencies on these words than on matched spelling-control words. In contrast, when the high-frequency homophone error words were the dominant member of the homophone pair, a difference between these words and spelling controls was delayed. These findings provide clear evidence that the meanings of high-frequency words are activated by phonological representations when skilled readers read sentences for meaning. Explanations of the differing patterns of results depending on homophone dominance are discussed.

  14. Reliable absolute analog code retrieval approach for 3D measurement

    NASA Astrophysics Data System (ADS)

    Yu, Shuang; Zhang, Jing; Yu, Xiaoyang; Sun, Xiaoming; Wu, Haibin; Chen, Deyun

    2017-11-01

    The wrapped phase of the phase-shifting approach can be unwrapped using Gray code, but both wrapped-phase error and Gray-code decoding error can produce period jump errors, which lead to gross measurement error. Therefore, this paper presents a reliable absolute analog code retrieval approach. A combination of unequal-period Gray code and phase-shifting patterns at high frequencies is used to obtain a high-frequency absolute analog code, and the same unequal-period combination at low frequencies yields a low-frequency absolute analog code. The difference between the two absolute analog codes is then employed to eliminate period jump errors, producing a reliable unwrapped result. Error analysis was used to determine the applicable conditions. Theoretical analysis and experimental results demonstrate that the proposed approach can perform reliable analog code unwrapping.
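
    For reference, the standard relations assumed by this family of methods (a summary in generic notation, not necessarily the paper's): for an N-step phase-shifting sequence the wrapped phase is

      \phi(x,y) = \arctan\frac{\sum_{n=0}^{N-1} I_n(x,y)\,\sin(2\pi n/N)}
                              {\sum_{n=0}^{N-1} I_n(x,y)\,\cos(2\pi n/N)},

    and Gray-code decoding supplies a fringe order k(x,y), giving the absolute code

      \Phi(x,y) = \phi(x,y) + 2\pi\,k(x,y).

    A period jump error occurs when noise in the wrapped phase or a decoding error shifts k by one count near a period boundary, offsetting the absolute code by ±2π; differencing two absolute codes obtained at different carrier frequencies exposes and removes such jumps.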

  15. Estimation of flood discharges at selected annual exceedance probabilities for unregulated, rural streams in Vermont, with a section on Vermont regional skew regression

    USGS Publications Warehouse

    Olson, Scott A.; with a section by Veilleux, Andrea G.

    2014-01-01

    This report provides estimates of flood discharges at selected annual exceedance probabilities (AEPs) for streamgages in and adjacent to Vermont and equations for estimating flood discharges at AEPs of 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent (recurrence intervals of 2-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-years, respectively) for ungaged, unregulated, rural streams in Vermont. The equations were developed using generalized least-squares regression. Flood-frequency and drainage-basin characteristics from 145 streamgages were used in developing the equations. The drainage-basin characteristics used as explanatory variables in the regression equations include drainage area, percentage of wetland area, and the basin-wide mean of the average annual precipitation. The average standard errors of prediction for estimating the flood discharges at the 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent AEP with these equations are 34.9, 36.0, 38.7, 42.4, 44.9, 47.3, 50.7, and 55.1 percent, respectively. Flood discharges at selected AEPs for streamgages were computed by using the Expected Moments Algorithm. To improve estimates of the flood discharges for given exceedance probabilities at streamgages in Vermont, a new generalized skew coefficient was developed. The new generalized skew for the region is a constant, 0.44. The mean square error of the generalized skew coefficient is 0.078. This report describes a technique for using results from the regression equations to adjust an AEP discharge computed from a streamgage record. This report also describes a technique for using a drainage-area adjustment to estimate flood discharge at a selected AEP for an ungaged site upstream or downstream from a streamgage. The final regression equations and the flood-discharge frequency data used in this study will be available in StreamStats. StreamStats is a World Wide Web application providing automated regression-equation solutions for user-selected sites on streams.
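
    The drainage-area adjustment mentioned above generally takes the standard power-law form (the exponent used in the report is not reproduced here):

      Q_u = Q_g \left( \frac{A_u}{A_g} \right)^{b},

    where Q_g is the AEP discharge computed at the streamgage, A_g and A_u are the drainage areas of the gaged and ungaged sites, and b is a regression-derived exponent.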

  16. Catch-up saccades in head-unrestrained conditions reveal that saccade amplitude is corrected using an internal model of target movement

    PubMed Central

    Daye, Pierre M.; Blohm, Gunnar; Lefèvre, Phillippe

    2014-01-01

    This study analyzes how human participants combine saccadic and pursuit gaze movements when they track an oscillating target moving along a randomly oriented straight line with the head free to move. We found that to track the moving target appropriately, participants triggered more saccades with increasing target oscillation frequency to compensate for imperfect tracking gains. Our sinusoidal paradigm allowed us to show that saccade amplitude was better correlated with internal estimates of position and velocity error at saccade onset than with those parameters 100 ms before saccade onset as head-restrained studies have shown. An analysis of saccadic onset time revealed that most of the saccades were triggered when the target was accelerating. Finally, we found that most saccades were triggered when small position errors were combined with large velocity errors at saccade onset. This could explain why saccade amplitude was better correlated with velocity error than with position error. Therefore, our results indicate that the triggering mechanism of head-unrestrained catch-up saccades combines position and velocity error at saccade onset to program and correct saccade amplitude rather than using sensory information 100 ms before saccade onset. PMID:24424378

  17. Recommendations to Improve the Accuracy of Estimates of Physical Activity Derived from Self Report

    PubMed Central

    Ainsworth, Barbara E; Caspersen, Carl J; Matthews, Charles E; Mâsse, Louise C; Baranowski, Tom; Zhu, Weimo

    2013-01-01

    Context Assessment of physical activity using self-report has the potential for measurement error that can lead to incorrect inferences about physical activity behaviors and bias study results. Objective To provide recommendations to improve the accuracy of physical activity derived from self report. Process We provide an overview of presentations and a compilation of perspectives shared by the authors of this paper and workgroup members. Findings We identified a conceptual framework for reducing errors using physical activity self-report questionnaires. The framework identifies six steps to reduce error: (1) identifying the need to measure physical activity, (2) selecting an instrument, (3) collecting data, (4) analyzing data, (5) developing a summary score, and (6) interpreting data. Underlying the first four steps are behavioral parameters of type, intensity, frequency, and duration of physical activities performed, activity domains, and the location where activities are performed. We identified ways to reduce measurement error at each step and made recommendations for practitioners, researchers, and organizational units to reduce error in questionnaire assessment of physical activity. Conclusions Self-report measures of physical activity have a prominent role in research and practice settings. Measurement error can be reduced by applying the framework discussed in this paper. PMID:22287451

  18. Tolerance of the frequency deviation of LO sources at a MIMO system

    NASA Astrophysics Data System (ADS)

    Xiao, Jiangnan; Li, Xingying; Zhang, Zirang; Xu, Yuming; Chen, Long; Yu, Jianjun

    2015-11-01

    We analyze and simulate the tolerance to frequency offset of a W-band optical-wireless transmission system. The system adopts optical polarization division multiplexing (PDM) and multiple-input multiple-output (MIMO) reception. The transmitted signal uses optical quadrature phase shift keying (QPSK) modulation, and the millimeter-wave carrier is generated by optical heterodyning. After 20-km single-mode fiber-28 (SMF-28) transmission, a millimeter-wave signal of tens of Gb/s is delivered. At the receiver, two millimeter-wave signals are down-converted to electrical intermediate-frequency (IF) signals in the analog domain by mixing with two electrical local oscillators (LOs) of different frequencies. For the first time, we investigate the effect of LOs with different frequencies on 2×2 MIMO system performance, finding that performing frequency offset estimation (FOE) in the DSP before cascaded multi-modulus-algorithm (CMMA) equalization removes the inter-channel interference (ICI) and improves the bit-error-ratio (BER) performance of this type of transmission system.
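
    One common FOE technique for QPSK is the fourth-power method: raising the signal to the fourth power strips the modulation, leaving a spectral line at four times the offset. The sketch below is a generic illustration, not necessarily the FOE algorithm used in the paper.

      import numpy as np

      def foe_qpsk(x, fs):
          """Estimate the carrier frequency offset of a QPSK signal.

          The 4th power removes the QPSK phase modulation, so the FFT of
          x**4 peaks at 4*offset. x: complex baseband samples; fs: sample
          rate in Hz. Resolution is one FFT bin divided by 4.
          """
          X = np.fft.fft(x**4)
          freqs = np.fft.fftfreq(len(x), d=1.0 / fs)
          return freqs[np.argmax(np.abs(X))] / 4.0

      # quick self-check with a synthetic 12.3-kHz offset
      fs, n, df = 1e6, 2**14, 12.3e3
      sym = np.exp(1j * (np.pi / 4 + np.pi / 2 * np.random.randint(0, 4, n)))
      x = sym * np.exp(2j * np.pi * df * np.arange(n) / fs)
      print(foe_qpsk(x, fs))   # ~12.3e3, within one FFT bin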

  19. Dual-sensitivity profilometry with defocused projection of binary fringes.

    PubMed

    Garnica, G; Padilla, M; Servin, M

    2017-10-01

    A dual-sensitivity profilometry technique based on defocused projection of binary fringes is presented. Here, two sets of fringe patterns with a sinusoidal profile are produced by applying the same analog low-pass filter (projector defocusing) to binary fringes with a high- and low-frequency spatial carrier. The high-frequency fringes have a binary square-wave profile, while the low-frequency binary fringes are produced with error-diffusion dithering. The binary nature of the fringes removes the need to calibrate the projector's nonlinear gamma. Working with high-frequency carrier fringes, we obtain a high-quality wrapped phase; working with low-frequency carrier fringes, we obtain a lower-quality, nonwrapped phase map. The nonwrapped estimate is used as a stepping stone for dual-sensitivity temporal phase unwrapping, extending the applicability of the technique to discontinuous (piecewise continuous) surfaces. We propose a single defocusing level for faster high- and low-frequency fringe data acquisition. The proposed technique is validated with experimental results.
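
    The dual-sensitivity unwrapping step admits a compact generic form; this sketch assumes the low-frequency phase map is already nonwrapped and that the two carriers are related by a known ratio, as described above.

      import numpy as np

      def temporal_unwrap(phi_high_wrapped, phi_low, ratio):
          """Dual-sensitivity temporal phase unwrapping (generic form).

          phi_high_wrapped: wrapped high-frequency phase in (-pi, pi];
          phi_low: nonwrapped, noisier low-frequency phase; ratio: high/low
          carrier frequency ratio. The scaled low-frequency phase predicts
          the fringe order k; rounding makes the result robust to
          low-frequency noise below half a fringe period.
          """
          k = np.round((ratio * phi_low - phi_high_wrapped) / (2 * np.pi))
          return phi_high_wrapped + 2 * np.pi * k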

  20. Estimating residual fault hitting rates by recapture sampling

    NASA Technical Reports Server (NTRS)

    Lee, Larry; Gupta, Rajan

    1988-01-01

    For the recapture debugging design introduced by Nayak (1988) the problem of estimating the hitting rates of the faults remaining in the system is considered. In the context of a conditional likelihood, moment estimators are derived and are shown to be asymptotically normal and fully efficient. Fixed sample properties of the moment estimators are compared, through simulation, with those of the conditional maximum likelihood estimators. Properties of the conditional model are investigated such as the asymptotic distribution of linear functions of the fault hitting frequencies and a representation of the full data vector in terms of a sequence of independent random vectors. It is assumed that the residual hitting rates follow a log linear rate model and that the testing process is truncated when the gaps between the detection of new errors exceed a fixed amount of time.

  1. Compensating for estimation smoothing in kriging

    USGS Publications Warehouse

    Olea, R.A.; Pawlowsky, Vera

    1996-01-01

    Smoothing is a characteristic inherent to all minimum mean-square-error spatial estimators such as kriging. Cross-validation can be used to detect and model such smoothing. Inversion of the model produces a new estimator, compensated kriging. A numerical comparison based on an exhaustive permeability sampling of a 4-ft2 slab of Berea Sandstone shows that the estimation surface generated by compensated kriging has properties intermediate between those generated by ordinary kriging and stochastic realizations resulting from simulated annealing and sequential Gaussian simulation. The frequency distribution is well reproduced by the compensated kriging surface, which also approximates the experimental semivariogram well, better than ordinary kriging though not as well as the stochastic realizations. Compensated kriging produces surfaces that are more accurate than stochastic realizations, but not as accurate as ordinary kriging. © 1996 International Association for Mathematical Geology.

  2. Cost effective stream-gaging strategies for the Lower Colorado River basin; the Blythe field office operations

    USGS Publications Warehouse

    Moss, Marshall E.; Gilroy, Edward J.

    1980-01-01

    This report describes the theoretical developments and illustrates the applications of techniques that recently have been assembled to analyze the cost-effectiveness of federally funded stream-gaging activities in support of the Colorado River compact and subsequent adjudications. The cost effectiveness of 19 stream gages in terms of minimizing the sum of the variances of the errors of estimation of annual mean discharge is explored by means of a sequential-search optimization scheme. The search is conducted over a set of decision variables that describes the number of times that each gaging route is traveled in a year. A gage route is defined as the most expeditious circuit that is made from a field office to visit one or more stream gages and return to the office. The error variance is defined as a function of the frequency of visits to a gage by using optimal estimation theory. Currently a minimum of 12 visits per year is made to any gage. By changing to a six-visit minimum, the same total error variance can be attained for the 19 stations with a budget of 10% less than the current one. Other strategies are also explored. (USGS)
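
    A toy version of the sequential search is sketched below. The inverse-visits variance model and the greedy marginal-gain rule are hypothetical simplifications; the report derives the variance-versus-visits relationship from optimal estimation theory and searches over route frequencies rather than per-gage visits.

      import numpy as np

      def allocate_visits(var0, cost, budget, n_min=6):
          """Greedy allocation of annual visits to minimize total variance.

          Assumes (hypothetically) that gage i's error variance falls as
          var0[i] / n with n visits per year. cost: cost per visit; the
          search stops when the best marginal improvement is unaffordable.
          """
          n = np.full(len(var0), n_min, dtype=int)
          spent = np.sum(n * cost)
          while True:
              # marginal variance reduction per dollar for one more visit
              gain = (var0 / n - var0 / (n + 1)) / cost
              i = int(np.argmax(gain))
              if spent + cost[i] > budget:
                  return n
              n[i] += 1
              spent += cost[i]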

  3. Search for Long Period Solar Normal Modes in Ambient Seismic Noise

    NASA Astrophysics Data System (ADS)

    Caton, R.; Pavlis, G. L.

    2016-12-01

    We search for evidence of solar free oscillations (normal modes) in long period seismic data through multitaper spectral analysis of array stacks. This analysis is similar to that of Thomson & Vernon (2015), who used data from the quietest single stations of the global seismic network. Our approach is to use stacks of large arrays of noisier stations to reduce noise. Arrays have the added advantage of permitting the use of nonparametric statistics (jackknife errors) to provide objective error estimates. We used data from the Transportable Array, the broadband borehole array at Pinyon Flat, and the 3D broadband array in Homestake Mine in Lead, SD. The Homestake Mine array has 15 STS-2 sensors deployed in the mine that are extremely quiet at long periods due to stable temperatures and stable piers anchored to hard rock. The length of time series used ranged from 50 days to 85 days. We processed the data by low-pass filtering with a corner frequency of 10 mHz, followed by an autoregressive prewhitening filter and median stack. We elected to use the median instead of the mean in order to get a more robust stack. We then used G. Prieto's mtspec library to compute multitaper spectrum estimates on the data. We produce delete-one jackknife error estimates of the uncertainty at each frequency by computing median stacks of all data with one station removed. The results from the TA data show tentative evidence for several lines between 290 μHz and 400 μHz, including a recurring line near 379 μHz. This 379 μHz line is near the Earth mode 0T2 and the solar mode 5g5, suggesting that 5g5 could be coupling into the Earth mode. Current results suggest more statistically significant lines may be present in Pinyon Flat data, but additional processing of the data is underway to confirm this observation.
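
    The stacking and uncertainty steps can be sketched as follows; this fragment uses a basic DPSS multitaper estimate in place of the mtspec routines, and the delete-one jackknife over median stacks follows the procedure described above.

      import numpy as np
      from scipy.signal.windows import dpss

      def multitaper_psd(x, dt, nw=4.0, k=7):
          """Simple multitaper PSD: average of k DPSS eigenspectra."""
          tapers = dpss(len(x), nw, k)                    # shape (k, n)
          specs = np.abs(np.fft.rfft(tapers * x, axis=1))**2
          return np.fft.rfftfreq(len(x), dt), 2 * dt * specs.mean(axis=0)

      def median_stack_jackknife(traces, dt):
          """Median stack of station spectra with delete-one jackknife errors."""
          specs = np.array([multitaper_psd(tr, dt)[1] for tr in traces])
          stack = np.median(specs, axis=0)
          loo = np.array([np.median(np.delete(specs, i, axis=0), axis=0)
                          for i in range(len(specs))])
          m = len(specs)
          se = np.sqrt((m - 1) / m * np.sum((loo - loo.mean(axis=0))**2, axis=0))
          return stack, se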

  4. Fast Bayesian approach for modal identification using forced vibration data considering the ambient effect

    NASA Astrophysics Data System (ADS)

    Ni, Yan-Chun; Zhang, Feng-Liang

    2018-05-01

    Modal identification based on vibration responses measured from real structures is becoming more popular, benefiting from great improvements in measurement technology. The results provide reliable estimates of dynamic performance, meeting the growing demands of new structural design configurations. However, high-quality vibration data collection calls for a correspondingly accurate modal identification method. Throughout the measurement process of dynamic testing, many factors give rise to uncertainty, such as measurement noise, alignment error, and modeling error, since the test conditions are not directly controlled. To address these demands, a Bayesian statistical approach is developed in this work to estimate modal parameters from the forced vibration response of structures while simultaneously accounting for the effect of ambient vibration. The method uses the Fast Fourier Transform (FFT) of the data in a selected frequency band to identify the modal parameters of the mode dominating that band and to estimate the remaining uncertainty of the parameters. Existing modal identification methods for forced vibration generally assume that the forced vibration response dominates the measurement data and ignore the influence of the ambient vibration response. However, ambient vibration causes modeling error and affects the accuracy of the identified results; its influence appears in the spectra as phenomena that are difficult to explain and irrelevant to the mode being identified. These issues make a careful choice of assumptions in the identification model, and a fundamental formulation that accounts for uncertainty, necessary. Computational difficulties associated with calculating the posterior statistics are addressed, and a fast computational algorithm is proposed so that the method can be practically implemented. Numerical verification with synthetic data and practical investigation with data from full-scale field structures are both carried out for the proposed method.

  5. Fast Generation of Ensembles of Cosmological N-Body Simulations via Mode-Resampling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schneider, M D; Cole, S; Frenk, C S

    2011-02-14

    We present an algorithm for quickly generating multiple realizations of N-body simulations to be used, for example, for cosmological parameter estimation from surveys of large-scale structure. Our algorithm uses a new method to resample the large-scale (Gaussian-distributed) Fourier modes in a periodic N-body simulation box in a manner that properly accounts for the nonlinear mode-coupling between large and small scales. We find that our method for adding new large-scale mode realizations recovers the nonlinear power spectrum to sub-percent accuracy on scales larger than about half the Nyquist frequency of the simulation box. Using 20 N-body simulations, we obtain a power spectrum covariance matrix estimate that matches the estimator from Takahashi et al. (from 5000 simulations) with < 20% errors in all matrix elements. Comparing the rates of convergence, we determine that our algorithm requires approximately 8 times fewer simulations to achieve a given error tolerance in estimates of the power spectrum covariance matrix. The degree of success of our algorithm indicates that we understand the main physical processes that give rise to the correlations in the matter power spectrum. Namely, the large-scale Fourier modes modulate both the degree of structure growth through the variation in the effective local matter density and also the spatial frequency of small-scale perturbations through large-scale displacements. We expect our algorithm to be useful for noise modeling when constraining cosmological parameters from weak lensing (cosmic shear) and galaxy surveys, rescaling summary statistics of N-body simulations for new cosmological parameter values, and any applications where the influence of Fourier modes larger than the simulation size must be accounted for.
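
    For orientation, a naive form of large-scale mode replacement is sketched below; it redraws the Gaussian large-scale modes but omits the nonlinear mode-coupling correction that is the paper's actual contribution, and the P(k) normalization depends on the FFT convention.

      import numpy as np

      def resample_large_modes(delta, box, k_cut, pk, seed=0):
          """Redraw Fourier modes with |k| < k_cut in a periodic box.

          delta: real 3D density-contrast grid; box: box side length;
          pk: callable returning the power spectrum P(k).
          """
          rng = np.random.default_rng(seed)
          n = delta.shape[0]
          dk = np.fft.fftn(delta)
          k1d = 2 * np.pi * np.fft.fftfreq(n, d=box / n)
          kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
          kmag = np.sqrt(kx**2 + ky**2 + kz**2)
          mask = (kmag > 0) & (kmag < k_cut)
          sigma = np.sqrt(pk(kmag[mask]) / 2)   # convention-dependent scale
          dk[mask] = sigma * (rng.standard_normal(mask.sum())
                              + 1j * rng.standard_normal(mask.sum()))
          # taking the real part keeps the Hermitian component of the
          # replaced modes, so the returned field is real
          return np.real(np.fft.ifftn(dk))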

  6. Using frequency response functions to manage image degradation from equipment vibration in the Daniel K. Inouye Solar Telescope

    NASA Astrophysics Data System (ADS)

    McBride, William R.; McBride, Daniel R.

    2016-08-01

    The Daniel K. Inouye Solar Telescope (DKIST) will be the largest solar telescope in the world, providing a significant increase in the resolution of solar data available to the scientific community. Vibration mitigation is critical in long focal-length telescopes such as the Inouye Solar Telescope, especially when adaptive optics are employed to correct for atmospheric seeing. For this reason, a vibration error budget has been implemented. Initially, the FRFs for the various mounting points of ancillary equipment were estimated using the finite element analysis (FEA) of the telescope structures. FEA is well documented and understood; the focus of this paper is on the methods involved in estimating a set of experimental (measured) transfer functions of the as-built telescope structure for the purpose of vibration management. Techniques to measure low-frequency single-input-single-output (SISO) frequency response functions (FRF) between vibration source locations and image motion on the focal plane are described. The measurement equipment includes an instrumented inertial-mass shaker capable of operation down to 4 Hz along with seismic accelerometers. The measurement of vibration at frequencies below 10 Hz with good signal-to-noise ratio (SNR) requires several noise reduction techniques including high-performance windows, noise-averaging, tracking filters, and spectral estimation. These signal-processing techniques are described in detail.
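
    A minimal sketch of the SISO FRF measurement, assuming Welch-averaged spectra (the full chain described above also includes tracking filters and high-performance windows):

      import numpy as np
      from scipy.signal import csd, welch

      def frf_h1(force, response, fs, nperseg=4096):
          """H1 frequency response function estimate with coherence.

          force: shaker drive signal; response: image-motion signal;
          fs: sample rate. Averaged spectra suppress noise, which matters
          most below 10 Hz where the SNR is poor.
          """
          f, pxx = welch(force, fs, nperseg=nperseg)
          _, pyy = welch(response, fs, nperseg=nperseg)
          _, pxy = csd(force, response, fs, nperseg=nperseg)
          h1 = pxy / pxx                        # H1 estimator: Sxy / Sxx
          coh = np.abs(pxy)**2 / (pxx * pyy)    # flags unreliable bins
          return f, h1, coh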

  7. Models and methods to characterize site amplification from a pair of records

    USGS Publications Warehouse

    Safak, E.

    1997-01-01

    The paper presents a tutorial review of the models and methods that are used to characterize site amplification from the pairs of rock- and soil-site records, and introduces some new techniques with better theoretical foundations. The models and methods discussed include spectral and cross-spectral ratios, spectral ratios for downhole records, response spectral ratios, constant amplification factors, parametric models, physical models, and time-varying filters. An extensive analytical and numerical error analysis of spectral and cross-spectral ratios shows that probabilistically cross-spectral ratios give more reliable estimates of site amplification. Spectral ratios should not be used to determine site amplification from downhole-surface recording pairs because of the feedback in the downhole sensor. Response spectral ratios are appropriate for low frequencies, but overestimate the amplification at high frequencies. The best method to be used depends on how much precision is required in the estimates.
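
    The two ratio estimators can be compared with a short sketch (a generic illustration over stationary segments of a rock/soil record pair):

      import numpy as np
      from scipy.signal import csd, welch

      def site_amplification(rock, soil, fs, nperseg=1024):
          """Plain spectral ratio vs. cross-spectral ratio.

          The spectral ratio sqrt(Pss/Prr) is biased upward by noise in
          the soil record; the cross-spectral ratio |Prs|/Prr down-weights
          incoherent noise, which is why it is the more reliable estimator.
          """
          f, prr = welch(rock, fs, nperseg=nperseg)
          _, pss = welch(soil, fs, nperseg=nperseg)
          _, prs = csd(rock, soil, fs, nperseg=nperseg)
          return f, np.sqrt(pss / prr), np.abs(prs) / prr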

  8. Experimental research of UWB over fiber system employing 128-QAM and ISFA-optimized scheme

    NASA Astrophysics Data System (ADS)

    He, Jing; Xiang, Changqing; Long, Fengting; Chen, Zuo

    2018-05-01

    In this paper, an optimized intra-symbol frequency-domain averaging (ISFA) scheme is proposed and experimentally demonstrated in intensity-modulation and direct-detection (IMDD) multiband orthogonal frequency division multiplexing (MB-OFDM) ultra-wideband over fiber (UWBoF) system. According to the channel responses of three MB-OFDM UWB sub-bands, the optimal ISFA window size for each sub-band is investigated. After 60-km standard single mode fiber (SSMF) transmission, the experimental results show that, at the bit error rate (BER) of 3.8 × 10-3, the receiver sensitivity of 128-quadrature amplitude modulation (QAM) can be improved by 1.9 dB using the proposed enhanced ISFA combined with training sequence (TS)-based channel estimation scheme, compared with the conventional TS-based channel estimation. Moreover, the spectral efficiency (SE) is up to 5.39 bit/s/Hz.
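
    As a rough illustration of ISFA (a generic sliding-average form, not the paper's optimized per-sub-band scheme):

      import numpy as np

      def isfa(h_est, window):
          """Intra-symbol frequency-domain averaging of a channel estimate.

          Averages the per-subcarrier channel estimates over `window`
          adjacent subcarriers, exploiting the channel's slow variation in
          frequency to suppress estimation noise; the optimal window size
          depends on each sub-band's channel response, as studied above.
          h_est: complex per-subcarrier estimates; window: odd integer.
          """
          half = window // 2
          out = np.empty_like(h_est)
          for i in range(len(h_est)):
              lo, hi = max(0, i - half), min(len(h_est), i + half + 1)
              out[i] = h_est[lo:hi].mean()
          return out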

  9. Bias in error estimation when using cross-validation for model selection.

    PubMed

    Varma, Sudhir; Simon, Richard

    2006-02-23

    Cross-validation (CV) is an effective method for estimating the prediction error of a classifier. Some recent articles have proposed methods for optimizing classifiers by choosing classifier parameter values that minimize the CV error estimate. We have evaluated the validity of using the CV error estimate of the optimized classifier as an estimate of the true error expected on independent data. We used CV to optimize the classification parameters for two kinds of classifiers: Shrunken Centroids and Support Vector Machines (SVM). Random training datasets were created, with no difference in the distribution of the features between the two classes. Using these "null" datasets, we selected classifier parameter values that minimized the CV error estimate. 10-fold CV was used for Shrunken Centroids while Leave-One-Out-CV (LOOCV) was used for the SVM. Independent test data was created to estimate the true error. With "null" and "non-null" (with differential expression between the classes) data, we also tested a nested CV procedure, where an inner CV loop is used to perform the tuning of the parameters while an outer CV is used to compute an estimate of the error. The CV error estimate for the classifier with the optimal parameters was found to be a substantially biased estimate of the true error that the classifier would incur on independent data. Even though there is no real difference between the two classes for the "null" datasets, the CV error estimate for the Shrunken Centroid with the optimal parameters was less than 30% on 18.5% of simulated training datasets. For SVM with optimal parameters the estimated error rate was less than 30% on 38% of "null" datasets. Performance of the optimized classifiers on the independent test set was no better than chance. The nested CV procedure reduces the bias considerably and gives an estimate of the error that is very close to that obtained on the independent testing set for both Shrunken Centroids and SVM classifiers for "null" and "non-null" data distributions. We show that using CV to compute an error estimate for a classifier that has itself been tuned using CV gives a significantly biased estimate of the true error. Proper use of CV for estimating true error of a classifier developed using a well defined algorithm requires that all steps of the algorithm, including classifier parameter tuning, be repeated in each CV loop. A nested CV procedure provides an almost unbiased estimate of the true error.
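
    The nested CV procedure maps directly onto standard tooling; a minimal sketch with scikit-learn (the dataset and parameter grid are illustrative):

      from sklearn.datasets import make_classification
      from sklearn.model_selection import GridSearchCV, cross_val_score
      from sklearn.svm import SVC

      # Inner loop tunes the SVM parameters; outer loop estimates the error
      # of the whole tuning-plus-fitting procedure, avoiding the optimistic
      # bias described above.
      X, y = make_classification(n_samples=200, n_features=50, random_state=0)
      inner = GridSearchCV(SVC(), {"C": [0.1, 1, 10], "gamma": [1e-3, 1e-2]}, cv=5)
      outer_scores = cross_val_score(inner, X, y, cv=10)
      print("nearly unbiased error estimate:", 1 - outer_scores.mean())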

  10. Parallel computers - Estimate errors caused by imprecise data

    NASA Technical Reports Server (NTRS)

    Kreinovich, Vladik; Bernat, Andrew; Villa, Elsa; Mariscal, Yvonne

    1991-01-01

    A new approach to the problem of estimating errors caused by imprecise data is proposed in the context of software engineering. An ideal solution would be a software device capable of computing the errors of arbitrary programs. The software-engineering aspect of the problem is to describe such a device in software terms and then provide the user with precise numbers accompanied by error estimates. The feasibility of a program that computes both a quantity and its error estimate over the range of possible measurement errors is demonstrated.
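
    The flavor of such a device can be conveyed with a minimal interval-arithmetic sketch (an illustration, not the system proposed in the paper):

      # Propagate measurement error bounds through a computation so the
      # program returns both a value and a guaranteed error range.
      class Interval:
          def __init__(self, lo, hi):
              self.lo, self.hi = lo, hi
          def __add__(self, o):
              return Interval(self.lo + o.lo, self.hi + o.hi)
          def __mul__(self, o):
              ps = [self.lo * o.lo, self.lo * o.hi,
                    self.hi * o.lo, self.hi * o.hi]
              return Interval(min(ps), max(ps))
          def __repr__(self):
              mid = (self.lo + self.hi) / 2
              return f"{mid} ± {(self.hi - self.lo) / 2}"

      x = Interval(1.9, 2.1)   # measured value 2.0 with ±0.1 error
      y = Interval(2.9, 3.1)
      print(x * y + x)         # result carries rigorous error bounds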

  11. Fried food intake estimated by the multiple source method is associated with gestational weight gain.

    PubMed

    Sartorelli, Daniela S; Barbieri, Patrícia; Perdoná, Gleici C S

    2014-08-01

    The present study aimed to test the association between fried food intake estimated by a semiquantitative food frequency questionnaire (FFQ), multiple 24-hour dietary recalls (24hRs), and the application of the multiple source method (MSM) in relation to gestational weight gain at the second and third trimesters and weight gain ratio (observed weight gain/expected weight gain). We hypothesized that distinct relationships with weight gain would be found given the measurement errors of self-reported dietary approaches. A prospective study was conducted with 88 adult pregnant women. Fried food intake during pregnancy was assessed using a validated 85-item FFQ, two to six 24hRs per woman, and the MSM with and without frequency of food intake as a covariate. Linear regression models were used to evaluate the relationship between fried food estimated by the methods and weight gain. For every 100-g increment of fried food intake, the β (95% confidence interval) for weight gain was 1.87 (0.34, 3.40) and 2.00 (0.55, 3.45) for estimates using the MSM with and without the frequency of intake as a covariate, respectively, after multiple adjustments. We found that fried food intake estimated by the FFQ and 24hRs, β 0.40 (-0.68, 1.48) and β 0.49 (-0.53, 1.52), respectively, was unrelated to weight gain. In relation to weight gain ratio, a positive association was found for estimates using the MSM with [β 0.29 (0.03, 0.54)] and without the frequency of intake as a covariate [β 0.31 (0.07, 0.55)], and no associations were found for estimates by the FFQ or 24hRs. The data showed that fried food intake estimated by the MSM, but not by the FFQ or 24hRs, is associated with excessive weight gain during pregnancy. Copyright © 2014 Elsevier Inc. All rights reserved.

  12. Single Platform Geolocation of Radio Frequency Emitters

    DTIC Science & Technology

    2015-03-26

    Fragmentary excerpts from the report: glossary entries (… Error; SNR, Signal to Noise Ratio; SOI, Signal of Interest; STK, Systems Tool Kit; UCA, Uniform Circular Array; WGS, World Geodetic System) and text snippets: "Section 2.6 describes a method to visualize the confidence of estimated parameters." "2.1 Coordinate Systems and Reference Frames. The following…" "…be used to visualize the confidence surface using the method developed in Section 2.6. The NLO method will be shown to be the minimization of the…"

  13. 40 CFR 1065.1005 - Symbols, abbreviations, acronyms, and units of measure.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    Excerpts from the symbols table: … least squares regression; β, ratio of diameters, meter per meter (m/m); β, atomic oxygen to carbon ratio, mole per mole …; … consumption, gram per kilowatt hour, g/(kW·hr) (g·3.6−1·106·m−2·kg·s2); F, F-test statistic; f, frequency, hertz (Hz), s−1 …; … standard deviation; S, Sutherland constant, kelvin (K); SEE, standard estimate of error; T, absolute temperature …

  14. 40 CFR 1065.1005 - Symbols, abbreviations, acronyms, and units of measure.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    Excerpts from the symbols table: … least squares regression; β, ratio of diameters, meter per meter (m/m); β, atomic oxygen to carbon ratio, mole per mole …; … consumption, gram per kilowatt hour, g/(kW·hr) (g·3.6−1·106·m−2·kg·s2); F, F-test statistic; f, frequency, hertz (Hz), s−1 …; … standard deviation; S, Sutherland constant, kelvin (K); SEE, standard estimate of error; T, absolute temperature …

  15. Correcting the Standard Errors of 2-Stage Residual Inclusion Estimators for Mendelian Randomization Studies.

    PubMed

    Palmer, Tom M; Holmes, Michael V; Keating, Brendan J; Sheehan, Nuala A

    2017-11-01

    Mendelian randomization studies use genotypes as instrumental variables to test for and estimate the causal effects of modifiable risk factors on outcomes. Two-stage residual inclusion (TSRI) estimators have been used when researchers are willing to make parametric assumptions. However, researchers are currently reporting uncorrected or heteroscedasticity-robust standard errors for these estimates. We compared several different forms of the standard error for linear and logistic TSRI estimates in simulations and in real-data examples. Among others, we consider standard errors modified from the approach of Newey (1987), Terza (2016), and bootstrapping. In our simulations Newey, Terza, bootstrap, and corrected 2-stage least squares (in the linear case) standard errors gave the best results in terms of coverage and type I error. In the real-data examples, the Newey standard errors were 0.5% and 2% larger than the unadjusted standard errors for the linear and logistic TSRI estimators, respectively. We show that TSRI estimators with modified standard errors have correct type I error under the null. Researchers should report TSRI estimates with modified standard errors instead of reporting unadjusted or heteroscedasticity-robust standard errors. © The Author(s) 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health.
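
    Of the alternatives compared, the bootstrap is the simplest to sketch. The fragment below implements a linear TSRI estimator and resamples individuals through both stages, which captures the first-stage uncertainty that unadjusted second-stage standard errors ignore (variable names are hypothetical: g is the genotype instrument, x the exposure, y the outcome).

      import numpy as np

      def tsri_bootstrap_se(g, x, y, n_boot=2000, seed=0):
          """Linear TSRI point estimate and bootstrap standard error."""
          rng = np.random.default_rng(seed)

          def tsri(g, x, y):
              # stage 1: exposure on instrument; keep the residual
              G = np.column_stack([np.ones_like(g), g])
              resid = x - G @ np.linalg.lstsq(G, x, rcond=None)[0]
              # stage 2: outcome on exposure plus stage-1 residual
              X = np.column_stack([np.ones_like(x), x, resid])
              return np.linalg.lstsq(X, y, rcond=None)[0][1]

          n = len(y)
          boots = [tsri(g[idx], x[idx], y[idx])
                   for idx in (rng.integers(0, n, n) for _ in range(n_boot))]
          return tsri(g, x, y), np.std(boots, ddof=1)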

  16. Frequency of pediatric medication administration errors and contributing factors.

    PubMed

    Ozkan, Suzan; Kocaman, Gulseren; Ozturk, Candan; Seren, Seyda

    2011-01-01

    This study examined the frequency of pediatric medication administration errors and contributing factors. This research used the undisguised observation method and Critical Incident Technique. Errors and contributing factors were classified through the Organizational Accident Model. Errors were made in 36.5% of the 2344 doses that were observed. The most frequent errors were those associated with administration at the wrong time. According to the results of this study, errors arise from problems within the system.

  17. Gaucher disease: Gene frequencies in the Ashkenazi Jewish population

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beutler, E.; West, C.; Gelbart, T.

    1993-01-01

    DNA from over 2,000 Ashkenazi Jewish subjects has been examined for the four most common Jewish Gaucher disease mutations, which collectively account for about 96% of the disease-producing alleles in Jewish patients. This population survey has made possible the estimation of gene frequencies for these alleles. Eighty-seven of 1,528 individuals were heterozygous for the 1226G (N370S) mutation, and four presumably well persons were homozygous for this mutation. The gene frequency for the 1226G allele was calculated to be .0311, and when these data were pooled with those obtained previously from another 593 Jewish subjects, a gene frequency of .032 with a standard error of .004 was found. Among 2,305 normal subjects, 10 were found to be heterozygous for the 84GG allele, giving a gene frequency of .00217 with a standard error of .00096. No examples of the IVS2(+1) mutation were found among 1,256 samples screened, and no 1448C (L444P) mutations were found among 1,528 samples examined. Examination of the distribution of Gaucher disease gene frequencies in the general population shows that the ratio of 1226G mutations to 84GG mutations is higher than that in the patient population. This is presumed to be due to the fact that homozygotes for the 1226G mutation often have late-onset disease or no significant clinical manifestations at all. To bring the gene frequency in the patient population into conformity with the gene frequency in the general population, nearly two-thirds of persons with a Gaucher disease genotype would be missing from the patient population, presumably because their clinical manifestations were very mild.
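
    The reported frequencies follow directly from the counts, with heterozygotes contributing one allele and homozygotes two out of 2N sampled chromosomes:

      \hat{q}_{1226G} = \frac{87 + 2 \times 4}{2 \times 1528}
                      = \frac{95}{3056} \approx 0.0311,
      \qquad
      \hat{q}_{84GG} = \frac{10}{2 \times 2305} \approx 0.00217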

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fisk, Mark D.; Pasyanos, Michael E.

    Characterizing regional seismic signals continues to be a difficult problem due to their variability. Calibration of these signals is very important to many aspects of monitoring underground nuclear explosions, including detecting seismic signals, discriminating explosions from earthquakes, and reliably estimating magnitude and yield. Amplitude tomography, which simultaneously inverts for source, propagation, and site effects, is a leading method of calibrating these signals. A major issue in amplitude tomography is the data quality of the input amplitude measurements. Pre-event and prephase signal-to-noise ratio (SNR) tests are typically used but can frequently include bad signals and exclude good signals. The deficiencies of SNR criteria, which are demonstrated here, lead to large calibration errors. To ameliorate these issues, we introduce a semi-automated approach to assess the bandwidth of a spectrum where it behaves physically. We determine the maximum frequency (denoted Fmax) where it deviates from this behavior due to inflections at which noise or spurious signals start to bias the spectra away from the expected decay. We compare two amplitude tomography runs using the SNR and the new Fmax criteria and show significant improvements to the stability and accuracy of the tomography output for frequency bands higher than 2 Hz by using our assessments of valid S-wave bandwidth. We compare Q estimates, P/S residuals, and some detailed results to explain the improvements. Lastly, for frequency bands higher than 4 Hz, needed for effective P/S discrimination of explosions from earthquakes, the new bandwidth criteria sufficiently fix the instabilities and errors so that the residuals and calibration terms are useful for application.
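
    One way to automate the bandwidth assessment is sketched below: fit the expected log-linear spectral decay over a band known to be valid, extrapolate, and flag the first frequency at which the observed spectrum inflects away from the fit. The band limits and tolerance are hypothetical.

      import numpy as np

      def find_fmax(freq, spec, fit_band=(1.0, 2.0), tol_db=6.0):
          """Estimate the maximum usable frequency of an amplitude spectrum.

          Fits a log-log line over fit_band (assumed physical), then returns
          the first higher frequency where the spectrum deviates from the
          extrapolated decay by more than tol_db.
          """
          logf, logs = np.log10(freq), 20 * np.log10(spec)
          band = (freq >= fit_band[0]) & (freq <= fit_band[1])
          slope, intercept = np.polyfit(logf[band], logs[band], 1)
          predicted = slope * logf + intercept
          bad = (freq > fit_band[1]) & (np.abs(logs - predicted) > tol_db)
          return freq[bad][0] if bad.any() else freq[-1]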

  19. A study of GPS measurement errors due to noise and multipath interference for CGADS

    NASA Technical Reports Server (NTRS)

    Axelrad, Penina; MacDoran, Peter F.; Comp, Christopher J.

    1996-01-01

    This report describes a study performed by the Colorado Center for Astrodynamics Research (CCAR) on GPS measurement errors in the Codeless GPS Attitude Determination System (CGADS) due to noise and multipath interference. Preliminary simulation models of the CGADS receiver and orbital multipath are described. The standard FFT algorithm for processing the codeless data is described, and two alternative algorithms, an auto-regressive/least squares (AR-LS) method and a combined adaptive notch filter/least squares (ANF-ALS) method, are also presented. Effects of system noise, quantization, baseband frequency selection, and Doppler rates on the accuracy of phase estimates with each of the processing methods are shown. Typical electrical phase errors for the AR-LS method are 0.2 degrees, compared to 0.3 and 0.5 degrees for the FFT and ANF-ALS algorithms, respectively. Doppler rate was found to have the largest effect on the performance.

  20. Moments and Root-Mean-Square Error of the Bayesian MMSE Estimator of Classification Error in the Gaussian Model.

    PubMed

    Zollanvari, Amin; Dougherty, Edward R

    2014-06-01

    The most important aspect of any classifier is its error rate, because this quantifies its predictive capacity. Thus, the accuracy of error estimation is critical. Error estimation is problematic in small-sample classifier design because the error must be estimated using the same data from which the classifier has been designed. Use of prior knowledge, in the form of a prior distribution on an uncertainty class of feature-label distributions to which the true, but unknown, feature-label distribution belongs, can facilitate accurate error estimation (in the mean-square sense) in circumstances where accurate completely model-free error estimation is impossible. This paper provides analytic asymptotically exact finite-sample approximations for various performance metrics of the resulting Bayesian Minimum Mean-Square-Error (MMSE) error estimator in the case of linear discriminant analysis (LDA) in the multivariate Gaussian model. These performance metrics include the first, second, and cross moments of the Bayesian MMSE error estimator with the true error of LDA, and therefore, the Root-Mean-Square (RMS) error of the estimator. We lay down the theoretical groundwork for Kolmogorov double-asymptotics in a Bayesian setting, which enables us to derive asymptotic expressions of the desired performance metrics. From these we produce analytic finite-sample approximations and demonstrate their accuracy via numerical examples. Various examples illustrate the behavior of these approximations and their use in determining the necessary sample size to achieve a desired RMS. The Supplementary Material contains derivations for some equations and added figures.
