Sample records for frequency estimation method

  1. A phase match based frequency estimation method for sinusoidal signals

    NASA Astrophysics Data System (ADS)

    Shen, Yan-Lin; Tu, Ya-Qing; Chen, Lin-Jun; Shen, Ting-Ao

    2015-04-01

    Accurate frequency estimation significantly affects the ranging precision of linear frequency modulated continuous wave (LFMCW) radars. To improve the ranging precision of LFMCW radars, a phase match based frequency estimation method is proposed. The method exploits the linear prediction property, autocorrelation, and cross-correlation of sinusoidal signals to obtain the frequency estimate. An analysis of computational complexity shows that the computational load of the proposed method is smaller than those of the two-stage autocorrelation (TSA) and maximum likelihood methods. Simulations and field experiments validate the proposed method, and the results demonstrate that it achieves better frequency estimation precision than Pisarenko harmonic decomposition, modified covariance, and TSA, which contributes to effectively improving the ranging precision of LFMCW radars.
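    The linear prediction property the abstract leans on can be illustrated in a few lines: for a clean sampled sinusoid, x[k+1] + x[k-1] = 2·cos(ω)·x[k], so correlating both sides with x[k] yields the tone frequency. This is only a minimal sketch of that property with hypothetical values, not the authors' phase match method:

```python
import numpy as np

fs = 1000.0      # sampling rate in Hz (hypothetical)
f_true = 123.4   # tone frequency in Hz (hypothetical)
n = np.arange(512)
x = np.sin(2 * np.pi * f_true * n / fs + 0.7)

# Linear prediction property of a sinusoid: x[k+1] + x[k-1] = 2*cos(w)*x[k].
# Correlating both sides with x[k] isolates cos(w).
num = np.dot(x[1:-1], x[2:] + x[:-2])
den = 2.0 * np.dot(x[1:-1], x[1:-1])
w_hat = np.arccos(num / den)        # digital frequency, rad/sample
f_hat = w_hat * fs / (2 * np.pi)    # back to Hz
```

    Under noise the ratio num/den becomes a biased estimate of cos(ω), which is presumably why the paper combines autocorrelation and cross-correlation terms rather than using this bare form.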

  2. A time-frequency analysis method to obtain stable estimates of magnetotelluric response function based on Hilbert-Huang transform

    NASA Astrophysics Data System (ADS)

    Cai, Jianhua

    2017-05-01

    The time-frequency analysis method represents a signal as a function of time and frequency, and it is considered a powerful tool for handling arbitrary non-stationary time series by using instantaneous frequency and instantaneous amplitude. It also provides a possible alternative for the analysis of the non-stationary magnetotelluric (MT) signal. Based on the Hilbert-Huang transform (HHT), a time-frequency analysis method is proposed to obtain stable estimates of the magnetotelluric response function. In contrast to conventional methods, the response function estimation is performed in the time-frequency domain using instantaneous spectra rather than in the frequency domain, which allows the response parameter content to be imaged as a function of time and frequency. The theory of the method is presented, and the mathematical model and calculation procedure used to estimate the response function from the HHT time-frequency spectrum are discussed. To evaluate the results, response function estimates are compared with estimates from a standard MT data processing method based on the Fourier transform. All results show that apparent resistivities and phases calculated with the HHT time-frequency method are generally more stable and reliable than those determined from simple Fourier analysis. The proposed method overcomes the drawbacks of traditional Fourier methods, and the resulting parameters minimise the estimation bias caused by the non-stationary characteristics of the MT data.

  3. A new method of hybrid frequency hopping signals selection and blind parameter estimation

    NASA Astrophysics Data System (ADS)

    Zeng, Xiaoyu; Jiao, Wencheng; Sun, Huixian

    2018-04-01

    Frequency hopping communication is widely used in military communications worldwide. In the case of single-channel reception, few methods can process multiple frequency hopping signals both effectively and simultaneously. A method of hybrid frequency hopping (FH) signal sorting and blind parameter estimation is proposed. The method uses spectral transformation, spectral entropy calculation, and the basic theory of PRI transformation to sort and estimate the parameters of the components in a hybrid frequency hopping signal. Simulation results show that the method correctly classifies the frequency hopping component signals; at an SNR of 10 dB, the estimation error of the hopping period is about 5% and that of the hopping frequency is less than 1%. However, the performance of the method deteriorates seriously at low SNR.

  4. A Novel Residual Frequency Estimation Method for GNSS Receivers.

    PubMed

    Nguyen, Tu Thi-Thanh; La, Vinh The; Ta, Tung Hai

    2018-01-04

    In Global Navigation Satellite System (GNSS) receivers, residual frequency estimation methods are traditionally applied in the synchronization block to reduce the transient time from acquisition to tracking, or within the frequency estimator to improve its accuracy in open-loop architectures. Current estimation methods have several disadvantages, including sensitivity to noise and a wide search space. This paper proposes a new residual frequency estimation method based on differential processing. Although its complexity is higher than that of traditional methods, it can produce more accurate estimates without increasing the size of the search space.

  5. A fast and accurate frequency estimation algorithm for sinusoidal signal with harmonic components

    NASA Astrophysics Data System (ADS)

    Hu, Jinghua; Pan, Mengchun; Zeng, Zhidun; Hu, Jiafei; Chen, Dixiang; Tian, Wugang; Zhao, Jianqiang; Du, Qingfa

    2016-10-01

    Frequency estimation is a fundamental problem in many applications, such as traditional vibration measurement, power system supervision, and microelectromechanical system sensor control. In this paper, a fast and accurate frequency estimation algorithm is proposed to address the low efficiency of traditional methods. The proposed algorithm consists of coarse and fine frequency estimation steps, and we demonstrate that applying a modified zero-crossing technique to the coarse estimation step (locating the peak of the FFT amplitude spectrum) is more efficient than conventional searching methods. The proposed algorithm therefore requires fewer hardware and software resources and becomes even more efficient as the amount of experimental data increases. Experimental results with a modulated magnetic signal show that the root mean square error of frequency estimation is below 0.032 Hz with the proposed algorithm, which has lower computational complexity and better overall performance than conventional frequency estimation methods.
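    The coarse/fine split described above can be sketched generically. The sketch below replaces the paper's modified zero-crossing coarse step with a plain FFT peak search, and uses a dense local search of the DTFT magnitude as the fine step; the complex tone and all parameters are hypothetical:

```python
import numpy as np

fs = 8000.0
f_true = 440.25
N = 1024
n = np.arange(N)
x = np.exp(2j * np.pi * f_true * n / fs)   # noise-free complex tone

# Coarse step: the FFT magnitude peak gives the nearest bin.
X = np.fft.fft(x)
k = np.argmax(np.abs(X))
f_coarse = k * fs / N

# Fine step: dense search of the DTFT magnitude, but only within
# one bin width (fs/N) on either side of the coarse estimate.
cand = f_coarse + np.linspace(-fs / N, fs / N, 401)
scores = np.abs(np.exp(-2j * np.pi * np.outer(cand, n) / fs) @ x)
f_hat = cand[np.argmax(scores)]
```

    The coarse step narrows the problem to a single FFT bin, so the fine step only scans a small interval, which is what makes two-step schemes cheaper than a global dense search.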

  6. Power system frequency estimation based on an orthogonal decomposition method

    NASA Astrophysics Data System (ADS)

    Lee, Chih-Hung; Tsai, Men-Shen

    2018-06-01

    In recent years, several frequency estimation techniques have been proposed by which to estimate the frequency variations in power systems. In order to properly identify power quality issues under asynchronously-sampled signals that are contaminated with noise, flicker, and harmonic and inter-harmonic components, a good frequency estimator that is able to estimate the frequency as well as the rate of frequency changes precisely is needed. However, accurately estimating the fundamental frequency becomes a very difficult task without a priori information about the sampling frequency. In this paper, a better frequency evaluation scheme for power systems is proposed. This method employs a reconstruction technique in combination with orthogonal filters, which may maintain the required frequency characteristics of the orthogonal filters and improve the overall efficiency of power system monitoring through two-stage sliding discrete Fourier transforms. The results showed that this method can accurately estimate the power system frequency under different conditions, including asynchronously sampled signals contaminated by noise, flicker, and harmonic and inter-harmonic components. The proposed approach also provides high computational efficiency.

  7. A Simple Joint Estimation Method of Residual Frequency Offset and Sampling Frequency Offset for DVB Systems

    NASA Astrophysics Data System (ADS)

    Kwon, Ki-Won; Cho, Yongsoo

    This letter presents a simple joint estimation method for the residual frequency offset (RFO) and the sampling frequency offset (SFO) in OFDM-based digital video broadcasting (DVB) systems. The proposed method selects a continual pilot (CP) subset from an asymmetrically and non-uniformly distributed CP set to obtain an unbiased estimator. Simulation results show that the proposed method, using a properly selected CP subset, is unbiased and performs robustly.

  8. Joint Estimation of Time-Frequency Signature and DOA Based on STFD for Multicomponent Chirp Signals

    PubMed Central

    Zhao, Ziyue; Liu, Congfeng

    2014-01-01

    In the study of the joint estimation of time-frequency signature and direction of arrival (DOA) for multicomponent chirp signals, an estimation method based on spatial time-frequency distributions (STFDs) is proposed in this paper. Firstly, an array signal model for multicomponent chirp signals is presented, and array processing is then applied in time-frequency analysis to mitigate cross-terms. Based on the results of the array processing, the Hough transform is performed to obtain the estimate of the time-frequency signature. Subsequently, DOA estimation is achieved with a subspace method based on the STFD matrix. Simulation results demonstrate the validity of the proposed method. PMID:27382610

  10. New instantaneous frequency estimation method based on the use of image processing techniques

    NASA Astrophysics Data System (ADS)

    Borda, Monica; Nafornita, Ioan; Isar, Alexandru

    2003-05-01

    The aim of this paper is to present a new method for estimating the instantaneous frequency of a frequency modulated signal corrupted by additive noise. The method fuses two theories: time-frequency representations and mathematical morphology. Any time-frequency representation of a useful signal is concentrated around its instantaneous frequency law and diffuses the noise that perturbs the useful signal across the time-frequency plane. In this paper a new time-frequency representation, useful for the estimation of the instantaneous frequency, is proposed. It is the product of two other time-frequency representations: the Wigner-Ville representation and a new one obtained by filtering the Gabor representation of the signal with a hard-thresholding filter. From the image of this new time-frequency representation, the instantaneous frequency of the useful signal can be extracted with the aid of mathematical morphology operators: binarization, dilation, and skeletonization. Simulations show that the proposed method outperforms other estimation methods, such as those based on adaptive notch filters.

  11. A Pitch Extraction Method with High Frequency Resolution for Singing Evaluation

    NASA Astrophysics Data System (ADS)

    Takeuchi, Hideyo; Hoguro, Masahiro; Umezaki, Taizo

    This paper proposes a pitch estimation method suitable for singing evaluation in karaoke machines. Professional singers and musicians have sharp hearing for music and singing voice: they can recognize whether a singer's voice pitch is a little off key or in tune. A pitch estimation method with high frequency resolution is therefore necessary to evaluate singing. This paper proposes such a method, which exploits the harmonic structure of the autocorrelation function. The proposed method can estimate a fundamental frequency in the range 50-1700 Hz with a resolution finer than 3.6 cents at low computational cost.
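    A bare-bones version of autocorrelation pitch estimation over the stated 50-1700 Hz range can be sketched as follows. The voiced frame is synthetic, and this integer-lag version lacks the harmonic refinement that gives the paper its 3.6-cent resolution:

```python
import numpy as np

fs = 16000.0
f0 = 220.0                            # true pitch, Hz (hypothetical)
t = np.arange(int(0.05 * fs)) / fs    # one 50 ms analysis frame
# toy "voiced" frame: fundamental plus two weaker harmonics
x = (np.sin(2 * np.pi * f0 * t)
     + 0.5 * np.sin(2 * np.pi * 2 * f0 * t)
     + 0.25 * np.sin(2 * np.pi * 3 * f0 * t))

# autocorrelation; the pitch period is the lag of the strongest
# peak inside the plausible pitch range
r = np.correlate(x, x, mode='full')[len(x) - 1:]
lo, hi = int(fs / 1700), int(fs / 50)   # search lags for 50-1700 Hz
lag = lo + np.argmax(r[lo:hi])
f0_hat = fs / lag
```

    Because the lag is an integer number of samples, the raw resolution here is coarse at high pitches; interpolating around the peak (or using the harmonic structure, as the paper does) is what recovers sub-sample, cent-level resolution.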

  12. Robust time and frequency domain estimation methods in adaptive control

    NASA Technical Reports Server (NTRS)

    Lamaire, Richard Orville

    1987-01-01

    A robust identification method was developed for use in an adaptive control system. The type of estimator is called the robust estimator, since it is robust to the effects of both unmodeled dynamics and an unmeasurable disturbance. The development of the robust estimator was motivated by a need to provide guarantees in the identification part of an adaptive controller. To enable the design of a robust control system, a nominal model as well as a frequency-domain bounding function on the modeling uncertainty associated with this nominal model must be provided. Two estimation methods are presented for finding parameter estimates, and, hence, a nominal model. One of these methods is based on the well developed field of time-domain parameter estimation. In a second method of finding parameter estimates, a type of weighted least-squares fitting to a frequency-domain estimated model is used. The frequency-domain estimator is shown to perform better, in general, than the time-domain parameter estimator. In addition, a methodology for finding a frequency-domain bounding function on the disturbance is used to compute a frequency-domain bounding function on the additive modeling error due to the effects of the disturbance and the use of finite-length data. The performance of the robust estimator in both open-loop and closed-loop situations is examined through the use of simulations.

  13. Comparative Analysis of Various Single-tone Frequency Estimation Techniques in High-order Instantaneous Moments Based Phase Estimation Method

    NASA Astrophysics Data System (ADS)

    Rajshekhar, G.; Gorthi, Sai Siva; Rastogi, Pramod

    2010-04-01

    For phase estimation in digital holographic interferometry, a high-order instantaneous moments (HIM) based method was recently developed which relies on piecewise polynomial approximation of phase and subsequent evaluation of the polynomial coefficients using the HIM operator. A crucial step in the method is mapping the polynomial coefficient estimation to single-tone frequency determination for which various techniques exist. The paper presents a comparative analysis of the performance of the HIM operator based method in using different single-tone frequency estimation techniques for phase estimation. The analysis is supplemented by simulation results.

  14. Radar modulation classification using time-frequency representation and nonlinear regression

    NASA Astrophysics Data System (ADS)

    De Luigi, Christophe; Arques, Pierre-Yves; Lopez, Jean-Marc; Moreau, Eric

    1999-09-01

    In a naval electronic environment, pulses emitted by radars are collected by ESM receivers. For most of them, the intrapulse signal is modulated by a particular law. To aid the classical identification process, classification and estimation of this modulation law is applied to the intrapulse signal measurements. To estimate the time-varying frequency of a signal corrupted by additive noise with good accuracy, the following method has been chosen: the Wigner distribution is computed, and the instantaneous frequency is estimated from the peak location of the distribution. The bias and variance of the estimator are evaluated by computer simulation. An estimated sequence of frequencies is assumed to contain both good and false estimates, with the errors assumed Gaussian. A robust nonlinear regression method based on the Levenberg-Marquardt algorithm is then applied to these estimated frequencies using a maximum likelihood estimator. The performance of the method is tested using various modulation laws and different signal-to-noise ratios.

  15. Real-Time Frequency Response Estimation Using Joined-Wing SensorCraft Aeroelastic Wind-Tunnel Data

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A; Heeg, Jennifer; Morelli, Eugene A

    2012-01-01

    A new method is presented for estimating frequency responses and their uncertainties from wind-tunnel data in real time. The method uses orthogonal phase-optimized multisine excitation inputs and a recursive Fourier transform with a least-squares estimator. The method was first demonstrated with an F-16 nonlinear flight simulation, and results showed that accurate short-period frequency responses were obtained within 10 seconds. The method was then applied to wind-tunnel data from a previous aeroelastic test of the Joined-Wing SensorCraft. Frequency responses describing bending strains from simultaneous control surface excitations were estimated in a time-efficient manner.
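    The underlying idea, estimating a frequency response as the ratio of output to input Fourier coefficients at the excitation frequencies, can be sketched offline. The paper's estimator is recursive and real-time; the first-order system and all numbers below are hypothetical stand-ins:

```python
import numpy as np

# simulate a first-order system y' = -a*y + u driven by a multisine input
fs = 200.0
a = 5.0
N = 4000
t = np.arange(N) / fs
freqs_in = np.array([1.0, 3.0, 7.0])    # excitation lines, Hz
u = np.sum([np.sin(2 * np.pi * f * t + k)
            for k, f in enumerate(freqs_in)], axis=0)

# simple forward-Euler simulation of the system
y = np.zeros(N)
for i in range(N - 1):
    y[i + 1] = y[i] + (-a * y[i] + u[i]) / fs

# frequency response estimate at the excitation lines: H = Y(f)/U(f)
U, Y = np.fft.rfft(u), np.fft.rfft(y)
fgrid = np.fft.rfftfreq(N, 1.0 / fs)
H_hat = np.array([Y[np.argmin(np.abs(fgrid - f))]
                  / U[np.argmin(np.abs(fgrid - f))] for f in freqs_in])

# analytic response of the continuous system, for comparison
H_true = 1.0 / (a + 2j * np.pi * freqs_in)
```

    Restricting the ratio to the known excitation frequencies is what makes multisine inputs attractive: every line carries deliberate input power, so the division is well conditioned.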

  16. Estimation of frequency offset in mobile satellite modems

    NASA Technical Reports Server (NTRS)

    Cowley, W. G.; Rice, M.; Mclean, A. N.

    1993-01-01

    In mobilesat applications, the frequency offset on the received signal must be estimated and removed prior to further modem processing. A straightforward method of estimating the carrier frequency offset is to raise the received MPSK signal to the M-th power and then estimate the location of the peak spectral component. An analysis of the lower signal-to-noise threshold of this method is carried out for BPSK signals. Predicted thresholds are compared to simulation results. It is shown how the method can be extended to pi/M MPSK signals. A real-time implementation of frequency offset estimation for the Australian mobile satellite system is described.
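    The M-th power idea is easy to sketch for BPSK (M = 2): squaring removes the ±1 modulation and leaves a pure tone at twice the carrier offset, whose FFT peak location is then halved. This noise-free sketch uses hypothetical parameters and omits the threshold effects the abstract analyzes:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 10000.0
f_off = 123.0                             # carrier frequency offset, Hz
N = 4096
bits = rng.integers(0, 2, N) * 2 - 1      # random BPSK symbols, +/-1
n = np.arange(N)
r = bits * np.exp(2j * np.pi * f_off * n / fs)

# Square the signal (M = 2 for BPSK): (+/-1)^2 = 1, so the modulation
# is wiped out, leaving a complex tone at 2*f_off.
y = r ** 2
Y = np.fft.fft(y)
freqs = np.fft.fftfreq(N, 1.0 / fs)
f_hat = freqs[np.argmax(np.abs(Y))] / 2.0
```

    At low SNR the noise-times-noise terms raised to the M-th power eventually bury the spectral line, which is the threshold behavior the paper quantifies.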

  17. Time-frequency domain SNR estimation and its application in seismic data processing

    NASA Astrophysics Data System (ADS)

    Zhao, Yan; Liu, Yang; Li, Xuxuan; Jiang, Nansen

    2014-08-01

    Based on an approach for estimating the frequency-domain signal-to-noise ratio (FSNR), we propose a method to evaluate the time-frequency domain signal-to-noise ratio (TFSNR). The method adopts the short-time Fourier transform (STFT) to estimate the instantaneous power spectra of signal and noise, and uses their ratio to compute the TFSNR. Unlike the FSNR, which describes the variation of SNR with frequency only, the TFSNR depicts the variation of SNR with both time and frequency, and thus better handles non-stationary seismic data. Using the TFSNR, we develop methods to improve inverse Q filtering and high-frequency noise attenuation in seismic data processing. Inverse Q filtering that accounts for the TFSNR better avoids amplifying noise. The high-frequency noise attenuation method, unlike other de-noising methods, distinguishes and suppresses noise using an explicit criterion. Examples with synthetic and real seismic data illustrate the correctness and effectiveness of the proposed methods.
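    A toy version of such a time-frequency SNR map can be built from framed FFTs. Here the noise floor is taken as the median power over time in each frequency bin, a common stand-in when the noise spectrum is not estimated separately; the chirp, noise level, and window sizes are all hypothetical, not the authors' procedure:

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 1000.0
N = 4000
t = np.arange(N) / fs
# non-stationary test signal: a chirp buried in white noise
sig = np.cos(2 * np.pi * (50 + 25 * t) * t)   # 50 -> 250 Hz sweep
x = sig + 0.3 * rng.standard_normal(N)

win, hop = 256, 128
w = np.hanning(win)
frames = np.stack([x[i:i + win] * w for i in range(0, N - win, hop)])
S = np.abs(np.fft.rfft(frames, axis=1)) ** 2   # power per (time, frequency) cell

# crude noise floor per frequency bin: median power over time
# (valid here because the chirp occupies each bin only briefly)
noise_floor = np.median(S, axis=0)
tfsnr = S / noise_floor
```

    Cells the chirp passes through get large TFSNR values while noise-only cells hover near 1, which is exactly the per-cell weighting a TFSNR-aware filter can exploit.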

  18. Investigation to realize a computationally efficient implementation of the high-order instantaneous-moments-based fringe analysis method

    NASA Astrophysics Data System (ADS)

    Gorthi, Sai Siva; Rajshekhar, Gannavarpu; Rastogi, Pramod

    2010-06-01

    Recently, a high-order instantaneous moments (HIM)-operator-based method was proposed for accurate phase estimation in digital holographic interferometry. The method relies on piece-wise polynomial approximation of phase and subsequent evaluation of the polynomial coefficients from the HIM operator using single-tone frequency estimation. The work presents a comparative analysis of the performance of different single-tone frequency estimation techniques, like Fourier transform followed by optimization, estimation of signal parameters by rotational invariance technique (ESPRIT), multiple signal classification (MUSIC), and iterative frequency estimation by interpolation on Fourier coefficients (IFEIF) in HIM-operator-based methods for phase estimation. Simulation and experimental results demonstrate the potential of the IFEIF technique with respect to computational efficiency and estimation accuracy.

  19. Developing of method for primary frequency control droop and deadband actual values estimation

    NASA Astrophysics Data System (ADS)

    Nikiforov, A. A.; Chaplin, A. G.

    2017-11-01

    Operation of thermal power plant generation equipment participating in standardized primary frequency control (SPFC) must meet specific requirements. These requirements are formalized as nine algorithmic criteria, which are used for automatic monitoring of power plant participation in SPFC. One of these criteria, the estimation of the actual droop and deadband values of primary frequency control, is considered in detail in this report. Experience shows that the existing estimation method sometimes does not work properly. The author offers an alternative method that estimates the actual droop and deadband values more accurately; it has been implemented as a software application.

  20. Adaptive multitaper time-frequency spectrum estimation

    NASA Astrophysics Data System (ADS)

    Pitton, James W.

    1999-11-01

    In earlier work, Thomson's adaptive multitaper spectrum estimation method was extended to the nonstationary case. This paper reviews the time-frequency multitaper method and the adaptive procedure, and explores some properties of the eigenvalues and eigenvectors. The variance of the adaptive estimator is used to construct an adaptive smoother, which is used to form a high resolution estimate. An F-test for detecting and removing sinusoidal components in the time-frequency spectrum is also given.
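    The multitaper idea, averaging several orthogonal-taper spectra to trade a little resolution for much lower variance, can be sketched with sine tapers, a simple stand-in for the DPSS tapers and adaptive weights that Thomson's method actually uses; the signal and parameters are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 1000.0
N = 1024
t = np.arange(N) / fs
x = np.sin(2 * np.pi * 200.0 * t) + 0.5 * rng.standard_normal(N)

# K orthonormal sine tapers: v_k[n] = sqrt(2/(N+1)) * sin(pi*k*(n+1)/(N+1))
K = 5
n = np.arange(N)
tapers = np.sqrt(2.0 / (N + 1)) * np.sin(
    np.pi * np.outer(np.arange(1, K + 1), n + 1) / (N + 1))

# one eigenspectrum per taper, then a plain (non-adaptive) average
eig = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
S = eig.mean(axis=0)
freqs = np.fft.rfftfreq(N, 1.0 / fs)
f_peak = freqs[np.argmax(S)]
```

    Thomson's adaptive procedure replaces the plain mean with data-dependent weights per frequency, and the paper extends that weighting to the time-frequency case.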

  1. A novel joint timing/frequency synchronization scheme based on Radon-Wigner transform of LFM signals in CO-OFDM systems

    NASA Astrophysics Data System (ADS)

    Liu, Jianfei; Wei, Ying; Zeng, Xiangye; Lu, Jia; Zhang, Shuangxi; Wang, Mengjun

    2018-03-01

    A joint timing and frequency synchronization method is proposed for coherent optical orthogonal frequency-division multiplexing (CO-OFDM) systems. The timing offset (TO), the integer frequency offset (FO), and the fractional FO can all be estimated from a single training symbol, which consists of two linear frequency modulation (LFM) signals with opposite chirp rates. By detecting the peaks of the LFM signals after the Radon-Wigner transform (RWT), the TO and the integer FO are estimated simultaneously; the fractional FO is then obtained from the self-correlation characteristic of the same training symbol. Simulation results show that the proposed method gives a more accurate TO estimation than existing methods, especially under poor OSNR conditions. For FO estimation, both the fractional and the integer FO can be estimated from the proposed training symbol with no extra overhead, achieving more accurate estimates over a large FO estimation range of [-5 GHz, 5 GHz].

  2. Impedance-estimation methods, modeling methods, articles of manufacture, impedance-modeling devices, and estimated-impedance monitoring systems

    DOEpatents

    Richardson, John G [Idaho Falls, ID]

    2009-11-17

    An impedance estimation method includes measuring three or more impedances of an object having a periphery using three or more probes coupled to the periphery. The three or more impedance measurements are made at a first frequency. Three or more additional impedance measurements of the object are made using the three or more probes. The three or more additional impedance measurements are made at a second frequency different from the first frequency. An impedance of the object at a point within the periphery is estimated based on the impedance measurements and the additional impedance measurements.

  3. Evaluation of Piloted Inputs for Onboard Frequency Response Estimation

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.; Martos, Borja

    2013-01-01

    Frequency response estimation results are presented using piloted inputs and a real-time estimation method recently developed for multisine inputs. A nonlinear simulation of the F-16 and a Piper Saratoga research aircraft were subjected to different piloted test inputs while the short-period stabilator/elevator to pitch rate frequency response was estimated. Results show that the method can produce accurate estimates using wide-band piloted inputs instead of multisines. A new metric is introduced for evaluating which data points to include in the analysis, and recommendations are provided for applying this method with piloted inputs.

  4. Fatigue level estimation of monetary bills based on frequency band acoustic signals with feature selection by supervised SOM

    NASA Astrophysics Data System (ADS)

    Teranishi, Masaru; Omatu, Sigeru; Kosaka, Toshihisa

    Fatigued monetary bills adversely affect the daily operation of automated teller machines (ATMs). In order to make the classification of fatigued bills more efficient, the development of an automatic fatigued monetary bill classification method is desirable. We propose a new method by which to estimate the fatigue level of monetary bills from the feature-selected frequency band acoustic energy pattern of banking machines. By using a supervised self-organizing map (SOM), we effectively estimate the fatigue level using only the feature-selected frequency band acoustic energy pattern. Furthermore, the feature-selected frequency band acoustic energy pattern improves the estimation accuracy of the fatigue level of monetary bills by adding frequency domain information to the acoustic energy pattern. The experimental results with real monetary bill samples reveal the effectiveness of the proposed method.

  5. Maximum likelihood method for estimating airplane stability and control parameters from flight data in frequency domain

    NASA Technical Reports Server (NTRS)

    Klein, V.

    1980-01-01

    A frequency domain maximum likelihood method is developed for the estimation of airplane stability and control parameters from measured data. The model of an airplane is represented by a discrete-type steady state Kalman filter with time variables replaced by their Fourier series expansions. The likelihood function of innovations is formulated, and by its maximization with respect to unknown parameters the estimation algorithm is obtained. This algorithm is then simplified to the output error estimation method with the data in the form of transformed time histories, frequency response curves, or spectral and cross-spectral densities. The development is followed by a discussion on the equivalence of the cost function in the time and frequency domains, and on advantages and disadvantages of the frequency domain approach. The algorithm developed is applied in four examples to the estimation of longitudinal parameters of a general aviation airplane using computer generated and measured data in turbulent and still air. The cost functions in the time and frequency domains are shown to be equivalent; therefore, both approaches are complementary and not contradictory. Despite some computational advantages of parameter estimation in the frequency domain, this approach is limited to linear equations of motion with constant coefficients.

  6. Modified fast frequency acquisition via adaptive least squares algorithm

    NASA Technical Reports Server (NTRS)

    Kumar, Rajendra (Inventor)

    1992-01-01

    A method and the associated apparatus for estimating the amplitude, frequency, and phase of a signal of interest are presented. The method comprises the following steps: (1) inputting the signal of interest; (2) generating a reference signal with adjustable amplitude, frequency and phase at an output thereof; (3) mixing the signal of interest with the reference signal and a signal 90 deg out of phase with the reference signal to provide a pair of quadrature sample signals comprising respectively a difference between the signal of interest and the reference signal and a difference between the signal of interest and the signal 90 deg out of phase with the reference signal; (4) using the pair of quadrature sample signals to compute estimates of the amplitude, frequency, and phase of an error signal comprising the difference between the signal of interest and the reference signal employing a least squares estimation; (5) adjusting the amplitude, frequency, and phase of the reference signal from the numerically controlled oscillator in a manner which drives the error signal towards zero; and (6) outputting the estimates of the amplitude, frequency, and phase of the error signal in combination with the reference signal to produce a best estimate of the amplitude, frequency, and phase of the signal of interest. The preferred method includes the step of providing the error signal as a real time confidence measure as to the accuracy of the estimates wherein the closer the error signal is to zero, the higher the probability that the estimates are accurate. A matrix in the estimation algorithm provides an estimate of the variance of the estimation error.

  7. A dynamical systems approach for estimating phase interactions between rhythms of different frequencies from experimental data.

    PubMed

    Onojima, Takayuki; Goto, Takahiro; Mizuhara, Hiroaki; Aoyagi, Toshio

    2018-01-01

    Synchronization of neural oscillations as a mechanism of brain function is attracting increasing attention. Neural oscillation is a rhythmic neural activity that can be easily observed by noninvasive electroencephalography (EEG). Neural oscillations show the same frequency and cross-frequency synchronization for various cognitive and perceptual functions. However, it is unclear how this neural synchronization is achieved by a dynamical system. If neural oscillations are weakly coupled oscillators, the dynamics of neural synchronization can be described theoretically using a phase oscillator model. We propose an estimation method to identify the phase oscillator model from real data of cross-frequency synchronized activities. The proposed method can estimate the coupling function governing the properties of synchronization. Furthermore, we examine the reliability of the proposed method using time-series data obtained from numerical simulation and an electronic circuit experiment, and show that our method can estimate the coupling function correctly. Finally, we estimate the coupling function between EEG oscillation and the speech sound envelope, and discuss the validity of these results.

  8. A straightforward frequency-estimation technique for GPS carrier-phase time transfer.

    PubMed

    Hackman, Christine; Levine, Judah; Parker, Thomas E; Piester, Dirk; Becker, Jürgen

    2006-09-01

    Although Global Positioning System (GPS) carrier-phase time transfer (GPSCPTT) offers frequency stability approaching 10^-15 at averaging times of 1 d, a discontinuity occurs in the time-transfer estimates between the end of one processing batch (1-3 d in length) and the beginning of the next. The average frequency over a multiday analysis period often has been computed by first estimating and removing these discontinuities, i.e., through concatenation. We present a new frequency-estimation technique in which frequencies are computed from the individual batches then averaged to obtain the mean frequency for a multiday period. This allows the frequency to be computed without the uncertainty associated with the removal of the discontinuities and requires fewer computational resources. The new technique was tested by comparing the fractional frequency-difference values it yields to those obtained using a GPSCPTT concatenation method and those obtained using two-way satellite time-and-frequency transfer (TWSTFT). The clocks studied were located in Braunschweig, Germany, and in Boulder, CO. The frequencies obtained from the GPSCPTT measurements using either method agreed with those obtained from TWSTFT at several parts in 10^16. The frequency values obtained from the GPSCPTT data by use of the new method agreed with those obtained using the concatenation technique at 1-4 x 10^-16.
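    The batch-averaging idea can be sketched on synthetic clock data: each batch's phase record gets its own least-squares frequency (slope), and the per-batch frequencies are averaged, so the arbitrary phase jumps at batch boundaries never have to be estimated. All numbers below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)
f_frac = 3e-16        # true fractional frequency offset (dimensionless)
day = 86400.0
tau = 300.0           # one phase estimate every 300 s
t = np.arange(0, day, tau)

batches = []
for b in range(5):    # five 1-day batches, independent boundary jumps
    offset = rng.normal(0.0, 1e-10)   # arbitrary phase discontinuity, s
    phase = offset + f_frac * t + rng.normal(0.0, 5e-12, t.size)
    batches.append(phase)

# per-batch frequency via a least-squares slope, then a plain average;
# the per-batch offsets drop out of each slope fit
freqs = [np.polyfit(t, p, 1)[0] for p in batches]
f_hat = np.mean(freqs)
```

    Because each slope fit is insensitive to its batch's constant offset, the boundary discontinuities contribute no uncertainty, which is the advantage the abstract claims over concatenation.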

  9. Joint channel/frequency offset estimation and correction for coherent optical FBMC/OQAM system

    NASA Astrophysics Data System (ADS)

Wang, Daobin; Yuan, Lihua; Lei, Jingli; Wu, Gang; Li, Suoping; Ding, Runqi; Wang, Dongye

    2017-12-01

In this paper, we focus on preamble-based joint estimation of the channel and the laser-frequency offset (LFO) in coherent optical filter bank multicarrier systems with offset quadrature amplitude modulation (CO-FBMC/OQAM). To reduce the impact of noise on estimation accuracy, we propose an estimation method based on inter-frame averaging, which averages the cross-correlation function of real-valued pilots over multiple FBMC frames. The laser-frequency offset is estimated from the phase of this average. After correcting the LFO, the final channel response is acquired by averaging the channel estimation results over multiple frames. The principle of the proposed method is analyzed theoretically, and the preamble structure is designed and optimized to suppress the impact of inherent imaginary interference (IMI). The effectiveness of our method is demonstrated numerically for different fibers and LFO values. The results show that the proposed method can improve transmission performance significantly.
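The core of a frequency offset estimate from "the phase of an averaged correlation" can be shown with a toy baseband model: each lag-1 product of the received samples rotates by 2π·Δf/fs, so averaging those products over frames and taking the angle recovers Δf. This is a generic sketch, not the paper's FBMC/OQAM pilot design; the sample rate, frame count, and noise level are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 100e6          # sample rate, Hz (assumed)
df_true = 2.0e6     # frequency offset to recover, Hz
n = np.arange(256)

# Accumulate the lag-1 cross-correlation over several frames; each frame
# has an arbitrary phase, which the lag-1 product cancels.
acc = 0.0 + 0.0j
for frame in range(8):
    phase0 = rng.uniform(0, 2 * np.pi)
    r = np.exp(1j * (2 * np.pi * df_true * n / fs + phase0))
    r += 0.05 * (rng.standard_normal(n.size) + 1j * rng.standard_normal(n.size))
    acc += np.sum(r[1:] * np.conj(r[:-1]))   # lag-1 correlation of this frame

df_est = np.angle(acc) * fs / (2 * np.pi)    # phase of the average -> offset
print(df_est)
```

Averaging across frames before taking the angle is what suppresses the noise, mirroring the inter-frame averaging argument in the abstract.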

  10. An Efficient Estimator for Moving Target Localization Using Multi-Station Dual-Frequency Radars.

    PubMed

    Huang, Jiyan; Zhang, Ying; Luo, Shan

    2017-12-15

Localization of a moving target in a dual-frequency radar system has gained considerable attention. A noncoherent localization approach based on a least squares (LS) estimator has been addressed in the literature. In this paper, a novel localization method based on a two-step weighted least squares estimator is proposed to increase positioning accuracy over the LS method for a multi-station dual-frequency radar system. The effects of the signal-to-noise ratio and the number of samples on the performance of range estimation are also analyzed. Furthermore, both the theoretical variance and the Cramér-Rao lower bound (CRLB) are derived. Simulation results verify the proposed method.
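The advantage of weighting can be illustrated on a generic linear measurement model standing in for the linearized range equations of a multi-station system: ordinary LS ignores unequal measurement quality, while WLS weights each equation by its inverse noise variance. This is a schematic sketch, not the paper's two-step estimator; the geometry matrix and noise profile are made up.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy linear model y = H x + e with heteroscedastic noise.
x_true = np.array([3.0, -1.5])
H = rng.standard_normal((40, 2))
sigma = np.linspace(0.1, 2.0, 40)            # unequal measurement quality
y = H @ x_true + sigma * rng.standard_normal(40)

# Step 1: ordinary least squares (ignores the noise structure).
x_ls, *_ = np.linalg.lstsq(H, y, rcond=None)

# Step 2: weighted least squares with inverse-variance weights.
W = np.diag(1.0 / sigma**2)
x_wls = np.linalg.solve(H.T @ W @ H, H.T @ W @ y)

print(np.linalg.norm(x_ls - x_true), np.linalg.norm(x_wls - x_true))
```

With strongly unequal noise, the weighted solve typically lands much closer to the true parameters, which is the intuition behind preferring a weighted estimator over plain LS.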

  12. Precise Estimation of Allele Frequencies of Single-Nucleotide Polymorphisms by a Quantitative SSCP Analysis of Pooled DNA

    PubMed Central

    Sasaki, Tomonari; Tahira, Tomoko; Suzuki, Akari; Higasa, Koichiro; Kukita, Yoji; Baba, Shingo; Hayashi, Kenshi

    2001-01-01

We show that single-nucleotide polymorphisms (SNPs) of moderate to high heterozygosity (minor allele frequencies >10%) can be efficiently detected, and their allele frequencies accurately estimated, by pooling the DNA samples and applying a capillary-based SSCP analysis. In this method, alleles are separated into peaks, and their frequencies can be reliably and accurately quantified from their peak heights (SD <1.8%). We found that as many as 40% of publicly available SNPs that were analyzed by this method have widely differing allele frequency distributions among groups of different ethnicity (parents of Centre d'Etude du Polymorphisme Humain families vs. Japanese individuals). These results demonstrate the effectiveness of the present pooling method in the reevaluation of candidate SNPs that have been collected by examination of limited numbers of individuals. The method should also serve as a robust quantitative technique for studies in which a precise estimate of SNP allele frequencies is essential, for example, in linkage disequilibrium analysis. PMID:11083945
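The quantification step reduces to normalizing allele peak heights. A minimal sketch, assuming peak heights are directly proportional to allele dosage (the paper additionally calibrates against heterozygote peak ratios; the heights below are invented):

```python
def allele_frequencies(peak_heights):
    """Estimate pooled allele frequencies from SSCP peak heights.

    Each allele's frequency is its peak height divided by the total
    height of all allele peaks (a simplification of the calibrated
    quantification described in the abstract).
    """
    total = sum(peak_heights.values())
    return {allele: h / total for allele, h in peak_heights.items()}

freqs = allele_frequencies({"A": 820.0, "G": 180.0})
print(freqs)  # {'A': 0.82, 'G': 0.18}
```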

  13. Rapid estimation of frequency response functions by close-range photogrammetry

    NASA Technical Reports Server (NTRS)

    Tripp, J. S.

    1985-01-01

The accuracy of a rapid method for estimating the frequency response function from stereoscopic dynamic data is evaluated. It is shown that reversing the order of the coordinate-transformation and Fourier-transformation operations, which provides a significant increase in computational speed, introduces error. A portion of the error, proportional to the perturbation components normal to the camera focal planes, cannot be eliminated. The remaining error may be eliminated by proper scaling of the frequency data prior to coordinate transformation. Methods are developed for least squares estimation of the full 3×3 frequency response matrix for a three-dimensional structure.

  14. Statistical plant set estimation using Schroeder-phased multisinusoidal input design

    NASA Technical Reports Server (NTRS)

    Bayard, D. S.

    1992-01-01

A frequency domain method is developed for plant set estimation. The estimation of a plant 'set' rather than a point estimate is required to support many methods of modern robust control design. The approach here is based on using a Schroeder-phased multisinusoid input design, which has the special property of placing input energy only at the discrete frequency points used in the computation. A detailed analysis of the statistical properties of the frequency domain estimator is given, leading to exact expressions for the probability distribution of the estimation error, along with several other important properties. It is shown that, for any nominal parametric plant estimate, one can use these results to construct an overbound on the additive uncertainty to any prescribed statistical confidence. The 'soft' bound thus obtained can be used to replace the 'hard' bounds presently used in many robust control analysis and synthesis methods.
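A Schroeder-phased multisine is easy to construct: equal-amplitude tones at harmonics k with phases φ_k = -πk(k-1)/N. The phase schedule keeps the crest factor (peak-to-RMS ratio) low compared with a zero-phase sum, so the same RMS excitation fits within actuator limits. A small sketch (tone count and length are arbitrary choices, not from the paper):

```python
import numpy as np

def schroeder_multisine(n_tones, n_samples):
    """Equal-amplitude multisine with Schroeder phases; energy sits only
    at the chosen discrete harmonics, and the crest factor stays low."""
    t = np.arange(n_samples) / n_samples
    x = np.zeros(n_samples)
    for k in range(1, n_tones + 1):
        phase = -np.pi * k * (k - 1) / n_tones   # Schroeder's phase formula
        x += np.cos(2 * np.pi * k * t + phase)
    return x

def crest_factor(x):
    return np.max(np.abs(x)) / np.sqrt(np.mean(x**2))

x_schroeder = schroeder_multisine(31, 4096)
x_zero = sum(np.cos(2 * np.pi * k * np.arange(4096) / 4096) for k in range(1, 32))
print(crest_factor(x_schroeder), crest_factor(x_zero))
```

The zero-phase sum piles all tones up at t = 0 (crest factor near 8 for 31 tones), while the Schroeder schedule spreads the energy in time.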

  15. Transmission overhaul and replacement predictions using Weibull and renewal theory

    NASA Technical Reports Server (NTRS)

    Savage, M.; Lewicki, D. G.

    1989-01-01

    A method to estimate the frequency of transmission overhauls is presented. This method is based on the two-parameter Weibull statistical distribution for component life. A second method is presented to estimate the number of replacement components needed to support the transmission overhaul pattern. The second method is based on renewal theory. Confidence statistics are applied with both methods to improve the statistical estimate of sample behavior. A transmission example is also presented to illustrate the use of the methods. Transmission overhaul frequency and component replacement calculations are included in the example.
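The Weibull part of such a calculation can be sketched directly: given shape β and characteristic life η, the survival function fixes an overhaul interval at a target reliability, and the Weibull mean life gives a renewal-theory approximation to the long-run replacement rate. The β, η, and reliability target below are illustrative assumptions, not values from the paper.

```python
import math

def weibull_reliability(t, beta, eta):
    """Probability a component survives to time t (two-parameter Weibull)."""
    return math.exp(-((t / eta) ** beta))

def overhaul_interval(beta, eta, reliability=0.90):
    """Time at which reliability drops to the target level, i.e. a
    candidate overhaul interval."""
    return eta * (-math.log(reliability)) ** (1.0 / beta)

def mean_life(beta, eta):
    """Weibull mean life; renewal theory approximates the long-run
    replacement rate as 1 / mean_life."""
    return eta * math.gamma(1.0 + 1.0 / beta)

beta, eta = 2.5, 8000.0           # assumed shape / characteristic life, hours
t_oh = overhaul_interval(beta, eta)
print(t_oh, 1.0 / mean_life(beta, eta))
```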

  16. Motion estimation in the frequency domain using fuzzy c-planes clustering.

    PubMed

    Erdem, C E; Karabulut, G Z; Yanmaz, E; Anarim, E

    2001-01-01

A recent work explicitly models the discontinuous motion estimation problem in the frequency domain, where the motion parameters are estimated using a harmonic retrieval approach. The vertical and horizontal components of the motion are independently estimated from the locations of the peaks of the respective periodogram analyses, and they are paired to obtain the motion vectors using a previously proposed procedure. In this paper, we present a more efficient method that replaces the motion component pairing task and hence eliminates the problems of that pairing method. The method described in this paper uses the fuzzy c-planes (FCP) clustering approach to fit planes to three-dimensional (3-D) frequency domain data obtained from the peaks of the periodograms. Experimental results are provided to demonstrate the effectiveness of the proposed method.

  17. A new method for predicting response in complex linear systems. II. [under random or deterministic steady state excitation

    NASA Technical Reports Server (NTRS)

    Bogdanoff, J. L.; Kayser, K.; Krieger, W.

    1977-01-01

The paper describes convergence and response studies in the low frequency range of complex systems, particularly for low damping values with different damping distributions, and reports on the modification of the relaxation procedure required under these conditions. A new method is presented for response estimation in complex lumped parameter linear systems under random or deterministic steady state excitation. The essence of the method is the use of relaxation procedures with a suitable error function to find the estimated response; natural frequencies and normal modes are not computed. For a 45 degree-of-freedom system and two relaxation procedures, convergence studies and frequency response estimates were performed. The low frequency studies are considered in the framework of earlier studies (Kayser and Bogdanoff, 1975) involving the mid to high frequency range.

  18. Estimation of the auto frequency response function at unexcited points using dummy masses

    NASA Astrophysics Data System (ADS)

    Hosoya, Naoki; Yaginuma, Shinji; Onodera, Hiroshi; Yoshimura, Takuya

    2015-02-01

If structures with complex shapes have space limitations, vibration tests using an exciter or impact hammer for the excitation are difficult. Although measuring the auto frequency response function at an unexcited point may not be practical via a vibration test, it can be obtained by treating the inertia force acting on a dummy mass as an external force on the target structure while exciting a different point. We propose a method to estimate the auto frequency response functions at unexcited points by attaching a small mass (dummy mass) comparable to the accelerometer mass. The validity of the proposed method is demonstrated by comparing the auto frequency response functions estimated at unexcited points in a beam structure with those obtained from numerical simulations. We also consider random measurement errors, via finite element analysis and vibration tests, but not bias errors. Additionally, the applicability of the proposed method is demonstrated by using it to estimate the auto frequency response function of the lower arm in a car suspension.

  19. Improved analysis of ground vibrations produced by man-made sources.

    PubMed

    Ainalis, Daniel; Ducarne, Loïc; Kaufmann, Olivier; Tshibangu, Jean-Pierre; Verlinden, Olivier; Kouroussis, Georges

    2018-03-01

Man-made sources of ground vibration must be carefully monitored in urban areas in order to ensure that structural damage and discomfort to residents are prevented or minimised. The research presented in this paper provides a comparative evaluation of various methods used to analyse a series of tri-axial ground vibration measurements generated by rail, road, and explosive blasting. The first part of the study is focused on comparing various techniques for estimating the dominant frequency, including time-frequency analysis. This comparative evaluation revealed that, depending on the method used, there can be significant variation in the estimates obtained. A new and improved analysis approach using the continuous wavelet transform is also presented, using the time-frequency distribution to estimate the localised dominant frequency and peak particle velocity. The technique can be used to accurately identify the level and frequency content of a ground vibration signal as it varies with time, and to identify the number of times the threshold limits of damage are exceeded. Copyright © 2017 Elsevier B.V. All rights reserved.
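The simplest of the compared techniques, a whole-record spectral peak, can be sketched in a few lines; the time-frequency methods in the paper refine this by localising the peak in time. The synthetic "vibration" below is an assumption for illustration only.

```python
import numpy as np

def dominant_frequency(x, fs):
    """Dominant frequency = location of the largest periodogram peak."""
    spectrum = np.abs(np.fft.rfft(x))
    spectrum[0] = 0.0                       # ignore the DC component
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return freqs[np.argmax(spectrum)]

fs = 1000.0
t = np.arange(0, 2.0, 1.0 / fs)
# Synthetic ground vibration: a 12 Hz dominant tone plus a weaker 40 Hz tone.
x = 1.0 * np.sin(2 * np.pi * 12 * t) + 0.4 * np.sin(2 * np.pi * 40 * t)
print(dominant_frequency(x, fs))  # 12.0
```

A windowed version of the same computation (sliding the FFT over short segments) is the conceptual bridge to the time-frequency estimates the paper evaluates.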

  20. Rainfall Measurement with a Ground Based Dual Frequency Radar

    NASA Technical Reports Server (NTRS)

    Takahashi, Nobuhiro; Horie, Hiroaki; Meneghini, Robert

    1997-01-01

Dual frequency methods are among the most useful ways to estimate precise rainfall rates. However, there are some difficulties in applying this method to ground based radars because of the existence of a blind zone and possible error in the radar calibration. Because of these problems, supplemental observations such as rain gauges or satellite link estimates of path integrated attenuation (PIA) are needed. This study shows how to estimate rainfall rate with a ground based dual frequency radar using rain gauge and satellite link data. Application of this method to stratiform rainfall is also shown, and the method is compared with a single wavelength method. Data were obtained from a dual frequency (10 GHz and 35 GHz) multiparameter radar radiometer built by the Communications Research Laboratory (CRL), Japan, and located at NASA/GSFC during the spring of 1997. Optical rain gauge (ORG) data and broadcasting satellite signal data near the radar location were also utilized for the calculation.

  1. A dynamical systems approach for estimating phase interactions between rhythms of different frequencies from experimental data

    PubMed Central

    Goto, Takahiro; Aoyagi, Toshio

    2018-01-01

    Synchronization of neural oscillations as a mechanism of brain function is attracting increasing attention. Neural oscillation is a rhythmic neural activity that can be easily observed by noninvasive electroencephalography (EEG). Neural oscillations show the same frequency and cross-frequency synchronization for various cognitive and perceptual functions. However, it is unclear how this neural synchronization is achieved by a dynamical system. If neural oscillations are weakly coupled oscillators, the dynamics of neural synchronization can be described theoretically using a phase oscillator model. We propose an estimation method to identify the phase oscillator model from real data of cross-frequency synchronized activities. The proposed method can estimate the coupling function governing the properties of synchronization. Furthermore, we examine the reliability of the proposed method using time-series data obtained from numerical simulation and an electronic circuit experiment, and show that our method can estimate the coupling function correctly. Finally, we estimate the coupling function between EEG oscillation and the speech sound envelope, and discuss the validity of these results. PMID:29337999
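If phase time series and their derivatives are available, a coupling function can be estimated by ordinary least squares on a truncated Fourier series of the phase difference. This is a minimal 1:1 sketch with synthetic phases and first-order harmonics only; the paper's estimator handles cross-frequency (n:m) interactions and real data more generally.

```python
import numpy as np

rng = np.random.default_rng(3)

# Ground truth: dphi1/dt = w1 + a*sin(dphi) + b*cos(dphi), dphi = phi2 - phi1.
w1, a, b = 2.0, 0.30, -0.10
phi1 = rng.uniform(0, 2 * np.pi, 500)
phi2 = rng.uniform(0, 2 * np.pi, 500)
dphi = phi2 - phi1
dphi1_dt = w1 + a * np.sin(dphi) + b * np.cos(dphi) \
           + 0.02 * rng.standard_normal(500)       # observation noise

# Least-squares fit of a first-order Fourier series for the coupling function.
X = np.column_stack([np.ones_like(dphi), np.sin(dphi), np.cos(dphi)])
coef, *_ = np.linalg.lstsq(X, dphi1_dt, rcond=None)
print(coef)  # approximately [2.0, 0.30, -0.10]
```

The recovered coefficients are the natural frequency and the Fourier coefficients of the coupling function, which is exactly the object the abstract says the method estimates.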

  2. Estimating the vibration level of an L-shaped beam using power flow techniques

    NASA Technical Reports Server (NTRS)

    Cuschieri, J. M.; Mccollum, M.; Rassineux, J. L.; Gilbert, T.

    1986-01-01

    The response of one component of an L-shaped beam, with point force excitation on the other component, is estimated using the power flow method. The transmitted power from the source component to the receiver component is expressed in terms of the transfer and input mobilities at the excitation point and the joint. The response is estimated both in narrow frequency bands, using the exact geometry of the beams, and as a frequency averaged response using infinite beam models. The results using this power flow technique are compared to the results obtained using finite element analysis (FEA) of the L-shaped beam for the low frequency response and to results obtained using statistical energy analysis (SEA) for the high frequencies. The agreement between the FEA results and the power flow method results at low frequencies is very good. SEA results are in terms of frequency averaged levels and these are in perfect agreement with the results obtained using the infinite beam models in the power flow method. The narrow frequency band results from the power flow method also converge to the SEA results at high frequencies. The advantage of the power flow method is that detail of the response can be retained while reducing computation time, which will allow the narrow frequency band analysis of the response to be extended to higher frequencies.

  3. Estimation of chirp rates of music-adapted prolate spheroidal atoms using reassignment

    NASA Astrophysics Data System (ADS)

    Mesz, Bruno; Serrano, Eduardo

    2007-09-01

We introduce a modified Matching Pursuit algorithm for estimating the frequency and frequency slope of FM-modulated music signals. The use of Matching Pursuit with constant frequency atoms provides coarse estimates, which can be improved with chirped atoms that are in principle better suited to this kind of signal. Application of the reassignment method is suggested by its good localization properties for chirps. We start by considering a family of atoms generated by modulation and scaling of a prolate spheroidal wave function. These functions are concentrated in frequency on intervals of a semitone centered at the frequencies of the well-tempered scale. At each stage of the pursuit, we search for the atom most correlated with the signal. We then consider the spectral peaks at each frame of the spectrogram and calculate a modified frequency and frequency slope using the derivatives of the reassignment operators; this is then used to estimate the parameters of a cubic interpolation polynomial that models local pitch fluctuations. We apply the method to both synthetic and music signals.

  4. A maximum likelihood algorithm for genome mapping of cytogenetic loci from meiotic configuration data.

    PubMed Central

    Reyes-Valdés, M H; Stelly, D M

    1995-01-01

Frequencies of meiotic configurations in cytogenetic stocks are dependent on chiasma frequencies in segments defined by centromeres, breakpoints, and telomeres. The expectation maximization algorithm is proposed as a general method to perform maximum likelihood estimations of the chiasma frequencies in the intervals between such locations. The estimates can be translated via mapping functions into genetic maps of cytogenetic landmarks. One set of observational data was analyzed to exemplify application of these methods, results of which were largely concordant with other comparable data. The method was also tested by Monte Carlo simulation of frequencies of meiotic configurations from a monotelodisomic translocation heterozygote, assuming six different sample sizes. The estimate averages were always close to the values given initially to the parameters. The maximum likelihood estimation procedures can be extended readily to other kinds of cytogenetic stocks and allow the pooling of diverse cytogenetic data to collectively estimate lengths of segments, arms, and chromosomes. PMID:7568226

  5. Uncertainty of streamwater solute fluxes in five contrasting headwater catchments including model uncertainty and natural variability (Invited)

    NASA Astrophysics Data System (ADS)

    Aulenbach, B. T.; Burns, D. A.; Shanley, J. B.; Yanai, R. D.; Bae, K.; Wild, A.; Yang, Y.; Dong, Y.

    2013-12-01

There are many sources of uncertainty in estimates of streamwater solute flux. Flux is the product of discharge and concentration (summed over time), each of which has measurement uncertainty of its own. Discharge can be measured almost continuously, but concentrations are usually determined from discrete samples, which increases uncertainty depending on sampling frequency and on how concentrations are assigned for the periods between samples. Gaps between samples can be estimated by linear interpolation or by models that use the relations between concentration and continuously measured or known variables such as discharge, season, temperature, and time. For this project, developed in cooperation with QUEST (Quantifying Uncertainty in Ecosystem Studies), we evaluated uncertainty for three flux estimation methods and three different sampling frequencies (monthly, weekly, and weekly plus event). The constituents investigated were dissolved NO3, Si, SO4, and dissolved organic carbon (DOC), solutes whose concentration dynamics exhibit strongly contrasting behavior. The evaluation was completed for a 10-year period at five small, forested watersheds in Georgia, New Hampshire, New York, Puerto Rico, and Vermont. Concentration regression models were developed for each solute at each of the three sampling frequencies for all five watersheds. Fluxes were then calculated using (1) a linear interpolation approach, (2) a regression-model method, and (3) the composite method - which combines the regression-model method for estimating concentrations and the linear interpolation method for correcting model residuals to the observed sample concentrations. We considered the best estimates of flux to be derived using the composite method at the highest sampling frequencies.
We also evaluated the importance of sampling frequency and estimation method on flux estimate uncertainty; flux uncertainty was dependent on the variability characteristics of each solute and varied for different reporting periods (e.g., the 10-year study period vs. annual vs. monthly periods). The usefulness of the two regression-model-based flux estimation approaches was dependent upon the amount of variance in concentrations the regression models could explain. Our results can guide the development of optimal sampling strategies by weighing sampling frequency against improvements in the uncertainty of stream flux estimates for solutes with particular characteristics of variability. The appropriate flux estimation method depends on a combination of sampling frequency and the strength of the concentration regression models. Sites: Biscuit Brook (Frost Valley, NY), Hubbard Brook Experimental Forest and LTER (West Thornton, NH), Luquillo Experimental Forest and LTER (Luquillo, Puerto Rico), Panola Mountain (Stockbridge, GA), Sleepers River Research Watershed (Danville, VT)
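Method (1), the linear interpolation approach, can be sketched directly: interpolate the discrete concentration samples onto the discharge time steps, then sum concentration times discharge. The daily-discharge series and weekly concentrations below are toy values, not data from the study catchments.

```python
import numpy as np

def flux_linear_interp(t_q, q, t_c, c):
    """Solute flux via linear interpolation of concentration between
    discrete samples, summed as concentration x discharge per time step."""
    c_interp = np.interp(t_q, t_c, c)       # mg/L at every discharge step
    return np.sum(c_interp * q)             # (mg/L) * (L/day), summed -> mg

# 30 days of daily discharge, weekly concentration samples (toy values).
t_q = np.arange(30.0)
q = 1000.0 + 200.0 * np.sin(t_q / 5.0)      # L/day
t_c = np.array([0.0, 7.0, 14.0, 21.0, 28.0])
c = np.array([2.0, 2.4, 1.8, 2.1, 2.0])     # mg/L

print(flux_linear_interp(t_q, q, t_c, c))   # total mg over 30 days
```

The regression-model and composite methods replace `np.interp` with model-predicted concentrations, with the composite method additionally correcting the model residuals back toward the observed samples.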

  6. A two-step parameter optimization algorithm for improving estimation of optical properties using spatial frequency domain imaging

    NASA Astrophysics Data System (ADS)

    Hu, Dong; Lu, Renfu; Ying, Yibin

    2018-03-01

This research was aimed at optimizing the inverse algorithm for estimating the optical absorption (μa) and reduced scattering (μs′) coefficients from spatial frequency domain diffuse reflectance. Studies were first conducted to determine the optimal frequency resolution and start and end frequencies in terms of the reciprocal of the mean free path (1/mfp′). The results showed that the optimal frequency resolution increased with μs′ and remained stable when μs′ was larger than 2 mm^-1. The optimal end frequency decreased from 0.3/mfp′ to 0.16/mfp′ as μs′ ranged from 0.4 mm^-1 to 3 mm^-1, while the optimal start frequency remained at 0 mm^-1. A two-step parameter estimation method was proposed based on the optimized frequency parameters, which improved estimation accuracies by 37.5% and 9.8% for μa and μs′, respectively, compared with the conventional one-step method. Experimental validations with seven liquid optical phantoms showed that the optimized algorithm resulted in mean absolute errors of 15.4%, 7.6%, and 5.0% for μa and 16.4%, 18.0%, and 18.3% for μs′ at the wavelengths of 675 nm, 700 nm, and 715 nm, respectively. Hence, implementation of the optimized parameter estimation method should be considered in order to improve the measurement of optical properties of biological materials when using the spatial frequency domain imaging technique.
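The general shape of a two-step parameter estimate (a coarse search to bracket the optimum, then a fine search inside the bracket) can be sketched on a one-dimensional toy objective. This is a generic illustration, not the paper's diffuse-reflectance inversion; the objective function and search bounds are invented.

```python
import numpy as np

def two_step_minimize(f, lo, hi, n_coarse=20, n_fine=200):
    """Two-step 1-D parameter search: a coarse grid brackets the optimum,
    then a dense grid refines the estimate inside the bracketing interval."""
    grid = np.linspace(lo, hi, n_coarse)
    i = int(np.argmin([f(g) for g in grid]))
    lo2 = grid[max(i - 1, 0)]
    hi2 = grid[min(i + 1, n_coarse - 1)]
    fine = np.linspace(lo2, hi2, n_fine)
    j = int(np.argmin([f(g) for g in fine]))
    return fine[j]

# Toy misfit standing in for the reflectance-model residual; minimum at 0.73.
mu_hat = two_step_minimize(lambda mu: (mu - 0.73) ** 2 + 0.1, 0.0, 3.0)
print(mu_hat)
```

The coarse pass keeps the number of expensive forward-model evaluations small; the fine pass restores the accuracy that a single coarse grid would lose.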

  7. An Initial Assessment of the Surface Reference Technique Applied to Data from the Dual-Frequency Precipitation Radar (DPR) on the GPM Satellite

    NASA Technical Reports Server (NTRS)

    Meneghini, Robert; Kim, Hyokyung; Liao, Liang; Jones, Jeffrey A.; Kwiatkowski, John M.

    2015-01-01

It has long been recognized that path-integrated attenuation (PIA) can be used to improve precipitation estimates from high-frequency weather radar data. One approach that provides an estimate of this quantity from airborne or spaceborne radar data is the surface reference technique (SRT), which uses measurements of the surface cross section in the presence and absence of precipitation. Measurements from the dual-frequency precipitation radar (DPR) on the Global Precipitation Measurement (GPM) satellite afford the first opportunity to test the method for spaceborne radar data at Ka band as well as for the Ku-band/Ka-band combination. The study begins by reviewing the basis of the single- and dual-frequency SRT. As the performance of the method is closely tied to the behavior of the normalized radar cross section (NRCS or σ0) of the surface, the statistics of σ0 derived from DPR measurements are given as a function of incidence angle and frequency for ocean and land backgrounds over a 1-month period. Several independent estimates of the PIA, formed by means of different surface reference datasets, can be used to test the consistency of the method since, in the absence of error, the estimates should be identical. Along with theoretical considerations, the comparisons provide an initial assessment of the performance of the single- and dual-frequency SRT for the DPR. The study finds that the dual-frequency SRT can provide improvement in the accuracy of path attenuation estimates relative to the single-frequency method, particularly at Ku band.
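The arithmetic of the SRT is simple: the two-way PIA at each band is the drop of the surface cross section under rain relative to a rain-free reference, and the dual-frequency variant works with the differential PIA, in which correlated surface variability partly cancels. The σ0 values below are illustrative placeholders, not DPR measurements.

```python
# Surface reference technique (sketch): PIA = rain-free sigma0 minus
# sigma0 measured through rain, per frequency band (all values in dB).
sigma0_ref = {"Ku": 8.0, "Ka": 6.5}      # rain-free reference (assumed)
sigma0_rain = {"Ku": 5.5, "Ka": -1.0}    # measured through rain (assumed)

pia = {band: sigma0_ref[band] - sigma0_rain[band] for band in sigma0_ref}

# Dual-frequency SRT uses the differential PIA between the two bands.
dpia = pia["Ka"] - pia["Ku"]
print(pia, dpia)  # {'Ku': 2.5, 'Ka': 7.5} 5.0
```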

  8. Use of the Method of Triads in the Validation of Sodium and Potassium Intake in the Brazilian Longitudinal Study of Adult Health (ELSA-Brasil)

    PubMed Central

    Pereira, Taísa Sabrina Silva; Cade, Nágela Valadão; Mill, José Geraldo; Sichieri, Rosely; Molina, Maria del Carmen Bisi

    2016-01-01

Introduction Biomarkers are a good choice for use in the validation of a food frequency questionnaire because of the independence of their random errors. Objective To assess the validity of the potassium and sodium intake estimated using the Food Frequency Questionnaire ELSA-Brasil. Subjects/Methods A subsample of participants in the ELSA-Brasil cohort was included in this study in 2009. Sodium and potassium intake were estimated using three methods: a semi-quantitative food frequency questionnaire, 12-hour nocturnal urinary excretion, and three 24-hour food records. Correlation coefficients were calculated between the methods, and the validity coefficient was calculated using the method of triads. The 95% confidence intervals for the validity coefficients were estimated using bootstrap sampling. Exact and adjacent agreement and disagreement of the estimated sodium and potassium intake quintiles were compared among the three methods. Results The sample consisted of 246 participants, aged 53±8 years, 52% women. Validity coefficients for sodium were weak (ρ(food frequency questionnaire, actual intake) = 0.37 and ρ(biomarker, actual intake) = 0.21) to moderate (ρ(food records, actual intake) = 0.56). The validity coefficients were higher for potassium (ρ(food frequency questionnaire, actual intake) = 0.60; ρ(biomarker, actual intake) = 0.42; ρ(food records, actual intake) = 0.79). Conclusions The Food Frequency Questionnaire ELSA-Brasil showed good validity in estimating potassium intake in epidemiological studies. For sodium, validity was weak, likely due to the non-quantification of salt added to prepared food. PMID:28030625
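The method of triads itself is a one-line formula: with three instruments Q (questionnaire), R (records), and M (biomarker) whose errors are assumed independent, the validity coefficient of Q against true intake is the square root of r_QR * r_QM / r_RM. A minimal sketch with illustrative correlations (not the paper's data):

```python
import math

def triad_validity(r_qr, r_qm, r_rm):
    """Method of triads: validity coefficient of instrument Q against
    true intake, from the three pairwise correlations among the
    questionnaire (Q), food records (R), and biomarker (M)."""
    return math.sqrt(r_qr * r_qm / r_rm)

# Pairwise correlations (illustrative values only).
rho_q = triad_validity(r_qr=0.40, r_qm=0.25, r_rm=0.45)
print(round(rho_q, 3))
```

The same formula with the roles permuted yields the validity coefficients of the records and the biomarker, which is how a study reports all three ρ values.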

  9. Statistical comparison of methods for estimating sediment thickness from Horizontal-to-Vertical Spectral Ratio (HVSR) seismic methods: An example from Tylerville, Connecticut, USA

    USGS Publications Warehouse

    Johnson, Carole D.; Lane, John W.

    2016-01-01

    Determining sediment thickness and delineating bedrock topography are important for assessing groundwater availability and characterizing contamination sites. In recent years, the horizontal-to-vertical spectral ratio (HVSR) seismic method has emerged as a non-invasive, cost-effective approach for estimating the thickness of unconsolidated sediments above bedrock. Using a three-component seismometer, this method uses the ratio of the average horizontal- and vertical-component amplitude spectrums to produce a spectral ratio curve with a peak at the fundamental resonance frequency. The HVSR method produces clear and repeatable resonance frequency peaks when there is a sharp contrast (>2:1) in acoustic impedance at the sediment/bedrock boundary. Given the resonant frequency, sediment thickness can be determined either by (1) using an estimate of average local sediment shear-wave velocity or by (2) application of a power-law regression equation developed from resonance frequency observations at sites with a range of known depths to bedrock. Two frequently asked questions about the HVSR method are (1) how accurate are the sediment thickness estimates? and (2) how much do sediment thickness/bedrock depth estimates change when using different published regression equations? This paper compares and contrasts different approaches for generating HVSR depth estimates, through analysis of HVSR data acquired in the vicinity of Tylerville, Connecticut, USA.
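The two depth conversions the abstract names are both one-liners: the quarter-wavelength relation h = Vs / (4 f0), and a site-calibrated power law h = a * f0^b. The Vs and the regression coefficients below are illustrative placeholders, not values from any published calibration.

```python
def depth_from_velocity(f0, vs):
    """Quarter-wavelength relation: resonance f0 = Vs / (4 h), so
    h = Vs / (4 f0), with Vs the average sediment shear-wave velocity."""
    return vs / (4.0 * f0)

def depth_from_regression(f0, a=96.0, b=-1.26):
    """Power-law regression h = a * f0**b; a and b here are illustrative
    placeholders, not the coefficients of any published equation."""
    return a * f0 ** b

f0 = 2.0  # Hz, HVSR resonance peak
print(depth_from_velocity(f0, vs=200.0), depth_from_regression(f0))
```

Comparing the two estimates for the same f0 is essentially the statistical exercise the paper performs across its Tylerville dataset.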

  10. Vast Volatility Matrix Estimation using High Frequency Data for Portfolio Selection*

    PubMed Central

    Fan, Jianqing; Li, Yingying; Yu, Ke

    2012-01-01

Portfolio allocation with a gross-exposure constraint is an effective method to increase the efficiency and stability of portfolio selection among a vast pool of assets, as demonstrated in Fan et al. (2011). The required high-dimensional volatility matrix can be estimated using high frequency financial data. This enables us to better adapt to the local volatilities and local correlations among a vast number of assets and to significantly increase the sample size for estimating the volatility matrix. This paper studies volatility matrix estimation using high-dimensional high-frequency data from the perspective of portfolio selection. Specifically, we propose the use of “pairwise-refresh time” and “all-refresh time” methods, based on the concept of “refresh time” proposed by Barndorff-Nielsen et al. (2008), for estimation of the vast covariance matrix, and compare their merits in portfolio selection. We establish concentration inequalities for the estimates, which guarantee desirable properties of the estimated volatility matrix in vast asset allocation with gross-exposure constraints. Extensive numerical studies are made via carefully designed simulations. Compared with methods based on low frequency daily data, our methods can capture the most recent trend of the time-varying volatility and correlation, and hence provide more accurate guidance for portfolio allocation in the next time period. The advantage of using high-frequency data is significant in our simulation and empirical studies, which consist of 50 simulated assets and the 30 constituent stocks of the Dow Jones Industrial Average index. PMID:23264708

  11. Motion estimation using point cluster method and Kalman filter.

    PubMed

    Senesh, M; Wolf, A

    2009-05-01

The most frequently used method in three-dimensional human gait analysis involves placing markers on the skin of the analyzed segment. This introduces a significant artifact, which strongly influences estimates of bone position and orientation and of joint kinematics. In this study, we tested and evaluated the effect of adding a Kalman filter procedure to the previously reported point cluster technique (PCT) in the estimation of rigid body motion. We demonstrated the procedures by motion analysis of a compound planar pendulum from indirect opto-electronic measurements of markers attached to an elastic appendage that is restrained to slide along the rigid body long axis. The elastic frequency is close to the pendulum frequency, as in the biomechanical problem, where the soft tissue frequency content is similar to the actual movement of the bones. Comparison of the real pendulum angle to that obtained by several estimation procedures--PCT, Kalman filter followed by PCT, and low pass filter followed by PCT--enables evaluation of the accuracy of the procedures. When comparing the maximal amplitude, no effect was noted from adding the Kalman filter; however, a closer look at the signal revealed that the estimated angle based only on the PCT method was very noisy, with fluctuations, while the estimated angle based on the Kalman filter followed by the PCT was a smooth signal. It was also noted that the instantaneous frequencies obtained from the estimated angle based on the PCT method are more dispersed than those obtained from the estimated angle based on the Kalman filter followed by the PCT method. Adding a Kalman filter to the PCT method in the estimation procedure of rigid body motion results in a smoother signal that better represents the real motion, with less signal distortion than when using a digital low pass filter.
Furthermore, it can be concluded that adding a Kalman filter to the PCT procedure substantially reduces the dispersion of the maximal and minimal instantaneous frequencies.
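    To illustrate the kind of smoothing a Kalman filter provides on a noisy angle signal, here is a minimal constant-velocity Kalman filter in pure Python. The state model, noise levels, and function name are illustrative assumptions, not the filter design used in the study:

```python
def kalman_smooth(measurements, dt=0.01, q=1e-4, r=0.05):
    """Constant-velocity Kalman filter for a noisy scalar angle.
    State: [angle, angular rate]; q = process noise added to each
    diagonal term of P (a simplistic choice), r = measurement noise."""
    x = [measurements[0], 0.0]            # initial state
    P = [[1.0, 0.0], [0.0, 1.0]]          # initial state covariance
    out = []
    for z in measurements:
        # predict: x <- F x, P <- F P F' + Q, with F = [[1, dt], [0, 1]]
        x = [x[0] + dt * x[1], x[1]]
        P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1],
              P[1][1] + q]]
        # update with scalar measurement z of the angle (H = [1, 0])
        s = P[0][0] + r                    # innovation variance
        K = [P[0][0] / s, P[1][0] / s]     # Kalman gain
        y = z - x[0]                       # innovation
        x = [x[0] + K[0] * y, x[1] + K[1] * y]
        P = [[(1 - K[0]) * P[0][0], (1 - K[0]) * P[0][1]],
             [P[1][0] - K[1] * P[0][0], P[1][1] - K[1] * P[0][1]]]
        out.append(x[0])
    return out
```

    Because the gain is always below one, rapid measurement fluctuations (the "soft-tissue" component) are attenuated while the slowly varying true angle is tracked.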

  12. cloncase: Estimation of sex frequency and effective population size by clonemate resampling in partially clonal organisms.

    PubMed

    Ali, Sajid; Soubeyrand, Samuel; Gladieux, Pierre; Giraud, Tatiana; Leconte, Marc; Gautier, Angélique; Mboup, Mamadou; Chen, Wanquan; de Vallavieille-Pope, Claude; Enjalbert, Jérôme

    2016-07-01

    Inferring reproductive and demographic parameters of populations is crucial to our understanding of species ecology and evolutionary potential but can be challenging, especially in partially clonal organisms. Here, we describe a new and accurate method, cloncase, for estimating both the rate of sexual vs. asexual reproduction and the effective population size, based on the frequency of clonemate resampling across generations. Simulations showed that our method provides reliable estimates of sex frequency and effective population size for a wide range of parameters. The cloncase method was applied to Puccinia striiformis f.sp. tritici, a fungal pathogen causing stripe/yellow rust, an important wheat disease. This fungus is highly clonal in Europe but has been suggested to recombine in Asia. Using two temporally spaced samples of P. striiformis f.sp. tritici in China, the estimated sex frequency was 75% (i.e. three-quarters of individuals being sexually derived during the yearly sexual cycle), indicating a strong contribution of sexual reproduction to the life cycle of the pathogen in this area. The inferred effective population size of this partially clonal organism (Nc = 998) was in good agreement with estimates obtained using methods based on temporal variation in allelic frequencies. The cloncase estimator presented herein is the first method allowing accurate inference of both sex frequency and effective population size from population data without knowledge of recombination or mutation rates. cloncase can be applied to population genetic data from any organism with cyclical parthenogenesis and should be particularly useful for improving our understanding of pest and microbial population biology. © 2016 John Wiley & Sons Ltd.

  13. Cover/Frequency (CF)

    Treesearch

    John F. Caratti

    2006-01-01

    The FIREMON Cover/Frequency (CF) method is used to assess changes in plant species cover and frequency for a macroplot. This method uses multiple quadrats to sample within-plot variation and quantify statistically valid changes in plant species cover, height, and frequency over time. Because it is difficult to estimate cover in quadrats for larger plants, this method...

  14. Fast focus estimation using frequency analysis in digital holography.

    PubMed

    Oh, Seungtaik; Hwang, Chi-Young; Jeong, Il Kwon; Lee, Sung-Keun; Park, Jae-Hyeung

    2014-11-17

    A novel fast frequency-based method to estimate the focus distance of a digital hologram for a single object is proposed. The focus distance is computed by analyzing the distribution of intersections of smoothed rays. The smoothed rays are determined by the directions of energy flow, which are computed from the local spatial frequency spectrum based on the windowed Fourier transform. Our method thus uses only the intrinsic frequency information of the optical field on the hologram and therefore requires neither sequential numerical reconstructions nor the focus detection techniques of conventional photography, both of which are essential parts of previous methods. To show the effectiveness of our method, numerical results and analysis are presented as well.

  15. Influence of sampling frequency and load calculation methods on quantification of annual river nutrient and suspended solids loads.

    PubMed

    Elwan, Ahmed; Singh, Ranvir; Patterson, Maree; Roygard, Jon; Horne, Dave; Clothier, Brent; Jones, Geoffrey

    2018-01-11

    Better management of water quality in streams, rivers and lakes requires precise and accurate estimates of different contaminant loads. We assessed four sampling frequencies (2-day, weekly, fortnightly and monthly) and five load calculation methods (global mean (GM), rating curve (RC), ratio estimator (RE), flow-stratified (FS) and flow-weighted (FW)) to quantify loads of nitrate-nitrogen (NO3⁻-N), soluble inorganic nitrogen (SIN), total nitrogen (TN), dissolved reactive phosphorus (DRP), total phosphorus (TP) and total suspended solids (TSS) in the Manawatu River, New Zealand. The estimated annual river loads were compared to the reference 'true' loads, calculated using daily measurements of flow and water quality from May 2010 to April 2011, to quantify bias (i.e. accuracy) and root mean square error, RMSE (i.e. accuracy and precision). The GM method resulted in relatively higher RMSE values and a consistent negative bias (i.e. underestimation) in estimates of annual river loads across all sampling frequencies. The RC method resulted in the lowest RMSE for TN, TP and TSS at the monthly sampling frequency, yet it greatly overestimated the loads for parameters that showed a dilution effect, such as NO3⁻-N and SIN. The FW and RE methods gave similar results, and there was no essential improvement in using RE over FW. In general, FW and RE performed better than FS in terms of bias, but FS performed slightly better than FW and RE in terms of RMSE for most of the water quality parameters (DRP, TP, TN and TSS) at a monthly sampling frequency. We found no significant decrease in RMSE values for estimates of NO3⁻-N, SIN, TN and DRP loads when the sampling frequency was increased from monthly to fortnightly. The bias and RMSE values in estimates of TP and TSS loads (estimated by FW, RE and FS), however, showed a significant decrease with weekly or 2-day sampling. This suggests potential for a higher sampling frequency during flow peaks for more precise and accurate estimates of annual river loads for TP and TSS, in the study river and under similar conditions.
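    The contrast between the global mean (GM) and flow-weighted (FW) estimators can be made concrete: GM multiplies the plain mean of the sampled concentrations by the annual flow volume, while FW weights each sampled concentration by its concurrent flow, so constituents whose concentration rises with flow (such as TSS) are underestimated by GM. A hedged sketch with illustrative names and unit-free toy numbers:

```python
def global_mean_load(conc_samples, annual_flow_volume):
    """GM estimator: (mean sampled concentration) x (annual flow volume)."""
    cbar = sum(conc_samples) / len(conc_samples)
    return cbar * annual_flow_volume

def flow_weighted_load(conc_samples, flow_samples, annual_flow_volume):
    """FW estimator: (flow-weighted mean concentration) x (annual flow
    volume); each concentration is weighted by its concurrent flow."""
    cw = (sum(c * q for c, q in zip(conc_samples, flow_samples))
          / sum(flow_samples))
    return cw * annual_flow_volume
```

    With concentrations [1, 2, 10] observed at flows [1, 2, 10], the flow-weighted mean concentration (105/13 ≈ 8.1) is nearly double the plain mean (13/3 ≈ 4.3), reproducing the negative bias of GM for flow-correlated constituents.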

  16. Use of the Method of Triads in the Validation of Sodium and Potassium Intake in the Brazilian Longitudinal Study of Adult Health (ELSA-Brasil).

    PubMed

    Pereira, Taísa Sabrina Silva; Cade, Nágela Valadão; Mill, José Geraldo; Sichieri, Rosely; Molina, Maria Del Carmen Bisi

    2016-01-01

    Biomarkers are a good choice for use in the validation of food frequency questionnaires because of the independence of their random errors. The aim was to assess the validity of the potassium and sodium intake estimated using the Food Frequency Questionnaire ELSA-Brasil. A subsample of participants in the ELSA-Brasil cohort was included in this study in 2009. Sodium and potassium intake were estimated using three methods: a semi-quantitative food frequency questionnaire, 12-hour nocturnal urinary excretion and three 24-hour food records. Correlation coefficients were calculated between the methods, and validity coefficients were calculated using the method of triads. The 95% confidence intervals for the validity coefficients were estimated using bootstrap sampling. Exact and adjacent agreement and disagreement of the estimated sodium and potassium intake quintiles were compared among the three methods. The sample consisted of 246 participants, aged 53 ± 8 years, 52% women. Validity coefficients for sodium were weak for the food frequency questionnaire (ρ = 0.37) and the biomarker (ρ = 0.21) against actual intake, and moderate for the food records (ρ = 0.56). The validity coefficients were higher for potassium (food frequency questionnaire, ρ = 0.60; biomarker, ρ = 0.42; food records, ρ = 0.79). The Food Frequency Questionnaire ELSA-Brasil showed good validity in estimating potassium intake in epidemiological studies. For sodium, validity was weak, likely due to the non-quantification of salt added to prepared food.
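    The method of triads derives each instrument's validity coefficient against the unobserved "true intake" from the three pairwise correlations, e.g. ρ_Q = sqrt(r_QR · r_QB / r_RB) for the questionnaire (Q), with records (R) and biomarker (B) analogously. A small sketch with illustrative variable names:

```python
import math

def triad_validity(r_qr, r_qb, r_rb):
    """Method-of-triads validity coefficients of questionnaire (Q),
    records (R) and biomarker (B) against unobserved true intake,
    from the three pairwise correlation coefficients."""
    rho_q = math.sqrt(r_qr * r_qb / r_rb)
    rho_r = math.sqrt(r_qr * r_rb / r_qb)
    rho_b = math.sqrt(r_qb * r_rb / r_qr)
    return rho_q, rho_r, rho_b
```

    The derivation assumes the three methods' errors are mutually independent, which is exactly why a biomarker (with errors unrelated to self-report) is valuable in the triad; coefficients exceeding 1 ("Heywood cases") signal that this assumption is violated.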

  17. Online frequency estimation with applications to engine and generator sets

    NASA Astrophysics Data System (ADS)

    Manngård, Mikael; Böling, Jari M.

    2017-07-01

    Frequency and spectral analysis based on the discrete Fourier transform is a fundamental task in signal processing and machine diagnostics. This paper presents computationally efficient methods for real-time estimation of stationary and time-varying frequency components in signals. A brief survey of the sliding-time-window discrete Fourier transform and the Goertzel filter is given, and two filter banks, consisting of (i) sliding-time-window Goertzel filters and (ii) infinite impulse response narrow bandpass filters, are proposed for estimating instantaneous frequencies. The proposed methods show excellent results both in simulation studies and in a case study using angular speed measurements of the crankshaft of a marine diesel engine-generator set.
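    The Goertzel filter mentioned above evaluates a single DFT bin with a two-tap recursion, which is why a small bank of them is cheaper than a full FFT when only a few frequencies need to be tracked. A minimal sketch of the classical algorithm:

```python
import math

def goertzel(samples, k):
    """Magnitude of DFT bin k of an N-sample block via the Goertzel
    recursion: one real coefficient and two state variables per bin."""
    n = len(samples)
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    # squared magnitude of the k-th DFT coefficient
    power = s_prev ** 2 + s_prev2 ** 2 - coeff * s_prev * s_prev2
    return math.sqrt(power)
```

    A sliding-window variant, as in the paper, re-evaluates this recursion as the block advances; for a real cosine landing exactly on bin k, the magnitude equals N/2.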

  18. Identification of modal parameters including unmeasured forces and transient effects

    NASA Astrophysics Data System (ADS)

    Cauberghe, B.; Guillaume, P.; Verboven, P.; Parloo, E.

    2003-08-01

    In this paper, a frequency-domain method to estimate modal parameters from short data records with known (measured) input forces and unknown input forces is presented. The method can be used for an experimental modal analysis, an operational modal analysis (output-only data) and the combination of both. Traditional experimental and operational modal analyses in the frequency domain start, respectively, from frequency response functions and spectral density functions. To estimate these functions accurately, sufficient data have to be available. The technique developed in this paper estimates the modal parameters directly from the Fourier spectra of the outputs and the known inputs. Instead of applying Hanning windows to these short data records, the transient effects are estimated simultaneously with the modal parameters. The method is illustrated, tested and validated by Monte Carlo simulations and experiments. The presented method for processing short data sequences leads to unbiased estimates with a small variance in comparison to the more traditional approaches.

  19. Estimating the magnitude of peak flows at selected recurrence intervals for streams in Idaho

    USGS Publications Warehouse

    Berenbrock, Charles

    2002-01-01

    The region-of-influence method is not recommended for use in determining flood-frequency estimates for ungaged sites in Idaho because the results, overall, are less accurate and the calculations are more complex than those of regional regression equations. The regional regression equations were considered to be the primary method of estimating the magnitude and frequency of peak flows for ungaged sites in Idaho.

  20. Online Detection of Broken Rotor Bar Fault in Induction Motors by Combining Estimation of Signal Parameters via Min-norm Algorithm and Least Square Method

    NASA Astrophysics Data System (ADS)

    Wang, Pan-Pan; Yu, Qiang; Hu, Yong-Jun; Miao, Chang-Xin

    2017-11-01

    Current research in broken rotor bar (BRB) fault detection in induction motors is primarily focused on high-frequency-resolution analysis of the stator current. Compared with the discrete Fourier transform, parametric spectrum estimation techniques have higher frequency accuracy and resolution; however, existing detection methods based on parametric spectrum estimation cannot realize online detection, owing to their large computational cost. To improve the efficiency of BRB fault detection, a new detection method based on the min-norm algorithm and least squares estimation is proposed in this paper. First, the stator current is filtered using a band-pass filter and divided into short overlapped data windows. The min-norm algorithm is then applied to determine the frequencies of the fundamental and fault characteristic components within each overlapped data window. Next, based on the frequency values obtained, a model of the fault current signal is constructed. Subsequently, a linear least squares problem, solved through singular value decomposition, is designed to estimate the amplitudes and phases of the related components. Finally, the proposed method is applied to a simulated current and an actual motor; the results indicate that the method attains the accuracy of parametric spectrum estimation at a computational cost low enough for online detection.

  1. Direct system parameter identification of mechanical structures with application to modal analysis

    NASA Technical Reports Server (NTRS)

    Leuridan, J. M.; Brown, D. L.; Allemang, R. J.

    1982-01-01

    In this paper a method is described to estimate mechanical structure characteristics in terms of mass, stiffness and damping matrices using measured force input and response data. The estimated matrices can be used to calculate a consistent set of damped natural frequencies and damping values, mode shapes and modal scale factors for the structure. The proposed technique is attractive as an experimental modal analysis method since the estimation of the matrices does not require previous estimation of frequency responses and since the method can be used, without any additional complications, for multiple force input structure testing.

  2. Identification of site frequencies from building records

    USGS Publications Warehouse

    Celebi, M.

    2003-01-01

    A simple procedure to identify site frequencies using earthquake response records from the roofs and basements of buildings is presented. For this purpose, data from five different buildings are analyzed using only spectral analysis techniques. Additional data, such as free-field records in close proximity to the buildings and site characterization data, are also used to estimate site frequencies and thereby to provide convincing evidence and confirmation of the site frequencies inferred from the building records. Furthermore, a simple code formula is used to calculate site frequencies and compare them with the site frequencies identified from the records. Results show that the simple procedure is effective in identifying site frequencies and provides relatively reliable estimates when compared with other methods. Therefore, the simple procedure for estimating site frequencies using earthquake records can be useful in adding to the database of site frequencies. Such databases can be used to better estimate site frequencies of sites with similar geological structures.
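    A simplified version of the spectral step, picking the dominant peaks of a record's amplitude spectrum so they can be compared between roof, basement, and free-field records, might look like the following. The function is an illustrative sketch, not the paper's procedure:

```python
import numpy as np

def dominant_frequencies(record, fs, n_peaks=2):
    """Return the n_peaks largest local maxima of a record's windowed
    amplitude spectrum, in Hz (an illustrative peak-picking step)."""
    record = np.asarray(record, dtype=float)
    spec = np.abs(np.fft.rfft(record * np.hanning(len(record))))
    freqs = np.fft.rfftfreq(len(record), d=1.0 / fs)
    # indices that are local maxima of the amplitude spectrum
    idx = [k for k in range(1, len(spec) - 1)
           if spec[k] > spec[k - 1] and spec[k] >= spec[k + 1]]
    idx.sort(key=lambda k: spec[k], reverse=True)
    return sorted(float(freqs[k]) for k in idx[:n_peaks])
```

    A peak appearing in both the basement (or free-field) and roof spectra is a candidate site frequency, whereas a peak present only at the roof is more plausibly a structural frequency of the building.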

  3. Using Internet search engines to estimate word frequency.

    PubMed

    Blair, Irene V; Urland, Geoffrey R; Ma, Jennifer E

    2002-05-01

    The present research investigated Internet search engines as a rapid, cost-effective alternative for estimating word frequencies. Frequency estimates for 382 words were obtained and compared across four methods: (1) Internet search engines, (2) the Kucera and Francis (1967) analysis of a traditional linguistic corpus, (3) the CELEX English linguistic database (Baayen, Piepenbrock, & Gulikers, 1995), and (4) participant ratings of familiarity. The results showed that Internet search engines produced frequency estimates that were highly consistent with those reported by Kucera and Francis and those calculated from CELEX, highly consistent across search engines, and very reliable over a 6-month period. Additional results suggested that Internet search engines are an excellent option when traditional word frequency analyses do not contain the necessary data (e.g., estimates for forenames and slang). In contrast, participants' familiarity judgments did not correspond well with the more objective estimates of word frequency. Researchers are advised to use search engines with large databases (e.g., AltaVista) to ensure the greatest representativeness of the frequency estimates.

  4. A Computationally Efficient Method for Polyphonic Pitch Estimation

    NASA Astrophysics Data System (ADS)

    Zhou, Ruohua; Reiss, Joshua D.; Mattavelli, Marco; Zoia, Giorgio

    2009-12-01

    This paper presents a computationally efficient method for polyphonic pitch estimation. The method employs the Fast Resonator Time-Frequency Image (RTFI) as the basic time-frequency analysis tool. The approach is composed of two main stages. First, a preliminary pitch estimate is obtained by means of a simple peak-picking procedure in the pitch energy spectrum. This spectrum is calculated from the original RTFI energy spectrum according to harmonic grouping principles. Incorrect estimates are then removed according to spectral irregularity and knowledge of the harmonic structures of notes played on commonly used musical instruments. The new approach is compared with a variety of other frame-based polyphonic pitch estimation methods, and the results demonstrate the high performance and computational efficiency of the approach.

  5. Methods for estimating the magnitude and frequency of floods for urban and small, rural streams in Georgia, South Carolina, and North Carolina, 2011.

    DOT National Transportation Integrated Search

    2014-03-01

    The central purpose of this report is to present methods : for estimating the magnitude and frequency of floods on : urban and small, rural streams in the Southeast United States : with particular focus on Georgia, South Carolina, and North : Carolin...

  6. Combining optimization methods with response spectra curve-fitting toward improved damping ratio estimation

    NASA Astrophysics Data System (ADS)

    Brewick, Patrick T.; Smyth, Andrew W.

    2016-12-01

    The authors have previously shown that many traditional approaches to operational modal analysis (OMA) struggle to properly identify the modal damping ratios for bridges under traffic loading due to the interference caused by the driving frequencies of the traffic loads. This paper presents a novel methodology for modal parameter estimation in OMA that overcomes the problems presented by driving frequencies and significantly improves the damping estimates. This methodology is based on finding the power spectral density (PSD) of a given modal coordinate, and then dividing the modal PSD into separate regions, left- and right-side spectra. The modal coordinates were found using a blind source separation (BSS) algorithm and a curve-fitting technique was developed that uses optimization to find the modal parameters that best fit each side spectra of the PSD. Specifically, a pattern-search optimization method was combined with a clustering analysis algorithm and together they were employed in a series of stages in order to improve the estimates of the modal damping ratios. This method was used to estimate the damping ratios from a simulated bridge model subjected to moving traffic loads. The results of this method were compared to other established OMA methods, such as Frequency Domain Decomposition (FDD) and BSS methods, and they were found to be more accurate and more reliable, even for modes that had their PSDs distorted or altered by driving frequencies.

  7. Spectrum response estimation for deep-water floating platforms via retardation function representation

    NASA Astrophysics Data System (ADS)

    Liu, Fushun; Liu, Chengcheng; Chen, Jiefeng; Wang, Bin

    2017-08-01

    The key concept of spectrum response estimation with commercial software, such as the SESAM software tool, typically includes two main steps: finding a suitable loading spectrum and computing the response amplitude operators (RAOs) subjected to a frequency-specified wave component. In this paper, we propose a nontraditional spectrum response estimation method that uses a numerical representation of the retardation functions. Based on estimated added mass and damping matrices of the structure, we decompose and replace the convolution terms with a series of poles and corresponding residues in the Laplace domain. Then, we estimate the power density corresponding to each frequency component using the improved periodogram method. The advantage of this approach is that the frequency-dependent motion equations in the time domain can be transformed into the Laplace domain without requiring Laplace-domain expressions for the added mass and damping. To validate the proposed method, we use a numerical semi-submerged pontoon from the SESAM. The numerical results show that the responses of the proposed method match well with those obtained from the traditional method. Furthermore, the estimated spectrum also matches well, which indicates its potential application to deep-water floating structures.

  8. Reinforcing flood-risk estimation.

    PubMed

    Reed, Duncan W

    2002-07-15

    Flood-frequency estimation is inherently uncertain. The practitioner applies a combination of gauged data, scientific method and hydrological judgement to derive a flood-frequency curve for a particular site. The resulting estimate can be thought fully satisfactory only if it is broadly consistent with all that is reliably known about the flood-frequency behaviour of the river. The paper takes as its main theme the search for information to strengthen a flood-risk estimate made from peak flows alone. Extra information comes in many forms, including documentary and monumental records of historical floods, and palaeological markers. Meteorological information is also useful, although rainfall rarity is difficult to assess objectively and can be a notoriously unreliable indicator of flood rarity. On highly permeable catchments, groundwater levels present additional data. Other types of information are relevant to judging hydrological similarity when the flood-frequency estimate derives from data pooled across several catchments. After highlighting information sources, the paper explores a second theme: that of consistency in flood-risk estimates. Following publication of the Flood estimation handbook, studies of flood risk are now using digital catchment data. Automated calculation methods allow estimates by standard methods to be mapped basin-wide, revealing anomalies at special sites such as river confluences. Such mapping presents collateral information of a new character. Can this be used to achieve flood-risk estimates that are coherent throughout a river basin?

  9. Method of remotely estimating a rest or best lock frequency of a local station receiver using telemetry

    NASA Technical Reports Server (NTRS)

    Fielhauer, Karl B. (Inventor); Jensen, James R. (Inventor)

    2007-01-01

    A system includes a remote station and a local station having a receiver. The receiver operates in an unlocked state corresponding to its best lock frequency (BLF). The local station derives data indicative of a ratio of the BLF to a reference frequency of the receiver, and telemeters the data to the remote station. The remote station estimates the BLF based on (i) the telemetered data, and (ii) a predetermined estimate of the reference frequency.

  10. Regularization of Instantaneous Frequency Attribute Computations

    NASA Astrophysics Data System (ADS)

    Yedlin, M. J.; Margrave, G. F.; Van Vorst, D. G.; Ben Horin, Y.

    2014-12-01

    We compare two different methods of computing a temporally local frequency: (1) a stabilized instantaneous frequency using the theory of the analytic signal; (2) a temporally variant centroid (or dominant) frequency estimated from a time-frequency decomposition. The first method derives from Taner et al. (1979), as modified by Fomel (2007), and utilizes the derivative of the instantaneous phase of the analytic signal. The second method computes the power centroid (Cohen, 1995) of the time-frequency spectrum, obtained using either the Gabor or the Stockwell transform. Common to both methods is the necessity of division by a diagonal matrix, which requires appropriate regularization. We modify Fomel's (2007) method by explicitly penalizing the roughness of the estimate. Following Farquharson and Oldenburg (2004), we employ both the L-curve and GCV methods to obtain the smoothest model that fits the data in the L2 norm. Using synthetic data, quarry blasts, earthquakes and the DPRK tests, our results suggest that the optimal method depends on the data. One of the main applications of this work is the discrimination between blast events and earthquakes.

    References: Fomel, Sergey. "Local seismic attributes." Geophysics 72.3 (2007): A29-A33. Cohen, Leon. "Time-Frequency Analysis: Theory and Applications." USA: Prentice Hall (1995). Farquharson, Colin G., and Douglas W. Oldenburg. "A comparison of automatic techniques for estimating the regularization parameter in non-linear inverse problems." Geophysical Journal International 156.3 (2004): 411-425. Taner, M. Turhan, Fulton Koehler, and R. E. Sheriff. "Complex seismic trace analysis." Geophysics 44.6 (1979): 1041-1063.
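    Method (1), the derivative of the instantaneous phase of the analytic signal, can be sketched as follows: a plain finite-difference version without the stabilization and roughness regularization the abstract discusses. The analytic signal is built with an FFT-based Hilbert transform:

```python
import numpy as np

def instantaneous_frequency(x, fs):
    """Instantaneous frequency (Hz) from the phase derivative of the
    analytic signal (in the spirit of Taner et al., 1979), with the
    analytic signal formed by zeroing negative FFT frequencies."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    z = np.fft.ifft(X * h)                 # analytic signal
    phase = np.unwrap(np.angle(z))
    # finite-difference phase derivative, converted to Hz
    return np.diff(phase) * fs / (2.0 * np.pi)
```

    The un-regularized phase derivative is notoriously noisy wherever the envelope is small, which is precisely where the division by a near-singular diagonal matrix, and hence the regularization discussed above, becomes necessary.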

  11. Confidence intervals for population allele frequencies: the general case of sampling from a finite diploid population of any size.

    PubMed

    Fung, Tak; Keenan, Kevin

    2014-01-01

    The estimation of population allele frequencies using sample data forms a central component of studies in population genetics. These estimates can be used to test hypotheses on the evolutionary processes governing changes in genetic variation among populations. However, existing studies frequently do not account for sampling uncertainty in these estimates, thus compromising their utility. Incorporation of this uncertainty has been hindered by the lack of a method for constructing confidence intervals containing the population allele frequencies, for the general case of sampling from a finite diploid population of any size. In this study, we address this important knowledge gap by presenting a rigorous mathematical method to construct such confidence intervals. For a range of scenarios, the method is used to demonstrate that for a particular allele, in order to obtain accurate estimates within 0.05 of the population allele frequency with high probability (≥ 95%), a sample size of > 30 is often required. This analysis is augmented by an application of the method to empirical sample allele frequency data for two populations of the checkerspot butterfly (Melitaea cinxia L.), occupying meadows in Finland. For each population, the method is used to derive ≥ 98.3% confidence intervals for the population frequencies of three alleles. These intervals are then used to construct two joint ≥ 95% confidence regions, one for the set of three frequencies for each population. These regions are then used to derive a ≥ 95% confidence interval for Jost's D, a measure of genetic differentiation between the two populations. Overall, the results demonstrate the practical utility of the method with respect to informing sampling design and accounting for sampling uncertainty in studies of population genetics, important for scientific hypothesis-testing and also for risk-based natural resource management.
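    For intuition about the "> 30 samples" finding, here is the classical Wilson score interval for an allele frequency under simple binomial sampling from an effectively infinite population, a much simpler setting than the finite diploid populations the paper's method covers:

```python
import math

def wilson_interval(count, n, z=1.96):
    """Wilson score confidence interval (default ~95%) for a proportion,
    e.g. an allele frequency, assuming binomial sampling from a large
    (effectively infinite) population -- a simpler approximation than
    the finite-population method described in the abstract."""
    p = count / n
    denom = 1.0 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half
```

    Even in this idealized setting, at n = 30 the 95% interval around an observed frequency of 0.5 is roughly ±0.17 wide, far wider than ±0.05, which is consistent with the abstract's conclusion that samples larger than 30 are often needed.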

  12. An estimator for the standard deviation of a natural frequency. II.

    NASA Technical Reports Server (NTRS)

    Schiff, A. J.; Bogdanoff, J. L.

    1971-01-01

    A method has been presented for estimating the variability of a system's natural frequencies arising from the variability of the system's parameters. The only information required to obtain the estimates is the member variability, in the form of second-order properties, and the natural frequencies and mode shapes of the mean system. For the systems studied, it has also been established by means of Monte Carlo estimates that the specification of second-order properties is an adequate description of member variability.
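    The flavor of such an estimator can be conveyed with a single-degree-of-freedom example: for ω = sqrt(k/m), a first-order estimate based on second-order properties gives σ_ω ≈ σ_k · ω̄ / (2 k̄), which Monte Carlo sampling confirms. A hedged sketch, not the paper's multi-degree-of-freedom formulation:

```python
import math
import random

def natural_freq_std_mc(k_mean, k_std, m, trials=20000, seed=1):
    """Monte Carlo estimate of the std of omega = sqrt(k / m) when the
    stiffness k is Gaussian with mean k_mean and std k_std."""
    rng = random.Random(seed)
    ws = [math.sqrt(max(rng.gauss(k_mean, k_std), 1e-12) / m)
          for _ in range(trials)]
    mean = sum(ws) / trials
    var = sum((w - mean) ** 2 for w in ws) / (trials - 1)
    return math.sqrt(var)

def natural_freq_std_perturb(k_mean, k_std, m):
    """First-order (second-order-properties) estimate:
    sigma_omega = sigma_k * d(omega)/dk = sigma_k * omega_mean / (2 k_mean)."""
    return k_std * math.sqrt(k_mean / m) / (2.0 * k_mean)
```

    For small coefficients of variation the two estimates agree closely, illustrating why second-order properties of the members suffice to characterize the frequency variability.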

  13. Chirplet Wigner-Ville distribution for time-frequency representation and its application

    NASA Astrophysics Data System (ADS)

    Chen, G.; Chen, J.; Dong, G. M.

    2013-12-01

    This paper presents a Chirplet Wigner-Ville distribution (CWVD) that is free of the cross-terms that usually occur in the Wigner-Ville distribution (WVD). By transforming the signal with frequency rotating operators, several mono-frequency signals without intermittency are obtained, and the WVD applied to the rotated signals is cross-term free; frequency shift operators corresponding to the rotating operators are then utilized to relocate the signal's instantaneous frequencies (IFs). The operators' parameters come from the estimation of the IFs, which are approximated with polynomial or spline functions. Moreover, through an error analysis, the main factors governing the performance of the novel method have been identified, and an effective signal extension method based on the IF estimates has been developed to improve the energy concentration of the WVD. The excellent performance of the novel method was demonstrated by applying it to estimate the IFs of several numerical signals and of the echolocation signal emitted by the large brown bat.

  14. Application of a tri-axial accelerometer to estimate jump frequency in volleyball.

    PubMed

    Jarning, Jon M; Mok, Kam-Ming; Hansen, Bjørge H; Bahr, Roald

    2015-03-01

    Patellar tendinopathy is prevalent among athletes, and most likely associated with a high jumping load. If methods for estimating jump frequency were available, this could potentially assist in understanding and preventing this condition. The objective of this study was to explore the possibility of using peak vertical acceleration (PVA) or peak resultant acceleration (PRA) measured by an accelerometer to estimate jump frequency. Twelve male elite volleyball players (22.5 ± 1.6 yrs) performed a training protocol consisting of seven typical motion patterns, including jumping and non-jumping movements. Accelerometer data from the trial were obtained using a tri-axial accelerometer. In addition, we collected video data from the trial. Jump-float serving and spike jumping could not be distinguished from non-jumping movements using differences in PVA or PRA. Furthermore, there were substantial inter-participant differences in both the PVA and the PRA within and across movement types (p < 0.05). These findings suggest that neither PVA nor PRA measured by a tri-axial accelerometer is an applicable method for estimating jump frequency in volleyball. A method for acquiring real-time estimates of jump frequency remains to be verified. However, there are several alternative approaches, and further investigations are needed.

  15. Principal axes estimation using the vibration modes of physics-based deformable models.

    PubMed

    Krinidis, Stelios; Chatzis, Vassilios

    2008-06-01

    This paper addresses the issue of accurate, effective, computationally efficient, fast, and fully automated 2-D object orientation and scaling factor estimation. The object orientation is calculated using object principal axes estimation. The approach relies on the object's frequency-based features. The frequency-based features used by the proposed technique are extracted by a 2-D physics-based deformable model that parameterizes the object's shape. The method was evaluated on synthetic and real images. The experimental results demonstrate the accuracy of the method in both the orientation and scaling estimations.

  16. Radar attenuation tomography using the centroid frequency downshift method

    USGS Publications Warehouse

    Liu, L.; Lane, J.W.; Quan, Y.

    1998-01-01

    A method for tomographically estimating electromagnetic (EM) wave attenuation based on analysis of centroid frequency downshift (CFDS) of impulse radar signals is described and applied to cross-hole radar data. The method is based on a constant-Q model, which assumes a linear frequency dependence of attenuation for EM wave propagation above the transition frequency. The method uses the CFDS to construct the projection function. In comparison with other methods for estimating attenuation, the CFDS method is relatively insensitive to the effects of geometric spreading, instrument response, and antenna coupling and radiation pattern, but requires the data to be broadband so that the frequency shift and variance can be easily measured. The method is well-suited for difference tomography experiments using electrically conductive tracers. The CFDS method was tested using cross-hole radar data collected at the U.S. Geological Survey Fractured Rock Research Site at Mirror Lake, New Hampshire (NH) during a saline-tracer injection experiment. The attenuation-difference tomogram created with the CFDS method outlines the spatial distribution of saline tracer within the tomography plane. © 1998 Elsevier Science B.V. All rights reserved.
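    The centroid frequency at the heart of the CFDS method is simply the power-weighted mean of the spectrum; under a constant-Q model with a Gaussian source spectrum (the assumption used in Quan and Harris's frequency-shift method, which this approach resembles), the path attenuation integral t* is proportional to the downshift. A sketch under those stated assumptions:

```python
import math

def centroid_frequency(freqs, power):
    """Power-spectrum centroid: f_c = sum(f * P(f)) / sum(P(f))."""
    return sum(f * p for f, p in zip(freqs, power)) / sum(power)

def attenuation_t_star(fc_source, var_source, fc_received):
    """For a Gaussian source amplitude spectrum with centroid fc_source
    and variance var_source, the constant-Q model gives the attenuation
    path integral t* = (fc_source - fc_received) / (pi * var_source)."""
    return (fc_source - fc_received) / (math.pi * var_source)
```

    In a tomographic setting, each transmitter-receiver pair supplies one measured downshift, hence one projection value of t*, and the attenuation field is reconstructed by inverting the resulting system of ray integrals.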

  17. A new adaptive algorithm for automated feature extraction in exponentially damped signals for health monitoring of smart structures

    NASA Astrophysics Data System (ADS)

    Qarib, Hossein; Adeli, Hojjat

    2015-12-01

    In this paper, the authors introduce a new adaptive signal processing technique for feature extraction and parameter estimation in noisy exponentially damped signals. The iterative 3-stage method is based on the adroit integration of the strengths of parametric and nonparametric methods such as the multiple signal classification, matrix pencil, and empirical mode decomposition algorithms. The first stage is a new adaptive filtration or noise removal scheme. The second stage is a hybrid parametric-nonparametric signal parameter estimation technique based on an output-only system identification technique. The third stage is optimization of the estimated parameters using a combination of the primal-dual path-following interior point algorithm and a genetic algorithm. The methodology is evaluated using a synthetic signal and a signal obtained experimentally from transverse vibrations of a steel cantilever beam. The method estimates both the frequencies and the damping exponents accurately. The proposed adaptive filtration method does not involve any frequency-domain manipulation; consequently, the time-domain signal is not distorted by forward and inverse frequency-domain transformations.

  18. Multiple-Parameter Estimation Method Based on Spatio-Temporal 2-D Processing for Bistatic MIMO Radar

    PubMed Central

    Yang, Shouguo; Li, Yong; Zhang, Kunhui; Tang, Weiping

    2015-01-01

    A novel spatio-temporal 2-dimensional (2-D) processing method that can jointly estimate the transmitting-receiving azimuth and Doppler frequency for bistatic multiple-input multiple-output (MIMO) radar in the presence of spatial colored noise and an unknown number of targets is proposed. In the temporal domain, the cross-correlation of the matched filters’ outputs for different time-delay sampling is used to eliminate the spatial colored noise. In the spatial domain, the proposed method uses a diagonal loading method and subspace theory to estimate the direction of departure (DOD) and direction of arrival (DOA), and the Doppler frequency can then be accurately estimated through the estimation of the DOD and DOA. By skipping target number estimation and the eigenvalue decomposition (EVD) of the data covariance matrix estimation and only requiring a one-dimensional search, the proposed method achieves low computational complexity. Furthermore, the proposed method is suitable for bistatic MIMO radar with an arbitrary transmitted and received geometrical configuration. The correctness and efficiency of the proposed method are verified by computer simulation results. PMID:26694385

  20. Method and system for efficient video compression with low-complexity encoder

    NASA Technical Reports Server (NTRS)

    Chen, Jun (Inventor); He, Dake (Inventor); Sheinin, Vadim (Inventor); Jagmohan, Ashish (Inventor); Lu, Ligang (Inventor)

    2012-01-01

    Disclosed are a method and system for video compression, wherein the video encoder has low computational complexity and high compression efficiency. The disclosed system comprises a video encoder and a video decoder, wherein the method for encoding includes the steps of converting a source frame into a space-frequency representation; estimating conditional statistics of at least one vector of space-frequency coefficients; estimating encoding rates based on the said conditional statistics; and applying Slepian-Wolf codes with the said computed encoding rates. The preferred method for decoding includes the steps of: generating a side-information vector of frequency coefficients based on previously decoded source data, encoder statistics, and previous reconstructions of the source frequency vector; and performing Slepian-Wolf decoding of at least one source frequency vector based on the generated side-information, the Slepian-Wolf code bits and the encoder statistics.

  1. What to Do about Zero Frequency Cells when Estimating Polychoric Correlations

    ERIC Educational Resources Information Center

    Savalei, Victoria

    2011-01-01

    Categorical structural equation modeling (SEM) methods that fit the model to estimated polychoric correlations have become popular in the social sciences. When population thresholds are high in absolute value, contingency tables in small samples are likely to contain zero frequency cells. Such cells make the estimation of the polychoric…

  2. A new lithium-ion battery internal temperature on-line estimate method based on electrochemical impedance spectroscopy measurement

    NASA Astrophysics Data System (ADS)

    Zhu, J. G.; Sun, Z. C.; Wei, X. Z.; Dai, H. F.

    2015-01-01

    The power battery thermal management problem in EVs (electric vehicles) and HEVs (hybrid electric vehicles) has been widely discussed, and EIS (electrochemical impedance spectroscopy) is an effective experimental method for testing and estimating the status of a battery. First, an electrochemical-based impedance matrix analysis for lithium-ion batteries is developed to describe the impedance response measured by electrochemical impedance spectroscopy. A method based on EIS measurement is then proposed to estimate the internal temperature of a power lithium-ion battery by analyzing the phase shift and magnitude of the impedance at different ambient temperatures. In the EIS experimental study, SoC (state of charge) and temperature affect the impedance characteristics of the battery differently over various frequency ranges; the effect of SoH (state of health) on the impedance spectrum is also discussed preliminarily. The excitation frequency selected for estimating the inner temperature therefore lies in the range that is significantly influenced by temperature but not by SoC or SoH. The intrinsic relationship between phase shift and temperature is established at the chosen excitation frequency, and the temperature dependence of the impedance magnitude is also studied. In practical applications, the inner temperature can then be estimated from the measured phase shift and impedance magnitude. Verification experiments are conducted to validate the estimation method. Finally, an estimation strategy and an on-line estimation system implementation scheme utilizing the battery management system are presented to demonstrate the engineering value.
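    The final estimation step, reading the inner temperature off a phase-temperature calibration at the chosen excitation frequency, can be sketched as follows (the calibration numbers below are invented for illustration only; a real table must be measured for the specific cell at its selected excitation frequency):

```python
import numpy as np

# Hypothetical calibration: impedance phase (degrees) measured at one
# fixed excitation frequency for several known cell temperatures.
cal_temp_c = np.array([-20.0, -10.0, 0.0, 10.0, 25.0, 40.0])
cal_phase_deg = np.array([-35.0, -28.0, -22.0, -17.0, -11.0, -7.0])

def estimate_internal_temperature(phase_deg):
    """Invert the phase-temperature calibration by linear interpolation.
    np.interp requires ascending x values, which the phase table here
    satisfies because phase increases monotonically with temperature."""
    return float(np.interp(phase_deg, cal_phase_deg, cal_temp_c))
```

    In an on-line system, the battery management system would excite the cell at the chosen frequency, measure the phase of the response, and apply this lookup each cycle.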

  3. A time and frequency synchronization method for CO-OFDM based on CMA equalizers

    NASA Astrophysics Data System (ADS)

    Ren, Kaixuan; Li, Xiang; Huang, Tianye; Cheng, Zhuo; Chen, Bingwei; Wu, Xu; Fu, Songnian; Ping, Perry Shum

    2018-06-01

    In this paper, an efficient time and frequency synchronization method based on a new training symbol structure is proposed for polarization division multiplexing (PDM) coherent optical orthogonal frequency division multiplexing (CO-OFDM) systems. The coarse timing synchronization is achieved by exploiting the correlation property of the first training symbol, and the fine timing synchronization is accomplished by using the time-domain symmetric conjugate of the second training symbol. Furthermore, based on these training symbols, a constant modulus algorithm (CMA) is proposed for carrier frequency offset (CFO) estimation. Theoretical analysis and simulation results indicate that the algorithm is robust to poor optical signal-to-noise ratio (OSNR) and chromatic dispersion (CD). The frequency offset estimation range can reach [-Nsc/2·ΔfN, +Nsc/2·ΔfN] GHz with the mean normalized estimation error below 12 × 10^-3, even at an OSNR as low as 10 dB.

  4. Magnitude and Frequency of Floods on Nontidal Streams in Delaware

    USGS Publications Warehouse

    Ries, Kernell G.; Dillow, Jonathan J.A.

    2006-01-01

    Reliable estimates of the magnitude and frequency of annual peak flows are required for the economical and safe design of transportation and water-conveyance structures. This report, done in cooperation with the Delaware Department of Transportation (DelDOT) and the Delaware Geological Survey (DGS), presents methods for estimating the magnitude and frequency of floods on nontidal streams in Delaware at locations where streamgaging stations monitor streamflow continuously and at ungaged sites. Methods are presented for estimating the magnitude of floods for recurrence intervals ranging from 2 through 500 years. These methods are applicable to watersheds exhibiting a full range of urban development conditions. The report also describes StreamStats, a web application that makes it easy to obtain flood-frequency estimates for user-selected locations on Delaware streams. Flood-frequency estimates for ungaged sites are obtained through a process known as regionalization, using statistical regression analysis, where information determined for a group of streamgaging stations within a region forms the basis for estimates for ungaged sites within the region. One hundred and sixteen streamgaging stations in and near Delaware with at least 10 years of non-regulated annual peak-flow data available were used in the regional analysis. Estimates for gaged sites are obtained by combining the station peak-flow statistics (mean, standard deviation, and skew) and peak-flow estimates with regional estimates of skew and flood-frequency magnitudes. Example flood-frequency estimate calculations using the methods presented in the report are given for: (1) ungaged sites, (2) gaged locations, (3) sites upstream or downstream from a gaged location, and (4) sites between gaged locations. Regional regression equations applicable to ungaged sites in the Piedmont and Coastal Plain Physiographic Provinces of Delaware are presented.
The equations incorporate drainage area, forest cover, impervious area, basin storage, housing density, soil type A, and mean basin slope as explanatory variables, and have average standard errors of prediction ranging from 28 to 72 percent. Additional regression equations that incorporate drainage area and housing density as explanatory variables are presented for use in defining the effects of urbanization on peak-flow estimates throughout Delaware for the 2-year through 500-year recurrence intervals, along with suggestions for their appropriate use in predicting development-affected peak flows. Additional topics associated with the analyses performed during the study are also discussed, including: (1) the availability and description of more than 30 basin and climatic characteristics considered during the development of the regional regression equations; (2) the treatment of increasing trends in the annual peak-flow series identified at 18 gaged sites, with respect to their relations with maximum 24-hour precipitation and housing density, and their use in the regional analysis; (3) calculation of the 90-percent confidence interval associated with peak-flow estimates from the regional regression equations; and (4) a comparison of flood-frequency estimates at gages used in a previous study, highlighting the effects of various improved analytical techniques.

  5. Subsurface attenuation estimation using a novel hybrid method based on FWE function and power spectrum

    NASA Astrophysics Data System (ADS)

    Li, Jingnan; Wang, Shangxu; Yang, Dengfeng; Tang, Genyang; Chen, Yangkang

    2018-02-01

    Seismic waves propagating in the subsurface suffer from attenuation, which can be represented by the quality factor Q. Knowledge of Q plays a vital role in hydrocarbon exploration. Many methods to measure Q have been proposed, among which the central frequency shift (CFS) and the peak frequency shift (PFS) are commonly used. However, both methods assume a particular shape for the amplitude spectrum, which causes systematic error in Q estimation. Recently, a new method to estimate Q has been proposed to overcome this disadvantage by using a frequency weighted exponential (FWE) function to fit amplitude spectra of different shapes. In the FWE method, a key procedure is to calculate the central frequency and variance of the amplitude spectrum. However, the amplitude spectrum is susceptible to noise, whereas the power spectrum is less sensitive to random noise and has better anti-noise performance. To enhance the robustness of the FWE method, we propose a novel hybrid method, called the improved FWE method (IFWE), that combines the advantages of the FWE method and the power spectrum. The basic idea is to consider the attenuation of the power spectrum instead of the amplitude spectrum and to use a modified FWE function to fit power spectra, from which we derive a new Q estimation formula. Tests on noisy synthetic data show that the IFWE is more robust than the FWE. Moreover, the frequency bandwidth selection in the IFWE can be more flexible than that in the FWE. The application to field vertical seismic profile data and surface seismic data further demonstrates its validity.

  6. Attenuation analysis of real GPR wavelets: The equivalent amplitude spectrum (EAS)

    NASA Astrophysics Data System (ADS)

    Economou, Nikos; Kritikakis, George

    2016-03-01

    Absorption of a Ground Penetrating Radar (GPR) pulse is a frequency-dependent attenuation mechanism which causes a spectral shift in the dominant frequency of GPR data. Both the energy variation of the GPR amplitude spectrum and the spectral shift have been used for the estimation of the quality factor (Q*) and subsequently the characterization of subsurface material properties. The variation of the amplitude spectrum energy has been studied with the Spectral Ratio (SR) method, and the frequency shift with the Frequency Centroid Shift (FCS) or the Frequency Peak Shift (FPS) methods. The FPS method is more automatic but less robust. This work aims to increase the robustness of the FPS method by fitting a part of the amplitude spectrum of GPR data with Ricker, Gaussian, Sigmoid-Gaussian or Ricker-Gaussian functions. These functions fit different parts of the spectrum of a GPR reference wavelet, and the Equivalent Amplitude Spectrum (EAS) is selected as the one reproducing the Q* values used in forward Q* modeling analysis. Then, only the peak frequencies and the time differences between the reference wavelet and the subsequent reflected wavelets are used to estimate Q*. Once the EAS is estimated, it is used for Q* evaluation over the whole GPR section, under the assumption that the selected reference wavelet is representative. De-phasing and constant phase shift, applied to obtain symmetrical wavelets, proved useful for reliable horizon picking. Synthetic, experimental and real GPR data were examined in order to demonstrate the effectiveness of the proposed methodology.

  7. Parameter estimation in Probabilistic Seismic Hazard Analysis: current problems and some solutions

    NASA Astrophysics Data System (ADS)

    Vermeulen, Petrus

    2017-04-01

    A typical Probabilistic Seismic Hazard Analysis (PSHA) comprises identification of seismic source zones, determination of hazard parameters for these zones, selection of an appropriate ground motion prediction equation (GMPE), and integration over probabilities according to the Cornell-McGuire procedure. Determination of hazard parameters often does not receive the attention it deserves, and, therefore, problems therein are often overlooked. Here, many of these problems are identified, and some of them addressed. The parameters that need to be identified are those associated with the frequency-magnitude law, those associated with the earthquake recurrence law in time, and the parameters controlling the GMPE. This study is concerned with the frequency-magnitude law and the temporal distribution of earthquakes, and not with GMPEs. The Gutenberg-Richter frequency-magnitude law is usually adopted for the frequency-magnitude law, and a Poisson process for earthquake recurrence in time. Accordingly, the parameters that need to be determined are the slope parameter of the Gutenberg-Richter frequency-magnitude law, i.e. the b-value, the maximum magnitude mmax at which the Gutenberg-Richter law applies, and the mean recurrence frequency, λ, of earthquakes. If, instead of the Cornell-McGuire, the "Parametric-Historic procedure" is used, these parameters do not have to be known before the PSHA computations; they are estimated directly during the PSHA computation. The resulting relation for the frequency of ground motion vibration parameters has a functional form analogous to the frequency-magnitude law, described by the parameter γ (analogous to the b-value of the Gutenberg-Richter law) and the maximum possible ground motion amax (analogous to mmax). Originally, the approach could be applied only to simple GMPEs; recently, however, the method was extended to incorporate more complex forms of GMPEs.
    With regard to the parameter mmax, there are numerous methods of estimation, none of which is accepted as the standard one, and there is much controversy surrounding this parameter. In practice, when estimating the above-mentioned parameters from a seismic catalogue, the magnitude, mmin, above which the catalogue is complete becomes important. Thus, the parameter mmin is also considered a parameter to be estimated in practice. Several methods are discussed in the literature, and no specific method is preferred. Methods usually aim at identifying the point where a frequency-magnitude plot starts to deviate from linearity due to data loss. Parameter estimation is clearly a rich field that deserves much attention and, possibly, standardization of methods. These methods should be sound and efficient, and a query into which methods are to be used - and for that matter which ones are not to be used - is in order.
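    For the b-value specifically, the classical maximum-likelihood estimator (Aki's formula with Utsu's correction for binned magnitudes, a standard choice rather than anything prescribed by this abstract) is essentially a one-liner once mmin is fixed:

```python
import numpy as np

def aki_utsu_b_value(mags, m_min, dm=0.1):
    """Maximum-likelihood b-value of the Gutenberg-Richter law.
    Uses only events at or above the completeness magnitude m_min;
    dm/2 is Utsu's correction for magnitudes binned to the nearest dm
    (pass dm=0.0 for continuous magnitudes)."""
    m = np.asarray(mags, dtype=float)
    m = m[m >= m_min]
    return np.log10(np.e) / (m.mean() - (m_min - dm / 2.0))
```

    The sensitivity of this estimate to the chosen mmin is exactly why completeness estimation, discussed above, matters so much in practice.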

  8. Receiver IQ mismatch estimation in PDM CO-OFDM system using training symbol

    NASA Astrophysics Data System (ADS)

    Peng, Dandan; Ma, Xiurong; Yao, Xin; Zhang, Haoyuan

    2017-07-01

    Receiver in-phase/quadrature (IQ) mismatch is hard to mitigate at the receiver using conventional methods in polarization division multiplexed (PDM) coherent optical orthogonal frequency division multiplexing (CO-OFDM) systems. In this paper, a novel training symbol structure is proposed to estimate IQ mismatch and channel distortion. Combining this structure with the Gram-Schmidt orthogonalization procedure (GSOP) algorithm yields a lower bit error rate (BER). Meanwhile, based on this structure, an estimation method is derived in the frequency domain that can estimate IQ mismatch and channel distortion independently and noticeably improve system performance. Numerical simulation shows that the two proposed methods outperform the compared method at 100 Gb/s after 480 km of fiber transmission. The calculation complexity is also analyzed.
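    The GSOP step mentioned above can be sketched as follows: the Q rail is stripped of its component correlated with the I rail, and both rails are power-normalized (a minimal real-valued illustration; variable names and the mismatch model in the usage note are ours):

```python
import numpy as np

def gsop(i_sig, q_sig):
    """Gram-Schmidt orthogonalization of received I/Q rails to
    compensate receiver IQ mismatch: remove the I leakage from Q,
    then normalize both rails to unit power."""
    i_o = i_sig / np.sqrt(np.mean(i_sig ** 2))   # unit-power I rail
    rho = np.mean(i_o * q_sig)                   # residual I/Q correlation
    q_p = q_sig - rho * i_o                      # subtract I leakage
    q_o = q_p / np.sqrt(np.mean(q_p ** 2))       # unit-power Q rail
    return i_o, q_o
```

    After this step the two rails are uncorrelated and equal in power, which is what the subsequent channel estimation assumes.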

  9. On Short-Time Estimation of Vocal Tract Length from Formant Frequencies

    PubMed Central

    Lammert, Adam C.; Narayanan, Shrikanth S.

    2015-01-01

    Vocal tract length is highly variable across speakers and determines many aspects of the acoustic speech signal, making it an essential parameter to consider for explaining behavioral variability. A method for accurate estimation of vocal tract length from formant frequencies would afford normalization of interspeaker variability and facilitate acoustic comparisons across speakers. A framework for considering estimation methods is developed from the basic principles of vocal tract acoustics, and an estimation method is proposed that follows naturally from this framework. The proposed method is evaluated using acoustic characteristics of simulated vocal tracts ranging from 14 to 19 cm in length, as well as real-time magnetic resonance imaging data with synchronous audio from five speakers whose vocal tracts range from 14.5 to 18.0 cm in length. Evaluations show improvements in accuracy over previously proposed methods, with 0.631 and 1.277 cm root mean square error on simulated and human speech data, respectively. Empirical results show that the effectiveness of the proposed method is based on emphasizing higher formant frequencies, which seem less affected by speech articulation. Theoretical predictions of formant sensitivity reinforce this empirical finding. Moreover, theoretical insights are explained regarding the reason for differences in formant sensitivity. PMID:26177102
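    A minimal version of such an estimator, using the uniform closed-open tube relation F_n = (2n-1)c/(4L) and (echoing the paper's finding) weighting higher formants more heavily, might look like this (the weighting scheme and speed-of-sound value are our assumptions, not the paper's exact choices):

```python
import numpy as np

C = 35000.0  # assumed speed of sound in warm, moist air, cm/s

def vtl_from_formants(formants_hz, weights=None):
    """Estimate vocal tract length (cm) from measured formants using the
    uniform closed-open tube relation F_n = (2n-1) * c / (4 * L).
    Each formant gives its own length estimate; by default higher
    formants get larger weights, since they are less affected by
    articulation (an assumed linear weighting, for illustration)."""
    f = np.asarray(formants_hz, dtype=float)
    n = np.arange(1, len(f) + 1)
    per_formant_len = (2 * n - 1) * C / (4.0 * f)
    if weights is None:
        weights = n.astype(float)
    return float(np.average(per_formant_len, weights=weights))
```

    For a perfectly uniform 17 cm tract this recovers the length exactly; for real speech the weighting determines how much articulation-induced formant movement leaks into the estimate.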

  10. Dynamic modulus estimation and structural vibration analysis.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gupta, A.

    1998-11-18

    Often the dynamic elastic modulus of a material with frequency dependent properties is difficult to estimate. These uncertainties are compounded in any structural vibration analysis using the material properties. Here, different experimental techniques are used to estimate the properties of a particular elastomeric material over a broad frequency range. Once the properties are determined, various structures incorporating the elastomer are analyzed by an interactive finite element method to determine natural frequencies and mode shapes. Then, the finite element results are correlated with results obtained by experimental modal analysis.
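    The modal analysis step, extracting natural frequencies and mode shapes once mass and stiffness matrices have been assembled with the estimated modulus, reduces to a generalized eigenproblem (a sketch for a small undamped model; a frequency-dependent elastomer modulus would in practice make this an iterative solve, as the abstract implies):

```python
import numpy as np

def natural_frequencies(K, M):
    """Undamped natural frequencies (Hz) and mode shapes from the
    generalized eigenproblem K*phi = w^2 * M*phi, reduced here via
    solve(M, K) for brevity (fine for small, well-conditioned models)."""
    vals, vecs = np.linalg.eig(np.linalg.solve(M, K))
    order = np.argsort(vals.real)
    w2 = vals.real[order]                 # eigenvalues are w^2
    return np.sqrt(w2) / (2.0 * np.pi), vecs[:, order]
```

    Correlating these computed frequencies and shapes with experimental modal analysis results is the validation step the abstract describes.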

  11. Investigation of spectral analysis techniques for randomly sampled velocimetry data

    NASA Technical Reports Server (NTRS)

    Sree, Dave

    1993-01-01

    It is well known that laser velocimetry (LV) generates individual-realization velocity data that are randomly or unevenly sampled in time. Spectral analysis of such data to obtain the turbulence spectra, and hence turbulence scales information, requires special techniques. The 'slotting' technique of Mayo et al., also described by Roberts and Ajmani, and the 'direct transform' method of Gaster and Roberts are well known in the LV community. The slotting technique is computationally faster than the direct transform method. There are practical limitations, however, as to how high in frequency an accurate estimate can be made for a given mean sampling rate. These high-frequency estimates are important in obtaining the microscale information of the turbulence structure. Previous studies found that reliable spectral estimates can be made up to about the mean sampling frequency (mean data rate) or less. If the data were evenly sampled, the frequency range would be half the sampling frequency (i.e., up to the Nyquist frequency); otherwise, aliasing problems would occur. The mean data rate and the sample size (total number of points) basically limit the frequency range. Also, large variabilities or errors are associated with the high-frequency estimates from randomly sampled signals. Roberts and Ajmani proposed certain pre-filtering techniques to reduce these variabilities, but at the cost of the low-frequency estimates; the prefiltering acts as a high-pass filter. Further, Shapiro and Silverman showed theoretically that, for Poisson-sampled signals, it is possible to obtain alias-free spectral estimates far beyond the mean sampling frequency. But the question is, how far?
    During his tenure under the 1993 NASA-ASEE Summer Faculty Fellowship Program, the author found from his studies of spectral analysis techniques for randomly sampled signals that the spectral estimates can be enhanced or improved up to about 4-5 times the mean sampling frequency by using a suitable prefiltering technique. This increased bandwidth, however, comes at the cost of the lower frequency estimates. The studies further showed that large data sets on the order of 100,000 points or more, high data rates, and Poisson sampling are crucial for obtaining reliable spectral estimates from randomly sampled data, such as LV data. Some of the results of the current study are presented.
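    The slotting technique referred to above can be sketched as follows: every product x_i·x_j is assigned to a lag bin ("slot") of fixed width, and the bin averages estimate the autocorrelation, whose transform gives the spectrum (a minimal version without the local-normalization and variance-reduction refinements used in practice):

```python
import numpy as np

def slotted_autocorrelation(t, x, max_lag, n_slots):
    """Mayo-style 'slotting' estimate of the autocorrelation of randomly
    sampled data: each product x_i * x_j is binned by its lag t_j - t_i
    into a slot of width max_lag / n_slots, and slot averages
    approximate the autocorrelation function."""
    x = x - np.mean(x)
    slot_width = max_lag / n_slots
    num = np.zeros(n_slots)
    cnt = np.zeros(n_slots)
    for i in range(len(t)):
        lags = t[i:] - t[i]                    # non-negative lags, incl. 0
        slots = (lags / slot_width).astype(int)
        keep = slots < n_slots
        np.add.at(num, slots[keep], x[i] * x[i:][keep])
        np.add.at(cnt, slots[keep], 1)
    r = num / np.maximum(cnt, 1)
    return r / r[0]                            # normalize so R(0) = 1
```

    For Poisson-sampled data the expected number of products per slot grows with the mean data rate and record length, which is why large data sets and high data rates are crucial for reliable estimates.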

  12. Tracking of electrochemical impedance of batteries

    NASA Astrophysics Data System (ADS)

    Piret, H.; Granjon, P.; Guillet, N.; Cattin, V.

    2016-04-01

    This paper presents an evolutionary battery impedance estimation method, which can be easily embedded in vehicles or nomad devices. The proposed method not only allows accurate frequency-domain impedance estimation, but also tracking of its temporal evolution, contrary to classical electrochemical impedance spectroscopy methods. Taking into account constraints of cost and complexity, we propose to use the existing current-control electronics to perform an evolutionary frequency-domain estimation of the electrochemical impedance. The developed method uses a simple wideband input signal and relies on a recursive local average of Fourier transforms. The averaging is controlled by a single parameter that manages a trade-off between tracking and estimation performance; this normalized parameter allows the behavior of the proposed estimator to be adapted correctly to the variations of the impedance. The advantage of the proposed method is twofold: it is easy to embed in a simple electronic circuit, and the battery impedance estimator is evolutionary. The ability of the method to monitor the impedance over time is demonstrated on a simulator and on a real lithium-ion battery, on which a repeatability study is carried out. The experiments reveal good tracking results and estimation performance as accurate as the usual laboratory approaches.
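    The recursive local average of Fourier transforms can be sketched as an exponentially weighted average of per-frame cross- and auto-spectra (our simplification of the idea, not the authors' exact estimator; alpha plays the role of the single parameter trading tracking speed against estimation variance):

```python
import numpy as np

def track_impedance(u_frames, i_frames, alpha=0.9):
    """Evolving impedance estimate from frames of voltage u and current i.
    Cross-spectrum S_ui and auto-spectrum S_ii are recursively averaged
    with forgetting factor alpha; the impedance is Z = S_ui / S_ii."""
    s_ui = 0.0
    s_ii = 0.0
    history = []
    for u_t, i_t in zip(u_frames, i_frames):
        U = np.fft.rfft(u_t)
        I = np.fft.rfft(i_t)
        s_ui = alpha * s_ui + (1 - alpha) * U * np.conj(I)
        s_ii = alpha * s_ii + (1 - alpha) * np.abs(I) ** 2
        history.append(s_ui / s_ii)
    return history  # one impedance spectrum per input frame
```

    A small alpha tracks fast impedance changes at the price of noisier estimates; a large alpha averages longer and converges to the laboratory-style result.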

  13. Estimation of modal parameters using bilinear joint time frequency distributions

    NASA Astrophysics Data System (ADS)

    Roshan-Ghias, A.; Shamsollahi, M. B.; Mobed, M.; Behzad, M.

    2007-07-01

    In this paper, a new method is proposed for modal parameter estimation using time-frequency representations. Smoothed Pseudo Wigner-Ville distribution which is a member of the Cohen's class distributions is used to decouple vibration modes completely in order to study each mode separately. This distribution reduces cross-terms which are troublesome in Wigner-Ville distribution and retains the resolution as well. The method was applied to highly damped systems, and results were superior to those obtained via other conventional methods.

  14. Gap Detection and Temporal Modulation Transfer Function as Behavioral Estimates of Auditory Temporal Acuity Using Band-Limited Stimuli in Young and Older Adults

    PubMed Central

    Shen, Yi

    2015-01-01

    Purpose Gap detection and the temporal modulation transfer function (TMTF) are 2 common methods to obtain behavioral estimates of auditory temporal acuity. However, the agreement between the 2 measures is not clear. This study compares results from these 2 methods and their dependencies on listener age and hearing status. Method Gap detection thresholds and the parameters that describe the TMTF (sensitivity and cutoff frequency) were estimated for young and older listeners who were naive to the experimental tasks. Stimuli were 800-Hz-wide noises with upper frequency limits of 2400 Hz, presented at 85 dB SPL. A 2-track procedure (Shen & Richards, 2013) was used for the efficient estimation of the TMTF. Results No significant correlation was found between gap detection threshold and the sensitivity or the cutoff frequency of the TMTF. No significant effect of age and hearing loss on either the gap detection threshold or the TMTF cutoff frequency was found, while the TMTF sensitivity improved with increasing hearing threshold and worsened with increasing age. Conclusion Estimates of temporal acuity using gap detection and TMTF paradigms do not seem to provide a consistent description of the effects of listener age and hearing status on temporal envelope processing. PMID:25087722

  15. Real-Time Parameter Estimation in the Frequency Domain

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    2000-01-01

    A method for real-time estimation of parameters in a linear dynamic state-space model was developed and studied. The application is aircraft dynamic model parameter estimation from measured data in flight. Equation error in the frequency domain was used with a recursive Fourier transform for the real-time data analysis. Linear and nonlinear simulation examples and flight test data from the F-18 High Alpha Research Vehicle were used to demonstrate that the technique produces accurate model parameter estimates with appropriate error bounds. Parameter estimates converged in less than one cycle of the dominant dynamic mode, using no a priori information, with control surface inputs measured in flight during ordinary piloted maneuvers. The real-time parameter estimation method has low computational requirements and could be implemented
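    The recursive Fourier transform that makes the frequency-domain equation-error method real-time capable is simply a running sum updated once per sample (an illustrative sketch, not the flight implementation; the analysis frequencies and time step are arbitrary):

```python
import numpy as np

class RecursiveFourier:
    """Running Fourier transform at a fixed set of analysis frequencies,
    updated one sample at a time. This per-sample recursion is what
    keeps the computational cost low enough for real-time use."""

    def __init__(self, freqs_hz, dt):
        self.jw = -2j * np.pi * np.asarray(freqs_hz, dtype=float)
        self.dt = dt
        self.t = 0.0
        self.X = np.zeros(len(freqs_hz), dtype=complex)

    def update(self, sample):
        # X_k <- X_k + x(t_i) * exp(-j*2*pi*f_k*t_i) * dt
        self.X += sample * np.exp(self.jw * self.t) * self.dt
        self.t += self.dt
        return self.X
```

    With the transforms of the measured states and controls maintained this way, the equation-error least-squares parameter estimate can be refreshed at every sample.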

  16. Flood risk assessment in France: comparison of extreme flood estimation methods (EXTRAFLO project, Task 7)

    NASA Astrophysics Data System (ADS)

    Garavaglia, F.; Paquet, E.; Lang, M.; Renard, B.; Arnaud, P.; Aubert, Y.; Carre, J.

    2013-12-01

    In flood risk assessment the methods can be divided into two families: deterministic methods and probabilistic methods. In the French hydrologic community the probabilistic methods are historically preferred to the deterministic ones. Presently a French research project named EXTRAFLO (RiskNat Program of the French National Research Agency, https://extraflo.cemagref.fr) deals with design values for extreme rainfall and floods. The object of this project is to carry out a comparison of the main methods used in France for estimating extreme values of rainfall and floods, to obtain a better grasp of their respective fields of application. In this framework we present the results of Task 7 of the EXTRAFLO project. Focusing on French watersheds, we compare the main extreme flood estimation methods used in the French context: (i) standard flood frequency analysis (Gumbel and GEV distributions), (ii) regional flood frequency analysis (regional Gumbel and GEV distributions), (iii) local and regional flood frequency analysis improved by historical information (Naulet et al., 2005), (iv) simplified probabilistic methods based on rainfall information (i.e. the Gradex method (CFGB, 1994), the Agregee method (Margoum, 1992) and the Speed method (Cayla, 1995)), (v) flood frequency analysis by a continuous simulation approach based on rainfall information (i.e. the Schadex method (Paquet et al., 2013; Garavaglia et al., 2010) and the Shyreg method (Lavabre et al., 2003)) and (vi) a multifractal approach. The main result of this comparative study is that probabilistic methods based on additional information (i.e. regional, historical and rainfall information) provide better estimations than the standard flood frequency analysis. Another interesting result is that the differences between the various extreme flood quantile estimations of the compared methods increase with return period, staying relatively moderate up to 100-year return levels.
Results and discussions are illustrated throughout with the example of five watersheds located in the South of France. References: O. CAYLA: Probability calculation of design floods and inflows - SPEED. Waterpower 1995, San Francisco, California, 1995. CFGB: Design flood determination by the gradex method. Bulletin du Comité Français des Grands Barrages News 96, 18th congress CIGB-ICOLD n2, nov:108, 1994. F. GARAVAGLIA et al.: Introducing a rainfall compound distribution model based on weather patterns subsampling. Hydrology and Earth System Sciences, 14, 951-964, 2010. J. LAVABRE et al.: SHYREG: une méthode pour l'estimation régionale des débits de crue, application aux régions méditerranéennes françaises. Ingénierie EAT, 97-111, 2003. M. MARGOUM: Estimation des crues rares et extrêmes: le modèle AGREGEE. Conceptions et premières validations. PhD, Ecole des Mines de Paris, 1992. R. NAULET et al.: Flood frequency analysis on the Ardèche river using French documentary sources from the two last centuries. Journal of Hydrology, 313:58-78, 2005. E. PAQUET et al.: The SCHADEX method: A semi-continuous rainfall-runoff simulation for extreme flood estimation. Journal of Hydrology, 495, 23-37, 2013.
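As a concrete illustration of the standard flood frequency analysis used as the baseline above (method (i)), the following is a minimal sketch of a method-of-moments Gumbel (EV1) fit and its return-period quantile. The sample values below are invented; in practice they would be a station's annual maximum discharges.

```python
import math

def gumbel_fit_moments(annual_maxima):
    """Fit a Gumbel (EV1) distribution by the method of moments."""
    n = len(annual_maxima)
    mean = sum(annual_maxima) / n
    var = sum((x - mean) ** 2 for x in annual_maxima) / (n - 1)
    beta = math.sqrt(6.0 * var) / math.pi       # scale parameter
    mu = mean - 0.5772156649 * beta             # location (Euler-Mascheroni)
    return mu, beta

def gumbel_quantile(mu, beta, T):
    """Flood quantile for return period T (years)."""
    p = 1.0 - 1.0 / T                           # non-exceedance probability
    return mu - beta * math.log(-math.log(p))

# Hypothetical annual maxima (m^3/s) and the 100-year design flood
mu, beta = gumbel_fit_moments([410, 520, 365, 480, 610, 455, 390, 540, 470, 505])
q100 = gumbel_quantile(mu, beta, 100)
```

More data-rich variants (regional, historical, rainfall-based) refine exactly this kind of quantile estimate, which is why the comparison above focuses on how the quantiles diverge at large return periods.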

  17. System identification of velocity mechanomyogram measured with a capacitor microphone for muscle stiffness estimation.

    PubMed

    Uchiyama, Takanori; Tomoshige, Taiki

    2017-04-01

A mechanomyogram (MMG) measured with a displacement sensor (displacement MMG) can provide a better estimation of longitudinal muscle stiffness than one measured with an acceleration sensor (acceleration MMG), but the displacement MMG cannot provide transverse muscle stiffness. We propose a method to estimate both longitudinal and transverse muscle stiffness from a velocity MMG using a system identification technique. The aim of this study is to demonstrate the advantages of the proposed method. The velocity MMG was measured using a capacitor microphone and a differential circuit, and the MMG of the tibialis anterior muscle, evoked by electrical stimulation, was measured five times in seven healthy young male volunteers. The evoked MMG system was identified using the singular value decomposition method and was approximated with a fourth-order model, which provides two undamped natural frequencies corresponding to the longitudinal and transverse muscle stiffness. The fluctuation of the undamped natural frequencies estimated from the velocity MMG was significantly smaller than that from the acceleration MMG. There was no significant difference between the fluctuation of the undamped natural frequencies estimated from the velocity MMG and that from the displacement MMG. The proposed method using the velocity MMG is thus more advantageous for muscle stiffness estimation. Copyright © 2017 Elsevier Ltd. All rights reserved.
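The step from an identified fourth-order model to its two undamped natural frequencies can be sketched as follows, assuming the identification returns discrete-time poles (the sampling interval and pole values below are hypothetical, not taken from the paper):

```python
import cmath
import math

def undamped_natural_frequencies(discrete_poles, dt):
    """Map identified discrete-time poles to undamped natural frequencies
    in Hz: f_n = |ln(z)/dt| / (2*pi), one frequency per conjugate pair."""
    freqs = []
    for z in discrete_poles:
        if z.imag <= 0:                 # keep one pole of each conjugate pair
            continue
        s = cmath.log(z) / dt           # equivalent continuous-time pole
        freqs.append(abs(s) / (2 * math.pi))
    return sorted(freqs)
```

A fourth-order model yields two conjugate pole pairs, hence two frequencies, matching the longitudinal and transverse stiffness interpretation above.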

  18. Method to estimate center of rigidity using vibration recordings

    USGS Publications Warehouse

    Safak, Erdal; Çelebi, Mehmet

    1990-01-01

A method to estimate the center of rigidity of buildings by using vibration recordings is presented. The method is based on the criterion that the coherence of translational motions with the rotational motion is minimum at the center of rigidity. Since the coherence is a function of frequency, a gross but frequency-independent measure of the coherency is defined as the integral of the coherence function over the frequency. The center of rigidity is determined by minimizing this integral. The formulation is given for two-dimensional motions. Two examples of the method are presented: a rectangular building with ambient-vibration recordings, and a triangular building with earthquake-vibration recordings. Although the examples given are for buildings, the method can be applied to any structure with two-dimensional motions.
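The minimum-coherence criterion can be illustrated with a one-dimensional toy version: a sensor at position x records translation plus (x - x_r) times rotation, and the frequency-integrated coherence with rotation bottoms out at x_r. Everything here (the noise model, the center location 0.3, the grid) is synthetic, a sketch of the criterion rather than the paper's formulation.

```python
import numpy as np

def coherence_integral(x, y, nseg=16):
    """Frequency-integrated magnitude-squared coherence, estimated by
    averaging cross- and auto-spectra over nseg segments (Welch-style)."""
    n = len(x) // nseg
    Sxx, Syy, Sxy = 0.0, 0.0, 0.0 + 0.0j
    for k in range(nseg):
        X = np.fft.rfft(x[k * n:(k + 1) * n])
        Y = np.fft.rfft(y[k * n:(k + 1) * n])
        Sxx = Sxx + np.abs(X) ** 2
        Syy = Syy + np.abs(Y) ** 2
        Sxy = Sxy + X * np.conj(Y)
    coh = np.abs(Sxy) ** 2 / (Sxx * Syy + 1e-30)
    return float(coh.sum())

# Synthetic building motion: translation u_t plus rotation theta about a
# (hypothetical) center of rigidity at 0.3; a sensor at position x records
# u_t + (x - 0.3) * theta.
rng = np.random.default_rng(0)
u_t = 0.1 * rng.standard_normal(4096)
theta = rng.standard_normal(4096)
positions = np.linspace(-1.0, 1.0, 21)
J = [coherence_integral(u_t + (x - 0.3) * theta, theta) for x in positions]
x_hat = positions[int(np.argmin(J))]    # minimizer ~ center of rigidity
```

At the true center the rotational contribution vanishes, so the estimated coherence drops to its statistical floor; everywhere else the shared rotation term keeps it high.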

  19. Magnitude and frequency of floods in Arkansas

    USGS Publications Warehouse

    Hodge, Scott A.; Tasker, Gary D.

    1995-01-01

Methods are presented for estimating the magnitude and frequency of peak discharges of streams in Arkansas. Regression analyses were developed in which a stream's physical and flood characteristics were related. Four sets of regional regression equations were derived to predict peak discharges with selected recurrence intervals of 2, 5, 10, 25, 50, 100, and 500 years on streams draining less than 7,770 square kilometers. The regression analyses indicate that size of drainage area, main channel slope, mean basin elevation, and the basin shape factor were the most significant basin characteristics affecting the magnitude and frequency of floods. The region of influence method is also included in this report. This method is still being improved and should be considered only as a second alternative to the standard method of producing regional regression equations. It estimates unique regression equations for each recurrence interval at each ungaged site. For this method, the regression analyses indicate that size of drainage area, main channel slope, mean annual precipitation, mean basin elevation, and the basin shape factor were the most significant basin and climatic characteristics affecting the magnitude and frequency of floods. Certain recommendations on the use of this method are provided. A method is described for estimating the magnitude and frequency of peak discharges of streams in urban areas of Arkansas. The method is from a nationwide U.S. Geological Survey flood frequency report which uses urban basin characteristics combined with rural discharges to estimate urban discharges. Annual peak discharges from 204 gaging stations, with drainage areas less than 7,770 square kilometers and at least 10 years of unregulated record, were used in the analysis. These data provide the basis for this analysis and are published in the Appendix of this report as supplemental data. Large rivers such as the Red, Arkansas, White, Black, St. Francis, Mississippi, and Ouachita Rivers have floodflow characteristics that differ from those of smaller tributary streams and were treated individually. Regional regression equations are not applicable to these large rivers; the magnitude and frequency of floods along them are based on specific station data. This section is provided in the Appendix and has not been updated since the last Arkansas flood frequency report (1987b), but is included at the request of the cooperator.

  20. An improved peak frequency shift method for Q estimation based on generalized seismic wavelet function

    NASA Astrophysics Data System (ADS)

    Wang, Qian; Gao, Jinghuai

    2018-02-01

As a powerful tool for hydrocarbon detection and reservoir characterization, the quality factor, Q, provides useful information in seismic data processing and interpretation. In this paper, we propose a novel method for Q estimation. The generalized seismic wavelet (GSW) function is introduced to fit the amplitude spectrum of seismic waveforms with two parameters: a fractional value and a reference frequency. We then derive an analytical relation between the GSW function and the Q factor of the medium. When a seismic wave propagates through a viscoelastic medium, the GSW function can be employed to fit the amplitude spectra of the source and attenuated wavelets, and the fractional values and reference frequencies can be evaluated numerically from the discrete Fourier spectrum. After calculating the peak frequency from the obtained fractional value and reference frequency, the relationship between the GSW function and the Q factor can be built via the conventional peak frequency shift method. Synthetic tests indicate that our method achieves higher accuracy and is more robust to random noise than existing methods. Furthermore, the proposed method is applicable to different types of source wavelets. A field data application also demonstrates the effectiveness of our method in estimating seismic attenuation and its potential for reservoir characterization.
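The paper's GSW-based relation is not reproduced here, but a closely related classical baseline, the spectral-ratio method, can be sketched: under a constant-Q attenuation model the log ratio of an attenuated to a reference amplitude spectrum is linear in frequency, and Q follows from the slope. The Q value, travel time, and spectra below are synthetic.

```python
import numpy as np

def q_spectral_ratio(a_ref, a_att, freqs, t_travel, band):
    """Classical spectral-ratio Q estimate:
    ln(A_att/A_ref) = -pi * f * t / Q + const, so Q comes from the slope."""
    sel = (freqs >= band[0]) & (freqs <= band[1])
    slope = np.polyfit(freqs[sel], np.log(a_att[sel] / a_ref[sel]), 1)[0]
    return -np.pi * t_travel / slope

# Synthetic check with hypothetical values Q = 80 and travel time 0.5 s
f = np.linspace(1.0, 100.0, 200)
a0 = np.exp(-(f - 30.0) ** 2 / 400.0)          # toy source spectrum
a1 = a0 * np.exp(-np.pi * f * 0.5 / 80.0)      # attenuated spectrum
```

Peak-frequency-shift methods such as the one above avoid this two-spectrum log ratio, which is notoriously sensitive to noise in the spectral tails.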

  1. THE REAL McCOIL: A method for the concurrent estimation of the complexity of infection and SNP allele frequency for malaria parasites

    PubMed Central

    Chang, Hsiao-Han; Worby, Colin J.; Yeka, Adoke; Nankabirwa, Joaniter; Kamya, Moses R.; Staedke, Sarah G.; Hubbart, Christina; Amato, Roberto; Kwiatkowski, Dominic P.

    2017-01-01

As many malaria-endemic countries move towards elimination of Plasmodium falciparum, the most virulent human malaria parasite, effective tools for monitoring malaria epidemiology are urgent priorities. P. falciparum population genetic approaches offer promising tools for understanding transmission and spread of the disease, but a high prevalence of multi-clone or polygenomic infections can render estimation of even the most basic parameters, such as allele frequencies, challenging. A previous method, COIL, was developed to estimate complexity of infection (COI) from single nucleotide polymorphism (SNP) data, but relies on monogenomic infections to estimate allele frequencies or requires external allele frequency data which may not be available. Estimates limited to monogenomic infections may not be representative, however, and when the average COI is high, they can be difficult or impossible to obtain. Therefore, we developed THE REAL McCOIL, Turning HEterozygous SNP data into Robust Estimates of ALelle frequency, via Markov chain Monte Carlo, and Complexity Of Infection using Likelihood, to incorporate polygenomic samples and simultaneously estimate allele frequency and COI. This approach was tested via simulations and then applied to SNP data from cross-sectional surveys performed in three Ugandan sites with varying malaria transmission. We show that THE REAL McCOIL consistently outperforms COIL on simulated data, particularly when most infections are polygenomic. Using field data we show that, unlike with COIL, we can distinguish epidemiologically relevant differences in COI between and within these sites. Surprisingly, for example, we estimated high average COI in a peri-urban subregion with lower transmission intensity, suggesting that many of these cases were imported from surrounding regions with higher transmission intensity. THE REAL McCOIL therefore provides a robust tool for understanding the molecular epidemiology of malaria across transmission settings. 
PMID:28125584

  2. POD and PPP with multi-frequency processing

    NASA Astrophysics Data System (ADS)

    Roldán, Pedro; Navarro, Pedro; Rodríguez, Daniel; Rodríguez, Irma

    2017-04-01

Precise Orbit Determination (POD) and Precise Point Positioning (PPP) are methods for estimating the orbits and clocks of GNSS satellites and the precise positions and clocks of user receivers. These methods are traditionally based on processing the ionosphere-free combination. With this combination, the delay introduced in the signal when passing through the ionosphere is removed, taking advantage of the dependency of this delay on the square of the frequency. It is also possible to process the individual frequencies, but in that case the ionospheric delay needs to be properly modelled. This modelling is usually very challenging, as the electron content in the ionosphere experiences important temporal and spatial variations. These two options define the two main kinds of processing: the dual-frequency ionosphere-free processing, typically used in POD and in certain applications of PPP, and the single-frequency processing with estimation or modelling of the ionosphere, mostly used in PPP processing. In magicGNSS, a software tool developed by GMV for POD and PPP, a hybrid approach has been implemented. This approach combines observations from any number of individual frequencies and any number of ionosphere-free combinations of these frequencies. In this way, the ionosphere-free observations allow a better estimation of positions and orbits, while the inclusion of observations from individual frequencies makes it possible to estimate the ionospheric delay and to reduce the noise of the solution. It is also possible to include other kinds of combinations, such as the geometry-free combination, instead of processing individual frequencies. The joint processing of all the frequencies for all the constellations requires both the estimation or modelling of the ionospheric delay and the estimation of inter-frequency biases. 
The ionospheric delay can be estimated from the single-frequency or dual-frequency geometry-free observations, but it is also possible to use a-priori information based on ionospheric models, on external estimations and on the expected behavior of the ionosphere. The inter-frequency biases appear because the delay of the signal inside the transmitter and the receiver strongly depends on its frequency. However, it is possible to include constraints on these delays in the estimator, assuming small variations over time. By using different types of combinations, all the available information from GNSS systems can be included in the processing. This is especially interesting for the Galileo satellites, which transmit on several frequencies, and the GPS IIF satellites, which transmit on L5 in addition to the traditional L1 and L2. Several experiments have been performed to assess the improvement in POD and PPP performance when using all the constellations and all the available frequencies for each constellation. This paper describes the new approach of multi-frequency processing, including the estimation of biases and ionospheric delays impacting GNSS observations, and presents the results of the experimentation activities performed to assess the benefits in POD and PPP algorithms.
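The first-order ionosphere-free combination mentioned above can be sketched in a few lines: because the ionospheric delay scales as 1/f^2, a frequency-squared-weighted difference of two pseudoranges cancels it. The GPS L1/L2 frequencies are standard; the range and ionospheric term below are toy values.

```python
def iono_free(p1, p2, f1=1575.42e6, f2=1227.60e6):
    """First-order ionosphere-free combination of two pseudoranges (m).
    The ionospheric delay scales as 1/f**2, so it cancels here."""
    g1, g2 = f1 ** 2, f2 ** 2
    return (g1 * p1 - g2 * p2) / (g1 - g2)

# Toy check: same geometric range rho plus a 1/f**2 ionospheric term
rho, iono = 22_000_000.0, 1.0e19      # metres; iono term is hypothetical
p1 = rho + iono / 1575.42e6 ** 2      # ~4 m of delay on L1
p2 = rho + iono / 1227.60e6 ** 2      # ~6.6 m of delay on L2
```

The price of the combination is amplified measurement noise, which is one motivation for the hybrid single-frequency-plus-combination processing described above.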

  3. Methodology for processing pressure traces used as inputs for combustion analyses in diesel engines

    NASA Astrophysics Data System (ADS)

    Rašić, Davor; Vihar, Rok; Žvar Baškovič, Urban; Katrašnik, Tomaž

    2017-05-01

This study proposes a novel methodology for designing an optimum equiripple finite impulse response (FIR) filter for processing in-cylinder pressure traces of a diesel internal combustion engine, which serve as inputs for high-precision combustion analyses. The proposed automated workflow is based on an innovative approach of determining the transition band frequencies and optimum filter order. The methodology is based on discrete Fourier transform analysis, which is the first step to estimate the location of the pass-band and stop-band frequencies. The second step uses short-time Fourier transform analysis to refine the aforementioned frequency estimates. These pass-band and stop-band frequencies are further used to determine the most appropriate FIR filter order. The most widely used existing methods for estimating the FIR filter order are not effective in suppressing the oscillations in the rate-of-heat-release (ROHR) trace, thus hindering the accuracy of combustion analyses. To address this problem, an innovative method for determining the order of an FIR filter is proposed in this study. This method is based on the minimization of the integral of normalized signal-to-noise differences between the stop-band frequency and the Nyquist frequency. Developed filters were validated using spectral analysis and calculation of the ROHR. The validation results showed that the filters designed using the proposed innovative method were superior compared with those using the existing methods for all analyzed cases. 
Highlights:
• Pressure traces of a diesel engine were processed by finite impulse response (FIR) filters with different orders
• Transition band frequencies were determined with an innovative method based on discrete Fourier transform and short-time Fourier transform
• Spectral analyses showed deficiencies of existing methods in determining the FIR filter order
• A new method of determining the FIR filter order for processing pressure traces was proposed
• The efficiency of the new method was demonstrated by spectral analyses and calculations of rate-of-heat-release traces
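For context on the "existing methods for estimating the FIR filter order" that the study improves on, one widely used example is the classical Kaiser estimate, which maps stop-band attenuation and transition-band width to an order. This is a generic textbook formula, not the paper's signal-to-noise criterion, and the band edges below are hypothetical.

```python
import math

def kaiser_order(atten_db, f_pass, f_stop, fs):
    """Classical Kaiser estimate of the FIR order needed for a low-pass
    filter with the given stop-band attenuation (dB) and transition band."""
    d_omega = 2 * math.pi * (f_stop - f_pass) / fs  # transition width, rad/sample
    return int(math.ceil((atten_db - 7.95) / (2.285 * d_omega)))
```

Such estimates depend only on the attenuation and transition width, which is exactly why they can over- or under-smooth the ROHR trace: they ignore how the signal's own spectrum sits relative to the noise.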

  4. Techniques for estimating magnitude and frequency of floods in Minnesota

    USGS Publications Warehouse

    Guetzkow, Lowell C.

    1977-01-01

Estimating relations have been developed to provide engineers and designers with improved techniques for defining flow-frequency characteristics to satisfy hydraulic planning and design requirements. The magnitude and frequency of floods up to the 100-year recurrence interval can be determined for most streams in Minnesota by the methods presented. By multiple regression analysis, equations have been developed for estimating flood-frequency relations at ungaged sites on natural flow streams. Eight distinct hydrologic regions are delineated within the State, with boundaries defined generally by river basin divides. Regression equations are provided for each region which relate selected frequency floods to significant basin parameters. For main-stem streams, graphs are presented showing floods for selected recurrence intervals plotted against contributing drainage area. Flow-frequency estimates for intervening sites along the Minnesota River, Mississippi River, and the Red River of the North can be derived from these graphs. Flood-frequency characteristics are tabulated for 201 gaging stations having 10 or more years of record.

  5. Design rainfall depth estimation through two regional frequency analysis methods in Hanjiang River Basin, China

    NASA Astrophysics Data System (ADS)

    Xu, Yue-Ping; Yu, Chaofeng; Zhang, Xujie; Zhang, Qingqing; Xu, Xiao

    2012-02-01

Hydrological predictions in ungauged basins are of significant importance for water resources management. In hydrological frequency analysis, regional methods are regarded as useful tools for estimating design rainfall/flood for areas with only little data available. The purpose of this paper is to investigate the performance of two regional methods, namely the Hosking's approach and the cokriging approach, in hydrological frequency analysis. These two methods are employed to estimate 24-h design rainfall depths in Hanjiang River Basin, one of the largest tributaries of the Yangtze River, China. Validation is made by comparing the results to those calculated from the provincial handbook approach, which uses hundreds of rainfall gauge stations. Also for validation purposes, five hypothetically ungauged sites from the middle basin are chosen. The final results show that, compared to the provincial handbook approach, the Hosking's approach often overestimated the 24-h design rainfall depths while the cokriging approach most often underestimated them. Overall, the Hosking's approach produced more accurate results than the cokriging approach.
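Hosking's regional approach is built on L-moments; as a minimal, self-contained sketch (standard textbook estimators, not code from this paper), the first two sample L-moments can be computed from probability-weighted moments:

```python
def sample_l_moments(data):
    """First two sample L-moments via probability-weighted moments:
    b0 = mean, b1 = sum(rank_weight * x), l1 = b0, l2 = 2*b1 - b0."""
    x = sorted(data)                     # order statistics, ascending
    n = len(x)
    b0 = sum(x) / n
    b1 = sum(i * xi for i, xi in enumerate(x)) / (n * (n - 1))
    return b0, 2 * b1 - b0               # (l1, l2)
```

Ratios of these (e.g. the L-CV, l2/l1) are pooled across a homogeneous region to fit a regional growth curve, which is what makes the method usable at ungauged sites.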

  6. Estimation of multiple accelerated motions using chirp-Fourier transform and clustering.

    PubMed

    Alexiadis, Dimitrios S; Sergiadis, George D

    2007-01-01

Motion estimation in the spatiotemporal domain has been extensively studied and many methodologies have been proposed, which, however, cannot handle both time-varying and multiple motions. Extending previously published ideas, we present an efficient method for estimating multiple, linearly time-varying motions. It is shown that the estimation of accelerated motions is equivalent to the parameter estimation of superposed chirp signals. From this viewpoint, one can exploit established signal processing tools such as the chirp-Fourier transform. It is shown that accelerated motion results in energy concentration along planes in the 4-D space of spatial frequencies, temporal frequency, and chirp rate. Using fuzzy c-planes clustering, we estimate the plane/motion parameters. The effectiveness of our method is verified on both synthetic and real sequences and its advantages are highlighted.
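The link between accelerated motion and chirp parameter estimation can be illustrated with a one-dimensional toy: a grid search that de-chirps the signal and keeps the rate whose spectrum is most concentrated. This is a simple stand-in for the chirp-Fourier transform, with an invented signal (50 Hz start frequency, 30 Hz/s rate).

```python
import numpy as np

def estimate_chirp_rate(sig, t, rates):
    """Grid search over chirp rate: de-chirp by each candidate rate and
    pick the one whose spectrum has the highest (most concentrated) peak."""
    best_rate, best_peak = None, -1.0
    for a in rates:
        spec = np.abs(np.fft.fft(sig * np.exp(-1j * np.pi * a * t ** 2)))
        if spec.max() > best_peak:
            best_peak, best_rate = spec.max(), a
    return best_rate

fs = 1000.0
t = np.arange(0.0, 1.0, 1.0 / fs)
sig = np.exp(2j * np.pi * (50.0 * t + 0.5 * 30.0 * t ** 2))  # rate 30 Hz/s
rates = np.arange(0.0, 61.0, 5.0)
```

In the paper's 4-D setting the same idea generalizes: energy concentrates on a plane only when the motion parameters are matched, and clustering finds several such planes at once.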

  7. A channel estimation scheme for MIMO-OFDM systems

    NASA Astrophysics Data System (ADS)

    He, Chunlong; Tian, Chu; Li, Xingquan; Zhang, Ce; Zhang, Shiqi; Liu, Chaowen

    2017-08-01

To reconcile the performance of time-domain least squares (LS) channel estimation with its practical implementation complexity, a reduced-complexity, pilot-based channel estimation method for multiple input multiple output-orthogonal frequency division multiplexing (MIMO-OFDM) is derived. This approach transforms the MIMO-OFDM channel estimation problem into a simple single input single output-orthogonal frequency division multiplexing (SISO-OFDM) channel estimation problem, so no large matrix pseudo-inverse is needed, which greatly reduces the complexity of the algorithm. Simulation results show that the bit error rate (BER) performance of the obtained method with time-orthogonal training sequences and the linear minimum mean square error (LMMSE) criterion is better than that of the time-domain LS estimator and achieves nearly optimal performance.
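The SISO-OFDM building block that the MIMO problem is reduced to is the textbook per-subcarrier LS estimate, H = Y/X at the pilot tones. A minimal sketch with synthetic pilots and channel (this is the generic LS step, not the paper's full decomposition):

```python
import numpy as np

def ls_channel_estimate(y_pilots, x_pilots):
    """Per-subcarrier least-squares estimate of a frequency-domain channel
    at pilot tones: H_hat[k] = Y[k] / X[k]."""
    return y_pilots / x_pilots

rng = np.random.default_rng(1)
h = rng.standard_normal(8) + 1j * rng.standard_normal(8)   # true channel taps
x = np.exp(1j * np.pi / 4 * rng.integers(0, 8, 8))         # unit-modulus pilots
y = h * x                                                   # noiseless receive
h_hat = ls_channel_estimate(y, x)
```

Because each subcarrier is solved independently (a scalar division), no matrix pseudo-inverse appears, which is the complexity saving the abstract emphasizes.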

  8. Using the Bivariate Dale Model to jointly estimate predictors of frequency and quantity of alcohol use.

    PubMed

    McMillan, Garnett P; Hanson, Tim; Bedrick, Edward J; Lapham, Sandra C

    2005-09-01

    This study demonstrates the usefulness of the Bivariate Dale Model (BDM) as a method for estimating the relationship between risk factors and the quantity and frequency of alcohol use, as well as the degree of association between these highly correlated drinking measures. The BDM is used to evaluate childhood sexual abuse, along with age and gender, as risk factors for the quantity and frequency of beer consumption in a sample of driving-while-intoxicated (DWI) offenders (N = 1,964; 1,612 men). The BDM allows one to estimate the relative odds of drinking up to each level of ordinal-scaled quantity and frequency of alcohol use, as well as model the degree of association between quantity and frequency of alcohol consumption as a function of covariates. Individuals who experienced childhood sexual abuse have increased risks of higher quantity and frequency of beer consumption. History of childhood sexual abuse has a greater effect on women, causing them to drink higher quantities of beer per drinking occasion. The BDM is a useful method for evaluating predictors of the quantity-frequency of alcohol consumption. SAS macrocode for fitting the BDM model is provided.

  9. The use of multiwavelets for uncertainty estimation in seismic surface wave dispersion.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poppeliers, Christian

This report describes a new single-station analysis method to estimate the dispersion and uncertainty of seismic surface waves using the multiwavelet transform. Typically, when estimating the dispersion of a surface wave using only a single seismic station, the seismogram is decomposed into a series of narrow-band realizations using a bank of narrow-band filters. By then enveloping and normalizing the filtered seismograms and identifying the maximum power as a function of frequency, the group velocity can be estimated if the source-receiver distance is known. However, using the filter bank method, there is no robust way to estimate uncertainty. In this report, I introduce a new method of estimating the group velocity that includes an estimate of uncertainty. The method is similar to the conventional filter bank method, but uses a class of functions, called Slepian wavelets, to compute a series of wavelet transforms of the data. Each wavelet transform is mathematically similar to a filter bank; however, the time-frequency tradeoff is optimized. By taking multiple wavelet transforms, I form a population of dispersion estimates from which standard statistical methods can be used to estimate uncertainty. I demonstrate the utility of this new method by applying it to synthetic data as well as ambient-noise surface-wave cross-correlograms recorded by the University of Nevada Seismic Network.
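The conventional filter-bank baseline described above can be sketched as follows: band-pass the record around each centre frequency, take the envelope via the analytic signal, and convert the envelope peak time to a velocity. The synthetic record, distance, and band parameters are invented for illustration.

```python
import numpy as np

def group_velocity(seis, fs, dist_km, f_centers, bw=0.1):
    """Filter-bank group-velocity estimate: Gaussian band-pass around each
    centre frequency, envelope (analytic signal) peak time -> dist/time."""
    n = len(seis)
    freqs = np.fft.fftfreq(n, 1.0 / fs)
    S = np.fft.fft(seis)
    vels = []
    for fc in f_centers:
        g = np.exp(-0.5 * ((np.abs(freqs) - fc) / (bw * fc)) ** 2)
        analytic = np.fft.ifft(2.0 * S * g * (freqs >= 0))  # one-sided spectrum
        t_peak = np.argmax(np.abs(analytic)) / fs
        vels.append(dist_km / t_peak)
    return vels

# Hypothetical record: a 1 Hz wave packet arriving 25 s after the origin at a
# station 100 km away, so the true group velocity is 4 km/s.
fs = 10.0
t = np.arange(0.0, 100.0, 1.0 / fs)
seis = np.exp(-0.5 * ((t - 25.0) / 2.0) ** 2) * np.cos(2 * np.pi * (t - 25.0))
```

The multiwavelet idea replaces the single filter bank with several Slepian-tapered transforms, so the line above that picks one peak time becomes a population of peak times per frequency, from which a standard deviation can be formed.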

  10. Estimation of bedrock depth using the horizontal‐to‐vertical (H/V) ambient‐noise seismic method

    USGS Publications Warehouse

    Lane, John W.; White, Eric A.; Steele, Gregory V.; Cannia, James C.

    2008-01-01

    Estimating sediment thickness and the geometry of the bedrock surface is a key component of many hydrogeologic studies. The horizontal‐to‐vertical (H/V) ambient‐noise seismic method is a novel, non‐invasive technique that can be used to rapidly estimate the depth to bedrock. The H/V method uses a single, broad‐band three‐component seismometer to record ambient seismic noise. The ratio of the averaged horizontal‐to‐vertical frequency spectrum is used to determine the fundamental site resonance frequency, which can be interpreted using regression equations to estimate sediment thickness and depth to bedrock. The U.S. Geological Survey used the H/V seismic method during fall 2007 at 11 sites in Cape Cod, Massachusetts, and 13 sites in eastern Nebraska. In Cape Cod, H/V measurements were acquired along a 60‐kilometer (km) transect between Chatham and Provincetown, where glacial sediments overlie metamorphic rock. In Nebraska, H/V measurements were acquired along approximately 11‐ and 14‐km transects near Firth and Oakland, respectively, where glacial sediments overlie weathered sedimentary rock. The ambient‐noise seismic data from Cape Cod produced clear, easily identified resonance frequency peaks. The interpreted depth and geometry of the bedrock surface correlate well with boring data and previously published seismic refraction surveys. Conversely, the ambient‐noise seismic data from eastern Nebraska produced subtle resonance frequency peaks, and correlation of the interpreted bedrock surface with bedrock depths from borings is poor, which may indicate a low acoustic impedance contrast between the weathered sedimentary rock and overlying sediments and/or the effect of wind noise on the seismic records. Our results indicate the H/V ambient‐noise seismic method can be used effectively to estimate the depth to rock where there is a significant acoustic impedance contrast between the sediments and underlying rock. 
However, effective use of the method is challenging in the presence of gradational contacts such as gradational weathering or cementation. Further work is needed to optimize interpretation of resonance frequencies in the presence of extreme wind noise. In addition, local estimates of bedrock depth likely could be improved through development of regional or study‐area‐specific regression equations relating resonance frequency to bedrock depth.
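The regression equations relating resonance frequency to bedrock depth have a simple physical basis: quarter-wavelength resonance of a soft sediment layer over stiff rock. A minimal sketch of that relation (the shear velocity and frequency values are hypothetical; actual studies calibrate site-specific power-law regressions of the form h = a * f0**b instead):

```python
def depth_from_resonance(f0_hz, vs):
    """Quarter-wavelength relation behind H/V interpretation: a sediment
    layer with shear velocity vs (m/s) over stiff bedrock resonates at
    f0 = vs / (4 * h), so the layer thickness is h = vs / (4 * f0)."""
    return vs / (4.0 * f0_hz)
```

For example, a 2 Hz resonance peak over sediments with a 400 m/s average shear velocity implies roughly 50 m to bedrock, which is the kind of estimate compared against borings in the surveys above.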

  11. A rapid method of estimating the collision frequencies between the earth and the earth-crossing bodies

    NASA Technical Reports Server (NTRS)

    Su, Shin-Yi; Kessler, Donald J.

    1991-01-01

    The present study examines a very fast method of calculating the collision frequency between two low-eccentricity orbiting bodies for evaluating the evolution of earth-orbiting objects such as space debris. The results are very accurate and the required computer time is negligible. The method is now applied without modification to calculate the collision frequencies for moderately and highly eccentric orbits.

  12. Ultrasonic Porosity Estimation of Low-Porosity Ceramic Samples

    NASA Astrophysics Data System (ADS)

    Eskelinen, J.; Hoffrén, H.; Kohout, T.; Hæggström, E.; Pesonen, L. J.

    2007-03-01

    We report on efforts to extend the applicability of an airborne ultrasonic pulse-reflection (UPR) method towards lower porosities. UPR is a method that has been used successfully to estimate porosity and tortuosity of high porosity foams. UPR measures acoustical reflectivity of a target surface at two or more incidence angles. We used ceramic samples to evaluate the feasibility of extending the UPR range into low porosities (<35%). The validity of UPR estimates depends on pore size distribution and probing frequency as predicted by the theoretical boundary conditions of the used equivalent fluid model under the high-frequency approximation.

  13. A Channelization-Based DOA Estimation Method for Wideband Signals

    PubMed Central

    Guo, Rui; Zhang, Yue; Lin, Qianqiang; Chen, Zengping

    2016-01-01

In this paper, we propose a novel direction of arrival (DOA) estimation method for wideband signals with sensor arrays. The proposed method splits the wideband array output into multiple frequency sub-channels and estimates the signal parameters using a digital channelization receiver. Based on the output sub-channels, a channelization-based incoherent signal subspace method (Channelization-ISM) and a channelization-based test of orthogonality of projected subspaces method (Channelization-TOPS) are proposed. Channelization-ISM applies narrowband signal subspace methods on each sub-channel independently; the arithmetic mean or geometric mean of the estimated DOAs from each sub-channel then gives the final result. Channelization-TOPS measures the orthogonality between the signal and the noise subspaces of the output sub-channels to estimate DOAs. The proposed channelization-based method isolates signals in different bandwidths reasonably and improves the output SNR. It outperforms the conventional ISM and TOPS methods in estimation accuracy and dynamic range, especially in real environments. In addition, the parallel processing architecture makes it easy to implement in hardware. A wideband digital array radar (DAR) using direct wideband radio frequency (RF) digitization is presented. Experiments carried out in a microwave anechoic chamber with the wideband DAR are presented to demonstrate the performance. The results verify the effectiveness of the proposed method. PMID:27384566

  14. Evaluating and addressing the effects of regression to the mean phenomenon in estimating collision frequencies on urban high collision concentration locations.

    PubMed

    Lee, Jinwoo; Chung, Koohong; Kang, Seungmo

    2016-12-01

Two different methods for addressing the regression to the mean (RTM) phenomenon were evaluated using empirical data: data from 110 miles of freeway located in California were used to assess the performance of the empirical Bayes (EB) and CRP methods in addressing RTM. CRP outperformed the EB method in estimating collision frequencies at selected high collision concentration locations (HCCLs). Findings indicate that the performance of the EB method can be markedly affected when the safety performance function (SPF) is biased, while the performance of CRP remains much less affected. The CRP method was more effective in addressing RTM. Published by Elsevier Ltd.
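The EB method in the road-safety literature is typically the Hauer-style empirical Bayes estimator: a weighted mix of the SPF prediction and the observed count. The sketch below uses the standard negative-binomial weight formula, not anything specific to this paper, and the numbers are invented.

```python
def empirical_bayes(spf_pred, observed, k):
    """Hauer-style empirical Bayes estimate of expected collision frequency:
    weight w = 1 / (1 + mu/k) for a negative-binomial SPF with mean mu
    (spf_pred) and overdispersion parameter k; EB = w*mu + (1-w)*observed."""
    w = 1.0 / (1.0 + spf_pred / k)
    return w * spf_pred + (1.0 - w) * observed
```

Shrinking the observed count toward the SPF prediction is precisely how EB counteracts RTM at high-collision sites, which is also why a biased SPF degrades it, as the study's findings note.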

  15. Application of Model Based Parameter Estimation for RCS Frequency Response Calculations Using Method of Moments

    NASA Technical Reports Server (NTRS)

    Reddy, C. J.

    1998-01-01

An implementation of the Model Based Parameter Estimation (MBPE) technique is presented for obtaining the frequency response of the Radar Cross Section (RCS) of arbitrarily shaped, three-dimensional perfect electric conductor (PEC) bodies. An Electric Field Integral Equation (EFIE) is solved using the Method of Moments (MoM) to compute the RCS. The electric current is expanded in a rational function and the coefficients of the rational function are obtained using the frequency derivatives of the EFIE. Using the rational function, the electric current on the PEC body is obtained over a frequency band. Using the electric current at different frequencies, the RCS of the PEC body is obtained over a wide frequency band. Numerical results for a square plate, a cube, and a sphere are presented over a bandwidth. Good agreement between MBPE and the exact solution over the bandwidth is observed.

  16. Efficient methods for joint estimation of multiple fundamental frequencies in music signals

    NASA Astrophysics Data System (ADS)

    Pertusa, Antonio; Iñesta, José M.

    2012-12-01

    This study presents efficient techniques for multiple fundamental frequency estimation in music signals. The proposed methodology can infer harmonic patterns from a mixture considering interactions with other sources and evaluate them in a joint estimation scheme. For this purpose, a set of fundamental frequency candidates are first selected at each frame, and several hypothetical combinations of them are generated. Combinations are independently evaluated, and the most likely is selected taking into account the intensity and spectral smoothness of its inferred patterns. The method is extended considering adjacent frames in order to smooth the detection in time, and a pitch tracking stage is finally performed to increase the temporal coherence. The proposed algorithms were evaluated in MIREX contests yielding state of the art results with a very low computational burden.

  17. Matching synchrosqueezing transform: A useful tool for characterizing signals with fast varying instantaneous frequency and application to machine fault diagnosis

    NASA Astrophysics Data System (ADS)

    Wang, Shibin; Chen, Xuefeng; Selesnick, Ivan W.; Guo, Yanjie; Tong, Chaowei; Zhang, Xingwu

    2018-02-01

    Synchrosqueezing transform (SST) can effectively improve the readability of the time-frequency (TF) representation (TFR) of nonstationary signals composed of multiple components with slow varying instantaneous frequency (IF). However, for signals composed of multiple components with fast varying IF, SST still suffers from TF blurs. In this paper, we introduce a time-frequency analysis (TFA) method called matching synchrosqueezing transform (MSST) that achieves a highly concentrated TF representation comparable to the standard TF reassignment methods (STFRM), even for signals with fast varying IF, and furthermore, MSST retains the reconstruction benefit of SST. MSST captures the philosophy of STFRM to simultaneously consider time and frequency variables, and incorporates three estimators (i.e., the IF estimator, the group delay estimator, and a chirp-rate estimator) into a comprehensive and accurate IF estimator. In this paper, we first introduce the motivation of MSST with three heuristic examples. Then we introduce a precise mathematical definition of a class of chirp-like intrinsic-mode-type functions that locally can be viewed as a sum of a reasonably small number of approximate chirp signals, and we prove that MSST does indeed succeed in estimating chirp-rate and IF of arbitrary functions in this class and succeed in decomposing these functions. Furthermore, we describe an efficient numerical algorithm for the practical implementation of the MSST, and we provide an adaptive IF extraction method for MSST reconstruction. Finally, we verify the effectiveness of the MSST in practical applications for machine fault diagnosis, including gearbox fault diagnosis for a wind turbine in variable speed conditions and rotor rub-impact fault diagnosis for a dual-rotor turbofan engine.

  18. Intakes of culinary herbs and spices from a food frequency questionnaire evaluated against 28-days estimated records

    PubMed Central

    2011-01-01

Background Worldwide, herbs and spices are much used food flavourings. However, little data exist regarding actual dietary intake of culinary herbs and spices. We developed a food frequency questionnaire (FFQ) for the assessment of habitual diet during the preceding year, with a focus on phytochemical-rich food, including herbs and spices. The aim of the present study was to evaluate the intakes of herbs and spices from the FFQ against estimates of intake from another dietary assessment method. Thus we compared the intake estimates from the FFQ with 28 days of estimated records of herb and spice consumption as a reference method. Methods The evaluation study was conducted among 146 free-living adults, who filled in the FFQ and 2-4 weeks later carried out 28 days of recording of herb and spice consumption. The FFQ included a section with questions about 27 individual culinary herbs and spices, while the records were open-ended records devoted exclusively to herb and spice consumption. Results Our study showed that the FFQ obtained slightly higher estimates of total intake of herbs and spices than the total intake assessed by the Herb and Spice Records (HSR). The correlation between the two assessment methods with regard to total intake was good (r = 0.5), and the cross-classification suggests that the FFQ may be used to classify subjects according to total herb and spice intake. For the 8 most frequently consumed individual herbs and spices, the FFQ obtained good estimates of median frequency of intake for 2 herbs/spices, while good estimates of portion sizes were obtained for 4 out of 8 herbs/spices. Conclusions Our results suggest that the FFQ was able to give good estimates of frequency of intake and portion sizes at the group level for several of the most frequently used herbs and spices. The FFQ was only able to fairly rank subjects according to frequency of intake of the 8 most frequently consumed herbs and spices. Other studies are warranted to further explore the intakes of culinary spices and herbs. PMID:21575177

  19. Adjusted peak-flow frequency estimates for selected streamflow-gaging stations in or near Montana based on data through water year 2011: Chapter D in Montana StreamStats

    USGS Publications Warehouse

    Sando, Steven K.; Sando, Roy; McCarthy, Peter M.; Dutton, DeAnn M.

    2016-04-05

The climatic conditions of the specific time period during which peak-flow data were collected at a given streamflow-gaging station (hereinafter referred to as gaging station) can substantially affect how well the peak-flow frequency (hereinafter referred to as frequency) results represent long-term hydrologic conditions. Differences in the timing of the periods of record can result in substantial inconsistencies in frequency estimates for hydrologically similar gaging stations. Potential for inconsistency increases with decreasing peak-flow record length. The representativeness of the frequency estimates for a short-term gaging station can be adjusted by various methods including weighting the at-site results in association with frequency estimates from regional regression equations (RREs) by using the Weighted Independent Estimates (WIE) program. Also, for gaging stations that cannot be adjusted by using the WIE program because of regulation or drainage areas too large for application of RREs, frequency estimates might be improved by using record extension procedures, including a mixed-station analysis using the maintenance of variance type I (MOVE.1) procedure. The U.S. Geological Survey, in cooperation with the Montana Department of Transportation and the Montana Department of Natural Resources and Conservation, completed a study to provide adjusted frequency estimates for selected gaging stations through water year 2011. The purpose of Chapter D of this Scientific Investigations Report is to present adjusted frequency estimates for 504 selected streamflow-gaging stations in or near Montana based on data through water year 2011. Estimates of peak-flow magnitudes for the 66.7-, 50-, 42.9-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probabilities are reported.
These annual exceedance probabilities correspond to the 1.5-, 2-, 2.33-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-year recurrence intervals, respectively. The at-site frequency estimates were adjusted by weighting with frequency estimates from RREs using the WIE program for 438 selected gaging stations in Montana. These 438 selected gaging stations (1) had periods of record less than or equal to 40 years, (2) represented unregulated or minor regulation conditions, and (3) had drainage areas less than about 2,750 square miles. The weighted-average frequency estimates obtained by weighting with RREs generally are considered to provide improved frequency estimates. In some cases, there are substantial differences among the at-site frequency estimates, the regression-equation frequency estimates, and the weighted-average frequency estimates. In these cases, thoughtful consideration should be applied when selecting the appropriate frequency estimate. Some factors that might be considered when selecting the appropriate frequency estimate include (1) whether the specific gaging station has peak-flow characteristics that distinguish it from most other gaging stations used in developing the RREs for the hydrologic region; and (2) the length of the peak-flow record and the general climatic characteristics during the period when the peak-flow data were collected. For critical structure-design applications, a conservative approach would be to select the higher of the at-site frequency estimate and the weighted-average frequency estimate. The mixed-station MOVE.1 procedure generally was applied in cases where three or more gaging stations were located on the same large river and some of the gaging stations could not be adjusted using the weighted-average method because of regulation or drainage areas too large for application of RREs.
The mixed-station MOVE.1 procedure was applied to 66 selected gaging stations on 19 large rivers. The general approach for using mixed-station record extension procedures to adjust at-site frequencies involved (1) determining appropriate base periods for the gaging stations on the large rivers, (2) synthesizing peak-flow data for the gaging stations with incomplete peak-flow records during the base periods by using the mixed-station MOVE.1 procedure, and (3) conducting frequency analysis on the combined recorded and synthesized peak-flow data for each gaging station. Frequency estimates for the combined recorded and synthesized datasets for 66 gaging stations with incomplete peak-flow records during the base periods are presented. The uncertainties in the mixed-station record extension results are difficult to directly quantify; thus, it is important to understand the intended use of the estimated frequencies based on analysis of the combined recorded and synthesized datasets. The estimated frequencies are considered general estimates of frequency relations among gaging stations on the same stream channel that might be expected if the gaging stations had been gaged during the same long-term base period. However, because the mixed-station record extension procedures involve secondary statistical analysis with accompanying errors, the uncertainty of the frequency estimates is larger than would be obtained by collecting systematic records for the same number of years in the base period.
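The WIE weighting described above combines an at-site estimate with a regression-equation estimate inversely to their variances in log space. A minimal sketch of that inverse-variance weighting, with made-up discharge values and variances (the actual WIE program handles the variance estimation itself):

```python
import math

def weighted_independent_estimate(q_site, var_site, q_reg, var_reg):
    """Inverse-variance weighting of two independent flood-frequency estimates.

    q_*   : discharge estimates for a given annual exceedance probability
    var_* : variances of the log10-transformed estimates (assumed known here)
    """
    w_site = 1.0 / var_site
    w_reg = 1.0 / var_reg
    log_q = (w_site * math.log10(q_site) + w_reg * math.log10(q_reg)) / (w_site + w_reg)
    var_w = 1.0 / (w_site + w_reg)          # variance of the weighted estimate
    return 10.0 ** log_q, var_w

# Hypothetical 1-percent AEP estimates: at-site vs. regional regression
q_w, var_w = weighted_independent_estimate(9000.0, 0.04, 12000.0, 0.08)
```

With equal variances this reduces to the geometric mean of the two estimates, and the weighted variance is always smaller than either input variance, which is why the weighted estimate is generally preferred.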

  20. A simple method for estimating frequency response corrections for eddy covariance systems

    Treesearch

    W. J. Massman

    2000-01-01

    A simple analytical formula is developed for estimating the frequency attenuation of eddy covariance fluxes due to sensor response, path-length averaging, sensor separation, signal processing, and flux averaging periods. Although it is an approximation based on flat terrain cospectra, this analytical formula should have broader applicability than just flat-terrain...
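The principle behind such corrections is that the measured flux is the cospectrum integrated after low-pass filtering by the sensor system, so the correction factor is a ratio of cospectral integrals. The sketch below is a generic numeric illustration of that ratio, not Massman's closed-form formula: both the cospectrum shape and the first-order sensor response are assumed forms.

```python
import numpy as np

def flux_correction_factor(tau, fx=0.1, f=None):
    """Ratio of unattenuated to attenuated cospectral integrals.

    tau : equivalent first-order time constant of the sensor system (s)
    fx  : parameter of an assumed broad cospectrum shape (Hz)
    """
    if f is None:
        f = np.logspace(-4, 1.5, 4000)           # frequency grid (Hz)
    co = 1.0 / (1.0 + (f / fx) ** 2)             # assumed flat-terrain-like cospectrum
    H = 1.0 / (1.0 + (2.0 * np.pi * f * tau) ** 2)  # first-order response gain

    def integral(y):                             # trapezoidal quadrature
        return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(f))

    return integral(co) / integral(co * H)

c_fast = flux_correction_factor(0.1)   # fast sensor: small correction
c_slow = flux_correction_factor(0.5)   # slow sensor: larger correction
```

The factor is always greater than 1 (attenuation only removes covariance) and grows with the sensor time constant, which is the qualitative behavior the analytical formula captures in closed form.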

  1. Speech Enhancement, Gain, and Noise Spectrum Adaptation Using Approximate Bayesian Estimation

    PubMed Central

    Hao, Jiucang; Attias, Hagai; Nagarajan, Srikantan; Lee, Te-Won; Sejnowski, Terrence J.

    2010-01-01

This paper presents a new approximate Bayesian estimator for enhancing a noisy speech signal. The speech model is assumed to be a Gaussian mixture model (GMM) in the log-spectral domain, in contrast to most current models, which work in the frequency domain. Exact signal estimation is a computationally intractable problem. We derive three approximations to enhance the efficiency of signal estimation. The Gaussian approximation transforms the log-spectral domain GMM into the frequency domain using a minimum Kullback–Leibler (KL) divergence criterion. The frequency-domain Laplace method computes the maximum a posteriori (MAP) estimator for the spectral amplitude. Correspondingly, the log-spectral domain Laplace method computes the MAP estimator for the log-spectral amplitude. Further, gain and noise spectrum adaptation are implemented using the expectation–maximization (EM) algorithm within the GMM under the Gaussian approximation. The proposed algorithms are evaluated by applying them to enhance speech corrupted by speech-shaped noise (SSN). The experimental results demonstrate that the proposed algorithms offer improved signal-to-noise ratio, lower word recognition error rate, and less spectral distortion. PMID:20428253

  2. INSTRUMENTS AND METHODS OF INVESTIGATION: Spectral and spectral-frequency methods of investigating atmosphereless bodies of the Solar system

    NASA Astrophysics Data System (ADS)

    Busarev, Vladimir V.; Prokof'eva-Mikhailovskaya, Valentina V.; Bochkov, Valerii V.

    2007-06-01

A method of reflectance spectrophotometry of atmosphereless bodies of the Solar system, its specificity, and the means of eliminating basic spectral noise are considered. As a development, joining the method of reflectance spectrophotometry with the frequency analysis of observational data series is proposed. The combined spectral-frequency method allows identification of formations with distinctive spectral features, and estimation of their sizes and distribution on the surface of atmosphereless celestial bodies. As applied to investigations of asteroids 21 Lutetia and 4 Vesta, the spectral-frequency method has given us the possibility of obtaining fundamentally new information about minor planets.

  3. Tire-road friction coefficient estimation based on the resonance frequency of in-wheel motor drive system

    NASA Astrophysics Data System (ADS)

    Chen, Long; Bian, Mingyuan; Luo, Yugong; Qin, Zhaobo; Li, Keqiang

    2016-01-01

In this paper, a resonance frequency-based tire-road friction coefficient (TRFC) estimation method is proposed by considering the dynamics performance of the in-wheel motor drive system under small slip ratio conditions. A frequency response function (FRF) is deduced for the drive system that is composed of a dynamic tire model and a simplified motor model. A linear relationship between the squared system resonance frequency and the TRFC is described with the FRF. Furthermore, the resonance frequency is identified by the Auto-Regressive eXogenous model using the information of the motor torque and the wheel speed, and the TRFC is estimated thereafter by a recursive least squares filter with the identified resonance frequency. Finally, the effectiveness of the proposed approach is demonstrated through simulations and experimental tests on different road surfaces.
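The recursive least squares (RLS) step above exploits the linear relation between squared resonance frequency and TRFC. The sketch below is a generic RLS estimator with a forgetting factor, applied to a *hypothetical* linear map (the slope and intercept are illustrative, not the paper's identified values), recovering the map from noisy observations.

```python
import numpy as np

class RecursiveLeastSquares:
    """Generic RLS estimator for y = theta . x with forgetting factor lam."""

    def __init__(self, n_params, lam=0.99, delta=1e3):
        self.theta = np.zeros(n_params)
        self.P = delta * np.eye(n_params)   # large initial covariance
        self.lam = lam

    def update(self, x, y):
        x = np.asarray(x, dtype=float)
        Px = self.P @ x
        gain = Px / (self.lam + x @ Px)
        self.theta = self.theta + gain * (y - x @ self.theta)
        self.P = (self.P - np.outer(gain, Px)) / self.lam
        return self.theta

# Illustrative linear model: squared resonance frequency vs. friction coefficient
rng = np.random.default_rng(1)
a_true, b_true = 400.0, 900.0               # hypothetical slope and intercept
rls = RecursiveLeastSquares(2)
for _ in range(300):
    mu = rng.uniform(0.1, 1.0)              # friction coefficient sample
    f_sq = a_true * mu + b_true + rng.normal(0.0, 1.0)   # noisy squared freq.
    rls.update([mu, 1.0], f_sq)
a_hat, b_hat = rls.theta
```

Once the map is known, the same RLS machinery can be run the other way: with the identified resonance frequency as the measurement and the friction coefficient as the unknown parameter.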

  4. Frequency characteristics of vibration generated by dual acoustic radiation force for estimating viscoelastic properties of biological tissues

    NASA Astrophysics Data System (ADS)

    Watanabe, Ryoichi; Arakawa, Mototaka; Kanai, Hiroshi

    2018-07-01

    We proposed a new method for estimating the viscoelastic property of the local region of a sample. The viscoelastic parameters of the phantoms simulating the biological tissues were quantitatively estimated by analyzing the frequency characteristics of displacement generated by acoustic excitation. The samples were locally strained by irradiating them with the ultrasound simultaneously generated from two point-focusing transducers by applying the sum of two signals with slightly different frequencies of approximately 1 MHz. The surface of a phantom was excited in the frequency range of 20–2,000 Hz, and its displacement was measured. The frequency dependence of the acceleration provided by the acoustic radiation force was also measured. From these results, we determined the frequency characteristics of the transfer function from the stress to the strain and estimated the ratio of the elastic modulus to the viscosity modulus (K/η) by fitting the data to the Maxwell model. Moreover, the elastic modulus K was separately estimated from the measured sound velocity and density of the phantom, and the viscosity modulus η was evaluated by substituting the estimated elastic modulus into the obtained K/η ratio.

  5. Experimental demonstration of OFDM/OQAM transmission with DFT-based channel estimation for visible laser light communications

    NASA Astrophysics Data System (ADS)

    He, Jing; Shi, Jin; Deng, Rui; Chen, Lin

    2017-08-01

Recently, visible light communication (VLC) based on light-emitting diodes (LEDs) has been considered a candidate technology for fifth-generation (5G) communications: VLC is free of electromagnetic interference, and it can simplify the integration of VLC into heterogeneous wireless networks. Because the data rates of LED-based VLC systems are limited by low pumping efficiency, small output power, and narrow modulation bandwidth, visible laser light communication (VLLC) systems based on laser diodes (LDs) have attracted increasing attention. In addition, orthogonal frequency division multiplexing/offset quadrature amplitude modulation (OFDM/OQAM) is currently attracting attention in optical communications. Because it requires no cyclic prefix (CP) and uses pulse shapes that are well localized in the time-frequency domain, it can achieve high spectral efficiency. Moreover, OFDM/OQAM has lower out-of-band power leakage, which increases the system robustness against inter-carrier interference (ICI) and frequency offset. In this paper, a Discrete Fourier Transform (DFT)-based channel estimation scheme combined with the interference approximation method (IAM) is proposed and experimentally demonstrated for a VLLC OFDM/OQAM system. The performance of the VLLC OFDM/OQAM system with and without DFT-based channel estimation is investigated. Moreover, the proposed DFT-based channel estimation scheme and the intra-symbol frequency-domain averaging (ISFA)-based method are also compared for the VLLC OFDM/OQAM system. The experimental results show that the EVM performance of the DFT-based channel estimation scheme is improved by about 3 dB compared with the conventional IAM method. In addition, the DFT-based channel estimation scheme resists channel noise more effectively than the ISFA-based method.
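The denoising idea behind DFT-based channel estimation can be sketched independently of the OFDM/OQAM and IAM specifics: transform a noisy least-squares frequency-domain channel estimate to the time domain, keep only the first few taps (up to an assumed maximum delay spread), and transform back, which discards the noise in the remaining taps. The subcarrier count, tap count, and noise level below are arbitrary assumptions.

```python
import numpy as np

def dft_channel_estimate(H_ls, n_taps):
    """Denoise an LS channel estimate by keeping only the first n_taps
    time-domain taps (assumed maximum channel delay spread)."""
    h = np.fft.ifft(H_ls)
    h[n_taps:] = 0.0            # noise-only taps beyond the delay spread
    return np.fft.fft(h)

# Synthetic check: 4-tap channel on 64 subcarriers plus estimation noise
rng = np.random.default_rng(0)
N, L = 64, 4
h_true = rng.normal(size=L) + 1j * rng.normal(size=L)
H_true = np.fft.fft(h_true, N)
noise = 0.3 * (rng.normal(size=N) + 1j * rng.normal(size=N))
H_ls = H_true + noise
H_dft = dft_channel_estimate(H_ls, n_taps=8)
err_ls = np.linalg.norm(H_ls - H_true)
err_dft = np.linalg.norm(H_dft - H_true)
```

Keeping 8 of 64 taps removes roughly 56/64 of the noise energy while leaving the 4-tap channel untouched, which is the source of the EVM gain reported in the record.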

  6. Detection of main tidal frequencies using least squares harmonic estimation method

    NASA Astrophysics Data System (ADS)

    Mousavian, R.; Hossainali, M. Mashhadi

    2012-11-01

In this paper the efficiency of the method of Least Squares Harmonic Estimation (LS-HE) for detecting the main tidal frequencies is investigated. Using this method, the tidal spectrum of the sea level data is evaluated at two tidal stations: Bandar Abbas in the south of Iran and Workington on the eastern coast of the UK. The amplitudes of the tidal constituents at these two tidal stations are not the same. Moreover, in contrast to the Workington station, the Bandar Abbas tidal record is not an equispaced time series. Therefore, the analysis of the hourly tidal observations in Bandar Abbas and Workington can provide a reasonable insight into the efficiency of this method for analyzing the frequency content of tidal time series. Furthermore, applying the method of Fourier transform to the Workington tidal record provides an independent source of information for evaluating the tidal spectrum proposed by the LS-HE method. According to the obtained results, the spectra of these two tidal records contain the components with the maximum amplitudes among the expected ones in this time span, as well as some new frequencies in the list of known constituents. In addition, in terms of frequencies with maximum amplitude, the power spectra derived from the two aforementioned methods are the same. These results demonstrate the ability of LS-HE to identify the frequencies with maximum amplitude in both tidal records.
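Because LS-HE evaluates a least-squares harmonic fit at each candidate frequency, it needs no equispaced sampling, which is why it suits the Bandar Abbas record. A minimal sketch on a synthetic, irregularly sampled record containing a single M2-like constituent (the 12.42-hour period and amplitudes are assumptions for illustration):

```python
import numpy as np

def lshe_power(t, y, freqs):
    """Least-squares harmonic power at each candidate frequency.

    For each frequency, fit y ~ a*cos(wt) + b*sin(wt) + c and report a^2 + b^2.
    Works for arbitrarily (non-equispaced) sampled t.
    """
    powers = []
    for f in freqs:
        w = 2.0 * np.pi * f
        A = np.column_stack([np.cos(w * t), np.sin(w * t), np.ones_like(t)])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        powers.append(coef[0] ** 2 + coef[1] ** 2)   # squared amplitude
    return np.array(powers)

# Irregularly sampled 30-day record with an M2-like 12.42-h constituent
rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0.0, 30 * 24, 500))          # sample times (hours)
y = 1.5 * np.cos(2 * np.pi * t / 12.42 + 0.3) + 0.2 * rng.normal(size=t.size)
periods = np.linspace(10.0, 14.0, 161)               # candidate periods (h)
best_period = periods[np.argmax(lshe_power(t, y, 1.0 / periods))]
```

Scanning a grid of candidate frequencies and keeping the strongest fits is exactly how the tidal spectrum in the record is assembled; with real data, the grid would cover the full list of known constituents.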

  7. Estimating snow depth of alpine snowpack via airborne multifrequency passive microwave radiance observations: Colorado, USA

    NASA Astrophysics Data System (ADS)

    Kim, R. S.; Durand, M. T.; Li, D.; Baldo, E.; Margulis, S. A.; Dumont, M.; Morin, S.

    2017-12-01

This paper presents a newly proposed snow depth retrieval approach for mountainous deep snow using airborne multifrequency passive microwave (PM) radiance observations. In contrast to previous snow depth estimation using satellite PM radiance assimilation, the newly proposed method utilized a single flight observation and deployed snow hydrologic models. This method is promising because satellite-based retrieval methods have difficulty estimating snow depth due to their coarse resolution and computational effort. The approach consists of a particle filter using combinations of multiple PM frequencies and a multi-layer snow physical model (i.e., Crocus) to resolve melt-refreeze crusts. The method was applied over the NASA Cold Land Processes Experiment (CLPX) area in Colorado during 2002 and 2003. Results showed a significant improvement over the prior snow depth estimates and a capability to reduce the prior snow depth biases. When applying our snow depth retrieval algorithm using a combination of four PM frequencies (10.7, 18.7, 37.0, and 89.0 GHz), the RMSE values were reduced by 48% at the snow depth transect sites where forest density was less than 5%, despite deep snow conditions. The method displayed sensitivity to different combinations of frequencies, model stratigraphy (i.e., different numbers of layers in the snow physical model), and estimation methods (particle filter and Kalman filter). The prior RMSE values at the forest-covered areas were reduced by 37-42% even in the presence of forest cover.
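The particle-filter update at the core of such retrievals can be sketched in a few lines: weight prior snow-depth particles by the likelihood of the observed brightness temperature, then resample. The linear brightness-temperature model, its coefficients, and all numbers below are hypothetical stand-ins for the radiative transfer and Crocus modeling in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def particle_filter_update(particles, obs, obs_operator, obs_sigma):
    """Bootstrap update: weight particles by observation likelihood, resample."""
    w = np.exp(-0.5 * ((obs - obs_operator(particles)) / obs_sigma) ** 2)
    w /= w.sum()
    idx = rng.choice(particles.size, size=particles.size, p=w)
    return particles[idx]

# Hypothetical linearized radiometric model: deeper snow -> colder 37 GHz Tb
tb_model = lambda depth: 250.0 - 30.0 * depth        # K, illustrative only
true_depth = 1.8                                      # m
prior = rng.normal(1.0, 0.5, size=5000)               # biased prior particles
obs = tb_model(true_depth) + rng.normal(0.0, 1.0)     # one flight observation
posterior = particle_filter_update(prior, obs, tb_model, obs_sigma=2.0)
```

Even a single informative observation pulls the biased prior ensemble toward the truth, which mirrors the prior-bias reduction reported in the record; the real method replaces the linear map with a multifrequency radiative transfer operator.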

  8. Statistical methods for estimating normal blood chemistry ranges and variance in rainbow trout (Salmo gairdneri), Shasta Strain

    USGS Publications Warehouse

    Wedemeyer, Gary A.; Nelson, Nancy C.

    1975-01-01

Gaussian and nonparametric (percentile estimate and tolerance interval) statistical methods were used to estimate normal ranges for blood chemistry variables (bicarbonate, bilirubin, calcium, hematocrit, hemoglobin, magnesium, mean cell hemoglobin concentration, osmolality, inorganic phosphorus, and pH) for juvenile rainbow trout (Salmo gairdneri, Shasta strain) held under defined environmental conditions. The percentile estimate and Gaussian methods gave similar normal ranges, whereas the tolerance interval method gave consistently wider ranges for all blood variables except hemoglobin. If the underlying frequency distribution is unknown, the percentile estimate procedure would be the method of choice.
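The recommended percentile estimate procedure is simple to state: take the central percentiles of the observed values, with no distributional assumption. A minimal sketch on simulated data (the hematocrit mean and spread are illustrative, not the study's values):

```python
import numpy as np

def percentile_normal_range(values, coverage=0.95):
    """Central nonparametric reference interval, e.g. 2.5th-97.5th percentiles."""
    tail = 100.0 * (1.0 - coverage) / 2.0
    return np.percentile(values, [tail, 100.0 - tail])

rng = np.random.default_rng(4)
hematocrit = rng.normal(35.0, 3.0, size=20000)   # simulated hematocrit values (%)
lo, hi = percentile_normal_range(hematocrit)
```

On Gaussian data this agrees with the mean ± 1.96 SD interval, which is why the study found the percentile and Gaussian methods gave similar ranges; on skewed variables the two diverge and the percentile method remains valid.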

  9. Correcting length-frequency distributions for imperfect detection

    USGS Publications Warehouse

    Breton, André R.; Hawkins, John A.; Winkelman, Dana L.

    2013-01-01

    Sampling gear selects for specific sizes of fish, which may bias length-frequency distributions that are commonly used to assess population size structure, recruitment patterns, growth, and survival. To properly correct for sampling biases caused by gear and other sources, length-frequency distributions need to be corrected for imperfect detection. We describe a method for adjusting length-frequency distributions when capture and recapture probabilities are a function of fish length, temporal variation, and capture history. The method is applied to a study involving the removal of Smallmouth Bass Micropterus dolomieu by boat electrofishing from a 38.6-km reach on the Yampa River, Colorado. Smallmouth Bass longer than 100 mm were marked and released alive from 2005 to 2010 on one or more electrofishing passes and removed on all other passes from the population. Using the Huggins mark–recapture model, we detected a significant effect of fish total length, previous capture history (behavior), year, pass, year×behavior, and year×pass on capture and recapture probabilities. We demonstrate how to partition the Huggins estimate of abundance into length frequencies to correct for these effects. Uncorrected length frequencies of fish removed from Little Yampa Canyon were negatively biased in every year by as much as 88% relative to mark–recapture estimates for the smallest length-class in our analysis (100–110 mm). Bias declined but remained high even for adult length-classes (≥200 mm). The pattern of bias across length-classes was variable across years. The percentage of unadjusted counts that were below the lower 95% confidence interval from our adjusted length-frequency estimates were 95, 89, 84, 78, 81, and 92% from 2005 to 2010, respectively. Length-frequency distributions are widely used in fisheries science and management. 
Our simple method for correcting length-frequency estimates for imperfect detection could be widely applied when mark–recapture data are available.
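The simplest version of the correction above is Horvitz-Thompson style: divide each length-class count by its estimated capture probability. The sketch below uses a hypothetical logistic length-capture relationship in place of the fitted Huggins model; the coefficients and counts are illustrative only.

```python
import numpy as np

def adjust_length_frequency(counts, lengths, b0=-3.0, b1=0.02):
    """Horvitz-Thompson-style correction: divide each length-class count by a
    (hypothetical) logistic capture probability p(L) = 1/(1+exp(-(b0+b1*L)))."""
    lengths = np.asarray(lengths, dtype=float)
    counts = np.asarray(counts, dtype=float)
    p = 1.0 / (1.0 + np.exp(-(b0 + b1 * lengths)))
    return counts / p

lengths = np.arange(105.0, 305.0, 10.0)    # length-class midpoints (mm)
counts = np.full(lengths.size, 50.0)       # observed catch per class
adjusted = adjust_length_frequency(counts, lengths)
```

With capture probability increasing in length, equal observed counts imply many more small fish than large ones, reproducing the pattern in the record where the smallest classes were the most negatively biased.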

  10. Evaluation of design flood frequency methods for Iowa streams : final report, June 2009.

    DOT National Transportation Integrated Search

    2009-06-01

The objective of this project was to assess the predictive accuracy of flood frequency estimation for small Iowa streams based on the Rational Method, the NRCS curve number approach, and the Iowa Runoff Chart. The evaluation was based on comparis...

  11. Local regression type methods applied to the study of geophysics and high frequency financial data

    NASA Astrophysics Data System (ADS)

    Mariani, M. C.; Basu, K.

    2014-09-01

In this work we applied locally weighted scatterplot smoothing techniques (Lowess/Loess) to geophysical and high frequency financial data. We first analyze and apply this technique to the California earthquake geological data. A spatial analysis was performed to show that the estimation of the earthquake magnitude at a fixed location is very accurate, up to a relative error of 0.01%. We also applied the same method to a high frequency data set arising in the financial sector and obtained similar satisfactory results. The application of this approach to the two different data sets demonstrates that the overall method is accurate and efficient, and that the Lowess approach is much more desirable than the Loess method. Previous works studied time series analysis; in this paper, our local regression models perform a spatial analysis of the geophysics data, providing different information. For the high frequency data, our models estimate the curve of best fit where data are dependent on time.
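The core of Lowess is a tricube-weighted local linear regression at each point. The sketch below implements that single-pass core (without the robustifying iterations of the full algorithm) and smooths a noisy synthetic curve; the test signal is an assumption for illustration.

```python
import numpy as np

def lowess_smooth(x, y, frac=0.3):
    """Local linear regression with tricube weights (single pass,
    no robustifying iterations)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = x.size
    k = max(3, int(frac * n))                 # neighborhood size
    fitted = np.empty(n)
    for i in range(n):
        d = np.abs(x - x[i])
        nearest = np.argsort(d)[:k]
        dmax = d[nearest].max()
        w = (1.0 - (d[nearest] / dmax) ** 3) ** 3   # tricube kernel
        sw = np.sqrt(w)
        A = np.column_stack([np.ones(k), x[nearest] - x[i]])
        coef, *_ = np.linalg.lstsq(A * sw[:, None], y[nearest] * sw, rcond=None)
        fitted[i] = coef[0]                   # local fit evaluated at x[i]
    return fitted

rng = np.random.default_rng(5)
x = np.linspace(0.0, 10.0, 200)
truth = np.sin(x)
y = truth + 0.3 * rng.normal(size=x.size)
smoothed = lowess_smooth(x, y, frac=0.15)
```

The `frac` parameter controls the bias-variance trade-off: larger neighborhoods suppress more noise but flatten curvature, which is the practical tuning knob in both the geophysical and financial applications.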

  12. A general theory on frequency and time-frequency analysis of irregularly sampled time series based on projection methods - Part 1: Frequency analysis

    NASA Astrophysics Data System (ADS)

    Lenoir, Guillaume; Crucifix, Michel

    2018-03-01

    We develop a general framework for the frequency analysis of irregularly sampled time series. It is based on the Lomb-Scargle periodogram, but extended to algebraic operators accounting for the presence of a polynomial trend in the model for the data, in addition to a periodic component and a background noise. Special care is devoted to the correlation between the trend and the periodic component. This new periodogram is then cast into the Welch overlapping segment averaging (WOSA) method in order to reduce its variance. We also design a test of significance for the WOSA periodogram, against the background noise. The model for the background noise is a stationary Gaussian continuous autoregressive-moving-average (CARMA) process, more general than the classical Gaussian white or red noise processes. CARMA parameters are estimated following a Bayesian framework. We provide algorithms that compute the confidence levels for the WOSA periodogram and fully take into account the uncertainty in the CARMA noise parameters. Alternatively, a theory using point estimates of CARMA parameters provides analytical confidence levels for the WOSA periodogram, which are more accurate than Markov chain Monte Carlo (MCMC) confidence levels and, below some threshold for the number of data points, less costly in computing time. We then estimate the amplitude of the periodic component with least-squares methods, and derive an approximate proportionality between the squared amplitude and the periodogram. This proportionality leads to a new extension for the periodogram: the weighted WOSA periodogram, which we recommend for most frequency analyses with irregularly sampled data. The estimated signal amplitude also permits filtering in a frequency band. Our results generalise and unify methods developed in the fields of geosciences, engineering, astronomy and astrophysics. 
They also constitute the starting point for an extension to the continuous wavelet transform developed in a companion article (Lenoir and Crucifix, 2018). All the methods presented in this paper are available to the reader in the Python package WAVEPAL.

  13. Using Caspar Creek flow records to test peak flow estimation methods applicable to crossing design

    Treesearch

    Peter H. Cafferata; Leslie M. Reid

    2017-01-01

    Long-term flow records from sub-watersheds in the Caspar Creek Experimental Watersheds were used to test the accuracy of four methods commonly used to estimate peak flows in small forested watersheds: the Rational Method, the updated USGS Magnitude and Frequency Method, flow transference methods, and the NRCS curve number method. Comparison of measured and calculated...

  14. Estimating haplotype frequencies by combining data from large DNA pools with database information.

    PubMed

    Gasbarra, Dario; Kulathinal, Sangita; Pirinen, Matti; Sillanpää, Mikko J

    2011-01-01

    We assume that allele frequency data have been extracted from several large DNA pools, each containing genetic material of up to hundreds of sampled individuals. Our goal is to estimate the haplotype frequencies among the sampled individuals by combining the pooled allele frequency data with prior knowledge about the set of possible haplotypes. Such prior information can be obtained, for example, from a database such as HapMap. We present a Bayesian haplotyping method for pooled DNA based on a continuous approximation of the multinomial distribution. The proposed method is applicable when the sizes of the DNA pools and/or the number of considered loci exceed the limits of several earlier methods. In the example analyses, the proposed model clearly outperforms a deterministic greedy algorithm on real data from the HapMap database. With a small number of loci, the performance of the proposed method is similar to that of an EM-algorithm, which uses a multinormal approximation for the pooled allele frequencies, but which does not utilize prior information about the haplotypes. The method has been implemented using Matlab and the code is available upon request from the authors.

  15. The Relationship between Relative Fundamental Frequency and a Kinematic Estimate of Laryngeal Stiffness in Healthy Adults

    ERIC Educational Resources Information Center

    McKenna, Victoria S.; Heller Murray, Elizabeth S.; Lien, Yu-An S.; Stepp, Cara E.

    2016-01-01

    Purpose: This study examined the relationship between the acoustic measure relative fundamental frequency (RFF) and a kinematic estimate of laryngeal stiffness. Method: Twelve healthy adults (mean age = 22.7 years, SD = 4.4; 10 women, 2 men) produced repetitions of /ifi/ while varying their vocal effort during simultaneous acoustic and video…

  16. A robust and accurate center-frequency estimation (RACE) algorithm for improving motion estimation performance of SinMod on tagged cardiac MR images without known tagging parameters.

    PubMed

    Liu, Hong; Wang, Jie; Xu, Xiangyang; Song, Enmin; Wang, Qian; Jin, Renchao; Hung, Chih-Cheng; Fei, Baowei

    2014-11-01

A robust and accurate center-frequency (CF) estimation (RACE) algorithm for improving the performance of the local sine-wave modeling (SinMod) method, which is a good motion estimation method for tagged cardiac magnetic resonance (MR) images, is proposed in this study. The RACE algorithm can automatically, effectively and efficiently produce a very appropriate CF estimate for the SinMod method, under the circumstance that the specified tagging parameters are unknown, on account of the following two key techniques: (1) the well-known mean-shift algorithm, which can provide accurate and rapid CF estimation; and (2) an original two-direction-combination strategy, which can further enhance the accuracy and robustness of CF estimation. Some other available CF estimation algorithms are brought out for comparison. Several validation approaches that can work on real data without ground truths are specially designed. Experimental results on in vivo human cardiac data demonstrate the significance of accurate CF estimation for SinMod, and validate the effectiveness of RACE in facilitating the motion estimation performance of SinMod. Copyright © 2014 Elsevier Inc. All rights reserved.
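Mean-shift, the first of RACE's two key techniques, iteratively moves an estimate toward the nearest mode of a (here magnitude-weighted) density. A generic 1D sketch on a toy two-lobe spectrum — the lobe positions, widths, and bandwidth are assumptions, and the paper's two-direction-combination strategy is not modeled:

```python
import numpy as np

def mean_shift_mode(samples, weights, x0, bandwidth, n_iter=100, tol=1e-8):
    """Gaussian-kernel mean shift: iterate from x0 toward the nearest
    weighted mode of the samples."""
    x = float(x0)
    for _ in range(n_iter):
        k = weights * np.exp(-0.5 * ((samples - x) / bandwidth) ** 2)
        x_new = np.sum(k * samples) / np.sum(k)
        if abs(x_new - x) < tol:
            break
        x = x_new
    return x

# Toy magnitude spectrum: dominant lobe near 0.30 cycles/pixel, weaker near 0.10
freqs = np.linspace(0.0, 0.5, 501)
mag = (np.exp(-0.5 * ((freqs - 0.30) / 0.02) ** 2)
       + 0.4 * np.exp(-0.5 * ((freqs - 0.10) / 0.02) ** 2))
cf = mean_shift_mode(freqs, mag, x0=0.25, bandwidth=0.03)
```

Because each step only averages within the kernel's reach, a rough starting guess converges to the dominant lobe rather than being pulled toward distant spurious peaks, which is what makes mean-shift attractive when the tagging parameters (and hence the expected CF) are unknown.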

  17. Genetic variants of serum butyrylcholinesterase in Chilean Mapuche Indians.

    PubMed

    Acuña, M; Eaton, L; Ramírez, N R; Cifuentes, L; Llop, E

    2003-05-01

We estimated the frequencies of serum butyrylcholinesterase (BChE) alleles in three tribes of Mapuche Indians from southern Chile, using enzymatic methods, and we estimated the frequency of allele BCHE*K in one tribe using primer-introduced restriction analysis (PCR-PIRA). The three tribes have different degrees of European admixture, which is reflected in the observed frequencies of the atypical allele BCHE*A: 1.11% in Huilliches, 0.89% in Cuncos, and 0% in Pehuenches. This result is evidence in favor of the hypothesis that BCHE*A is absent in native Amerindians. The frequencies of BCHE*F were higher than in most reported studies (3.89%, 5.78%, and 4.41%, respectively). These results are probably due to an overestimation of the frequency of allele BCHE*F, since none of the 20 BCHE UF individuals (identified by the enzymatic test) showed either of the two DNA base substitutions associated with this allele. Although enzymatic methods rarely detect the presence of allele BCHE*K, PCR-PIRA found the allele at an appreciable frequency (5.76%), although lower than that found in other ethnic groups. Since the observed frequencies of unusual alleles correspond to estimated percentages of European admixture, it is likely that none of these unusual alleles were present in Mapuche Indians before the arrival of Europeans. Copyright 2003 Wiley-Liss, Inc.

  18. Tidal frequency estimation for closed basins

    NASA Technical Reports Server (NTRS)

    Eades, J. B., Jr.

    1978-01-01

A method was developed for determining the fundamental tidal frequencies of closed basins of water by means of an eigenvalue analysis. The mathematical model employed was the Laplace tidal equations.

  19. Techniques for estimating flood-peak discharges of rural, unregulated streams in Ohio

    USGS Publications Warehouse

    Koltun, G.F.

    2003-01-01

    Regional equations for estimating 2-, 5-, 10-, 25-, 50-, 100-, and 500-year flood-peak discharges at ungaged sites on rural, unregulated streams in Ohio were developed by means of ordinary and generalized least-squares (GLS) regression techniques. One-variable, simple equations and three-variable, full-model equations were developed on the basis of selected basin characteristics and flood-frequency estimates determined for 305 streamflow-gaging stations in Ohio and adjacent states. The average standard errors of prediction ranged from about 39 to 49 percent for the simple equations, and from about 34 to 41 percent for the full-model equations. Flood-frequency estimates determined by means of log-Pearson Type III analyses are reported along with weighted flood-frequency estimates, computed as a function of the log-Pearson Type III estimates and the regression estimates. Values of explanatory variables used in the regression models were determined from digital spatial data sets by means of a geographic information system (GIS), with the exception of drainage area, which was determined by digitizing the area within basin boundaries manually delineated on topographic maps. Use of GIS-based explanatory variables represents a major departure in methodology from that described in previous reports on estimating flood-frequency characteristics of Ohio streams. Examples are presented illustrating application of the regression equations to ungaged sites on ungaged and gaged streams. A method is provided to adjust regression estimates for ungaged sites by use of weighted and regression estimates for a gaged site on the same stream. A region-of-influence method, which employs a computer program to estimate flood-frequency characteristics for ungaged sites based on data from gaged sites with similar characteristics, was also tested and compared to the GLS full-model equations. 
For all recurrence intervals, the GLS full-model equations had superior prediction accuracy relative to the simple equations and therefore are recommended for use.
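The report computes weighted flood-frequency estimates as a function of the log-Pearson Type III (at-site) and regression estimates, but this summary does not state the weighting. A common variant weights the two estimates inversely by their variances in log space; a sketch under that assumption (the function name is illustrative):

```python
import math

def weighted_flood_estimate(q_site, var_site, q_reg, var_reg):
    """Combine the at-site (log-Pearson Type III) and regression
    flood-peak estimates by inverse-variance weighting in log space."""
    w_site, w_reg = 1.0 / var_site, 1.0 / var_reg
    log_q = (w_site * math.log10(q_site)
             + w_reg * math.log10(q_reg)) / (w_site + w_reg)
    return 10.0 ** log_q
```

With equal variances this reduces to the geometric mean of the two estimates; as either variance shrinks, the combined value moves toward the more reliable estimate.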

  20. An LFMCW detector with new structure and FRFT based differential distance estimation method.

    PubMed

    Yue, Kai; Hao, Xinhong; Li, Ping

    2016-01-01

This paper describes a linear frequency modulated continuous wave (LFMCW) detector designed for a collision avoidance radar. The detector estimates the distance between itself and pedestrians or vehicles, thereby helping to reduce the likelihood of traffic accidents. The detector consists of a transceiver and a signal processor. A novel structure based on the intermediate frequency signal (IFS) is designed for the transceiver, which differs from the traditional LFMCW transceiver structure based on the beat frequency signal (BFS). In the signal processor, a novel fractional Fourier transform (FRFT) based differential distance estimation (DDE) method is used to detect the distance. The new IFS-based structure helps the FRFT-based DDE method reduce its computational complexity, because it does not require scanning for the optimal FRFT order. Low computational complexity ensures the feasibility of practical applications. Simulations are carried out, and the results demonstrate the efficiency of the proposed detector.

  1. Model estimation of claim risk and premium for motor vehicle insurance by using Bayesian method

    NASA Astrophysics Data System (ADS)

    Sukono; Riaman; Lesmana, E.; Wulandari, R.; Napitupulu, H.; Supian, S.

    2018-01-01

Risk models need to be estimated by the insurance company in order to predict the magnitude of claims and determine the premiums charged to the insured; this is intended to prevent losses in the future. In this paper, we discuss the estimation of claim risk models and motor vehicle insurance premiums using a Bayesian approach. It is assumed that the frequency of claims follows a Poisson distribution, while the claim amounts are assumed to follow a Gamma distribution. The parameters of the claim-frequency and claim-amount distributions are estimated using Bayesian methods. The estimated distributions of claim frequency and claim amount are then used to estimate the aggregate risk model as well as its mean and variance, and the estimated mean and variance of the aggregate risk are used to predict the premium to be charged to the insured. Based on the analysis, the frequency of claims follows a Poisson distribution with parameter λ = 5.827, while the claim amounts follow a Gamma distribution with parameters p = 7.922 and θ = 1.414. The resulting mean and variance of the aggregate claims are IDR 32,667,489.88 and IDR 38,453,900,000,000.00, respectively. The predicted pure premium to be charged to the insured is IDR 2,722,290.82. The predicted aggregate claims and premiums can serve as a reference for the insurance company's decision-making in managing reserves and premiums for motor vehicle insurance.
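The aggregate mean and variance in this compound model (Poisson claim counts, Gamma claim amounts) follow from the standard compound-Poisson moment formulas; a minimal sketch (the function name is illustrative, and generic units are used, since the currency scaling of the reported parameters is not stated in the abstract):

```python
def aggregate_moments(lam, shape, scale):
    """Compound model: claim count N ~ Poisson(lam), claim size
    X ~ Gamma(shape, scale).  For S = X1 + ... + XN:
    E[S] = lam * E[X]  and  Var[S] = lam * E[X^2]."""
    mean_x = shape * scale                   # E[X] for a Gamma
    ex2 = shape * scale ** 2 + mean_x ** 2   # E[X^2] = Var[X] + E[X]^2
    return lam * mean_x, lam * ex2
```

A loading principle (e.g. mean plus a multiple of the variance) would then turn these moments into a premium; the abstract does not specify which principle was used.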

  2. Clinical outcomes after estimated versus calculated activity of radioiodine for the treatment of hyperthyroidism: systematic review and meta-analysis.

    PubMed

    de Rooij, A; Vandenbroucke, J P; Smit, J W A; Stokkel, M P M; Dekkers, O M

    2009-11-01

Despite the long experience with radioiodine for hyperthyroidism, controversy remains regarding the optimal method to determine the activity that is required to achieve long-term euthyroidism. Objective: To compare the effect of estimated versus calculated activity of radioiodine in hyperthyroidism. Design: Systematic review and meta-analysis. We searched the databases Medline, EMBASE, Web of Science, and Cochrane Library for randomized and nonrandomized studies, comparing the effect of activity estimation methods with dosimetry for hyperthyroidism. The main outcome measure was the frequency of treatment success, defined as persistent euthyroidism after radioiodine treatment at the end of follow-up in the dose estimated and calculated dosimetry group. Furthermore, we assessed the cure rates of hyperthyroidism. Three randomized and five nonrandomized studies, comparing the effect of estimated versus calculated activity of radioiodine on clinical outcomes for the treatment of hyperthyroidism, were included. The weighted mean relative frequency of successful treatment outcome (euthyroidism) was 1.03 (95% confidence interval (CI) 0.91-1.16) for estimated versus calculated activity; the weighted mean relative frequency of cure of hyperthyroidism (eu- or hypothyroidism) was 1.03 (95% CI 0.96-1.10). Subgroup analysis showed a relative frequency of euthyroidism of 1.03 (95% CI 0.84-1.26) for Graves' disease and of 1.05 (95% CI 0.91-1.19) for toxic multinodular goiter. The two main methods used to determine the activity in the treatment of hyperthyroidism with radioiodine, estimated and calculated, resulted in an equally successful treatment outcome. However, the heterogeneity of the included studies is a strong limitation that prevents a definitive conclusion from this meta-analysis.

  3. Multi-ball and one-ball geolocation and location verification

    NASA Astrophysics Data System (ADS)

    Nelson, D. J.; Townsend, J. L.

    2017-05-01

We present analysis methods that may be used to geolocate emitters using one or more moving receivers. While some of the methods we present may apply to a broader class of signals, our primary interest is locating and tracking ships from short pulsed transmissions, such as the maritime Automatic Identification System (AIS). The AIS signal is difficult to process and track, since the pulse duration is only 25 milliseconds and pulses may be transmitted only every six to ten seconds. Several fundamental problems are addressed, including demodulation of AIS/GMSK signals, verification of the emitter location, accurate frequency and delay estimation, and identification of pulse trains from the same emitter. In particular, we present several new correlation methods, including a cross-cross correlation that greatly improves correlation accuracy over conventional methods, and cross-TDOA and cross-FDOA functions that make it possible to estimate time and frequency delay without computing a two-dimensional cross-ambiguity surface. By isolating pulses from the same emitter and accurately tracking the received signal frequency, we are able to accurately estimate the emitter location from the received Doppler characteristics.
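The paper's cross-cross correlation and cross-TDOA/FDOA functions are not reproduced here; as background, the conventional baseline they improve upon, estimating delay as the lag that maximizes a plain sample-level cross-correlation, can be sketched as (a generic illustration, not the authors' method):

```python
def tdoa_lag(x, y, max_lag):
    """Delay of y relative to x, in samples: the lag maximizing the
    cross-correlation over valid sample overlaps (integer resolution)."""
    def corr(lag):
        return sum(x[n] * y[n + lag]
                   for n in range(len(x)) if 0 <= n + lag < len(y))
    return max(range(-max_lag, max_lag + 1), key=corr)
```

Sub-sample accuracy, which geolocation needs, requires interpolating around this integer peak; that is one of the points where the paper's refined correlators come in.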

  4. Detecting SNPs and estimating allele frequencies in clonal bacterial populations by sequencing pooled DNA.

    PubMed

    Holt, Kathryn E; Teo, Yik Y; Li, Heng; Nair, Satheesh; Dougan, Gordon; Wain, John; Parkhill, Julian

    2009-08-15

Here, we present a method for estimating the frequencies of SNP alleles present within pooled samples of DNA using high-throughput short-read sequencing. The method was tested on real data from six strains of the highly monomorphic pathogen Salmonella Paratyphi A, sequenced individually and in a pool. A variety of read-mapping and quality-weighting procedures were tested to determine the optimal parameters, which afforded ≥80% sensitivity of SNP detection and strong correlation with true SNP frequency at a pool-wide read depth of 40×, declining only slightly at read depths of 20-40×. The method was implemented in Perl and relies on the open-source software Maq for read mapping and SNP calling. The Perl script is freely available from ftp://ftp.sanger.ac.uk/pub/pathogens/pools/.
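The exact quality-weighting procedures the paper tested are not detailed in the abstract; one simple variant, weighting each base call at a site by its probability of being correct (from the Phred quality) and taking the weighted fraction of alternate-allele calls, might be sketched as (function name and weighting choice are assumptions):

```python
def pooled_allele_frequency(calls, ref, alt):
    """Estimate the alt-allele frequency at one site in a pooled sample.
    calls: list of (base, phred_quality) from reads covering the site."""
    w_alt = w_total = 0.0
    for base, phred in calls:
        if base not in (ref, alt):
            continue  # discard third-allele sequencing noise
        w = 1.0 - 10.0 ** (-phred / 10.0)  # P(call is correct)
        w_total += w
        if base == alt:
            w_alt += w
    return w_alt / w_total if w_total else None
```

At uniform quality this reduces to a simple read-count ratio; with mixed qualities, low-confidence calls contribute proportionally less.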

  5. Three-frequency BDS precise point positioning ambiguity resolution based on raw observables

    NASA Astrophysics Data System (ADS)

    Li, Pan; Zhang, Xiaohong; Ge, Maorong; Schuh, Harald

    2018-02-01

All BeiDou navigation satellite system (BDS) satellites transmit signals on three frequencies, which brings new opportunities and challenges for high-accuracy precise point positioning (PPP) with ambiguity resolution (AR). This paper proposes an effective uncalibrated phase delay (UPD) estimation and AR strategy based on a raw PPP model. First, triple-frequency raw PPP models are developed: the observation model and stochastic model are designed and extended to accommodate the third frequency. Second, the UPD is parameterized in raw-frequency form while being estimated from the high-precision, low-noise integer linear combinations of float ambiguities derived by ambiguity decorrelation. Third, with UPDs corrected, the LAMBDA method is used to resolve the full set, or a fixable subset, of the ambiguities. This strategy can be easily and flexibly extended to dual-, triple-, or more frequencies. To verify the effectiveness and performance of triple-frequency PPP AR, tests with real BDS data from 90 stations lasting 21 days were performed in static mode. Data were processed with three strategies: BDS triple-frequency ambiguity-float PPP, and BDS triple-frequency PPP with dual-frequency (B1/B2) and with three-frequency AR, respectively. Numerous experimental results showed that, compared with the ambiguity-float solution, convergence time and positioning biases can be significantly improved by AR. Among the three groups of solutions, triple-frequency PPP AR achieved the best performance. Compared with dual-frequency AR, the additional third frequency clearly improved the position estimates during the initialization phase and under constrained environments where dual-frequency PPP AR is limited by the small number of visible satellites.

  6. Improved dichotomous search frequency offset estimator for burst-mode continuous phase modulation

    NASA Astrophysics Data System (ADS)

    Zhai, Wen-Chao; Li, Zan; Si, Jiang-Bo; Bai, Jun

    2015-11-01

A data-aided technique for carrier frequency offset estimation with continuous phase modulation (CPM) in burst-mode transmission is presented. The proposed technique first exploits a special pilot sequence, or training sequence, to form a sinusoidal waveform. Then, an improved dichotomous search frequency offset estimator is introduced to determine the frequency offset from the sinusoid. Theoretical analysis and simulation results indicate that our estimator is noteworthy in the following aspects. First, the estimator can operate independently of timing recovery. Second, it has a relatively low outlier threshold, i.e., the minimum signal-to-noise ratio (SNR) required to guarantee estimation accuracy. Finally, and most importantly, our estimator has reduced complexity compared to existing dichotomous search methods: it eliminates the need for the fast Fourier transform (FFT) and modulation removal, and exhibits a faster convergence rate without accuracy degradation. Project supported by the National Natural Science Foundation of China (Grant No. 61301179), the Doctorial Programs Foundation of the Ministry of Education, China (Grant No. 20110203110011), and the Programme of Introducing Talents of Discipline to Universities, China (Grant No. B08038).
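The improved estimator itself is not specified in this summary; the underlying dichotomous-search idea, comparing the periodogram on either side of the current frequency guess, stepping toward the larger value, and halving the step, can be sketched as (a generic illustration, not the paper's reduced-complexity algorithm):

```python
import cmath

def periodogram(x, f):
    """|DTFT| of x at a single normalized frequency f."""
    return abs(sum(xn * cmath.exp(-2j * cmath.pi * f * n)
                   for n, xn in enumerate(x)))

def dichotomous_search(x, f_lo, f_hi, iters=40):
    """Binary refinement of the periodogram peak inside [f_lo, f_hi];
    assumes the peak's mainlobe lies within the bracket."""
    f = 0.5 * (f_lo + f_hi)
    step = 0.25 * (f_hi - f_lo)
    for _ in range(iters):
        f += step if periodogram(x, f + step) > periodogram(x, f - step) else -step
        step *= 0.5
    return f
```

Each iteration costs only two single-frequency evaluations, which is why this family of estimators can avoid a full FFT once a coarse bracket is known.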

  7. On protecting the planet against cosmic attack: Ultrafast real-time estimate of the asteroid's radial velocity

    NASA Astrophysics Data System (ADS)

    Zakharchenko, V. D.; Kovalenko, I. G.

    2014-05-01

A new method for estimating the line-of-sight velocity of a high-speed near-Earth object (asteroid, meteorite) is suggested. The method is based on the fractional, one-half-order derivative of the Doppler signal. The suggested algorithm is much simpler and more economical than the classical one, and it appears preferable for use in orbital threat-response systems. The application of fractional differentiation to quick evaluation of the mean frequency location of the reflected Doppler signal is justified: the method allows the mean frequency to be assessed in the time domain without spectral analysis. An algorithm structure for real-time estimation is presented, and velocity resolution estimates are made for typical asteroids in the X-band. It is shown that the wait time can be shortened by orders of magnitude compared with standard spectral processing.
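The half-order-derivative algorithm itself is not reproduced here. As a point of comparison, the classical pulse-pair estimator also evaluates the mean Doppler frequency directly in the time domain, without spectral analysis (a swapped-in, generic sketch, not the paper's method):

```python
import cmath

def pulse_pair_mean_frequency(x, fs):
    """Mean frequency of a complex baseband signal x sampled at fs,
    taken from the phase of the lag-1 autocorrelation."""
    r1 = sum(x[n + 1] * x[n].conjugate() for n in range(len(x) - 1))
    return fs * cmath.phase(r1) / (2.0 * cmath.pi)
```

Like the fractional-derivative approach, this needs only a single pass over the samples, which is what makes time-domain mean-frequency estimators attractive for real-time threat response.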

  8. Towards estimation of respiratory muscle effort with respiratory inductance plethysmography signals and complementary ensemble empirical mode decomposition.

    PubMed

    Chen, Ya-Chen; Hsiao, Tzu-Chien

    2018-07-01

The respiratory inductance plethysmography (RIP) sensor is an inexpensive, non-invasive, easy-to-use transducer for collecting respiratory movement data. Studies have reported that the RIP signal's amplitude and frequency can be used to discriminate respiratory diseases. However, with the conventional approach to RIP data analysis, respiratory muscle effort cannot be estimated. In this paper, the estimation of respiratory muscle effort from the RIP signal is proposed. A complementary ensemble empirical mode decomposition method was used to extract hidden signals from the RIP signals based on the frequency bands of the activities of different respiratory muscles. To validate the proposed method, an experiment was conducted to collect subjects' RIP signals under thoracic breathing (TB) and abdominal breathing (AB). The experimental results for both TB and AB indicate that the proposed method can be used to loosely estimate the activities of the thoracic muscles, abdominal muscles, and diaphragm.

  9. Effects of Multipath and Oversampling on Navigation Using Orthogonal Frequency Division Multiplexed Signals of Opportunity

    DTIC Science & Technology

    2008-03-01

    for military use. The L2 carrier frequency operates at 1227.6 MHz and transmits only the precise code. Each satellite transmits a unique pseudo-random noise (PRN) code by which it is identified. GPS receivers require a LOS to four satellite signals to accurately estimate a position in three…receiver frequency errors, noise addition, and multipath effects. He also developed four methods for estimating the cross-correlation peak within a sampled

  10. Evaluation of design flood estimates with respect to sample size

    NASA Astrophysics Data System (ADS)

    Kobierska, Florian; Engeland, Kolbjorn

    2016-04-01

Estimation of design floods forms the basis for hazard management related to flood risk and is a legal obligation when building infrastructure such as dams, bridges and roads close to water bodies. Flood inundation maps used for land-use planning are also produced from design flood estimates. In Norway, the current guidelines for design flood estimation recommend which data, probability distribution, and method to use depending on the length of the local record. If fewer than 30 years of local data are available, an index-flood approach is recommended, where the local observations are used for estimating the index flood and regional data are used for estimating the growth curve. For 30-50 years of data, a two-parameter distribution is recommended, and for more than 50 years of data, a three-parameter distribution should be used. Many countries have national guidelines for flood frequency estimation, and recommended distributions include the log-Pearson Type III, generalized logistic, and generalized extreme value distributions. For estimating distribution parameters, ordinary moments, linear moments (L-moments), maximum likelihood, and Bayesian methods are used. The aim of this study is to re-evaluate the guidelines for local flood frequency estimation. In particular, we wanted to answer the following questions: (i) Which distribution gives the best fit to the data? (ii) Which estimation method provides the best fit to the data? (iii) Do the answers to (i) and (ii) depend on local data availability? To answer these questions we set up a test bench for local flood frequency analysis using data-based cross-validation methods. The criteria were based on indices describing the stability and reliability of design flood estimates. Stability is used as a criterion since design flood estimates should not depend excessively on the data sample; the reliability indices describe the degree to which design flood predictions can be trusted.

  11. Full-order observer for direct torque control of induction motor based on constant V/F control technique.

    PubMed

    Pimkumwong, Narongrit; Wang, Ming-Shyan

    2018-02-01

This paper presents an alternative control method for the three-phase induction motor: direct torque control based on the constant voltage-per-frequency (V/F) control technique. This method uses the magnitudes of the stator-flux and torque errors to generate the stator voltage and phase angle references for controlling the induction motor with the constant V/F control method. Instead of hysteresis comparators and an optimum switching table, PI controllers and the space vector modulation technique are used to reduce torque and stator-flux ripples and achieve a constant switching frequency. Moreover, coordinate transformations are not required. To implement this control method, a full-order observer is used to estimate the stator flux and to overcome the drift and saturation problems of a pure integrator. The feedback gains are designed in a simple manner to improve the convergence of the stator flux estimation, especially in the low-speed range. Furthermore, the necessary conditions to maintain stability in the feedback gain design are introduced. The simulation and experimental results show accurate and stable operation of the introduced estimator and good dynamic response of the proposed control method. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  12. A joint tracking method for NSCC based on WLS algorithm

    NASA Astrophysics Data System (ADS)

    Luo, Ruidan; Xu, Ying; Yuan, Hong

    2017-12-01

The navigation signal based on compound carrier (NSCC) has a flexible multi-carrier scheme with various configurable scheme parameters, giving it significant navigation-augmentation efficiency in terms of spectral efficiency, tracking accuracy, multipath mitigation, and anti-jamming capability compared with legacy navigation signals. Meanwhile, its characteristic scheme structure can provide auxiliary information for the design of signal synchronization algorithms. Based on the characteristics of NSCC, this paper proposes a joint tracking method using the Weighted Least Squares (WLS) algorithm. In this method, the WLS algorithm jointly estimates the frequency shift of each sub-carrier by exploiting the known sub-carrier frequencies and the linear relationship between carrier frequency and Doppler shift. The weighting matrix is set adaptively according to sub-carrier power to ensure estimation accuracy. Both theoretical analysis and simulation results show that the tracking accuracy and sensitivity of this method outperform the single-carrier algorithm, particularly at lower SNR.
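A scalar version of the weighted least-squares step, estimating a common Doppler scale factor from per-sub-carrier frequency-shift measurements with power-based weights, can be sketched as (a simplified illustration with hypothetical names; the paper's full vector formulation is not given in the abstract):

```python
def wls_doppler_scale(subcarrier_freqs, measured_shifts, powers):
    """Doppler shift is proportional to carrier frequency:
    shift_i ~ kappa * f_i.  Weighted least squares through the origin,
    weighting each sub-carrier's measurement by its power."""
    num = sum(w * f * s
              for f, s, w in zip(subcarrier_freqs, measured_shifts, powers))
    den = sum(w * f * f for f, w in zip(subcarrier_freqs, powers))
    return num / den
```

Because every sub-carrier constrains the same scale factor, the joint estimate averages down the noise of the individual shift measurements, which is the source of the sensitivity gain over single-carrier tracking.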

  13. An estimation of the frequency of precursor cells which generate cytotoxic lymphocytes

    PubMed Central

    1976-01-01

    The cell-mediated immune response has been generated in vitro with a polyacrylamide culture system which allows the segregation of foci (clones?) of cytotoxic lymphocytes. Using the method of limiting dilutions, the frequency of precursor cells in CBA spleen cells able to generate a cytotoxic response against DBA mastocytoma is estimated at 1 per 1,700 cells. PMID:1083894
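Limiting-dilution estimates of this kind rest on the single-hit Poisson model: at a dose of c cells per culture, the fraction of nonresponding cultures is F0 = exp(-f·c), so -ln(F0) is linear in c with slope equal to the precursor frequency f. A sketch of that fit (hedged; the 1976 paper's exact fitting procedure is not stated in this summary):

```python
import math

def precursor_frequency(cells_per_well, frac_negative):
    """Least-squares fit of -ln(F0) = f * c through the origin, where
    F0 is the fraction of cultures with no cytotoxic response at dose c."""
    ys = [-math.log(f0) for f0 in frac_negative]
    num = sum(c * y for c, y in zip(cells_per_well, ys))
    den = sum(c * c for c in cells_per_well)
    return num / den
```

A frequency of 1 per 1,700 cells, as reported, corresponds to the dose at which about 37% (1/e) of cultures remain negative being roughly 1,700 cells per well.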

  14. Radiation impedance of condenser microphones and their diffuse-field responses.

    PubMed

    Barrera-Figueroa, Salvador; Rasmussen, Knud; Jacobsen, Finn

    2010-04-01

    The relation between the diffuse-field response and the radiation impedance of a microphone has been investigated. Such a relation can be derived from classical theory. The practical measurement of the radiation impedance requires (a) measuring the volume velocity of the membrane of the microphone and (b) measuring the pressure on the membrane of the microphone. The first measurement is carried out by means of laser vibrometry. The second measurement cannot be implemented in practice. However, the pressure on the membrane can be calculated numerically by means of the boundary element method. In this way, a hybrid estimate of the radiation impedance is obtained. The resulting estimate of the diffuse-field response is compared with experimental estimates of the diffuse-field response determined using reciprocity and the random-incidence method. The different estimates are in good agreement at frequencies below the resonance frequency of the microphone. Although the method may not be of great practical utility, it provides a useful validation of the estimates obtained by other means.

  15. Regional flood-frequency relations for streams with many years of no flow

    USGS Publications Warehouse

    Hjalmarson, Hjalmar W.; Thomas, Blakemore E.; ,

    1990-01-01

    In the southwestern United States, flood-frequency relations for streams that drain small arid basins are difficult to estimate, largely because of the extreme temporal and spatial variability of floods and the many years of no flow. A method is proposed that is based on the station-year method. The new method produces regional flood-frequency relations using all available annual peak-discharge data. The prediction errors for the relations are directly assessed using randomly selected subsamples of the annual peak discharges.

  16. Rounding Behavior in the Reporting of Headache Frequency Complicates Headache Chronification Research

    PubMed Central

    Houle, Timothy T.; Turner, Dana P.; Houle, Thomas A.; Smitherman, Todd A.; Martin, Vincent; Penzien, Donald B.; Lipton, Richard B.

    2013-01-01

    Objectives To characterize the extent of measurement error arising from rounding in headache frequency reporting (days per month) in a population sample of headache sufferers. Background When reporting numerical health information, individuals tend to round their estimates. The tendency to round to the nearest 5 days when reporting headache frequency can distort distributions and engender unreliability in frequency estimates in both clinical and research contexts. Methods This secondary analysis of the 2005 American Migraine Prevalence and Prevention study (AMPP) survey characterized the population distribution of 30-day headache frequency among community headache sufferers and determined the extent of numerical rounding (“heaping”) in self-reported data. Headache frequency distributions (days per month) were examined using a simplified version of Wang and Heitjan’s (2008) approach to heaping to estimate the probability that headache sufferers round to a multiple of 5 when providing frequency reports. Multiple imputation was used to estimate a theoretical “true” headache frequency. Results Of the 24,000 surveys, headache frequency data were available for 15,976 respondents diagnosed with migraine (68.6%), probable migraine (8.3%), or episodic tension-type headache (10.0%); the remainder had other headache types. The mean number of headaches days/month was 3.7 (SD = 5.6). Examination of the distribution of headache frequency reports revealed a disproportionate number of responses centered on multiples of 5 days. The odds that headache frequency was rounded to 5 increased by 24% with each one-day increase in headache frequency (OR: 1.24, 95% CI: 1.23 to 1.25), indicating that heaping occurs most commonly at higher headache frequencies. Women were more likely to round than men, and rounding decreased with increasing age and increased with symptoms of depression. 
Conclusions Because of the coarsening induced by rounding, caution should be used when distinguishing between episodic and chronic headache sufferers using self-reported estimates of headache frequency. Unreliability in frequency estimates is of particular concern among individuals with high-frequency (chronic) headache. Employing shorter recall intervals when assessing headache frequency, preferably using daily diaries, may improve accuracy and allow more precise estimation of chronic migraine onset and remission. PMID:23721238
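A first-pass check for heaping, comparing the observed share of reports on multiples of 5 against the share expected if responses were spread evenly over 0-30 days, can be sketched as (illustrative only; the study uses Wang and Heitjan's model-based approach and multiple imputation, not this raw ratio):

```python
def heaping_ratio(reports):
    """reports: 30-day headache-frequency reports (integers 0..30).
    Returns (observed share on multiples of 5, ratio of that share to
    the 7/31 expected under a uniform spread over 0..30)."""
    on_multiple = sum(1 for r in reports if r % 5 == 0)
    observed = on_multiple / len(reports)
    expected = 7.0 / 31.0  # the multiples 0, 5, 10, 15, 20, 25, 30
    return observed, observed / expected
```

A ratio well above 1 flags rounding; the study's finding that heaping grows with frequency means such a check should really be run within frequency strata rather than on the pooled sample.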

  17. Quantitative subsurface analysis using frequency modulated thermal wave imaging

    NASA Astrophysics Data System (ADS)

    Subhani, S. K.; Suresh, B.; Ghali, V. S.

    2018-01-01

Quantitative depth analysis of an anomaly with enhanced depth resolution is a challenging task in estimating the depth of subsurface anomalies using thermography. Frequency modulated thermal wave imaging, introduced earlier, provides complete depth scanning of the object by stimulating it with a suitable band of frequencies and then analyzing the thermal response with a suitable post-processing approach to resolve subsurface details. However, the conventional Fourier-transform-based methods used for post-processing unscramble the frequencies with limited frequency resolution, resulting in limited depth resolution. The spectral zooming provided by the chirp z-transform offers enhanced frequency resolution, which further improves the depth resolution and allows the finest subsurface features to be explored axially. Quantitative depth analysis with this augmented depth resolution is proposed to provide a close estimate of the actual depth of a subsurface anomaly. This manuscript experimentally validates the enhanced depth resolution using non-stationary thermal wave imaging and offers a first solution for quantitative depth estimation in frequency modulated thermal wave imaging.

  18. Wave-field decay rate estimate from the wavenumber-frequency spectra

    NASA Astrophysics Data System (ADS)

    Comisel, H.; Narita, Y.; Voros, Z.

    2017-12-01

Observational data for wave or turbulent fields are conveniently analyzed and interpreted in the Fourier domain spanning frequencies and wavenumbers. If a wave field has not only oscillatory components (characterized by the real part of the frequency) but also temporally decaying components (characterized by the imaginary part of the frequency), the energy spectrum shows a frequency broadening around the peak due to the imaginary part of the frequency (the decay rate). The mechanism of this frequency broadening is the same as that of the Breit-Wigner spectrum in nuclear resonance phenomena. We show that the decay rate can be estimated observationally and directly once multi-point data are available, and we apply the method to Cluster four-point magnetometer data in the solar wind on a spatial scale of about 1000 km. The estimated decay rate is larger than the eddy turnover rate, indicating that the decay of solar wind turbulence is governed by plasma-physical processes, such as the excitation of whistler waves and other wave modes, rather than by hydrodynamic turbulence.
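The broadening argument implies the decay rate can be read off as the half-width at half-maximum of the Breit-Wigner (Lorentzian) peak; a sketch of that extraction from a sampled spectrum (illustrative only; the multi-point Cluster estimator itself is not reproduced):

```python
def hwhm_decay_rate(freqs, power):
    """Half-width at half-maximum of a single resonance peak in a
    sampled power spectrum; for a Lorentzian (Breit-Wigner) profile,
    this half-width equals the decay rate."""
    p_max = max(power)
    above = [f for f, p in zip(freqs, power) if p >= 0.5 * p_max]
    return 0.5 * (max(above) - min(above))
```

This simple threshold reading assumes a single, well-resolved peak; overlapping modes or noise floors would call for an actual Lorentzian fit instead.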

  19. Size Estimation of Groups at High Risk of HIV/AIDS using Network Scale Up in Kerman, Iran

    PubMed Central

    Shokoohi, Mostafa; Baneshi, Mohammad Reza; Haghdoost, Ali-Akbar

    2012-01-01

Objective: To estimate the size of groups at high risk of HIV using the Network Scale-Up (NSU) method, an indirect estimation technique. Methods: 500 Kermanian males aged 18 to 45 were recruited. Eight groups at high risk of HIV were defined: users of opium, unknown drugs, ecstasy, and alcohol; intravenous drug users (IDUs); males who have extra-marital sex with females (MSF); males who have sex with female sex workers (MFSW); and males who have sex with other males (MSM). We asked respondents whether they know anybody (probability method), and if yes, how many people (frequency method) in our target groups. Results: Estimates derived from the probability method were higher than those from the frequency method. Based on the probability method, 13.7% (95% CI: 11.3%, 16.1%) of males used alcohol at least once in the last year; the corresponding percentage for opium was 13.1% (95% CI: 10.9%, 15.3%). In addition, 12% had extra-marital sex in the last year (95% CI: 10%, 14%), while 7% (95% CI: 5.8%, 8.2%) had sex with a female sex worker. Conclusion: We showed that drug use is common among young and middle-aged males, and their sexual contacts were also considerable. These percentages show that special preventive programs are needed to control HIV transmission. Estimates derived from the probability method were comparable with data from external sources. The underestimation in the frequency method might occur because respondents are not aware of the sensitive characteristics of everyone in their network, so underreporting is likely. PMID:22891148
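The basic network scale-up estimator underlying the frequency method divides the alters respondents report in the hidden group by their total personal network sizes and scales by the population (a minimal sketch; variable names are illustrative, and the study's probability-method variant is not shown):

```python
def nsu_size(alters_in_group, network_sizes, population):
    """Network scale-up estimate of a hidden group's size:
    (total alters reported in the group) /
    (total personal network size) * population."""
    return population * sum(alters_in_group) / sum(network_sizes)
```

As the Conclusion notes, this estimator is biased downward when respondents do not know the sensitive trait of everyone in their network (a "transmission error"), which is why the probability-method estimates came out higher.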

  20. Model-based frequency response characterization of a digital-image analysis system for epifluorescence microscopy

    NASA Technical Reports Server (NTRS)

    Hazra, Rajeeb; Viles, Charles L.; Park, Stephen K.; Reichenbach, Stephen E.; Sieracki, Michael E.

    1992-01-01

    Consideration is given to a model-based method for estimating the spatial frequency response of a digital-imaging system (e.g., a CCD camera) that is modeled as a linear, shift-invariant image acquisition subsystem that is cascaded with a linear, shift-variant sampling subsystem. The method characterizes the 2D frequency response of the image acquisition subsystem to beyond the Nyquist frequency by accounting explicitly for insufficient sampling and the sample-scene phase. Results for simulated systems and a real CCD-based epifluorescence microscopy system are presented to demonstrate the accuracy of the method.

  1. Regional regression equations to estimate peak-flow frequency at sites in North Dakota using data through 2009

    USGS Publications Warehouse

    Williams-Sether, Tara

    2015-08-06

    Annual peak-flow frequency data from 231 U.S. Geological Survey streamflow-gaging stations in North Dakota and parts of Montana, South Dakota, and Minnesota, with 10 or more years of unregulated peak-flow record, were used to develop regional regression equations for exceedance probabilities of 0.50, 0.20, 0.10, 0.04, 0.02, 0.01, and 0.002 using generalized least-squares techniques. Updated peak-flow frequency estimates for 262 streamflow-gaging stations were developed using data through 2009 and log-Pearson Type III procedures outlined by the Hydrology Subcommittee of the Interagency Advisory Committee on Water Data. An average generalized skew coefficient was determined for three hydrologic zones in North Dakota. A StreamStats web application was developed to estimate basin characteristics for the regional regression equation analysis. Methods for estimating a weighted peak-flow frequency for gaged sites and ungaged sites are presented.
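
    The log-Pearson Type III fitting step can be sketched as follows. This is a bare-bones method-of-moments version using the Wilson-Hilferty approximation for the frequency factor, not the full Interagency Advisory Committee procedure (no regional skew weighting, outlier tests, or historical adjustments), and the peak-flow series is hypothetical:

```python
import numpy as np
from scipy import stats

def lp3_quantile(peaks, aep):
    """Log-Pearson Type III quantile by method of moments on log10 peaks.
    Uses the Wilson-Hilferty approximation for the frequency factor K."""
    y = np.log10(np.asarray(peaks, dtype=float))
    m, s = y.mean(), y.std(ddof=1)
    g = stats.skew(y, bias=False)      # station skew only (no regional weighting)
    z = stats.norm.ppf(1.0 - aep)      # standard normal deviate for the AEP
    if abs(g) > 1e-6:
        k = (2.0 / g) * ((1.0 + g * z / 6.0 - g**2 / 36.0) ** 3 - 1.0)
    else:
        k = z                          # zero-skew limit reduces to log-normal
    return 10.0 ** (m + k * s)

# Hypothetical annual peak flows (cfs); 0.01 AEP is the "100-year" flood:
peaks = [820, 1500, 640, 2100, 980, 1250, 3100, 760, 1850, 1100,
         900, 1400, 2600, 700, 1700]
print(round(lp3_quantile(peaks, 0.01)))
```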

  2. Spectrum-averaged Harmonic Path (SHAPA) algorithm for non-contact vital sign monitoring with ultra-wideband (UWB) radar.

    PubMed

    Van Nguyen; Javaid, Abdul Q; Weitnauer, Mary Ann

    2014-01-01

    We introduce the Spectrum-averaged Harmonic Path (SHAPA) algorithm for estimation of heart rate (HR) and respiration rate (RR) with Impulse Radio Ultra-wideband (IR-UWB) radar. Periodic movement of the human torso caused by respiration and heartbeat induces fundamental frequencies, and their harmonics, at the respiration and heart rates. IR-UWB radar enables capture of these spectral components, and frequency-domain processing enables a low-cost implementation. Most existing methods identify the fundamental component in either the frequency or time domain to estimate the HR and/or RR, which leads to significant error if the fundamental is distorted or cancelled by interference. The SHAPA algorithm (1) takes advantage of the HR harmonics, where there is less interference, and (2) exploits the information in previous spectra to achieve more reliable and robust estimation of the fundamental frequency in the spectrum under consideration. Example experimental results for HR estimation demonstrate how our algorithm eliminates errors caused by interference and produces 16% to 60% more valid estimates.
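
    The core idea of scoring harmonic locations rather than the fundamental alone can be sketched with a simplified harmonic-summation picker (not the published SHAPA algorithm; the sampling rate, search band, and test signal are invented for illustration):

```python
import numpy as np

def harmonic_sum_hr(x, fs, f_lo=0.8, f_hi=3.0, n_harm=3):
    """Pick the heart-rate fundamental by summing spectral magnitude at its harmonics."""
    n = len(x)
    spec = np.abs(np.fft.rfft(x * np.hanning(n)))
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    cands = freqs[(freqs >= f_lo) & (freqs <= f_hi)]
    def score(f):  # magnitude summed at f, 2f, ..., n_harm*f
        idx = [np.argmin(np.abs(freqs - k * f)) for k in range(1, n_harm + 1)]
        return spec[idx].sum()
    return max(cands, key=score)

fs = 20.0
t = np.arange(0, 60, 1 / fs)
# Fundamental at 1.2 Hz (72 bpm) partly masked; the 2nd harmonic at 2.4 Hz is clean.
x = 0.3 * np.sin(2 * np.pi * 1.2 * t) + 1.0 * np.sin(2 * np.pi * 2.4 * t)
print(round(harmonic_sum_hr(x, fs) * 60))  # 72 beats per minute
```

    A peak picker that used only the largest spectral line would report 2.4 Hz (144 bpm) here; the harmonic sum recovers the weak fundamental.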

  3. A Study on Regional Rainfall Frequency Analysis for Flood Simulation Scenarios

    NASA Astrophysics Data System (ADS)

    Jung, Younghun; Ahn, Hyunjun; Joo, Kyungwon; Heo, Jun-Haeng

    2014-05-01

    Recently, climate change has been observed in Korea as well as across the entire world. Rainstorms have gradually increased in frequency and magnitude, and the resulting damage has grown, making the management of flood control facilities very important. For managing flood control facilities in risky regions, data sets such as elevation, gradient, channel, land use, and soil data should be compiled. Using this information, disaster situations can be simulated to secure evacuation routes for various rainfall scenarios. The aim of this study is to investigate and determine extreme rainfall quantile estimates in Uijeongbu City using the index flood method with L-moments parameter estimation. Regional frequency analysis trades space for time by using annual maximum rainfall data from nearby or similar sites to derive estimates for any given site in a homogeneous region. Regional frequency analysis based on pooled data is recommended for estimation of rainfall quantiles at sites with record lengths less than 5T, where T is the return period of interest. Many variables relevant to precipitation can be used for grouping a region in regional frequency analysis. For regionalization of the Han River basin, the k-means method is applied to group regions by meteorological and geomorphological variables. The results from the k-means method are compared for each region using various probability distributions. In the final step of the regionalization analysis, a goodness-of-fit measure is used to evaluate the accuracy of a set of candidate distributions, and rainfall quantiles by the index flood method are obtained based on the appropriate distribution. Rainfall quantiles based on various scenarios are then used as input data for disaster simulations.
Keywords: Regional Frequency Analysis; Scenarios of Rainfall Quantile Acknowledgements This research was supported by a grant 'Establishing Active Disaster Management System of Flood Control Structures by using 3D BIM Technique' [NEMA-12-NH-57] from the Natural Hazard Mitigation Research Group, National Emergency Management Agency of Korea.
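
    Sample L-moments, the building block of the L-moments parameter estimation mentioned above, can be computed from probability-weighted moments; a minimal sketch (the annual-maximum series is hypothetical):

```python
import numpy as np

def sample_l_moments(x):
    """First three sample L-moments via unbiased probability-weighted moments."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    j = np.arange(n)
    b0 = x.mean()
    b1 = np.sum(j / (n - 1) * x) / n
    b2 = np.sum(j * (j - 1) / ((n - 1) * (n - 2)) * x) / n
    l1 = b0                      # L-location (mean)
    l2 = 2 * b1 - b0             # L-scale
    l3 = 6 * b2 - 6 * b1 + b0    # third L-moment
    return l1, l2, l3

# Hypothetical annual-maximum rainfall series (mm) for one site:
amax = [95, 120, 88, 160, 140, 110, 200, 105, 130, 175]
l1, l2, l3 = sample_l_moments(amax)
print(l1, l2 / l1, l3 / l2)  # mean, L-CV, L-skewness
```

    In the index flood method, the site mean l1 scales a dimensionless regional growth curve, while the L-moment ratios (L-CV, L-skewness) pooled across the homogeneous region select and fit the regional distribution.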

  4. Intakes of culinary herbs and spices from a food frequency questionnaire evaluated against 28-days estimated records.

    PubMed

    Carlsen, Monica H; Blomhoff, Rune; Andersen, Lene F

    2011-05-16

    Worldwide, herbs and spices are widely used food flavourings. However, little data exist regarding actual dietary intake of culinary herbs and spices. We developed a food frequency questionnaire (FFQ) for the assessment of habitual diet over the preceding year, with a focus on phytochemical-rich foods, including herbs and spices. The aim of the present study was to evaluate the intakes of herbs and spices from the FFQ against estimates of intake from another dietary assessment method. Thus we compared the intake estimates from the FFQ with 28 days of estimated records of herb and spice consumption as a reference method. The evaluation study was conducted among 146 free-living adults, who filled in the FFQ and, 2-4 weeks later, carried out 28 days of recording of herb and spice consumption. The FFQ included a section with questions about 27 individual culinary herbs and spices, while the records were open-ended and dedicated exclusively to herb and spice consumption. Our study showed that the FFQ obtained slightly higher estimates of total intake of herbs and spices than the total intake assessed by the Herbs and Spice Records (HSR). The correlation between the two assessment methods with regard to total intake was good (r = 0.5), and the cross-classification suggests that the FFQ may be used to classify subjects according to total herb and spice intake. For the 8 most frequently consumed individual herbs and spices, the FFQ obtained good estimates of median frequency of intake for 2 herbs/spices, while good estimates of portion sizes were obtained for 4 out of 8 herbs/spices. Our results suggested that the FFQ was able to give good estimates of frequency of intake and portion sizes at the group level for several of the most frequently used herbs and spices. The FFQ was only able to fairly rank subjects according to frequency of intake of the 8 most frequently consumed herbs and spices. Other studies are warranted to further explore the intakes of culinary spices and herbs.

  5. Methods for estimating peak-flow frequencies at ungaged sites in Montana based on data through water year 2011: Chapter F in Montana StreamStats

    USGS Publications Warehouse

    Sando, Roy; Sando, Steven K.; McCarthy, Peter M.; Dutton, DeAnn M.

    2016-04-05

    The U.S. Geological Survey (USGS), in cooperation with the Montana Department of Natural Resources and Conservation, completed a study to update methods for estimating peak-flow frequencies at ungaged sites in Montana based on peak-flow data at streamflow-gaging stations through water year 2011. The methods allow estimation of peak-flow frequencies (that is, peak-flow magnitudes, in cubic feet per second, associated with annual exceedance probabilities of 66.7, 50, 42.9, 20, 10, 4, 2, 1, 0.5, and 0.2 percent) at ungaged sites. The annual exceedance probabilities correspond to 1.5-, 2-, 2.33-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-year recurrence intervals, respectively. Regional regression analysis is a primary focus of Chapter F of this Scientific Investigations Report, and regression equations for estimating peak-flow frequencies at ungaged sites in eight hydrologic regions in Montana are presented. The regression equations are based on analysis of peak-flow frequencies and basin characteristics at 537 streamflow-gaging stations in or near Montana and were developed using generalized least squares regression or weighted least squares regression. All of the data used in calculating basin characteristics that were included as explanatory variables in the regression equations were developed for and are available through the USGS StreamStats application (http://water.usgs.gov/osw/streamstats/) for Montana. StreamStats is a Web-based geographic information system application that was created by the USGS to provide users with access to an assortment of analytical tools that are useful for water-resource planning and management. The primary purpose of the Montana StreamStats application is to provide estimates of basin characteristics and streamflow characteristics for user-selected ungaged sites on Montana streams.
The regional regression equations presented in this report chapter can be conveniently solved using the Montana StreamStats application. Selected results from this study were compared with results of previous studies. For most hydrologic regions, the regression equations reported for this study had lower mean standard errors of prediction (in percent) than the previously reported regression equations for Montana. The equations presented for this study are considered to be an improvement on the previously reported equations primarily because this study (1) included 13 more years of peak-flow data; (2) included 35 more streamflow-gaging stations than previous studies; (3) used a detailed geographic information system (GIS)-based definition of the regulation status of streamflow-gaging stations, which allowed better determination of the unregulated peak-flow records that are appropriate for use in the regional regression analysis; (4) included advancements in GIS and remote-sensing technologies, which allowed more convenient calculation of basin characteristics and investigation of many more candidate basin characteristics; and (5) included advancements in computational and analytical methods, which allowed more thorough and consistent data analysis. This report chapter also presents other methods for estimating peak-flow frequencies at ungaged sites. Two methods for estimating peak-flow frequencies at ungaged sites located on the same streams as streamflow-gaging stations are described. Additionally, envelope curves relating maximum recorded annual peak flows to contributing drainage area for each of the eight hydrologic regions in Montana are presented and compared to a national envelope curve. In addition to providing general information on characteristics of large peak flows, the regional envelope curves can be used to assess the reasonableness of peak-flow frequency estimates determined using the regression equations.

  6. Time delay estimation using new spectral and adaptive filtering methods with applications to underwater target detection

    NASA Astrophysics Data System (ADS)

    Hasan, Mohammed A.

    1997-11-01

    In this dissertation, we present several novel approaches for detection and identification of targets of arbitrary shapes from the acoustic backscattered data and the incident waveform. This problem is formulated as time-delay estimation and sinusoidal frequency estimation problems, both of which have applications in many other important areas of signal processing. Solving the time-delay estimation problem allows the identification of the specular components in the backscattered signal from elastic and non-elastic targets. Thus, accurate estimation of these time delays would help in determining the existence of certain clues for detecting targets. Several new methods for solving these two problems in the time, frequency, and wavelet domains are developed. In the time domain, a new block fast transversal filter (BFTF) is proposed for a fast implementation of the least squares (LS) method. This BFTF algorithm is derived by using a data-related constrained block-LS cost function to guarantee global optimality. The new soft-constrained algorithm provides an efficient way of transferring weight information between blocks of data, and thus it is computationally very efficient compared with other LS-based schemes. Additionally, the tracking ability of the algorithm can be controlled by varying the block length and/or a soft-constraint parameter. The effectiveness of this algorithm is tested on several underwater acoustic backscattered data sets for elastic targets and non-elastic (cement chunk) objects. In the frequency domain, the time-delay estimation problem is converted to a sinusoidal frequency estimation problem by using the discrete Fourier transform. Then, the lagged sample covariance matrices of the resulting signal are computed and studied in terms of their eigen-structure. These matrices are shown to be robust and effective in extracting bases for the signal and noise subspaces. New MUSIC and matrix pencil-based methods are derived from these subspaces.
The effectiveness of the method is demonstrated on the problem of detection of multiple specular components in the acoustic backscattered data. Finally, a method for the estimation of time delays using wavelet decomposition is derived. The sub-band adaptive filtering uses the discrete wavelet transform for multi-resolution or sub-band decomposition. Joint time-delay estimation for identifying multi-specular components and subsequent adaptive filtering are performed on the signal in each sub-band. This provides multiple 'looks' at the signal at different resolution scales, which results in more accurate estimates of the delays associated with the specular components. Simulation results on simulated and real shallow-water data are provided, which show the promise of this new scheme for target detection in a heavily cluttered environment.
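
    The eigen-structure step described above, estimating sinusoidal frequencies from signal and noise subspaces, can be illustrated with a minimal MUSIC sketch. The covariance size, grid density, and two-tone test signal are arbitrary illustration choices, not the dissertation's settings:

```python
import numpy as np

def music_freqs(x, n_sines, m=20, grid=4000):
    """Spectral MUSIC: scan a frequency grid against the noise subspace."""
    # Sample covariance from overlapping length-m snapshots.
    snaps = np.array([x[i:i + m] for i in range(len(x) - m)])
    R = snaps.T @ snaps.conj() / len(snaps)
    w, v = np.linalg.eigh(R)                  # eigenvalues ascending
    En = v[:, : m - 2 * n_sines]              # noise subspace (2 eigvecs per real sinusoid)
    f = np.linspace(0.0, 0.5, grid, endpoint=False)
    a = np.exp(2j * np.pi * np.outer(np.arange(m), f))       # steering vectors
    p = 1.0 / np.sum(np.abs(En.conj().T @ a) ** 2, axis=0)   # pseudospectrum
    peaks = [i for i in range(1, grid - 1) if p[i] > p[i - 1] and p[i] > p[i + 1]]
    peaks.sort(key=lambda i: p[i], reverse=True)
    return sorted(f[i] for i in peaks[:n_sines])

rng = np.random.default_rng(0)
n = np.arange(400)
x = np.sin(2 * np.pi * 0.1 * n) + 0.7 * np.sin(2 * np.pi * 0.23 * n) \
    + 0.1 * rng.standard_normal(400)
print(music_freqs(x, 2))  # peaks near normalized frequencies 0.1 and 0.23
```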

  7. A Method for Estimating Noise from Full-Scale Distributed Exhaust Nozzles

    NASA Technical Reports Server (NTRS)

    Kinzie, Kevin W.; Schein, David B.

    2004-01-01

    A method to estimate the full-scale noise suppression from a scale model distributed exhaust nozzle (DEN) is presented. For a conventional scale model exhaust nozzle, Strouhal number scaling using a scale factor related to the nozzle exit area is typically applied, which shifts model-scale frequency in proportion to the geometric scale factor. However, model-scale DEN designs have two inherent length scales. One is associated with the mini-nozzles, whose sizes do not change in going from model scale to full scale. The other is associated with the overall nozzle exit area, which is much smaller than full size. Consequently, lower-frequency energy that is generated by the coalesced jet plume should scale to lower frequency, but higher-frequency energy generated by individual mini-jets does not shift frequency. In addition, jet-jet acoustic shielding by the array of mini-nozzles is a significant noise reduction effect that may change with DEN model size. A technique has been developed to scale laboratory model spectral data based on the premise that high and low frequency content must be treated differently during the scaling process. The model-scale distributed exhaust spectra are divided into low and high frequency regions that are then adjusted to full scale separately based on different physics-based scaling laws. The regions are then recombined to create an estimate of the full-scale acoustic spectra. These spectra can then be converted to perceived noise levels (PNL). The paper presents the details of this methodology and provides an example of the estimated noise suppression by a distributed exhaust nozzle compared to a round conic nozzle.
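
    The split-scale idea, shifting only the coalesced-plume (low-frequency) content by the geometric scale factor while leaving the mini-jet (high-frequency) content unshifted, can be sketched as follows. The spectrum, crossover frequency, and scale factor are hypothetical, and the paper's full-scale level corrections are omitted:

```python
import numpy as np

def scale_den_spectrum(freqs, spl, scale_factor, f_split):
    """Two-regime frequency scaling sketch for a distributed-exhaust spectrum:
    below f_split, Strouhal-shift frequency down by the geometric scale factor;
    above f_split, keep the model-scale frequency (mini-jet content)."""
    low = freqs < f_split
    f_full = np.where(low, freqs / scale_factor, freqs)  # shift only the low band
    order = np.argsort(f_full)                           # recombine in frequency order
    return f_full[order], spl[order]

# Hypothetical 1/3-octave model-scale spectrum (Hz, dB):
f = np.array([500, 1000, 2000, 4000, 8000, 16000, 31500], float)
spl = np.array([85, 88, 90, 87, 84, 80, 75], float)
ff, sf = scale_den_spectrum(f, spl, scale_factor=8.0, f_split=4000.0)
print(ff)  # low band shifted to 62.5-250 Hz, high band unchanged
```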

  8. Analysis of Coherent Phonon Signals by Sparsity-promoting Dynamic Mode Decomposition

    NASA Astrophysics Data System (ADS)

    Murata, Shin; Aihara, Shingo; Tokuda, Satoru; Iwamitsu, Kazunori; Mizoguchi, Kohji; Akai, Ichiro; Okada, Masato

    2018-05-01

    We propose a method to decompose the normal modes in a coherent phonon (CP) signal by sparsity-promoting dynamic mode decomposition. While CP signals can be modeled as the sum of a finite number of damped oscillators, conventional methods such as the Fourier transform adopt continuous bases in the frequency domain. Thus, frequency uncertainty appears, and it is difficult to estimate the initial phase. Moreover, measurement artifacts are imposed on the CP signal and deform the Fourier spectrum. In contrast, the proposed method can separate the signal from the artifact precisely and can successfully estimate the physical properties of the normal modes.
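
    The damped-oscillator decomposition can be illustrated with a plain (not sparsity-promoting) Hankel-matrix DMD on a single synthetic damped cosine; the embedding length, rank, and signal parameters are invented for illustration:

```python
import numpy as np

def hankel_dmd_modes(x, dt, r):
    """Estimate complex frequencies (decay rate + oscillation) via Hankel-matrix DMD."""
    m = len(x) // 2
    H = np.array([x[i:i + m] for i in range(len(x) - m)])  # delay-embedded snapshots
    X, Y = H[:-1].T, H[1:].T                               # columns advance by one step
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, V = U[:, :r], s[:r], Vh[:r].conj().T
    A = U.conj().T @ Y @ V / s                             # reduced linear operator
    mu = np.linalg.eigvals(A)                              # discrete-time eigenvalues
    return np.log(mu) / dt                                 # continuous: -gamma + i*omega

dt = 0.01
t = np.arange(0, 5, dt)
# One damped cosine: decay rate 0.8 s^-1, angular frequency 2*pi*2 rad/s.
x = np.exp(-0.8 * t) * np.cos(2 * np.pi * 2 * t)
lam = hankel_dmd_modes(x, dt, r=2)
print(np.round(lam, 3))  # conjugate pair near -0.8 +/- 12.566j
```

    Unlike a Fourier peak, each recovered eigenvalue carries the decay rate and the frequency of a mode jointly; the sparsity-promoting variant additionally prunes spurious modes by penalizing mode amplitudes.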

  9. An Evaluation of Different Target Enrichment Methods in Pooled Sequencing Designs for Complex Disease Association Studies

    PubMed Central

    Day-Williams, Aaron G.; McLay, Kirsten; Drury, Eleanor; Edkins, Sarah; Coffey, Alison J.; Palotie, Aarno; Zeggini, Eleftheria

    2011-01-01

    Pooled sequencing can be a cost-effective approach to disease variant discovery, but its applicability in association studies remains unclear. We compare sequence enrichment methods coupled to next-generation sequencing in non-indexed pools of 1, 2, 10, 20 and 50 individuals and assess their ability to discover variants and to estimate their allele frequencies. We find that pooled resequencing is most usefully applied as a variant discovery tool due to limitations in estimating allele frequency with high enough accuracy for association studies, and that in-solution hybrid-capture performs best among the enrichment methods examined regardless of pool size. PMID:22069447

  10. Hawaii StreamStats; a web application for defining drainage-basin characteristics and estimating peak-streamflow statistics

    USGS Publications Warehouse

    Rosa, Sarah N.; Oki, Delwyn S.

    2010-01-01

    Reliable estimates of the magnitude and frequency of floods are necessary for the safe and efficient design of roads, bridges, water-conveyance structures, and flood-control projects and for the management of flood plains and flood-prone areas. StreamStats provides a simple, fast, and reproducible method to define drainage-basin characteristics and estimate the frequency and magnitude of peak discharges in Hawaii's streams using recently developed regional regression equations. StreamStats allows the user to estimate the magnitude of floods for streams where data from stream-gaging stations do not exist. Existing estimates of the magnitude and frequency of peak discharges in Hawaii can be improved with continued operation of existing stream-gaging stations and installation of additional gaging stations for areas where limited stream-gaging data are available.

  11. Nonparametric methods for drought severity estimation at ungauged sites

    NASA Astrophysics Data System (ADS)

    Sadri, S.; Burn, D. H.

    2012-12-01

    The objective in frequency analysis is to estimate, for extreme events such as drought severity or duration, the relationship between the magnitude of the event and the associated return period at a catchment. Neural networks and other artificial intelligence approaches to function estimation and regression analysis are relatively new techniques in engineering, providing an attractive alternative to traditional statistical models. There are, however, few applications of neural networks and support vector machines in the area of severity quantile estimation for drought frequency analysis. In this paper, we compare three methods for this task: multiple linear regression, radial basis function neural networks, and least squares support vector regression (LS-SVR). The area selected for this study includes 32 catchments in the Canadian Prairies. From each catchment, drought severities are extracted and fitted to a Pearson Type III distribution, which acts as the source of observed values. For each method-duration pair, we use a jackknife algorithm to produce estimated values at each site. The results from these three approaches are compared and analyzed, and it is found that LS-SVR provides the best quantile estimates and extrapolation capacity.
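
    LS-SVR replaces the quadratic program of standard SVR with a single linear system, which is what makes it attractive for small hydrological data sets. A minimal sketch on a synthetic 1-D regression problem (the kernel width and regularization constant are hypothetical choices, not the paper's tuned values):

```python
import numpy as np

def rbf(A, B, sigma):
    """Gaussian (RBF) kernel matrix between row-sample sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvr_fit(X, y, gamma=10.0, sigma=1.0):
    """LS-SVR training: solve [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    K = rbf(X, X, sigma)
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]                    # bias b, dual weights alpha

def lssvr_predict(Xtr, alpha, b, Xte, sigma=1.0):
    return rbf(Xte, Xtr, sigma) @ alpha + b

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, (60, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(60)
b, alpha = lssvr_fit(X, y)
Xte = np.array([[0.0], [1.5]])
print(lssvr_predict(X, alpha, b, Xte))  # approximately sin at the test points
```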

  12. Time-frequency peak filtering for random noise attenuation of magnetic resonance sounding signal

    NASA Astrophysics Data System (ADS)

    Lin, Tingting; Zhang, Yang; Yi, Xiaofeng; Fan, Tiehu; Wan, Ling

    2018-05-01

    When measuring in a geomagnetic field, the method of magnetic resonance sounding (MRS) is often limited because of the notably low signal-to-noise ratio (SNR). Most current studies focus on discarding spiky noise and power-line harmonic noise cancellation. However, the effects of random noise should not be underestimated. The common method for random noise attenuation is stacking, but collecting multiple recordings merely to suppress random noise is time-consuming. Moreover, stacking is insufficient to suppress high-level random noise. Here, we propose the use of time-frequency peak filtering for random noise attenuation, which is performed after the traditional de-spiking and power-line harmonic removal method. By encoding the noisy signal with frequency modulation and estimating the instantaneous frequency using the peak of the time-frequency representation of the encoded signal, the desired MRS signal can be acquired from only one stack. The performance of the proposed method is tested on synthetic envelope signals and field data from different surveys. Good estimations of the signal parameters are obtained at different SNRs. Moreover, an attempt to use the proposed method to handle a single recording provides better results compared to 16 stacks. Our results suggest that the number of stacks can be appropriately reduced to shorten the measurement time and improve the measurement efficiency.
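
    The encode/peak/decode idea behind time-frequency peak filtering can be sketched as follows: frequency-modulate the noisy record onto a unit-amplitude analytic signal, then read the instantaneous frequency off the spectrogram peak at each sample. The modulation index, window length, and decaying test envelope below are invented, not the survey's settings:

```python
import numpy as np

def tfpf(x, mu=0.2, win=31, nfft=1024):
    """Time-frequency peak filtering: FM-encode, then take the spectrogram
    peak frequency at each sample as the signal estimate (peak = mu * x[n])."""
    z = np.exp(2j * np.pi * mu * np.cumsum(x))   # unit-amplitude FM encoding
    half = win // 2
    zp = np.pad(z, half, mode="edge")
    h = np.hanning(win)
    freqs = np.fft.fftfreq(nfft)
    est = np.empty(len(x))
    for n in range(len(x)):
        spec = np.abs(np.fft.fft(zp[n:n + win] * h, nfft))
        est[n] = freqs[np.argmax(spec)]          # local instantaneous frequency
    return est / mu

t = np.linspace(0, 1, 500)
clean = np.exp(-3 * t) * np.cos(2 * np.pi * 5 * t)   # MRS-like decaying envelope
rng = np.random.default_rng(2)
noisy = clean + 0.3 * rng.standard_normal(500)
rec = tfpf(noisy)
print(np.mean((rec - clean) ** 2) < np.mean((noisy - clean) ** 2))  # noise reduced
```

    The window length trades bias against noise suppression: a long window averages away random noise but smears the signal's own variation, which is why practical implementations choose it from the expected signal bandwidth.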

  13. Application of Model Based Parameter Estimation for Fast Frequency Response Calculations of Input Characteristics of Cavity-Backed Aperture Antennas Using Hybrid FEM/MoM Technique

    NASA Technical Reports Server (NTRS)

    Reddy C. J.

    1998-01-01

    Model Based Parameter Estimation (MBPE) is presented in conjunction with the hybrid Finite Element Method (FEM)/Method of Moments (MoM) technique for fast computation of the input characteristics of cavity-backed aperture antennas over a frequency range. The hybrid FEM/MoM technique is used to form an integro-partial-differential equation to compute the electric field distribution of a cavity-backed aperture antenna. In MBPE, the electric field is expanded as a rational function of two polynomials. The coefficients of the rational function are obtained using the frequency derivatives of the integro-partial-differential equation formed by the hybrid FEM/MoM technique. Using the rational function approximation, the electric field is obtained over a frequency range. Using the electric field at different frequencies, the input characteristics of the antenna are obtained over a wide frequency range. Numerical results for an open coaxial line, a probe-fed coaxial cavity, and cavity-backed microstrip patch antennas are presented. Good agreement between MBPE results and solutions computed at individual frequencies is observed.
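
    MBPE's core device, replacing many frequency-by-frequency solves with one rational (Padé) approximant built from derivatives at a single expansion point, can be illustrated on a scalar function. The resonance-like test function 1/(1 + x^2) is an assumption for illustration, not the antenna problem:

```python
import numpy as np
from scipy.interpolate import pade

# Taylor coefficients (about x = 0) of 1/(1 + x^2): 1 - x^2 + x^4 - ...
an = [1.0, 0.0, -1.0, 0.0, 1.0]
p, q = pade(an, 2)                      # [2/2] rational approximant from derivatives

x = 3.0                                 # well outside the Taylor radius of convergence
print(p(x) / q(x))                      # ~0.1: the rational form recovers 1/(1+9)
print(np.polyval(an[::-1], x))          # the truncated polynomial blows up: 73.0
```

    The same effect is what MBPE exploits: a rational function captures pole-like (resonant) frequency behavior that a polynomial expansion of the same order cannot, so a few derivative evaluations at one frequency suffice for a wide band.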

  14. Improvement of Microtremor Data Filtering and Processing Methods Used in Determining the Fundamental Frequency of Urban Areas

    NASA Astrophysics Data System (ADS)

    Mousavi Anzehaee, Mohammad; Adib, Ahmad; Heydarzadeh, Kobra

    2015-10-01

    The manner in which microtremor data are collected and filtered, and the method used for processing, have a considerable effect on the accuracy of estimates of dynamic soil parameters. In this paper, a running variance method was used to improve the automatic detection of data sections affected by local perturbations. In this method, the running variance of the microtremor data is computed using a sliding window; the resulting signal is then used to remove the ranges of data affected by perturbations from the original record. Additionally, to determine the fundamental frequency of a site, this study proposes a method based on statistical characteristics. Specifically, statistical characteristics such as the probability density graph and the average and standard deviation of all frequencies corresponding to the maximum peaks in the H/V spectra of all data windows are used to differentiate real peaks from false peaks resulting from perturbations. The methods have been applied to data recorded for the city of Meybod in central Iran. Experimental results show that the applied methods are able to successfully reduce the effects of extensive local perturbations on microtremor data and, eventually, to estimate the fundamental frequency more accurately than other common methods.
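
    The sliding-window running variance can be computed in O(n) with cumulative sums; a minimal sketch (the window length, threshold, and injected perturbation burst are hypothetical):

```python
import numpy as np

def running_variance(x, win):
    """Sliding-window variance via cumulative sums (O(n), no explicit loop).
    Element i is the variance of the window x[i : i + win]."""
    x = np.asarray(x, dtype=float)
    c1 = np.cumsum(np.insert(x, 0, 0.0))
    c2 = np.cumsum(np.insert(x * x, 0, 0.0))
    s1 = c1[win:] - c1[:-win]              # window sums of x
    s2 = c2[win:] - c2[:-win]              # window sums of x^2
    return s2 / win - (s1 / win) ** 2

rng = np.random.default_rng(3)
x = rng.standard_normal(2000)
x[800:850] += 6 * rng.standard_normal(50)  # a burst of local perturbation
v = running_variance(x, 100)
keep = v < 3.0                             # windows with inflated variance get dropped
print(keep[650], keep[790])                # clean window kept, perturbed window flagged
```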

  15. Adaptive Window Zero-Crossing-Based Instantaneous Frequency Estimation

    NASA Astrophysics Data System (ADS)

    Sekhar, S. Chandra; Sreenivas, TV

    2004-12-01

    We address the problem of estimating the instantaneous frequency (IF) of a real-valued, constant-amplitude, time-varying sinusoid. Estimation of polynomial IF is formulated using the zero-crossings of the signal. We propose an algorithm to estimate nonpolynomial IF by local approximation using a low-order polynomial over a short segment of the signal. This involves the choice of window length to minimize the mean square error (MSE). The optimal window length found by directly minimizing the MSE is a function of the higher-order derivatives of the IF, which are not available a priori. However, an optimum solution is formulated using an adaptive window technique based on the concept of intersection of confidence intervals. The adaptive algorithm enables minimum MSE-IF (MMSE-IF) estimation without requiring a priori information about the IF. Simulation results show that the adaptive window zero-crossing-based IF estimation method is superior to fixed window methods and is also better than adaptive spectrogram and adaptive Wigner-Ville distribution (WVD)-based IF estimators at different signal-to-noise ratios (SNRs).
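
    The zero-crossing principle the method builds on can be sketched in a few lines: each gap between successive crossings spans half a local period, so its reciprocal gives a local frequency estimate. This is the raw estimator only, without the polynomial fitting or adaptive windowing described above; the chirp test signal is illustrative:

```python
import numpy as np

def zc_instantaneous_freq(x, fs):
    """Crude IF estimate from half-periods between successive zero-crossings."""
    s = np.signbit(x)
    idx = np.where(s[:-1] != s[1:])[0]          # samples straddling a crossing
    tc = idx + x[idx] / (x[idx] - x[idx + 1])   # linear-interpolated crossing positions
    f = fs / (2.0 * np.diff(tc))                # one half-cycle per crossing gap
    t = (tc[:-1] + tc[1:]) / (2.0 * fs)         # midpoint time of each estimate
    return t, f

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
x = np.cos(2 * np.pi * (40 * t + 15 * t**2))    # linear chirp: IF = 40 + 30 t (Hz)
tm, f = zc_instantaneous_freq(x, fs)
print(f[0], f[-1])  # near 40 Hz at the start, near 70 Hz at the end
```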

  16. Rapid measurement of auditory filter shape in mice using the auditory brainstem response and notched noise.

    PubMed

    Lina, Ioan A; Lauer, Amanda M

    2013-04-01

    The notched noise method is an effective procedure for measuring frequency resolution and auditory filter shapes in both human and animal models of hearing. Briefly, auditory filter shape and bandwidth estimates are derived from masked thresholds for tones presented in noise containing widening spectral notches. As the spectral notch widens, increasingly less of the noise falls within the auditory filter and the tone becomes more detectable, until the notch width exceeds the filter bandwidth. Behavioral procedures have been used for the derivation of notched noise auditory filter shapes in mice; however, the time and effort needed to train and test animals on these tasks place a constraint on the widespread application of this testing method. As an alternative procedure, we combined relatively non-invasive auditory brainstem response (ABR) measurements and the notched noise method to estimate auditory filters in normal-hearing mice at center frequencies of 8, 11.2, and 16 kHz. A complete set of simultaneous masked thresholds for a particular tone frequency were obtained in about an hour. ABR-derived filter bandwidths broadened with increasing frequency, consistent with previous studies. The ABR notched noise procedure provides a fast alternative for estimating frequency selectivity in mice that is well-suited to high-throughput or time-sensitive screening. Copyright © 2013 Elsevier B.V. All rights reserved.

  17. Parameter Estimation of Multiple Frequency-Hopping Signals with Two Sensors

    PubMed Central

    Pan, Jin; Ma, Boyuan

    2018-01-01

    This paper focuses on parameter estimation of multiple wideband emitting sources with time-varying frequencies, such as two-dimensional (2-D) direction of arrival (DOA) estimation and signal sorting, with a low-cost circular synthetic array (CSA) consisting of only two rotating sensors. Our basic idea is to decompose the received data, a superposition of phase measurements from multiple sources, into separate groups and separately estimate the DOA associated with each source. Motivated by joint parameter estimation, we propose to adopt the expectation maximization (EM) algorithm in this paper; our method involves two steps, namely the expectation step (E-step) and the maximization step (M-step). In the E-step, the correspondence of each signal with its emitting source is found. Then, in the M-step, the maximum-likelihood (ML) estimates of the DOA parameters are obtained. These two steps are iteratively and alternately executed to jointly determine the DOAs and sort multiple signals. Closed-form DOA estimation formulae are developed by ML estimation based on phase data, which also realize an optimal estimation. Directional ambiguity is also addressed by another ML estimation method based on received complex responses. The Cramer-Rao lower bound is derived for understanding the estimation accuracy and for performance comparison. The proposed method is verified with simulations. PMID:29617323

  18. Assessment of the Performance of a Dual-Frequency Surface Reference Technique

    NASA Technical Reports Server (NTRS)

    Meneghini, Robert; Liao, Liang; Tanelli, Simone; Durden, Stephen

    2013-01-01

    The high correlation of the rain-free surface cross sections at two frequencies implies that the estimate of differential path integrated attenuation (PIA) caused by precipitation along the radar beam can be obtained to a higher degree of accuracy than the path-attenuation at either frequency. We explore this finding first analytically and then by examining data from the JPL dual-frequency airborne radar using measurements from the TC4 experiment obtained during July-August 2007. Despite this improvement in the accuracy of the differential path attenuation, solving the constrained dual-wavelength radar equations for parameters of the particle size distribution requires not only this quantity but the single-wavelength path attenuation as well. We investigate a simple method of estimating the single-frequency path attenuation from the differential attenuation and compare this with the estimate derived directly from the surface return.

  19. Fried food intake estimated by the multiple source method is associated with gestational weight gain.

    PubMed

    Sartorelli, Daniela S; Barbieri, Patrícia; Perdoná, Gleici C S

    2014-08-01

    The present study aimed to test the association between fried food intake estimated by a semiquantitative food frequency questionnaire (FFQ), multiple 24-hour dietary recalls (24hRs), and the application of the multiple source method (MSM) in relation to gestational weight gain at the second and third trimesters and the weight gain ratio (observed weight gain/expected weight gain). We hypothesized that distinct relationships with weight gain would be found given the measurement errors of self-reported dietary approaches. A prospective study was conducted with 88 adult pregnant women. Fried food intake during pregnancy was assessed using a validated 85-item FFQ, two to six 24hRs per woman, and the MSM with and without frequency of food intake as a covariate. Linear regression models were used to evaluate the relationship between fried food intake estimated by the methods and weight gain. For every 100-g increment of fried food intake, the β (95% confidence interval) for weight gain was β 1.87 (0.34, 3.40) and β 2.00 (0.55, 3.45) for estimates using the MSM with and without the frequency of intake as a covariate, respectively, after multiple adjustments. We found that fried food intake estimated by the FFQ and 24hRs, β 0.40 (-0.68, 1.48) and β 0.49 (-0.53, 1.52) respectively, was unrelated to weight gain. In relation to the weight gain ratio, a positive association was found for estimates using the MSM with [β 0.29 (0.03, 0.54)] and without the frequency of intake as a covariate [β 0.31 (0.07, 0.55)]; no associations were found for estimates by the FFQ or 24hRs. The data showed that fried food intake estimated by the MSM, but not by the FFQ and 24hRs, is associated with excessive weight gain during pregnancy. Copyright © 2014 Elsevier Inc. All rights reserved.

  20. Use of historical information in extreme surge frequency estimation: case of the marine flooding on the La Rochelle site in France

    NASA Astrophysics Data System (ADS)

    Hamdi, Y.; Bardet, L.; Duluc, C.-M.; Rebour, V.

    2014-09-01

    Nuclear power plants located on the French Atlantic coast are designed to be protected against extreme environmental conditions. The French authorities remain cautious by adopting a strict policy of nuclear plant flood prevention. Although coastal nuclear facilities in France are designed to very low probabilities of failure (e.g. the 1000-year surge), exceptional surges (outliers induced by exceptional climatic events) have shown that the extreme sea levels estimated with current statistical approaches could be underestimated. The estimation of extreme surges therefore requires a statistical analysis approach with a more solid theoretical motivation. This paper deals with extreme surge frequency estimation using historical information (HI) about events that occurred before the systematic record period. It also contributes to addressing the problem of the presence of outliers in data sets. The frequency models presented in this paper have been quite successful in the fields of hydrometeorology and river flooding, but they have not previously been applied to sea level data sets to prevent marine flooding. In this work, we suggest two methods of incorporating the HI: the Peaks-Over-Threshold method with HI (POTH) and the Block Maxima method with HI (BMH). Two kinds of historical data can be used in the POTH method: classical Historical Maxima (HMax) data and Over-a-Threshold Supplementary (OTS) data. In both cases, the data are structured in historical periods and can be used only as a complement to the main systematic data. In the BMH method, on the other hand, the basic hypothesis in the statistical modeling of HI is that at least one threshold of perception exists for the whole period (historical and systematic) and that, during a given historical period preceding the period of tide gauging, only information about surges above this threshold has been recorded or archived. 
The two frequency models were applied to a case study from France, at the La Rochelle site, where the storm Xynthia induced an outlier, to illustrate their potential, to compare their performances and, especially, to analyze the impact of the use of HI on extreme surge frequency estimation.

  1. Use of historical information in extreme-surge frequency estimation: the case of marine flooding on the La Rochelle site in France

    NASA Astrophysics Data System (ADS)

    Hamdi, Y.; Bardet, L.; Duluc, C.-M.; Rebour, V.

    2015-07-01

    Nuclear power plants located on the French Atlantic coast are designed to be protected against extreme environmental conditions. The French authorities remain cautious by adopting a strict policy of nuclear-plant flood prevention. Although coastal nuclear facilities in France are designed to very low probabilities of failure (e.g., the 1000-year surge), exceptional surges (outliers induced by exceptional climatic events) have shown that the extreme sea levels estimated with current statistical approaches could be underestimated. The estimation of extreme surges therefore requires a statistical analysis approach with a more solid theoretical motivation. This paper deals with extreme-surge frequency estimation using historical information (HI) about events that occurred before the systematic record period. It also contributes to addressing the problem of the presence of outliers in data sets. The frequency models presented in this paper have been quite successful in the fields of hydrometeorology and river flooding, but they have not previously been applied to sea level data sets to prevent marine flooding. In this work, we suggest two methods of incorporating the HI: the peaks-over-threshold method with HI (POTH) and the block maxima method with HI (BMH). Two kinds of historical data can be used in the POTH method: classical historical maxima (HMax) data and over-a-threshold supplementary (OTS) data. In both cases, the data are structured in historical periods and can be used only as a complement to the main systematic data. In the BMH method, on the other hand, the basic hypothesis in the statistical modeling of HI is that at least one threshold of perception exists for the whole period (historical and systematic) and that, during a given historical period preceding the period of tide gauging, only information about surges above this threshold has been recorded or archived. 
The two frequency models were applied to a case study from France, at the La Rochelle site, where the storm Xynthia induced an outlier, to illustrate their potential, to compare their performances and, especially, to analyze the impact of the use of HI on extreme-surge frequency estimation.
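
The systematic-data half of a peaks-over-threshold analysis like POTH can be sketched as follows. This is only a minimal illustration on synthetic surges with an arbitrary threshold; the incorporation of historical HMax/OTS data, which is the paper's actual contribution, is omitted.

```python
# Minimal POT sketch: fit a Generalized Pareto distribution to threshold
# exceedances and compute a return level. Surge values, the threshold
# choice, and the return period are all invented for illustration.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)
surges = rng.gumbel(loc=0.4, scale=0.15, size=2000)  # synthetic skew-surge record (m)

u = np.quantile(surges, 0.95)            # selection threshold (plays the "perception" role)
exc = surges[surges > u] - u             # exceedances over the threshold
shape, loc, scale = genpareto.fit(exc, floc=0.0)

lam = exc.size / surges.size             # exceedance rate per observation
T = 1000                                 # return period, in observations here
x_T = u + genpareto.ppf(1.0 - 1.0 / (lam * T), shape, loc=0.0, scale=scale)
```

In a real application the exceedance rate would be expressed per year and the threshold chosen by diagnostic plots rather than a fixed quantile.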

  2. Estimating and comparing microbial diversity in the presence of sequencing errors

    PubMed Central

    Chiu, Chun-Huo

    2016-01-01

    Estimating and comparing microbial diversity are statistically challenging due to limited sampling and possible sequencing errors for low-frequency counts, producing spurious singletons. The inflated singleton count seriously affects statistical analysis and inferences about microbial diversity. Previous statistical approaches to tackle the sequencing errors generally require different parametric assumptions about the sampling model or about the functional form of frequency counts. Different parametric assumptions may lead to drastically different diversity estimates. We focus on nonparametric methods which are universally valid for all parametric assumptions and can be used to compare diversity across communities. We develop here a nonparametric estimator of the true singleton count to replace the spurious singleton count in all methods/approaches. Our estimator of the true singleton count is in terms of the frequency counts of doubletons, tripletons and quadrupletons, provided these three frequency counts are reliable. To quantify microbial alpha diversity for an individual community, we adopt the measure of Hill numbers (effective number of taxa) under a nonparametric framework. Hill numbers, parameterized by an order q that determines the measures’ emphasis on rare or common species, include taxa richness (q = 0), Shannon diversity (q = 1, the exponential of Shannon entropy), and Simpson diversity (q = 2, the inverse of Simpson index). A diversity profile which depicts the Hill number as a function of order q conveys all information contained in a taxa abundance distribution. Based on the estimated singleton count and the original non-singleton frequency counts, two statistical approaches (non-asymptotic and asymptotic) are developed to compare microbial diversity for multiple communities. (1) A non-asymptotic approach refers to the comparison of estimated diversities of standardized samples with a common finite sample size or sample completeness. 
This approach aims to compare diversity estimates for equally large or equally complete samples; it is based on the seamless rarefaction and extrapolation sampling curves of Hill numbers, specifically for q = 0, 1 and 2. (2) An asymptotic approach refers to the comparison of the estimated asymptotic diversity profiles. That is, this approach compares the estimated profiles for complete samples or samples whose size tends to be sufficiently large. It is based on statistical estimation of the true Hill number of any order q ≥ 0. In both approaches, replacing the spurious singleton count with our estimated count largely removes the positive biases in diversity estimates due to spurious singletons and permits fair comparisons across microbial communities, as illustrated in our simulation results and in the application of our method to sequencing data from viral metagenomes. PMID:26855872
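
As a concrete illustration of the Hill-number framework described above, the sketch below computes empirical Hill numbers of orders q = 0, 1, 2 from a vector of taxon counts. The counts are made up for illustration, and the paper's singleton correction and rarefaction/extrapolation machinery are not reproduced.

```python
# Empirical Hill numbers (effective numbers of taxa) from abundance counts.
import numpy as np

def hill_number(counts, q):
    """Empirical Hill number of order q for a vector of taxon counts."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    if q == 1:                      # limit case: exponential of Shannon entropy
        return float(np.exp(-np.sum(p * np.log(p))))
    return float(np.sum(p ** q) ** (1.0 / (1.0 - q)))

counts = [50, 30, 10, 5, 3, 1, 1]   # hypothetical taxon abundances
richness = hill_number(counts, 0)   # q = 0: observed taxon richness
shannon = hill_number(counts, 1)    # q = 1: Shannon diversity
simpson = hill_number(counts, 2)    # q = 2: inverse Simpson index
```

As the abstract notes, increasing q shifts emphasis from rare to common taxa, so the three values decrease monotonically for any uneven community.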

  3. Electrical and magnetic properties of rock and soil

    USGS Publications Warehouse

    Scott, J.H.

    1983-01-01

    Field and laboratory measurements have been made to determine the electrical conductivity, dielectric constant, and magnetic permeability of rock and soil in areas of interest for studies of electromagnetic pulse propagation. Conductivity is determined by making field measurements of apparent resistivity at very low frequencies (0-20 cps) and interpreting the true resistivity of layers at various depths by curve-matching methods. Interpreted resistivity values are converted to corresponding conductivity values, which are assumed to be applicable at 10^2 cps, an assumption considered valid because the conductivity of rock and soil is nearly constant at frequencies below 10^2 cps. Conductivity is estimated at higher frequencies (up to 10^6 cps) by using statistical correlations of three parameters obtained from laboratory measurements of rock and soil samples: conductivity at 10^2 cps, frequency, and conductivity measured over the range 10^2 to 10^6 cps. Conductivity may also be estimated in this frequency range by using field measurements of water content and correlations of laboratory sample measurements of the three parameters: water content, frequency, and conductivity measured over the range 10^2 to 10^6 cps. This method is less accurate because nonrandom variation of ion concentration in natural pore water introduces error. Dielectric constant is estimated in a similar manner from field-derived conductivity values applicable at 10^2 cps and statistical correlations of three parameters obtained from laboratory measurements of samples: conductivity measured at 10^2 cps, frequency, and dielectric constant measured over the frequency range 10^2 to 10^6 cps. 
Dielectric constant may also be estimated from field measurements of water content and correlations of laboratory sample measurements of the three parameters: water content, frequency, and dielectric constant measured from 10^2 to 10^6 cps; but again, this method is less accurate because of variation in the ion concentration of pore water. Special laboratory procedures are used to measure the conductivity and dielectric constant of rock and soil samples. Electrode polarization errors are minimized by using an electrode system that is electrochemically reversible with ions in the pore water.

  4. Cross Time-Frequency Analysis for Combining Information of Several Sources: Application to Estimation of Spontaneous Respiratory Rate from Photoplethysmography

    PubMed Central

    Peláez-Coca, M. D.; Orini, M.; Lázaro, J.; Bailón, R.; Gil, E.

    2013-01-01

    A methodology that combines information from several nonstationary biological signals is presented. It is based on time-frequency coherence, which quantifies the similarity of two signals in the time-frequency domain. A cross time-frequency analysis method, based on quadratic time-frequency distributions, is used to combine the information of several nonstationary biomedical signals. To evaluate this methodology, the respiratory rate is estimated from the photoplethysmographic (PPG) signal. Respiration provokes simultaneous changes in the pulse interval, amplitude, and width of the PPG signal, which suggests that combining information from these sources will improve the accuracy of the respiratory rate estimate. Another aim of this paper is to implement an algorithm that provides robust estimation; therefore, the respiratory rate was estimated only in those intervals where the features extracted from the PPG signal are linearly coupled. In 38 spontaneously breathing subjects, among which 7 were characterized by a respiratory rate lower than 0.15 Hz, this methodology provided accurate estimates, with a median error of {0.00; 0.98} mHz ({0.00; 0.31}%) and an interquartile range of the error of {4.88; 6.59} mHz ({1.60; 1.92}%). The estimation error of the presented methodology was substantially lower than the estimation error obtained without combining different respiration-related PPG features. PMID:24363777

  5. Moving target detection for frequency agility radar by sparse reconstruction

    NASA Astrophysics Data System (ADS)

    Quan, Yinghui; Li, YaChao; Wu, Yaojun; Ran, Lei; Xing, Mengdao; Liu, Mengqi

    2016-09-01

    Frequency agility radar, with carrier frequency varied randomly from pulse to pulse, exhibits superior performance against electromagnetic interference compared to conventional fixed-carrier-frequency pulse-Doppler radar. A novel moving target detection (MTD) method is proposed for estimating the target velocity in frequency agility radar, based on the pulses within a coherent processing interval and using sparse reconstruction. A hardware implementation of the orthogonal matching pursuit algorithm is executed on a Xilinx Virtex-7 Field Programmable Gate Array (FPGA) to perform the sparse optimization. Finally, a series of experiments is performed to evaluate the performance of the proposed MTD method for frequency agility radar systems.

  6. Moving target detection for frequency agility radar by sparse reconstruction.

    PubMed

    Quan, Yinghui; Li, YaChao; Wu, Yaojun; Ran, Lei; Xing, Mengdao; Liu, Mengqi

    2016-09-01

    Frequency agility radar, with carrier frequency varied randomly from pulse to pulse, exhibits superior performance against electromagnetic interference compared to conventional fixed-carrier-frequency pulse-Doppler radar. A novel moving target detection (MTD) method is proposed for estimating the target velocity in frequency agility radar, based on the pulses within a coherent processing interval and using sparse reconstruction. A hardware implementation of the orthogonal matching pursuit algorithm is executed on a Xilinx Virtex-7 Field Programmable Gate Array (FPGA) to perform the sparse optimization. Finally, a series of experiments is performed to evaluate the performance of the proposed MTD method for frequency agility radar systems.
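
The orthogonal matching pursuit step at the heart of this method can be sketched generically as follows. A random Gaussian dictionary and a made-up 3-sparse vector stand in for the radar's Doppler dictionary and target returns; this is not the FPGA implementation described above.

```python
# Generic orthogonal matching pursuit (OMP): greedily pick the dictionary
# atom most correlated with the residual, then re-fit on the chosen support.
import numpy as np

def omp(A, y, k):
    """Greedy recovery of a k-sparse x with y ≈ A @ x."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))    # most-correlated atom
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef           # orthogonalized residual
    x[support] = coef
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((64, 256))
A /= np.linalg.norm(A, axis=0)                        # unit-norm atoms
x_true = np.zeros(256)
x_true[[10, 100, 200]] = [1.0, -2.0, 0.5]             # hypothetical sparse scene
x_hat = omp(A, A @ x_true, k=3)
```

Because the residual is re-orthogonalized against the whole selected support at each step, an atom is never picked twice.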

  7. Error Analysis of Clay-Rock Water Content Estimation with Broadband High-Frequency Electromagnetic Sensors—Air Gap Effect

    PubMed Central

    Bore, Thierry; Wagner, Norman; Delepine Lesoille, Sylvie; Taillade, Frederic; Six, Gonzague; Daout, Franck; Placko, Dominique

    2016-01-01

    Broadband electromagnetic frequency- or time-domain sensor techniques have high potential for quantitative water content monitoring in porous media. Prior to in situ application, the impact of the relationship between the broadband electromagnetic properties of the porous material (clay-rock) and the water content on the frequency- or time-domain sensor response must be characterized. For this purpose, dielectric properties of intact clay-rock samples, experimentally determined in the frequency range from 1 MHz to 10 GHz, were used as input data in 3-D numerical frequency-domain finite element field calculations to model the one-port broadband frequency- or time-domain transfer function for a three-rod sensor embedded in the clay-rock. The sensor response, in terms of the reflection factor, was analyzed in the time domain with classical travel-time analysis in combination with an empirical model based on the Topp equation, as well as the theoretical Lichtenecker and Rother model (LRM), to estimate the volumetric water content. The mixture equation, with the appropriate porosity of the investigated material, provides a practical and efficient approach for water content estimation based on classical travel-time analysis with the onset method. The inflection method is not recommended for water content estimation in electrically dispersive and absorptive material. Moreover, the results clearly indicate that effects due to coupling of the sensor to the material cannot be neglected. Coupling problems caused by an air gap lead to dramatic effects on water content estimation, even for submillimeter gaps. Thus, quantitative determination of the in situ water content requires careful sensor installation in order to achieve perfect probe/clay-rock coupling. PMID:27096865

  8. Error Analysis of Clay-Rock Water Content Estimation with Broadband High-Frequency Electromagnetic Sensors--Air Gap Effect.

    PubMed

    Bore, Thierry; Wagner, Norman; Lesoille, Sylvie Delepine; Taillade, Frederic; Six, Gonzague; Daout, Franck; Placko, Dominique

    2016-04-18

    Broadband electromagnetic frequency- or time-domain sensor techniques have high potential for quantitative water content monitoring in porous media. Prior to in situ application, the impact of the relationship between the broadband electromagnetic properties of the porous material (clay-rock) and the water content on the frequency- or time-domain sensor response must be characterized. For this purpose, dielectric properties of intact clay-rock samples, experimentally determined in the frequency range from 1 MHz to 10 GHz, were used as input data in 3-D numerical frequency-domain finite element field calculations to model the one-port broadband frequency- or time-domain transfer function for a three-rod sensor embedded in the clay-rock. The sensor response, in terms of the reflection factor, was analyzed in the time domain with classical travel-time analysis in combination with an empirical model based on the Topp equation, as well as the theoretical Lichtenecker and Rother model (LRM), to estimate the volumetric water content. The mixture equation, with the appropriate porosity of the investigated material, provides a practical and efficient approach for water content estimation based on classical travel-time analysis with the onset method. The inflection method is not recommended for water content estimation in electrically dispersive and absorptive material. Moreover, the results clearly indicate that effects due to coupling of the sensor to the material cannot be neglected. Coupling problems caused by an air gap lead to dramatic effects on water content estimation, even for submillimeter gaps. Thus, quantitative determination of the in situ water content requires careful sensor installation in order to achieve perfect probe/clay-rock coupling.

  9. A comparative study of clock rate and drift estimation

    NASA Technical Reports Server (NTRS)

    Breakiron, Lee A.

    1994-01-01

    Five different methods of drift determination and four different methods of rate determination were compared using months of hourly phase and frequency data from a sample of cesium clocks and active hydrogen masers. Linear least squares on frequency is selected as the optimal method of determining both drift and rate, more on the basis of parameter parsimony and confidence measures than on random and systematic errors.
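
The "linear least squares on frequency" approach selected above can be sketched as a straight-line fit to fractional-frequency data: the slope estimates the drift and the intercept the rate. All numbers below are invented, loosely cesium-like values.

```python
# Fit y(t) = r + d*t to simulated fractional-frequency data;
# the slope d is the drift estimate, the intercept r the rate estimate.
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(720.0)                          # hourly samples over 30 days
rate, drift = 1.5e-13, 2.0e-17                # assumed true rate and drift
y = rate + drift * t + 1e-15 * rng.standard_normal(t.size)  # white FM noise

d_hat, r_hat = np.polyfit(t, y, 1)            # slope ≈ drift, intercept ≈ rate
```

Note that `np.polyfit` returns coefficients highest degree first, so the slope precedes the intercept in the unpacking.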

  10. Methods for determining magnitude and frequency of floods in California, based on data through water year 2006

    USGS Publications Warehouse

    Gotvald, Anthony J.; Barth, Nancy A.; Veilleux, Andrea G.; Parrett, Charles

    2012-01-01

    Methods for estimating the magnitude and frequency of floods in California that are not substantially affected by regulation or diversions have been updated. Annual peak-flow data through water year 2006 were analyzed for 771 streamflow-gaging stations (streamgages) in California having 10 or more years of data. Flood-frequency estimates were computed for the streamgages by using the expected moments algorithm to fit a Pearson Type III distribution to logarithms of annual peak flows for each streamgage. Low-outlier and historic information were incorporated into the flood-frequency analysis, and a generalized Grubbs-Beck test was used to detect multiple potentially influential low outliers. Special methods for fitting the distribution were developed for streamgages in the desert region in southeastern California. Additionally, basin characteristics for the streamgages were computed by using a geographical information system. Regional regression analysis, using generalized least squares regression, was used to develop a set of equations for estimating flows with 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probabilities for ungaged basins in California that are outside of the southeastern desert region. Flood-frequency estimates and basin characteristics for 630 streamgages were combined to form the final database used in the regional regression analysis. Five hydrologic regions were developed for the area of California outside of the desert region. The final regional regression equations are functions of drainage area and mean annual precipitation for four of the five regions. In one region, the Sierra Nevada region, the final equations are functions of drainage area, mean basin elevation, and mean annual precipitation. Average standard errors of prediction for the regression equations in all five regions range from 42.7 to 161.9 percent. 
For the desert region of California, an analysis of 33 streamgages was used to develop regional estimates of all three parameters (mean, standard deviation, and skew) of the log-Pearson Type III distribution. The regional estimates were then used to develop a set of equations for estimating flows with 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probabilities for ungaged basins. The final regional regression equations are functions of drainage area. Average standard errors of prediction for these regression equations range from 214.2 to 856.2 percent. Annual peak-flow data through water year 2006 were analyzed for eight streamgages in California having 10 or more years of data considered to be affected by urbanization. Flood-frequency estimates were computed for the urban streamgages by fitting a Pearson Type III distribution to logarithms of annual peak flows for each streamgage. Regression analysis could not be used to develop flood-frequency estimation equations for urban streams because of the limited number of sites. Flood-frequency estimates for the eight urban sites were graphically compared to flood-frequency estimates for 630 non-urban sites. The regression equations developed from this study will be incorporated into the U.S. Geological Survey (USGS) StreamStats program. The StreamStats program is a Web-based application that provides streamflow statistics and basin characteristics for USGS streamgages and ungaged sites of interest. StreamStats can also compute basin characteristics and provide estimates of streamflow statistics for ungaged sites when users select the location of a site along any stream in California.
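
An at-site log-Pearson Type III fit of the kind described above can be sketched as follows. The peak flows are synthetic, and the report itself uses the expected moments algorithm with regional skew information rather than the generic maximum-likelihood fit used here.

```python
# Hedged sketch: fit a Pearson Type III distribution to log10 annual peaks,
# then read off flows at the standard annual exceedance probabilities (AEPs).
import numpy as np
from scipy.stats import pearson3

rng = np.random.default_rng(3)
peaks = rng.lognormal(mean=7.0, sigma=0.6, size=60)   # synthetic annual peak flows

logq = np.log10(peaks)
skew, loc, scale = pearson3.fit(logq)                 # LP3 fit in log10 space

aep = np.array([0.5, 0.2, 0.1, 0.04, 0.02, 0.01, 0.005, 0.002])
flows = 10 ** pearson3.ppf(1.0 - aep, skew, loc=loc, scale=scale)
```

The quantiles grow monotonically as the exceedance probability falls, which is the basic sanity check on any frequency curve.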

  11. Non-stationary signal analysis based on general parameterized time-frequency transform and its application in the feature extraction of a rotary machine

    NASA Astrophysics Data System (ADS)

    Zhou, Peng; Peng, Zhike; Chen, Shiqian; Yang, Yang; Zhang, Wenming

    2018-06-01

    With the development of large rotary machines for faster and more integrated performance, their condition monitoring and fault diagnosis are becoming more challenging. Since the time-frequency (TF) pattern of the vibration signal from a rotary machine often contains condition information and fault features, methods based on TF analysis have been widely used to solve these two problems in industry. This article introduces an effective non-stationary signal analysis method based on the general parameterized time-frequency transform (GPTFT). The GPTFT is obtained by inserting a rotation operator and a shift operator into the short-time Fourier transform, and it can produce a highly concentrated TF pattern with a general kernel. A multi-component instantaneous frequency (IF) extraction method is proposed based on it. The IF of each component is estimated by defining a spectrum concentration index (SCI), and this estimation process is iterated until all components are extracted. Tests on three simulation examples and a real vibration signal demonstrate the effectiveness and superiority of the method.

  12. Adaptive model reduction for continuous systems via recursive rational interpolation

    NASA Technical Reports Server (NTRS)

    Lilly, John H.

    1994-01-01

    A method for adaptive identification of reduced-order models for continuous stable SISO and MIMO plants is presented. The method recursively finds a model whose transfer function (matrix) matches that of the plant on a set of frequencies chosen by the designer. The algorithm utilizes the Moving Discrete Fourier Transform (MDFT) to continuously monitor the frequency-domain profile of the system input and output signals. The MDFT is an efficient method of monitoring discrete points in the frequency domain of an evolving function of time. The model parameters are estimated from MDFT data using standard recursive parameter estimation techniques. The algorithm has been shown in simulations to be quite robust to additive noise in the inputs and outputs. A significant advantage of the method is that it enables a type of on-line model validation. This is accomplished by simultaneously identifying a number of models and comparing each with the plant in the frequency domain. Simulations of the method applied to an 8th-order SISO plant and a 10-state 2-input 2-output plant are presented. An example of on-line model validation applied to the SISO plant is also presented.
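
The recursive bin-tracking idea behind the MDFT can be sketched with a textbook sliding-DFT update, shown below on an arbitrary signal and bin; this is an illustration of the monitoring principle, not the authors' identification algorithm.

```python
# Sliding DFT: update one N-point DFT bin in O(1) per new sample instead of
# recomputing the full transform for every window position.
import numpy as np

def sliding_dft_bin(x, N, k):
    """Track the k-th N-point DFT bin over every length-N sliding window of x."""
    w = np.exp(2j * np.pi * k / N)
    n0 = np.arange(N)
    X = complex(np.sum(x[:N] * np.exp(-2j * np.pi * k * n0 / N)))  # first window
    out = [X]
    for n in range(N, len(x)):
        X = (X - x[n - N] + x[n]) * w     # drop oldest sample, add newest, rotate
        out.append(X)
    return np.array(out)

# Cross-check the last recursive value against a direct DFT of the final window.
rng = np.random.default_rng(4)
x = rng.standard_normal(256)
N, k = 64, 5
X_rec = sliding_dft_bin(x, N, k)[-1]
X_dir = np.sum(x[-N:] * np.exp(-2j * np.pi * k * np.arange(N) / N))
```

Tracking only the designer-chosen bins is what makes continuous frequency-domain monitoring cheap enough to run alongside recursive parameter estimation.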

  13. Techniques for estimating the magnitude and frequency of floods in rural basins of South Carolina, 1999

    USGS Publications Warehouse

    Feaster, Toby D.; Tasker, Gary D.

    2002-01-01

    Data from 167 streamflow-gaging stations in or near South Carolina with 10 or more years of record through September 30, 1999, were used to develop two methods for estimating the magnitude and frequency of floods in South Carolina for rural ungaged basins that are not significantly affected by regulation. Flood frequency estimates for 54 gaged sites in South Carolina were computed by fitting the water-year peak flows for each site to a log-Pearson Type III distribution. As part of the computation of flood-frequency estimates for gaged sites, new values for generalized skew coefficients were developed. Flood-frequency analyses also were made for gaging stations that drain basins from more than one physiographic province. The U.S. Geological Survey, in cooperation with the South Carolina Department of Transportation, updated these data from previous flood-frequency reports to aid officials who are active in floodplain management as well as those who design bridges, culverts, and levees, or other structures near streams where flooding is likely to occur. Regional regression analysis, using generalized least squares regression, was used to develop a set of predictive equations that can be used to estimate the 2-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-year recurrence-interval flows for rural ungaged basins in the Blue Ridge, Piedmont, upper Coastal Plain, and lower Coastal Plain physiographic provinces of South Carolina. The predictive equations are all functions of drainage area. Average errors of prediction for these regression equations ranged from -16 to 19 percent for the 2-year recurrence-interval flow in the upper Coastal Plain to -34 to 52 percent for the 500-year recurrence interval flow in the lower Coastal Plain. A region-of-influence method also was developed that interactively estimates recurrence- interval flows for rural ungaged basins in the Blue Ridge of South Carolina. 
The region-of-influence method uses regression techniques to develop a unique relation between flow and basin characteristics for an individual watershed. This, then, can be used to estimate flows at ungaged sites. Because the computations required for this method are somewhat complex, a computer application was developed that performs the computations and compares the predictive errors for this method. The computer application includes the option of using the region-of-influence method, or the generalized least squares regression equations from this report to compute estimated flows and errors of prediction specific to each ungaged site. From a comparison of predictive errors using the region-of-influence method with those computed using the regional regression method, the region-of-influence method performed systematically better only in the Blue Ridge and is, therefore, not recommended for use in the other physiographic provinces. Peak-flow data for the South Carolina stations used in the regionalization study are provided in appendix A, which contains gaging station information, log-Pearson Type III statistics, information on stage-flow relations, and water-year peak stages and flows. For informational purposes, water-year peak-flow data for stations on regulated streams in South Carolina also are provided in appendix D. Other information pertaining to the regulated streams is provided in the text of the report.

  14. Estimating integrated variance in the presence of microstructure noise using linear regression

    NASA Astrophysics Data System (ADS)

    Holý, Vladimír

    2017-07-01

    Using financial high-frequency data to estimate the integrated variance of asset prices is beneficial, but as the number of observations increases, so-called microstructure noise appears. This noise can significantly bias the realized variance estimator. We propose a method for estimating the integrated variance that is robust to microstructure noise, as well as for testing for the presence of the noise. Our method utilizes a linear regression in which realized variances estimated from different data subsamples act as the dependent variable while the number of observations acts as the explanatory variable. We compare the proposed estimator with other methods on simulated data for several microstructure noise structures.
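
A minimal simulation of the regression idea follows. Assuming i.i.d. additive noise, E[RV(n)] = IV + 2nσ², so regressing subsample realized variances on the number of returns recovers the integrated variance as the intercept and the noise level from the slope; all numbers are illustrative, not the paper's design.

```python
# Realized variances from subsamples of varying coarseness, regressed on the
# number of returns: intercept ≈ integrated variance, slope ≈ 2·sigma_noise².
import numpy as np

rng = np.random.default_rng(5)
n_full, iv, s_noise = 23400, 1e-4, 5e-4
p_eff = np.cumsum(rng.normal(0.0, np.sqrt(iv / n_full), n_full))  # efficient log-price
p_obs = p_eff + rng.normal(0.0, s_noise, n_full)                  # observed = efficient + noise

ns, rvs = [], []
for step in (1, 2, 5, 10, 20, 50):        # subsampling grids
    sub = p_obs[::step]
    ns.append(sub.size - 1)               # number of returns in the subsample
    rvs.append(float(np.sum(np.diff(sub) ** 2)))

slope, intercept = np.polyfit(ns, rvs, 1)
```

The simulation also shows why naive realized variance fails: at the finest grid the noise term 2nσ² dwarfs the integrated variance itself.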

  15. Overcoming the winner's curse: estimating penetrance parameters from case-control data.

    PubMed

    Zollner, Sebastian; Pritchard, Jonathan K

    2007-04-01

    Genomewide association studies are now a widely used approach in the search for loci that affect complex traits. After detection of significant association, estimates of penetrance and allele-frequency parameters for the associated variant indicate the importance of that variant and facilitate the planning of replication studies. However, when these estimates are based on the original data used to detect the variant, the results are affected by an ascertainment bias known as the "winner's curse." The actual genetic effect is typically smaller than its estimate. This overestimation of the genetic effect may cause replication studies to fail because the necessary sample size is underestimated. Here, we present an approach that corrects for the ascertainment bias and generates an estimate of the frequency of a variant and its penetrance parameters. The method produces a point estimate and confidence region for the parameter estimates. We study the performance of this method using simulated data sets and show that it is possible to greatly reduce the bias in the parameter estimates, even when the original association study had low power. The uncertainty of the estimate decreases with increasing sample size, independent of the power of the original test for association. Finally, we show that application of the method to case-control data can improve the design of replication studies considerably.

  16. Crack identification method in beam-like structures using changes in experimentally measured frequencies and Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Khatir, Samir; Dekemele, Kevin; Loccufier, Mia; Khatir, Tawfiq; Abdel Wahab, Magd

    2018-02-01

    In this paper, a technique is presented for the detection and localization of an open crack in beam-like structures using experimentally measured natural frequencies and the Particle Swarm Optimization (PSO) method. The technique considers the variation in local flexibility near the crack. The natural frequencies of a cracked beam are determined experimentally and numerically using the Finite Element Method (FEM). The optimization algorithm is programmed in MATLAB. The algorithm is used to estimate the location and severity of a crack by minimizing the differences between measured and calculated frequencies. The method is verified using experimentally measured data on a cantilever steel beam. The Fourier transform is adopted to improve the frequency resolution. The results demonstrate the good accuracy of the proposed technique.
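
A bare-bones PSO of the kind used above can be sketched as follows, minimizing a toy quadratic that stands in for the measured-versus-calculated frequency residual; the objective, bounds, and tuning constants are invented for illustration.

```python
# Minimal particle swarm optimization: each particle moves under inertia plus
# attraction to its personal best and the global best position.
import numpy as np

def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=6):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest = x.copy()
    pval = np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pval)]
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)                 # keep particles inside bounds
        fx = np.apply_along_axis(f, 1, x)
        improved = fx < pval
        pbest[improved], pval[improved] = x[improved], fx[improved]
        g = pbest[np.argmin(pval)]
    return g, float(pval.min())

# Hypothetical "crack location/severity" optimum at (0.3, 0.05).
cost = lambda p: (p[0] - 0.3) ** 2 + (p[1] - 0.05) ** 2
best, val = pso(cost, (np.array([0.0, 0.0]), np.array([1.0, 0.2])))
```

In the paper's setting the cost would instead be the discrepancy between measured and FEM-calculated natural frequencies as a function of crack parameters.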

  17. The instantaneous frequency rate spectrogram

    NASA Astrophysics Data System (ADS)

    Czarnecki, Krzysztof

    2016-01-01

    An accelerogram of the instantaneous phase of signal components, referred to as an instantaneous frequency rate spectrogram (IFRS), is presented as a joint time-frequency distribution. The distribution is directly obtained by processing the short-time Fourier transform (STFT) locally. A novel approach to amplitude demodulation based upon the reassignment method is introduced as a useful by-product. Additionally, an estimator of energy density versus the instantaneous frequency rate (IFR) is proposed and referred to as the IFR profile. The energy density is estimated based upon both the classical energy spectrogram and the IFRS smoothed by a median filter. Moreover, the impact of the analysis window width, additive white Gaussian noise, and observation time is tested. Finally, the introduced method is used for the analysis of the acoustic emission of an automotive engine. The recording of the engine of a Lamborghini Gallardo is analyzed as an example.


  18. Precession missile feature extraction using sparse component analysis of radar measurements

    NASA Astrophysics Data System (ADS)

    Liu, Lihua; Du, Xiaoyong; Ghogho, Mounir; Hu, Weidong; McLernon, Des

    2012-12-01

    According to the working mode of the ballistic missile warning radar (BMWR), the radar return from the BMWR is usually sparse. To recognize and identify the warhead, it is necessary to extract the precession frequency and the locations of the scattering centers of the missile. This article first analyzes the radar signal model of the precessing conical missile during flight and develops the sparse dictionary, which is parameterized by the unknown precession frequency. Based on the sparse dictionary, the sparse signal model is then established. A nonlinear least-squares estimation is applied to obtain a coarse estimate of the precession frequency in the sparse dictionary. Based on the time segmented radar signal, a sparse component analysis method using the orthogonal matching pursuit algorithm is then proposed to jointly estimate the precession frequency and the scattering centers of the missile. Simulation results illustrate the validity of the proposed method.
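
    The orthogonal matching pursuit step can be sketched generically. In the example below, a random dictionary stands in for the precession-parameterized one; OMP greedily selects the atom most correlated with the residual and re-fits all selected atoms by least squares, recovering a 3-sparse coefficient vector exactly in the noise-free case.

```python
import numpy as np

rng = np.random.default_rng(2)

def omp(D, y, k):
    """Orthogonal matching pursuit: greedily select k atoms of D to explain y."""
    residual, support = y.copy(), []
    for _ in range(k):
        # Pick the atom most correlated with the current residual
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        # Re-fit all selected atoms jointly by least squares
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

# Sparse synthesis test: 3 active atoms out of 128
m, n, k = 64, 128, 3
D = rng.normal(size=(m, n))
D /= np.linalg.norm(D, axis=0)          # unit-norm atoms
x_true = np.zeros(n)
x_true[[7, 40, 101]] = [1.5, -2.0, 1.0]
y = D @ x_true

x_hat = omp(D, y, k)
print("recovered support:", np.flatnonzero(x_hat))
```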

  19. Aircraft Fault Detection Using Real-Time Frequency Response Estimation

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.

    2016-01-01

    A real-time method for estimating time-varying aircraft frequency responses from input and output measurements was demonstrated. The Bat-4 subscale airplane was used with NASA Langley Research Center's AirSTAR unmanned aerial flight test facility to conduct flight tests and collect data for dynamic modeling. Orthogonal phase-optimized multisine inputs, summed with pilot stick and pedal inputs, were used to excite the responses. The aircraft was tested in its normal configuration and with emulated failures, which included a stuck left ruddervator and an increased command path latency. No prior knowledge of a dynamic model was used or available for the estimation. The longitudinal short period dynamics were investigated in this work. Time-varying frequency responses and stability margins were tracked well using a 20 second sliding window of data, as compared to a post-flight analysis using output error parameter estimation and a low-order equivalent system model. This method could be used in a real-time fault detection system, or for other applications of dynamic modeling such as real-time verification of stability margins during envelope expansion tests.
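
    A minimal version of the windowed frequency-response idea, with a one-pole low-pass filter standing in for the aircraft dynamics: excite with a multisine, keep the most recent 20 s window, and estimate H(f) at the excitation lines as the ratio of output to input DFT coefficients. The sample rate and excitation frequencies are illustrative and chosen to fall exactly on DFT bins of the window.

```python
import numpy as np

rng = np.random.default_rng(3)
fs, T = 100.0, 60.0
t = np.arange(0, T, 1 / fs)

# Multisine excitation: a sum of sinusoids at distinct frequencies with
# randomized phases (a stand-in for the orthogonal phase-optimized inputs).
freqs = np.array([0.5, 1.0, 1.5, 2.0, 3.0])  # Hz
u = sum(np.sin(2 * np.pi * f * t + p)
        for f, p in zip(freqs, rng.uniform(0, 2 * np.pi, freqs.size)))

# "Aircraft" stand-in: a one-pole low-pass, y[k] = a*y[k-1] + b*u[k]
a, b = 0.9, 0.1
y = np.zeros_like(u)
for k in range(1, len(u)):
    y[k] = a * y[k - 1] + b * u[k]

# Sliding 20 s window: estimate H at the excitation lines as Y(f)/U(f)
win = int(20 * fs)
fft_f = np.fft.rfftfreq(win, 1 / fs)
U, Y = np.fft.rfft(u[-win:]), np.fft.rfft(y[-win:])
idx = [int(np.argmin(np.abs(fft_f - f))) for f in freqs]
H_est = Y[idx] / U[idx]

# True frequency response of the one-pole filter at those frequencies
w = 2 * np.pi * freqs / fs
H_true = b / (1 - a * np.exp(-1j * w))
print(np.abs(H_est), np.abs(H_true))
```

    In a real-time setting the same ratio would be recomputed as the window slides, giving time-varying gain and phase (and hence stability margins) without a prior dynamic model.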

  20. Semiblind channel estimation for MIMO-OFDM systems

    NASA Astrophysics Data System (ADS)

    Chen, Yi-Sheng; Song, Jyu-Han

    2012-12-01

    This article proposes a semiblind channel estimation method for multiple-input multiple-output orthogonal frequency-division multiplexing systems based on circular precoding. Relying on the precoding scheme at the transmitters, the autocorrelation matrix of the received data induces a structure relating the outer product of the channel frequency response matrix and precoding coefficients. This structure makes it possible to extract information about channel product matrices, which can be used to form a Hermitian matrix whose positive eigenvalues and corresponding eigenvectors yield the channel impulse response matrix. This article also tests the resistance of the precoding design to finite-sample estimation errors, and explores the effects of the precoding scheme on channel equalization by performing pairwise error probability analysis. The proposed method is immune to channel zero locations, and is reasonably robust to channel order overestimation. The proposed method is applicable to the scenarios in which the number of transmitters exceeds that of the receivers. Simulation results demonstrate the performance of the proposed method and compare it with some existing methods.

  1. Estimation of seismic attenuation in carbonate rocks using three different methods: Application on VSP data from Abu Dhabi oilfield

    NASA Astrophysics Data System (ADS)

    Bouchaala, F.; Ali, M. Y.; Matsushima, J.

    2016-06-01

    In this study a relationship between the seismic wavelength and the scale of heterogeneity in the propagating medium has been examined. The relationship estimates the size of heterogeneity that significantly affects the wave propagation at a specific frequency, and enables a decrease in the calculation time of wave scattering estimation. The relationship was applied in analyzing synthetic and Vertical Seismic Profiling (VSP) data obtained from an onshore oilfield in the Emirate of Abu Dhabi, United Arab Emirates. Prior to estimation of the attenuation, a robust processing workflow was applied to both synthetic and recorded data to increase the Signal-to-Noise Ratio (SNR). Two conventional methods, the spectral ratio and centroid frequency shift methods, were applied to estimate the attenuation from the extracted seismic waveforms, in addition to a new method based on seismic interferometry. The attenuation profiles derived from the three approaches demonstrated similar variation; however, the interferometry method yielded greater depth resolution, with some differences in attenuation magnitude. Furthermore, the attenuation profiles revealed a significant contribution of scattering to seismic wave attenuation. The results obtained from the seismic interferometry method show that the estimated scattering attenuation ranges from 0 to 0.1, while the estimated intrinsic attenuation can reach 0.2. The subsurface of the studied zones is known to be highly porous and permeable, which suggests that the mechanism of the intrinsic attenuation is probably the interaction between pore fluids and solids.
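
    The spectral ratio method mentioned above has a compact form: for waves recorded at two depths separated by traveltime dt, attenuation gives ln(A2/A1) = const - pi*f*dt/Q, so Q follows from the slope of a line fit over the usable band. A sketch with synthetic spectra (the Q value, traveltime, and spectral shape are illustrative):

```python
import numpy as np

# Usable frequency band and an illustrative interval Q and traveltime
f = np.linspace(10.0, 80.0, 50)              # Hz
Q_true, dt = 40.0, 0.05                      # interval Q, traveltime (s)

A1 = np.exp(-0.01 * f)                       # reference spectrum (arbitrary shape)
A2 = A1 * 0.8 * np.exp(-np.pi * f * dt / Q_true)  # attenuated + frequency-independent factor

# Slope of the log spectral ratio gives -pi*dt/Q; the intercept absorbs
# frequency-independent effects such as geometrical spreading.
slope, intercept = np.polyfit(f, np.log(A2 / A1), 1)
Q_est = -np.pi * dt / slope
print(f"estimated Q = {Q_est:.1f}")
```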

  2. On impedance measurement of reinforced concrete on the surface for estimate of corroded rebar

    NASA Astrophysics Data System (ADS)

    Sasamoto, Akira; Yu, Jun; Harada, Yoshihisa; Iwata, Masahiro; Noguchi, Kazuhiro

    2017-04-01

    In health monitoring of reinforced concrete, the degree of rebar corrosion is an important parameter but is not easy to estimate by nondestructive testing. A few test methods, such as the half-cell method or the polarization resistance method, could be 'perfect' nondestructive methods if a wired connection to the rebar happened to be available without damaging the target concrete. This presentation reports experimental results showing that an impedance measurement on the surface of reinforced concrete can distinguish corroded rebar from healthy rebar. The contact electrodes on the concrete surface are of simple construction, made of urethane sponge and needles. Impedance measurements are carried out with a frequency response analyzer over a frequency range from 0.01 Hz to 1 MHz, with a typical imposed voltage amplitude of 10 V. We made concrete specimens using two different corrosion processes: in one (pre-corrosion), the rebars were corroded by electrolysis in salt water before the concrete was cast; in the other (post-corrosion), the concrete specimens were corroded during curing. Applying the developed method to these corroded specimens shows that it is useful for estimating the corrosion level of rebars.

  3. Carrier Estimation Using Classic Spectral Estimation Techniques for the Proposed Demand Assignment Multiple Access Service

    NASA Technical Reports Server (NTRS)

    Scaife, Bradley James

    1999-01-01

    In any satellite communication, the Doppler shift associated with the satellite's position and velocity must be calculated in order to determine the carrier frequency. If the satellite state vector is unknown then some estimate must be formed of the Doppler-shifted carrier frequency. One elementary technique is to examine the signal spectrum and base the estimate on the dominant spectral component. If, however, the carrier is spread (as in most satellite communications) this technique may fail unless the chip rate-to-data rate ratio (processing gain) associated with the carrier is small. In this case, there may be enough spectral energy to allow peak detection against a noise background. In this thesis, we present a method to estimate the frequency (without knowledge of the Doppler shift) of a spread-spectrum carrier assuming a small processing gain and binary-phase shift keying (BPSK) modulation. Our method relies on an averaged discrete Fourier transform along with peak detection on spectral match filtered data. We provide theory and simulation results indicating the accuracy of this method. In addition, we will describe an all-digital hardware design based around a Motorola DSP56303 and high-speed A/D which implements this technique in real-time. The hardware design is to be used in NMSU's implementation of NASA's demand assignment, multiple access (DAMA) service.
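
    The averaged-DFT-plus-peak-detection idea can be sketched as follows. The carrier frequency, segment length, and noise level are illustrative, and no despreading or spectral match filtering is modeled: averaging the magnitude-squared spectra of many short segments flattens the noise floor so the carrier line stands out.

```python
import numpy as np

rng = np.random.default_rng(4)
fs, n_seg, seg_len = 1e4, 64, 512
f_carrier = 1234.0  # Hz: the unknown (Doppler-shifted) carrier, illustrative

# Noisy carrier observation
t = np.arange(n_seg * seg_len) / fs
x = np.cos(2 * np.pi * f_carrier * t) + 3.0 * rng.normal(size=t.size)

# Averaged periodogram over Hann-windowed segments
spec = np.zeros(seg_len // 2 + 1)
for s in range(n_seg):
    seg = x[s * seg_len:(s + 1) * seg_len]
    spec += np.abs(np.fft.rfft(seg * np.hanning(seg_len))) ** 2
spec /= n_seg

freqs = np.fft.rfftfreq(seg_len, 1 / fs)
f_hat = freqs[np.argmax(spec)]   # peak detection on the averaged spectrum
print(f"estimated carrier: {f_hat:.1f} Hz (bin width {fs / seg_len:.1f} Hz)")
```

    The estimate is quantized to the DFT bin width; a longer segment (or interpolation around the peak) trades averaging gain for finer frequency resolution.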

  4. Methods for estimating magnitude and frequency of peak flows for natural streams in Utah

    USGS Publications Warehouse

    Kenney, Terry A.; Wilkowske, Chris D.; Wright, Shane J.

    2007-01-01

    Estimates of the magnitude and frequency of peak streamflows are critical for the safe and cost-effective design of hydraulic structures and stream crossings, and for accurate delineation of flood plains. Engineers, planners, resource managers, and scientists need accurate estimates of peak-flow return frequencies for locations on streams with and without streamflow-gaging stations. The 2-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-year recurrence-interval flows were estimated for 344 unregulated U.S. Geological Survey streamflow-gaging stations in Utah and at nearby sites in bordering states. These data, along with 23 basin and climatic characteristics computed for each station, were used to develop regional peak-flow frequency and magnitude regression equations for 7 geohydrologic regions of Utah. These regression equations can be used to estimate the magnitude and frequency of peak flows for natural streams in Utah within the presented range of predictor variables. Uncertainty, presented as the average standard error of prediction, was computed for each developed equation. Equations developed using data from more than 35 gaging stations had standard errors of prediction that ranged from 35 to 108 percent, and errors for equations developed using data from fewer than 35 gaging stations ranged from 50 to 357 percent.

  5. A new approach for continuous estimation of baseflow using discrete water quality data: Method description and comparison with baseflow estimates from two existing approaches

    USGS Publications Warehouse

    Miller, Matthew P.; Johnson, Henry M.; Susong, David D.; Wolock, David M.

    2015-01-01

    Understanding how watershed characteristics and climate influence the baseflow component of stream discharge is a topic of interest to both the scientific and water management communities. Therefore, the development of baseflow estimation methods is a topic of active research. Previous studies have demonstrated that graphical hydrograph separation (GHS) and conductivity mass balance (CMB) methods can be applied to stream discharge data to estimate daily baseflow. While CMB is generally considered to be a more objective approach than GHS, its application across broad spatial scales is limited by a lack of high frequency specific conductance (SC) data. We propose a new method that uses discrete SC data, which are widely available, to estimate baseflow at a daily time step using the CMB method. The proposed approach involves the development of regression models that relate discrete SC concentrations to stream discharge and time. Regression-derived CMB baseflow estimates were more similar to baseflow estimates obtained using a CMB approach with measured high frequency SC data than were the GHS baseflow estimates at twelve snowmelt dominated streams and rivers. There was a near perfect fit between the regression-derived and measured CMB baseflow estimates at sites where the regression models were able to accurately predict daily SC concentrations. We propose that the regression-derived approach could be applied to estimate baseflow at large numbers of sites, thereby enabling future investigations of watershed and climatic characteristics that influence the baseflow component of stream discharge across large spatial scales.
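
    A toy version of the regression-derived conductivity-mass-balance (CMB) approach, with invented discharge and specific-conductance (SC) records and end-members picked naively from the predicted SC range; the actual method's regression form and end-member selection differ. The idea is: fit a regression to sparse SC samples, predict SC daily, then partition discharge by mass balance.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic daily record: discharge with a snowmelt pulse, and SC inversely
# related to discharge (dilution), as is typical.
days = np.arange(365)
Q = 5 + 40 * np.exp(-0.5 * ((days - 150) / 25.0) ** 2) + rng.gamma(2, 0.5, 365)
SC = 600 - 120 * np.log(Q) + rng.normal(0, 10, 365)   # hypothetical SC record

# "Discrete" samples: SC measured only every 30 days
idx = days[::30]
X = np.column_stack([np.ones(idx.size), np.log(Q[idx]),
                     np.sin(2 * np.pi * idx / 365), np.cos(2 * np.pi * idx / 365)])
beta, *_ = np.linalg.lstsq(X, SC[idx], rcond=None)

# Predict daily SC from the regression, then apply conductivity mass balance:
# baseflow = Q * (SC - SC_runoff) / (SC_baseflow - SC_runoff)
Xd = np.column_stack([np.ones(365), np.log(Q),
                      np.sin(2 * np.pi * days / 365), np.cos(2 * np.pi * days / 365)])
SC_daily = Xd @ beta
SC_bf, SC_ro = SC_daily.max(), SC_daily.min()   # naive end-member conductances
baseflow = np.clip(Q * (SC_daily - SC_ro) / (SC_bf - SC_ro), 0, Q)

bfi = baseflow.sum() / Q.sum()                  # baseflow index
print(f"baseflow index: {bfi:.2f}")
```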

  6. An Evaluation Method of Words Tendency Depending on Time-Series Variation and Its Improvements.

    ERIC Educational Resources Information Center

    Atlam, El-Sayed; Okada, Makoto; Shishibori, Masami; Aoe, Jun-ichi

    2002-01-01

    Discussion of word frequency and keywords in text focuses on a method to estimate automatically the stability classes that indicate a word's popularity with time-series variations based on the frequency change in past electronic text data. Compares the evaluation of decision tree stability class results with manual classification results.…

  7. Towards a systematic approach to comparing distributions used in flood frequency analysis

    NASA Astrophysics Data System (ADS)

    Bobée, B.; Cavadias, G.; Ashkar, F.; Bernier, J.; Rasmussen, P.

    1993-02-01

    The estimation of flood quantiles from available streamflow records has been a topic of extensive research in this century. However, the large number of distributions and estimation methods proposed in the scientific literature has led to a state of confusion, and a gap prevails between theory and practice. This concerns both at-site and regional flood frequency estimation. To facilitate the work of "hydrologists, designers of hydraulic structures, irrigation engineers and planners of water resources", the World Meteorological Organization recently published a report which surveys and compares current methodologies, and recommends a number of statistical distributions and estimation procedures. This report is an important step towards the clarification of this difficult topic, but we think that it does not effectively satisfy the needs of practitioners as intended, because it contains some statements which are not statistically justified and which require further discussion. In the present paper we review commonly used procedures for flood frequency estimation, point out some of the reasons for the present state of confusion concerning the advantages and disadvantages of the various methods, and propose the broad lines of a possible comparison strategy. We recommend that the results of such comparisons be discussed in an international forum of experts, with the purpose of attaining a more coherent and broadly accepted strategy for estimating floods.

  8. Time-Frequency Distribution of Seismocardiographic Signals: A Comparative Study

    PubMed Central

    Taebi, Amirtaha; Mansy, Hansen A.

    2017-01-01

    Accurate estimation of seismocardiographic (SCG) signal features can help successful signal characterization and classification in health and disease. This may lead to new methods for diagnosing and monitoring heart function. Time-frequency distributions (TFD) were often used to estimate the spectrotemporal signal features. In this study, the performance of different TFDs (e.g., short-time Fourier transform (STFT), polynomial chirplet transform (PCT), and continuous wavelet transform (CWT) with different mother functions) was assessed using simulated signals, and then utilized to analyze actual SCGs. The instantaneous frequency (IF) was determined from TFD and the error in estimating IF was calculated for simulated signals. Results suggested that the lowest IF error depended on the TFD and the test signal. STFT had lower error than CWT methods for most test signals. For a simulated SCG, Morlet CWT more accurately estimated IF than other CWTs, but Morlet did not provide noticeable advantages over STFT or PCT. PCT had the most consistently accurate IF estimations and appeared more suited for estimating IF of actual SCG signals. PCT analysis showed that actual SCGs from eight healthy subjects had multiple spectral peaks at 9.20 ± 0.48, 25.84 ± 0.77, 50.71 ± 1.83 Hz (mean ± SEM). These may prove useful features for SCG characterization and classification. PMID:28952511
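
    Ridge-based IF estimation from an STFT, the simplest of the distributions compared above, can be sketched with a linear chirp whose instantaneous frequency is known exactly, so the estimation error can be measured directly. Sample rate and window length are illustrative.

```python
import numpy as np

fs = 1000.0
t = np.arange(0, 2.0, 1 / fs)

# Linear chirp sweeping 50 -> 150 Hz; its instantaneous frequency (IF)
# is known in closed form, so the STFT ridge can be checked against it.
f0, rate = 50.0, 50.0
x = np.cos(2 * np.pi * (f0 * t + 0.5 * rate * t ** 2))
if_true = f0 + rate * t

win, hop = 256, 32
window = np.hanning(win)
freqs = np.fft.rfftfreq(win, 1 / fs)

centers, if_est = [], []
for start in range(0, len(x) - win, hop):
    seg = x[start:start + win] * window
    S = np.abs(np.fft.rfft(seg))
    centers.append(start + win // 2)
    if_est.append(freqs[np.argmax(S)])   # ridge: peak frequency of each frame

if_est = np.array(if_est)
err = np.abs(if_est - if_true[np.array(centers)])
print(f"mean IF error: {err.mean():.2f} Hz (bin width {fs / win:.2f} Hz)")
```

    The error here is dominated by bin quantization; the chirplet and wavelet transforms compared in the paper trade off this resolution differently across time and frequency.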

  9. Modal vector estimation for closely spaced frequency modes

    NASA Technical Reports Server (NTRS)

    Craig, R. R., Jr.; Chung, Y. T.; Blair, M.

    1982-01-01

    Techniques for obtaining improved modal vector estimates for systems with closely spaced frequency modes are discussed. In describing the dynamical behavior of a complex structure, modal parameters are often analyzed: undamped natural frequency, mode shape, modal mass, modal stiffness, and modal damping. From both an analytical standpoint and an experimental standpoint, identification of modal parameters is more difficult if the system has repeated frequencies or even closely spaced frequencies. The more complex the structure, the more likely it is to have closely spaced frequencies. This makes it difficult to determine valid mode shapes using single shaker test methods. By employing band selectable analysis (zoom) techniques and by employing Kennedy-Pancu circle fitting or some multiple degree of freedom (MDOF) curve fit procedure, the usefulness of the single shaker approach can be extended.

  10. Micro-Doppler Signal Time-Frequency Algorithm Based on STFRFT.

    PubMed

    Pang, Cunsuo; Han, Yan; Hou, Huiling; Liu, Shengheng; Zhang, Nan

    2016-09-24

    This paper proposes a time-frequency algorithm based on short-time fractional order Fourier transformation (STFRFT) for the identification of complicated moving targets. This algorithm, consisting of an STFRFT order-changing and quick selection method, is effective in reducing the computational load. A multi-order STFRFT time-frequency algorithm is also developed that makes use of the time-frequency feature of each micro-Doppler component signal. This algorithm improves the estimation accuracy of time-frequency curve fitting through multi-order matching. Finally, experimental data were used to demonstrate the STFRFT's performance in micro-Doppler time-frequency analysis. The results validated the higher estimation accuracy of the proposed algorithm. It may be applied to linear frequency modulated (LFM) pulse radar, synthetic aperture radar (SAR), or inverse synthetic aperture radar (ISAR) to improve the probability of target recognition.

  11. A physically based analytical model of flood frequency curves

    NASA Astrophysics Data System (ADS)

    Basso, S.; Schirmer, M.; Botter, G.

    2016-09-01

    Predicting magnitude and frequency of floods is a key issue in hydrology, with implications in many fields ranging from river science and geomorphology to the insurance industry. In this paper, a novel physically based approach is proposed to estimate the recurrence intervals of seasonal flow maxima. The method links the extremal distribution of streamflows to the stochastic dynamics of daily discharge, providing an analytical expression of the seasonal flood frequency curve. The parameters involved in the formulation embody climate and landscape attributes of the contributing catchment and can be estimated from daily rainfall and streamflow data. Only one parameter, which is linked to the antecedent wetness condition in the watershed, needs to be calibrated on the observed maxima. The performance of the method is discussed through a set of applications in four rivers featuring heterogeneous daily flow regimes. The model provides reliable estimates of seasonal maximum flows in different climatic settings and is able to capture diverse shapes of flood frequency curves emerging in erratic and persistent flow regimes. The proposed method exploits experimental information on the full range of discharges experienced by rivers. As a consequence, model performances do not deteriorate when the magnitude of events with return times longer than the available sample size is estimated. The approach provides a framework for the prediction of floods based on short data series of rainfall and daily streamflows that may be especially valuable in data scarce regions of the world.

  12. Adaptive Kalman filter based on variance component estimation for the prediction of ionospheric delay in aiding the cycle slip repair of GNSS triple-frequency signals

    NASA Astrophysics Data System (ADS)

    Chang, Guobin; Xu, Tianhe; Yao, Yifei; Wang, Qianxin

    2018-01-01

    In order to incorporate the time smoothness of ionospheric delay to aid cycle slip detection, an adaptive Kalman filter is developed based on variance component estimation. The correlations between measurements at neighboring epochs are fully considered in developing a filtering algorithm for colored measurement noise. Within this filtering framework, epoch-differenced ionospheric delays are predicted. Using this prediction, the potential cycle slips are repaired for triple-frequency signals of global navigation satellite systems. Cycle slips are repaired in a stepwise manner; i.e., for two extra wide lane combinations first and then for the third frequency. In the estimation for the third frequency, a stochastic model is followed in which the correlations between the ionospheric delay prediction errors and the errors in the epoch-differenced phase measurements are considered. The implementation details of the proposed method are tabulated. A real BeiDou Navigation Satellite System data set is used to check the performance of the proposed method. Most cycle slips, whether trivial or nontrivial, can be estimated as float values with satisfactorily high accuracy, and their integer values can hence be correctly obtained by simple rounding. To be more specific, all manually introduced nontrivial cycle slips are correctly repaired.

  13. The Ecological Genetics of Introduced Populations of the Giant Toad BUFO MARINUS. II. Effective Population Size

    PubMed Central

    Easteal, Simon

    1985-01-01

    The allele frequencies are described at ten polymorphic enzyme loci (of a total of 22 loci sampled) in 15 populations of the neotropical giant toad, Bufo marinus, introduced to Hawaii and Australia in the 1930s. The history of establishment of the ten populations is described and used as a framework for the analysis of allele frequency variances. The variances are used to determine the effective sizes of the populations. The estimates obtained (390 and 346) are reasonably precise, homogeneous between localities and much smaller than estimates of neighborhood size obtained previously using ecological methods. This discrepancy is discussed, and it is concluded that the estimates obtained here using genetic methods are the more reliable. PMID:3922852

  14. The use of the multiwavelet transform for the estimation of surface wave group and phase velocities and their associated uncertainties

    NASA Astrophysics Data System (ADS)

    Poppeliers, C.; Preston, L. A.

    2017-12-01

    Measurements of seismic surface wave dispersion can be used to infer the structure of the Earth's subsurface. Typically, to identify group- and phase-velocity, a series of narrow-band filters is applied to surface wave seismograms. Frequency dependent arrival times of surface waves can then be identified from the resulting suite of narrow band seismograms. The frequency-dependent velocity estimates are then inverted for subsurface velocity structure. However, this technique has no method to estimate the uncertainty of the measured surface wave velocities, and consequently there is no estimate of uncertainty on, for example, tomographic results. For the work here, we explore using the multiwavelet transform (MWT) as an alternate method to estimate surface wave speeds. The MWT decomposes a signal similarly to the conventional filter bank technique, but with two primary advantages: 1) the time-frequency localization is optimized in regard to the time-frequency tradeoff, and 2) we can use the MWT to estimate the uncertainty of the resulting surface wave group- and phase-velocities. The uncertainties of the surface wave speed measurements can then be propagated into tomographic inversions to provide uncertainties of resolved Earth structure. As proof-of-concept, we apply our technique to four seismic ambient noise correlograms that were collected from the University of Nevada Reno seismic network near the Nevada National Security Site. We invert the estimated group- and phase-velocities, as well as the uncertainties, for 1-D Earth structure for each station pair. These preliminary results generally agree with 1-D velocities that are obtained from inverting dispersion curves estimated from a conventional Gaussian filter bank.

  15. Determination of low-frequency normal modes and structure coefficients using optimal sequence stacking method and autoregressive method in frequency domain

    NASA Astrophysics Data System (ADS)

    Majstorovic, J.; Rosat, S.; Lambotte, S.; Rogister, Y. J. G.

    2017-12-01

    Although there are numerous studies of 3D density Earth models, building an accurate one is still an engaging challenge. One procedure to refine global 3D Earth density models is based on unambiguous measurements of Earth's normal mode eigenfrequencies. To obtain unbiased eigenfrequency measurements, one needs to deal with a variety of time-record qualities and especially with different noise sources, whereas standard approaches usually rely on signal processing methods such as the Fourier transform. Here we present estimates of the complex eigenfrequencies and structure coefficients for several modes below 1 mHz (0S2, 2S1, etc.). Our analysis is performed in three steps. The first step includes the use of stacking methods to enhance specific modes of interest above the observed noise level. Of the three approaches tested, optimal sequence estimation performed best, outperforming the spherical harmonic stacking method and the receiver strip method. In the second step we apply an autoregressive method in the frequency domain to estimate complex eigenfrequencies of target modes. In the third step we apply the phasor walkout method to test and confirm our eigenfrequencies. Before conducting an analysis of time records, we evaluate how the station distribution and noise levels impact the estimates of eigenfrequencies and structure coefficients by using synthetic seismograms calculated for a 3D realistic Earth model, which includes Earth's ellipticity and lateral heterogeneity. Synthetic seismograms are computed by means of normal mode summation using self-coupling and cross-coupling of modes up to 1 mHz. Eventually, the methods tested on synthetic data are applied to long-period seismometer and superconducting gravimeter data recorded after six mega-earthquakes of magnitude greater than 8.3. Hence, we propose new estimates of the structure coefficients, which depend on the density variations.

  16. Improved Goldstein Interferogram Filter Based on Local Fringe Frequency Estimation.

    PubMed

    Feng, Qingqing; Xu, Huaping; Wu, Zhefeng; You, Yanan; Liu, Wei; Ge, Shiqi

    2016-11-23

    The quality of an interferogram, which is limited by various sources of phase noise, greatly affects subsequent interferometric SAR (InSAR) processing steps such as phase unwrapping. For InSAR geophysical measurements, such as height or displacement, phase filtering is therefore an essential step. In this work, an improved Goldstein interferogram filter is proposed to suppress the phase noise while preserving the fringe edges. First, the proposed adaptive filter step, performed before frequency estimation, is employed to improve the estimation accuracy. Subsequently, to preserve the fringe characteristics, the estimated fringe frequency in each fixed filtering patch is removed from the original noisy phase. Then, the residual phase is smoothed based on the modified Goldstein filter with its parameter alpha dependent on both the coherence map and the residual phase frequency. Finally, the filtered residual phase and the removed fringe frequency are combined to generate the filtered interferogram, with the loss of signal minimized while reducing the noise level. The effectiveness of the proposed method is verified by experimental results based on both simulated and real data.
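
    A single-patch sketch of the unmodified Goldstein filter that this paper builds on (not the improved variant): weight the patch spectrum of the complex interferogram by its normalized magnitude raised to a power alpha, so dominant fringe components are kept while spread-out noise is attenuated. The fringe pattern and noise level are synthetic.

```python
import numpy as np

rng = np.random.default_rng(6)

def goldstein_patch(phase, alpha=0.8):
    """Goldstein-style filtering of one interferogram patch:
    weight the patch spectrum by its normalized magnitude ** alpha."""
    z = np.exp(1j * phase)                 # complex interferogram patch
    Z = np.fft.fft2(z)
    H = (np.abs(Z) / np.abs(Z).max()) ** alpha
    return np.angle(np.fft.ifft2(H * Z))

# Synthetic 64x64 patch: smooth linear fringes plus phase noise
n = 64
yy, xx = np.mgrid[0:n, 0:n]
clean = 0.2 * xx + 0.1 * yy               # fringe pattern (radians, unwrapped)
noisy = np.angle(np.exp(1j * (clean + 0.8 * rng.normal(size=(n, n)))))

filtered = goldstein_patch(noisy)

def mean_err(p):
    # mean absolute wrapped phase error against the clean fringes
    return np.abs(np.angle(np.exp(1j * (p - clean)))).mean()

print(f"mean phase error: noisy {mean_err(noisy):.3f} rad, "
      f"filtered {mean_err(filtered):.3f} rad")
```

    In practice the filter is applied on overlapping patches with tapered windows, and the improved method above additionally removes the local fringe frequency before smoothing.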


  18. Determining XV-15 aeroelastic modes from flight data with frequency-domain methods

    NASA Technical Reports Server (NTRS)

    Acree, C. W., Jr.; Tischler, Mark B.

    1993-01-01

    The XV-15 tilt-rotor wing has six major aeroelastic modes that are close in frequency. To precisely excite individual modes during flight test, dual flaperon exciters with automatic frequency-sweep controls were installed. The resulting structural data were analyzed in the frequency domain (Fourier transformed). All spectral data were computed using chirp z-transforms. Modal frequencies and damping were determined by fitting curves to frequency-response magnitude and phase data. The results given in this report are for the XV-15 with its original metal rotor blades. Also, frequency and damping values are compared with theoretical predictions made using two different programs, CAMRAD and ASAP. The frequency-domain data-analysis method proved to be very reliable and adequate for tracking aeroelastic modes during flight-envelope expansion. This approach required less flight-test time and yielded mode estimations that were more repeatable, compared with the exponential-decay method previously used.

  19. Proof-of-principle Experiment of a Ferroelectric Tuner for the 1.3 GHz Cavity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choi,E.M.; Hahn, H.; Shchelkunov, S. V.

    2009-01-01

    A novel tuner has been developed by the Omega-P company to achieve fast control of the accelerator RF cavity frequency. The tuner is based on a ferroelectric material whose dielectric constant varies as a function of the applied voltage. Tests using a Brookhaven National Laboratory (BNL) 1.3 GHz electron gun cavity have been carried out as a proof-of-principle experiment for the ferroelectric tuner (FT). Two different methods were used to determine the frequency change achieved with the tuner. The first method is based on an S11 measurement at the tuner port to find the change in reactive impedance when the voltage is applied; the reactive impedance change is then used to estimate the cavity frequency shift. The second method is a direct S21 measurement of the frequency shift in the cavity with the tuner connected. The frequency change estimated from the reactive impedance measurement at 5 kV is in the range between 3.2 kHz and 14 kHz, while the direct measurement gives 9 kHz; the two methods are in reasonable agreement. A detailed description of the experiment and the analysis is given in the paper.

  20. Time Series Decomposition into Oscillation Components and Phase Estimation.

    PubMed

    Matsuda, Takeru; Komaki, Fumiyasu

    2017-02-01

    Many time series are naturally considered as a superposition of several oscillation components. For example, electroencephalogram (EEG) time series include oscillation components such as alpha, beta, and gamma. We propose a method for decomposing time series into such oscillation components using state-space models. Based on the concept of random frequency modulation, Gaussian linear state-space models for oscillation components are developed; in this model, the frequency of an oscillator fluctuates by noise. Time series decomposition is accomplished with this model in a manner analogous to the Bayesian seasonal adjustment method. Since the model parameters are estimated from data by the empirical Bayes method, the amplitudes and frequencies of the oscillation components are determined in a data-driven manner. The appropriate number of oscillation components is also determined with the Akaike information criterion (AIC). In this way, the proposed method provides a natural decomposition of the given time series into oscillation components. In neuroscience, the phase of a neural time series plays an important role in neural information processing. The proposed method can be used to estimate the phase of each oscillation component and has several advantages over a conventional method based on the Hilbert transform. Thus, the proposed method enables an investigation of the phase dynamics of time series. Numerical results show that the proposed method succeeds in extracting intermittent oscillations such as ripples and in detecting phase-reset phenomena. We apply the proposed method to real data from fields such as astronomy, ecology, tidology, and neuroscience.
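    The random-frequency-modulation idea can be sketched generatively: a damped two-dimensional rotation whose rotation angle is perturbed by noise at every step. This minimal Python sketch only simulates such a component; all parameter values are hypothetical, and the paper's state-space estimation (Kalman filtering plus empirical Bayes) is not reproduced here.

```python
import math, random

def oscillator_series(n, freq, damp=0.98, freq_sd=0.02,
                      state_sd=0.05, obs_sd=0.1, seed=1):
    """Simulate one oscillation component: a damped 2-d rotation whose
    angle (instantaneous frequency) is perturbed by noise at each step."""
    rng = random.Random(seed)
    x, y = 1.0, 0.0                         # latent oscillator state
    out = []
    for _ in range(n):
        theta = 2 * math.pi * freq + rng.gauss(0.0, freq_sd)
        c, s = math.cos(theta), math.sin(theta)
        x, y = (damp * (c * x - s * y) + rng.gauss(0.0, state_sd),
                damp * (s * x + c * y) + rng.gauss(0.0, state_sd))
        out.append(x + rng.gauss(0.0, obs_sd))   # noisy observation
    return out

series = oscillator_series(500, freq=0.05)
print(len(series))   # 500
```

    Fitting the damping, frequency and noise variances of several such components to data, and selecting their number by AIC, is what the proposed method automates.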

  1. A method for estimating vertical distribution of the SAGE II opaque cloud frequency

    NASA Technical Reports Server (NTRS)

    Wang, Pi-Huan; Mccormick, M. Patrick; Minnis, Patrick; Kent, Geoffrey S.; Yue, Glenn K.; Skeens, Kristi M.

    1995-01-01

    A method is developed to infer the vertical distribution of the occurrence frequency of clouds that are opaque to the Stratospheric Aerosol and Gas Experiment (SAGE) II instrument. An application of the method to the 1986 SAGE II observations is included in this paper. The 1986 SAGE II results are compared with the 1952-1981 cloud climatology of Warren et al. (1986, 1988).

  2. A comparison of three approaches to non-stationary flood frequency analysis

    NASA Astrophysics Data System (ADS)

    Debele, S. E.; Strupczewski, W. G.; Bogdanowicz, E.

    2017-08-01

    Non-stationary flood frequency analysis (FFA) is applied to the statistical analysis of seasonal flow maxima from Polish and Norwegian catchments. Three non-stationary estimation methods, namely maximum likelihood (ML), two-stage (WLS/TS) and GAMLSS (generalized additive model for location, scale and shape parameters), are compared in the context of capturing the effect of non-stationarity on the estimation of time-dependent moments and design quantiles. A multimodel approach is recommended to reduce quantile errors due to model misspecification. The results of calculations based on observed seasonal daily flow maxima and computer simulation experiments showed that GAMLSS gave the best results with respect to the relative bias and root mean square error in the estimates of the trend in the standard deviation and the constant shape parameter, while WLS/TS provided better accuracy in the estimates of the trend in the mean value. Of the three compared methods, WLS/TS is recommended for dealing with non-stationarity in short time series. Some practical aspects of the GAMLSS package application are also presented. A detailed discussion of general issues related to the consequences of climate change in FFA is presented in the second part of the article, entitled "Around and about an application of the GAMLSS package in non-stationary flood frequency analysis".

  3. Modal identification of structures by a novel approach based on FDD-wavelet method

    NASA Astrophysics Data System (ADS)

    Tarinejad, Reza; Damadipour, Majid

    2014-02-01

    An important application of system identification in structural dynamics is the determination of natural frequencies, mode shapes and damping ratios during operation, which can then be used for calibrating numerical models. In this paper, the combination of two advanced Operational Modal Analysis (OMA) methods, Frequency Domain Decomposition (FDD) and the Continuous Wavelet Transform (CWT), based on a novel cyclic averaging of correlation functions (CACF) technique, is used for identification of dynamic properties. With this technique, the autocorrelation of averaged correlation functions is used instead of the original signals. The integration of the FDD and CWT methods overcomes their individual deficiencies and exploits the unique capabilities of each. The FDD method accurately estimates the natural frequencies and mode shapes of structures in the frequency domain. The CWT method, in turn, operates in the time-frequency domain, decomposing a signal at different frequencies and determining the damping coefficients. In this paper, a new formulation applied to the wavelet transform of the averaged correlation function of an ambient response is proposed. This approach enables accurate estimation of damping ratios from weak (ambient noise) or strong (earthquake) vibrations and from long- or short-duration records. For this purpose, the modified Morlet wavelet, which has two free parameters, is used. The optimum values of these two parameters are obtained by employing a technique that minimizes the entropy of the wavelet coefficient matrix. The capabilities of the novel FDD-wavelet method in the system identification of various dynamic systems with regular or irregular distributions of mass and stiffness are illustrated. This combined approach is superior to classic methods and yields results that agree well with the exact solutions of the numerical models.

  4. Brillouin Scattering Spectrum Analysis Based on Auto-Regressive Spectral Estimation

    NASA Astrophysics Data System (ADS)

    Huang, Mengyun; Li, Wei; Liu, Zhangyun; Cheng, Linghao; Guan, Bai-Ou

    2018-06-01

    Auto-regressive (AR) spectral estimation is proposed to analyze the Brillouin scattering spectrum in Brillouin optical time-domain reflectometry. It is shown that the AR-based method can reliably estimate the Brillouin frequency shift with an accuracy much better than that of fast Fourier transform (FFT) based methods, provided the data length is not too short. It enables about a threefold improvement over FFT at a moderate spatial resolution.
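    A minimal Python sketch of Burg's AR estimation illustrates the core idea on a noisy sinusoid. An actual Brillouin gain spectrum is Lorentzian and would typically call for a higher model order, so the signal and the AR(2) order here are hypothetical.

```python
import cmath, math, random

def burg_ar(x, order):
    """Burg's method: AR coefficients minimizing the summed forward and
    backward prediction error, via reflection-coefficient recursion."""
    n = len(x)
    f, b = list(x), list(x)          # forward / backward prediction errors
    a = []
    for m in range(order):
        num = sum(f[t] * b[t - 1] for t in range(m + 1, n))
        den = sum(f[t] ** 2 + b[t - 1] ** 2 for t in range(m + 1, n))
        k = -2.0 * num / den         # reflection coefficient
        a = [a[i] + k * a[m - 1 - i] for i in range(m)] + [k]
        for t in range(n - 1, m, -1):        # update errors in place
            ft = f[t]
            f[t] = ft + k * b[t - 1]
            b[t] = b[t - 1] + k * ft
    return a

rng = random.Random(0)
x = [math.sin(2 * math.pi * 0.1 * t) + rng.gauss(0, 0.05) for t in range(200)]
a1, a2 = burg_ar(x, 2)
root = (-a1 + cmath.sqrt(a1 * a1 - 4 * a2)) / 2   # pole of 1 + a1/z + a2/z^2
f_est = abs(cmath.phase(root)) / (2 * math.pi)
print(round(f_est, 3))   # near the true normalized frequency 0.1
```

    The frequency is read off the pole angle of the fitted AR polynomial; for Brillouin spectra the peak of the AR power spectrum would play the same role.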

  5. Waveform inversion in the frequency domain for the simultaneous determination of earthquake source mechanism and moment function

    NASA Astrophysics Data System (ADS)

    Nakano, M.; Kumagai, H.; Inoue, H.

    2008-06-01

    We propose a method of waveform inversion to rapidly and routinely estimate both the moment function and the centroid moment tensor (CMT) of an earthquake. In this method, waveform inversion is carried out in the frequency domain to obtain the moment function more rapidly than when solved in the time domain. We assume a pure double-couple source mechanism in order to stabilize the solution when using data from a small number of seismic stations. The fault and slip orientations are estimated by a grid search with respect to the strike, dip and rake angles. The moment function in the time domain is obtained from the inverse Fourier transform of the frequency components determined by the inversion. Since observed waveforms used for the inversion are limited in a particular frequency band, the estimated moment function is a bandpassed form. We develop a practical approach to estimate the deconvolved form of the moment function, from which we can reconstruct detailed rupture history and the seismic moment. The source location is determined by a spatial grid search using adaptive grid spacings, which are gradually decreased in each step of the search. We apply this method to two events that occurred in Indonesia by using data from a broad-band seismic network in Indonesia (JISNET): one northeast of Sulawesi (Mw = 7.5) on 2007 January 21, and the other south of Java (Mw = 7.5) on 2006 July 17. The source centroid locations and mechanisms we estimated for both events are consistent with those determined by the Global CMT Project and the National Earthquake Information Center of the U.S. Geological Survey. The estimated rupture duration of the Sulawesi event is 16 s, which is comparable to a typical duration for earthquakes of this magnitude, while that of the Java event is anomalously long (176 s), suggesting that this event was a tsunami earthquake. 
Our application demonstrates that this inversion method has great potential for rapid and routine estimations of both the CMT and the moment function, and may be useful for identification of tsunami earthquakes.

  6. The Frequency of Alcohol Use in Iranian Urban Population: The Results of a National Network Scale Up Survey.

    PubMed

    Nikfarjam, Ali; Hajimaghsoudi, Saiedeh; Rastegari, Azam; Haghdoost, Ali Akbar; Nasehi, Abbas Ali; Memaryan, Nadereh; Tarjoman, Terme; Baneshi, Mohammad Reza

    2016-08-17

    In Islamic countries, alcohol consumption is considered contrary to religious values, so estimating the frequency of alcohol consumption with direct methods is prone to various biases. In this study, we indirectly estimated the frequency of alcohol use in Iran within the social networks of a representative sample, using the network scale-up (NSU) method. In a national survey, about 400 participants aged above 18 were recruited in each province, around 12 000 in total. In gender-matched face-to-face interviews, respondents were asked about the number of people in their active social network, classified by age and gender, who had used alcohol (even one episode) in the previous year. The results were corrected for the level of visibility of alcohol consumption. The relative frequency of alcohol use at least once in the previous year among the general population aged above 15 was estimated at 2.31% (95% CI: 2.12%, 2.53%). The relative frequency among males was about 8 times higher than among females (4.13% versus 0.56%). The relative frequency among those aged 18 to 30 was 3 times higher than among those aged above 30 (3.97% versus 1.36%). The relative frequency among males aged 18 to 30 was about 7%. The NSU appears to be a feasible method for monitoring the relative frequency of alcohol use in Iran, and possibly in countries with a similar culture. Alcohol use was lower than in non-Muslim countries; however, its relative frequency, particularly among young males, was noticeable. © 2017 The Author(s); Published by Kerman University of Medical Sciences. This is an open-access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
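    The basic NSU arithmetic can be sketched as follows; the respondent counts, network sizes, and visibility factor below are hypothetical, not the survey's values.

```python
def nsu_prevalence(known_users, network_sizes, visibility=1.0):
    """Basic network scale-up estimator: trait-positive people known to
    respondents, divided by the total network size, corrected for the
    fraction of cases actually visible to their contacts."""
    raw = sum(known_users) / sum(network_sizes)
    return raw / visibility

# hypothetical mini-survey: 5 respondents
known = [2, 0, 1, 3, 0]          # alcohol users each respondent reports knowing
nets  = [95, 110, 80, 120, 95]   # estimated active network sizes
print(round(100 * nsu_prevalence(known, nets, visibility=0.5), 2))  # percent
```

    Dividing the raw proportion by a visibility factor below 1 corrects for respondents not knowing about some contacts' alcohol use, which is the key adjustment in stigmatized-behaviour applications.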

  7. A New Method for Estimating the Effective Population Size from Allele Frequency Changes

    PubMed Central

    Pollak, Edward

    1983-01-01

    A new procedure is proposed for estimating the effective population size, given that information is available on changes in frequencies of the alleles at one or more independently segregating loci and the population is observed at two or more separate times. Approximate expressions are obtained for the variances of the new statistic, as well as others, also based on allele frequency changes, that have been discussed in the literature. This analysis indicates that the new statistic will generally have a smaller variance than the others. Estimates of effective population sizes and of the standard errors of the estimates are computed for data on two fly populations that have been discussed in earlier papers. In both cases, there is evidence that the effective population size is very much smaller than the minimum census size of the population. PMID:17246147
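    The temporal approach can be sketched with a standardized measure of allele-frequency change corrected for sampling noise. The statistic below is a generic Fk-type estimator; Pollak's proposed statistic differs in detail, and the allele frequencies and sample sizes are hypothetical.

```python
def fk_statistic(x, y):
    """Standardized allele-frequency change between two samples,
    averaged over K alleles (an Fk-type statistic)."""
    k = len(x)
    return sum((xi - yi) ** 2 / ((xi + yi) / 2.0)
               for xi, yi in zip(x, y)) / k

def effective_size(x, y, t, s0, st):
    """Temporal-method estimate of Ne from samples of s0 and st
    individuals taken t generations apart; the 1/(2s) terms remove
    the expected contribution of sampling error."""
    f = fk_statistic(x, y)
    return t / (2.0 * (f - 1.0 / (2 * s0) - 1.0 / (2 * st)))

# hypothetical two-allele locus sampled 10 generations apart
x = [0.60, 0.40]   # allele frequencies at generation 0
y = [0.48, 0.52]   # allele frequencies at generation t
ne = effective_size(x, y, t=10, s0=100, st=100)
print(round(ne, 1))
```

    Larger drift (larger Fk) relative to the sampling correction implies a smaller effective population size, which is how the fly-population estimates fall far below the census sizes.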

  8. Assessment of the apparent bending stiffness and damping of multilayer plates; modelling and experiment

    NASA Astrophysics Data System (ADS)

    Ege, Kerem; Roozen, N. B.; Leclère, Quentin; Rinaldi, Renaud G.

    2018-07-01

    In the context of aeronautics, automotive and construction applications, the design of light multilayer plates with optimized vibroacoustic damping and isolation performance remains a major industrial challenge and an active topic of research. This paper focuses on the vibrational behaviour of three-layered sandwich composite plates in a broad-band frequency range. Several aspects are studied through measurement techniques and analytical modelling of a steel/polymer/steel sandwich plate system. A contactless measurement of the velocity field of the plates using a scanning laser vibrometer is performed, from which the equivalent single-layer complex rigidity (apparent bending stiffness and apparent damping) in the mid/high frequency range is estimated. The results are combined with low/mid frequency estimates obtained with a high-resolution modal analysis method, so that the frequency-dependent equivalent Young's modulus and equivalent loss factor of the composite plate are identified over the whole [40 Hz-20 kHz] frequency band. The results are in very good agreement with an equivalent single-layer analytical model based on wave propagation analysis (Guyader's model). Comparison with this model allows the frequency-dependent complex modulus of the polymer core layer to be identified through inverse resolution. Dynamic mechanical analysis measurements are also performed on the polymer layer alone and compared with the values obtained through the inverse method. Again, the good agreement between these two estimates over the broad-band frequency range demonstrates the validity of the approach.

  9. Sparse Spectro-Temporal Receptive Fields Based on Multi-Unit and High-Gamma Responses in Human Auditory Cortex

    PubMed Central

    Jenison, Rick L.; Reale, Richard A.; Armstrong, Amanda L.; Oya, Hiroyuki; Kawasaki, Hiroto; Howard, Matthew A.

    2015-01-01

    Spectro-Temporal Receptive Fields (STRFs) were estimated from both multi-unit sorted clusters and high-gamma power responses in human auditory cortex. Intracranial electrophysiological recordings were used to measure responses to a random chord sequence of Gammatone stimuli. Traditional methods for estimating STRFs from single-unit recordings, such as spike-triggered-averages, tend to be noisy and are less robust to other response signals such as local field potentials. We present an extension to recently advanced methods for estimating STRFs from generalized linear models (GLM). A new variant of regression using regularization that penalizes non-zero coefficients is described, which results in a sparse solution. The frequency-time structure of the STRF tends toward grouping in different areas of frequency-time and we demonstrate that group sparsity-inducing penalties applied to GLM estimates of STRFs reduces the background noise while preserving the complex internal structure. The contribution of local spiking activity to the high-gamma power signal was factored out of the STRF using the GLM method, and this contribution was significant in 85 percent of the cases. Although the GLM methods have been used to estimate STRFs in animals, this study examines the detailed structure directly from auditory cortex in the awake human brain. We used this approach to identify an abrupt change in the best frequency of estimated STRFs along posteromedial-to-anterolateral recording locations along the long axis of Heschl’s gyrus. This change correlates well with a proposed transition from core to non-core auditory fields previously identified using the temporal response properties of Heschl’s gyrus recordings elicited by click-train stimuli. PMID:26367010

  10. High-Frequency Subband Compressed Sensing MRI Using Quadruplet Sampling

    PubMed Central

    Sung, Kyunghyun; Hargreaves, Brian A

    2013-01-01

    Purpose To present and validate a new method that formalizes a direct link between the k-space and wavelet domains in order to apply separate undersampling and reconstruction to high- and low-spatial-frequency k-space data. Theory and Methods High- and low-spatial-frequency regions are defined in k-space based on the separation of wavelet subbands, and the conventional compressed sensing (CS) problem is transformed into one of localized k-space estimation. To better exploit wavelet-domain sparsity, CS can be used for high-spatial-frequency regions while parallel imaging can be used for low-spatial-frequency regions. Fourier undersampling is also customized to better accommodate each reconstruction method: random undersampling for CS and regular undersampling for parallel imaging. Results Examples using the proposed method demonstrate successful reconstruction of both low-spatial-frequency content and fine structures in high-resolution 3D breast imaging with a net acceleration of 11 to 12. Conclusion The proposed method improves the reconstruction accuracy of high-spatial-frequency signal content and avoids incoherent artifacts in low-spatial-frequency regions. This new formulation also reduces the reconstruction time due to the smaller problem size. PMID:23280540

  11. Model-based spectral estimation of Doppler signals using parallel genetic algorithms.

    PubMed

    Solano González, J; Rodríguez Vázquez, K; García Nocetti, D F

    2000-05-01

    Conventional spectral analysis methods use a fast Fourier transform (FFT) on consecutive or overlapping windowed data segments. For Doppler ultrasound signals, this approach suffers from inadequate frequency resolution due to the time segment duration and the non-stationary characteristics of the signals. Parametric or model-based estimators can give significant improvements in time-frequency resolution at the expense of higher computational complexity. This work describes an approach that implements, in real time, a parametric spectral estimation method using genetic algorithms (GAs) to find the optimum set of parameters for the adaptive filter that minimises the error function. The aim is to reduce the computational complexity of the conventional algorithm by exploiting the simplicity and inherent parallelism of GAs. This allows the implementation of higher-order filters, increasing the spectral resolution and opening greater scope for using more complex methods.
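    A toy Python sketch of the idea: a GA searches the coefficients of a low-order AR predictor that minimise the prediction error on a synthetic narrowband signal. The GA operators, population sizes, and the AR(2) model order are illustrative choices, not the implementation described in the paper.

```python
import math, random

rng = random.Random(42)

# synthetic narrowband test signal at normalized frequency 0.12
signal = [math.sin(2 * math.pi * 0.12 * t) + rng.gauss(0, 0.05)
          for t in range(200)]

def prediction_error(coeffs, x):
    """Squared error of the AR(2) predictor x[t] ~ -a1*x[t-1] - a2*x[t-2]."""
    a1, a2 = coeffs
    return sum((x[t] + a1 * x[t - 1] + a2 * x[t - 2]) ** 2
               for t in range(2, len(x)))

def genetic_search(x, pop_size=40, generations=60):
    """Toy GA: truncation selection, blend crossover, Gaussian mutation."""
    pop = [(rng.uniform(-2, 2), rng.uniform(-1, 1)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda c: prediction_error(c, x))
        parents = pop[:pop_size // 4]            # keep the fittest quarter
        children = []
        while len(parents) + len(children) < pop_size:
            p, q = rng.sample(parents, 2)
            w = rng.random()                     # blend crossover + mutation
            children.append((w * p[0] + (1 - w) * q[0] + rng.gauss(0, 0.05),
                             w * p[1] + (1 - w) * q[1] + rng.gauss(0, 0.05)))
        pop = parents + children
    return min(pop, key=lambda c: prediction_error(c, x))

a1, a2 = genetic_search(signal)
# pole angle of the fitted AR(2) filter -> dominant frequency
u = max(-1.0, min(1.0, -a1 / (2 * math.sqrt(abs(a2)))))
freq = math.acos(u) / (2 * math.pi)
print(round(freq, 2))
```

    Because every fitness evaluation is independent, the population can be evaluated in parallel, which is the property the authors exploit for real-time operation.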

  12. Bayesian Non-Stationary Index Gauge Modeling of Gridded Precipitation Extremes

    NASA Astrophysics Data System (ADS)

    Verdin, A.; Bracken, C.; Caldwell, J.; Balaji, R.; Funk, C. C.

    2017-12-01

    We propose a Bayesian non-stationary model to generate watershed scale gridded estimates of extreme precipitation return levels. The Climate Hazards Group Infrared Precipitation with Stations (CHIRPS) dataset is used to obtain gridded seasonal precipitation extremes over the Taylor Park watershed in Colorado for the period 1981-2016. For each year, grid cells within the Taylor Park watershed are aggregated to a representative "index gauge," which is input to the model. Precipitation-frequency curves for the index gauge are estimated for each year, using climate variables with significant teleconnections as proxies. Such proxies enable short-term forecasting of extremes for the upcoming season. Disaggregation ratios of the index gauge to the grid cells within the watershed are computed for each year and preserved to translate the index gauge precipitation-frequency curve to gridded precipitation-frequency maps for select return periods. Gridded precipitation-frequency maps are of the same spatial resolution as CHIRPS (0.05° x 0.05°). We verify that the disaggregation method preserves spatial coherency of extremes in the Taylor Park watershed. Validation of the index gauge extreme precipitation-frequency method consists of ensuring extreme value statistics are preserved on a grid cell basis. To this end, a non-stationary extreme precipitation-frequency analysis is performed on each grid cell individually, and the resulting frequency curves are compared to those produced by the index gauge disaggregation method.
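    The disaggregation step can be sketched as simple ratio scaling; the cell values and return level below are hypothetical.

```python
def disaggregate(index_return_level, cell_values, index_value):
    """Translate an index-gauge return level to grid cells using the
    ratio of each cell's seasonal maximum to the index-gauge value."""
    return [index_return_level * v / index_value for v in cell_values]

# hypothetical 4-cell watershed, one season
cells = [31.0, 28.5, 35.2, 30.1]            # seasonal precip maxima (mm)
index = sum(cells) / len(cells)             # representative index gauge
grid_100yr = disaggregate(62.0, cells, index)   # 100-yr level at index gauge
print([round(v, 1) for v in grid_100yr])
```

    Because the index gauge here is the cell mean, the disaggregated levels average back to the index-gauge return level, preserving spatial coherency in the simplest sense.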

  13. ESTIMATING THE RADIUS OF THE CONVECTIVE CORE OF MAIN-SEQUENCE STARS FROM OBSERVED OSCILLATION FREQUENCIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Wuming, E-mail: yangwuming@bnu.edu.cn, E-mail: yangwuming@ynao.ac.cn

    The determination of the size of the convective core of main-sequence stars is usually dependent on the construction of stellar models. Here we introduce a method to estimate the radius of the convective core of main-sequence stars with masses between about 1.1 and 1.5 M⊙ from observed frequencies of low-degree p-modes. A formula is proposed to achieve the estimation. The values of the radius of the convective core of four known stars are successfully estimated by the formula. The radius of the convective core of KIC 9812850 estimated by the formula is 0.140 ± 0.028 R⊙. In order to confirm this prediction, a grid of evolutionary models was computed. The value of the convective-core radius of the best-fit model of KIC 9812850 is 0.149 R⊙, which is in good agreement with that estimated by the formula from observed frequencies. The formula aids in understanding the interior structure of stars directly from observed frequencies, without dependence on the construction of models.

  14. Empirical Green's function analysis: Taking the next step

    USGS Publications Warehouse

    Hough, S.E.

    1997-01-01

    An extension of the empirical Green's function (EGF) method is presented that involves determination of source parameters using standard EGF deconvolution, followed by inversion for a common attenuation parameter for a set of colocated events. Recordings of three or more colocated events can thus be used to constrain a single path attenuation estimate. I apply this method to recordings from the 1995-1996 Ridgecrest, California, earthquake sequence; I analyze four clusters consisting of 13 total events with magnitudes between 2.6 and 4.9. I first obtain corner frequencies, which are used to infer Brune stress drop estimates. I obtain stress drop values of 0.3-53 MPa (with all but one between 0.3 and 11 MPa), with no resolved increase of stress drop with moment. With the corner frequencies constrained, the inferred attenuation parameters are very consistent; they imply an average shear wave quality factor of approximately 20-25 for alluvial sediments within the Indian Wells Valley. Although the resultant spectral fitting (using corner frequency and κ) is good, the residuals are consistent among the clusters analyzed. Their spectral shape is similar to the theoretical one-dimensional response of a layered low-velocity structure in the valley (an absolute site response cannot be determined by this method, because of an ambiguity between the absolute response and the source spectral amplitudes). I show that even this subtle site response can significantly bias estimates of corner frequency and κ if it is ignored in an inversion for only source and path effects. The multiple-EGF method presented in this paper is analogous to a joint inversion for source, path, and site effects; the use of colocated sets of earthquakes appears to offer significant advantages in improving resolution of all three estimates, especially if data are from a single site or sites with similar site response.

  15. Estimation of respiratory rhythm during night sleep using a bio-radar

    NASA Astrophysics Data System (ADS)

    Tataraidze, Alexander; Anishchenko, Lesya; Alekhin, Maksim; Korostovtseva, Lyudmila; Sviryaev, Yurii

    2014-05-01

    An assessment of bio-radiolocation monitoring of respiratory rhythm during sleep is given. Full-night respiratory inductance plethysmography (RIP) and bio-radiolocation (BRL) records were collected simultaneously in a sleep laboratory. Polysomnography data from 5 subjects without sleep breathing disorders were used. A multi-frequency bio-radar with stepped frequency modulation was applied, with 8 operating frequencies ranging from 3.6 to 4.0 GHz. BRL data were recorded in two quadratures. Respiratory cycles were detected in the time domain. The obtained data were used to evaluate the correlation between BRL and RIP respiration rate estimates, and a strong correlation between the corresponding time series was revealed. The BRL method thus provides a reliable means of estimating respiratory rhythm and respiratory rate variability during full-night sleep.
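    Time-domain detection of respiratory cycles can be sketched by counting upward zero crossings of a detrended waveform. This is a generic illustration, not the processing chain used in the study, and the sampling rate and breathing frequency are hypothetical.

```python
import math

def respiration_rate(x, fs):
    """Estimate breathing rate (breaths/min) by counting upward zero
    crossings of a mean-removed respiratory waveform."""
    mean = sum(x) / len(x)
    d = [v - mean for v in x]
    crossings = sum(1 for a, b in zip(d, d[1:]) if a < 0 <= b)
    duration_min = len(x) / fs / 60.0
    return crossings / duration_min

fs = 10.0                                       # Hz, hypothetical
# one minute of a clean 0.25 Hz breathing waveform (phase offset 0.7 rad)
sig = [math.sin(2 * math.pi * 0.25 * t / fs + 0.7)
       for t in range(int(fs * 60))]
rate = respiration_rate(sig, fs)
print(rate)   # 0.25 Hz corresponds to about 15 breaths/min
```

    Real BRL signals would need bandpass filtering and motion-artifact rejection before a crossing or peak count becomes reliable.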

  16. An eigenfunction method for reconstruction of large-scale and high-contrast objects.

    PubMed

    Waag, Robert C; Lin, Feng; Varslot, Trond K; Astheimer, Jeffrey P

    2007-07-01

    A multiple-frequency inverse scattering method that uses eigenfunctions of a scattering operator is extended to image large-scale and high-contrast objects. The extension uses an estimate of the scattering object to form the difference between the scattering by the object and the scattering by the estimate of the object. The scattering potential defined by this difference is expanded in a basis of products of acoustic fields. These fields are defined by eigenfunctions of the scattering operator associated with the estimate. In the case of scattering objects for which the estimate is radial, symmetries in the expressions used to reconstruct the scattering potential greatly reduce the amount of computation. The range of parameters over which the reconstruction method works well is illustrated using calculated scattering by different objects. The method is applied to experimental data from a 48-mm diameter scattering object with tissue-like properties. The image reconstructed from measurements has, relative to a conventional B-scan formed using a low f-number at the same center frequency, significantly higher resolution and less speckle, implying that small, high-contrast structures can be demonstrated clearly using the extended method.

  17. Coupled Riccati equations for complex plane constraint

    NASA Technical Reports Server (NTRS)

    Strong, Kristin M.; Sesak, John R.

    1991-01-01

    A new Linear Quadratic Gaussian design method is presented which provides prescribed imaginary-axis pole placement for optimal control and estimation systems. This procedure contributes another degree of design freedom to flexible spacecraft control. Current design methods which interject modal damping into the system tend to have little effect on modal frequencies; i.e., they predictably shift open plant poles horizontally in the complex plane to form the closed-loop controller or estimator pole constellation, but make little provision for vertical (imaginary-axis) pole shifts. Imaginary-axis shifts which reduce the closed-loop modal frequencies (the bandwidths) are desirable since they reduce the sensitivity of the system to noise disturbances. The new method drives the closed-loop modal frequencies to predictable (specified) levels; frequencies as low as zero rad/sec (real-axis pole placement) can be achieved. The design procedure works through rotational and translational destabilizations of the plant, and a coupling of two independently solved algebraic Riccati equations through a structured state weighting matrix. Two new concepts, gain transference and Q equivalency, are introduced and their use shown.

  18. Frequency domain analysis of errors in cross-correlations of ambient seismic noise

    NASA Astrophysics Data System (ADS)

    Liu, Xin; Ben-Zion, Yehuda; Zigone, Dimitri

    2016-12-01

    We analyse random errors (variances) in cross-correlations of ambient seismic noise in the frequency domain, which differ from previous time domain methods. Extending previous theoretical results on ensemble averaged cross-spectrum, we estimate confidence interval of stacked cross-spectrum of finite amount of data at each frequency using non-overlapping windows with fixed length. The extended theory also connects amplitude and phase variances with the variance of each complex spectrum value. Analysis of synthetic stationary ambient noise is used to estimate the confidence interval of stacked cross-spectrum obtained with different length of noise data corresponding to different number of evenly spaced windows of the same duration. This method allows estimating Signal/Noise Ratio (SNR) of noise cross-correlation in the frequency domain, without specifying filter bandwidth or signal/noise windows that are needed for time domain SNR estimations. Based on synthetic ambient noise data, we also compare the probability distributions, causal part amplitude and SNR of stacked cross-spectrum function using one-bit normalization or pre-whitening with those obtained without these pre-processing steps. Natural continuous noise records contain both ambient noise and small earthquakes that are inseparable from the noise with the existing pre-processing steps. Using probability distributions of random cross-spectrum values based on the theoretical results provides an effective way to exclude such small earthquakes, and additional data segments (outliers) contaminated by signals of different statistics (e.g. rain, cultural noise), from continuous noise waveforms. This technique is applied to constrain values and uncertainties of amplitude and phase velocity of stacked noise cross-spectrum at different frequencies, using data from southern California at both regional scale (˜35 km) and dense linear array (˜20 m) across the plate-boundary faults. 
A block bootstrap resampling method is used to account for temporal correlation of noise cross-spectrum at low frequencies (0.05-0.2 Hz) near the ocean microseismic peaks.

  19. Do regional methods really help reduce uncertainties in flood frequency analyses?

    NASA Astrophysics Data System (ADS)

    Cong Nguyen, Chi; Payrastre, Olivier; Gaume, Eric

    2013-04-01

    Flood frequency analyses are often based on continuous measured series at gauge sites. However, the length of the available data sets is usually too short to provide reliable estimates of extreme design floods. To reduce the estimation uncertainties, the analyzed data sets have to be extended either in time, making use of historical and paleoflood data, or in space, merging data sets considered as statistically homogeneous to build large regional data samples. Nevertheless, the advantage of the regional analyses, the important increase of the size of the studied data sets, may be counterbalanced by the possible heterogeneities of the merged sets. The application and comparison of four different flood frequency analysis methods to two regions affected by flash floods in the south of France (Ardèche and Var) illustrates how this balance between the number of records and possible heterogeneities plays in real-world applications. The four tested methods are: (1) a local statistical analysis based on the existing series of measured discharges, (2) a local analysis valuating the existing information on historical floods, (3) a standard regional flood frequency analysis based on existing measured series at gauged sites and (4) a modified regional analysis including estimated extreme peak discharges at ungauged sites. Monte Carlo simulations are conducted to simulate a large number of discharge series with characteristics similar to the observed ones (type of statistical distributions, number of sites and records) to evaluate to which extent the results obtained on these case studies can be generalized. These two case studies indicate that even small statistical heterogeneities, which are not detected by the standard homogeneity tests implemented in regional flood frequency studies, may drastically limit the usefulness of such approaches. 
    On the other hand, the results show that incorporating information on extreme events, either historical flood events at gauged sites or estimated extremes at ungauged sites in the considered region, is an efficient way to reduce uncertainties in flood frequency studies.
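    The Monte Carlo comparison described above can be sketched in miniature: draw synthetic flood series from a known distribution and compare the T-year quantile error of a single short record against a pooled, perfectly homogeneous regional sample. The Gumbel distribution, site count, record length, and return period below are illustrative assumptions, not the study's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(42)

def gumbel_quantile(sample, T):
    """Method-of-moments Gumbel fit, then the T-year quantile."""
    beta = sample.std(ddof=1) * np.sqrt(6) / np.pi
    mu = sample.mean() - 0.5772 * beta
    return mu - beta * np.log(-np.log(1 - 1.0 / T))

# Hypothetical setup: 20 homogeneous sites, 30 years each, true Gumbel(100, 30).
mu_true, beta_true, T = 100.0, 30.0, 100
q_true = mu_true - beta_true * np.log(-np.log(1 - 1.0 / T))

err_local, err_regional = [], []
for _ in range(500):
    sites = rng.gumbel(mu_true, beta_true, size=(20, 30))
    err_local.append(gumbel_quantile(sites[0], T) - q_true)         # one 30-yr record
    err_regional.append(gumbel_quantile(sites.ravel(), T) - q_true)  # pooled 600 yr
rmse_local = np.sqrt(np.mean(np.square(err_local)))
rmse_regional = np.sqrt(np.mean(np.square(err_regional)))
print(rmse_local, rmse_regional)  # pooling sharply reduces quantile error
```

    With heterogeneous sites (different true distributions per site) the pooled estimate acquires a bias that can outweigh the variance gain, which is the trade-off the abstract describes.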

  20. Error analysis and new dual-cosine window for estimating the sensor frequency response function from the step response data

    NASA Astrophysics Data System (ADS)

    Yang, Shuang-Long; Liang, Li-Ping; Liu, Hou-De; Xu, Ke-Jun

    2018-03-01

    Aiming at reducing the estimation error of the sensor frequency response function (FRF) estimated by the commonly used window-based spectral estimation method, the error models of the interpolation and transient errors are derived in the form of non-parametric models. The analysis of window effects on these errors reveals that the commonly used Hanning window leads to a smaller interpolation error, which can also be largely eliminated by cubic spline interpolation when estimating the FRF from step response data, and that a window with a smaller front-end value suppresses more of the transient error. Thus, a new dual-cosine window with its non-zero discrete Fourier transform bins at -3, -1, 0, 1, and 3 is constructed for FRF estimation. Compared with the Hanning window, the new dual-cosine window has equivalent interpolation-error suppression and better transient-error suppression when estimating the FRF from the step response; specifically, it improves the asymptotic decay of the transient error from O(N^-2) for the Hanning window to O(N^-4) while increasing the uncertainty only slightly (about 0.4 dB). One direction of a wind tunnel strain gauge balance, a high-order, lightly damped, non-minimum-phase system, is then employed as an example to verify the new dual-cosine window-based spectral estimation method. The model simulation results show that the new dual-cosine window method outperforms the Hanning window method for FRF estimation and, compared with the Gans and LPM methods, has the advantages of simple computation, less time consumption, and short data requirements; the calculation results from the actual balance data are consistent with the simulation results. Thus, the new dual-cosine window is effective and practical for FRF estimation.
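    A minimal sketch of window-based FRF estimation from a step response, assuming a hypothetical first-order sensor and using the standard Hann window in place of the paper's dual-cosine window: differentiate the step response to get the impulse response, taper it, and take the DFT.

```python
import numpy as np

# Hypothetical first-order sensor H(s) = 1/(tau*s + 1), sampled step response.
fs, tau, N = 1000.0, 0.02, 1024
t = np.arange(N) / fs
step = 1.0 - np.exp(-t / tau)

h = np.diff(step, prepend=0.0) * fs     # impulse response estimate (derivative)
w = np.hanning(2 * N)[N:]               # half-window: taper only the record tail
H = np.fft.rfft(h * w) / fs             # windowed DFT -> FRF estimate
freqs = np.fft.rfftfreq(N, 1 / fs)
print(abs(H[0]))   # DC gain, close to 1 for this sensor
```

    Swapping in the dual-cosine window would change only the line defining `w`; the paper's point is that the window's front-end value controls how much transient error leaks into `H`.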

  1. Radio frequency detection assembly and method for detecting radio frequencies

    DOEpatents

    Cown, Steven H.; Derr, Kurt Warren

    2010-03-16

    A radio frequency detection assembly is described which includes a radio frequency detector that detects a radio frequency emission produced by a radio frequency emitter at a given location remote relative to the radio frequency detector; a location assembly electrically coupled with the radio frequency detector and operable to estimate the location of the radio frequency emitter from the received radio frequency emission; and a radio frequency transmitter electrically coupled with the radio frequency detector and the location assembly, which transmits a radio frequency signal reporting the presence of the radio frequency emitter.

  2. An Improved Azimuth Angle Estimation Method with a Single Acoustic Vector Sensor Based on an Active Sonar Detection System

    PubMed Central

    Zhao, Anbang; Ma, Lin; Ma, Xuefei; Hui, Juan

    2017-01-01

    In this paper, an improved azimuth angle estimation method with a single acoustic vector sensor (AVS) is proposed based on matched filtering theory. The proposed method is mainly applied in an active sonar detection system. Starting from the conventional passive method based on complex acoustic intensity measurement, the mathematical and physical model of the proposed method is described in detail. Computer simulation and lake experiment results indicate that this method can realize azimuth angle estimation with high precision using only a single AVS. Compared with the conventional method, the proposed method achieves better estimation performance. Moreover, it does not require complex operations in the frequency domain, reducing computational complexity. PMID:28230763
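    The conventional complex-acoustic-intensity bearing estimate that the proposed method builds on can be sketched as follows (synthetic plane wave, hypothetical frequency and bearing; the paper's matched-filtering stage is omitted):

```python
import numpy as np

# For a plane wave, the time-averaged products of pressure with the two
# orthogonal velocity channels of an AVS point along the arrival direction.
fs, f0, az_true = 8000.0, 500.0, 40.0    # Hz, Hz, degrees (all hypothetical)
t = np.arange(4096) / fs
p = np.cos(2 * np.pi * f0 * t)           # pressure channel
vx = np.cos(np.radians(az_true)) * p     # x velocity channel
vy = np.sin(np.radians(az_true)) * p     # y velocity channel

Ix, Iy = np.mean(p * vx), np.mean(p * vy)   # time-averaged acoustic intensity
az_est = np.degrees(np.arctan2(Iy, Ix))
print(az_est)   # ~40.0
```

    The active-sonar method in the abstract would first correlate each channel with the transmitted waveform, which suppresses noise before this intensity average is formed.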

  3. Approaches to stream solute load estimation for solutes with varying dynamics from five diverse small watersheds

    USGS Publications Warehouse

    Aulenbach, Brent T.; Burns, Douglas A.; Shanley, James B.; Yanai, Ruth D.; Bae, Kikang; Wild, Adam; Yang, Yang; Yi, Dong

    2016-01-01

    Estimating streamwater solute loads is a central objective of many water-quality monitoring and research studies, as loads are used to compare with atmospheric inputs, to infer biogeochemical processes, and to assess whether water quality is improving or degrading. In this study, we evaluate loads and associated errors to determine the best load estimation technique among three methods (a period-weighted approach, the regression-model method, and the composite method) based on a solute's concentration dynamics and sampling frequency. We evaluated a broad range of concentration dynamics varying with stream flow and season using four dissolved solutes (sulfate, silica, nitrate, and dissolved organic carbon) at five diverse small watersheds (Sleepers River Research Watershed, VT; Hubbard Brook Experimental Forest, NH; Biscuit Brook Watershed, NY; Panola Mountain Research Watershed, GA; and Río Mameyes Watershed, PR) with fairly high-frequency sampling during a 10- to 11-yr period. Data sets with three different sampling frequencies were derived from the full data set at each site (weekly plus storm/snowmelt events, weekly, and monthly), and errors in loads were assessed for the study period, annually, and monthly. For solutes that had a moderate to strong concentration–discharge relation, the composite method performed best, unless the autocorrelation of the model residuals was <0.2, in which case the regression-model method was most appropriate. For solutes that had a nonexistent or weak concentration–discharge relation (model R2 < about 0.3), the period-weighted approach was most appropriate. The lowest errors in loads were achieved for solutes with the strongest concentration–discharge relations. Sample and regression model diagnostics could be used to approximate overall accuracies and annual precisions. 
    For the period-weighted approach, errors were lower when the variance in concentrations was lower, the degree of autocorrelation in the concentrations was higher, and the sampling frequency was higher. The period-weighted approach was most sensitive to sampling frequency. For the regression-model and composite methods, errors were lower when the variance in model residuals was lower. For the composite method, errors were lower when the autocorrelation in the residuals was higher. Guidelines to determine the best load estimation method based on solute concentration–discharge dynamics and diagnostics are presented and should be applicable to other studies.
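    One simple variant of the period-weighted approach can be sketched as follows; the nearest-sample assignment and the units are illustrative assumptions, and the study's exact interpolation scheme may differ.

```python
import numpy as np

def period_weighted_load(sample_day, conc, daily_flow):
    """Assign each daily flow the concentration of the nearest-in-time
    sample, then sum C*Q over days (a period-weighted load estimate)."""
    days = np.arange(len(daily_flow))
    idx = np.abs(days[:, None] - np.asarray(sample_day)[None, :]).argmin(axis=1)
    return float(np.sum(np.asarray(conc)[idx] * daily_flow))

# Hypothetical units: conc in mg/L, flow in L/day -> load in mg.
flow = np.full(14, 2.0)                      # constant 2 L/day for two weeks
load = period_weighted_load([0, 7], [1.0, 3.0], flow)
print(load)   # days 0-3 take 1 mg/L, days 4-13 take 3 mg/L: (4*1 + 10*3)*2 = 68
```

    The regression-model and composite methods would instead predict concentration from discharge and season for every day, which is why their errors track the strength of the concentration–discharge relation.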

  4. A Transfer Voltage Simulation Method for Generator Step Up Transformers

    NASA Astrophysics Data System (ADS)

    Funabashi, Toshihisa; Sugimoto, Toshirou; Ueda, Toshiaki; Ametani, Akihiro

    Measurements on 13 sets of generator step-up (GSU) transformers have shown that the transfer voltage of a GSU transformer involves one dominant oscillation frequency. The frequency can be estimated from the inductance and capacitance values of the GSU transformer low-voltage side. This observation has led to a new method for simulating a GSU transformer transfer voltage. The method is based on the EMTP TRANSFORMER model, but stray capacitances are added. The leakage inductance and the magnetizing resistance are modified using approximate curves for their frequency characteristics determined from the measured results. The new method is validated in comparison with the measured results.

  5. Numerical simulation of Bragg scattering of sound by surface roughness for different values of the Rayleigh parameter

    NASA Astrophysics Data System (ADS)

    Salin, M. B.; Dosaev, A. S.; Konkov, A. I.; Salin, B. M.

    2014-07-01

    Numerical simulation methods are described for the spectral characteristics of an acoustic signal scattered by multiscale surface waves. The methods include algorithms for calculating the scattered field by the Kirchhoff method and with the use of an integral equation, as well as algorithms for surface wave generation with allowance for nonlinear hydrodynamic effects. The paper focuses on the spectrum of Bragg scattering caused by surface waves whose frequency exceeds the fundamental low-frequency component of the surface waves by several octaves. The spectrum broadening of the backscattered signal is estimated, as is the possibility of extending the range of applicability of the computing method, developed under small-perturbation conditions, to cases characterized by a Rayleigh parameter of ≥1.
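    The Rayleigh roughness parameter that delimits the small-perturbation regime can be computed directly; the sin-of-grazing-angle convention and the 1500 m/s sound speed below are assumptions, since conventions vary between authors.

```python
import numpy as np

def rayleigh_parameter(freq_hz, rms_height_m, grazing_deg, c=1500.0):
    """Rayleigh roughness parameter P = 2*k*h*sin(grazing angle).
    P << 1: small-perturbation scattering holds; P >= 1: the regime
    toward which the paper extends its method (c: assumed sound speed)."""
    k = 2 * np.pi * freq_hz / c
    return 2 * k * rms_height_m * np.sin(np.radians(grazing_deg))

print(rayleigh_parameter(3000.0, 0.1, 30.0))   # ~1.26: outside the small-P regime
```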

  6. Frequency of data extraction errors and methods to increase data extraction quality: a methodological review.

    PubMed

    Mathes, Tim; Klaßen, Pauline; Pieper, Dawid

    2017-11-28

    Our objective was to assess the frequency of data extraction errors and their potential impact on results in systematic reviews. Furthermore, we evaluated the effect of different extraction methods, reviewer characteristics, and reviewer training on error rates and results. We performed a systematic review of methodological literature in PubMed, the Cochrane methodological registry, and by manual searches (12/2016). Studies were selected by two reviewers independently. Data were extracted into standardized tables by one reviewer and verified by a second. The analysis included six studies: four on extraction error frequency, one comparing different reviewer extraction methods, and two comparing different reviewer characteristics. We did not find a study on reviewer training. There was a high rate of extraction errors (up to 50%), and errors often had an influence on effect estimates. Different data extraction methods and reviewer characteristics had a moderate effect on extraction error rates and effect estimates. Despite the high prevalence of extraction errors, the evidence base for established standards of data extraction seems weak. More comparative studies are needed to gain deeper insight into the influence of different extraction methods.

  7. Conventional, Bayesian, and Modified Prony's methods for characterizing fast and slow waves in equine cancellous bone

    PubMed Central

    Groopman, Amber M.; Katz, Jonathan I.; Holland, Mark R.; Fujita, Fuminori; Matsukawa, Mami; Mizuno, Katsunori; Wear, Keith A.; Miller, James G.

    2015-01-01

    Conventional, Bayesian, and the modified least-squares Prony's plus curve-fitting (MLSP + CF) methods were applied to data acquired using 1 MHz center frequency, broadband transducers on a single equine cancellous bone specimen that was systematically shortened from 11.8 mm down to 0.5 mm for a total of 24 sample thicknesses. Due to overlapping fast and slow waves, conventional analysis methods were restricted to data from sample thicknesses ranging from 11.8 mm to 6.0 mm. In contrast, Bayesian and MLSP + CF methods successfully separated fast and slow waves and provided reliable estimates of the ultrasonic properties of fast and slow waves for sample thicknesses ranging from 11.8 mm down to 3.5 mm. Comparisons of the three methods were carried out for phase velocity at the center frequency and the slope of the attenuation coefficient for the fast and slow waves. Good agreement among the three methods was also observed for average signal loss at the center frequency. The Bayesian and MLSP + CF approaches were able to separate the fast and slow waves and provide good estimates of the fast and slow wave properties even when the two wave modes overlapped in both time and frequency domains making conventional analysis methods unreliable. PMID:26328678
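    A much-reduced stand-in for the Prony-type analysis described above: classic order-2 least-squares Prony recovering the frequency of a single noiseless damped sinusoid. The sampling rate and mode parameters are hypothetical; separating two overlapping modes, as in the paper, requires a higher model order plus the curve-fitting stage.

```python
import numpy as np

fs, f0, damp = 10e6, 1.0e6, 2e4          # hypothetical 1 MHz damped mode
n = np.arange(200)
x = np.exp(-damp * n / fs) * np.cos(2 * np.pi * f0 * n / fs)

# Linear prediction x[n] = a1*x[n-1] + a2*x[n-2] is exact for this model.
A = np.column_stack((x[1:-1], x[:-2]))
a1, a2 = np.linalg.lstsq(A, x[2:], rcond=None)[0]
roots = np.roots([1.0, -a1, -a2])        # poles z = exp((-damp +/- j*2*pi*f0)/fs)
f_est = abs(np.angle(roots[0])) * fs / (2 * np.pi)
print(f_est)   # ~1.0e6 Hz
```

    The pole magnitude `abs(roots[0])` likewise encodes the damping, which is how Prony-type methods deliver attenuation as well as velocity for each wave mode.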

  8. Methods for estimating the magnitude and frequency of peak streamflows for unregulated streams in Oklahoma

    USGS Publications Warehouse

    Lewis, Jason M.

    2010-01-01

    Peak-streamflow regression equations were determined for estimating flows with exceedance probabilities from 50 to 0.2 percent for the state of Oklahoma. These regression equations incorporate basin characteristics to estimate peak-streamflow magnitude and frequency throughout the state by use of a generalized least squares regression analysis. The most statistically significant independent variables required to estimate peak-streamflow magnitude and frequency for unregulated streams in Oklahoma are contributing drainage area, mean-annual precipitation, and main-channel slope. The regression equations are applicable to watersheds with drainage areas less than 2,510 square miles that are not affected by regulation. The resulting regression equations had a standard model error ranging from 31 to 46 percent. Annual-maximum peak flows observed at 231 streamflow-gaging stations through water year 2008 were used for the regression analysis. Gage peak-streamflow estimates were used from previous work unless 2008 gaging-station data were available, in which case new peak-streamflow estimates were calculated. The U.S. Geological Survey StreamStats web application was used to obtain the independent variables required for the peak-streamflow regression equations. Limitations on the use of the regression equations and the reliability of regression estimates for natural unregulated streams are described. Log-Pearson Type III analysis information, basin and climate characteristics, and the peak-streamflow frequency estimates for the 231 gaging stations in and near Oklahoma are listed. Methodologies are presented to estimate peak streamflows at ungaged sites by using estimates from gaging stations on unregulated streams. For ungaged sites on urban streams and streams regulated by small floodwater retarding structures, an adjustment of the statewide regression equations for natural unregulated streams can be used to estimate peak-streamflow magnitude and frequency.

  9. Fundamental frequency estimation of singing voice

    NASA Astrophysics Data System (ADS)

    de Cheveigné, Alain; Henrich, Nathalie

    2002-05-01

    A method of fundamental frequency (F0) estimation recently developed for speech [de Cheveigné and Kawahara, J. Acoust. Soc. Am. (to be published)] was applied to singing voice. An electroglottograph signal recorded together with the microphone provided a reference by which estimates could be validated. Using the standard parameter settings for speech, error rates were low despite the wide range of F0s (about 100 to 1600 Hz). Most "errors" were due to irregular vibration of the vocal folds, a sharp formant resonance that reduced the waveform to a single harmonic, or fast F0 changes such as in high-amplitude vibrato. Our database (18 singers from baritone to soprano) included examples of diphonic singing, for which melody is carried by variations of the frequency of a narrow formant rather than F0. By varying a parameter (the ratio of inharmonic to total power), the algorithm could be tuned to follow either frequency. Although the method has not been formally tested on a wide range of instruments, it seems appropriate for musical applications because it is accurate, accepts a wide range of F0s, and can be implemented with low latency for interactive applications. [Work supported by the Cognitique programme of the French Ministry of Research and Technology.]
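    The core of the difference-function family of F0 estimators to which this method belongs can be sketched as follows; the published algorithm adds cumulative-mean normalization, interpolation, and the inharmonicity parameter mentioned above, none of which are shown. The search band is assumed to bracket the true F0.

```python
import numpy as np

def f0_difference_function(x, fs, tau_min, tau_max):
    """F0 from the lag minimizing d(tau) = sum_n (x[n] - x[n+tau])^2,
    the quantity at the heart of difference-function F0 estimators."""
    N = len(x) - tau_max
    taus = np.arange(tau_min, tau_max)
    d = np.array([np.sum((x[:N] - x[tau:tau + N]) ** 2) for tau in taus])
    return fs / taus[np.argmin(d)]

fs = 8000.0
t = np.arange(2048) / fs
x = np.sin(2 * np.pi * 200.0 * t)                # synthetic 200 Hz "voice"
print(f0_difference_function(x, fs, 20, 70))     # 200.0
```

    Restricting the lag range avoids octave errors here; the full method handles them with cumulative-mean normalization instead, which is what lets it span 100 to 1600 Hz with one parameter set.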

  10. The estimation of tree posterior probabilities using conditional clade probability distributions.

    PubMed

    Larget, Bret

    2013-07-01

    In this article I introduce the idea of conditional independence of separated subtrees as a principle by which to estimate the posterior probability of trees using conditional clade probability distributions rather than simple sample relative frequencies. I describe an algorithm for these calculations and software which implements these ideas. I show that these alternative calculations are very similar to simple sample relative frequencies for high probability trees but are substantially more accurate for relatively low probability trees. The method allows the posterior probability of unsampled trees to be calculated when these trees contain only clades that are in other sampled trees. Furthermore, the method can be used to estimate the total probability of the set of sampled trees which provides a measure of the thoroughness of a posterior sample.
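    A toy version of the conditional-clade estimator: tally each split's frequency conditional on its parent clade over the posterior sample, then score any tree whose clades all appear in the sample as the product of its conditional clade probabilities. The four-taxon sample below is invented for illustration.

```python
from collections import Counter
from fractions import Fraction

def make_prob_estimator(sampled_trees):
    """Estimate P(tree) as a product of P(split | parent clade)
    tallied over a posterior sample of trees."""
    parent_counts, split_counts = Counter(), Counter()
    for tree in sampled_trees:
        for parent, split in tree:
            parent_counts[parent] += 1
            split_counts[(parent, split)] += 1
    def prob(tree):
        p = Fraction(1)
        for parent, split in tree:
            p *= Fraction(split_counts[(parent, split)], parent_counts[parent])
        return p
    return prob

# Clades are frozensets of taxa; a split is the frozenset of its two children.
A, B, C, D = "ABCD"
cl = lambda *x: frozenset(x)
sp = lambda x, y: frozenset({x, y})
t1 = [(cl(A, B, C, D), sp(cl(A, B), cl(C, D))),
      (cl(A, B), sp(cl(A), cl(B))), (cl(C, D), sp(cl(C), cl(D)))]
t2 = [(cl(A, B, C, D), sp(cl(A), cl(B, C, D))),
      (cl(B, C, D), sp(cl(B), cl(C, D))), (cl(C, D), sp(cl(C), cl(D)))]
prob = make_prob_estimator([t1, t1, t1, t2])
print(prob(t1))   # 3/4: the root split occurs in 3 of 4 samples, all others always
```

    With deeper trees the conditional products diverge from raw sample frequencies, which is where the article shows the estimator's accuracy advantage for low-probability trees.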

  11. Evaluation of multiple frequency bioelectrical impedance and Cole-Cole analysis for the assessment of body water volumes in healthy humans.

    PubMed

    Cornish, B H; Ward, L C; Thomas, B J; Jebb, S A; Elia, M

    1996-03-01

    Objective: To assess the application of a Cole-Cole analysis of multiple frequency bioelectrical impedance analysis (MFBIA) measurements to predict total body water (TBW) and extracellular water (ECW) in humans. This technique has previously been shown to produce accurate and reliable estimates in both normal and abnormal animals. Design: The whole body impedance of 60 healthy humans was measured at 496 frequencies (ranging from 4 kHz to 1 MHz), and the impedance at zero frequency, Ro, and at the characteristic frequency, Zc, were determined from the impedance spectrum (Cole-Cole plot). TBW and ECW were independently determined using deuterium and bromide tracer dilution techniques. Setting: The Dunn Clinical Nutrition Centre and the Department of Biochemistry, University of Queensland. Subjects: 60 healthy adult volunteers (27 men and 33 women, aged 18-45 years). Results: The results suggest that the swept-frequency bioimpedance technique estimates total body water (SEE = 5.2%) and extracellular water (SEE = 10%) only slightly better in normal, healthy subjects than methods based on single-frequency bioimpedance or anthropometric estimates based on weight, height and gender. This study has undertaken the most extensive analysis to date of relationships between TBW (and ECW) and individual impedances obtained at different frequencies (> 400 frequencies), and has shown marginal advantages of using one frequency over another, even if values predicted from theoretical bioimpedance models are used in the estimations. However, in situations where there are disturbances of fluid distribution, values predicted from the Cole-Cole analysis of swept-frequency bioimpedance measurements could prove to be more useful.

  12. 3D Tendon Strain Estimation Using High-frequency Volumetric Ultrasound Images: A Feasibility Study.

    PubMed

    Carvalho, Catarina; Slagmolen, Pieter; Bogaerts, Stijn; Scheys, Lennart; D'hooge, Jan; Peers, Koen; Maes, Frederik; Suetens, Paul

    2018-03-01

    Estimation of strain in tendons for tendinopathy assessment is a hot topic within the sports medicine community. It is believed that, if strain could be estimated accurately, existing treatment and rehabilitation protocols could be improved and presymptomatic abnormalities detected earlier. State-of-the-art studies present inaccurate and highly variable strain estimates, leaving the problem unsolved. Out-of-plane motion, present when acquiring two-dimensional (2D) ultrasound (US) images, is a known problem and may be responsible for such errors. This work investigates the benefit of high-frequency, three-dimensional (3D) US imaging for reducing errors in tendon strain estimation. Volumetric US images were acquired in silico, in vitro, and ex vivo using an innovative acquisition approach that combines the acquisition of 2D high-frequency US images with a mechanically guided system. An affine image registration method was used to estimate global strain. 3D strain estimates were then compared with ground-truth values and with 2D strain estimates. For the in silico data, the 3D estimates showed a mean absolute error (MAE) of 0.07%, 0.05%, and 0.27% along the axial, lateral, and elevation directions, versus an MAE of 0.21% and 0.29% for the 2D estimates. Although 3D can outperform 2D, it does not in the in vitro and ex vivo settings, likely due to 3D acquisition artifacts. Comparison against state-of-the-art methods showed competitive results. This work shows that 3D strain estimates are more accurate than 2D estimates, but acquisition of appropriate 3D US images remains a challenge.

  13. Infrared fixed pattern noise reduction method based on Shearlet Transform

    NASA Astrophysics Data System (ADS)

    Rong, Shenghui; Zhou, Huixin; Zhao, Dong; Cheng, Kuanhong; Qian, Kun; Qin, Hanlin

    2018-06-01

    Non-uniformity correction (NUC) is an effective way to reduce fixed pattern noise (FPN) and improve infrared image quality. Temporal high-pass NUC methods are practical because of their simple implementation; however, traditional temporal high-pass NUC methods rely heavily on scene motion and suffer from image ghosting and blurring. This paper therefore proposes an improved NUC method based on the Shearlet Transform (ST). First, the raw infrared image is decomposed into multiscale and multi-orientation subbands by the ST, and the FPN component exists mainly in certain high-frequency subbands. Then, the high-frequency subbands are processed by a temporal filter to extract the FPN, exploiting its low-frequency temporal characteristics. In addition, each subband has a confidence parameter that determines the degree of FPN, estimated adaptively from the variance of the subband. Finally, NUC is achieved by subtracting the estimated FPN component from the original subbands, and the corrected infrared image is obtained by the inverse ST. The performance of the proposed method is evaluated thoroughly with real and synthetic infrared image sequences. Experimental results indicate that the proposed method can heavily reduce FPN with lower roughness and RMSE.
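    The traditional temporal high-pass NUC that the paper improves on can be sketched per pixel; note that, exactly as criticized above, the running low-pass also absorbs static scene content, which is the source of ghosting (the paper applies the filter to Shearlet subbands instead of raw frames). Frame count, filter constant, and noise levels below are illustrative.

```python
import numpy as np

def temporal_highpass_nuc(frames, alpha=0.05):
    """Classic temporal high-pass NUC: a per-pixel running low-pass tracks
    the static FPN (and, unavoidably, static scene), and is subtracted."""
    lowpass = frames[0].astype(float)
    out = []
    for f in frames:
        lowpass = (1 - alpha) * lowpass + alpha * f   # per-pixel IIR low-pass
        out.append(f - lowpass)                       # high-pass = corrected frame
    return np.array(out)

rng = np.random.default_rng(0)
fpn = rng.normal(0, 10, size=(8, 8))                          # static FPN offsets
frames = 100.0 + fpn + rng.normal(0, 1, size=(200, 8, 8))     # flat scene + noise
corrected = temporal_highpass_nuc(frames)
print(np.abs(corrected[-1]).mean())   # small: the 10-count FPN is largely removed
```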

  14. Estimating population diversity with CatchAll

    PubMed Central

    Bunge, John; Woodard, Linda; Böhning, Dankmar; Foster, James A.; Connolly, Sean; Allen, Heather K.

    2012-01-01

    Motivation: The massive data produced by next-generation sequencing require advanced statistical tools. We address estimating the total diversity or species richness in a population. To date, only relatively simple methods have been implemented in available software. There is a need for software employing modern, computationally intensive statistical analyses including error, goodness-of-fit and robustness assessments. Results: We present CatchAll, a fast, easy-to-use, platform-independent program that computes maximum likelihood estimates for finite-mixture models, weighted linear regression-based analyses and coverage-based non-parametric methods, along with outlier diagnostics. Given sample ‘frequency count’ data, CatchAll computes 12 different diversity estimates and applies a model-selection algorithm. CatchAll also derives discounted diversity estimates to adjust for possibly uncertain low-frequency counts. It is accompanied by an Excel-based graphics program. Availability: Free executable downloads for Linux, Windows and Mac OS, with manual and source code, at www.northeastern.edu/catchall. Contact: jab18@cornell.edu PMID:22333246

  15. Alternative Regression Equations for Estimation of Annual Peak-Streamflow Frequency for Undeveloped Watersheds in Texas using PRESS Minimization

    USGS Publications Warehouse

    Asquith, William H.; Thompson, David B.

    2008-01-01

    The U.S. Geological Survey, in cooperation with the Texas Department of Transportation and in partnership with Texas Tech University, investigated a refinement of the regional regression method and developed alternative equations for estimation of peak-streamflow frequency for undeveloped watersheds in Texas. A common model for estimation of peak-streamflow frequency is based on the regional regression method. The current (2008) regional regression equations for 11 regions of Texas are based on log10 transformations of all regression variables (drainage area, main-channel slope, and watershed shape). Exclusive use of the log10 transformation does not fully linearize the relations between the variables. As a result, some systematic bias remains in the current equations: peak streamflow is overestimated for both the smallest and largest watersheds, and the bias increases with increasing recurrence interval. The primary source of the bias is the discernible curvilinear relation in log10 space between peak streamflow and drainage area. The bias is demonstrated by selected residual plots with superimposed LOWESS trend lines. To address the bias, a statistical framework based on minimization of the PRESS statistic through power transformation of drainage area is described and implemented, and the resulting regression equations are reported. Compared with the log10-exclusive equations, the equations derived from PRESS minimization have smaller PRESS statistics and residual standard errors. Selected residual plots for the PRESS-minimized equations are presented to demonstrate that systematic bias in regional regression equations for peak-streamflow frequency estimation in Texas can be reduced. Because the overall error is similar to that of the previous equations and the bias is reduced, the PRESS-minimized equations reported here provide alternative equations for peak-streamflow frequency estimation.
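    The PRESS statistic itself can be computed without refitting, using the leave-one-out identity e_i/(1 - h_ii) with the hat-matrix leverages; the candidate power transforms of drainage area below are hypothetical, not the report's fitted exponents.

```python
import numpy as np

def press_statistic(X, y):
    """PRESS via the leave-one-out identity: sum of (e_i / (1 - h_ii))^2."""
    H = X @ np.linalg.inv(X.T @ X) @ X.T
    resid = y - H @ y
    return float(np.sum((resid / (1 - np.diag(H))) ** 2))

# Hypothetical example: log10 peak flow vs a power transform of drainage area.
rng = np.random.default_rng(1)
area = rng.uniform(1, 2500, 60)
y = 2.0 + 0.7 * np.log10(area) + rng.normal(0, 0.1, 60)
best = min(((lam, press_statistic(np.column_stack((np.ones(60), area ** lam)), y))
            for lam in (0.1, 0.3, 0.5, 0.7)), key=lambda p: p[1])
print(best)   # the power-transform exponent minimizing PRESS
```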

  16. Self-Tuning Adaptive-Controller Using Online Frequency Identification

    NASA Technical Reports Server (NTRS)

    Chiang, W. W.; Cannon, R. H., Jr.

    1985-01-01

    A real time adaptive controller was designed and tested successfully on a fourth order laboratory dynamic system which features very low structural damping and a noncolocated actuator sensor pair. The controller, implemented in a digital minicomputer, consists of a state estimator, a set of state feedback gains, and a frequency locked loop (FLL) for real time parameter identification. The FLL can detect the closed loop natural frequency of the system being controlled, calculate the mismatch between a plant parameter and its counterpart in the state estimator, and correct the estimator parameter in real time. The adaptation algorithm can correct the controller error and stabilize the system for more than 50% variation in the plant natural frequency, compared with a 10% stability margin in frequency variation for a fixed gain controller having the same performance at the nominal plant condition. After it has locked to the correct plant frequency, the adaptive controller works as well as the fixed gain controller does when there is no parameter mismatch. The very rapid convergence of this adaptive system is demonstrated experimentally, and can also be proven with simple root locus methods.

  17. The Measurement of Term Importance in Automatic Indexing.

    ERIC Educational Resources Information Center

    Salton, G.; And Others

    1981-01-01

    Reviews major term-weighting theories, presents methods for estimating the relevance properties of terms based on their frequency characteristics in a document collection, and compares weighting systems using term relevance properties with more conventional frequency-based methodologies. Eighteen references are cited. (Author/FM)

  18. OPTIMIZING MINIRHIZOTRON SAMPLE FREQUENCY FOR AN EVERGREEN AND DECIDUOUS TREE SPECIES

    EPA Science Inventory

    Increasingly minirhizotrons are being used in natural ecosystems to determine fine root production and turnover, as they provide a nondestructive, in situ method for studying fine root dynamics. Our objective is to determine how image collection frequency influences estimates of ...

  19. High-Precision Attitude Estimation Method of Star Sensors and Gyro Based on Complementary Filter and Unscented Kalman Filter

    NASA Astrophysics Data System (ADS)

    Guo, C.; Tong, X.; Liu, S.; Liu, S.; Lu, X.; Chen, P.; Jin, Y.; Xie, H.

    2017-07-01

    Determining the attitude of a satellite at the time of imaging, and thereby establishing the mathematical relationship between image points and ground points, is essential in high-resolution remote sensing image mapping. A star tracker is insensitive to high-frequency attitude variation because of measurement noise and satellite jitter, but it can determine low-frequency attitude motion with high accuracy. A gyro, as a short-term reference for the satellite's attitude, is sensitive to high-frequency attitude change, but because of gyro drift and integration error, its attitude determination error grows with time. Based on the complementary noise frequency characteristics of these two kinds of attitude sensors, this paper proposes an on-orbit attitude estimation method for star sensors and gyro based on a Complementary Filter (CF) and an Unscented Kalman Filter (UKF). The principle and implementation of the proposed method are described. First, gyro attitude quaternions are acquired from the attitude kinematics equation, and an attitude information fusion method is introduced that applies high-pass filtering to the gyro and low-pass filtering to the star tracker. Second, the CF-fused attitude data are introduced as the observations of the UKF system in the measurement update. The accuracy and effectiveness of the method are validated on simulated sensor attitude data. The results indicate that the proposed method can suppress the gyro drift and the measurement noise of the attitude sensors, significantly improving attitude determination accuracy relative to the simulated on-orbit attitude and to the estimates of a UKF alone defined by the same simulation parameters.
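    The CF stage can be sketched for a single axis under a small-angle assumption (the paper works with quaternions and adds a UKF on top): high-pass the integrated gyro rate, low-pass the star-tracker angle. All signal parameters below are invented.

```python
import numpy as np

def complementary_fuse(star, gyro_rate, dt, alpha=0.98):
    """Single-axis complementary filter: alpha sets the crossover between
    the high-passed gyro integration and the low-passed star-tracker angle."""
    fused = np.empty_like(star)
    fused[0] = star[0]
    for k in range(1, len(star)):
        fused[k] = alpha * (fused[k - 1] + gyro_rate[k] * dt) + (1 - alpha) * star[k]
    return fused

rng = np.random.default_rng(2)
n, dt = 2000, 0.1
truth = 5.0 * np.ones(n)                      # constant attitude angle (deg)
star = truth + rng.normal(0, 0.05, n)         # noisy but unbiased star tracker
gyro = np.zeros(n) + 0.01                     # rate bias -> drift if integrated alone
fused = complementary_fuse(star, gyro, dt)
drift_only = np.cumsum(gyro * dt) + truth[0]  # gyro-only integration drifts away
print(abs(fused[-1] - 5.0), abs(drift_only[-1] - 5.0))
```

    The fused estimate stays near the truth with far less jitter than the raw star measurements, while the gyro-only track drifts, which is the behavior the abstract attributes to the two sensors.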

  20. Adaptive Modal Identification for Flutter Suppression Control

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.; Drew, Michael; Swei, Sean S.

    2016-01-01

    In this paper, we will develop an adaptive modal identification method for identifying the frequencies and damping of a flutter mode based on model-reference adaptive control (MRAC) and least-squares methods. The least-squares parameter estimation will achieve parameter convergence in the presence of persistent excitation whereas the MRAC parameter estimation does not guarantee parameter convergence. Two adaptive flutter suppression control approaches are developed: one based on MRAC and the other based on the least-squares method. The MRAC flutter suppression control is designed as an integral part of the parameter estimation where the feedback signal is used to estimate the modal information. On the other hand, the separation principle of control and estimation is applied to the least-squares method. The least-squares modal identification is used to perform parameter estimation.

  1. Accounting for the Effect of Earth's Rotation in Magnetotelluric Inference

    NASA Astrophysics Data System (ADS)

    Riegert, D. L.; Thomson, D. J.

    2017-12-01

    The study of geomagnetism has been documented as far back as 1722, when the watchmaker G. Graham constructed a more sensitive compass and showed that variations in geomagnetic direction follow an irregular daily pattern. Interest in geomagnetism increased at the end of the 19th century (Lamb, Schuster, Chapman, and Price). The magnetotelluric method was first introduced in the 1950s (Cagniard and Tikhonov) and, at its core, is simply a regression problem. The result of this method is a transfer function estimate which describes the earth's response to magnetic field variations. This estimate can then be used to infer the earth's subsurface structure, useful for applications such as natural resource exploration. The statistical problem of estimating a transfer function between geomagnetic and induced current measurements has evolved since the 1950s due to a variety of problems: non-stationarity, outliers, and violation of Gaussian assumptions. To address some of these issues, robust regression methods (Chave and Thomson, 2004) and the remote reference method (Gamble, 1979) have been proposed and used. The current method provides reasonable estimates but still requires a large amount of data. Using the multitaper method of spectral analysis (Thomson, 1982), taking long (greater than 4 months) blocks of geomagnetic data, and concentrating on frequencies below 1000 microhertz to avoid ultraviolet effects, one finds that: 1) the cross-spectra are dominated by many offset frequencies, including plus and minus 1 and 2 cycles per day; 2) the coherence at these offset frequencies is often stronger than at zero offset; 3) there are strong couplings from the "quasi two-day" cycle; 4) the frequencies are usually not symmetric; 5) the spectra are dominated by the normal modes of the Sun. 
This talk will discuss the method of incorporating these observations into the transfer function estimation model, some of the difficulties that arose, their solutions, and current results.
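    The abstract frames magnetotelluric inference as a regression for a transfer function between magnetic and electric channels. As a generic frequency-domain illustration only (synthetic white-noise channels and a made-up gain of 2, not the authors' multitaper pipeline), the per-frequency least-squares estimate H(f) = S_xy(f)/S_xx(f) can be sketched as:

```python
import numpy as np
from scipy.signal import csd, welch

rng = np.random.default_rng(0)
x = rng.standard_normal(4096)                   # driving "magnetic" channel (synthetic)
y = 2.0 * x + 0.01 * rng.standard_normal(4096)  # induced "electric" response, gain 2 assumed

f, Pxx = welch(x, fs=1.0, nperseg=512)          # auto-spectrum of the input
_, Pxy = csd(x, y, fs=1.0, nperseg=512)         # cross-spectrum input -> output
H = Pxy / Pxx                                   # least-squares transfer function per frequency
```

    With a real station pair, robust or remote-reference weighting would replace this plain division, as the abstract discusses.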

  2. Annual peak discharges from small drainage areas in Montana through September 1976

    USGS Publications Warehouse

    Johnson, M.V.; Omang, R.J.; Hull, J.A.

    1977-01-01

Annual peak discharge from small drainage areas is tabulated for 336 sites in Montana. The 1976 additions included data collected at 206 sites. The program, which investigates the magnitude and frequency of floods from small drainage areas in Montana, began July 1, 1955. Originally, 45 crest-stage gaging stations were established. The purpose of the program is to collect sufficient peak-flow data that, through analysis, can provide methods for estimating the magnitude and frequency of floods at any point in Montana. The ultimate objective is to provide methods for estimating the 100-year flood with the reliability needed for road design. (Woodard-USGS)

  3. Estimation of pseudo-2D shear-velocity section by inversion of high frequency surface waves

    USGS Publications Warehouse

    Luo, Y.; Liu, J.; Xia, J.; Xu, Y.; Liu, Q.

    2006-01-01

A scheme to generate pseudo-2D shear-velocity sections with high horizontal resolution and low field cost by inversion of high frequency surface waves is presented. It contains six steps. The key step is the joint method of cross correlation and phase shift scanning, which uses only two traces to generate an image of the dispersion curve. Because Rayleigh-wave dispersion is the most important input for estimating near-surface shear-wave velocity, the method can effectively obtain reliable images of dispersion curves with only a couple of traces. The result of a synthetic example shows the feasibility of this scheme. © 2005 Society of Exploration Geophysicists.
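    The core idea of phase shift scanning can be illustrated with a toy two-trace example (all values hypothetical: one 10 Hz tone, receivers 20 m apart, a known 500 m/s phase velocity; this is not the paper's six-step workflow). Trial phase velocities are scanned, and the velocity that brings the two traces' spectral phases into alignment maximizes the stacked amplitude:

```python
import numpy as np

fs, f0, c_true = 1000.0, 10.0, 500.0        # sampling (Hz), tone (Hz), true velocity (m/s)
t = np.arange(0, 2.0, 1.0 / fs)
offsets = np.array([0.0, 20.0])             # two receiver offsets (m), assumed
traces = np.array([np.cos(2 * np.pi * f0 * (t - x / c_true)) for x in offsets])

k = int(round(f0 * t.size / fs))            # FFT bin of the 10 Hz tone
S = np.fft.rfft(traces, axis=1)[:, k]
S /= np.abs(S)                              # keep phase only

w = 2 * np.pi * f0
c_trials = np.linspace(200.0, 900.0, 701)   # scanned phase velocities
power = np.abs(np.array([np.sum(S * np.exp(1j * w * offsets / c)) for c in c_trials]))
c_est = c_trials[np.argmax(power)]          # velocity where the traces stack in phase
```

    Repeating the scan over many frequencies traces out the dispersion-curve image the abstract describes.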

  4. Methods for estimating magnitude and frequency of floods in Montana based on data through 1983

    USGS Publications Warehouse

    Omang, R.J.; Parrett, Charles; Hull, J.A.

    1986-01-01

Equations are presented for estimating flood magnitudes for ungaged sites in Montana based on data through 1983. The State was divided into eight regions based on hydrologic conditions, and separate multiple regression equations were developed for each region. These equations relate annual flood magnitudes and frequencies to basin characteristics and are applicable only to natural flow streams. In three of the regions, equations also were developed relating flood magnitudes and frequencies to basin characteristics and channel geometry measurements. The standard errors of estimate for an exceedance probability of 1% ranged from 39% to 87%. Techniques are described for estimating annual flood magnitude and flood frequency information at ungaged sites based on data from gaged sites on the same stream. Included are curves relating flood frequency information to drainage area for eight major streams in the State. Maximum known flood magnitudes in Montana are compared with estimated 1%-chance flood magnitudes and with maximum known floods in the United States. Values of flood magnitudes for selected exceedance probabilities and values of significant basin characteristics and channel geometry measurements for all gaging stations used in the analysis are tabulated. Included are 375 stations in Montana and 28 nearby stations in Canada and adjoining States. (Author's abstract)
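    Regional regression equations of this kind are conventionally fit in log space, with flood quantiles as a power function of basin characteristics. A minimal sketch with invented numbers (a single predictor, drainage area, and a synthetic power-law relation; the report's actual equations use several basin and channel-geometry variables):

```python
import numpy as np

# hypothetical training data: drainage area (mi^2) and 100-yr peak discharge (ft^3/s)
area = np.array([10.0, 25.0, 60.0, 150.0, 400.0])
q100 = 180.0 * area ** 0.7                     # synthetic power-law relation

# regression in log space: log10(Q100) = b0 + b1 * log10(A)
X = np.column_stack([np.ones_like(area), np.log10(area)])
b, *_ = np.linalg.lstsq(X, np.log10(q100), rcond=None)

def predict_q100(a):
    """Apply the fitted regional equation to an ungaged basin of area a."""
    return 10 ** (b[0] + b[1] * np.log10(a))
```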

  5. Building of an Experimental Cline With Arabidopsis thaliana to Estimate Herbicide Fitness Cost

    PubMed Central

    Roux, Fabrice; Giancola, Sandra; Durand, Stéphanie; Reboud, Xavier

    2006-01-01

Various management strategies aim at maintaining pesticide resistance frequency under a threshold value by taking advantage of the benefit of the fitness penalty (the cost) expressed by the resistance allele outside the treated area or during the pesticide selection “off years.” One method to estimate a fitness cost is to analyze the resistance allele frequency along transects across treated and untreated areas. On the basis of the shape of the cline, this method gives the relative contributions of both gene flow and the fitness difference between genotypes in the treated and untreated areas. Taking advantage of the properties of such migration–selection balance, an artificial cline was built up to optimize the conditions where the fitness cost of two herbicide-resistant mutants (acetolactate synthase and auxin-induced target genes) in the model species Arabidopsis thaliana could be more accurately measured. The analysis of the microevolutionary dynamics in these experimental populations indicated mean fitness costs of ∼15 and 92% for the csr1-1 and axr2-1 resistances, respectively. In addition, negative frequency dependence for the fitness cost was also detected for the axr2-1 resistance. The advantages and disadvantages of the cline approach are discussed in regard to other methods of cost estimation. This comparison highlights the powerful ability of an experimental cline to measure low fitness costs and detect sensitivity to frequency-dependent variations. PMID:16582450

  6. The identification of solar wind waves at discrete frequencies and the role of the spectral analysis techniques

    NASA Astrophysics Data System (ADS)

    Di Matteo, S.; Villante, U.

    2017-05-01

The occurrence of waves at discrete frequencies in the solar wind (SW) parameters has been reported in the scientific literature with some controversial results, mostly concerning the existence (and stability) of favored sets of frequencies. On the other hand, the experimental results might be influenced by the analytical methods adopted for the spectral analysis. We focused attention on the fluctuations of the SW dynamic pressure (PSW) occurring in the leading edges of streams following interplanetary shocks and compared the results of the Welch method (WM) with those of the multitaper method (MTM). The results of a simulation analysis demonstrate that the identification of the wave occurrence and the frequency estimate might be strongly influenced by the signal characteristics and analytical methods, especially in the presence of multicomponent signals. In SW streams, PSW oscillations are routinely detected in the entire range f ≈ 1.2-5.0 mHz; nevertheless, the WM/MTM agreement in the identification and frequency estimate occurs in ≈50% of events, and different sets of favored frequencies would be proposed for the same set of events by the WM and MTM analyses. The histogram of the frequency distribution of the events identified by both methods suggests higher occurrence percentages in the ranges f ≈ 1.7-1.9 mHz, f ≈ 2.7-3.4 mHz, and f ≈ 3.9-4.4 mHz (with the most prominent peak at f ≈ 4.2 mHz). Extremely severe thresholds select a small number (14) of remarkable events, with a one-to-one correspondence between WM and MTM; interestingly, these events reveal a tendency for a favored occurrence in bins centered at f ≈ 2.9 and f ≈ 4.2 mHz.
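    The two estimators being compared can be run side by side on a synthetic record (a single tone in noise with made-up parameters; the authors' actual taper settings and SW data are not reproduced here). The Welch estimate averages windowed segments, while a basic multitaper estimate averages eigenspectra from DPSS tapers:

```python
import numpy as np
from scipy.signal import welch
from scipy.signal.windows import dpss

n = 2048
rng = np.random.default_rng(1)
t = np.arange(n)
x = np.sin(2 * np.pi * 0.2 * t) + 0.5 * rng.standard_normal(n)  # tone at 0.2 cycles/sample

# Welch method: averaged periodograms of overlapping segments
fw, Pw = welch(x, fs=1.0, nperseg=512)

# simple multitaper estimate: average of K eigenspectra (NW, K assumed here)
NW, K = 4, 7
tapers = dpss(n, NW, K)                       # shape (K, n), unit-energy tapers
freqs = np.fft.rfftfreq(n, d=1.0)
Pk = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
Pmt = Pk.mean(axis=0)
```

    Both spectra should peak at the tone, but their leakage and variance properties differ, which is the crux of the WM/MTM disagreements the abstract quantifies.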

  7. Gene Flow and the Measurement of Dispersal in Plant Populations.

    ERIC Educational Resources Information Center

    Nicholls, Marc S.

    1986-01-01

    Reviews methods of estimating pollen and seed dispersals and discusses the extent and frequency of gene exchange within and between populations. Offers suggestions for designing exercises suitable for estimating dispersal distances in natural plant populations. (ML)

  8. Study on the description method of upper limb's muscle force levels during simulated in-orbit operations

    NASA Astrophysics Data System (ADS)

    Zhao, Yan; Li, DongXu; Liu, ZhiZhen; Liu, Liang

    2013-03-01

The dexterous upper limb serves as the most important tool for astronauts to implement in-orbit experiments and operations. This study developed a simulated weightlessness experiment and invented new measuring equipment to quantitatively evaluate the muscle ability of the upper limb. Isometric maximum voluntary contractions (MVCs) and surface electromyography (sEMG) signals of right-handed pushing at three positions were measured for eleven subjects. To enhance the comprehensiveness and accuracy of muscle force assessment, the study focused on signal processing techniques. We applied a combination method consisting of time-, frequency-, and bi-frequency-domain analyses. Time- and frequency-domain analyses estimated the root mean square (RMS) and median frequency (MDF) of the sEMG signals, respectively. Higher order spectra (HOS) in the bi-frequency domain evaluated the maximum bispectrum amplitude (Bmax), Gaussianity level (Sg) and linearity level (Sl) of the sEMG signals. Results showed that Bmax, Sl, and RMS values all increased as force increased, while MDF and Sg values declined as force increased. The research demonstrated that the combination method is superior to the conventional time- and frequency-domain analyses: it not only described the sEMG signal amplitude and power spectrum, but also more deeply characterized phase-coupling information and the non-Gaussianity and non-linearity levels of the sEMG. The findings can aid ergonomists in estimating astronaut muscle performance, so as to optimize in-orbit operation efficacy and minimize musculoskeletal injuries.
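    The two conventional features named in the abstract, RMS and median frequency, have standard definitions that can be sketched directly (the 50 Hz sinusoid below is a stand-in for an sEMG epoch, not the study's data; the bispectral HOS features are not reproduced here):

```python
import numpy as np

def rms(x):
    """Time-domain amplitude measure: root mean square of the epoch."""
    return np.sqrt(np.mean(np.square(x)))

def median_frequency(x, fs):
    """Frequency below which half of the total spectral power lies (MDF)."""
    psd = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    cum = np.cumsum(psd)
    return freqs[np.searchsorted(cum, cum[-1] / 2.0)]

fs = 1000.0
t = np.arange(1000) / fs
emg = np.sin(2 * np.pi * 50.0 * t)    # synthetic stand-in for an sEMG epoch
```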

  9. Low-flow characteristics of Indiana streams

    USGS Publications Warehouse

    Stewart, J.A.

    1983-01-01

    Knowledge of low-flow data for Indiana streams is essential to the planners and developers of water resources for municipal, industrial, and recreational uses in the State. Low-flow data for 219 continuous-record gaging stations through the 1978 water year and for some stations since then are presented in tables and curves. Flow-duration and low-flow-frequency data were estimated or determined for continuous-record stations having more than 10 years of record. In addition, low-flow-frequency data were estimated for 248 partial-record stations. Methods for estimating these data are included in the report. (USGS)

  10. Research on Seismic Wave Attenuation in Gas Hydrates Layer Using Vertical Cable Seismic Data

    NASA Astrophysics Data System (ADS)

    Wang, Xiangchun; Liang, Lunhang; Wu, Zhongliang

    2018-06-01

Vertical cable seismic (VCS) data are currently the most suitable seismic data for estimating the quality factor (Q) of layers beneath the sea bottom. Here, Q values are estimated from VCS data using a high-precision logarithmic spectral ratio method and are applied to identify layers containing gas hydrates and free gas. The results show that Q becomes larger in layers with gas hydrates and smaller in layers with free gas, relative to layers containing neither. Additionally, the estimated Q values are used in inverse Q filtering to compensate for the attenuated high-frequency components of the seismic signal; the main frequency of the signal is raised, the frequency band is broadened, and the resolution of the VCS data is effectively improved.
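    The log spectral ratio method rests on the attenuation model A2(f) = A1(f)·exp(−π f Δt / Q), so the slope of ln(A2/A1) versus frequency yields Q. A minimal sketch with synthetic spectra (the frequency band, travel time, and Q value below are invented, not taken from the paper):

```python
import numpy as np

def estimate_q(spec1, spec2, freqs, dt):
    """Log spectral ratio: ln(A2/A1) = -(pi * dt / Q) * f, so Q = -pi*dt/slope."""
    y = np.log(spec2 / spec1)
    slope = np.polyfit(freqs, y, 1)[0]
    return -np.pi * dt / slope

# synthetic pair of amplitude spectra separated by dt seconds of travel in a Q=80 layer
freqs = np.linspace(10.0, 60.0, 51)   # Hz, assumed band
dt, q_true = 0.5, 80.0
spec_shallow = np.ones_like(freqs)
spec_deep = spec_shallow * np.exp(-np.pi * freqs * dt / q_true)
```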

  11. Methods for Estimating Magnitude and Frequency of Floods in Rural Basins in the Southeastern United States: South Carolina

    USGS Publications Warehouse

    Feaster, Toby D.; Gotvald, Anthony J.; Weaver, J. Curtis

    2009-01-01

    For more than 50 years, the U.S. Geological Survey (USGS) has been developing regional regression equations that can be used to estimate flood magnitude and frequency at ungaged sites. Flood magnitude relates to the volume of flow that occurs over some period of time and usually is presented in cubic feet per second. Flood frequency relates to the probability of occurrence of a flood; that is, on average, what is the likelihood that a flood with a specified magnitude will occur in any given year (1 percent chance, 10 percent chance, 50 percent chance, and so on). Such flood estimates are needed for the efficient design of bridges, highway embankments, levees, and other structures near streams. In addition, these estimates are needed for the effective planning and management of land and water resources, to protect lives and property in flood-prone areas, and to determine flood-insurance rates.

  12. Covariance-based direction-of-arrival estimation of wideband coherent chirp signals via sparse representation.

    PubMed

    Sha, Zhichao; Liu, Zhengmeng; Huang, Zhitao; Zhou, Yiyu

    2013-08-29

This paper addresses the problem of direction-of-arrival (DOA) estimation of multiple wideband coherent chirp signals, and a new method is proposed. The new method is based on signal component analysis of the array output covariance instead of the complicated time-frequency analysis used in the previous literature, and thus is more compact and effectively avoids possible signal energy loss during those processing stages. Moreover, a priori knowledge of the number of signals is no longer a necessity for DOA estimation in the new method. Simulation results demonstrate the performance superiority of the new method over previous ones.

  13. The MusIC method: a fast and quasi-optimal solution to the muscle forces estimation problem.

    PubMed

    Muller, A; Pontonnier, C; Dumont, G

    2018-02-01

The present paper aims at presenting a fast and quasi-optimal method of muscle forces estimation: the MusIC method. It consists of interpolating a first estimate from a database generated offline with a classical optimization problem, and then correcting it to respect the motion dynamics. Three different cost functions - two polynomial criteria and a min/max criterion - were tested on a planar musculoskeletal model. The MusIC method provides a computation frequency approximately 10 times higher than a classical optimization approach, with a relative mean error of 4% in cost function evaluation.

  14. A new frequency approach for light flicker evaluation in electric power systems

    NASA Astrophysics Data System (ADS)

    Feola, Luigi; Langella, Roberto; Testa, Alfredo

    2015-12-01

In this paper, a new analytical estimator of light flicker in the frequency domain is proposed; unlike the classical methods in the literature, it also takes into account the frequency components those methods neglect. The analytical solutions apply to any stationary signal affected by interharmonic distortion. The estimator is applied to numerous numerical case studies to show (i) the correctness and the improvements of the proposed analytical approach with respect to other methods in the literature, and (ii) the accuracy of the results compared with those obtained by the classical International Electrotechnical Commission (IEC) flickermeter. The usefulness of the proposed analytical approach is that it can be included in signal processing tools for interharmonic penetration studies supporting the integration of renewable energy sources into future smart grids.

  15. System and method for motor speed estimation of an electric motor

    DOEpatents

    Lu, Bin [Kenosha, WI; Yan, Ting [Brookfield, WI; Luebke, Charles John [Sussex, WI; Sharma, Santosh Kumar [Viman Nagar, IN

    2012-06-19

A system and method for a motor management system includes a computer readable storage medium and a processing unit. The processing unit is configured to determine a voltage value of a voltage input to an alternating current (AC) motor, determine a frequency value of at least one of a voltage input and a current input to the AC motor, determine a load value from the AC motor, and access a set of motor nameplate data, where the set of motor nameplate data includes a rated power, a rated speed, a rated frequency, and a rated voltage of the AC motor. The processing unit is also configured to estimate a motor speed based on the voltage value, the frequency value, the load value, and the set of nameplate data, and to store the motor speed on the computer readable storage medium.
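    The abstract does not disclose the patent's exact estimation formula. A common textbook heuristic for induction motors with the same inputs (nameplate speed and frequency, plus a load fraction) assumes slip is proportional to load; the 4-pole, 1760 rpm example values below are hypothetical:

```python
def estimate_motor_speed(load_frac, freq_hz, rated_speed, rated_freq, poles):
    """Sketch: slip assumed proportional to load (not the patented estimator).
    load_frac: measured load as a fraction of rated load."""
    sync = 120.0 * freq_hz / poles            # synchronous speed at measured frequency
    rated_sync = 120.0 * rated_freq / poles   # synchronous speed at rated frequency
    rated_slip = rated_sync - rated_speed     # slip (rpm) at rated load
    return sync - rated_slip * load_frac
```

    For example, a 4-pole motor rated 1760 rpm at 60 Hz has 40 rpm of rated slip, so at half load the heuristic gives 1780 rpm.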

  16. A Review of System Identification Methods Applied to Aircraft

    NASA Technical Reports Server (NTRS)

    Klein, V.

    1983-01-01

    Airplane identification, equation error method, maximum likelihood method, parameter estimation in frequency domain, extended Kalman filter, aircraft equations of motion, aerodynamic model equations, criteria for the selection of a parsimonious model, and online aircraft identification are addressed.

  17. Guidelines for determining flood flow frequency—Bulletin 17C

    USGS Publications Warehouse

    England, John F.; Cohn, Timothy A.; Faber, Beth A.; Stedinger, Jery R.; Thomas, Wilbert O.; Veilleux, Andrea G.; Kiang, Julie E.; Mason, Robert R.

    2018-03-29

Accurate estimates of flood frequency and magnitude are a key component of any effective nationwide flood risk management and flood damage abatement program. In addition to accuracy, methods for estimating flood risk must be uniformly and consistently applied because management of the Nation’s water and related land resources is a collaborative effort involving multiple actors including most levels of government and the private sector. Flood frequency guidelines have been published in the United States since 1967, and have undergone periodic revisions. In 1967, the U.S. Water Resources Council presented a coherent approach to flood frequency with Bulletin 15, “A Uniform Technique for Determining Flood Flow Frequencies.” The method it recommended involved fitting the log-Pearson Type III distribution to annual peak flow data by the method of moments. The first extension and update of Bulletin 15 was published in 1976 as Bulletin 17, “Guidelines for Determining Flood Flow Frequency” (Guidelines). It extended the Bulletin 15 procedures by introducing methods for dealing with outliers, historical flood information, and regional skew. Bulletin 17A was published the following year to clarify the computation of weighted skew. The next revision, Bulletin 17B, provided a host of improvements and new techniques designed to address situations that often arise in practice, including better methods for estimating and using regional skew, weighting station and regional skew, detection of outliers, and use of the conditional probability adjustment. The current version of these Guidelines is presented in this document, denoted Bulletin 17C. It incorporates changes motivated by four of the items listed as “Future Work” in Bulletin 17B and 30 years of post-17B research on flood processes and statistical methods.
The updates include: adoption of a generalized representation of flood data that allows for interval and censored data types; a new method, called the Expected Moments Algorithm, which extends the method of moments so that it can accommodate interval data; a generalized approach to identification of low outliers in flood data; and an improved method for computing confidence intervals. Federal agencies are requested to use these Guidelines in all planning activities involving water and related land resources. State, local, and private organizations are encouraged to use these Guidelines to assure uniformity in the flood frequency estimates that all agencies concerned with flood risk should use for Federal planning decisions. This revision is adopted with the knowledge and understanding that review of these procedures will be ongoing. Updated methods will be adopted when warranted by experience and by examination and testing of new techniques.
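    The core Bulletin 17 computation, fitting the log-Pearson Type III distribution to annual peaks by the method of moments, can be sketched as follows. This is the basic moments fit only, with hypothetical peak data; it omits the Expected Moments Algorithm, regional skew weighting, and outlier handling that the Guidelines add:

```python
import numpy as np
from scipy import stats

def lp3_quantile(peaks, aep):
    """Log-Pearson Type III quantile by the method of moments (basic sketch).
    aep: annual exceedance probability, e.g. 0.01 for the 100-year flood."""
    logq = np.log10(np.asarray(peaks, dtype=float))
    m = logq.mean()
    s = logq.std(ddof=1)                   # sample standard deviation of log flows
    g = stats.skew(logq, bias=False)       # station skew of log flows
    k = stats.pearson3.ppf(1.0 - aep, g)   # standardized frequency factor
    return 10 ** (m + k * s)
```

    With zero log-space skew the Pearson III factor reduces to the normal quantile, which gives a quick sanity check.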

  18. GONe: Software for estimating effective population size in species with generational overlap

    USGS Publications Warehouse

    Coombs, J.A.; Letcher, B.H.; Nislow, K.H.

    2012-01-01

GONe is a user-friendly, Windows-based program for estimating effective size (Ne) in populations with overlapping generations. It uses the Jorde-Ryman modification to the temporal method to account for age structure in populations. This method requires estimates of age-specific survival and birth rate and allele frequencies measured in two or more consecutive cohorts. Allele frequencies are acquired by reading in genotypic data from files formatted for either GENEPOP or TEMPOFS. For each interval between consecutive cohorts, Ne is estimated at each locus and over all loci. Furthermore, Ne estimates are output for three different genetic drift estimators (Fs, Fc and Fk). Confidence intervals are derived from a chi-square distribution with degrees of freedom equal to the number of independent alleles. GONe has been validated over a wide range of Ne values, and for scenarios where survival and birth rates differ between sexes, sex ratios are unequal and reproductive variances differ. GONe is freely available for download at. © 2011 Blackwell Publishing Ltd.

  19. Sampling western spruce budworm larvae by frequency of occurrence on lower crown branches.

    Treesearch

    R.R. Mason; R.C. Beckwith

    1990-01-01

    A sampling method was derived whereby budworm density can be estimated by the frequency of occurrence of larvae over a given threshold number instead of by direct counts on branch samples. The model used for converting frequencies to mean densities is appropriate for nonrandom as well as random distributions and, therefore, is applicable to all population densities of...
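    The simplest version of a frequency-to-density conversion assumes a Poisson count distribution on each branch, in which case the fraction of branches exceeding a threshold determines the mean density in closed form (for threshold 0, m = −ln(1 − f)). Note this random-distribution case is only illustrative; the paper's model also accommodates nonrandom distributions:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import poisson

def density_from_frequency(occupied_frac, threshold=0):
    """Invert P(count > threshold) = occupied_frac under a Poisson model (sketch)."""
    if threshold == 0:
        return -np.log(1.0 - occupied_frac)   # closed form: f = 1 - exp(-m)
    # for higher thresholds, solve numerically for the mean m
    return brentq(lambda m: poisson.sf(threshold, m) - occupied_frac, 1e-9, 100.0)
```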

  20. A novel cost-effective parallel narrowband ANC system with local secondary-path estimation

    NASA Astrophysics Data System (ADS)

    Delegà, Riccardo; Bernasconi, Giancarlo; Piroddi, Luigi

    2017-08-01

    Many noise reduction applications are targeted at multi-tonal disturbances. Active noise control (ANC) solutions for such problems are generally based on the combination of multiple adaptive notch filters. Both the performance and the computational cost are negatively affected by an increase in the number of controlled frequencies. In this work we study a different modeling approach for the secondary path, based on the estimation of various small local models in adjacent frequency subbands, that greatly reduces the impact of reference-filtering operations in the ANC algorithm. Furthermore, in combination with a frequency-specific step size tuning method it provides a balanced attenuation performance over the whole controlled frequency range (and particularly in the high end of the range). Finally, the use of small local models is greatly beneficial for the reactivity of the online secondary path modeling algorithm when the characteristics of the acoustic channels are time-varying. Several simulations are provided to illustrate the positive features of the proposed method compared to other well-known techniques.

  1. A Long-Term Comparison of GPS Carrierphase Frequency Transfer and Two-Way Satellite Time/Frequency Transfer

    DTIC Science & Technology

    2007-01-01

and frequency transfer (TWSTFT) were performed along three transatlantic links over the 6-month period 29 January – 31 July 2006. The GPSCPFT and... TWSTFT results were subtracted in order to estimate the combined uncertainty of the methods. The frequency values obtained from GPSCPFT and TWSTFT ...values were equal to or less than the frequency-stability values σy(GPSCPFT) – y(TWSTFT)(τ) (or TheoBR(τ)) computed for the corresponding averaging

  2. Methods for estimating magnitude and frequency of peak flows for small watersheds in Utah.

    DOT National Transportation Integrated Search

    2010-06-01

    Determining discharge in a stream is important to the design of culverts, bridges, and other structures pertaining to : transportation systems. Currently in Utah regression equations exist to estimate recurrence flood year discharges for : rural wate...

  3. Experimental measurements of lung resonant frequencies in a bottlenose dolphin (Tursiops truncatus) and white whale (Delphinapterus leucas)

    NASA Astrophysics Data System (ADS)

    Finneran, James J.

    2003-04-01

    An acoustic backscatter technique was used to estimate in vivo whole-lung resonant frequencies in a bottlenose dolphin (Tursiops truncatus) and a white whale (Delphinapterus leucas). Subjects were trained to submerge and position themselves near an underwater sound projector and a receiving hydrophone. Acoustic pressure measurements were made near the subjects' lungs while insonified with pure tones at frequencies from 16 to 100 Hz. Whole-lung resonant frequencies were estimated by comparing pressures measured near the subjects' lungs to those measured from the same location without the subject present. Experimentally measured resonant frequencies and damping ratios were much higher than those predicted using equivalent volume spherical air bubble models. The experimental technique, data analysis method, and discrepancy between the observed and predicted values will be discussed. The potential effects of depth on the resonance frequencies will also be discussed.
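    The "equivalent volume spherical air bubble" prediction that the measurements exceeded is the classical Minnaert resonance, f = (1/2πa)·√(3γP/ρ). A sketch of that prediction side (seawater density and hydrostatic pressure assumed; not the measured dolphin/whale values):

```python
import numpy as np

def minnaert_frequency(radius_m, depth_m=0.0, gamma=1.4, rho=1026.0, p_atm=101325.0):
    """Resonance (Hz) of a free spherical air bubble of equivalent volume (Minnaert)."""
    pressure = p_atm + rho * 9.81 * depth_m          # hydrostatic ambient pressure (Pa)
    return np.sqrt(3.0 * gamma * pressure / rho) / (2.0 * np.pi * radius_m)
```

    A 1 mm bubble near the surface resonates at roughly 3.2 kHz, and the predicted frequency rises with depth; the abstract reports that in vivo lung resonances sit well above such free-bubble predictions.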

  4. The excitation and characteristic frequency of the long-period volcanic event: An approach based on an inhomogeneous autoregressive model of a linear dynamic system

    USGS Publications Warehouse

    Nakano, M.; Kumagai, H.; Kumazawa, M.; Yamaoka, K.; Chouet, B.A.

    1998-01-01

We present a method to quantify the source excitation function and characteristic frequencies of long-period volcanic events. The method is based on an inhomogeneous autoregressive (AR) model of a linear dynamic system, in which the excitation is assumed to be a time-localized function applied at the beginning of the event. The tail of an exponentially decaying harmonic waveform is used to determine the characteristic complex frequencies of the event by the Sompi method. The excitation function is then derived by applying an AR filter, constructed from the characteristic frequencies, to the entire seismogram of the event, including the inhomogeneous part of the signal. We apply this method to three long-period events at Kusatsu-Shirane Volcano, central Japan, whose waveforms display simple decaying monochromatic oscillations except for the beginning of the events. We recover time-localized excitation functions lasting roughly 1 s at the start of each event and find that the estimated functions are very similar to each other at all the stations of the seismic network for each event. The phases of the characteristic oscillations referred to the estimated excitation function fall within a narrow range for almost all the stations. These results strongly suggest that the excitation and mode of oscillation are both dominated by volumetric change components. Each excitation function starts with a pronounced dilatation consistent with a sudden deflation of the volumetric source which may be interpreted in terms of a choked-flow transport mechanism. The frequency and Q of the characteristic oscillation both display a temporal evolution from event to event.
Assuming a crack filled with bubbly water as seismic source for these events, we apply the Van Wijngaarden-Papanicolaou model to estimate the acoustic properties of the bubbly liquid and find that the observed changes in the frequencies and Q are consistently explained by a temporal change in the radii of the bubbles characterizing the bubbly water in the crack.
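    The central idea, extracting a complex characteristic frequency (frequency and Q) from the tail of a decaying harmonic via an AR model, can be illustrated on a synthetic one-mode signal. This fits a plain AR(2) by least squares rather than the full Sompi method, and all signal parameters are invented:

```python
import numpy as np

fs, f0, q_true = 100.0, 5.0, 50.0
n = np.arange(400)
alpha = np.pi * f0 / q_true                        # decay rate (1/s) implied by Q
x = np.exp(-alpha * n / fs) * np.cos(2 * np.pi * f0 * n / fs)

# fit AR(2): x[n] = a1*x[n-1] + a2*x[n-2], by least squares on the decaying tail
A = np.column_stack([x[1:-1], x[:-2]])
a1, a2 = np.linalg.lstsq(A, x[2:], rcond=None)[0]

# the AR polynomial roots carry the complex characteristic frequency
z = np.roots([1.0, -a1, -a2])[0]
f_est = np.abs(np.angle(z)) * fs / (2 * np.pi)     # oscillation frequency (Hz)
q_est = np.abs(np.angle(z)) / (-2.0 * np.log(np.abs(z)))   # quality factor
```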

  5. Monopole and dipole estimation for multi-frequency sky maps by linear regression

    NASA Astrophysics Data System (ADS)

    Wehus, I. K.; Fuskeland, U.; Eriksen, H. K.; Banday, A. J.; Dickinson, C.; Ghosh, T.; Górski, K. M.; Lawrence, C. R.; Leahy, J. P.; Maino, D.; Reich, P.; Reich, W.

    2017-01-01

    We describe a simple but efficient method for deriving a consistent set of monopole and dipole corrections for multi-frequency sky map data sets, allowing robust parametric component separation with the same data set. The computational core of this method is linear regression between pairs of frequency maps, often called T-T plots. Individual contributions from monopole and dipole terms are determined by performing the regression locally in patches on the sky, while the degeneracy between different frequencies is lifted whenever the dominant foreground component exhibits a significant spatial spectral index variation. Based on this method, we present two different, but each internally consistent, sets of monopole and dipole coefficients for the nine-year WMAP, Planck 2013, SFD 100 μm, Haslam 408 MHz and Reich & Reich 1420 MHz maps. The two sets have been derived with different analysis assumptions and data selection, and provide an estimate of residual systematic uncertainties. In general, our values are in good agreement with previously published results. Among the most notable results are a relative dipole between the WMAP and Planck experiments of 10-15μK (depending on frequency), an estimate of the 408 MHz map monopole of 8.9 ± 1.3 K, and a non-zero dipole in the 1420 MHz map of 0.15 ± 0.03 K pointing towards Galactic coordinates (l,b) = (308°,-36°) ± 14°. These values represent the sum of any instrumental and data processing offsets, as well as any Galactic or extra-Galactic component that is spectrally uniform over the full sky.
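    The computational core described here, a T-T plot, is ordinary linear regression between two frequency maps pixel by pixel: the slope absorbs the foreground scaling and the intercept the relative monopole. A toy version with a synthetic common template and invented offsets (real analyses work on sky patches and must break frequency degeneracies, as the abstract notes):

```python
import numpy as np

rng = np.random.default_rng(2)
sky = rng.lognormal(mean=1.0, sigma=0.8, size=5000)  # shared foreground template (synthetic)
map1 = sky + 0.3                                     # frequency 1: monopole offset +0.3
map2 = 2.0 * sky - 1.1                               # frequency 2: scaling 2.0, offset -1.1

# T-T plot: regress map2 against map1 over the common pixels
slope, intercept = np.polyfit(map1, map2, 1)
# intercept = (-1.1) - slope * 0.3 encodes the relative monopole between the maps
```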

  6. Uncertainties of flood frequency estimation approaches based on continuous simulation using data resampling

    NASA Astrophysics Data System (ADS)

    Arnaud, Patrick; Cantet, Philippe; Odry, Jean

    2017-11-01

    Flood frequency analyses (FFAs) are needed for flood risk management. Many methods exist ranging from classical purely statistical approaches to more complex approaches based on process simulation. The results of these methods are associated with uncertainties that are sometimes difficult to estimate due to the complexity of the approaches or the number of parameters, especially for process simulation. This is the case of the simulation-based FFA approach called SHYREG presented in this paper, in which a rainfall generator is coupled with a simple rainfall-runoff model in an attempt to estimate the uncertainties due to the estimation of the seven parameters needed to estimate flood frequencies. The six parameters of the rainfall generator are mean values, so their theoretical distribution is known and can be used to estimate the generator uncertainties. In contrast, the theoretical distribution of the single hydrological model parameter is unknown; consequently, a bootstrap method is applied to estimate the calibration uncertainties. The propagation of uncertainty from the rainfall generator to the hydrological model is also taken into account. This method is applied to 1112 basins throughout France. Uncertainties coming from the SHYREG method and from purely statistical approaches are compared, and the results are discussed according to the length of the recorded observations, basin size and basin location. Uncertainties of the SHYREG method decrease as the basin size increases or as the length of the recorded flow increases. Moreover, the results show that the confidence intervals of the SHYREG method are relatively small despite the complexity of the method and the number of parameters (seven). This is due to the stability of the parameters and takes into account the dependence of uncertainties due to the rainfall model and the hydrological calibration. 
Indeed, the uncertainties on the flow quantiles are on the same order of magnitude as those associated with the use of a statistical law with two parameters (here generalised extreme value Type I distribution) and clearly lower than those associated with the use of a three-parameter law (here generalised extreme value Type II distribution). For extreme flood quantiles, the uncertainties are mostly due to the rainfall generator because of the progressive saturation of the hydrological model.
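    The bootstrap step used here for the hydrological parameter has a generic form: refit the quantile estimator on resampled records and read confidence limits from the resulting distribution. A minimal sketch with a two-parameter GEV Type I (Gumbel) fit by moments on synthetic annual maxima (SHYREG itself couples a rainfall generator with a rainfall-runoff model, which is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(3)
peaks = rng.gumbel(loc=100.0, scale=30.0, size=60)   # synthetic annual maxima

def q100_gumbel(sample):
    """Gumbel (GEV Type I) 1% AEP quantile by the method of moments."""
    scale = np.sqrt(6.0) * np.std(sample, ddof=1) / np.pi
    loc = np.mean(sample) - 0.5772 * scale
    return loc - scale * np.log(-np.log(0.99))

# bootstrap: refit on resampled records, take percentile confidence limits
boot = np.array([q100_gumbel(rng.choice(peaks, size=len(peaks), replace=True))
                 for _ in range(2000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
```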

  7. Analysis of Cellular DNA Content by Flow Cytometry.

    PubMed

    Darzynkiewicz, Zbigniew; Huang, Xuan; Zhao, Hong

    2017-10-02

    Cellular DNA content can be measured by flow cytometry with the aim of: (1) revealing cell distribution within the major phases of the cell cycle, (2) estimating frequency of apoptotic cells with fractional DNA content, and/or (3) disclosing DNA ploidy of the measured cell population. In this unit, simple and universally applicable methods for staining fixed cells are presented, as are methods that utilize detergents and/or proteolytic treatment to permeabilize cells and make DNA accessible to fluorochrome. Additionally, supravital cell staining with Hoechst 33342, which is primarily used for sorting live cells based on DNA-content differences for their subsequent culturing, is described. Also presented are methods for staining cell nuclei isolated from paraffin-embedded tissues. Available algorithms are listed for deconvolution of DNA-content-frequency histograms to estimate the percentage of cells in the major phases of the cell cycle and the frequency of apoptotic cells with fractional DNA content. © 2017 by John Wiley & Sons, Inc.
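
    A minimal sketch of how phase fractions can be read from a DNA-content histogram (simple gating on synthetic data; the deconvolution algorithms the unit lists fit the overlapping G1/S/G2 distributions far more carefully):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate DNA-content measurements (arbitrary units: G1 ~ 1.0, G2/M ~ 2.0,
# S phase spread between them), mimicking a propidium-iodide histogram.
n = 10000
g1 = rng.normal(1.0, 0.03, int(n * 0.60))
s = rng.uniform(1.15, 1.85, int(n * 0.25))
g2 = rng.normal(2.0, 0.06, int(n * 0.15))
dna = np.concatenate([g1, s, g2])

# Crude gating stand-in for histogram deconvolution: events within +/- 3 CV
# of each peak mean are G1 or G2/M, the rest are counted as S phase.
def phase_fractions(x, g1_peak=1.0, g2_peak=2.0, cv=0.03):
    in_g1 = np.abs(x - g1_peak) < 3 * cv * g1_peak
    in_g2 = np.abs(x - g2_peak) < 3 * cv * g2_peak
    total = float(x.size)
    return in_g1.sum() / total, (~in_g1 & ~in_g2).sum() / total, in_g2.sum() / total

p_g1, p_s, p_g2 = phase_fractions(dna)
```
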

  9. Region-specific S-wave attenuation for earthquakes in northwestern Iran

    NASA Astrophysics Data System (ADS)

    Heidari, Reza; Mirzaei, Noorbakhsh

    2017-11-01

    In this study, the continuous wavelet transform (CWT) is applied to estimate the frequency-dependent quality factor of shear waves, Q_S, in northwestern Iran. The dataset used in this study includes velocigrams of more than 50 events with magnitudes between 4.0 and 6.5 that occurred in the study area. The CWT-based method provides a high-resolution technique for the estimation of S-wave frequency-dependent attenuation. The quality factor values are determined in the form of a power law as Q_S(f) = (147 ± 16)f^(0.71 ± 0.02) and Q_S(f) = (126 ± 12)f^(0.73 ± 0.02) for the vertical and horizontal components, respectively, where f is between 0.9 and 12 Hz. Furthermore, in order to verify the reliability of the suggested Q_S estimator, an additional test is performed using accelerograms of the Ahar-Varzaghan dual earthquakes of August 11, 2012, of moment magnitudes 6.4 and 6.3, and their aftershocks. Results indicate that the estimated Q_S values from the CWT-based method are not very sensitive to the number and type of waveforms used (velocity or acceleration).
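
    A power law Q_S(f) = Q0·f^n is linear in log-log space, so both parameters can be recovered by ordinary least squares on the logs; the sketch below uses synthetic values loosely inspired by the abstract, not the authors' data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic frequency-dependent quality factors following Q_S(f) = Q0 * f^n.
f = np.linspace(0.9, 12.0, 30)
Q0_true, n_true = 147.0, 0.71
Q = Q0_true * f**n_true * rng.lognormal(0.0, 0.02, f.size)

# log Q = log Q0 + n log f, so a degree-1 polyfit on logs gives n and Q0.
n_est, logQ0_est = np.polyfit(np.log(f), np.log(Q), 1)
Q0_est = np.exp(logQ0_est)
```
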

  10. Multiple linear regression to estimate time-frequency electrophysiological responses in single trials

    PubMed Central

    Hu, L.; Zhang, Z.G.; Mouraux, A.; Iannetti, G.D.

    2015-01-01

    Transient sensory, motor or cognitive events elicit not only phase-locked event-related potentials (ERPs) in the ongoing electroencephalogram (EEG), but also induce non-phase-locked modulations of ongoing EEG oscillations. These modulations can be detected when single-trial waveforms are analysed in the time-frequency domain, and consist of stimulus-induced decreases (event-related desynchronization, ERD) or increases (event-related synchronization, ERS) of synchrony in the activity of the underlying neuronal populations. ERD and ERS reflect changes in the parameters that control oscillations in neuronal networks and, depending on the frequency at which they occur, represent neuronal mechanisms involved in cortical activation, inhibition and binding. ERD and ERS are commonly estimated by averaging the time-frequency decomposition of single trials. However, their trial-to-trial variability, which can reflect physiologically important information, is lost by across-trial averaging. Here, we aim to (1) develop novel approaches to explore single-trial parameters (including latency, frequency and magnitude) of ERP/ERD/ERS; (2) disclose the relationship between estimated single-trial parameters and other experimental factors (e.g., perceived intensity). We found that (1) stimulus-elicited ERP/ERD/ERS can be correctly separated using principal component analysis (PCA) decomposition with Varimax rotation on the single-trial time-frequency distributions; (2) time-frequency multiple linear regression with dispersion term (TF-MLRd) enhances the signal-to-noise ratio of ERP/ERD/ERS in single trials, and provides an unbiased estimation of their latency, frequency, and magnitude at the single-trial level; (3) these estimates can be meaningfully correlated with each other and with other experimental factors at the single-trial level (e.g., perceived stimulus intensity and ERP magnitude).
The methods described in this article allow exploring fully non-phase-locked stimulus-induced cortical oscillations, yielding single-trial estimates of response latency, frequency, and magnitude. This permits within-subject statistical comparisons, correlation with pre-stimulus features, and integration of simultaneously-recorded EEG and fMRI. PMID:25665966
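
    The core idea of regression-based single-trial estimation can be sketched by regressing an across-trial template onto each trial; this is a heavy simplification of TF-MLRd (time domain only, synthetic data, no dispersion term):

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulate 100 single trials: a common waveform whose amplitude varies
# trial by trial, buried in noise (a stand-in for an ERP component).
t = np.linspace(0, 1, 200)
waveform = np.exp(-((t - 0.3) ** 2) / 0.005)      # Gaussian "component"
true_amp = rng.uniform(0.5, 2.0, 100)
trials = true_amp[:, None] * waveform + rng.normal(0, 0.3, (100, t.size))

# Regression-based single-trial estimation: use the across-trial average
# as the design regressor and solve least squares per trial.
template = trials.mean(axis=0)
est_amp = trials @ template / (template @ template)

# Single-trial estimates correlate with the true amplitudes.
r = np.corrcoef(true_amp, est_amp)[0, 1]
```
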

  11. Speech Enhancement Using Gaussian Scale Mixture Models

    PubMed Central

    Hao, Jiucang; Lee, Te-Won; Sejnowski, Terrence J.

    2011-01-01

    This paper presents a novel probabilistic approach to speech enhancement. Instead of a deterministic logarithmic relationship, we assume a probabilistic relationship between the frequency coefficients and the log-spectra. The speech model in the log-spectral domain is a Gaussian mixture model (GMM). The frequency coefficients obey a zero-mean Gaussian whose covariance equals the exponential of the log-spectra. This results in a Gaussian scale mixture model (GSMM) for the speech signal in the frequency domain, since the log-spectra can be regarded as scaling factors. The probabilistic relation between frequency coefficients and log-spectra allows these to be treated as two random variables, both to be estimated from the noisy signals. Expectation-maximization (EM) was used to train the GSMM and Bayesian inference was used to compute the posterior signal distribution. Because exact inference of this full probabilistic model is computationally intractable, we developed two approaches to enhance the efficiency: the Laplace method and a variational approximation. The proposed methods were applied to enhance speech corrupted by Gaussian noise and speech-shaped noise (SSN). For both approximations, signals reconstructed from the estimated frequency coefficients provided higher signal-to-noise ratio (SNR) and those reconstructed from the estimated log-spectra produced lower word recognition error rate because the log-spectra fit the inputs to the recognizer better. Our algorithms effectively reduced the SSN, which algorithms based on spectral analysis were not able to suppress. PMID:21359139

  12. A frequency domain global parameter estimation method for multiple reference frequency response measurements

    NASA Astrophysics Data System (ADS)

    Shih, C. Y.; Tsuei, Y. G.; Allemang, R. J.; Brown, D. L.

    1988-10-01

    A method of using the matrix Auto-Regressive Moving Average (ARMA) model in the Laplace domain for multiple-reference global parameter identification is presented. This method is particularly applicable to the area of modal analysis where high modal density exists. The method is also applicable when multiple reference frequency response functions are used to characterise linear systems. In order to facilitate the mathematical solution, the Forsythe orthogonal polynomial is used to reduce the ill-conditioning of the formulated equations and to decouple the normal matrix into two reduced matrix blocks. A Complex Mode Indicator Function (CMIF) is introduced, which can be used to determine the proper order of the rational polynomials.
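
    A Complex Mode Indicator Function can be sketched as the singular values of the multiple-reference FRF matrix at each spectral line; peaks of the first singular value indicate modes, even under high modal density. The modal parameters below are hypothetical:

```python
import numpy as np

# Synthetic multi-reference FRF matrix H(omega) for a system with two
# lightly damped modes (hypothetical parameters, 3 outputs x 2 references).
freqs = np.linspace(1.0, 30.0, 600)          # Hz
modes = [(8.0, 0.02, np.array([1.0, 0.6, -0.4]), np.array([1.0, 0.5])),
         (21.0, 0.02, np.array([0.3, -1.0, 0.8]), np.array([0.4, 1.0]))]

def frf_matrix(f_hz):
    w = 2 * np.pi * f_hz
    H = np.zeros((3, 2), dtype=complex)
    for fn, zeta, phi, L in modes:           # residue form, mode by mode
        wn = 2 * np.pi * fn
        H += np.outer(phi, L) / (wn**2 - w**2 + 2j * zeta * wn * w)
    return H

# CMIF: singular values of H(omega) at every spectral line.
cmif = np.array([np.linalg.svd(frf_matrix(f), compute_uv=False) for f in freqs])
peak_f = freqs[np.argmax(cmif[:, 0])]        # strongest resonance
```
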

  13. Research on natural frequency based on modal test for high speed vehicles

    NASA Astrophysics Data System (ADS)

    Ma, Guangsong; He, Guanglin; Guo, Yachao

    2018-04-01

    A high speed vehicle is a vibration system, and resonance excited in flight may be harmful to it. It is possible to avoid the resonance problem by measuring the natural frequencies of the high speed vehicle and then taking measures to avoid exciting them. Therefore, in this paper, the modal test of the high speed vehicle was carried out by using the running hammer method and the PolyMAX modal parameter identification method. Firstly, the total frequency response function and coherence function of the high speed vehicle are obtained from the running hammer excitation test, and the modal assurance criterion (MAC) is used to determine the accuracy of the estimated parameters. Secondly, the first three natural frequencies and the pole stabilization diagram of the high speed vehicle are obtained by the PolyMAX modal parameter identification method. Finally, the natural frequencies of the vibration system were accurately obtained by the running hammer method.

  14. Blade frequency program for nonuniform helicopter rotors, with automated frequency search

    NASA Technical Reports Server (NTRS)

    Sadler, S. G.

    1972-01-01

    A computer program for determining the natural frequencies and normal modes of a lumped parameter model of a rotating, twisted beam with nonuniform mass and elastic properties was developed. The program is used to solve the conditions existing in a helicopter rotor, where the outboard end of the rotor has zero forces and moments. Three frequency search methods have been implemented, including an automatic search technique that allows the program to find up to the fifteen lowest natural frequencies without requiring input estimates of these frequencies.
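
    The eigenvalue problem behind such a lumped parameter model can be sketched for a greatly simplified case: a uniform, non-rotating, untwisted chain of masses fixed at the root and free at the tip. In this symmetric special case no frequency search is needed, since all frequencies come from one eigendecomposition:

```python
import numpy as np

# Lumped-parameter stand-in for a beam: n masses connected by springs,
# fixed at the root and free at the outboard end (cantilever-like).
n = 20
m, k = 1.0, 1000.0                        # uniform mass [kg], stiffness [N/m]
K = np.zeros((n, n))
for i in range(n):
    K[i, i] = 2 * k if i < n - 1 else k   # free outboard end has one spring
    if i > 0:
        K[i, i - 1] = K[i - 1, i] = -k

# With M = m*I, the eigenvalues of K/m are omega^2, already sorted ascending.
omega = np.sqrt(np.linalg.eigvalsh(K / m))
f_hz = omega / (2 * np.pi)
lowest15 = f_hz[:15]                      # the fifteen lowest frequencies
```
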

  15. Direct Iterative Nonlinear Inversion by Multi-frequency T-matrix Completion

    NASA Astrophysics Data System (ADS)

    Jakobsen, M.; Wu, R. S.

    2016-12-01

    Researchers in the mathematical physics community have recently proposed a conceptually new method for solving nonlinear inverse scattering problems (like FWI) which is inspired by the theory of nonlocality of physical interactions. The conceptually new method, which may be referred to as the T-matrix completion method, is very interesting since it is not based on linearization at any stage. Also, there are no gradient vectors or (inverse) Hessian matrices to calculate. However, the convergence radius of this promising T-matrix completion method is seriously restricted by its use of single-frequency scattering data only. In this study, we have developed a modified version of the T-matrix completion method which we believe is more suitable for applications to nonlinear inverse scattering problems in (exploration) seismology, because it makes use of multi-frequency data. Essentially, we have simplified the single-frequency T-matrix completion method of Levinson and Markel and combined it with the standard sequential frequency inversion (multi-scale regularization) method. For each frequency, we first estimate the experimental T-matrix by using the Moore-Penrose pseudo-inverse concept. Then this experimental T-matrix is used to initiate an iterative procedure for successive estimation of the scattering potential and the T-matrix using the Lippmann-Schwinger equation for the nonlinear relation between these two quantities. The main physical requirements in the basic iterative cycle are that the T-matrix should be data-compatible and the scattering potential operator should be dominantly local, although a non-local scattering potential operator is allowed in the intermediate iterations. In our simplified T-matrix completion strategy, we ensure that the T-matrix updates are always data-compatible simply by adding a suitable correction term in the real-space coordinate representation.
The use of singular-value decomposition representations is not required in our formulation since we have developed an efficient domain decomposition method. The results of several numerical experiments for the SEG/EAGE salt model illustrate the importance of using multi-frequency data when performing frequency-domain full waveform inversion in strongly scattering media via the new concept of T-matrix completion.
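
    The successive estimation of the T-matrix from the Lippmann-Schwinger relation can be sketched on a toy discrete system, with small random matrices standing in for the scattering-potential and Green operators (an illustration of the fixed-point structure, not the paper's completion algorithm):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy discrete Lippmann-Schwinger relation T = V + V @ G0 @ T.
N = 8
V = 0.1 * rng.standard_normal((N, N))         # weak scattering potential
G0 = 0.5 * rng.standard_normal((N, N)) / N    # reference-medium Green function

# Exact solution for reference: T = (I - V G0)^(-1) V.
T_exact = np.linalg.solve(np.eye(N) - V @ G0, V)

# Successive estimation: start from the Born term and iterate until the
# T-matrix is self-consistent (converges because the potential is weak).
T = V.copy()
for _ in range(200):
    T = V + V @ G0 @ T

err = np.linalg.norm(T - T_exact) / np.linalg.norm(T_exact)
```
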

  16. A new method for finding and characterizing galaxy groups via low-frequency radio surveys

    NASA Astrophysics Data System (ADS)

    Croston, J. H.; Ineson, J.; Hardcastle, M. J.; Mingo, B.

    2017-09-01

    We describe a new method for identifying and characterizing the thermodynamic state of large samples of evolved galaxy groups at high redshifts using high-resolution, low-frequency radio surveys, such as those that will be carried out with LOFAR and the Square Kilometre Array. We identify a sub-population of morphologically regular powerful [Fanaroff-Riley type II (FR II)] radio galaxies and demonstrate that, for this sub-population, the internal pressure of the radio lobes is a reliable tracer of the external intragroup/intracluster medium (ICM) pressure, and that the assumption of a universal pressure profile for relaxed groups enables the total mass and X-ray luminosity to be estimated. Using a sample of well-studied FR II radio galaxies, we demonstrate that our method enables the estimation of group/cluster X-ray luminosities over three orders of magnitude in luminosity to within a factor of ˜2 from low-frequency radio properties alone. Our method could provide a powerful new tool for building samples of thousands of evolved galaxy groups at z > 1 and characterizing their ICM.

  17. Loss Factor Estimation Using the Impulse Response Decay Method on a Stiffened Structure

    NASA Technical Reports Server (NTRS)

    Cabell, Randolph; Schiller, Noah; Allen, Albert; Moeller, Mark

    2009-01-01

    High-frequency vibroacoustic modeling is typically performed using energy-based techniques such as Statistical Energy Analysis (SEA). Energy models require an estimate of the internal damping loss factor. Unfortunately, the loss factor is difficult to estimate analytically, and experimental methods such as the power injection method can require extensive measurements over the structure of interest. This paper discusses the implications of estimating damping loss factors using the impulse response decay method (IRDM) from a limited set of response measurements. An automated procedure for implementing IRDM is described and then evaluated using data from a finite element model of a stiffened, curved panel. Estimated loss factors are compared with loss factors computed using a power injection method and a manual curve fit. The paper discusses the sensitivity of the IRDM loss factor estimates to damping of connected subsystems and the number and location of points in the measurement ensemble.
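
    The impulse response decay method can be sketched on a synthetic single-mode response: the envelope decays as exp(-ηωt/2), so the loss factor η follows from the slope of the log envelope (a toy case; the paper's automated procedure works on band-filtered multi-mode measurements):

```python
import numpy as np

# Synthetic impulse response of one mode at f0 with loss factor eta.
fs = 10000.0
t = np.arange(0, 1.0, 1.0 / fs)
f0, eta_true = 200.0, 0.02
w0 = 2 * np.pi * f0
h = np.exp(-eta_true * w0 * t / 2.0) * np.sin(w0 * t)

# Envelope from per-cycle peaks, then a linear fit of log amplitude vs time.
n_per_cycle = int(round(fs / f0))
frames = h[: (h.size // n_per_cycle) * n_per_cycle].reshape(-1, n_per_cycle)
peaks = frames.max(axis=1)
t_pk = (np.arange(peaks.size) + 0.5) * n_per_cycle / fs
slope, _ = np.polyfit(t_pk, np.log(peaks), 1)
eta_est = -2.0 * slope / w0           # invert exp(-eta*w0*t/2) decay law
```
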

  18. Resonance frequency control of RF normal conducting cavity using gradient estimator of reflected power

    NASA Astrophysics Data System (ADS)

    Leewe, R.; Shahriari, Z.; Moallem, M.

    2017-10-01

    Control of the natural resonance frequency of an RF cavity is essential for accelerator structures due to their high sensitivity to internal and external vibrations and the dependency of resonant frequency on temperature changes. Due to the relatively high radio frequencies involved (MHz to GHz), direct measurement of the resonant frequency for real-time control is not possible using conventional microcontroller hardware. So far, all operational cavities are tuned using phase comparison techniques. The temperature-dependent phase measurements render this technique labor and time intensive. To eliminate the phase measurement, reduce man hours and speed up cavity start-up time, this paper presents a control scheme that relies solely on the reflected power measurement. The control algorithm for the nonlinear system is developed through Lyapunov's method. The controller stabilizes the resonance frequency of the cavity using a nonlinear control algorithm in combination with a gradient estimation method. Experimental results of the proposed system on a test cavity show that the resonance frequency can be tuned to its optimum operating point while the start-up time of a single cavity and the accompanying man hours are significantly decreased. A test result of the fully commissioned control system on one of TRIUMF's DTL tanks verifies its performance under real environmental conditions.
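
    The gradient-estimation idea can be sketched with a finite-difference estimator driving a descent on a quadratic stand-in for reflected power versus detuning (an illustration only, not the paper's Lyapunov-based controller; the response function and gains are assumptions):

```python
# Near resonance the reflected power is roughly quadratic in the detuning x.
def reflected_power(x):
    return 0.8 * x**2 + 0.001          # hypothetical cavity response + floor

# Finite-difference gradient estimate from reflected power alone, driving
# a simple descent loop toward the minimum-reflection operating point.
x = 1.5                                 # initial detuning (arbitrary units)
delta, gain = 1e-3, 0.5
for _ in range(100):
    grad = (reflected_power(x + delta) - reflected_power(x - delta)) / (2 * delta)
    x -= gain * grad

final_power = reflected_power(x)        # should sit at the power floor
```
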

  19. Maximum-Likelihood Estimation for Frequency-Modulated Continuous-Wave Laser Ranging using Photon-Counting Detectors

    DTIC Science & Technology

    2013-03-21

    instruments where frequency estimates are calculated from coherently detected fields, e.g., coherent Doppler LIDAR. Our CRB results reveal that the best...wave coherent lidar using an optical field correlation detection method," Opt. Rev. 5, 310–314 (1998). 8. H. P. Yuen and V. W. S. Chan, "Noise in...2170–2180 (2007). 13. T. J. Karr, "Atmospheric phase error in coherent laser radar," IEEE Trans. Antennas Propag. 55, 1122–1133 (2007). 14. Throughout

  20. Weighted network analysis of high-frequency cross-correlation measures

    NASA Astrophysics Data System (ADS)

    Iori, Giulia; Precup, Ovidiu V.

    2007-03-01

    In this paper we implement a Fourier method to estimate high-frequency correlation matrices from small data sets. The Fourier estimates are shown to be considerably less noisy than the standard Pearson correlation measures and thus capable of detecting subtle changes in correlation matrices with just a month of data. The evolution of correlation at different time scales is analyzed from the full correlation matrix and its minimum spanning tree representation. The analysis is performed by implementing measures from the theory of random weighted networks.
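
    The flavour of a Fourier-based correlation estimate can be sketched by projecting two correlated increment series onto a truncated set of Fourier modes and correlating the coefficients; this is a crude simplification (the Fourier estimator of Malliavin-Mancino type also handles asynchronous ticks, which this sketch does not):

```python
import numpy as np

rng = np.random.default_rng(5)

# Two correlated synthetic "return" series with true correlation 0.6.
n, rho = 2000, 0.6
z = rng.standard_normal((2, n))
dx = z[0]
dy = rho * z[0] + np.sqrt(1 - rho**2) * z[1]

# Project the increments on the first N Fourier modes and correlate the
# coefficients; truncation acts as a low-pass filter on the estimate.
N = 200
tt = np.linspace(0, 2 * np.pi, n, endpoint=False)
modes = np.exp(-1j * np.outer(np.arange(1, N + 1), tt))   # shape (N, n)
cx, cy = modes @ dx, modes @ dy
cov = np.real(np.sum(cx * np.conj(cy)))
corr = cov / np.sqrt(np.real(np.sum(cx * np.conj(cx))) *
                     np.real(np.sum(cy * np.conj(cy))))
```
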

  1. GNSS Ephemeris with Graceful Degradation and Measurement Fusion

    NASA Technical Reports Server (NTRS)

    Garrison, James Levi (Inventor); Walker, Michael Allen (Inventor)

    2015-01-01

    A method for providing an extended propagation ephemeris model for a satellite in Earth orbit, the method includes obtaining a satellite's orbital position over a first period of time, applying a least square estimation filter to determine coefficients defining osculating Keplerian orbital elements and harmonic perturbation parameters associated with a coordinate system defining an extended propagation ephemeris model that can be used to estimate the satellite's position during the first period, wherein the osculating Keplerian orbital elements include semi-major axis of the satellite (a), eccentricity of the satellite (e), inclination of the satellite (i), right ascension of ascending node of the satellite (Ω), true anomaly (θ*), and argument of periapsis (ω), applying the least square estimation filter to determine a dominant frequency of the true anomaly, and applying a Fourier transform to determine dominant frequencies of the harmonic perturbation parameters.
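
    The dominant-frequency step can be sketched with an FFT peak search on a synthetic periodic perturbation series standing in for an osculating-element time history (the sampling rate and signal are assumptions):

```python
import numpy as np

# Synthetic periodic perturbation series sampled uniformly in time.
fs = 100.0                          # samples per (arbitrary) time unit
t = np.arange(0, 20.0, 1.0 / fs)
f_true = 3.2
x = 1.0 + 0.4 * np.sin(2 * np.pi * f_true * t + 0.3)

# Remove the mean, take the one-sided FFT, and pick the strongest bin.
spec = np.abs(np.fft.rfft(x - x.mean()))
freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
f_dom = freqs[np.argmax(spec)]      # dominant frequency estimate
```
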

  2. Projection-based estimation and nonuniformity correction of sensitivity profiles in phased-array surface coils.

    PubMed

    Yun, Sungdae; Kyriakos, Walid E; Chung, Jun-Young; Han, Yeji; Yoo, Seung-Schik; Park, Hyunwook

    2007-03-01

    To develop a novel approach for calculating the accurate sensitivity profiles of phased-array coils, resulting in correction of nonuniform intensity in parallel MRI. The proposed intensity-correction method estimates the accurate sensitivity profile of each channel of the phased-array coil. The sensitivity profile is estimated by fitting a nonlinear curve to every projection view through the imaged object. The nonlinear curve-fitting efficiently obtains the low-frequency sensitivity profile by eliminating the high-frequency image contents. Filtered back-projection (FBP) is then used to compute the estimates of the sensitivity profile of each channel. The method was applied to both phantom and brain images acquired from the phased-array coil. Intensity-corrected images from the proposed method had more uniform intensity than those obtained by the commonly used sum-of-squares (SOS) approach. With the use of the proposed correction method, the intensity variation was reduced to 6.1% from 13.1% of the SOS. When the proposed approach was applied to the computation of the sensitivity maps during sensitivity encoding (SENSE) reconstruction, it outperformed the SOS approach in terms of the reconstructed image uniformity. The proposed method is more effective at correcting the intensity nonuniformity of phased-array surface-coil images than the conventional SOS method. In addition, the method was shown to be resilient to noise and was successfully applied for image reconstruction in parallel imaging.

  3. Methods for estimating flood frequency in Montana based on data through water year 1998

    USGS Publications Warehouse

    Parrett, Charles; Johnson, Dave R.

    2004-01-01

    Annual peak discharges having recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years (T-year floods) were determined for 660 gaged sites in Montana and in adjacent areas of Idaho, Wyoming, and Canada, based on data through water year 1998. The updated flood-frequency information was subsequently used in regression analyses, either ordinary or generalized least squares, to develop equations relating T-year floods to various basin and climatic characteristics, equations relating T-year floods to active-channel width, and equations relating T-year floods to bankfull width. The equations can be used to estimate flood frequency at ungaged sites. Montana was divided into eight regions, within which flood characteristics were considered to be reasonably homogeneous, and the three sets of regression equations were developed for each region. A measure of the overall reliability of the regression equations is the average standard error of prediction. The average standard errors of prediction for the equations based on basin and climatic characteristics ranged from 37.4 percent to 134.1 percent. Average standard errors of prediction for the equations based on active-channel width ranged from 57.2 percent to 141.3 percent. Average standard errors of prediction for the equations based on bankfull width ranged from 63.1 percent to 155.5 percent. In most regions, the equations based on basin and climatic characteristics generally had smaller average standard errors of prediction than equations based on active-channel or bankfull width. An exception was the Southeast Plains Region, where all equations based on active-channel width had smaller average standard errors of prediction than equations based on basin and climatic characteristics or bankfull width. Methods for weighting estimates derived from the basin- and climatic-characteristic equations and the channel-width equations also were developed. 
The weights were based on the cross correlation of residuals from the different methods and the average standard errors of prediction. When all three methods were combined, the average standard errors of prediction ranged from 37.4 percent to 120.2 percent. Weighting of estimates reduced the standard errors of prediction for all T-year flood estimates in four regions, reduced the standard errors of prediction for some T-year flood estimates in two regions, and provided no reduction in average standard error of prediction in two regions. A computer program for solving the regression equations, weighting estimates, and determining reliability of individual estimates was developed and placed on the USGS Montana District World Wide Web page. A new regression method, termed Region of Influence regression, also was tested. Test results indicated that the Region of Influence method was not as reliable as the regional equations based on generalized least squares regression. Two additional methods for estimating flood frequency at ungaged sites located on the same streams as gaged sites also are described. The first method, based on a drainage-area-ratio adjustment, is intended for use on streams where the ungaged site of interest is located near a gaged site. The second method, based on interpolation between gaged sites, is intended for use on streams that have two or more streamflow-gaging stations.
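
    The inverse-variance flavour of such weighting can be sketched as follows, ignoring the cross correlation of residuals that the report's weights also use (all numbers are hypothetical):

```python
# Combine two flood-frequency estimates by weighting each inversely to the
# square of its standard error of prediction.
def weighted_estimate(q1, se1, q2, se2):
    w1, w2 = 1.0 / se1**2, 1.0 / se2**2
    q = (w1 * q1 + w2 * q2) / (w1 + w2)
    se = (1.0 / (w1 + w2)) ** 0.5      # SE if the estimates were independent
    return q, se

# Hypothetical 100-year flood estimates (cfs) from the basin-characteristic
# equations and the channel-width equations, with their standard errors.
q, se = weighted_estimate(450.0, 60.0, 520.0, 90.0)
```

    The combined standard error is always smaller than the better individual one, which mirrors the reduction in average standard error of prediction reported for the combined methods.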

  4. Accurate determination of electronic transport properties of silicon wafers by nonlinear photocarrier radiometry with multiple pump beam sizes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Qian; University of the Chinese Academy of Sciences, Beijing 100039; Li, Bincheng, E-mail: bcli@uestc.ac.cn

    2015-12-07

    In this paper, the photocarrier radiometry (PCR) technique with multiple pump beam sizes is employed to determine simultaneously the electronic transport parameters (the carrier lifetime, the carrier diffusion coefficient, and the front surface recombination velocity) of silicon wafers. By employing multiple pump beam sizes, the influence of the instrumental frequency response on the multi-parameter estimation is totally eliminated. A nonlinear PCR model is developed to interpret the PCR signal. Theoretical simulations are performed to investigate the uncertainties of the estimated parameter values by investigating the dependence of a mean square variance on the corresponding transport parameters, and the results are compared to those obtained by the conventional frequency-scan method, in which only the frequency dependences of the PCR amplitude and phase are recorded at a single pump beam size. Simulation results show that the proposed multiple-pump-beam-size method can improve significantly the accuracy of the determination of the electronic transport parameters. Comparative experiments with a p-type silicon wafer with resistivity 0.1–0.2 Ω·cm are performed, and the electronic transport properties are determined simultaneously. The estimated uncertainties of the carrier lifetime, diffusion coefficient, and front surface recombination velocity are approximately ±10.7%, ±8.6%, and ±35.4% by the proposed multiple-pump-beam-size method, much improved over ±15.9%, ±29.1%, and >±50% by the conventional frequency-scan method. The transport parameters determined by the proposed multiple-pump-beam-size PCR method are in good agreement with those obtained by a steady-state PCR imaging technique.

  5. Applying time-frequency analysis to assess cerebral autoregulation during hypercapnia.

    PubMed

    Placek, Michał M; Wachel, Paweł; Iskander, D Robert; Smielewski, Peter; Uryga, Agnieszka; Mielczarek, Arkadiusz; Szczepański, Tomasz A; Kasprowicz, Magdalena

    2017-01-01

    Classic methods for assessing cerebral autoregulation involve a transfer function analysis performed using the Fourier transform to quantify the relationship between fluctuations in arterial blood pressure (ABP) and cerebral blood flow velocity (CBFV). This approach usually assumes the signals and the system to be stationary. Such a presumption is restrictive and may lead to unreliable results. The aim of this study is to present an alternative method that accounts for the intrinsic non-stationarity of cerebral autoregulation and of the signals used for its assessment. Continuous recordings of CBFV, ABP, ECG, and end-tidal CO2 were performed in 50 young volunteers during normocapnia and hypercapnia. Hypercapnia served as a surrogate of cerebral autoregulation impairment. Fluctuations in ABP, CBFV, and the phase shift between them were tested for stationarity using a sphericity-based test. The Zhao-Atlas-Marks distribution was utilized to estimate the time-frequency coherence (TFCoh) and phase shift (TFPS) between ABP and CBFV in three frequency ranges: 0.02-0.07 Hz (VLF), 0.07-0.20 Hz (LF), and 0.20-0.35 Hz (HF). TFPS was estimated in regions locally validated by a statistically justified value of TFCoh. TFPS was compared with the spectral phase shift determined using the transfer function approach. The hypothesis of stationarity for ABP and CBFV fluctuations and the phase shift was rejected. Reduced TFPS was associated with hypercapnia in the VLF and the LF but not in the HF. The spectral phase shift was also decreased during hypercapnia in the VLF and the LF but increased in the HF. The time-frequency method led to lower dispersion of phase estimates than the spectral method, mainly during normocapnia in the VLF and the LF. The time-frequency method performed no worse than the classic one and may offer the benefits of lower dispersion of the phase shift as well as a more in-depth insight into the dynamic nature of cerebral autoregulation.
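
    The spectral phase shift that the time-frequency method is compared against can be sketched from the cross-spectrum of two synthetic, stationary signals with a known phase lead (real ABP/CBFV records are non-stationary, which is precisely the paper's point):

```python
import numpy as np

rng = np.random.default_rng(11)

# Synthetic stand-ins for ABP and CBFV: a shared 0.1 Hz (LF) oscillation
# with a known phase lead, plus independent measurement noise.
fs = 10.0
t = np.arange(0, 300.0, 1.0 / fs)
f0, phase_true = 0.1, 0.8            # rad; "CBFV" leads "ABP"
abp = np.sin(2 * np.pi * f0 * t) + 0.3 * rng.standard_normal(t.size)
cbfv = np.sin(2 * np.pi * f0 * t + phase_true) + 0.3 * rng.standard_normal(t.size)

# Phase shift from the cross-spectrum at the bin nearest f0.
A, B = np.fft.rfft(abp), np.fft.rfft(cbfv)
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
k = np.argmin(np.abs(freqs - f0))
phase_est = np.angle(B[k] * np.conj(A[k]))
```
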

  6. Measuring multi-joint stiffness during single movements: numerical validation of a novel time-frequency approach.

    PubMed

    Piovesan, Davide; Pierobon, Alberto; DiZio, Paul; Lackner, James R

    2012-01-01

    This study presents and validates a time-frequency technique for measuring 2-dimensional multijoint arm stiffness throughout a single planar movement as well as during static posture. It is proposed as an alternative to current regressive methods, which require numerous repetitions to obtain average stiffness on a small segment of the hand trajectory. The method is based on the analysis of the reassigned spectrogram of the arm's response to impulsive perturbations and can estimate arm stiffness on a trial-by-trial basis. Analytic and empirical methods are first derived and tested through modal analysis on synthetic data. The technique's accuracy and robustness are assessed by modeling the estimation of stiffness time profiles changing at different rates and affected by different noise levels. Our method obtains results comparable with two well-known regressive techniques. We also test how the technique can identify the viscoelastic component of non-linear and higher than second order systems with a non-parametrical approach. The technique proposed here is highly robust to noise and can be used easily for both postural and movement tasks. Estimations of stiffness profiles are possible with only one perturbation, making our method a useful tool for estimating limb stiffness during motor learning and adaptation tasks, and for understanding the modulation of stiffness in individuals with neurodegenerative diseases.

  7. A note on Hardy-Weinberg equilibrium of VNTR data by using the Federal Bureau of Investigation's fixed-bin method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Devlin, B.; Risch, N.

    1992-09-01

    To fully utilize the information of VNTR data for forensic inference, the probability of observing the matching suspect and evidentiary profile in a reference population is estimated, usually by assuming independence of alleles within and between loci. This assumption has been challenged on the basis of the observation that there is frequently an excess of single-band phenotypes (SBP) in forensic data bases, which could indicate lack of independence. Nevertheless, another explanation is that the excess SBP are artifacts of laboratory methods. In this report the authors examine the excess of SBP for three VNTR loci studied by the FBI (D17S79 and D2S44, for blacks, and D14S13, for Caucasians). The FBI claims that the excess is due to the effect of null alleles; the null alleles are suspected to be too small to be detected. The authors estimate the frequency of null alleles for two loci (D17S79 and D14S13) by comparing, for these loci, the data from the FBI data base and the data from the Lifecodes data base. These comparisons yield information on small fragments because Lifecodes uses the restriction enzyme PstI, which yields larger fragments than does HaeIII, which the FBI uses. For D17S79 in blacks, the authors estimate a null allele frequency of 4.4%, and, for D14S13 in Caucasians, they estimate a frequency of 3.0%. The null-allele frequency for D2S44 in blacks is derived similarly, again being based on analysis of DNA cut with HaeIII and PstI; the estimate of the null-allele frequency for this locus is 1.5%. Using these null-allele frequency estimates and a goodness-of-fit test, the authors show that there is no evidence for deviations from Hardy-Weinberg expectations of genotype probabilities at these loci. 20 refs., 1 fig.
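
    The null-allele logic can be sketched algebraically: under Hardy-Weinberg with a null allele of frequency r, visible-null heterozygotes show a single band, so the expected single-band proportion is sum(p_i^2) + 2r(1 - r) and r can be solved from the observed excess (illustrative allele frequencies, not the FBI data):

```python
import numpy as np

# Solve 2r(1 - r) = observed_sbp - sum(p_i^2), a quadratic in r, taking
# the smaller root (null alleles are rare).
def null_allele_freq(p_visible, observed_sbp):
    hom = np.sum(np.square(p_visible))      # expected true homozygote share
    excess = observed_sbp - hom
    # 2r^2 - 2r + excess = 0  ->  r = (2 - sqrt(4 - 8*excess)) / 4
    return (2.0 - np.sqrt(4.0 - 8.0 * excess)) / 4.0

p = np.array([0.10, 0.15, 0.20, 0.25, 0.30])   # hypothetical visible alleles
r_true = 0.03
p_adj = p * (1 - r_true)                        # rescale so all alleles sum to 1
sbp = np.sum(np.square(p_adj)) + 2 * r_true * (1 - r_true)

r_est = null_allele_freq(p_adj, sbp)            # recovers r_true
```
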

  8. Computation of rainfall erosivity from daily precipitation amounts.

    PubMed

    Beguería, Santiago; Serrano-Notivoli, Roberto; Tomas-Burguera, Miquel

    2018-10-01

Rainfall erosivity is an important parameter in many erosion models, and the EI30 defined by the Universal Soil Loss Equation is one of the best known erosivity indices. One issue with this and other erosivity indices is that they require continuous breakpoint, or high-frequency time interval, precipitation data. These data are rare in comparison to more common medium-frequency data, such as the daily precipitation data commonly recorded by many national and regional weather services. Devising methods for computing estimates of rainfall erosivity from daily precipitation data that are comparable to those obtained by using high-frequency data is, therefore, highly desirable. Here we present a method for producing such estimates, based on optimal regression tools such as the Gamma Generalised Linear Model and universal kriging. Unlike other methods, this approach produces estimates that are unbiased and very close to the observed EI30, especially when aggregated at the annual level. We illustrate the method with a case study comprising more than 1500 high-frequency precipitation records across Spain. Although the original records have a short span (the mean length is around 10 years), computation of spatially-distributed upscaling parameters offers the possibility to compute high-resolution climatologies of the EI30 index based on currently available, long-span, daily precipitation databases. Copyright © 2018 Elsevier B.V. All rights reserved.
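The Gamma Generalised Linear Model step can be sketched with a log link fitted by iteratively reweighted least squares. This is a generic illustration on synthetic data, not the authors' model specification; the precipitation-EI30 relationship below is invented.

```python
import numpy as np

def gamma_glm_log_link(X, y, n_iter=25):
    """Fit a Gamma GLM with log link by iteratively reweighted least squares.

    For the Gamma family with a log link the IRLS working weights are constant,
    so each iteration is an ordinary least-squares solve on the working response."""
    beta, *_ = np.linalg.lstsq(X, np.log(y), rcond=None)  # stable starting values
    for _ in range(n_iter):
        eta = X @ beta
        mu = np.exp(eta)
        z = eta + (y - mu) / mu                           # working response
        beta, *_ = np.linalg.lstsq(X, z, rcond=None)
    return beta

rng = np.random.default_rng(0)
daily_precip = rng.uniform(5.0, 80.0, size=500)           # mm/day (synthetic)
X = np.column_stack([np.ones_like(daily_precip), np.log(daily_precip)])
true_beta = np.array([-1.0, 1.7])
mu = np.exp(X @ true_beta)                                # mean EI30 given precip
shape = 4.0
ei30 = rng.gamma(shape, mu / shape)                       # Gamma noise with mean mu
beta_hat = gamma_glm_log_link(X, ei30)
```

The Gamma family is a natural choice here because erosivity is strictly positive and its variance grows with its mean.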

  9. Evaluation of a validated food frequency questionnaire for self-defined vegans in the United States.

    PubMed

    Dyett, Patricia; Rajaram, Sujatha; Haddad, Ella H; Sabate, Joan

    2014-07-08

This study aimed to develop and validate a de novo food frequency questionnaire for self-defined vegans in the United States. Diet histories from pilot samples of vegans and a modified 'Block Method' using seven selected nutrients of concern in vegan diet patterns were employed to generate the questionnaire food list. Food frequency responses of 100 vegans from 19 different U.S. states were obtained via completed mailed questionnaires and compared to multiple telephone-conducted diet recall interviews. Computerized diet analyses were performed. Correlation coefficients, t-tests, rank, cross-tabulations, and probability tests were used to validate and compare intake estimates and dietary reference intake (DRI) assessment trends between the two methods. A 369-item vegan-specific questionnaire was developed with 252 listed food frequency items. Calorie-adjusted correlation coefficients ranged from r = 0.374 to 0.600 (p < 0.001) for all analyzed nutrients except calcium. Estimates, ranks, trends, and higher-level participant percentile placements for vitamin B12 were similar with both methods. Questionnaire intakes were higher than recalls for most other nutrients. Both methods demonstrated similar trends in DRI adequacy assessment (e.g., significantly inadequate vitamin D intake among vegans). This vegan-specific questionnaire can be a useful assessment tool for health screening initiatives in U.S. vegan communities.
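One common way to obtain "calorie-adjusted" correlations between two dietary assessment methods is the residual method (regress each nutrient on energy intake and correlate the residuals); whether this study used exactly that adjustment is an assumption, and the data below are synthetic.

```python
import numpy as np

def energy_adjusted(nutrient, energy):
    """Residuals of a simple linear regression of nutrient intake on energy."""
    X = np.column_stack([np.ones_like(energy), energy])
    beta, *_ = np.linalg.lstsq(X, nutrient, rcond=None)
    return nutrient - X @ beta

def pearson_r(a, b):
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

rng = np.random.default_rng(1)
energy = rng.normal(2000.0, 400.0, 100)             # kcal/day (synthetic)
ffq = 0.02 * energy + rng.normal(0.0, 3.0, 100)     # nutrient from questionnaire
recall = 0.02 * energy + rng.normal(0.0, 3.0, 100)  # nutrient from diet recalls
r_crude = pearson_r(ffq, recall)
r_adjusted = pearson_r(energy_adjusted(ffq, energy),
                       energy_adjusted(recall, energy))
```

In this synthetic case the crude correlation is inflated because both instruments track total energy; the adjusted correlation isolates agreement on nutrient composition.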

  10. Maximum likelihood estimation of linkage disequilibrium in half-sib families.

    PubMed

    Gomez-Raya, L

    2012-05-01

    Maximum likelihood methods for the estimation of linkage disequilibrium between biallelic DNA-markers in half-sib families (half-sib method) are developed for single and multifamily situations. Monte Carlo computer simulations were carried out for a variety of scenarios regarding sire genotypes, linkage disequilibrium, recombination fraction, family size, and number of families. A double heterozygote sire was simulated with recombination fraction of 0.00, linkage disequilibrium among dams of δ=0.10, and alleles at both markers segregating at intermediate frequencies for a family size of 500. The average estimates of δ were 0.17, 0.25, and 0.10 for Excoffier and Slatkin (1995), maternal informative haplotypes, and the half-sib method, respectively. A multifamily EM algorithm was tested at intermediate frequencies by computer simulation. The range of the absolute difference between estimated and simulated δ was between 0.000 and 0.008. A cattle half-sib family was genotyped with the Illumina 50K BeadChip. There were 314,730 SNP pairs for which the sire was a homo-heterozygote with average estimates of r2 of 0.115, 0.067, and 0.111 for half-sib, Excoffier and Slatkin (1995), and maternal informative haplotypes methods, respectively. There were 208,872 SNP pairs for which the sire was double heterozygote with average estimates of r2 across the genome of 0.100, 0.267, and 0.925 for half-sib, Excoffier and Slatkin (1995), and maternal informative haplotypes methods, respectively. Genome analyses for all possible sire genotypes with 829,042 tests showed that ignoring half-sib family structure leads to upward biased estimates of linkage disequilibrium. Published inferences on population structure and evolution of cattle should be revisited after accommodating existing half-sib family structure in the estimation of linkage disequilibrium.
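The Excoffier and Slatkin (1995) estimator that the abstract compares against is an EM algorithm over unphased two-locus genotypes, in which only double heterozygotes are phase-ambiguous. The following is a minimal population-level sketch of that style of EM (it deliberately ignores half-sib family structure, which is exactly the bias the paper documents); the simulated data are illustrative.

```python
from collections import Counter
import numpy as np

def known_counts(g1, g2):
    """Haplotype counts (AB, Ab, aB, ab) for a phase-unambiguous genotype."""
    c = np.zeros(4)
    if g1 == 2:                  # both chromosomes carry A
        c[0], c[1] = g2, 2 - g2
    elif g1 == 0:                # both chromosomes carry a
        c[2], c[3] = g2, 2 - g2
    elif g2 == 2:                # one A, one a; both carry B
        c[0], c[2] = 1.0, 1.0
    else:                        # g1 == 1, g2 == 0
        c[1], c[3] = 1.0, 1.0
    return c

def em_haplotype_freqs(genos, n_iter=200):
    """EM estimates of (AB, Ab, aB, ab) frequencies from unphased genotypes."""
    tab = Counter(genos)
    n = len(genos)
    p = np.full(4, 0.25)
    for _ in range(n_iter):
        counts = np.zeros(4)
        for (g1, g2), m in tab.items():
            if g1 == 1 and g2 == 1:          # double heterozygote: AB/ab vs Ab/aB
                w = p[0] * p[3] / (p[0] * p[3] + p[1] * p[2])
                counts += m * np.array([w, 1.0 - w, 1.0 - w, w])
            else:
                counts += m * known_counts(g1, g2)
        p = counts / (2.0 * n)
    return p

# Simulate unphased genotypes from known haplotype frequencies, then recover them.
rng = np.random.default_rng(2)
true_p = np.array([0.4, 0.1, 0.1, 0.4])      # strong LD: D = 0.4*0.4 - 0.1*0.1
haps = rng.choice(4, size=(1000, 2), p=true_p)
genos = [(int(h1 < 2) + int(h2 < 2),              # copies of allele A (haps 0, 1)
          int(h1 % 2 == 0) + int(h2 % 2 == 0))    # copies of allele B (haps 0, 2)
         for h1, h2 in haps]
p_hat = em_haplotype_freqs(genos)
```

Here offspring genotypes are drawn as unrelated individuals; in a half-sib design every offspring shares a sire haplotype, which is why the paper's family-aware likelihood is needed.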

  11. Maximum Likelihood Estimation of Linkage Disequilibrium in Half-Sib Families

    PubMed Central

    Gomez-Raya, L.

    2012-01-01

    Maximum likelihood methods for the estimation of linkage disequilibrium between biallelic DNA-markers in half-sib families (half-sib method) are developed for single and multifamily situations. Monte Carlo computer simulations were carried out for a variety of scenarios regarding sire genotypes, linkage disequilibrium, recombination fraction, family size, and number of families. A double heterozygote sire was simulated with recombination fraction of 0.00, linkage disequilibrium among dams of δ = 0.10, and alleles at both markers segregating at intermediate frequencies for a family size of 500. The average estimates of δ were 0.17, 0.25, and 0.10 for Excoffier and Slatkin (1995), maternal informative haplotypes, and the half-sib method, respectively. A multifamily EM algorithm was tested at intermediate frequencies by computer simulation. The range of the absolute difference between estimated and simulated δ was between 0.000 and 0.008. A cattle half-sib family was genotyped with the Illumina 50K BeadChip. There were 314,730 SNP pairs for which the sire was a homo-heterozygote with average estimates of r2 of 0.115, 0.067, and 0.111 for half-sib, Excoffier and Slatkin (1995), and maternal informative haplotypes methods, respectively. There were 208,872 SNP pairs for which the sire was double heterozygote with average estimates of r2 across the genome of 0.100, 0.267, and 0.925 for half-sib, Excoffier and Slatkin (1995), and maternal informative haplotypes methods, respectively. Genome analyses for all possible sire genotypes with 829,042 tests showed that ignoring half-sib family structure leads to upward biased estimates of linkage disequilibrium. Published inferences on population structure and evolution of cattle should be revisited after accommodating existing half-sib family structure in the estimation of linkage disequilibrium. PMID:22377635

  12. Methods for peak-flow frequency analysis and reporting for streamgages in or near Montana based on data through water year 2015

    USGS Publications Warehouse

    Sando, Steven K.; McCarthy, Peter M.

    2018-05-10

This report documents the methods for peak-flow frequency (hereinafter “frequency”) analysis and reporting for streamgages in and near Montana following implementation of the Bulletin 17C guidelines. The methods are used to provide estimates of peak-flow quantiles for 50-, 42.9-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probabilities for selected streamgages operated by the U.S. Geological Survey Wyoming-Montana Water Science Center (WY–MT WSC). These annual exceedance probabilities correspond to 2-, 2.33-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-year recurrence intervals, respectively. Standard procedures specific to the WY–MT WSC for implementing the Bulletin 17C guidelines include (1) the use of the Expected Moments Algorithm analysis for fitting the log-Pearson Type III distribution, incorporating historical information where applicable; (2) the use of weighted skew coefficients (based on weighting at-site station skew coefficients with generalized skew coefficients from the Bulletin 17B national skew map); and (3) the use of the Multiple Grubbs-Beck Test for identifying potentially influential low flows. For some streamgages, the peak-flow records are not well represented by the standard procedures and require user-specified adjustments informed by hydrologic judgement. The specific characteristics of peak-flow records addressed by the informed-user adjustments include (1) regulated peak-flow records, (2) atypical upper-tail peak-flow records, and (3) atypical lower-tail peak-flow records. In all cases, the informed-user adjustments use the Expected Moments Algorithm fit of the log-Pearson Type III distribution using the at-site station skew coefficient, a manual potentially influential low flow threshold, or both. Appropriate methods can be applied to at-site frequency estimates to provide improved representation of long-term hydroclimatic conditions. The methods for improving at-site frequency estimates by weighting with regional regression equations and by Maintenance of Variance Extension Type III record extension are described. Frequency analyses were conducted for 99 example streamgages to illustrate various aspects of the frequency-analysis methods described in this report. The frequency analyses and results for the example streamgages are presented in a separate data release associated with this report, consisting of tables and graphical plots that are structured to include information concerning the interpretive decisions involved in the frequency analyses. Further, the separate data release includes the input files to the PeakFQ program, version 7.1, including the peak-flow data file and the analysis specification file that were used in the peak-flow frequency analyses. Peak-flow frequencies are also reported in separate data releases for selected streamgages in the Beaverhead River and Clark Fork Basins and also for selected streamgages in the Ruby, Jefferson, and Madison River Basins.
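For orientation, the classical method-of-moments log-Pearson Type III fit (not the Bulletin 17C Expected Moments Algorithm, which additionally handles historical information and low-flow censoring) can be sketched with the Wilson-Hilferty approximation to the Pearson III frequency factor; the peak flows below are illustrative, not from any Montana streamgage.

```python
from statistics import NormalDist, mean, stdev
import math

def lp3_quantile(peaks, aep, skew=None):
    """Peak-flow quantile for annual exceedance probability `aep` (e.g. 0.01),
    from a method-of-moments log-Pearson Type III fit with the Wilson-Hilferty
    approximation of the frequency factor."""
    logs = [math.log10(q) for q in peaks]
    m, s = mean(logs), stdev(logs)
    if skew is None:                       # sample skew of the log flows
        n = len(logs)
        skew = n * sum((x - m) ** 3 for x in logs) / ((n - 1) * (n - 2) * s ** 3)
    z = NormalDist().inv_cdf(1.0 - aep)    # standard normal quantile
    if abs(skew) < 1e-9:
        k = z
    else:
        k = (2.0 / skew) * ((1.0 + skew * z / 6.0 - skew ** 2 / 36.0) ** 3 - 1.0)
    return 10.0 ** (m + k * s)

peaks = [4200, 3100, 5600, 2800, 7400, 3900, 5100, 2600, 8800, 4600,
         3300, 6100, 2900, 5400, 4100, 3700, 6900, 3000, 4900, 5800]
q100 = lp3_quantile(peaks, 0.01)   # 1-percent AEP ("100-year") estimate
q2 = lp3_quantile(peaks, 0.50)     # 50-percent AEP ("2-year") estimate
```

This textbook fit is what the Expected Moments Algorithm generalizes; with only systematic annual peaks and no censoring thresholds the two approaches coincide in spirit.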

  13. A New Formulation of the Filter-Error Method for Aerodynamic Parameter Estimation in Turbulence

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.; Morelli, Eugene A.

    2015-01-01

A new formulation of the filter-error method for estimating aerodynamic parameters in nonlinear aircraft dynamic models during turbulence was developed and demonstrated. The approach uses an estimate of the measurement noise covariance to identify the model parameters, their uncertainties, and the process noise covariance, in a relaxation method analogous to the output-error method. Prior information on the model parameters and uncertainties can be supplied, and a post-estimation correction to the uncertainty was included to account for colored residuals not considered in the theory. No tuning parameters needing adjustment by the analyst are used in the estimation. The method was demonstrated in simulation using the NASA Generic Transport Model, then applied to flight data from the subscale T-2 jet-engine transport aircraft. Modeling results in different levels of turbulence were compared with results from time-domain output-error and frequency-domain equation-error methods to demonstrate the effectiveness of the approach.

  14. Drift-Free Position Estimation of Periodic or Quasi-Periodic Motion Using Inertial Sensors

    PubMed Central

    Latt, Win Tun; Veluvolu, Kalyana Chakravarthy; Ang, Wei Tech

    2011-01-01

Position sensing with inertial sensors such as accelerometers and gyroscopes usually requires other aiding sensors or prior knowledge of motion characteristics to remove the position drift resulting from integration of acceleration or velocity, so as to obtain accurate position estimation. A method based on analytical integration has previously been developed to obtain accurate position estimates of periodic or quasi-periodic motion from inertial sensors using prior knowledge of the motion but without using aiding sensors. In this paper, a new method is proposed which employs a linear filtering stage coupled with an adaptive filtering stage to remove drift and attenuation. The only prior knowledge of the motion the proposed method requires is an approximate band of frequencies of the motion. Existing adaptive filtering methods based on Fourier series, such as the weighted-frequency Fourier linear combiner (WFLC) and the band-limited multiple Fourier linear combiner (BMFLC), are modified to combine with the proposed method. To validate and compare the performance of the proposed method with the method based on analytical integration, a simulation study is performed using periodic signals as well as real physiological tremor data, and real-time experiments are conducted using an ADXL-203 accelerometer. Results demonstrate that the proposed method outperforms the existing analytical integration method. PMID:22163935
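The Fourier-linear-combiner family this work builds on can be illustrated with a band-limited bank of sine/cosine references whose weights adapt by LMS. The 7-11 Hz band, gains, and test signal below are illustrative choices, not the paper's tuning.

```python
import numpy as np

def bmflc_track(signal, fs, f_lo, f_hi, df=0.5, mu=0.01):
    """Track a band-limited quasi-periodic signal with an LMS-adapted bank of
    sine/cosine references spanning [f_lo, f_hi] (a BMFLC-style combiner)."""
    freqs = np.arange(f_lo, f_hi + df / 2, df)
    w = np.zeros(2 * len(freqs))
    est = np.zeros(len(signal))
    for n, y in enumerate(signal):
        t = n / fs
        x = np.concatenate([np.sin(2 * np.pi * freqs * t),
                            np.cos(2 * np.pi * freqs * t)])
        est[n] = w @ x
        w += 2.0 * mu * (y - est[n]) * x        # LMS weight update
    return est

fs = 250.0
t = np.arange(0, 4.0, 1.0 / fs)
tremor = 1.5 * np.sin(2 * np.pi * 9.0 * t + 0.7)    # 9 Hz quasi-periodic motion
noisy = tremor + 0.1 * np.random.default_rng(3).normal(size=t.size)
est = bmflc_track(noisy, fs, 7.0, 11.0)
rms_err = np.sqrt(np.mean((est[500:] - tremor[500:]) ** 2))
```

Because the reference bank spans only the assumed motion band, the combiner inherently rejects low-frequency drift, which is the property the paper exploits.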

  15. A non-destructive method for quantifying small-diameter woody biomass in southern pine forests

    Treesearch

    D. Andrew Scott; Rick Stagg; Morris Smith

    2006-01-01

    Quantifying the impact of silvicultural treatments on woody understory vegetation largely has been accomplished by destructive sampling or through estimates of frequency and coverage. In studies where repeated measures of understory biomass across large areas are needed, destructive sampling and percent cover estimates are not satisfactory. For example, estimates of...

  16. Estimation and applicability of attenuation characteristics for source parameters and scaling relations in the Garhwal Kumaun Himalaya region, India

    NASA Astrophysics Data System (ADS)

    Singh, Rakesh; Paul, Ajay; Kumar, Arjun; Kumar, Parveen; Sundriyal, Y. P.

    2018-06-01

Source parameters of small to moderate earthquakes are significant for understanding the dynamic rupture process and the scaling relations of earthquakes, and for assessment of the seismic hazard potential of a region. In this study, the source parameters were determined for 58 small to moderate size earthquakes (3.0 ≤ Mw ≤ 5.0) that occurred during 2007-2015 in the Garhwal-Kumaun region. The estimated shear wave quality factor (Qβ(f)) values for each station at different frequencies have been applied to eliminate any bias in the determination of source parameters. The Qβ(f) values have been estimated by using the coda wave normalization method in the frequency range 1.5-16 Hz. A frequency-dependent S wave quality factor relation is obtained as Qβ(f) = (152.9 ± 7) f(0.82±0.005) by fitting a power-law frequency dependence model to the estimated values over the whole study region. The spectral (low-frequency spectral level and corner frequency) and source (static stress drop, seismic moment, apparent stress, and radiated energy) parameters are obtained assuming an ω-2 source model. The displacement spectra are corrected for the estimated frequency-dependent attenuation and for site effects using the spectral decay parameter "Kappa". The frequency resolution limit was addressed by quantifying the bias in corner frequency, stress drop, and radiated energy estimates due to the finite-bandwidth effect. The data of the region show shallow-focused earthquakes with low stress drop. Estimation of the Zúñiga parameter (ε) suggests a partial stress drop mechanism in the region. The observed low stress drop and apparent stress can be explained by partial stress drop and a low effective stress model. The presence of subsurface fluid at seismogenic depth evidently influences the dynamics of the region. However, the limited event selection may strongly bias the scaling relation even after taking every possible precaution regarding the effects of finite bandwidth, attenuation, and site corrections. The scaling can be improved further by integrating a large dataset of microearthquakes and using a stable and robust approach.
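A power-law attenuation relation of the form Qβ(f) = Q0 · f^n can be recovered from per-frequency Q estimates by ordinary least squares in log-log coordinates, as in this sketch with idealized (noise-free) data built from the coefficients reported above.

```python
import numpy as np

# Idealized per-frequency Q estimates following the reported relation
freqs = np.array([1.5, 3.0, 6.0, 8.0, 12.0, 16.0])     # Hz (within 1.5-16 Hz)
q_obs = 152.9 * freqs ** 0.82

# Linear least squares on log Q = log Q0 + n * log f
A = np.column_stack([np.ones_like(freqs), np.log(freqs)])
coef, *_ = np.linalg.lstsq(A, np.log(q_obs), rcond=None)
q0, n = np.exp(coef[0]), coef[1]
```

With real, scattered Q estimates the same fit yields the uncertainties on Q0 and n via the usual least-squares covariance.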

  17. Fast Computation of Frequency Response of Cavity-Backed Apertures Using MBPE in Conjunction with Hybrid FEM/MoM Technique

    NASA Technical Reports Server (NTRS)

    Reddy, C. J.; Deshpande, M. D.; Cockrell, C. R.; Beck, F. B.

    2004-01-01

The hybrid Finite Element Method (FEM)/Method of Moments (MoM) technique has become popular over the last few years due to its flexibility in handling arbitrarily shaped objects with complex materials. One of the disadvantages of this technique, however, is the computational cost involved in obtaining solutions over a frequency range, as computations are repeated for each frequency. In this paper, the application of the Model Based Parameter Estimation (MBPE) method[1] with the hybrid FEM/MoM technique is presented for fast computation of the frequency response of cavity-backed apertures[2,3]. In MBPE, the electric field is expanded as a rational function, the ratio of two polynomials. The coefficients of the rational function are obtained using the frequency derivatives of the integro-differential equation formed by the hybrid FEM/MoM technique. Using the rational function approximation, the electric field is calculated at different frequencies, from which the frequency response is obtained.
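The rational-function step of MBPE can be sketched as a linear least-squares problem: with the denominator's constant term fixed to one, each frequency sample gives one linear equation in the polynomial coefficients, and the fitted model is then evaluated on a dense grid. The sample response below is synthetic, not a cavity-backed-aperture solution, and the paper uses frequency derivatives rather than multi-point samples to build the system.

```python
import numpy as np

def fit_rational(f, h, num_order=2, den_order=2):
    """Fit h(f) ~ N(f)/D(f) by linearizing N(f_i) - h_i * D(f_i) = 0, d0 = 1."""
    rows = []
    for fi, hi in zip(f, h):
        num_part = [fi ** j for j in range(num_order + 1)]
        den_part = [-hi * fi ** k for k in range(1, den_order + 1)]
        rows.append(num_part + den_part)
    coef, *_ = np.linalg.lstsq(np.array(rows), h, rcond=None)
    a = coef[:num_order + 1]                              # numerator coefficients
    b = np.concatenate([[1.0], coef[num_order + 1:]])     # denominator, d0 = 1
    return lambda x: np.polyval(a[::-1], x) / np.polyval(b[::-1], x)

# Synthetic resonant-like response sampled at a handful of frequencies
h_true = lambda x: (1.0 + 0.3 * x) / (1.0 - 1.2 * x + 0.8 * x ** 2)
f_samples = np.linspace(0.5, 2.0, 7)
model = fit_rational(f_samples, h_true(f_samples))

f_dense = np.linspace(0.5, 2.0, 200)
max_err = float(np.max(np.abs(model(f_dense) - h_true(f_dense))))
```

The payoff mirrors the paper's: a few expensive full solutions fix the rational model, after which the response at every other frequency is a cheap polynomial evaluation.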

  18. ABO, Rhesus, and Kell Antigens, Alleles, and Haplotypes in West Bengal, India

    PubMed Central

    Basu, Debapriya; Datta, Suvro Sankha; Montemayor, Celina; Bhattacharya, Prasun; Mukherjee, Krishnendu; Flegel, Willy A.

    2018-01-01

Background Few studies have documented the blood group antigens in the population of eastern India. Frequencies of some common alleles and haplotypes were unknown. We describe phenotype, allele, and haplotype frequencies in the state of West Bengal, India. Methods We tested 1,528 blood donors at the Medical College Hospital, Kolkata. The common antigens of the ABO, Rhesus, and Kell blood group systems were determined by standard serologic methods in tubes. Allele and haplotype frequencies were calculated with an iterative method that yielded maximum-likelihood estimates under the assumption of Hardy-Weinberg equilibrium. Results The prevalences of the ABO antigens were B (34%), O (32%), A (25%), and AB (9%), with ABO allele frequencies of O = 0.567, A = 0.189, and B = 0.244. The D antigen (RH1) was observed in 96.6% of the blood donors, with RH haplotype frequencies such as CDe = 0.688809, cde = 0.16983, and CdE = 0.000654. The K antigen (K1) was observed in 12 donors (0.79%), with KEL allele frequencies of K = 0.004 and k = 0.996. Conclusions For the Bengali population living in the south of West Bengal, we established the frequencies of the major clinically relevant antigens in the ABO, Rhesus, and Kell blood group systems and derived estimates for the underlying ABO and KEL alleles and RH haplotypes. Such blood donor screening will improve the availability of compatible red cell units for transfusion. Our approach using widely available routine methods can readily be applied in other regions where the sufficient supply of blood typed for the Rh and K antigens is lacking. PMID:29593462
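The iterative maximum-likelihood step for ABO allele frequencies is classically the gene-counting EM algorithm; whether the authors used exactly this scheme is an assumption, and the donor counts below are reconstructed from the reported percentages rather than taken from the paper.

```python
def abo_em(n_a, n_b, n_ab, n_o, n_iter=100):
    """Gene-counting EM for ABO allele frequencies (p, q, r) = (A, B, O)
    from phenotype counts, assuming Hardy-Weinberg equilibrium."""
    n = n_a + n_b + n_ab + n_o
    p, q, r = 1.0 / 3.0, 1.0 / 3.0, 1.0 / 3.0
    for _ in range(n_iter):
        # E-step: expected numbers of AA among A phenotypes, BB among B
        aa = n_a * p * p / (p * p + 2.0 * p * r)
        bb = n_b * q * q / (q * q + 2.0 * q * r)
        # M-step: allele counting (AO contributes one A; AB one A and one B)
        p = (2.0 * aa + (n_a - aa) + n_ab) / (2.0 * n)
        q = (2.0 * bb + (n_b - bb) + n_ab) / (2.0 * n)
        r = 1.0 - p - q
    return p, q, r

# Donor counts reconstructed from the reported percentages of 1,528 donors
p, q, r = abo_em(n_a=382, n_b=520, n_ab=137, n_o=489)
```

Run on these reconstructed counts, the EM converges to allele frequencies close to the reported O = 0.567, A = 0.189, B = 0.244.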

  19. The discrete Laplace exponential family and estimation of Y-STR haplotype frequencies.

    PubMed

    Andersen, Mikkel Meyer; Eriksen, Poul Svante; Morling, Niels

    2013-07-21

Estimating haplotype frequencies is important in, e.g., forensic genetics, where the frequencies are needed to calculate the likelihood ratio for the evidential weight of a DNA profile found at a crime scene. Estimation is naturally based on a population model, motivating the investigation of the Fisher-Wright model of evolution for haploid lineage DNA markers. An exponential family (a class of probability distributions that is well understood in probability theory, such that inference is easily made by using existing software) called the 'discrete Laplace distribution' is described. We illustrate how well the discrete Laplace distribution approximates a more complicated distribution that arises by investigating the well-known population genetic Fisher-Wright model of evolution by a single-step mutation process. We show how the discrete Laplace distribution can be used to estimate haplotype frequencies for haploid lineage DNA markers (such as Y-chromosomal short tandem repeats), which in turn can be used to assess the evidential weight of a DNA profile found at a crime scene. This is done by making inference in a mixture of multivariate, marginally independent, discrete Laplace distributions, using the EM algorithm to estimate the probabilities of membership of a set of unobserved subpopulations. The discrete Laplace distribution can be used to estimate haplotype frequencies with lower prediction error than other existing estimators. Furthermore, the calculations can be performed on an ordinary computer. The method is implemented in the freely available open source software R, which is supported on Linux, MacOS, and MS Windows. Copyright © 2013 Elsevier Ltd. All rights reserved.
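The one-dimensional discrete Laplace distribution itself is simple: P(X = k) = (1 - p)/(1 + p) · p^|k| for integer k and 0 < p < 1, i.e. a two-sided geometric centered at zero, matching the roughly symmetric repeat-count changes of single-step mutation. A quick numerical check of the normalization:

```python
def discrete_laplace_pmf(k, p):
    """P(X = k) = (1 - p)/(1 + p) * p**|k| for integer k, 0 < p < 1."""
    return (1.0 - p) / (1.0 + p) * p ** abs(k)

p = 0.35
# The pmf sums to 1 over the integers; truncating at |k| = 50 leaves a tail
# mass on the order of p**51, which is negligible here.
total = sum(discrete_laplace_pmf(k, p) for k in range(-50, 51))
```

In the paper's setting each Y-STR marker gets such a distribution per latent subpopulation (shifted to that subpopulation's central haplotype), and the markers are combined as a product under marginal independence.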

  20. Application of Kalman filter in frequency offset estimation for coherent optical quadrature phase-shift keying communication system

    NASA Astrophysics Data System (ADS)

    Jiang, Wen; Yang, Yanfu; Zhang, Qun; Sun, Yunxu; Zhong, Kangping; Zhou, Xian; Yao, Yong

    2016-09-01

Frequency offset estimation (FOE) schemes based on Kalman filtering are proposed and investigated in detail via numerical simulation and experiment. The schemes consist of a modulation phase removal stage and a Kalman filter estimation stage. In the second stage, the Kalman filters are employed for tracking either differential angles or differential data between two successive symbols. Several implementations of the proposed FOE scheme are compared by employing different modulation removal methods and two Kalman algorithms. The optimal FOE implementation is suggested for different operating conditions, including the optical signal-to-noise ratio and the number of available data symbols.
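The tracking idea can be sketched for QPSK: raising each symbol to the fourth power strips the modulation, the differential phase between successive symbols is then proportional to the frequency offset, and a one-state Kalman filter averages it. All parameters below are illustrative, not the paper's configuration.

```python
import cmath, math, random

def kalman_foe(symbols, q=1e-9, r=0.05):
    """One-state Kalman filter tracking the modulation-stripped differential
    phase of successive symbols; returns the phase increment per symbol."""
    theta, P = 0.0, 1.0
    for prev, cur in zip(symbols, symbols[1:]):
        z = cmath.phase((cur ** 4) * (prev ** 4).conjugate())  # 4x diff. phase
        P += q                        # predict (random-walk state model)
        K = P / (P + r)               # Kalman gain
        theta += K * (z - theta)      # update
        P *= 1.0 - K
    return theta / 4.0                # undo the 4th-power phase multiplication

random.seed(4)
baud, df = 1e9, 20e6                  # 1 GBd QPSK with a 20 MHz frequency offset
dphi = 2.0 * math.pi * df / baud
qpsk = [cmath.exp(1j * (math.pi / 4 + random.randrange(4) * math.pi / 2))
        for _ in range(2000)]
rx = [s * cmath.exp(1j * (n * dphi + random.gauss(0.0, 0.05)))
      for n, s in enumerate(qpsk)]
df_hat = kalman_foe(rx) * baud / (2.0 * math.pi)
```

Compared with a block average of the differential phase, the Kalman recursion continuously weighs new evidence against its running estimate, which is what lets it follow a slowly drifting laser offset.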

  1. A method to validate quantitative high-frequency power doppler ultrasound with fluorescence in vivo video microscopy.

    PubMed

    Pinter, Stephen Z; Kim, Dae-Ro; Hague, M Nicole; Chambers, Ann F; MacDonald, Ian C; Lacefield, James C

    2014-08-01

Flow quantification with high-frequency (>20 MHz) power Doppler ultrasound can be performed objectively using the wall-filter selection curve (WFSC) method to select the cutoff velocity that yields a best-estimate color pixel density (CPD). An in vivo video microscopy (IVVM) system is combined with high-frequency power Doppler ultrasound to provide a method for validation of CPD measurements based on WFSCs in mouse testicular vessels. The ultrasound and IVVM systems are instrumented so that the mouse remains on the same imaging platform when switching between the two modalities. In vivo video microscopy provides gold-standard measurements of vascular diameter to validate power Doppler CPD estimates. Measurements in four image planes from three mice exhibit wide variation in the optimal cutoff velocity and indicate that a predetermined cutoff velocity setting can introduce significant errors in studies intended to quantify vascularity. Consistent with previously published flow-phantom data, in vivo WFSCs exhibited three characteristic regions and detectable plateaus. Selection of a cutoff velocity at the right end of the plateau yielded a CPD close to the gold-standard vascular volume fraction estimated using IVVM. An investigator can implement the WFSC method to help adapt the cutoff velocity to current blood flow conditions and thereby improve the accuracy of power Doppler for quantitative microvascular imaging. Copyright © 2014 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.

  2. A Frequency-Domain Multipath Parameter Estimation and Mitigation Method for BOC-Modulated GNSS Signals

    PubMed Central

    Sun, Chao; Feng, Wenquan; Du, Songlin

    2018-01-01

As multipath is one of the dominant error sources for high accuracy Global Navigation Satellite System (GNSS) applications, multipath mitigation approaches are employed to minimize this hazardous error in receivers. Binary offset carrier (BOC) modulation, as a modernized signal structure, is adopted to achieve significant enhancement. However, because of its multi-peak autocorrelation function, conventional multipath mitigation techniques for binary phase shift keying (BPSK) signals are not optimal. Currently, non-parametric and parametric approaches have been studied specifically aiming at multipath mitigation for BOC signals. Non-parametric techniques, such as Code Correlation Reference Waveforms (CCRW), usually have good feasibility with simple structures, but suffer from low universal applicability across different BOC signals. Parametric approaches can thoroughly eliminate multipath error by estimating multipath parameters. The problems with this category are high computational complexity and vulnerability to noise. To tackle the problem, we present a practical parametric multipath estimation method in the frequency domain for BOC signals. The received signal is transferred to the frequency domain to separate out the multipath channel transfer function for multipath parameter estimation. During this process, we apply segmentation and averaging to reduce both the noise effect and the computational load. The performance of the proposed method is evaluated and compared with previous work in three scenarios. Results indicate that the proposed averaging-Fast Fourier Transform (averaging-FFT) method achieves good robustness in severe multipath environments with lower computational load for both low-order and high-order BOC signals. PMID:29495589
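The segmentation-and-averaging idea can be sketched as a Welch-style channel estimate: split the received and reference records into segments, take the FFT of each, and average cross- and auto-spectra across segments to suppress noise. The two-path channel and signals here are synthetic, not a BOC waveform.

```python
import numpy as np

rng = np.random.default_rng(5)
L, n_seg = 256, 64
# Two-path channel: direct ray plus an echo delayed by 5 samples at 0.4 amplitude
H_true = 1.0 + 0.4 * np.exp(-2j * np.pi * np.arange(L) * 5 / L)
ref = rng.choice([-1.0, 1.0], size=(n_seg, L))          # known reference chips
rx = np.real(np.fft.ifft(np.fft.fft(ref, axis=1) * H_true, axis=1))
rx += 0.3 * rng.normal(size=rx.shape)                   # receiver noise

# Welch-style averaging: cross-spectrum over auto-spectrum across segments
num = np.zeros(L, dtype=complex)
den = np.zeros(L)
for r_seg, s_seg in zip(rx, ref):
    R, S = np.fft.fft(r_seg), np.fft.fft(s_seg)
    num += R * np.conj(S)
    den += np.abs(S) ** 2
H_hat = num / den
max_err = float(np.max(np.abs(H_hat - H_true)))
```

The recovered transfer function exposes the echo's delay and amplitude directly in its phase ramp and ripple, which is the information a parametric multipath estimator then extracts.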

  3. Note: Demodulation of spectral signal modulated by optical chopper with unstable modulation frequency.

    PubMed

    Zhang, Shengzhao; Li, Gang; Wang, Jiexi; Wang, Donggen; Han, Ying; Cao, Hui; Lin, Ling; Diao, Chunhong

    2017-10-01

When an optical chopper is used to modulate the light source, the rotating speed of the wheel may vary with time and subsequently cause jitter of the modulation frequency. The amplitude calculated from the modulated signal would be distorted when these frequency fluctuations occur. To precisely calculate the amplitude of the modulated light flux, we proposed a method to estimate the range of the frequency fluctuation in the measurement of the spectrum and then extract the amplitude based on the sum of the power of the signal in the selected frequency range. Experiments were designed to test the feasibility of the proposed method, and the results showed a lower root-mean-square error than the conventional approach.
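The band-power amplitude extraction can be sketched directly: when jitter smears the chopper tone over several FFT bins, summing signal power across the selected band recovers the amplitude that a single-bin readout would underestimate. The sampling rate, chopper frequency, and drift below are illustrative.

```python
import numpy as np

def band_amplitude(x, fs, f_lo, f_hi):
    """Amplitude of a tone whose energy lies in [f_lo, f_hi], from band power.

    For a sinusoid of amplitude A, the one-sided band power sums to (A*N/2)**2,
    so A = 2*sqrt(band power)/N even when jitter spreads it over several bins."""
    spec = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), 1.0 / fs)
    band = (f >= f_lo) & (f <= f_hi)
    return 2.0 * np.sqrt(np.sum(np.abs(spec[band]) ** 2)) / len(x)

fs, n = 10000.0, 8192
t = np.arange(n) / fs
# Chopper tone near 500 Hz whose frequency drifts slowly by +/- 2 Hz
inst_freq = 500.0 + 2.0 * np.sin(2 * np.pi * 0.7 * t)
phase = 2.0 * np.pi * np.cumsum(inst_freq) / fs
x = 0.8 * np.sin(phase)
a_hat = band_amplitude(x, fs, 490.0, 510.0)
```

Choosing the band just wide enough to cover the estimated fluctuation range keeps broadband noise out while capturing the smeared tone energy.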

  4. Estimation of size of red blood cell aggregates using backscattering property of high-frequency ultrasound: In vivo evaluation

    NASA Astrophysics Data System (ADS)

    Kurokawa, Yusaku; Taki, Hirofumi; Yashiro, Satoshi; Nagasawa, Kan; Ishigaki, Yasushi; Kanai, Hiroshi

    2016-07-01

    We propose a method for assessment of the degree of red blood cell (RBC) aggregation using the backscattering property of high-frequency ultrasound. In this method, the scattering property of RBCs is extracted from the power spectrum of RBC echoes normalized by that from the posterior wall of a vein. In an experimental study using a phantom, employing the proposed method, the sizes of microspheres 5 and 20 µm in diameter were estimated to have mean values of 4.7 and 17.3 µm and standard deviations of 1.9 and 1.4 µm, respectively. In an in vivo experimental study, we compared the results between three healthy subjects and four diabetic patients. The average estimated scatterer diameters in healthy subjects at rest and during avascularization were 7 and 28 µm, respectively. In contrast, those in diabetic patients receiving both antithrombotic therapy and insulin therapy were 11 and 46 µm, respectively. These results show that the proposed method has high potential for clinical application to assess RBC aggregation, which may be related to the progress of diabetes.

  5. Tailored Excitation for Frequency Response Measurement Applied to the X-43A Flight Vehicle

    NASA Technical Reports Server (NTRS)

    Baumann, Ethan

    2007-01-01

    An important aspect of any flight research project is assessing aircraft stability and flight control performance. In some programs this assessment is accomplished through the estimation of the in-flight vehicle frequency response. This estimation has traditionally been a lengthy task requiring separate swept sine inputs for each control axis at a constant flight condition. Hypersonic vehicles spend little time at any specific flight condition while they are decelerating. Accordingly, it is difficult to use traditional methods to calculate the vehicle frequency response and stability margins for this class of vehicle. A technique has been previously developed to significantly reduce the duration of the excitation input by tailoring the input to excite only the frequency range of interest. Reductions in test time were achieved by simultaneously applying tailored excitation signals to multiple control loops, allowing a quick estimate of the frequency response of a particular aircraft. This report discusses the flight results obtained from applying a tailored excitation input to the X-43A longitudinal and lateral-directional control loops during the second and third flights. The frequency responses and stability margins obtained from flight data are compared with preflight predictions.
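A common way to tailor an excitation to only the frequency band of interest is a multisine with Schroeder phases, which keeps the peak factor low. This sketch illustrates the general technique, not the X-43A's actual input design; the band, sample rate, and duration are illustrative.

```python
import numpy as np

def schroeder_multisine(freqs, fs, duration, amplitude=1.0):
    """Multisine on the given frequency lines with Schroeder phases (which keep
    the peak factor low); scaled so the peak equals `amplitude`."""
    k = np.arange(1, len(freqs) + 1)
    phases = -np.pi * k * (k - 1) / len(freqs)       # Schroeder phase schedule
    t = np.arange(0.0, duration, 1.0 / fs)
    u = sum(np.sin(2 * np.pi * f * t + ph) for f, ph in zip(freqs, phases))
    return t, amplitude * u / np.max(np.abs(u))

fs = 100.0
freqs = np.arange(0.5, 8.0, 0.25)     # excite only 0.5-7.75 Hz (illustrative band)
t, u = schroeder_multisine(freqs, fs, duration=12.0)

# Nearly all excitation energy lies inside the targeted band
spec = np.abs(np.fft.rfft(u)) ** 2
f = np.fft.rfftfreq(len(u), 1.0 / fs)
in_band = float(spec[(f >= 0.4) & (f <= 8.1)].sum() / spec.sum())
```

Because all test energy is concentrated where the frequency response is needed, one short input excites every mode of interest at once, which is the time saving the report describes; phase-shifted copies of such signals can also be applied to several control loops simultaneously.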

  6. Experimental demonstration of a 16.9 Gb/s link for coherent OFDM PON robust to frequency offset and timing error

    NASA Astrophysics Data System (ADS)

    Ma, Qian; Liu, Yu; Xiang, Yuanjiang

    2018-07-01

Due to its merits of flexible bandwidth allocation and robustness against fiber transmission impairments, coherent optical orthogonal frequency division multiplexing (CO-OFDM) technology draws a lot of attention for passive optical networks (PON). However, a CO-OFDM system is vulnerable to frequency offsets between the modulated optical signals and the optical local oscillators (OLO). This is particularly serious for low-cost PONs, where low-cost lasers are used. Thus, it is of great interest to develop efficient algorithms for frequency synchronization in CO-OFDM systems. Usually, frequency synchronization in CO-OFDM systems is performed by detecting the phase shift in the time domain. In this approach, there is a trade-off between estimation accuracy and range. Considering that the integer frequency offset (IFO) contributes the major part of the frequency offset, a more efficient method to estimate the IFO is in demand. By detecting the IFO-induced circular channel rotation (CCR), the frequency offset can be directly estimated after the fast Fourier transform (FFT). In this paper, a circular acquisition offset frequency and timing synchronization (CAO-FTS) scheme is proposed. A specially-designed frequency domain pseudo noise (PN) sequence is used for CCR detection and timing synchronization. Full-range frequency offset compensation and non-plateau timing synchronization are experimentally demonstrated in the presence of fiber dispersion. Based on CAO-FTS, a 16.9 Gb/s CO-OFDM signal is successfully delivered over a span of 80-km single mode fiber.

  7. Joint Bearing and Range Estimation of Multiple Objects from Time-Frequency Analysis.

    PubMed

    Liu, Jeng-Cheng; Cheng, Yuang-Tung; Hung, Hsien-Sen

    2018-01-19

Direction-of-arrival (DOA) and range estimation is an important issue in sonar signal processing. In this paper, a novel approach using the Hilbert-Huang transform (HHT) is proposed for joint bearing and range estimation of multiple targets based on a uniform linear array (ULA) of hydrophones. The ULA is based on micro-electro-mechanical systems (MEMS) technology and thus has the attractive features of small size, high sensitivity, and low cost, making it suitable for Autonomous Underwater Vehicle (AUV) operations. The proposed target localization method has the following advantages: only a single snapshot of data is needed, and real-time processing is feasible. The proposed algorithm transforms a very complicated nonlinear estimation problem into a simple, nearly linear one via time-frequency distribution (TFD) theory and is verified with the HHT. Theoretical discussion of the resolution issue is also provided to facilitate the design of a MEMS sensor with high sensitivity. Simulation results are shown to verify the effectiveness of the proposed method.

  8. Wideband Direction of Arrival Estimation in the Presence of Unknown Mutual Coupling

    PubMed Central

    Li, Weixing; Zhang, Yue; Lin, Jianzhi; Guo, Rui; Chen, Zengping

    2017-01-01

    This paper investigates a subarray-based algorithm for direction of arrival (DOA) estimation with a wideband uniform linear array (ULA) in the presence of frequency-dependent mutual coupling effects. Based on the Toeplitz structure of the mutual coupling matrices, the whole array is divided into a middle subarray and an auxiliary subarray. A two-sided correlation transformation is then applied to the correlation matrix of the middle subarray instead of the whole array. In this way, the mutual coupling effects can be eliminated. Finally, the multiple signal classification (MUSIC) method is utilized to derive the DOAs. For the condition when blind angles exist, we refine the DOA estimation by using a simple approach based on the frequency-dependent mutual coupling matrices (MCMs). The proposed method can achieve high estimation accuracy without any calibration sources. It has a low computational complexity because iterative processing is not required. Simulation results validate the effectiveness and feasibility of the proposed algorithm. PMID:28178177
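
The final MUSIC step of the algorithm can be illustrated with a standard narrowband MUSIC sketch on an ideal ULA; the subarray decoupling of mutual coupling is omitted here, and the array size, source angles, and noise level are assumptions:

```python
import numpy as np

def steering(theta_deg, n_sensors, d_over_lambda=0.5):
    """ULA steering vector for a source at theta_deg (half-wavelength spacing assumed)."""
    k = 2 * np.pi * d_over_lambda * np.sin(np.deg2rad(theta_deg))
    return np.exp(1j * k * np.arange(n_sensors))

rng = np.random.default_rng(1)
n, snapshots = 8, 200
doas = [-20.0, 30.0]                                   # true directions (invented)
A = np.stack([steering(t, n) for t in doas], axis=1)   # array manifold, n x 2
S = rng.standard_normal((2, snapshots)) + 1j * rng.standard_normal((2, snapshots))
noise = rng.standard_normal((n, snapshots)) + 1j * rng.standard_normal((n, snapshots))
X = A @ S + 0.01 * noise

R = X @ X.conj().T / snapshots            # sample covariance
w, V = np.linalg.eigh(R)                  # eigenvalues ascending
En = V[:, : n - len(doas)]                # noise subspace

grid = np.arange(-90.0, 90.0, 0.1)
p = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(t, n)) ** 2 for t in grid])

# pick the two pseudospectrum peaks, masking out the neighborhood of the first
i1 = int(np.argmax(p))
i2 = int(np.argmax(np.where(np.abs(grid - grid[i1]) > 5.0, p, -np.inf)))
est = sorted([grid[i1], grid[i2]])
```

The two-sided correlation transformation in the paper would be applied to R before the eigendecomposition; the MUSIC scan itself is unchanged.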

  9. Experimental measure of arm stiffness during single reaching movements with a time-frequency analysis

    PubMed Central

    Pierobon, Alberto; DiZio, Paul; Lackner, James R.

    2013-01-01

    We tested an innovative method to estimate joint stiffness and damping during multijoint unfettered arm movements. The technique employs impulsive perturbations and a time-frequency analysis to estimate the arm's mechanical properties along a reaching trajectory. Each single impulsive perturbation provides a continuous estimation on a single-reach basis, making our method ideal to investigate motor adaptation in the presence of force fields and to study the control of movement in impaired individuals with limited kinematic repeatability. In contrast with previous dynamic stiffness studies, we found that stiffness varies during movement, achieving levels higher than during static postural control. High stiffness was associated with elevated reflexive activity. We observed a decrease in stiffness and a marked reduction in long-latency reflexes around the reaching movement velocity peak. This pattern could partly explain the difference between the high stiffness reported in postural studies and the low stiffness measured in dynamic estimation studies, where perturbations are typically applied near the peak velocity point. PMID:23945781

  10. Application of at-site peak-streamflow frequency analyses for very low annual exceedance probabilities

    USGS Publications Warehouse

    Asquith, William H.; Kiang, Julie E.; Cohn, Timothy A.

    2017-07-17

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Nuclear Regulatory Commission, has investigated statistical methods for probabilistic flood hazard assessment to provide guidance on very low annual exceedance probability (AEP) estimation of peak-streamflow frequency and the quantification of corresponding uncertainties using streamgage-specific data. The term “very low AEP” implies exceptionally rare events defined as those having AEPs less than about 0.001 (or 1 × 10^-3 in scientific notation, or for brevity 10^-3). Such low AEPs are of great interest to those involved with peak-streamflow frequency analyses for critical infrastructure, such as nuclear power plants. Flood frequency analyses at streamgages are most commonly based on annual instantaneous peak streamflow data and a probability distribution fit to these data. The fitted distribution provides a means to extrapolate to very low AEPs. Within the United States, the Pearson type III probability distribution, when fit to the base-10 logarithms of streamflow, is widely used, but other distribution choices exist. The USGS-PeakFQ software, implementing the Pearson type III within the Federal agency guidelines of Bulletin 17B (method of moments) and updates to the expected moments algorithm (EMA), was specially adapted for an “Extended Output” user option to provide estimates at selected AEPs from 10^-3 to 10^-6. Parameter estimation methods, in addition to product moments and EMA, include L-moments, maximum likelihood, and maximum product of spacings (maximum spacing estimation). This study comprehensively investigates multiple distributions and parameter estimation methods for two USGS streamgages (01400500 Raritan River at Manville, New Jersey, and 01638500 Potomac River at Point of Rocks, Maryland).
The results of this study specifically involve the four methods for parameter estimation and up to nine probability distributions, including the generalized extreme value, generalized log-normal, generalized Pareto, and Weibull. Uncertainties in streamflow estimates for corresponding AEP are depicted and quantified as two primary forms: quantile (aleatoric [random sampling] uncertainty) and distribution-choice (epistemic [model] uncertainty). Sampling uncertainties of a given distribution are relatively straightforward to compute from analytical or Monte Carlo-based approaches. Distribution-choice uncertainty stems from choices of potentially applicable probability distributions for which divergence among the choices increases as AEP decreases. Conventional goodness-of-fit statistics, such as Cramér-von Mises, and L-moment ratio diagrams are demonstrated in order to hone distribution choice. The results generally show that distribution choice uncertainty is larger than sampling uncertainty for very low AEP values.
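
The extrapolation idea can be sketched with a method-of-moments log-Pearson type III fit in the spirit of Bulletin 17B (not the EMA variant, and on a hypothetical record rather than the studied streamgages):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# hypothetical 80-year record of annual peak flows, stored as base-10 logarithms
log_peaks = rng.normal(loc=4.0, scale=0.25, size=80)

# method-of-moments fit of a Pearson type III to the log10 peaks
m = log_peaks.mean()
s = log_peaks.std(ddof=1)
g = stats.skew(log_peaks, bias=False)

aep = 1e-4                                  # very low annual exceedance probability
# scipy's standardized pearson3 (zero mean, unit variance, given skew)
# supplies the frequency factor for the chosen non-exceedance probability
k = stats.pearson3.ppf(1.0 - aep, g)
q = 10 ** (m + k * s)                       # discharge estimate at AEP 1e-4
```

Sampling uncertainty of such a quantile could then be quantified by bootstrap resampling of `log_peaks`, while refitting other distributions (GEV, generalized Pareto, Weibull) to the same record would expose the distribution-choice uncertainty the study emphasizes.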

  11. Ambient Vibration Testing for Story Stiffness Estimation of a Heritage Timber Building

    PubMed Central

    Min, Kyung-Won; Kim, Junhee; Park, Sung-Ah; Park, Chan-Soo

    2013-01-01

    This paper investigates the dynamic characteristics of a historic wooden structure by ambient vibration testing, presenting a novel story stiffness estimation methodology for the purpose of vibration-based structural health monitoring. As for the ambient vibration testing, measured structural responses are analyzed by two output-only system identification methods (i.e., frequency domain decomposition and stochastic subspace identification) to estimate modal parameters. The proposed story stiffness estimation methodology is based on an eigenvalue problem derived from a vibratory rigid body model. Using the identified natural frequencies, the eigenvalue problem is efficiently solved and uniquely yields the story stiffness. It is noteworthy that application of the proposed methodology is not necessarily confined to the wooden structure exemplified in the paper. PMID:24227999

  12. Application of wavelet analysis to estimation of parameters of the gravitational-wave signal from a coalescing binary

    NASA Astrophysics Data System (ADS)

    Królak, Andrzej; Trzaskoma, Pawel

    1996-05-01

    Application of wavelet analysis to the estimation of parameters of the broad-band gravitational-wave signal emitted by a binary system is investigated. A method of instantaneous frequency extraction first proposed in this context by Innocent and Vinet is used. The gravitational-wave signal from a binary is investigated from the point of view of signal analysis theory and it is shown that such a signal is characterized by a large time-bandwidth product. This property enables the extraction of frequency modulation from the wavelet transform of the signal. The wavelet transform of the chirp signal from a binary is calculated analytically. Numerical simulations with the noisy chirp signal are performed. The gravitational-wave signal from a binary is taken in the quadrupole approximation and it is buried in noise corresponding to three different values of the signal-to-noise ratio, and the wavelet method to extract the frequency modulation of the signal is applied. Then, from the frequency modulation, the chirp mass parameter of the binary is estimated. It is found that the chirp mass can be estimated to good accuracy, typically of the order of (20/ρ)%, where ρ is the optimal signal-to-noise ratio. It is also shown that the post-Newtonian effects in the gravitational-wave signal from a binary can be discriminated to a satisfactory accuracy.

  13. Visual Estimation of Bacterial Growth Level in Microfluidic Culture Systems.

    PubMed

    Kim, Kyukwang; Kim, Seunggyu; Jeon, Jessie S

    2018-02-03

    Microfluidic devices are an emerging platform for a variety of experiments involving bacterial cell culture, and have advantages including cost and convenience. One inevitable step during bacterial cell culture is the measurement of cell concentration in the channel. The optical density measurement technique is generally used for bacterial growth estimation, but it is not applicable to microfluidic devices due to the small sample volumes in microfluidics. Alternatively, cell counting or colony-forming unit methods may be applied, but these do not work in situ; nor do these methods show measurement results immediately. To this end, we present a new vision-based method to estimate the growth level of the bacteria in microfluidic channels. We use the fast Fourier transform (FFT) to detect the frequency-level change of the microscopic image, exploiting the fact that the microscopic image becomes rough as the number of cells in the field of view increases, adding high frequencies to the spectrum of the image. Two types of microfluidic devices are used to culture bacteria in liquid and agar gel medium, and time-lapsed images are captured. The images obtained are analyzed using the FFT, showing an increase in high-frequency content proportional to the time passed. Furthermore, we apply the developed method in the microfluidic antibiotics susceptibility test by recognizing the regional concentration change of the bacteria that are cultured in the antibiotics gradient. Finally, a deep learning-based data regression is performed on the data obtained by the proposed vision-based method for robust reporting of data.
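
The roughness metric described above (high-frequency spectral energy growing with cell density) can be sketched as follows; the radial cutoff, image size, and the synthetic "smooth" and "rough" fields are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

def high_freq_fraction(img, cutoff=0.25):
    """Fraction of 2-D spectral energy beyond a radial frequency cutoff (cycles/pixel)."""
    F = np.fft.fftshift(np.fft.fft2(img))
    power = np.abs(F) ** 2
    n = img.shape[0]
    fy, fx = np.meshgrid(np.fft.fftshift(np.fft.fftfreq(n)),
                         np.fft.fftshift(np.fft.fftfreq(n)), indexing="ij")
    r = np.sqrt(fx ** 2 + fy ** 2)
    return power[r > cutoff].sum() / power.sum()

n = 128
smooth = np.tile(np.linspace(0.0, 1.0, n), (n, 1))      # sparse culture: smooth field
rough = smooth + 0.5 * rng.standard_normal((n, n))      # dense culture: grainy field
```

A time-lapse series would then be summarized by `high_freq_fraction` per frame, with the rising curve tracking bacterial growth.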

  14. Using Empirical Mode Decomposition to process Marine Magnetotelluric Data

    NASA Astrophysics Data System (ADS)

    Chen, J.; Jegen, M. D.; Heincke, B. H.; Moorkamp, M.

    2014-12-01

    Magnetotelluric (MT) data always exhibit nonstationarity due to variations of the source mechanisms, which cause MT variations on different time and spatial scales. An additional non-stationary component is introduced through noise, which is particularly pronounced in marine MT data through motion-induced noise caused by time-varying wave motion and currents. We present a new heuristic method for dealing with the non-stationarity of MT time series based on Empirical Mode Decomposition (EMD). The EMD method is used in combination with the derived instantaneous spectra to determine impedance estimates. The procedure is tested on synthetic and field MT data. In synthetic tests, the reliability of impedance estimates from the EMD-based method is assessed by comparison with the synthetic responses of a 1D layered model. To examine how the estimates are affected by noise, stochastic stationary and non-stationary noise are added to the time series. Comparisons reveal that estimates by the EMD-based method are generally more stable than those by simple Fourier analysis. Furthermore, the results are compared to those derived by a commonly used Fourier-based MT data processing software (BIRRP), which incorporates additional sophisticated robust estimations to deal with noise issues. The results from both methods are comparable, even though no robust estimation procedures are implemented in the EMD approach at the present stage. The processing scheme is then applied to marine MT field data. Testing is performed on short, relatively quiet segments of several data sets, as well as on long segments of data with many non-stationary noise packages. Compared to BIRRP, the new method gives comparable or better impedance estimates; furthermore, the estimates are extended to lower frequencies, and less noise-biased estimates with smaller error bars are obtained at high frequencies.
The new processing methodology represents an important step towards deriving a better resolved Earth model to greater depth underneath the seafloor.
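
EMD-based processing pairs the sifted modes with the Hilbert transform to obtain instantaneous spectra; the second half of that pipeline can be sketched on a single mono-component signal (so no sifting is needed, and the amplitude-modulated test signal is invented):

```python
import numpy as np
from scipy.signal import hilbert

fs = 100.0                                   # sampling rate, Hz (assumed)
t = np.arange(0.0, 10.0, 1.0 / fs)
# a 5 Hz carrier with slow amplitude modulation, standing in for one EMD mode
x = (1.0 + 0.3 * np.sin(2 * np.pi * 0.2 * t)) * np.sin(2 * np.pi * 5.0 * t)

z = hilbert(x)                               # analytic signal
amp = np.abs(z)                              # instantaneous amplitude
phase = np.unwrap(np.angle(z))
inst_freq = np.diff(phase) * fs / (2 * np.pi)   # instantaneous frequency, Hz
f_med = float(np.median(inst_freq))
```

In the full method each intrinsic mode function would get its own `amp` and `inst_freq`, and the impedance estimates would be formed from these instantaneous spectra instead of windowed Fourier coefficients.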

  15. The Value Estimation of an HFGW Frequency Time Standard for Telecommunications Network Optimization

    NASA Astrophysics Data System (ADS)

    Harper, Colby; Stephenson, Gary

    2007-01-01

    The emerging technology of gravitational wave control is used to augment a communication system using a development roadmap suggested in Stephenson (2003) for applications emphasized in Baker (2005). In the present paper, consideration is given to the value of a High Frequency Gravitational Wave (HFGW) channel purely as a method of frequency and time reference distribution for use within conventional Radio Frequency (RF) telecommunications networks. Specifically, the native value of conventional telecommunications networks may be optimized by using an unperturbed frequency time standard (FTS) to (1) improve terminal navigation and Doppler estimation performance via improved time difference of arrival (TDOA) from a universal time reference, and (2) improve acquisition speed, coding efficiency, and dynamic bandwidth efficiency through the use of a universal frequency reference. A model utilizing a discounted cash flow technique provides an estimate of the additional value that HFGW FTS technology could bring to a mixed-technology HFGW/RF network. By applying a simple net present value analysis with supporting reference valuations to such a network, it is demonstrated that an HFGW FTS could create a sizable improvement within an otherwise conventional RF telecommunications network. Our conservative model establishes a low-side value estimate of approximately 50B USD Net Present Value for an HFGW FTS service, with reasonable potential high-side values reaching significant multiples of this low-side floor.

  16. Improved calibration technique for in vivo proton MRS thermometry for brain temperature measurement.

    PubMed

    Zhu, M; Bashir, A; Ackerman, J J; Yablonskiy, D A

    2008-09-01

    The most common MR-based approach to noninvasively measure brain temperature relies on the linear relationship between the (1)H MR resonance frequency of tissue water and the tissue's temperature. Herein we provide the most accurate in vivo assessment of this relationship available thus far. It was derived by acquiring in vivo MR spectra from a rat brain using a high-field (11.74 Tesla [T]) MRI scanner and a single-voxel MR spectroscopy technique based on a LASER pulse sequence. Data were analyzed using three different methods to estimate the (1)H resonance frequencies of water and the metabolites NAA, Cho, and Cr, which are used as temperature-independent internal (frequency) references. Standard modeling of the frequency-domain data as composed of resonances characterized by Lorentzian line shapes gave the tightest resonance-frequency versus temperature correlation. An analysis of the uncertainty in temperature estimation has shown that the major limiting factor is the error in estimating the metabolite frequency. For example, for a metabolite resonance linewidth of 8 Hz, a signal sampling rate of 2 Hz and an SNR of 5, an accuracy of approximately 0.5 degrees C can be achieved at a magnetic field of 3T. For comparison, in the current study conducted at 11.74T, the temperature estimation error was approximately 0.1 degrees C.

  17. Models and methods to characterize site amplification from a pair of records

    USGS Publications Warehouse

    Safak, E.

    1997-01-01

    The paper presents a tutorial review of the models and methods that are used to characterize site amplification from the pairs of rock- and soil-site records, and introduces some new techniques with better theoretical foundations. The models and methods discussed include spectral and cross-spectral ratios, spectral ratios for downhole records, response spectral ratios, constant amplification factors, parametric models, physical models, and time-varying filters. An extensive analytical and numerical error analysis of spectral and cross-spectral ratios shows that probabilistically cross-spectral ratios give more reliable estimates of site amplification. Spectral ratios should not be used to determine site amplification from downhole-surface recording pairs because of the feedback in the downhole sensor. Response spectral ratios are appropriate for low frequencies, but overestimate the amplification at high frequencies. The best method to be used depends on how much precision is required in the estimates.
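
The spectral and cross-spectral ratio estimators discussed above can be sketched for a synthetic rock/soil pair; the two-pole resonator standing in for the site response, and all parameters, are assumptions:

```python
import numpy as np
from scipy.signal import lfilter, welch, csd

rng = np.random.default_rng(4)
fs, n = 100.0, 4096
rock = rng.standard_normal(n)          # rock-site record (white, for illustration)

# soil site modeled as the rock motion passed through a simple resonant filter
f0, damping = 5.0, 0.1                 # hypothetical site resonance and damping
w0 = 2 * np.pi * f0 / fs
r = 1.0 - damping * w0                 # pole radius of a two-pole resonator
b, a = [1.0], [1.0, -2 * r * np.cos(w0), r ** 2]
soil = lfilter(b, a, rock)

f, Prr = welch(rock, fs=fs, nperseg=512)
_, Pss = welch(soil, fs=fs, nperseg=512)
_, Psr = csd(soil, rock, fs=fs, nperseg=512)

spectral_ratio = np.sqrt(Pss / Prr)    # conventional spectral ratio
cross_ratio = np.abs(Psr) / Prr        # cross-spectral ratio
f_peak = float(f[np.argmax(spectral_ratio)])
```

With incoherent noise added to the soil record, the cross-spectral ratio would suppress the noise bias that inflates the plain spectral ratio, which is the probabilistic advantage the paper identifies.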

  18. Shear Wave Wavefront Mapping Using Ultrasound Color Flow Imaging.

    PubMed

    Yamakoshi, Yoshiki; Kasahara, Toshihiro; Iijima, Tomohiro; Yuminaka, Yasushi

    2015-10-01

    A wavefront reconstruction method for a continuous shear wave is proposed. The method uses ultrasound color flow imaging (CFI) to detect the shear wave's wavefront. When the shear wave vibration frequency satisfies the required frequency condition and the displacement amplitude satisfies the displacement amplitude condition, zero and maximum flow velocities appear at the shear wave vibration phases of zero and π rad, respectively. These specific flow velocities produce the shear wave's wavefront map in CFI. An important feature of this method is that the shear wave propagation is observed in real time without addition of extra functions to the ultrasound imaging system. The experiments are performed using a 6.5 MHz CFI system. The shear wave is excited by a multilayer piezoelectric actuator. In a phantom experiment, the shear wave velocities estimated using the proposed method and those estimated using a system based on displacement measurement show good agreement. © The Author(s) 2015.

  19. Sea level estimate from multi-frequency signal-to-noise ratio data collected by a single geodetic receiver

    NASA Astrophysics Data System (ADS)

    Roussel, Nicolas; Frappart, Frédéric; Ramillien, Guillaume; Darrozes, José; Cornu, Gwendolyne; Koummarasy, Khanithalath

    2016-04-01

    GNSS-Reflectometry (GNSS-R) altimetry has demonstrated a strong potential for sea level monitoring. The Interference Pattern Technique (IPT), based on the analysis of the Signal-to-Noise Ratio (SNR) estimated by a GNSS receiver, presents the main advantage of being applicable everywhere using a single geodetic antenna and receiver, transforming them into real tide gauges. This technique has already been tested in various configurations of acquisition of surface-reflected GNSS signals with an accuracy of a few centimeters. Nevertheless, the classical SNR analysis method for estimating the reflecting surface-antenna height is limited by an approximation: the vertical velocity of the reflecting surface must be negligible. The authors present a significant improvement of the SNR technique to solve this problem and broaden the scope of SNR-based tide monitoring. The performance achieved on the different GNSS frequency bands (L1, L2 and L5) is analyzed. The method is based on a Least-Mean Square Resolution Method (LSM), combining simultaneous measurements from different GNSS constellations (GPS, GLONASS), which makes it possible to take the dynamics of the surface into account. It was validated in situ [1], with an antenna placed 60 meters above the Atlantic Ocean surface with variations reaching ±3 meters, and an amplitude rate of the semi-diurnal tide up to 0.5 mm/s. Over the three months of SNR records on the L1 frequency band for sea level determination, we found linear correlations of 0.94 when comparing with a classical tide gauge record. Our SNR-based time series was also compared to a theoretical tide model, and the amplitudes and phases of the main astronomical periods (6-, 12- and 24-h) were detected very well. Waves and swell are also likely to be detected. While the validity of our method is already well established with the L1 band [1], the aim of our current study is to analyze the results obtained with the other GNSS frequency bands: L2 and L5.
The L1 band seems to provide the best sea level estimation, but the combination of SNR data from each frequency increases the number of observables and thus the quality of the final estimation. [1] N. Roussel, G. Ramillien, F. Frappart, J. Darrozes, A. Gay, R. Biancale, N. Striebig, V. Hanquiez, X. Bertin, D. Allain: "Sea level monitoring and sea state estimate using a single geodetic receiver", Remote Sensing of Environment 171 (2015) 261-277.
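
The core of the classical SNR analysis (the multipath oscillation frequency versus sin(elevation) is proportional to the antenna height) can be sketched with a Lomb-Scargle periodogram; this is the static, single-frequency case rather than the paper's LSM extension for a moving surface, and all parameters are invented:

```python
import numpy as np
from scipy.signal import lombscargle

lam = 0.1903                     # GPS L1 wavelength, m
h_true = 5.0                     # antenna height above the reflecting surface, m

elev = np.deg2rad(np.linspace(5.0, 25.0, 400))   # satellite elevation sweep
x = np.sin(elev)
# detrended SNR oscillates as cos(4*pi*h/lambda * sin(elevation)); noise-free sketch
snr = np.cos(4 * np.pi * h_true / lam * x)
snr -= snr.mean()

heights = np.linspace(1.0, 10.0, 500)
omegas = 4 * np.pi * heights / lam               # angular frequencies vs sin(elevation)
pgram = lombscargle(x, snr, omegas)
h_hat = float(heights[np.argmax(pgram)])
```

The Lomb-Scargle formulation matters because `x = sin(elevation)` is irregularly sampled in time; the height whose implied frequency maximizes the periodogram is the altimetric estimate.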

  20. Ionospheric irregularity characteristics from quasiperiodic structure in the radio wave scintillation

    NASA Astrophysics Data System (ADS)

    Chen, K. Y.; Su, S. Y.; Liu, C. H.; Basu, S.

    2005-06-01

    Quasiperiodic (QP) diffraction patterns in scintillation patches have been known to correlate strongly with the edge structures of a plasma bubble (Franke et al., 1984). A new time-frequency analysis method, the Hilbert-Huang transform (HHT), has been applied to analyze the scintillation data taken at Ascension Island to understand the characteristics of the corresponding ionospheric irregularities. The HHT method enables us to extract the quasiperiodic diffraction signals embedded inside the scintillation data and to obtain the characteristics of such diffraction signals. The cross correlation of the two sets of diffraction signals received by two stations at each end of Ascension Island indicates that the density irregularity pattern that causes the diffraction pattern should have an eastward drift velocity of ˜130 m/s. The HHT analysis of the instantaneous frequency in the QP diffraction patterns also reveals some frequency shifts in their peak frequencies. For the QP diffraction pattern caused by the leading edge of the large density gradient at the east wall of a structured bubble, an ascending note in the peak frequency is observed, and for the trailing edge a descending note is observed. The linear change in the transient of the peak frequency in the QP diffraction pattern is consistent with the theory and the simulation result of Franke et al. An estimate of the slope of the transient frequency allows us to identify the locations of the plasma walls and to estimate the east-west scale of the irregularity; in our case we obtain about 24 km for the east-west scale. Furthermore, the height of the density irregularities that cause the diffraction pattern is estimated to be between 310 and 330 km, that is, around the F peak during the observation.

  1. Q estimation of seismic data using the generalized S-transform

    NASA Astrophysics Data System (ADS)

    Hao, Yaju; Wen, Xiaotao; Zhang, Bo; He, Zhenhua; Zhang, Rui; Zhang, Jinming

    2016-12-01

    Quality factor, Q, is a parameter that characterizes the energy dissipation during seismic wave propagation. The reservoir pore space is one of the main factors that affect the value of Q. In particular, when the pore space is filled with oil or gas, the rock usually exhibits a relatively low Q value. Such a low Q value has been used as a direct hydrocarbon indicator by many researchers. The conventional Q estimation method based on the spectral ratio suffers from the problem of waveform tuning; hence, many researchers have introduced time-frequency analysis techniques to tackle this problem. Unfortunately, the window functions adopted in time-frequency analysis algorithms such as the continuous wavelet transform (CWT) and S-transform (ST) contaminate the amplitude spectra because the seismic signal is multiplied by the window functions during time-frequency decomposition. The basic assumption of the spectral ratio method is that there is a linear relationship between the natural logarithmic spectral ratio and frequency. However, this assumption does not hold if we take the influence of the window functions into consideration. In this paper, we first employ a recently developed two-parameter generalized S-transform (GST) to obtain the time-frequency spectra of seismic traces. We then deduce the non-linear relationship between the natural logarithmic spectral ratio and frequency. Finally, we obtain a linear relationship between the natural logarithmic spectral ratio and a newly defined parameter γ by ignoring the negligible second-order term. The gradient of this linear relationship is 1/Q. Here, the parameter γ is a function of frequency and the source wavelet. Numerical examples for VSP and post-stack reflection data confirm that our algorithm is capable of yielding accurate results. The Q values estimated from field data acquired in western China agree reasonably well with oil-producing well locations.
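
The linear relationship underlying conventional spectral-ratio Q estimation can be sketched as follows, on synthetic spectra with an invented interval Q and travel time (the paper's GST-based window correction is not reproduced):

```python
import numpy as np

Q_true, dt_travel = 50.0, 0.5          # interval Q and travel time (s), hypothetical
f = np.linspace(5.0, 60.0, 100)        # usable frequency band, Hz

A1 = np.exp(-(f / 40.0) ** 2)          # reference amplitude spectrum (arbitrary wavelet)
# attenuation over the interval: A2 = A1 * exp(-pi * f * dt / Q)
A2 = A1 * np.exp(-np.pi * f * dt_travel / Q_true)

# linear fit of ln(A2/A1) against frequency; the slope equals -pi*dt/Q
slope, intercept = np.polyfit(f, np.log(A2 / A1), 1)
Q_hat = -np.pi * dt_travel / slope
```

The paper's point is that windowed time-frequency spectra violate this linearity, which is why the fit is recast against the wavelet-dependent parameter γ instead of raw frequency.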

  2. Flood-frequency relations for urban streams in Georgia; 1994 update

    USGS Publications Warehouse

    Inman, Ernest J.

    1995-01-01

    A statewide study of flood magnitude and frequency in urban areas of Georgia was made to develop methods of estimating flood characteristics at ungaged urban sites. A knowledge of the magnitude and frequency of floods is needed for the design of highway drainage structures, establishing flood-insurance rates, and other uses by urban planners and engineers. A U.S. Geological Survey rainfall-runoff model was calibrated for 65 urban drainage basins ranging in size from 0.04 to 19.1 square miles in 10 urban areas of Georgia. Rainfall-runoff data were collected for a period of 5 to 7 years at each station, beginning in 1973 in Metropolitan Atlanta and ending in 1993 in Thomasville, Ga. Calibrated models were used to synthesize long-term annual flood peak discharges for these basins from existing long-term rainfall records. The 2- to 500-year flood-frequency estimates were developed for each basin by fitting a Pearson Type III frequency distribution curve to the logarithms of these annual peak discharges. Multiple-regression analyses were used to define relations between the station flood-frequency data and several physical basin characteristics, of which drainage area and total impervious area were the most statistically significant. Using these regression equations and basin characteristics, the magnitude and frequency of floods at ungaged urban basins can be estimated throughout Georgia.
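
The multiple-regression step can be sketched in log space on synthetic basins; the power-law exponents and scatter below are invented and are not the report's Georgia equations:

```python
import numpy as np

rng = np.random.default_rng(9)
n_basins = 65
area = 10 ** rng.uniform(-1.4, 1.3, n_basins)    # drainage area, square miles
imperv = rng.uniform(5.0, 60.0, n_basins)        # total impervious area, percent

# synthetic 100-year peaks following an assumed power law plus lognormal scatter
q100 = 500.0 * area ** 0.7 * imperv ** 0.3 * 10 ** (0.05 * rng.standard_normal(n_basins))

# multiple regression in log space: log10 Q = log10 a + b*log10 A + c*log10 IA
X = np.column_stack([np.ones(n_basins), np.log10(area), np.log10(imperv)])
coef, *_ = np.linalg.lstsq(X, np.log10(q100), rcond=None)
```

An ungaged-site estimate is then `10 ** (X_new @ coef)` for that site's characteristics, which is exactly how such regional equations are applied in practice.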

  3. Wigner-Hough/Radon Transform for GPS Post-Correlation Integration (Preprint)

    DTIC Science & Technology

    2007-09-01

    The Wigner-Ville distribution (WVD) is a well-known method to estimate instantaneous frequency [Barbarossa, 1996]. In this method, the WVD is used to represent the signal energy in the time-frequency plane. For a signal x(t), its Wigner-Ville distribution is computed as W(t, f) = ∫ x(t + τ/2) x*(t − τ/2) e^(−j2πfτ) dτ, where * stands for complex conjugation.
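
A direct discrete implementation of this definition can be sketched as follows (an analytic input is assumed; the frequency bins are k/(2N) cycles per sample because the effective lag variable is 2m):

```python
import numpy as np

def wvd(x):
    """Discrete Wigner-Ville distribution of an analytic signal x."""
    N = len(x)
    k = np.arange(N)                          # frequency bin index; f_k = k/(2N)
    W = np.zeros((N, N))
    for n in range(N):
        m_max = min(n, N - 1 - n)             # largest symmetric lag available at time n
        m = np.arange(-m_max, m_max + 1)
        kernel = x[n + m] * np.conj(x[n - m])  # instantaneous autocorrelation
        # DFT over the lag 2m gives the frequency axis at this time instant
        W[n, :] = np.real(np.exp(-2j * np.pi * np.outer(k, m) / N) @ kernel)
    return W

N = 64
x = np.exp(2j * np.pi * 0.125 * np.arange(N))  # analytic tone at 0.125 cycles/sample
W = wvd(x)
freqs = np.arange(N) / (2 * N)
f_peak = float(freqs[int(np.argmax(W[N // 2]))])
```

For a chirp, the ridge of W tracks the instantaneous frequency, which is what makes the WVD useful for post-correlation integration of Doppler-shifted GPS signals.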

  4. Population genetics inference for longitudinally-sampled mutants under strong selection.

    PubMed

    Lacerda, Miguel; Seoighe, Cathal

    2014-11-01

    Longitudinal allele frequency data are becoming increasingly prevalent. Such samples permit statistical inference of the population genetics parameters that influence the fate of mutant variants. To infer these parameters by maximum likelihood, the mutant frequency is often assumed to evolve according to the Wright-Fisher model. For computational reasons, this discrete model is commonly approximated by a diffusion process that requires the assumption that the forces of natural selection and mutation are weak. This assumption is not always appropriate. For example, mutations that impart drug resistance in pathogens may evolve under strong selective pressure. Here, we present an alternative approximation to the mutant-frequency distribution that does not make any assumptions about the magnitude of selection or mutation and is much more computationally efficient than the standard diffusion approximation. Simulation studies are used to compare the performance of our method to that of the Wright-Fisher and Gaussian diffusion approximations. For large populations, our method is found to provide a much better approximation to the mutant-frequency distribution when selection is strong, while all three methods perform comparably when selection is weak. Importantly, maximum-likelihood estimates of the selection coefficient are severely attenuated when selection is strong under the two diffusion models, but not when our method is used. This is further demonstrated with an application to mutant-frequency data from an experimental study of bacteriophage evolution. We therefore recommend our method for estimating the selection coefficient when the effective population size is too large to utilize the discrete Wright-Fisher model. Copyright © 2014 by the Genetics Society of America.
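
The discrete Wright-Fisher model with selection, which the approximation above targets, can be simulated directly in a few lines; the population size, selection coefficient, and initial frequency are invented:

```python
import numpy as np

rng = np.random.default_rng(5)

def wright_fisher(p0, s, N, generations, rng):
    """Discrete Wright-Fisher trajectory of a mutant with selection coefficient s."""
    p = p0
    traj = [p]
    for _ in range(generations):
        w = p * (1 + s) / (p * (1 + s) + (1 - p))   # deterministic selection step
        p = rng.binomial(N, w) / N                  # binomial resampling (genetic drift)
        traj.append(p)
    return np.array(traj)

# strong selection (s = 1), as for a drug-resistance mutation under treatment
traj = wright_fisher(p0=0.05, s=1.0, N=10_000, generations=20, rng=rng)
```

Likelihood-based inference evaluates the probability of an observed frequency series under transition densities like the binomial step above; the diffusion approximations replace that step with a Gaussian or Wright-Fisher diffusion, which is exactly where the weak-selection assumption enters.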

  5. The identification of multi-cave combinations in carbonate reservoirs based on sparsity constraint inverse spectral decomposition

    NASA Astrophysics Data System (ADS)

    Li, Qian; Di, Bangrang; Wei, Jianxin; Yuan, Sanyi; Si, Wenpeng

    2016-12-01

    Sparsity constraint inverse spectral decomposition (SCISD) is a time-frequency analysis method based on the convolution model, in which minimizing the l1 norm of the time-frequency spectrum of the seismic signal is adopted as a sparsity constraint term. The SCISD method has higher time-frequency resolution and a more concentrated time-frequency distribution than conventional spectral decomposition methods, such as the short-time Fourier transform (STFT), continuous wavelet transform (CWT) and S-transform. Owing to these good features, the SCISD method has gradually been used in low-frequency anomaly detection, horizon identification and random noise reduction for sandstone and shale reservoirs. However, it has not yet been used in carbonate reservoir prediction. The carbonate fractured-vuggy reservoir is the major hydrocarbon reservoir in the Halahatang area of the Tarim Basin, north-west China. If reasonable predictions of the types of multi-cave combinations are not made, the seismic responses of the multi-cave combinations may be interpreted incorrectly, resulting in large errors in reserves estimation of the carbonate reservoir. In this paper, the energy and phase spectra of the SCISD are applied to identify multi-cave combinations in carbonate reservoirs. Examples on physical model data and real seismic data illustrate that the SCISD method can detect the combination types and the number of caves in multi-cave combinations, and can provide a favourable basis for the subsequent reservoir prediction and quantitative estimation of the cave-type carbonate reservoir volume.
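
The l1-regularized inversion of a convolution model at the heart of SCISD can be sketched in one dimension as ISTA-style sparse spike deconvolution; the Ricker-like wavelet, spike positions, and regularization weight are all invented, and this is not the authors' SCISD solver:

```python
import numpy as np

rng = np.random.default_rng(10)
n = 256
true = np.zeros(n)
true[[50, 120, 200]] = [1.0, -0.8, 0.6]    # sparse reflectivity (invented)

# Ricker wavelet and its convolution matrix (the forward operator of the model)
tt = np.arange(-32, 33) * 0.004            # 4 ms sampling (assumed)
fm = 30.0                                  # dominant frequency, Hz
w = (1 - 2 * (np.pi * fm * tt) ** 2) * np.exp(-(np.pi * fm * tt) ** 2)
D = np.array([np.convolve(np.eye(n)[i], w, mode="same") for i in range(n)]).T

y = D @ true + 0.01 * rng.standard_normal(n)

# ISTA: gradient step on the data misfit, then soft-thresholding (the l1 constraint)
L = np.linalg.norm(D, 2) ** 2              # Lipschitz constant of the gradient
a, lam = np.zeros(n), 0.05
for _ in range(500):
    g = a + D.T @ (y - D @ a) / L
    a = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)
```

In SCISD the unknown is a full time-frequency panel and the dictionary holds wavelets of many frequencies, but the misfit-plus-soft-threshold iteration is the same machinery.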

  6. Spread-Spectrum Carrier Estimation With Unknown Doppler Shift

    NASA Technical Reports Server (NTRS)

    DeLeon, Phillip L.; Scaife, Bradley J.

    1998-01-01

    We present a method for the frequency estimation of a BPSK modulated, spread-spectrum carrier with unknown Doppler shift. The approach relies on a classic periodogram in conjunction with a spectral matched filter. Simulation results indicate accurate carrier estimation with processing gains near 40. A DSP-based prototype has been implemented for real-time carrier estimation for use in New Mexico State University's proposal for NASA's Demand Assignment Multiple Access service.
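
The periodogram stage can be sketched as follows; the paper's spectral matched filter is not specified here, so the common square-and-FFT trick is used instead to strip the BPSK modulation, and all signal parameters are invented:

```python
import numpy as np

rng = np.random.default_rng(6)
fs, n = 1_000.0, 4096
f_carrier = 123.4                      # unknown carrier plus Doppler, Hz (hypothetical)
t = np.arange(n) / fs
chips = rng.choice([-1.0, 1.0], n)     # BPSK spreading, one chip per sample (toy model)
x = chips * np.cos(2 * np.pi * f_carrier * t) + 0.5 * rng.standard_normal(n)

# squaring removes the +/-1 modulation, leaving a tone at twice the carrier
p = np.abs(np.fft.rfft(x ** 2)) ** 2
freqs = np.fft.rfftfreq(n, 1 / fs)
p[0] = 0.0                             # discard the DC term produced by squaring
f_hat = float(freqs[np.argmax(p)]) / 2.0
```

Resolution is fs/n per FFT bin; longer coherent blocks sharpen the peak, which is how large processing gains against the spreading and noise are obtained.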

  7. Variable is better than invariable: sparse VSS-NLMS algorithms with application to adaptive MIMO channel estimation.

    PubMed

    Gui, Guan; Chen, Zhang-xin; Xu, Li; Wan, Qun; Huang, Jiyan; Adachi, Fumiyuki

    2014-01-01

    The channel estimation problem is one of the key technical issues in sparse frequency-selective fading multiple-input multiple-output (MIMO) communication systems using the orthogonal frequency division multiplexing (OFDM) scheme. To estimate sparse MIMO channels, sparse invariable step-size normalized least mean square (ISS-NLMS) algorithms have been applied to adaptive sparse channel estimation (ASCE). It is well known that the step size is a critical parameter which controls three aspects: algorithm stability, estimation performance, and computational cost. However, traditional methods are prone to estimation performance loss because an ISS cannot balance the three aspects simultaneously. In this paper, we propose two stable sparse variable step-size NLMS (VSS-NLMS) algorithms to improve the accuracy of MIMO channel estimators. First, ASCE is formulated in MIMO-OFDM systems. Second, different sparse penalties are introduced into the VSS-NLMS algorithm for ASCE. In addition, the difference between sparse ISS-NLMS algorithms and sparse VSS-NLMS ones is explained and their lower bounds are also derived. Finally, to verify the effectiveness of the proposed algorithms for ASCE, selected simulation results are shown to demonstrate that the proposed sparse VSS-NLMS algorithms can achieve better estimation performance than the conventional methods in terms of mean square error (MSE) and bit error rate (BER) metrics.

  8. Variable Is Better Than Invariable: Sparse VSS-NLMS Algorithms with Application to Adaptive MIMO Channel Estimation

    PubMed Central

    Gui, Guan; Chen, Zhang-xin; Xu, Li; Wan, Qun; Huang, Jiyan; Adachi, Fumiyuki

    2014-01-01

    Channel estimation problem is one of the key technical issues in sparse frequency-selective fading multiple-input multiple-output (MIMO) communication systems using orthogonal frequency division multiplexing (OFDM) scheme. To estimate sparse MIMO channels, sparse invariable step-size normalized least mean square (ISS-NLMS) algorithms were applied to adaptive sparse channel estimation (ACSE). It is well known that step-size is a critical parameter which controls three aspects: algorithm stability, estimation performance, and computational cost. However, traditional methods are vulnerable to cause estimation performance loss because ISS cannot balance the three aspects simultaneously. In this paper, we propose two stable sparse variable step-size NLMS (VSS-NLMS) algorithms to improve the accuracy of MIMO channel estimators. First, ASCE is formulated in MIMO-OFDM systems. Second, different sparse penalties are introduced to VSS-NLMS algorithm for ASCE. In addition, difference between sparse ISS-NLMS algorithms and sparse VSS-NLMS ones is explained and their lower bounds are also derived. At last, to verify the effectiveness of the proposed algorithms for ASCE, several selected simulation results are shown to prove that the proposed sparse VSS-NLMS algorithms can achieve better estimation performance than the conventional methods via mean square error (MSE) and bit error rate (BER) metrics. PMID:25089286

  9. A fast estimation of shock wave pressure based on trend identification

    NASA Astrophysics Data System (ADS)

    Yao, Zhenjian; Wang, Zhongyu; Wang, Chenchen; Lv, Jing

    2018-04-01

    In this paper, a fast method based on trend identification is proposed to accurately estimate the shock wave pressure in a dynamic measurement. Firstly, the collected output signal of the pressure sensor is reconstructed by the discrete cosine transform (DCT) to reduce the computational complexity of the subsequent steps. Secondly, empirical mode decomposition (EMD) is applied to decompose the reconstructed signal into several components in different frequency bands, and the last few low-frequency components are chosen to recover the trend of the reconstructed signal; the optimal number of components is determined from the correlation coefficient and the normalized Euclidean distance between the trend and the reconstructed signal. Thirdly, using the areas under the gradient curve of the trend signal, the stable interval, namely the one that produces the minimum area, is easily identified, and the stable value of the output signal is obtained over this interval. Finally, the shock wave pressure is estimated from the stable value of the output signal and the sensitivity of the sensor in the dynamic measurement. A series of shock wave pressure measurements were carried out with a shock tube system to validate the method. The experimental results show that the proposed method works well in shock wave pressure estimation. Furthermore, comparative experiments demonstrate the superiority of the proposed method over existing approaches in both estimation accuracy and computational efficiency.
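
    The trend-then-stable-value idea can be illustrated with a minimal sketch. Here the paper's EMD trend extraction is replaced by a simple truncated-DCT low-pass reconstruction, so the code shows the flavor of the approach rather than the actual algorithm; all signal parameters are made up:

```python
import numpy as np
from scipy.fft import dct, idct

def dct_trend(signal, n_coeffs):
    """Low-frequency trend via truncated DCT reconstruction.

    Keeps only the first n_coeffs DCT-II coefficients; a stand-in here
    for the paper's DCT + EMD trend extraction.
    """
    c = dct(signal, norm='ortho')
    c[n_coeffs:] = 0.0
    return idct(c, norm='ortho')

# Example: a step-like pressure response with noise; the trend's settled
# tail approximates the stable value used with the sensor sensitivity.
rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 1000)
clean = 5.0 * (1.0 - np.exp(-30.0 * t))     # rises, then settles near 5
noisy = clean + 0.3 * rng.standard_normal(t.size)
trend = dct_trend(noisy, 40)
stable_value = trend[-200:].mean()          # mean over the settled interval
```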

  10. An approach to the interpretation of Cole-Davidson and Cole-Cole dielectric functions

    NASA Astrophysics Data System (ADS)

    Iglesias, T. P.; Vilão, G.; Reis, João Carlos R.

    2017-08-01

    Assuming that a dielectric sample can be described by Debye's model at each frequency, a method based on Cole's treatment is proposed for the direct estimation at experimental frequencies of relaxation times and the corresponding static and infinite-frequency permittivities. These quantities and the link between dielectric strength and mean molecular dipole moment at each frequency could be useful to analyze dielectric relaxation processes. The method is applied to samples that follow a Cole-Cole or a Cole-Davidson dielectric function. A physical interpretation of these dielectric functions is proposed. The behavior of relaxation time with frequency can be distinguished between the two dielectric functions. The proposed method can also be applied to samples following a Havriliak-Negami or any other dielectric function. The dielectric relaxation of a nanofluid consisting of graphene nanoparticles dispersed in the oil squalane is reported and discussed within the novel framework.

  11. A bootstrap method for estimating uncertainty of water quality trends

    USGS Publications Warehouse

    Hirsch, Robert M.; Archfield, Stacey A.; DeCicco, Laura

    2015-01-01

    Estimation of the direction and magnitude of trends in surface water quality remains a problem of great scientific and practical interest. The Weighted Regressions on Time, Discharge, and Season (WRTDS) method was recently introduced as an exploratory data analysis tool to provide flexible and robust estimates of water quality trends. This paper enhances the WRTDS method through the introduction of the WRTDS Bootstrap Test (WBT), an extension of WRTDS that quantifies the uncertainty in WRTDS estimates of water quality trends and offers various ways to visualize and communicate these uncertainties. Monte Carlo experiments are applied to estimate the Type I error probabilities for this method. WBT is compared to other water-quality trend-testing methods appropriate for data sets of one to three decades in length with sampling frequencies of 6–24 observations per year. The software to conduct the test is in the EGRETci R package.

  12. Comparisons of primary frequency standards via GPS.

    NASA Astrophysics Data System (ADS)

    Uhrich, P. J.-M.

    The new primary frequency standard of the BNM-LPTF, LPTF FO1, exhibits a frequency accuracy estimated at 3×10^-15. Comparing it with other primary frequency standards therefore requires a method whose stability remains better than 10^-15 between ten hours, during which LPTF FO1 generally remains in continuous operation, and a couple of days, over which the local oscillator against which LPTF FO1 is estimated keeps its frequency at a level of 2×10^-15. The well-known GPS common-view method no longer suffices when using a single-channel receiver: the clock comparison measurements exhibit a frequency stability of a few parts in 10^14 over one day, depending on the distance between the clocks, and the best intrinsic stability level permitted by the GPS signal currently used can be calculated at 7.7×10^-15. But it can be shown that a 4-channel receiver, performing as many regular common views as possible each day, would make it possible to reach 10^-15 on actual measurements. That should also be the case for another option: the use of the carrier phase of the GPS signal, combined with global geodetic computation.

  13. Increasing sensitivity in the measurement of heart rate variability: the method of non-stationary RR time-frequency analysis.

    PubMed

    Melkonian, D; Korner, A; Meares, R; Bahramali, H

    2012-10-01

    A novel method of the time-frequency analysis of non-stationary heart rate variability (HRV) is developed which introduces the fragmentary spectrum as a measure that brings together the frequency content, timing and duration of HRV segments. The fragmentary spectrum is calculated by the similar basis function algorithm. This numerical tool of the time to frequency and frequency to time Fourier transformations accepts both uniform and non-uniform sampling intervals, and is applicable to signal segments of arbitrary length. Once the fragmentary spectrum is calculated, the inverse transform recovers the original signal and reveals accuracy of spectral estimates. Numerical experiments show that discontinuities at the boundaries of the succession of inter-beat intervals can cause unacceptable distortions of the spectral estimates. We have developed a measure that we call the "RR deltagram" as a form of the HRV data that minimises spectral errors. The analysis of the experimental HRV data from real-life and controlled breathing conditions suggests transient oscillatory components as functionally meaningful elements of highly complex and irregular patterns of HRV. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  14. The Estimation of Tree Posterior Probabilities Using Conditional Clade Probability Distributions

    PubMed Central

    Larget, Bret

    2013-01-01

    In this article I introduce the idea of conditional independence of separated subtrees as a principle by which to estimate the posterior probability of trees using conditional clade probability distributions rather than simple sample relative frequencies. I describe an algorithm for these calculations and software which implements these ideas. I show that these alternative calculations are very similar to simple sample relative frequencies for high probability trees but are substantially more accurate for relatively low probability trees. The method allows the posterior probability of unsampled trees to be calculated when these trees contain only clades that are in other sampled trees. Furthermore, the method can be used to estimate the total probability of the set of sampled trees which provides a measure of the thoroughness of a posterior sample. [Bayesian phylogenetics; conditional clade distributions; improved accuracy; posterior probabilities of trees.] PMID:23479066

  15. An efficient quantum algorithm for spectral estimation

    NASA Astrophysics Data System (ADS)

    Steffens, Adrian; Rebentrost, Patrick; Marvian, Iman; Eisert, Jens; Lloyd, Seth

    2017-03-01

    We develop an efficient quantum implementation of an important signal processing algorithm for line spectral estimation: the matrix pencil method, which determines the frequencies and damping factors of signals consisting of finite sums of exponentially damped sinusoids. Our algorithm provides a quantum speedup in a natural regime where the sampling rate is much higher than the number of sinusoid components. Along the way, we develop techniques that are expected to be useful for other quantum algorithms as well—consecutive phase estimations to efficiently make products of asymmetric low rank matrices classically accessible and an alternative method to efficiently exponentiate non-Hermitian matrices. Our algorithm features an efficient quantum-classical division of labor: the time-critical steps are implemented in quantum superposition, while an interjacent step, requiring much fewer parameters, can operate classically. We show that frequencies and damping factors can be obtained in time logarithmic in the number of sampling points, exponentially faster than known classical algorithms.

  16. Salicylate-induced changes in auditory thresholds of adolescent and adult rats.

    PubMed

    Brennan, J F; Brown, C A; Jastreboff, P J

    1996-01-01

    Shifts in auditory intensity thresholds after salicylate administration were examined in postweanling and adult pigmented rats at frequencies ranging from 1 to 35 kHz. A total of 132 subjects from both age levels were tested under two-way active avoidance or one-way active avoidance paradigms. Estimated thresholds were inferred from behavioral responses to presentations of descending and ascending series of intensities for each test frequency value. Reliable threshold estimates were found under both avoidance conditioning methods, and compared to controls, subjects at both age levels showed threshold shifts at selective higher frequency values after salicylate injection, and the extent of shifts was related to salicylate dose level.

  17. Continuous estimates on the earthquake early warning magnitude by use of the near-field acceleration records

    NASA Astrophysics Data System (ADS)

    Li, Jun; Jin, Xing; Wei, Yongxiang; Zhang, Hongcai

    2013-10-01

    In this article, seismic records from Japan's KiK-net are selected to measure the acceleration, displacement, and effective peak acceleration of each record within a certain time after the P-wave arrival; a continuous estimate of the earthquake early warning magnitude is then obtained through statistical analysis, and the Wenchuan earthquake record is used to check the method. The results show that the reliability of the early warning magnitude increases continuously as more seismic information becomes available. The largest residual occurs when acceleration alone is used to fit the magnitude, which may be caused by the rich high-frequency components and the large dispersion of peak values in acceleration records. The influence of high-frequency components can be effectively reduced by using the effective peak acceleration and peak displacement, which markedly reduces the dispersion of the magnitude estimates, although peak displacement is easily affected by long-period drift. Among the components, the residual enlargement in the vertical direction is almost absent, so this article recommends the vertical effective peak acceleration as the preferred quantity for estimating the early warning magnitude. Applying the method to the Wenchuan strong-motion record shows that it quickly, stably, and accurately estimates the early warning magnitude of this earthquake, demonstrating that the method is fully applicable to earthquake early warning.

  18. Calculating weighted estimates of peak streamflow statistics

    USGS Publications Warehouse

    Cohn, Timothy A.; Berenbrock, Charles; Kiang, Julie E.; Mason, Jr., Robert R.

    2012-01-01

    According to the Federal guidelines for flood-frequency estimation, the uncertainty of peak streamflow statistics, such as the 1-percent annual exceedance probability (AEP) flow at a streamgage, can be reduced by combining the at-site estimate with the regional regression estimate to obtain a weighted estimate of the flow statistic. The procedure assumes the estimates are independent, which is reasonable in most practical situations. The purpose of this publication is to describe and make available a method for calculating a weighted estimate from the uncertainty or variance of the two independent estimates.
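
    The weighting described is the standard inverse-variance combination of two independent estimates; a minimal sketch (generic formula with made-up numbers, not the publication's full procedure):

```python
def weighted_estimate(x_site, var_site, x_region, var_region):
    """Variance-weighted combination of two independent flow estimates.

    Returns the weighted estimate and its (smaller) variance.
    """
    w_site = 1.0 / var_site
    w_region = 1.0 / var_region
    x_w = (w_site * x_site + w_region * x_region) / (w_site + w_region)
    var_w = 1.0 / (w_site + w_region)
    return x_w, var_w

# Example: at-site 1-percent AEP estimate of 1200 (variance 400) combined
# with a regional regression estimate of 1000 (variance 100); the result
# falls between the two, closer to the lower-variance estimate.
x_w, var_w = weighted_estimate(1200.0, 400.0, 1000.0, 100.0)
```

    Note that the combined variance (80 in this example) is smaller than either input variance, which is the motivation for weighting in the first place.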

  19. Moving target parameter estimation of SAR after two looks cancellation

    NASA Astrophysics Data System (ADS)

    Gan, Rongbing; Wang, Jianguo; Gao, Xiang

    2005-11-01

    Moving target detection in synthetic aperture radar (SAR) by two-look cancellation is studied. First, two looks are obtained from the first and second halves of the synthetic aperture. After two-look cancellation, moving targets are retained and stationary targets are removed. A constant false alarm rate (CFAR) detector then detects the moving targets. The ground-range velocity and cross-range velocity of a moving target can be obtained from the position shift between the two looks. We developed a method to estimate the cross-range shift due to slant-range motion: the shift is estimated from the Doppler frequency center (DFC), which is in turn estimated with the Wigner-Ville distribution (WVD). Because the range position and the cross-range position before correction are known, estimation of the DFC is much easier and more efficient. Finally, experimental results show that the algorithms perform well and estimate the moving target parameters accurately.

  20. Frequency Domain Analysis of Sensor Data for Event Classification in Real-Time Robot Assisted Deburring

    PubMed Central

    Pappachan, Bobby K; Caesarendra, Wahyu; Tjahjowidodo, Tegoeh; Wijaya, Tomi

    2017-01-01

    Process monitoring using indirect methods relies on sensors. Using sensors to acquire vital process-related information also presents the problem of big-data management and analysis. Because of uncertainty in the frequency of events, a higher sampling rate is often used in real-time monitoring applications to increase the chance of capturing and understanding all possible events related to the process. Advanced signal processing methods are then used to extract meaningful information from the acquired data. In this research work, the power spectral density (PSD) of sensor data acquired at sampling rates between 40 and 51.2 kHz was calculated, and the correlation between PSD and the completed number of cycles/passes is presented. Here, progress in the number of cycles/passes is the event this work aims to classify, and the PSD is computed with Welch's method. A comparison between Welch's method and statistical methods is also discussed. A clear correlation was observed using Welch's estimate to classify the number of cycles/passes. The paper also succeeds in separating the vibration signal generated by the spindle from the vibration signal acquired during the finishing process. PMID:28556809
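
    Welch's method itself is readily available in standard signal processing libraries; a minimal sketch with a synthetic signal (the sampling rate matches one used in the paper, but the tone frequency and noise level are made up):

```python
import numpy as np
from scipy.signal import welch

# Synthetic "sensor" signal: a 2 kHz spindle line in broadband noise,
# sampled at 51.2 kHz (one of the rates used in the paper).
fs = 51200.0
t = np.arange(int(fs)) / fs                  # one second of data
rng = np.random.default_rng(3)
x = np.sin(2 * np.pi * 2000.0 * t) + 0.5 * rng.standard_normal(t.size)

# Welch's estimate: average the periodograms of overlapping windowed
# segments (Hann window and 50% overlap by default in SciPy).
freqs, psd = welch(x, fs=fs, nperseg=4096)
peak_freq = freqs[np.argmax(psd)]            # dominant spectral line
```

    Averaging over segments trades frequency resolution (fs/nperseg per bin) for a much lower variance of the PSD estimate than a single periodogram, which is why it suits noisy monitoring data.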

  1. Improving regression-model-based streamwater constituent load estimates derived from serially correlated data

    USGS Publications Warehouse

    Aulenbach, Brent T.

    2013-01-01

    A regression-model based approach is a commonly used, efficient method for estimating streamwater constituent load when there is a relationship between streamwater constituent concentration and continuous variables such as streamwater discharge, season and time. A subsetting experiment using a 30-year dataset of daily suspended sediment observations from the Mississippi River at Thebes, Illinois, was performed to determine optimal sampling frequency, model calibration period length, and regression model methodology, as well as to determine the effect of serial correlation of model residuals on load estimate precision. Two regression-based methods were used to estimate streamwater loads, the Adjusted Maximum Likelihood Estimator (AMLE), and the composite method, a hybrid load estimation approach. While both methods accurately and precisely estimated loads at the model’s calibration period time scale, precisions were progressively worse at shorter reporting periods, from annually to monthly. Serial correlation in model residuals resulted in observed AMLE precision to be significantly worse than the model calculated standard errors of prediction. The composite method effectively improved upon AMLE loads for shorter reporting periods, but required a sampling interval of at least 15-days or shorter, when the serial correlations in the observed load residuals were greater than 0.15. AMLE precision was better at shorter sampling intervals and when using the shortest model calibration periods, such that the regression models better fit the temporal changes in the concentration–discharge relationship. The models with the largest errors typically had poor high flow sampling coverage resulting in unrepresentative models. Increasing sampling frequency and/or targeted high flow sampling are more efficient approaches to ensure sufficient sampling and to avoid poorly performing models, than increasing calibration period length.

  2. Estimating Tree Height-Diameter Models with the Bayesian Method

    PubMed Central

    Duan, Aiguo; Zhang, Jianguo; Xiang, Congwei

    2014-01-01

    Six candidate height-diameter models were used to analyze the height-diameter relationships. Common methods for estimating height-diameter models have taken the classical (frequentist) approach based on the frequency interpretation of probability, for example, the nonlinear least squares method (NLS) and the maximum likelihood method (ML). The Bayesian method has a distinct advantage over the classical methods in that the parameters to be estimated are regarded as random variables. In this study, the classical and Bayesian methods were each used to estimate the six height-diameter models. Both the classical and the Bayesian method identified the Weibull model as the “best” model for data1. In addition, based on the Weibull model, data2 was used to compare the Bayesian method with informative priors against the Bayesian method with uninformative priors and the classical method. The results showed that the improved prediction accuracy of the Bayesian method led to narrower confidence bands for the predicted values in comparison with the classical method, and the credible bands of the parameters with informative priors were also narrower than those with uninformative priors and the classical method. The estimated posterior distributions of the parameters can be set as new priors when estimating the parameters using data2. PMID:24711733

  3. Estimating tree height-diameter models with the Bayesian method.

    PubMed

    Zhang, Xiongqing; Duan, Aiguo; Zhang, Jianguo; Xiang, Congwei

    2014-01-01

    Six candidate height-diameter models were used to analyze the height-diameter relationships. Common methods for estimating height-diameter models have taken the classical (frequentist) approach based on the frequency interpretation of probability, for example, the nonlinear least squares method (NLS) and the maximum likelihood method (ML). The Bayesian method has a distinct advantage over the classical methods in that the parameters to be estimated are regarded as random variables. In this study, the classical and Bayesian methods were each used to estimate the six height-diameter models. Both the classical and the Bayesian method identified the Weibull model as the "best" model for data1. In addition, based on the Weibull model, data2 was used to compare the Bayesian method with informative priors against the Bayesian method with uninformative priors and the classical method. The results showed that the improved prediction accuracy of the Bayesian method led to narrower confidence bands for the predicted values in comparison with the classical method, and the credible bands of the parameters with informative priors were also narrower than those with uninformative priors and the classical method. The estimated posterior distributions of the parameters can be set as new priors when estimating the parameters using data2.

  4. Radar Imaging Using The Wigner-Ville Distribution

    NASA Astrophysics Data System (ADS)

    Boashash, Boualem; Kenny, Owen P.; Whitehouse, Harper J.

    1989-12-01

    The need for analysis of time-varying signals has led to the formulation of a class of joint time-frequency distributions (TFDs). One of these TFDs, the Wigner-Ville distribution (WVD), has useful properties which can be applied to radar imaging. This paper first discusses the radar equation in terms of the time-frequency representation of the signal received from a radar system. It then presents a method of tomographic reconstruction for time-frequency images to estimate the scattering function of the aircraft. An optical architecture is then discussed for the real-time implementation of the analysis method based on the WVD.

  5. Estimating Acute Viral Hepatitis Infections From Nationally Reported Cases

    PubMed Central

    Liu, Stephen; Roberts, Henry; Jiles, Ruth B.; Holmberg, Scott D.

    2014-01-01

    Objectives. Because only a fraction of patients with acute viral hepatitis A, B, and C are reported through national surveillance to the Centers for Disease Control and Prevention, we estimated the true numbers. Methods. We applied a simple probabilistic model to estimate the fraction of patients with acute hepatitis A, hepatitis B, and hepatitis C who would have been symptomatic, would have sought health care and testing, and would have been reported to health officials in 2011. Results. For hepatitis A, the frequencies of symptoms (85%), care seeking (88%), and reporting (69%) yielded an estimate of 2730 infections (2.0 infections per reported case). For hepatitis B, the frequencies of symptoms (39%), care seeking (88%), and reporting (45%) indicated 18,730 infections (6.5 infections per reported case). For hepatitis C, the frequency of symptoms among injection drug users (13%) and those infected otherwise (48%), the proportion seeking care (88%), and the percentage reported (53%) indicated 17,100 infections (12.3 infections per reported case). Conclusions. These adjustment factors will allow state and local health authorities to estimate acute hepatitis infections locally and to plan prevention activities accordingly. PMID:24432918
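
    At its simplest, the probabilistic model amounts to dividing reported cases by the probability that an infection is symptomatic, comes to care, and is reported; a minimal sketch using the hepatitis A figures from the abstract (the helper function name is ours, and the published estimates include additional detail this sketch omits):

```python
def estimated_infections(reported_cases, p_symptomatic, p_sought_care, p_reported):
    """Scale reported cases by the probability a case enters surveillance."""
    p_observed = p_symptomatic * p_sought_care * p_reported
    return reported_cases / p_observed

# Hepatitis A figures from the abstract: 85% symptomatic, 88% care-seeking,
# 69% reported. The resulting multiplier is roughly 2 infections per
# reported case, matching the abstract.
multiplier = 1.0 / (0.85 * 0.88 * 0.69)
```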

  6. Viscoelastic properties of soft gels: comparison of magnetic resonance elastography and dynamic shear testing in the shear wave regime

    NASA Astrophysics Data System (ADS)

    Okamoto, R. J.; Clayton, E. H.; Bayly, P. V.

    2011-10-01

    Magnetic resonance elastography (MRE) is used to quantify the viscoelastic shear modulus, G*, of human and animal tissues. Previously, values of G* determined by MRE have been compared to values from mechanical tests performed at lower frequencies. In this study, a novel dynamic shear test (DST) was used to measure G* of a tissue-mimicking material at higher frequencies for direct comparison to MRE. A closed-form solution, including inertial effects, was used to extract G* values from DST data obtained between 20 and 200 Hz. MRE was performed using cylindrical 'phantoms' of the same material in an overlapping frequency range of 100-400 Hz. Axial vibrations of a central rod caused radially propagating shear waves in the phantom. Displacement fields were fit to a viscoelastic form of Navier's equation using a total least-squares approach to obtain local estimates of G*. DST estimates of the storage G' (Re[G*]) and loss modulus G'' (Im[G*]) for the tissue-mimicking material increased with frequency from 0.86 to 0.97 kPa (20-200 Hz, n = 16), while MRE estimates of G' increased from 1.06 to 1.15 kPa (100-400 Hz, n = 6). The loss factor (Im[G*]/Re[G*]) also increased with frequency for both test methods: 0.06-0.14 (20-200 Hz, DST) and 0.11-0.23 (100-400 Hz, MRE). Close agreement between MRE and DST results at overlapping frequencies indicates that G* can be locally estimated with MRE over a wide frequency range. Low signal-to-noise ratio, long shear wavelengths and boundary effects were found to increase residual fitting error, reinforcing the use of an error metric to assess confidence in local parameter estimates obtained by MRE.

  7. Viscoelastic properties of soft gels: comparison of magnetic resonance elastography and dynamic shear testing in the shear wave regime.

    PubMed

    Okamoto, R J; Clayton, E H; Bayly, P V

    2011-10-07

    Magnetic resonance elastography (MRE) is used to quantify the viscoelastic shear modulus, G*, of human and animal tissues. Previously, values of G* determined by MRE have been compared to values from mechanical tests performed at lower frequencies. In this study, a novel dynamic shear test (DST) was used to measure G* of a tissue-mimicking material at higher frequencies for direct comparison to MRE. A closed-form solution, including inertial effects, was used to extract G* values from DST data obtained between 20 and 200 Hz. MRE was performed using cylindrical 'phantoms' of the same material in an overlapping frequency range of 100-400 Hz. Axial vibrations of a central rod caused radially propagating shear waves in the phantom. Displacement fields were fit to a viscoelastic form of Navier's equation using a total least-squares approach to obtain local estimates of G*. DST estimates of the storage G' (Re[G*]) and loss modulus G″ (Im[G*]) for the tissue-mimicking material increased with frequency from 0.86 to 0.97 kPa (20-200 Hz, n = 16), while MRE estimates of G' increased from 1.06 to 1.15 kPa (100-400 Hz, n = 6). The loss factor (Im[G*]/Re[G*]) also increased with frequency for both test methods: 0.06-0.14 (20-200 Hz, DST) and 0.11-0.23 (100-400 Hz, MRE). Close agreement between MRE and DST results at overlapping frequencies indicates that G* can be locally estimated with MRE over a wide frequency range. Low signal-to-noise ratio, long shear wavelengths and boundary effects were found to increase residual fitting error, reinforcing the use of an error metric to assess confidence in local parameter estimates obtained by MRE.

  8. Estimating magnitude and frequency of floods using the PeakFQ 7.0 program

    USGS Publications Warehouse

    Veilleux, Andrea G.; Cohn, Timothy A.; Flynn, Kathleen M.; Mason, Jr., Robert R.; Hummel, Paul R.

    2014-01-01

    Flood-frequency analysis provides information about the magnitude and frequency of flood discharges based on records of annual maximum instantaneous peak discharges collected at streamgages. The information is essential for defining flood-hazard areas, for managing floodplains, and for designing bridges, culverts, dams, levees, and other flood-control structures. Bulletin 17B (B17B) of the Interagency Advisory Committee on Water Data (IACWD, 1982) codifies the standard methodology for conducting flood-frequency studies in the United States. B17B specifies that annual peak-flow data are to be fit to a log-Pearson Type III distribution. Specific methods are also prescribed for improving skew estimates using regional skew information, tests for high and low outliers, adjustments for low outliers and zero flows, and procedures for incorporating historical flood information. The authors of B17B identified various needs for methodological improvement and recommended additional study. In response to these needs, the Advisory Committee on Water Information (ACWI, successor to IACWD; http://acwi.gov/), through its Subcommittee on Hydrology (SOH) Hydrologic Frequency Analysis Work Group (HFAWG), has recommended modest changes to B17B. These changes include adoption of a generalized method-of-moments estimator denoted the Expected Moments Algorithm (EMA) (Cohn and others, 1997) and a generalized version of the Grubbs-Beck test for low outliers (Cohn and others, 2013). The SOH requested that the USGS implement these changes in a user-friendly, publicly accessible program.

  9. A Robust Wrap Reduction Algorithm for Fringe Projection Profilometry and Applications in Magnetic Resonance Imaging.

    PubMed

    Arevalillo-Herraez, Miguel; Cobos, Maximo; Garcia-Pineda, Miguel

    2017-03-01

    In this paper, we present an effective algorithm to reduce the number of wraps in a 2D phase signal provided as input. The technique is based on an accurate estimate of the fundamental frequency of a 2D complex signal with the phase given by the input, and the removal of a frequency-dependent additive term from the phase map. Unlike existing methods based on the discrete Fourier transform (DFT), the frequency is computed using noise-robust estimates that are not restricted to integer values. Then, to deal with the problem of a non-integer shift in the frequency domain, an equivalent operation is carried out on the original phase signal. This consists of the subtraction of a tilted plane whose slope is computed from the frequency, followed by a re-wrapping operation. The technique has been exhaustively tested on fringe projection profilometry (FPP) and magnetic resonance imaging (MRI) signals. In addition, the performance of several frequency estimation methods has been compared. The proposed methodology is particularly effective on FPP signals, showing higher performance than state-of-the-art wrap reduction approaches. In this context, it contributes to canceling the carrier effect while eliminating any potential slope that affects the entire signal. Its effectiveness on other carrier-free phase signals, e.g., MRI, is limited to cases in which inherent slopes are present in the phase data.
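
    The plane-subtraction and re-wrapping step can be sketched directly; the following assumes the fundamental frequency has already been estimated by some other means (function and variable names are illustrative, not the authors' code):

```python
import numpy as np

def reduce_wraps(phase, fx, fy):
    """Subtract a tilted plane with slopes (fx, fy) in cycles/sample, then re-wrap.

    phase: 2D wrapped phase map in radians; fx and fy need not be integers
    (the non-integer-shift case handled in the spatial domain).
    """
    rows, cols = np.indices(phase.shape)
    plane = 2 * np.pi * (fx * cols + fy * rows)
    return np.angle(np.exp(1j * (phase - plane)))   # re-wrap into (-pi, pi]

# Example: a pure carrier plane re-wraps to (nearly) zero residual phase
true_fx, true_fy = 0.043, -0.017                    # cycles per sample
r, c = np.indices((64, 64))
wrapped = np.angle(np.exp(1j * 2 * np.pi * (true_fx * c + true_fy * r)))
residual = reduce_wraps(wrapped, true_fx, true_fy)
```

    On real data the residual retains the object phase with far fewer wraps, which is what makes the subsequent unwrapping step easier.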

  10. Reconstruction of neuronal input through modeling single-neuron dynamics and computations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qin, Qing; Wang, Jiang; Yu, Haitao

    Mathematical models provide a mathematical description of neuron activity, which helps to better understand and quantify neural computations and the corresponding biophysical mechanisms evoked by a stimulus. In this paper, based on the output spike train evoked by an acupuncture mechanical stimulus, we present two different levels of models to describe the input-output system and achieve the reconstruction of the neuronal input. The reconstruction process is divided into two steps: First, the neuronal spiking event is considered as a Gamma stochastic process, and the scale and shape parameters of the Gamma process are defined as two spiking characteristics, which are estimated by a state-space method. Then, a leaky integrate-and-fire (LIF) model is used to mimic the response system, and the estimated spiking characteristics are transformed into two temporal input parameters of the LIF model through two conversion formulas. We test this reconstruction method on three different groups of simulation data. All three groups of estimates reconstruct the input parameters with fairly high accuracy. We then use this reconstruction method to estimate the non-measurable acupuncture input parameters. Results show that, under three different acupuncture stimulus frequencies, the estimated input parameters differ noticeably. The higher the frequency of the acupuncture stimulus, the higher the accuracy of the reconstruction.

  11. Reconstruction of neuronal input through modeling single-neuron dynamics and computations

    NASA Astrophysics Data System (ADS)

    Qin, Qing; Wang, Jiang; Yu, Haitao; Deng, Bin; Chan, Wai-lok

    2016-06-01

    Mathematical models provide a mathematical description of neuron activity and can help us better understand and quantify the neural computations and corresponding biophysical mechanisms evoked by a stimulus. In this paper, based on the output spike train evoked by an acupuncture mechanical stimulus, we present two different levels of models to describe the input-output system and achieve the reconstruction of neuronal input. The reconstruction process is divided into two steps. First, the neuronal spiking event is considered a Gamma stochastic process; the scale and shape parameters of the Gamma process are defined as two spiking characteristics and are estimated by a state-space method. Then, a leaky integrate-and-fire (LIF) model is used to mimic the response system, and the estimated spiking characteristics are transformed into two temporal input parameters of the LIF model through two conversion formulas. We test this reconstruction method on three different groups of simulation data; all three groups of estimates reconstruct the input parameters with fairly high accuracy. We then use the method to estimate the non-measurable acupuncture input parameters. Results show that, under three different frequencies of acupuncture stimulus, the estimated input parameters differ markedly. The higher the frequency of the acupuncture stimulus, the higher the accuracy of the reconstruction.
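    The first step, treating inter-spike intervals as draws from a Gamma process and estimating its shape and scale, can be illustrated with simple moment matching; the paper itself uses a state-space estimator, and the parameter values below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Inter-spike intervals drawn from a Gamma process with known parameters
true_shape, true_scale = 2.5, 4.0          # hypothetical spiking characteristics
isi = rng.gamma(true_shape, true_scale, 20000)

# Moment-matching estimates: shape = mean^2/var, scale = var/mean
m, v = isi.mean(), isi.var()
shape_hat, scale_hat = m**2 / v, v / m
print(shape_hat, scale_hat)   # ≈ 2.5 and 4.0
```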

  12. A comparison of Q-factor estimation methods for marine seismic data

    NASA Astrophysics Data System (ADS)

    Kwon, J.; Ha, J.; Shin, S.; Chung, W.; Lim, C.; Lee, D.

    2016-12-01

    The seismic imaging technique draws information from inside the earth using seismic reflection and transmission data, and is an important method in geophysical exploration. It has been employed widely as a means of locating oil and gas reservoirs because it offers information on geological media. There is much recent and active research into seismic attenuation and how it determines the quality of seismic imaging. Seismic attenuation is determined by various geological characteristics, through the absorption or scattering that occurs when a seismic wave passes through a geological medium. Seismic attenuation can be described by an attenuation coefficient and represented by a non-dimensional quantity known as the Q-factor. The Q-factor is a characteristic property of a geological medium and a very important material property for oil and gas resource development. It can be used to infer other characteristics of a medium, such as porosity, permeability, and viscosity, and can directly indicate the presence of hydrocarbons, identifying oil- and gas-bearing areas from seismic data. Q-factor estimation methods exist in three domains. In the time domain, pulse amplitude decay, pulse rise time, and pulse broadening are representative; the logarithmic spectral ratio (LSR), centroid frequency shift (CFS), and peak frequency shift (PFS) methods are used in the frequency domain; and in the time-frequency domain, the wavelet envelope peak instantaneous frequency (WEPIF) method is most frequently employed. In this study, we estimated and analyzed the Q-factor with four methods (LSR, CFS, PFS, and WEPIF), testing them on a numerical model before applying them to observed data. The numerical model test data were generated with Norsar-2D, which is based on a ray-tracing algorithm, and reflection and normal-incidence surveys were used to calculate the Q-factor according to the array of sources and receivers. After the numerical model test, we chose the most accurate of the four methods by comparing the Q-factors from the reflection and normal-incidence surveys, applied it to the observed data, and verified its accuracy.
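    As an illustration of one of the four estimators, here is a minimal logarithmic-spectral-ratio (LSR) sketch: for a constant-Q medium, the log ratio of amplitude spectra at two travel times is linear in frequency with slope -π(t2-t1)/Q, so Q follows from a line fit. The spectra and values below are synthetic.

```python
import numpy as np

# Synthetic amplitude spectra at two travel times through a constant-Q medium
f = np.linspace(5.0, 60.0, 100)            # Hz
Q_true, t1, t2 = 50.0, 0.2, 0.6            # s
source = np.exp(-(f - 30.0) ** 2 / 400.0)  # arbitrary smooth source spectrum
A1 = source * np.exp(-np.pi * f * t1 / Q_true)
A2 = source * np.exp(-np.pi * f * t2 / Q_true)

# LSR: ln(A2/A1) = -pi*(t2-t1)/Q * f + const; the source spectrum cancels
slope, _ = np.polyfit(f, np.log(A2 / A1), 1)
Q_est = -np.pi * (t2 - t1) / slope
print(Q_est)   # ≈ 50
```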

  13. Rock physics model-based prediction of shear wave velocity in the Barnett Shale formation

    NASA Astrophysics Data System (ADS)

    Guo, Zhiqi; Li, Xiang-Yang

    2015-06-01

    Predicting S-wave velocity is important for reservoir characterization and fluid identification in unconventional resources. A rock physics model-based method is developed for estimating pore aspect ratio and predicting shear wave velocity Vs from P-wave velocity, porosity, and mineralogy information in a borehole. The statistical distribution of pore geometry is considered in the rock physics models. In the application to the Barnett formation, we compare the high-frequency self-consistent approximation (SCA) method, which corresponds to isolated pore spaces, with the low-frequency SCA-Gassmann method, which describes well-connected pore spaces. Inversion results indicate that, compared to the surrounding formations, the Barnett Shale shows less fluctuation in pore aspect ratio despite its complex constituents. The high-frequency method provides a more robust and accurate prediction of Vs for all three intervals in the Barnett formation, while the low-frequency method fails for the Barnett Shale interval. A possible cause for this discrepancy is that poor in situ pore connectivity and low permeability make well-log sonic frequencies act as high frequencies, invalidating the low-frequency assumption of the Gassmann theory. In comparison, for the overlying Marble Falls and underlying Ellenburger carbonates, both the high- and low-frequency methods predict Vs with reasonable accuracy, which may indicate that sonic frequencies fall within the transition frequency zone owing to higher pore connectivity in the surroundings.
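    The low-frequency limit invoked above is Gassmann's equation, on which the SCA-Gassmann method relies. A minimal sketch with hypothetical moduli (GPa); note that Gassmann leaves the shear modulus unchanged, so only the bulk modulus is fluid-substituted.

```python
def gassmann_ksat(k_dry, k_min, k_fl, phi):
    """Saturated bulk modulus from Gassmann's equation (moduli in GPa).
    The shear modulus is unaffected by the fluid: mu_sat = mu_dry."""
    b = 1.0 - k_dry / k_min                  # Biot coefficient
    return k_dry + b**2 / (phi / k_fl + (1.0 - phi) / k_min - k_dry / k_min**2)

# Hypothetical quartz-dominated dry frame saturated with brine
k_sat = gassmann_ksat(k_dry=15.0, k_min=37.0, k_fl=2.25, phi=0.15)
print(k_sat)   # ≈ 19.5; the fluid stiffens the rock relative to k_dry
```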

  14. Do We Really Need Sinusoidal Surface Temperatures to Apply Heat Tracing Techniques to Estimate Streambed Fluid Fluxes?

    NASA Astrophysics Data System (ADS)

    Luce, C. H.; Tonina, D.; Applebee, R.; DeWeese, T.

    2017-12-01

    Two common refrains about using the one-dimensional advection diffusion equation to estimate fluid fluxes, thermal conductivity, or bed surface elevation from temperature time series in streambeds are that the solution assumes that 1) the surface boundary condition is a sine wave or nearly so, and 2) there is no gradient in mean temperature with depth. Concerns on these subjects are phrased in various ways, including non-stationarity in frequency, amplitude, or phase. Although the mathematical posing of the original solution to the problem might lead one to believe these constraints exist, the perception that they are a source of error is a fallacy. Here we re-derive the inverse solution of the 1-D advection-diffusion equation starting with an arbitrary surface boundary condition for temperature. In doing so, we demonstrate the frequency-independence of the solution, meaning any single frequency can be used in the frequency-domain solutions to estimate thermal diffusivity and 1-D fluid flux in streambeds, even if the forcing has multiple frequencies. This means that diurnal variations with asymmetric shapes, gradients in the mean temperature with depth, or `non-stationary' amplitude and frequency (or phase) do not actually represent violations of assumptions, and they should not cause errors in estimates when using one of the suite of existing solution methods derived based on a single frequency. Misattribution of errors to these issues constrains progress on solving real sources of error. Numerical and physical experiments are used to verify this conclusion and consider the utility of information at `non-standard' frequencies and multiple frequencies to augment the information derived from time series of temperature.
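    The frequency-independence argument can be checked numerically: a discrete Fourier sum at a single chosen frequency recovers that component's amplitude and phase exactly, even when the forcing also contains other frequencies and a nonzero mean. The series below is synthetic.

```python
import numpy as np

def amp_phase_at(t, series, freq):
    """Amplitude and phase of one frequency component via a discrete Fourier sum."""
    c = 2.0 * np.mean(series * np.exp(-2j * np.pi * freq * t))
    return np.abs(c), np.angle(c)

# Asymmetric, multi-frequency forcing: the diurnal component is still recovered
t = np.arange(0, 10, 1 / 24.0)                            # 10 days, hourly (days)
series = 3.0 * np.cos(2 * np.pi * 1.0 * t - 0.7) \
       + 1.0 * np.cos(2 * np.pi * 2.0 * t + 0.3) + 15.0   # diurnal + semidiurnal + mean
amp, ph = amp_phase_at(t, series, 1.0)                    # cycles per day
print(amp, ph)   # ≈ 3.0 and -0.7
```

    The amplitude ratio and phase shift of this single component between two depths are what the standard streambed-flux solutions consume.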

  15. Flood-frequency characteristics of Wisconsin streams

    USGS Publications Warehouse

    Walker, John F.; Peppler, Marie C.; Danz, Mari E.; Hubbard, Laura E.

    2017-05-22

    Flood-frequency characteristics for 360 gaged sites on unregulated rural streams in Wisconsin are presented for percent annual exceedance probabilities ranging from 0.2 to 50, using a statewide skewness map developed for this report. Equations relating flood frequency to drainage-basin characteristics were developed by multiple-regression analyses; flood-frequency characteristics for ungaged sites on unregulated rural streams can be estimated using these equations. The State was divided into eight areas of similar physiographic characteristics. The most significant basin characteristics are drainage area, soil saturated hydraulic conductivity, main-channel slope, and several land-use variables. The standard error of prediction for the equation for the 1-percent annual exceedance probability flood ranges from 56 to 70 percent for Wisconsin streams; these values are larger than results presented in previous reports. The increase in the standard error of prediction is likely due to increased variability of the annual-peak discharges, resulting in increased variability in the magnitude of flood peaks at higher frequencies. For each of the unregulated rural streamflow-gaging stations, a weighted estimate based on the at-site log-Pearson Type III analysis and the multiple-regression results was determined. The weighted estimate generally has lower uncertainty than either the log-Pearson Type III or the multiple-regression estimate. For regulated streams, a graphical method for estimating flood-frequency characteristics was developed from relations of discharge and drainage area for selected annual exceedance probabilities. Graphs for the major regulated streams in Wisconsin are presented in the report.
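    The at-site analysis mentioned above fits a log-Pearson Type III distribution to annual peaks. A minimal sketch using moment estimates of the logarithms and the Wilson-Hilferty approximation for the frequency factor; the peak series is synthetic, and real analyses use weighted skew and further corrections.

```python
import numpy as np
from statistics import NormalDist

def lp3_quantile(peaks, aep, skew=None):
    """Log-Pearson Type III flood quantile for an annual exceedance probability,
    using the Wilson-Hilferty approximation for the frequency factor."""
    logq = np.log10(peaks)
    n = len(logq)
    g = skew
    if g is None:  # bias-corrected station skew of the log peaks
        mu, s = logq.mean(), logq.std(ddof=1)
        g = n * np.sum(((logq - mu) / s) ** 3) / ((n - 1) * (n - 2))
    z = NormalDist().inv_cdf(1.0 - aep)
    k = z if abs(g) < 1e-8 else (2.0 / g) * ((1.0 + g * z / 6.0 - g**2 / 36.0) ** 3 - 1.0)
    return 10 ** (logq.mean() + k * logq.std(ddof=1))

rng = np.random.default_rng(1)
peaks = 10 ** rng.normal(2.0, 0.25, 50)     # synthetic annual peak series
q100 = lp3_quantile(peaks, aep=0.01)        # 1-percent AEP ("100-year") flood
print(q100 > np.percentile(peaks, 90))      # True
```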

  16. Numerical approach for ECT by using boundary element method with Laplace transform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Enokizono, M.; Todaka, T.; Shibao, K.

    1997-03-01

    This paper presents an inverse analysis by using BEM with Laplace transform. The method is applied to a simple problem in the eddy current testing (ECT). Some crack shapes in a conductive specimen are estimated from distributions of the transient eddy current on its sensing surface and magnetic flux density in the liftoff space. Because the transient behavior includes information on various frequency components, the method is applicable to the shape estimation of a comparative small crack.

  17. Transfer Function Identification Using Orthogonal Fourier Transform Modeling Functions

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    2013-01-01

    A method for transfer function identification, including both model structure determination and parameter estimation, was developed and demonstrated. The approach uses orthogonal modeling functions generated from frequency domain data obtained by Fourier transformation of time series data. The method was applied to simulation data to identify continuous-time transfer function models and unsteady aerodynamic models. Model fit error, estimated model parameters, and the associated uncertainties were used to show the effectiveness of the method for identifying accurate transfer function models from noisy data.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoke, Anderson; Shirazi, Mariko; Chakraborty, Sudipta

    As deployment of power electronic coupled generation such as photovoltaic (PV) systems increases, grid operators have shown increasing interest in calling on inverter-coupled generation to help mitigate frequency contingency events by rapidly surging active power into the grid. When responding to contingency events, the faster the active power is provided, the more effective it may be for arresting the frequency event. This paper proposes a predictive PV inverter control method for very fast and accurate control of active power. This rapid active power control method will increase the effectiveness of various higher-level controls designed to mitigate grid frequency contingency events, including fast power-frequency droop, inertia emulation, and fast frequency response, without the need for energy storage. The rapid active power control method, coupled with a maximum power point estimation method, is implemented in a prototype PV inverter connected to a PV array. The prototype inverter's response to various frequency events is experimentally confirmed to be fast (beginning within 2 line cycles and completing within 4.5 line cycles of a severe test event) and accurate (below 2% steady-state error).

  19. Automated segmentation of linear time-frequency representations of marine-mammal sounds.

    PubMed

    Dadouchi, Florian; Gervaise, Cedric; Ioana, Cornel; Huillery, Julien; Mars, Jérôme I

    2013-09-01

    Many marine mammals produce highly nonlinear frequency modulations. Determining the time-frequency support of these sounds offers various applications, which include recognition, localization, and density estimation. This study introduces a low parameterized automated spectrogram segmentation method that is based on a theoretical probabilistic framework. In the first step, the background noise in the spectrogram is fitted with a Chi-squared distribution and thresholded using a Neyman-Pearson approach. In the second step, the number of false detections in time-frequency regions is modeled as a binomial distribution, and then through a Neyman-Pearson strategy, the time-frequency bins are gathered into regions of interest. The proposed method is validated on real data of large sequences of whistles from common dolphins, collected in the Bay of Biscay (France). The proposed method is also compared with two alternative approaches: the first is smoothing and thresholding of the spectrogram; the second is thresholding of the spectrogram followed by the use of morphological operators to gather the time-frequency bins and to remove false positives. This method is shown to increase the probability of detection for the same probability of false alarms.
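    The first step above, thresholding a chi-squared background at a chosen false-alarm rate, can be sketched as follows. For a chi-squared background with 2 degrees of freedom the inverse CDF is closed form, so no distribution fitting is shown; the noise samples are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)

# Spectrogram power bins of white Gaussian noise follow a scaled chi-squared
# distribution with 2 degrees of freedom, i.e. an exponential distribution.
noise_power = rng.chisquare(2, size=100_000) * 0.5     # unit-mean background

# Neyman-Pearson threshold: chi2.ppf(1 - pfa, df=2) = -2 ln(pfa),
# scaled by the fitted background noise level (0.5 here).
pfa = 1e-3
threshold = 0.5 * (-2.0 * np.log(pfa))
observed_pfa = np.mean(noise_power > threshold)
print(observed_pfa)   # ≈ 1e-3
```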

  20. Two-station comparison of peak flows to improve flood-frequency estimates for seven streamflow-gaging stations in the Salmon and Clearwater River Basins, Central Idaho

    USGS Publications Warehouse

    Berenbrock, Charles

    2003-01-01

    Improved flood-frequency estimates for short-term (10 or fewer years of record) streamflow-gaging stations were needed to support instream flow studies by the U.S. Forest Service, which are focused on quantifying water rights necessary to maintain or restore productive fish habitat. Because peak-flow data for short-term gaging stations can be biased by having been collected during an unusually wet, dry, or otherwise unrepresentative period of record, the data may not represent the full range of potential floods at a site. To test whether peak-flow estimates for short-term gaging stations could be improved, the two-station comparison method was used to adjust the logarithmic mean and logarithmic standard deviation of peak flows for seven short-term gaging stations in the Salmon and Clearwater River Basins, central Idaho. Correlation coefficients determined from regression of peak flows for paired short-term and long-term (more than 10 years of record) gaging stations over a concurrent period of record indicated that the mean and standard deviation of peak flows for all short-term gaging stations would be improved. Flood-frequency estimates for seven short-term gaging stations were determined using the adjusted mean and standard deviation. The original (unadjusted) flood-frequency estimates for three of the seven short-term gaging stations differed from the adjusted estimates by less than 10 percent, probably because the data were collected during periods representing the full range of peak flows. Unadjusted flood-frequency estimates for four short-term gaging stations differed from the adjusted estimates by more than 10 percent; unadjusted estimates for Little Slate Creek and Salmon River near Obsidian differed from adjusted estimates by nearly 30 percent. These large differences probably are attributable to unrepresentative periods of peak-flow data collection.
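    A simplified sketch of the two-station comparison idea: adjust the short record's log-space mean and variance using its regression relation with a long-term station. The operational method includes small-sample correction terms omitted here, and all values below are synthetic.

```python
import numpy as np

def two_station_adjust(short_log, long_conc, long_full):
    """Adjust the log-space mean and standard deviation of a short peak-flow
    record using its regression relation with a concurrent long-term record
    (simplified Matalas-Jacobs form; small-sample correction terms omitted)."""
    r = np.corrcoef(short_log, long_conc)[0, 1]
    b = r * short_log.std(ddof=1) / long_conc.std(ddof=1)    # regression slope
    adj_mean = short_log.mean() + b * (long_full.mean() - long_conc.mean())
    adj_var = short_log.var(ddof=1) + b**2 * (long_full.var(ddof=1)
                                              - long_conc.var(ddof=1))
    return adj_mean, np.sqrt(adj_var)

rng = np.random.default_rng(3)
long_full = rng.normal(3.0, 0.3, 50)                           # 50-year record, log10 peaks
short = 0.8 * long_full[-8:] + 0.6 + rng.normal(0.0, 0.1, 8)   # correlated 8-year record
adj_mean, adj_sd = two_station_adjust(short, long_full[-8:], long_full)
print(adj_mean, adj_sd)
```

    The adjusted moments then feed the flood-frequency analysis in place of the raw short-record moments.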

  1. Spectral Investigation of Large and Medium Scale Traveling Ionospheric Disturbances using GPS Slant Total Electron Content

    NASA Astrophysics Data System (ADS)

    Yarici, Aysenur; Arikan, Feza; Efendi, Emre

    2016-07-01

    The Global Positioning System (GPS) provides the opportunity to study ionospheric variability as navigation and positioning signals traverse the ionosphere on their path to ground-based dual-frequency receivers. Slant Total Electron Content (STEC) is defined as the line integral of electron density along the ray path connecting a GPS receiver to a satellite. Because of the inhomogeneous, anisotropic, temporally and spatially varying nature of the ionosphere, GPS signals passing through it are affected, and this can be observed as disturbances in the STEC data. Traveling Ionospheric Disturbances (TIDs) are ionospheric irregularities that appear as wave-like oscillations decaying slowly through time. TIDs are classified as large or medium scale according to wave parameters such as velocity, period, and wavelength. In this study, a new method, namely the Ionospheric Fast Fourier Transform (I-FFT), is developed to investigate the spectral properties of TIDs. I-FFT is applied to STEC data after a TID is detected using the Differential Rate of TEC (DRoT) method developed by the IONOLAB group. The performance of the I-FFT method is evaluated on synthetic data to obtain bounds on the estimation error; it is observed that I-FFT can estimate the frequency and duration of TIDs with 80% or better accuracy. In application to GPS-STEC data from stations located at high, equatorial, and mid-latitude regions, for TIDs due to geomagnetic storms and seismic activity, TIDs with frequencies between 0.6 mHz and 2.4 mHz and durations longer than 10 minutes, and TIDs with frequencies between 0.15 mHz and 0.6 mHz and durations longer than 75 minutes, can be estimated automatically with more than 80% accuracy. This study is supported by the TUBITAK EEEAG 115E915 project.
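    The spectral-estimation step can be sketched with a plain FFT on synthetic detrended STEC; the actual I-FFT method and DRoT detection are more involved than this.

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.arange(0.0, 3 * 3600.0, 30.0)       # 3 hours of 30-second GPS samples (s)
stec = 0.4 * np.sin(2 * np.pi * 1.2e-3 * t) + 0.05 * rng.standard_normal(t.size)

# FFT-based estimate of the dominant TID frequency in the detrended STEC
spec = np.abs(np.fft.rfft(stec * np.hanning(t.size)))
freqs = np.fft.rfftfreq(t.size, d=30.0)
f_tid = freqs[1:][np.argmax(spec[1:])]     # skip the DC bin
print(f_tid * 1e3)   # ≈ 1.2 mHz
```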

  2. Notes on testing equality and interval estimation in Poisson frequency data under a three-treatment three-period crossover trial.

    PubMed

    Lui, Kung-Jong; Chang, Kuang-Chao

    2016-10-01

    When the frequency of event occurrences follows a Poisson distribution, we develop procedures for testing equality of treatments and interval estimators for the ratio of mean frequencies between treatments under a three-treatment three-period crossover design. Using Monte Carlo simulations, we evaluate the performance of these test procedures and interval estimators in various situations. We note that all test procedures developed here can perform well with respect to Type I error even when the number of patients per group is moderate. We further note that the two weighted-least-squares (WLS) test procedures derived here are generally preferable to the other two test procedures commonly used in contingency table analysis. We also demonstrate that interval estimators based on the WLS method and interval estimators based on the Mantel-Haenszel (MH) approach both perform well and are of essentially equal precision with respect to average length. We use a double-blind randomized three-treatment three-period crossover trial comparing salbutamol and salmeterol with a placebo with respect to the number of exacerbations of asthma to illustrate the use of these test procedures and estimators.
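    The basic building block of the interval estimators discussed above is a confidence interval for a ratio of Poisson rates. A minimal Wald sketch on the log scale follows, with hypothetical counts; the paper's WLS and MH estimators additionally account for the crossover structure.

```python
import math

def poisson_ratio_ci(x1, t1, x2, t2, z=1.96):
    """Wald 95% CI for the ratio of two Poisson mean frequencies, built on the
    log scale. x: event counts, t: exposures (e.g., patient-periods)."""
    log_ratio = math.log((x1 / t1) / (x2 / t2))
    se = math.sqrt(1.0 / x1 + 1.0 / x2)
    return (math.exp(log_ratio - z * se), math.exp(log_ratio + z * se))

# Hypothetical trial arm: 30 exacerbations in 100 patient-periods vs 45 in 100
lo, hi = poisson_ratio_ci(x1=30, t1=100.0, x2=45, t2=100.0)
print(lo, hi)   # interval for the treatment-to-placebo rate ratio
```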

  3. Pixel-level multisensor image fusion based on matrix completion and robust principal component analysis

    NASA Astrophysics Data System (ADS)

    Wang, Zhuozheng; Deller, J. R.; Fleet, Blair D.

    2016-01-01

    Acquired digital images are often corrupted by a lack of camera focus, faulty illumination, or missing data. An algorithm is presented for fusion of multiple corrupted images of a scene using the lifting wavelet transform. The method employs adaptive fusion arithmetic based on matrix completion and self-adaptive regional variance estimation. Characteristics of the wavelet coefficients are used to adaptively select fusion rules. Robust principal component analysis is applied to low-frequency image components, and regional variance estimation is applied to high-frequency components. Experiments reveal that the method is effective for multifocus, visible-light, and infrared image fusion. Compared with traditional algorithms, the new algorithm not only increases the amount of preserved information and clarity but also improves robustness.

  4. Sensor fusion for structural tilt estimation using an acceleration-based tilt sensor and a gyroscope

    NASA Astrophysics Data System (ADS)

    Liu, Cheng; Park, Jong-Woong; Spencer, B. F., Jr.; Moon, Do-Soo; Fan, Jiansheng

    2017-10-01

    A tilt sensor can provide useful information regarding the health of structural systems. Most existing tilt sensors are gravity/acceleration based and can provide accurate measurements of static responses. However, for dynamic tilt, acceleration can dramatically affect the measured responses due to crosstalk. Thus, dynamic tilt measurement is still a challenging problem. One option is to integrate the output of a gyroscope sensor, which measures the angular velocity, to obtain the tilt; however, problems arise because the low-frequency sensitivity of the gyroscope is poor. This paper proposes a new approach to dynamic tilt measurements, fusing together information from a MEMS-based gyroscope and an acceleration-based tilt sensor. The gyroscope provides good estimates of the tilt at higher frequencies, whereas the acceleration measurements are used to estimate the tilt at lower frequencies. The Tikhonov regularization approach is employed to fuse these measurements together and overcome the ill-posed nature of the problem. The solution is carried out in the frequency domain and then implemented in the time domain using FIR filters to ensure stability. The proposed method is validated numerically and experimentally to show that it performs well in estimating both the pseudo-static and dynamic tilt measurements.
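    A first-order complementary filter is the simplest realization of the low/high-frequency split described above; it stands in here for the paper's frequency-domain Tikhonov fusion, with synthetic signals and arbitrary constants.

```python
import numpy as np

def complementary_tilt(acc_tilt, gyro_rate, dt, tau=1.0):
    """First-order complementary filter: low-pass the acceleration-based tilt,
    high-pass the integrated gyroscope rate, and sum the two."""
    alpha = tau / (tau + dt)
    tilt = np.empty_like(acc_tilt)
    tilt[0] = acc_tilt[0]
    for k in range(1, len(acc_tilt)):
        tilt[k] = alpha * (tilt[k - 1] + gyro_rate[k] * dt) + (1 - alpha) * acc_tilt[k]
    return tilt

# The gyro suffers a constant rate bias (drift); the accelerometer tilt is noisy.
dt = 0.01
t = np.arange(0.0, 20.0, dt)
true_tilt = 0.1 * np.sin(2 * np.pi * 0.5 * t)
rng = np.random.default_rng(5)
gyro = np.gradient(true_tilt, dt) + 0.002                 # rad/s, biased
acc = true_tilt + 0.02 * rng.standard_normal(t.size)      # rad, noisy
fused = complementary_tilt(acc, gyro, dt)
rmse = np.sqrt(np.mean((fused - true_tilt) ** 2))
print(rmse)   # well below the 0.02 rad accelerometer noise floor
```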

  5. Precipitation-Frequency and Discharge-Frequency Relations for Basins Less than 32 Square Miles in Kansas

    USGS Publications Warehouse

    Perry, Charles A.

    2008-01-01

    Precipitation-frequency and discharge-frequency relations for small drainage basins with areas less than 32 square miles in Kansas were evaluated to reduce the uncertainty of discharge-frequency estimates. Gaged-discharge records from basins with variable soil permeability, channel slope, and mean annual precipitation were used to develop discharge-frequency equations for the ratio of discharge to drainage area (Q/A). Soil permeability and mean annual precipitation are the dominant basin characteristics in the multiple linear regression analyses. In addition, 28 discharge measurements at ungaged sites, made by indirect surveying methods and by velocity meters, were used in this analysis to relate precipitation-recurrence interval to discharge-recurrence interval. The precipitation-recurrence interval for each of these discharge measurements was estimated from weather-radar estimates of precipitation and from nearby raingages. The time of concentration for each of the ungaged basins was computed and used to determine the precipitation-recurrence interval based on precipitation depth and duration. The Q/A value for each event was then assigned to that precipitation-recurrence interval. The relation between Q/A and precipitation-recurrence interval for all 28 measured events yielded a correlation coefficient of 0.79. Using only basins less than 5.4 mi², the correlation decreases to 0.74; however, for basins greater than 5.4 and less than 32 mi², the correlation coefficient improves to 0.95. There were sufficient discharge and radar-measured precipitation events for both the 5-year (8 events) and the 100-year (11 events) recurrence intervals to examine the effect of basin characteristics on the Q/A values for basins less than 32 mi². 
At the 5-year precipitation-/discharge-recurrence interval, channel slope was a significant predictor (r=0.99) of Q/A. Permeability (r=0.68) also had a significant effect on Q/A values for the 5-year recurrence interval. At the 100-year recurrence interval, permeability, channel slope, and mean annual precipitation did not have a significant effect on Q/A; however, time of concentration was a significant factor in determining Q/A for the 100-year events, with greater times of concentration resulting in lower Q/A values. Additional high-recurrence-interval (5-, 10-, 25-, 50-, and 100-year) precipitation/discharge data are needed to confirm these relations. Discharge data with attendant basin-wide precipitation data from precipitation-radar estimates provide a unique opportunity to study the effects of basin characteristics on the relation between precipitation-recurrence interval and discharge-recurrence interval. Discharge-frequency values from the Q/A equations, the rational method, and the Kansas discharge-frequency equations (KFFE) were compared to 28 measured weather-radar precipitation-/discharge-frequency values. The association between precipitation frequency from weather-radar estimates and the frequency of the resulting discharge was shown in these comparisons. The measured and Q/A-equation-computed discharges displayed the best equality from low to high discharges of the three methods; here the slope of the line was nearly 1:1 (y = 0.9844x^0.9677). Comparisons with the rational method produced a slope greater than 1:1 (y = 0.0722x^1.235), and the KFFE equations produced a slope less than 1:1 (y = 5.9103x^0.7475). The Q/A equation standard error of prediction averaged 0.1346 log units for the 5.4- to 32-square-mile group and 0.0944 log units for the less-than-5.4-square-mile group. The KFFE standard error averaged 0.2107 log units for the less-than-30-square-mile equations. 
Using the Q/A equations to determine discharge-frequency values for ungaged sites thus appears to be a good alternative to the other two methods.
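    The comparison lines quoted above (e.g., y = 0.9844x^0.9677) are power-law fits, conventionally obtained by linear regression in log-log space. A minimal sketch with synthetic data:

```python
import numpy as np

rng = np.random.default_rng(6)

# Fit y = a * x^b by linear regression on the base-10 logarithms
x = 10 ** rng.uniform(1, 4, 60)                        # values spanning three decades
y = 0.98 * x ** 0.97 * 10 ** rng.normal(0, 0.05, 60)   # power law + multiplicative noise
b, log_a = np.polyfit(np.log10(x), np.log10(y), 1)
print(10 ** log_a, b)   # ≈ 0.98 and 0.97
```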

  6. Features of HF Radio Wave Attenuation in the Midlatitude Ionosphere Near the Skip Zone Boundary

    NASA Astrophysics Data System (ADS)

    Denisenko, P. F.; Skazik, A. I.

    2017-06-01

    We briefly describe the history of studying decameter radio wave attenuation in the midlatitude ionosphere by different methods. A new method of estimating the attenuation of HF radio waves in the ionospheric F region near the skip zone boundary is presented. It is based on an analysis of the time structure of the interference field generated at the observation point by highly stable monochromatic X-mode radio waves. The main parameter is the effective electron collision frequency νeff, which accounts for all energy losses in the form of equivalent heat loss. The frequency νeff is estimated by matching the assumed (model) structure to the experimentally observed one. Model calculations are performed in the geometrical-optics approximation. The spatial attenuation caused by medium-scale traveling ionospheric disturbances is taken into account, and the spherical shape of the ionosphere and the Earth's magnetic field are accounted for approximately. The method is applied to recordings of the signal level from the RWM station (Moscow) at 9.996 MHz received at Rostov.

  7. Blind source separation and localization using microphone arrays

    NASA Astrophysics Data System (ADS)

    Sun, Longji

    The blind source separation and localization problem for audio signals is studied using microphone arrays. Pure-delay mixtures of source signals, typically encountered in outdoor environments, are considered. Our approach uses subspace methods, namely the multiple signal classification (MUSIC) and estimation of signal parameters via rotational invariance techniques (ESPRIT) algorithms, to estimate the directions of arrival (DOAs) of the sources from the collected mixtures. Since audio signals are generally broadband, the DOA estimates at the frequencies with the largest sums of squared amplitudes are combined to obtain the final DOA estimates. Using the estimated DOAs, the corresponding mixing and demixing matrices are computed, and the source signals are recovered using the inverse short-time Fourier transform. Subspace methods exploit the spatial covariance matrix of the collected mixtures to achieve robustness to noise. While subspace methods have been studied for localizing radio-frequency signals, audio signals have special properties: they are nonstationary, naturally broadband, and analog, all of which make separation and localization more challenging. Moreover, our algorithm is essentially equivalent to beamforming, which suppresses signals from unwanted directions and recovers only the signals in the estimated DOAs. Several crucial issues related to the algorithm and their solutions are discussed, including source number estimation, spatial aliasing, artifact filtering, different ways of generating mixtures, and source coordinate estimation using multiple arrays. Comprehensive simulations and experiments have been conducted to examine various aspects of the algorithm. Unlike existing blind source separation and localization methods, which are generally time-consuming, our algorithm needs signal mixtures of only a short duration and therefore supports real-time implementation.
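    A compact MUSIC sketch for a narrowband uniform linear array; the paper additionally handles broadband audio by combining per-frequency estimates, and all values below are synthetic.

```python
import numpy as np

def music_doa(X, n_sources, d=0.5):
    """MUSIC DOA estimation for a uniform linear array (spacing d wavelengths).
    X: (sensors, snapshots) narrowband snapshot matrix."""
    R = X @ X.conj().T / X.shape[1]                  # spatial covariance matrix
    _, vecs = np.linalg.eigh(R)                      # eigenvalues ascending
    En = vecs[:, : X.shape[0] - n_sources]           # noise subspace
    angles = np.linspace(-90.0, 90.0, 721)
    m = np.arange(X.shape[0])
    p = np.empty(angles.size)
    for i, a in enumerate(angles):
        s = np.exp(-2j * np.pi * d * m * np.sin(np.radians(a)))  # steering vector
        p[i] = 1.0 / np.linalg.norm(En.conj().T @ s) ** 2        # MUSIC pseudospectrum
    is_peak = (p[1:-1] > p[:-2]) & (p[1:-1] > p[2:])             # local maxima
    idx = np.where(is_peak)[0] + 1
    return np.sort(angles[idx[np.argsort(p[idx])[-n_sources:]]])

# Two equal-power sources at -20 and 35 degrees, 8-element half-wavelength ULA
rng = np.random.default_rng(7)
m = np.arange(8)
A = np.exp(-2j * np.pi * 0.5 * np.outer(m, np.sin(np.radians([-20.0, 35.0]))))
S = rng.standard_normal((2, 400)) + 1j * rng.standard_normal((2, 400))
N = 0.1 * (rng.standard_normal((8, 400)) + 1j * rng.standard_normal((8, 400)))
est = music_doa(A @ S + N, 2)
print(est)   # ≈ [-20, 35]
```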

  8. Analysis of Synchronization Phenomena in Broadband Signals with Nonlinear Excitable Media

    NASA Astrophysics Data System (ADS)

    Chernihovskyi, Anton; Elger, Christian E.; Lehnertz, Klaus

    2009-12-01

    We apply the method of frequency-selective excitation waves in excitable media to characterize synchronization phenomena in interacting complex dynamical systems by measuring coincidence rates of induced excitations. We relax the frequency-selectivity of excitable media and demonstrate two applications of the method to signals with broadband spectra. Findings obtained from analyzing time series of coupled chaotic oscillators as well as electroencephalographic (EEG) recordings from an epilepsy patient indicate that this method can provide an alternative and complementary way to estimate the degree of phase synchronization in noisy signals.

  9. Measuring Multi-Joint Stiffness during Single Movements: Numerical Validation of a Novel Time-Frequency Approach

    PubMed Central

    Piovesan, Davide; Pierobon, Alberto; DiZio, Paul; Lackner, James R.

    2012-01-01

    This study presents and validates a time-frequency technique for measuring 2-dimensional multijoint arm stiffness throughout a single planar movement, as well as during static posture. It is proposed as an alternative to current regressive methods, which require numerous repetitions to obtain an average stiffness over a small segment of the hand trajectory. The method is based on analysis of the reassigned spectrogram of the arm's response to impulsive perturbations and can estimate arm stiffness on a trial-by-trial basis. Analytic and empirical methods are first derived and tested through modal analysis on synthetic data. The technique's accuracy and robustness are assessed by modeling the estimation of stiffness time profiles changing at different rates and affected by different noise levels. Our method obtains results comparable with two well-known regressive techniques. We also test how the technique can identify the viscoelastic component of nonlinear and higher-than-second-order systems with a nonparametric approach. The proposed technique is highly robust to noise and can be used easily for both postural and movement tasks. Stiffness profiles can be estimated from a single perturbation, making our method a useful tool for estimating limb stiffness during motor learning and adaptation tasks, and for understanding the modulation of stiffness in individuals with neurodegenerative diseases. PMID:22448233

  10. A novel method for estimating soybean herbivory in western corn rootworm (Coleoptera: Chrysomelidae).

    PubMed

    Seiter, Nicholas J; Richmond, Douglas S; Holland, Jeffrey D; Krupke, Christian H

    2010-08-01

    The western corn rootworm, Diabrotica virgifera virgifera LeConte (Coleoptera: Chrysomelidae), is the key pest of corn, Zea mays L., in North America. The western corn rootworm variant is a strain found in some parts of the United States that oviposits in soybean, Glycine max (L.) Merr., thereby circumventing crop rotation. Soybean herbivory is closely associated with oviposition; therefore, evidence of herbivory could serve as a proxy for rotation resistance. A digital image analysis method based on the characteristic green abdominal coloration of rootworm adults with soybean foliage in their guts was developed to estimate soybean herbivory rates of adult females. Image analysis software was used to develop and apply threshold limits that allowed only colors within the range that is characteristic of soybean herbivory to be displayed. When this method was applied to adult females swept from soybean fields in an area with high levels of rotation resistance, 54.3 ± 2.1% were estimated to have fed on soybean. This is similar to a previously reported estimate of 54.8%. Results when laboratory-generated negative controls were analyzed showed an acceptably low frequency of false positives. This method could be developed into a management tool if user-friendly software were developed for its implementation. In addition, researchers may find the method useful as a rapid, standardized screen for measuring frequencies of soybean herbivory.
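
    As a hedged illustration of the general idea (color thresholding to classify pixels), the sketch below counts pixels falling in an assumed "green" range. The threshold values and the simple RGB rule are invented for the example and are not the study's calibrated limits.

```python
import numpy as np

def green_fraction(rgb, g_min=100, dominance=1.3):
    """Fraction of pixels whose colour falls in an assumed 'soybean green'
    range.

    rgb: (H, W, 3) uint8 array.  A pixel counts as green when its G channel
    exceeds g_min and dominates both R and B by the given factor.  The
    thresholds here are illustrative, not the ones used in the study.
    """
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    mask = (g > g_min) & (g > dominance * r) & (g > dominance * b)
    return mask.mean()

# Toy image: top half strong green, bottom half neutral grey.
img = np.zeros((10, 10, 3), dtype=np.uint8)
img[:5, :, 1] = 180
img[5:, :] = 120
frac = green_fraction(img)   # -> 0.5
```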

  11. Documentary evidence of past floods in Europe and their utility in flood frequency estimation

    NASA Astrophysics Data System (ADS)

    Kjeldsen, T. R.; Macdonald, N.; Lang, M.; Mediero, L.; Albuquerque, T.; Bogdanowicz, E.; Brázdil, R.; Castellarin, A.; David, V.; Fleig, A.; Gül, G. O.; Kriauciuniene, J.; Kohnová, S.; Merz, B.; Nicholson, O.; Roald, L. A.; Salinas, J. L.; Sarauskiene, D.; Šraj, M.; Strupczewski, W.; Szolgay, J.; Toumazis, A.; Vanneuville, W.; Veijalainen, N.; Wilson, D.

    2014-09-01

    This review outlines the use of documentary evidence of historical flood events in contemporary flood frequency estimation in European countries. The study shows that despite widespread consensus in the scientific literature on the utility of documentary evidence, the actual migration from academic to practical application has been limited. A detailed review of flood frequency estimation guidelines from different countries showed that the value of historical data is generally recognised, but practical methods for systematic and routine inclusion of this type of data into risk analysis are in most cases not available. Studies of historical events were identified in most countries, and good examples of national databases attempting to collate the available information were identified. The conclusion is that there is considerable potential for improving the reliability of the current flood risk assessments by harvesting the valuable information on past extreme events contained in the historical data sets.

  12. Methods for estimating magnitude and frequency of 1-, 3-, 7-, 15-, and 30-day flood-duration flows in Arizona

    USGS Publications Warehouse

    Kennedy, Jeffrey R.; Paretti, Nicholas V.; Veilleux, Andrea G.

    2014-01-01

    Regression equations, which allow predictions of n-day flood-duration flows for selected annual exceedance probabilities at ungaged sites, were developed using generalized least-squares regression and flood-duration flow frequency estimates at 56 streamgaging stations within a single, relatively uniform physiographic region in the central part of Arizona, between the Colorado Plateau and Basin and Range Province, called the Transition Zone. Drainage area explained most of the variation in the n-day flood-duration annual exceedance probabilities, but mean annual precipitation and mean elevation were also significant variables in the regression models. Standard error of prediction for the regression equations varies from 28 to 53 percent and generally decreases with increasing n-day duration. Outside the Transition Zone there are insufficient streamgaging stations to develop regression equations, but flood-duration flow frequency estimates are presented at select streamgaging stations.

  13. Near-source attenuation of high-frequency body waves beneath the New Madrid Seismic Zone

    NASA Astrophysics Data System (ADS)

    Pezeshk, Shahram; Sedaghati, Farhad; Nazemi, Nima

    2018-03-01

    Attenuation characteristics in the New Madrid Seismic Zone (NMSZ) are estimated from 157 local seismograph recordings out of 46 earthquakes of 2.6 ≤ M ≤ 4.1 with hypocentral distances up to 60 km and focal depths down to 25 km. Digital waveform seismograms were obtained from local earthquakes in the NMSZ recorded by the Center for Earthquake Research and Information (CERI) at the University of Memphis. Using the coda normalization method, we tried to determine Q values and geometrical spreading exponents at 13 center frequencies. The scatter of the data and trade-off between the geometrical spreading and the quality factor did not allow us to simultaneously derive both these parameters from inversion. Assuming 1/R^1.0 as the geometrical spreading function in the NMSZ, the QP and QS estimates increase with increasing frequency from 354 and 426 at 4 Hz to 729 and 1091 at 24 Hz, respectively. Fitting a power-law equation to the Q estimates, we found the attenuation models for the P waves and S waves in the frequency range of 4 to 24 Hz to be QP = (115.80 ± 1.36)f^(0.495 ± 0.129) and QS = (161.34 ± 1.73)f^(0.613 ± 0.067), respectively. We did not consider Q estimates from the coda normalization method for frequencies less than 4 Hz in the regression analysis, since the decay of coda amplitude was not observed in most bandpass-filtered seismograms at these frequencies. QS/QP > 1 for 4 ≤ f ≤ 24 Hz, together with strong intrinsic attenuation, suggests that the crust beneath the NMSZ is partially fluid-saturated. Further, high scattering attenuation indicates the presence of a high level of small-scale heterogeneities inside the crust in this region.
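
    The power-law fit reported above, Q(f) = Q0·f^η, can be reproduced on synthetic data by ordinary least squares in log-log space; the values below are illustrative, not the authors' measurements.

```python
import numpy as np

# Center frequencies and synthetic Q estimates following Q = Q0 * f^eta,
# loosely mimicking the S-wave values quoted in the abstract.
f = np.array([4.0, 6.0, 8.0, 12.0, 16.0, 24.0])
q = 161.0 * f ** 0.61

# Fit log Q = log Q0 + eta * log f by ordinary least squares.
eta, log_q0 = np.polyfit(np.log(f), np.log(q), 1)
q0 = np.exp(log_q0)
# q0 recovers ~161 and eta recovers ~0.61 on this noise-free example.
```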

  14. 3D-Subspace-Based Auto-Paired Azimuth Angle, Elevation Angle, and Range Estimation for 24G FMCW Radar with an L-Shaped Array

    PubMed Central

    Nam, HyungSoo; Choi, ByungGil; Oh, Daegun

    2018-01-01

    In this paper, a three-dimensional (3D)-subspace-based azimuth angle, elevation angle, and range estimation method with auto-pairing is proposed for frequency-modulated continuous waveform (FMCW) radar with an L-shaped array. The proposed method is designed to exploit the 3D shift-invariant structure of the stacked Hankel snapshot matrix for auto-paired azimuth angle, elevation angle, and range estimation. The effectiveness of the proposed method is verified through a variety of experiments conducted in a chamber. For the realization of the proposed method, K-band FMCW radar is implemented with an L-shaped antenna. PMID:29621193
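
    Although the paper's 3D subspace method is more involved, the basic FMCW range measurement it builds on rests on the beat frequency f_b = 2BR/(cT). A minimal sketch with assumed radar parameters (not the implemented K-band system's values):

```python
import numpy as np

c = 3e8          # speed of light, m/s
B = 250e6        # chirp bandwidth, Hz (assumed)
T = 1e-3         # chirp duration, s (assumed)
R_true = 30.0    # target range, m

fs = 2e6                                  # baseband sample rate, Hz
n = 2000                                  # samples in one chirp
t = np.arange(n) / fs
f_beat = 2 * B * R_true / (c * T)         # 50 kHz for these numbers
beat = np.cos(2 * np.pi * f_beat * t)     # ideal de-chirped beat signal

# Estimate range from the peak of the beat-signal spectrum.
spec = np.abs(np.fft.rfft(beat))
f_est = np.fft.rfftfreq(n, 1 / fs)[np.argmax(spec)]
R_est = f_est * c * T / (2 * B)           # ~30 m
```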

  15. Pseudorange error analysis for precise indoor positioning system

    NASA Astrophysics Data System (ADS)

    Pola, Marek; Bezoušek, Pavel

    2017-05-01

    A system for indoor localization of a transmitter, intended for fire fighters or members of rescue corps, is currently under development. In this system, the position of a transmitter of an ultra-wideband orthogonal frequency-division multiplexing signal is determined by the time-difference-of-arrival method. The position measurement accuracy depends strongly on the accuracy of the direct-path signal time-of-arrival estimation, which is degraded by severe multipath in complicated environments such as buildings. The aim of this article is to assess errors in determining the direct-path signal time of arrival caused by multipath signal propagation and noise. Two methods of direct-path signal time-of-arrival estimation are compared here: the cross-correlation method and the spectral estimation method.
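
    A minimal sketch of the first of the two compared approaches, cross-correlation time-of-arrival estimation, using a synthetic wideband pulse; the signal model and all numbers are assumptions for illustration, not the system's parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 1000.0                           # sample rate, Hz (assumed)
template = rng.standard_normal(100)   # known wideband pulse shape

delay = 37                            # true delay, samples
received = np.zeros(500)
received[delay:delay + 100] = template
received += 0.1 * rng.standard_normal(500)   # additive noise

# ToA estimate: the lag maximizing the correlation with the template.
corr = np.correlate(received, template, mode='valid')
toa = np.argmax(corr) / fs            # ~0.037 s
```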

  16. Robust and intelligent bearing estimation

    DOEpatents

    Claassen, John P.

    2000-01-01

    A method of bearing estimation comprising quadrature digital filtering of event observations, constructing a plurality of observation matrices each centered on a time-frequency interval, determining for each observation matrix a parameter such as degree of polarization, linearity of particle motion, degree of dyadicy, or signal-to-noise ratio, choosing observation matrices most likely to produce a set of best available bearing estimates, and estimating a bearing for each observation matrix of the chosen set.

  17. Methods for estimating magnitude and frequency of floods in Arizona, developed with unregulated and rural peak-flow data through water year 2010

    USGS Publications Warehouse

    Paretti, Nicholas V.; Kennedy, Jeffrey R.; Turney, Lovina A.; Veilleux, Andrea G.

    2014-01-01

    The regional regression equations were integrated into the U.S. Geological Survey’s StreamStats program. The StreamStats program is a national map-based web application that allows the public to easily access published flood frequency and basin characteristic statistics. The interactive web application allows a user to select a point within a watershed (gaged or ungaged) and retrieve flood-frequency estimates derived from the current regional regression equations and geographic information system data within the selected basin. StreamStats provides users with an efficient and accurate means for retrieving the most up to date flood frequency and basin characteristic data. StreamStats is intended to provide consistent statistics, minimize user error, and reduce the need for large datasets and costly geographic information system software.

  18. Evaluation of the horizontal-to-vertical spectral ratio (HVSR) seismic method to determine sediment thickness in the vicinity of the South Well Field, Franklin County, OH

    USGS Publications Warehouse

    Haefner, Ralph J.; Sheets, Rodney A.; Andrews, Robert E.

    2011-01-01

    The horizontal-to-vertical spectral ratio (HVSR) seismic method involves analyzing measurements of ambient seismic noise in three dimensions to determine the fundamental site resonance frequency. Resonance is excited by the interaction of surface waves (Rayleigh and Love) and body waves (vertically incident shear) with the high-contrast acoustic impedance boundary at the bedrock-sediment interface. Measurements were made to determine the method's utility for estimating thickness of unconsolidated glacial sediments at 18 locations at the South Well Field, Franklin County, OH, and at six locations in Pickaway County where sediment thickness was already known. Measurements also were made near a high-capacity production well (with pumping on and off) and near a highway and a limestone quarry to examine changes in resonance frequencies over a 20-hour period. Although the regression relation for resonance frequency and sediment thickness had a relatively low r² (0.322), estimates of sediment thickness were, on average, within 14 percent of known thicknesses. Resonance frequencies for pumping on and pumping off were identical, although the amplitude of the peak was nearly double under pumping conditions. Resonance frequency for the 20-hour period did not change, but the amplitude of the peak changed considerably, with a maximum amplitude in the early afternoon and minimum in the very early morning hours. Clay layers within unconsolidated sediments may influence resonance frequency and the resulting regression equation, resulting in underestimation of sediment thickness; however, despite this and other complicating factors, hydrogeologists should consider this method when thickness data are needed for unconsolidated sediments.
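
    The physical basis of depth estimation from HVSR peaks is the quarter-wavelength relation f0 = Vs/(4h); the sketch below inverts it for thickness using an assumed average shear-wave velocity (the report instead uses an empirical regression fitted to local data).

```python
import numpy as np

def thickness_from_resonance(f0_hz, vs_m_s=400.0):
    """Sediment thickness from the fundamental resonance frequency via the
    quarter-wavelength relation f0 = Vs / (4 h)  =>  h = Vs / (4 f0).

    vs_m_s is an assumed average shear-wave velocity of the sediments,
    not a value taken from the report."""
    return vs_m_s / (4.0 * f0_hz)

h = thickness_from_resonance(2.5)   # -> 40.0 m for Vs = 400 m/s
```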

  19. Evaluation of the horizontal-to-vertical spectral ratio (HVSR) seismic method to determine sediment thickness in the vicinity of the south well field, Franklin county, OH

    USGS Publications Warehouse

    Haefner, R.J.; Sheets, R.A.; Andrews, R.E.

    2010-01-01

    The horizontal-to-vertical spectral ratio (HVSR) seismic method involves analyzing measurements of ambient seismic noise in three dimensions to determine the fundamental site resonance frequency. Resonance is excited by the interaction of surface waves (Rayleigh and Love) and body waves (vertically incident shear) with the high-contrast acoustic impedance boundary at the bedrock-sediment interface. Measurements were made to determine the method's utility for estimating thickness of unconsolidated glacial sediments at 18 locations at the South Well Field, Franklin County, OH, and at six locations in Pickaway County where sediment thickness was already known. Measurements also were made near a high-capacity production well (with pumping on and off) and near a highway and a limestone quarry to examine changes in resonance frequencies over a 20-hour period. Although the regression relation for resonance frequency and sediment thickness had a relatively low r² (0.322), estimates of sediment thickness were, on average, within 14 percent of known thicknesses. Resonance frequencies for pumping on and pumping off were identical, although the amplitude of the peak was nearly double under pumping conditions. Resonance frequency for the 20-hour period did not change, but the amplitude of the peak changed considerably, with a maximum amplitude in the early afternoon and minimum in the very early morning hours. Clay layers within unconsolidated sediments may influence resonance frequency and the resulting regression equation, resulting in underestimation of sediment thickness; however, despite this and other complicating factors, hydrogeologists should consider this method when thickness data are needed for unconsolidated sediments. © 2011 by The Ohio Academy of Science. All Rights Reserved.

  20. Stochastic Gabor reflectivity and acoustic impedance inversion

    NASA Astrophysics Data System (ADS)

    Hariri Naghadeh, Diako; Morley, Christopher Keith; Ferguson, Angus John

    2018-02-01

    Acoustic impedance (AI), obtained through seismic inversion, can be used to delineate subsurface lithology and estimate the petrophysical properties of a reservoir. Converting amplitude to AI requires removing wavelet effects from the seismic signal to obtain a reflection series and then transforming those reflections to AI. Correct seismic inversion must not assume that the seismic signal is stationary, yet all stationary deconvolution methods are designed under that assumption. To increase temporal resolution and interpretability, amplitude compensation and phase correction are unavoidable; these are pitfalls of stationary reflectivity inversion. Although stationary reflectivity inversion methods attempt to estimate the reflectivity series, their incorrect assumptions mean the estimates are inexact, though still potentially useful. Converting those reflection series to AI and merging them with a low-frequency initial model can mitigate these shortcomings. The aim of this study was to apply non-stationary deconvolution to eliminate time-variant wavelet effects from the signal and to convert the estimated reflection series to absolute AI using a bias taken from well logs. To this end, stochastic Gabor inversion in the time domain was used. The Gabor transform provided the signal's time-frequency analysis and estimated wavelet properties from different windows. Working with different time windows made it possible to build a time-variant kernel matrix, which was used to remove the wavelet's effects from the seismic data. The result is a reflection series that does not rest on the stationarity assumption. The subsequent step was to convert those reflections to AI using well information. Synthetic and real data sets were used to demonstrate the ability of the introduced method. The results highlight that the computation time of the inversion is negligible compared with general Gabor inversion in the frequency domain, and that the bias from well logs helps the method estimate reliable AI. To assess the effect of random noise on deterministic and stochastic inversion results, a stationary noisy trace with a signal-to-noise ratio of 2 was used. The results show the inability of deterministic inversion to deal with a noisy data set, even when using a large number of regularization parameters. In contrast, despite the low signal level, stochastic Gabor inversion not only estimates the wavelet's properties correctly but, because of the bias from well logs, also yields an inversion result very close to the true AI. Comparing the deterministic and the introduced inversion results on a real data set shows that the low-resolution results of deterministic inversion, especially in the deeper parts of seismic sections, create significant reliability problems for seismic prospects; this pitfall is avoided entirely by stochastic Gabor inversion. The estimated AI using Gabor inversion in the time domain is better and faster than general Gabor inversion in the frequency domain, owing to the extra number of windows required to analyze the time-frequency information and the amount of temporal increment between windows. Stochastic Gabor inversion can estimate trustworthy physical properties close to the real characteristics. Application to a real data set made it possible to detect the direction of a volcanic intrusion and to delineate the lithology distribution along the fan. Comparing the inversion results highlights the efficiency of stochastic Gabor inversion in delineating lateral lithology changes, owing to the improved frequency content and zero phasing of the final inversion volume.
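
    The final step described above, converting a reflection-coefficient series to absolute AI from a starting value, follows the standard recursion AI_{i+1} = AI_i(1 + r_i)/(1 − r_i). A sketch with invented numbers:

```python
import numpy as np

def reflectivity_to_ai(r, ai0):
    """Recursive conversion of a reflection-coefficient series to acoustic
    impedance: AI_{i+1} = AI_i * (1 + r_i) / (1 - r_i), started from a
    low-frequency initial value ai0 (e.g. taken from a well log)."""
    ai = [ai0]
    for ri in r:
        ai.append(ai[-1] * (1.0 + ri) / (1.0 - ri))
    return np.array(ai)

# Three illustrative reflection coefficients and an assumed starting AI.
r = np.array([0.1, -0.05, 0.02])
ai = reflectivity_to_ai(r, 5000.0)
```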

  1. Regression equations for estimation of annual peak-streamflow frequency for undeveloped watersheds in Texas using an L-moment-based, PRESS-minimized, residual-adjusted approach

    USGS Publications Warehouse

    Asquith, William H.; Roussel, Meghan C.

    2009-01-01

    Annual peak-streamflow frequency estimates are needed for flood-plain management; for objective assessment of flood risk; for cost-effective design of dams, levees, and other flood-control structures; and for design of roads, bridges, and culverts. Annual peak-streamflow frequency represents the peak streamflow for nine recurrence intervals of 2, 5, 10, 25, 50, 100, 200, 250, and 500 years. Common methods for estimation of peak-streamflow frequency for ungaged or unmonitored watersheds are regression equations for each recurrence interval developed for one or more regions; such regional equations are the subject of this report. The method is based on analysis of annual peak-streamflow data from U.S. Geological Survey streamflow-gaging stations (stations). Beginning in 2007, the U.S. Geological Survey, in cooperation with the Texas Department of Transportation and in partnership with Texas Tech University, began a 3-year investigation concerning the development of regional equations to estimate annual peak-streamflow frequency for undeveloped watersheds in Texas. The investigation focuses primarily on 638 stations with 8 or more years of data from undeveloped watersheds and other criteria. The general approach is explicitly limited to the use of L-moment statistics, which are used in conjunction with a technique of multi-linear regression referred to as PRESS minimization. The approach used to develop the regional equations, which was refined during the investigation, is referred to as the 'L-moment-based, PRESS-minimized, residual-adjusted approach'. For the approach, seven unique distributions are fit to the sample L-moments of the data for each of 638 stations and trimmed means of the seven results of the distributions for each recurrence interval are used to define the station specific, peak-streamflow frequency. 
    As a first iteration of regression, nine weighted-least-squares, PRESS-minimized, multi-linear regression equations are computed using the watershed characteristics of drainage area, dimensionless main-channel slope, and mean annual precipitation. The residuals of the nine equations are spatially mapped, and residuals for the 10-year recurrence interval are selected for generalization to 1-degree latitude and longitude quadrangles. The generalized residual is referred to as the OmegaEM parameter and represents a generalized terrain and climate index that expresses peak-streamflow potential not otherwise represented in the three watershed characteristics. The OmegaEM parameter was assigned to each station, and using OmegaEM, nine additional regression equations are computed. Because of favorable diagnostics, the OmegaEM equations are expected to be generally reliable estimators of peak-streamflow frequency for undeveloped and ungaged stream locations in Texas. The mean residual standard error, adjusted R-squared, and percentage reduction of PRESS by use of OmegaEM are 0.30log10, 0.86, and -21 percent, respectively. Inclusion of the OmegaEM parameter provides a substantial reduction in the PRESS statistic of the regression equations and removes considerable spatial dependency in regression residuals. Although the OmegaEM parameter requires interpretation on the part of analysts and the potential exists that different analysts could estimate different values for a given watershed, the authors suggest that typical uncertainty in the OmegaEM estimate might be about ±0.10log10. Finally, given the two ensembles of equations reported herein and those in previous reports, hydrologic design engineers and other analysts have several different methods, which represent different analytical tracks, to make comparisons of peak-streamflow frequency estimates for ungaged watersheds in the study area.
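
    The PRESS statistic minimized by the approach can be computed without refitting, using the closed-form leave-one-out identity PRESS = Σ(e_i/(1 − h_ii))². The sketch below demonstrates it for ordinary least squares on synthetic data; the report's weighted-least-squares and L-moment machinery is omitted.

```python
import numpy as np

def press_statistic(X, y):
    """Leave-one-out PRESS for ordinary least squares, computed in closed
    form from the hat matrix: PRESS = sum_i (e_i / (1 - h_ii))^2."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ beta                          # ordinary residuals
    H = X @ np.linalg.inv(X.T @ X) @ X.T      # hat matrix
    return np.sum((e / (1.0 - np.diag(H))) ** 2)

# Synthetic one-regressor problem with an intercept.
rng = np.random.default_rng(2)
n = 50
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
y = 2.0 + 3.0 * X[:, 1] + 0.5 * rng.standard_normal(n)
p = press_statistic(X, y)
```

The identity gives exactly the same value as refitting the model n times with one observation deleted each time, at a fraction of the cost.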

  2. A Carrier Estimation Method Based on MLE and KF for Weak GNSS Signals.

    PubMed

    Zhang, Hongyang; Xu, Luping; Yan, Bo; Zhang, Hua; Luo, Liyan

    2017-06-22

    Maximum likelihood estimation (MLE) has been researched for some acquisition and tracking applications of global navigation satellite system (GNSS) receivers and shows high performance. However, all current methods are derived and operated based on the sampling data, which results in a large computational burden. This paper proposes a low-complexity MLE carrier tracking loop for weak GNSS signals which processes the coherent integration results instead of the sampling data. First, the cost function of the MLE of signal parameters such as signal amplitude, carrier phase, and Doppler frequency is used to derive an MLE discriminator function. The optimal value of the cost function is searched for iteratively by an efficient Levenberg-Marquardt (LM) method. Its performance, including the Cramér-Rao bound (CRB), dynamic characteristics, and computational burden, is analyzed by numerical techniques. Second, an adaptive Kalman filter is designed for the MLE discriminator to obtain smooth estimates of carrier phase and frequency. The performance of the proposed loop, in terms of sensitivity, accuracy, and bit error rate, is compared with conventional methods by Monte Carlo (MC) simulations in both pedestrian-level and vehicle-level dynamic circumstances. Finally, an optimal loop which combines the proposed method and the conventional method is designed to achieve optimal performance in both weak and strong signal circumstances.
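
    A minimal sketch of the smoothing stage: a two-state (phase, Doppler frequency) Kalman filter fed with noisy phase and frequency measurements standing in for the MLE discriminator outputs. All parameter values are assumptions for illustration, not the paper's design.

```python
import numpy as np

dt = 1e-3                                   # coherent integration time, s
F = np.array([[1.0, 2 * np.pi * dt],
              [0.0, 1.0]])                  # phase advances by 2*pi*f*dt
H = np.eye(2)                               # discriminator yields phase and frequency
Qn = np.diag([1e-6, 1e-2])                  # process noise (assumed)
Rn = np.diag([0.04, 25.0])                  # measurement noise, rad^2 and Hz^2

rng = np.random.default_rng(3)
f_true = 40.0                               # true Doppler, Hz
x, P = np.zeros(2), np.diag([1.0, 100.0])   # state [phase, freq] and covariance
for k in range(1, 1001):
    # Noisy "discriminator" measurements of phase (unwrapped) and frequency.
    z = np.array([2 * np.pi * f_true * k * dt + 0.2 * rng.standard_normal(),
                  f_true + 5.0 * rng.standard_normal()])
    x, P = F @ x, F @ P @ F.T + Qn          # predict
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + Rn)
    x = x + K @ (z - H @ x)                 # update
    P = (np.eye(2) - K @ H) @ P

f_est = x[1]                                # smoothed Doppler, close to 40 Hz
```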

  3. Coherent-subspace array processing based on wavelet covariance: an application to broad-band, seismo-volcanic signals

    NASA Astrophysics Data System (ADS)

    Saccorotti, G.; Nisii, V.; Del Pezzo, E.

    2008-07-01

    Long-Period (LP) and Very-Long-Period (VLP) signals are the most characteristic seismic signature of volcano dynamics, and provide important information about the physical processes occurring in magmatic and hydrothermal systems. These events are usually characterized by sharp spectral peaks, which may span several frequency decades, by emergent onsets, and by a lack of clear S-wave arrivals. These two latter features make both signal detection and location a challenging task. In this paper, we propose a processing procedure based on Continuous Wavelet Transform of multichannel, broad-band data to simultaneously solve the signal detection and location problems. Our method consists of two steps. First, we apply a frequency-dependent threshold to the estimates of the array-averaged WCO in order to locate the time-frequency regions spanned by coherent arrivals. For these data, we then use the time-series of the complex wavelet coefficients for deriving the elements of the spatial Cross-Spectral Matrix. From the eigenstructure of this matrix, we eventually estimate the kinematic signals' parameters using the MUltiple SIgnal Characterization (MUSIC) algorithm. The whole procedure greatly facilitates the detection and location of weak, broad-band signals, in turn avoiding the time-frequency resolution trade-off and frequency leakage effects which affect conventional covariance estimates based upon Windowed Fourier Transform. The method is applied to explosion signals recorded at Stromboli volcano by either a short-period, small aperture antenna, or a large-aperture, broad-band network. The LP (0.2 < T < 2s) components of the explosive signals are analysed using data from the small-aperture array and under the plane-wave assumption. In this manner, we obtain a precise time- and frequency-localization of the directional properties for waves impinging at the array. 
We then extend the wavefield decomposition method using a spherical wave front model, and analyse the VLP components (T > 2s) of the explosion recordings from the broad-band network. Source locations obtained this way are fully compatible with those retrieved from application of more traditional (and computationally expensive) time-domain techniques, such as the Radial Semblance method.
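
    The MUSIC step can be illustrated on a conventional covariance matrix from a uniform linear array (the paper instead builds the cross-spectral matrix from wavelet coefficients); the array geometry, source angle, and noise levels below are invented for the example.

```python
import numpy as np

def music_spectrum(R, n_sources, angles_deg, d_over_lambda=0.5):
    """MUSIC pseudospectrum for a uniform linear array from a spatial
    covariance matrix R."""
    m = R.shape[0]
    w, v = np.linalg.eigh(R)              # eigenvalues ascending
    En = v[:, : m - n_sources]            # noise-subspace eigenvectors
    p = []
    for th in np.deg2rad(angles_deg):
        a = np.exp(2j * np.pi * d_over_lambda * np.arange(m) * np.sin(th))
        p.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
    return np.array(p)

# One source at +20 degrees on an 8-element half-wavelength array.
rng = np.random.default_rng(4)
m, snaps, theta = 8, 200, np.deg2rad(20.0)
a = np.exp(2j * np.pi * 0.5 * np.arange(m) * np.sin(theta))
s = rng.standard_normal(snaps) + 1j * rng.standard_normal(snaps)
X = np.outer(a, s) + 0.1 * (rng.standard_normal((m, snaps))
                            + 1j * rng.standard_normal((m, snaps)))
R = X @ X.conj().T / snaps
scan = np.arange(-90.0, 90.5, 0.5)
doa = scan[np.argmax(music_spectrum(R, 1, scan))]   # ~20 degrees
```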

  4. Suspension parameter estimation in the frequency domain using a matrix inversion approach

    NASA Astrophysics Data System (ADS)

    Thite, A. N.; Banvidi, S.; Ibicek, T.; Bennett, L.

    2011-12-01

    The dynamic lumped parameter models used to optimise the ride and handling of a vehicle require base values of the suspension parameters. These parameters are generally identified experimentally. The accuracy of the identified parameters can depend on the measurement noise and the validity of the model used. The existing publications on suspension parameter identification are generally based on the time domain and use models with a limited number of degrees of freedom. Further, the data used are either from a simulated 'experiment' or from a laboratory test on an idealised quarter- or half-car model. In this paper, a method is developed in the frequency domain which effectively accounts for the measurement noise. Additional dynamic constraining equations are incorporated, and the proposed formulation results in a matrix inversion approach. The nonlinearities in damping are, however, estimated using a time-domain approach. Full-scale 4-post rig test data of a vehicle are used. The variations in the results are discussed using the modal resonant behaviour. Further, a method is implemented to show how the results can be improved when the inverted matrix is ill-conditioned. The case study shows good agreement between the estimates based on the proposed frequency-domain approach and measurable physical parameters.
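
    One standard remedy for the ill-conditioned inversion mentioned above is Tikhonov regularization; the sketch below shows it on a nearly rank-deficient toy system. The abstract does not specify which conditioning improvement the authors implemented, so treat this particular remedy as an assumption.

```python
import numpy as np

def regularized_solve(A, b, lam=1e-3):
    """Tikhonov-regularised least-squares solution of A x = b, a common
    remedy when the matrix to be inverted is ill-conditioned:
    x = (A^H A + lam I)^{-1} A^H b."""
    n = A.shape[1]
    return np.linalg.solve(A.conj().T @ A + lam * np.eye(n), A.conj().T @ b)

# Nearly rank-deficient system: plain inversion amplifies perturbations,
# while the regularised solution stays bounded near the minimum-norm answer.
A = np.array([[1.0, 1.0],
              [1.0, 1.0 + 1e-8]])
b = np.array([2.0, 2.0])
x = regularized_solve(A, b, lam=1e-6)      # ~[1, 1]
```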

  5. Evaluation of a Validated Food Frequency Questionnaire for Self-Defined Vegans in the United States

    PubMed Central

    Dyett, Patricia; Rajaram, Sujatha; Haddad, Ella H.; Sabate, Joan

    2014-01-01

    This study aimed to develop and validate a de novo food frequency questionnaire for self-defined vegans in the United States. Diet histories from pilot samples of vegans and a modified 'Block Method' using seven selected nutrients of concern in vegan diet patterns were employed to generate the questionnaire food list. Food frequency responses of 100 vegans from 19 different U.S. states were obtained via completed mailed questionnaires and compared to multiple telephone-conducted diet recall interviews. Computerized diet analyses were performed. Correlation coefficients, t-tests, ranks, cross-tabulations, and probability tests were used to validate and compare intake estimates and dietary reference intake (DRI) assessment trends between the two methods. A 369-item vegan-specific questionnaire was developed with 252 listed food frequency items. Calorie-adjusted correlation coefficients ranged from r = 0.374 to 0.600 (p < 0.001) for all analyzed nutrients except calcium. Estimates, ranks, trends, and higher-level participant percentile placements for vitamin B12 were similar with both methods. Questionnaire intakes were higher than recalls for most other nutrients. Both methods demonstrated similar trends in DRI adequacy assessment (e.g., significantly inadequate vitamin D intake among vegans). This vegan-specific questionnaire can be a useful assessment tool for health screening initiatives in U.S. vegan communities. PMID:25006856

  6. A general methodology for inverse estimation of the elastic and anelastic properties of anisotropic open-cell porous materials—with application to a melamine foam

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cuenca, Jacques, E-mail: jcuenca@kth.se; Van der Kelen, Christophe; Göransson, Peter

    2014-02-28

    This paper proposes an inverse estimation method for the characterisation of the elastic and anelastic properties of the frame of anisotropic open-cell foams used for sound absorption. A model of viscoelasticity based on a fractional differential constitutive equation is used, leading to an augmented Hooke's law in the frequency domain, where the elastic and anelastic phenomena appear as distinctive terms in the stiffness matrix. The parameters of the model are nine orthotropic elastic moduli, three angles of orientation of the material principal directions and three parameters governing the anelastic frequency dependence. The inverse estimation consists in numerically fitting the model on a set of transfer functions extracted from a sample of material. The setup uses a seismic-mass measurement repeated in the three directions of space and is placed in a vacuum chamber in order to remove the air from the pores of the sample. The method allows to reconstruct the full frequency-dependent complex stiffness matrix of the frame of an anisotropic open-cell foam and in particular it provides the frequency of maximum energy dissipation by viscoelastic effects. The characterisation of a melamine foam sample is performed and the relation between the fractional-derivative model and other types of parameterisations of the augmented Hooke's law is discussed.

  7. A general methodology for inverse estimation of the elastic and anelastic properties of anisotropic open-cell porous materials—with application to a melamine foam

    NASA Astrophysics Data System (ADS)

    Cuenca, Jacques; Van der Kelen, Christophe; Göransson, Peter

    2014-02-01

    This paper proposes an inverse estimation method for the characterisation of the elastic and anelastic properties of the frame of anisotropic open-cell foams used for sound absorption. A model of viscoelasticity based on a fractional differential constitutive equation is used, leading to an augmented Hooke's law in the frequency domain, where the elastic and anelastic phenomena appear as distinctive terms in the stiffness matrix. The parameters of the model are nine orthotropic elastic moduli, three angles of orientation of the material principal directions and three parameters governing the anelastic frequency dependence. The inverse estimation consists in numerically fitting the model on a set of transfer functions extracted from a sample of material. The setup uses a seismic-mass measurement repeated in the three directions of space and is placed in a vacuum chamber in order to remove the air from the pores of the sample. The method allows to reconstruct the full frequency-dependent complex stiffness matrix of the frame of an anisotropic open-cell foam and in particular it provides the frequency of maximum energy dissipation by viscoelastic effects. The characterisation of a melamine foam sample is performed and the relation between the fractional-derivative model and other types of parameterisations of the augmented Hooke's law is discussed.
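
    One common concrete form of such a fractional-differential constitutive law is the fractional Zener model, whose complex modulus exhibits the frequency of maximum dissipation mentioned above. The sketch below evaluates it over frequency; all parameter values are illustrative, not the melamine-foam values identified in the paper.

```python
import numpy as np

def fractional_zener(omega, e0=1.0, e_inf=3.0, tau=1e-3, alpha=0.6):
    """Complex modulus of a fractional Zener solid,
    E*(w) = (e0 + e_inf * (i w tau)^alpha) / (1 + (i w tau)^alpha).
    Parameters are assumed for illustration."""
    s = (1j * omega * tau) ** alpha
    return (e0 + e_inf * s) / (1.0 + s)

omega = np.logspace(1, 6, 400)            # angular frequency sweep, rad/s
E = fractional_zener(omega)
loss = np.imag(E) / np.real(E)            # loss factor tan(delta)
w_peak = omega[np.argmax(loss)]           # frequency of maximum dissipation
```

The storage modulus rises monotonically from e0 at low frequency to e_inf at high frequency, and the loss factor peaks in between, near 1/tau.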

  8. A method for the estimate of the wall diffusion for non-axisymmetric fields using rotating external fields

    NASA Astrophysics Data System (ADS)

    Frassinetti, L.; Olofsson, K. E. J.; Fridström, R.; Setiadi, A. C.; Brunsell, P. R.; Volpe, F. A.; Drake, J.

    2013-08-01

    A new method for estimating the wall diffusion time of non-axisymmetric fields is developed. The method, based on rotating external fields and on measurement of the wall frequency response, is tested in EXTRAP T2R. It allows the experimental estimation of the wall diffusion time for each Fourier harmonic and of the toroidal asymmetries in wall diffusion. The method intrinsically accounts for the effects of three-dimensional structures and of the shell gaps. Far from the gaps, experimental results are in good agreement with the diffusion time estimated with a simple cylindrical model that assumes a homogeneous wall. The method is also applied with non-standard configurations of the coil array, in order to mimic tokamak-relevant settings with partial wall coverage and active coils of large toroidal extent. The comparison with the full-coverage results shows good agreement if the effects of the relevant sidebands are considered.
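
    A minimal numerical sketch of the underlying idea for a single harmonic, assuming the simple cylindrical thin-shell response b_inside/b_applied = 1/(1 + iωτ_w); the rotation frequencies and the wall time τ_w below are illustrative, not EXTRAP T2R values.

```python
import numpy as np

# Assumed thin-shell response of the wall to a rotating external field:
# b_inside / b_applied = 1 / (1 + i*omega*tau_w) for one Fourier harmonic.
tau_w_true = 6.3e-3                                 # wall diffusion time, s (illustrative)
f_rot = np.array([5.0, 10.0, 20.0, 50.0, 100.0])    # applied rotation frequencies, Hz
omega = 2 * np.pi * f_rot
H = 1.0 / (1.0 + 1j * omega * tau_w_true)           # "measured" complex wall response

# Each rotation frequency gives an independent estimate of tau_w:
tau_est = np.imag(1.0 / H) / omega
tau_w = tau_est.mean()
```

Repeating the fit harmonic by harmonic, as the method does, then exposes any toroidal asymmetry in the recovered diffusion times.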

  9. Spatio-temporal characteristics of the extreme precipitation by L-moment-based index-flood method in the Yangtze River Delta region, China

    NASA Astrophysics Data System (ADS)

    Yin, Yixing; Chen, Haishan; Xu, Chong-Yu; Xu, Wucheng; Chen, Changchun; Sun, Shanlei

    2016-05-01

    Regionalization methods, which "trade space for time" by pooling information from different locations in the frequency analysis, are efficient tools to enhance the reliability of extreme quantile estimates. This paper aims at improving the understanding of the regional frequency of extreme precipitation by using regionalization methods, and providing scientific background and practical assistance in formulating regional development strategies for water resources management in one of the most developed and flood-prone regions in China, the Yangtze River Delta (YRD) region. To achieve these goals, the L-moment-based index-flood (LMIF) method, one of the most popular regionalization methods, is used in the regional frequency analysis of extreme precipitation, with special attention paid to inter-site dependence and its influence on the accuracy of quantile estimates, which has not been considered by most studies using the LMIF method. Extensive data screening for stationarity, serial dependence, and inter-site dependence was carried out first. The entire YRD region was then categorized into four homogeneous regions through cluster analysis and homogeneity analysis. Based on goodness-of-fit statistics and L-moment ratio diagrams, generalized extreme-value (GEV) and generalized normal (GNO) distributions were identified as the best-fitting distributions for most of the sub-regions, and estimated quantiles for each region were obtained. Monte Carlo simulation was used to evaluate the accuracy of the quantile estimates taking inter-site dependence into consideration. The results showed that the root-mean-square errors (RMSEs) were larger and the 90 % error bounds were wider with inter-site dependence than without it, for both the regional growth curve and the quantile curve. 
The spatial patterns of extreme precipitation with a return period of 100 years were finally obtained; they indicate two regions with the highest precipitation extremes and a large region with low precipitation extremes. However, the regions with low precipitation extremes are among the most developed and densely populated in the country, where high vulnerability means floods can cause great loss of life and property. The methods and procedures demonstrated in this paper provide a useful reference for frequency analysis of precipitation extremes in large regions, and the findings will be beneficial for flood control and management in the study area.
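
    The LMIF ingredients above (sample L-moments, a GEV fitted by L-moments, quantile estimation) can be sketched as follows, using Hosking's standard approximations; this is a generic single-site illustration, not the authors' regional pooling code.

```python
import numpy as np
from math import gamma, log

def sample_lmoments(x):
    """First three sample L-moments via unbiased probability-weighted moments."""
    x = np.sort(np.asarray(x, float))
    n = len(x)
    j = np.arange(n)
    b0 = x.mean()
    b1 = np.sum(j * x) / (n * (n - 1))
    b2 = np.sum(j * (j - 1) * x) / (n * (n - 1) * (n - 2))
    return b0, 2*b1 - b0, 6*b2 - 6*b1 + b0            # l1, l2, l3

def gev_from_lmoments(l1, l2, l3):
    """GEV parameters (location xi, scale alpha, shape k) from L-moments,
    using Hosking's approximation for the shape parameter."""
    t3 = l3 / l2                                       # L-skewness
    c = 2.0 / (3.0 + t3) - log(2.0) / log(3.0)
    k = 7.8590 * c + 2.9554 * c**2
    alpha = l2 * k / ((1.0 - 2.0**(-k)) * gamma(1.0 + k))
    xi = l1 - alpha * (1.0 - gamma(1.0 + k)) / k
    return xi, alpha, k

def gev_quantile(p, xi, alpha, k):
    """Quantile with non-exceedance probability p (p = 0.99 -> 100-year event)."""
    return xi + alpha * (1.0 - (-log(p))**k) / k

# Sanity check against a Gumbel sample (GEV with shape k -> 0), xi=0, alpha=1:
rng = np.random.default_rng(1)
xi, alpha, k = gev_from_lmoments(*sample_lmoments(rng.gumbel(0.0, 1.0, 20000)))
q100 = gev_quantile(0.99, xi, alpha, k)
```

In the index-flood setting these L-moments would be computed from the pooled, rescaled records of a homogeneous region to give the regional growth curve.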

  10. Comparison of methods for the detection of gravitational waves from unknown neutron stars

    NASA Astrophysics Data System (ADS)

    Walsh, S.; Pitkin, M.; Oliver, M.; D'Antonio, S.; Dergachev, V.; Królak, A.; Astone, P.; Bejger, M.; Di Giovanni, M.; Dorosh, O.; Frasca, S.; Leaci, P.; Mastrogiovanni, S.; Miller, A.; Palomba, C.; Papa, M. A.; Piccinni, O. J.; Riles, K.; Sauter, O.; Sintes, A. M.

    2016-12-01

    Rapidly rotating neutron stars are promising sources of continuous gravitational wave radiation for the LIGO and Virgo interferometers. The majority of neutron stars in our galaxy have not been identified with electromagnetic observations. All-sky searches for isolated neutron stars offer the potential to detect gravitational waves from these unidentified sources. The parameter space of these blind all-sky searches, which also cover a large range of frequencies and frequency derivatives, presents a significant computational challenge. Different methods have been designed to perform these searches within acceptable computational limits. Here we describe the first benchmark in a project to compare the search methods currently available for the detection of unknown isolated neutron stars. The five methods compared here are individually referred to as the PowerFlux, sky Hough, frequency Hough, Einstein@Home, and time-domain F-statistic methods. We employ a mock data challenge to compare the ability of each search method to recover signals simulated assuming a standard signal model. We find similar performance among the four quick-look search methods, while the more computationally intensive search method, Einstein@Home, achieves up to a factor of two higher sensitivity. We find that the absence of a second frequency derivative in the search parameter space does not degrade search sensitivity for signals with physically plausible second frequency derivatives. We also report on the parameter estimation accuracy of each search method, and on the stability of the sensitivity across frequency, frequency derivative, and detector noise.

  11. Three-dimensional dominant frequency mapping using autoregressive spectral analysis of atrial electrograms of patients in persistent atrial fibrillation.

    PubMed

    Salinet, João L; Masca, Nicholas; Stafford, Peter J; Ng, G André; Schlindwein, Fernando S

    2016-03-08

    Areas with high frequency activity within the atrium are thought to be 'drivers' of the rhythm in patients with atrial fibrillation (AF), and ablation of these areas seems to be an effective therapy in eliminating the DF gradient and restoring sinus rhythm. Clinical groups have applied the traditional FFT-based approach to generate three-dimensional dominant frequency (3D DF) maps during electrophysiology (EP) procedures, but the literature on alternative spectral estimation techniques, which can offer better frequency resolution than FFT-based spectral estimation, is limited. Autoregressive (AR) model-based spectral estimation techniques, with emphasis on selection of an appropriate sampling rate and AR model order, were implemented to generate high-density 3D DF maps of atrial electrograms (AEGs) in persistent atrial fibrillation (persAF). For each patient, 2048 simultaneous AEGs were recorded for 20.478 s-long segments in the left atrium (LA) and exported for analysis, together with their anatomical locations. After the DFs were identified using AR-based spectral estimation, they were colour coded to produce sequential 3D DF maps. These maps were systematically compared with maps found using the Fourier-based approach. 3D DF maps can be obtained using AR-based spectral estimation after AEG downsampling (DS), and the resulting maps are very similar to those obtained using FFT-based spectral estimation (mean 90.23 %). There were no significant differences between AR techniques (p = 0.62). The processing time for the AR-based approach was considerably shorter (from 5.44 to 5.05 s) when lower sampling frequencies and model order values were used. Higher levels of DS presented higher rates of DF agreement (sampling frequency of 37.5 Hz). 
We have demonstrated the feasibility of using AR spectral estimation methods for producing 3D DF maps and characterised their differences to the maps produced using the FFT technique, offering an alternative approach for 3D DF computation in human persAF studies.
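
    A minimal sketch of AR-based dominant-frequency estimation using the Yule-Walker (autocorrelation) method; the model order, the 37.5 Hz downsampled rate and the synthetic test signal are illustrative choices, not the study's clinical pipeline.

```python
import numpy as np

def ar_yule_walker(x, order):
    """AR coefficients from the Yule-Walker (autocorrelation) equations."""
    x = x - x.mean()
    n = len(x)
    r = np.correlate(x, x, 'full')[n - 1:n + order] / n       # r[0..order]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:order + 1])

def dominant_frequency(x, fs, order=12, ngrid=2048):
    """Peak of the AR power spectrum 1/|1 - sum_k a_k e^{-2pi i f k/fs}|^2,
    evaluated on a fine frequency grid up to the Nyquist frequency."""
    a = ar_yule_walker(x, order)
    f = np.linspace(0.0, fs / 2.0, ngrid)
    E = np.exp(-2j * np.pi * np.outer(f / fs, np.arange(1, order + 1)))
    spectrum = 1.0 / np.abs(1.0 - E @ a)**2
    return f[np.argmax(spectrum)]

fs = 37.5                                    # Hz, a downsampled AEG rate
t = np.arange(768) / fs                      # ~20.5 s segment
rng = np.random.default_rng(0)
aeg = np.sin(2 * np.pi * 6.0 * t) + 0.3 * rng.standard_normal(t.size)
df = dominant_frequency(aeg, fs)
```

The AR spectrum is evaluated on an arbitrarily fine grid, which is the frequency-resolution advantage over the raw FFT bin spacing noted above.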

  12. Hybrid method for determining the parameters of condenser microphones from measured membrane velocities and numerical calculations.

    PubMed

    Barrera-Figueroa, Salvador; Rasmussen, Knud; Jacobsen, Finn

    2009-10-01

    Typically, numerical calculations of the pressure, free-field, and random-incidence response of a condenser microphone are carried out on the basis of an assumed displacement distribution of the diaphragm of the microphone; the conventional assumption is that the displacement follows a Bessel function. This assumption is probably valid at frequencies below the resonance frequency. However, at higher frequencies the movement of the membrane is heavily coupled with the damping of the air film between membrane and backplate and with resonances in the back chamber of the microphone. A solution to this problem is to measure the velocity distribution of the membrane by means of a non-contact method, such as laser vibrometry. The measured velocity distribution can be used together with a numerical formulation such as the boundary element method for estimating the microphone response and other parameters, e.g., the acoustic center. In this work, such a hybrid method is presented and examined. The velocity distributions of a number of condenser microphones have been determined using a laser vibrometer, and these measured velocity distributions have been used for estimating microphone responses and other parameters. The agreement with experimental data is generally good. The method can be used as an alternative for validating the parameters of the microphones determined by classical calibration techniques.

  13. Separation of arteries and veins in the cerebral cortex using physiological oscillations by optical imaging of intrinsic signal

    NASA Astrophysics Data System (ADS)

    Hu, Dewen; Wang, Yucheng; Liu, Yadong; Li, Ming; Liu, Fayi

    2010-05-01

    An automated method is presented for artery-vein separation in cerebral cortical images recorded with optical imaging of the intrinsic signal. The vessel-type separation method is based on the fact that the spectral distribution of intrinsic physiological oscillations varies from arterial regions to venous regions. In arterial regions, the spectral power is higher in the heartbeat frequency (HF), whereas in venous regions, the spectral power is higher in the respiration frequency (RF). The separation method was begun by extracting the vascular network and its centerline. Then the spectra of the optical intrinsic signals were estimated by the multitaper method. A standard F-test was performed on each discrete frequency point to test the statistical significance at the given level. Four periodic physiological oscillations were examined: HF, RF, and two other eigenfrequencies termed F1 and F2. The separation of arteries and veins was implemented with the fuzzy c-means clustering method and the region-growing approach by utilizing the spectral amplitudes and power-ratio values of the four eigenfrequencies on the vasculature. Subsequently, independent spectral distributions in the arteries, veins, and capillary bed were estimated for comparison, which showed that the spectral distributions of the intrinsic signals were very distinct between the arterial and venous regions.
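
    The band-power comparison at the heart of the separation can be sketched as follows; the heartbeat and respiration bands are assumed values for illustration, not the eigenfrequencies estimated in the paper (which used multitaper spectra and an F-test).

```python
import numpy as np

def bandpower(x, fs, f_lo, f_hi):
    """Periodogram power of x inside the band [f_lo, f_hi] Hz."""
    spec = np.abs(np.fft.rfft(x - x.mean()))**2
    f = np.fft.rfftfreq(len(x), 1.0 / fs)
    return spec[(f >= f_lo) & (f <= f_hi)].sum()

def classify_pixel(x, fs, hf_band=(4.0, 6.0), rf_band=(0.5, 1.5)):
    """'artery' if heartbeat-band power dominates respiration-band power;
    band limits are assumed, illustrative values."""
    return 'artery' if bandpower(x, fs, *hf_band) > bandpower(x, fs, *rf_band) else 'vein'

fs = 50.0                                     # Hz, imaging frame rate (illustrative)
t = np.arange(0.0, 10.0, 1.0 / fs)
artery = np.sin(2 * np.pi * 5.0 * t) + 0.2 * np.sin(2 * np.pi * 1.0 * t)
vein   = 0.2 * np.sin(2 * np.pi * 5.0 * t) + np.sin(2 * np.pi * 1.0 * t)
```

In the paper this per-pixel evidence is then fed to fuzzy c-means clustering and region growing along the extracted vessel centerlines rather than thresholded directly.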

  14. Separation of arteries and veins in the cerebral cortex using physiological oscillations by optical imaging of intrinsic signal.

    PubMed

    Hu, Dewen; Wang, Yucheng; Liu, Yadong; Li, Ming; Liu, Fayi

    2010-01-01

    An automated method is presented for artery-vein separation in cerebral cortical images recorded with optical imaging of the intrinsic signal. The vessel-type separation method is based on the fact that the spectral distribution of intrinsic physiological oscillations varies from arterial regions to venous regions. In arterial regions, the spectral power is higher in the heartbeat frequency (HF), whereas in venous regions, the spectral power is higher in the respiration frequency (RF). The separation method was begun by extracting the vascular network and its centerline. Then the spectra of the optical intrinsic signals were estimated by the multitaper method. A standard F-test was performed on each discrete frequency point to test the statistical significance at the given level. Four periodic physiological oscillations were examined: HF, RF, and two other eigenfrequencies termed F1 and F2. The separation of arteries and veins was implemented with the fuzzy c-means clustering method and the region-growing approach by utilizing the spectral amplitudes and power-ratio values of the four eigenfrequencies on the vasculature. Subsequently, independent spectral distributions in the arteries, veins, and capillary bed were estimated for comparison, which showed that the spectral distributions of the intrinsic signals were very distinct between the arterial and venous regions.

  15. Complex frequency analysis Tornillo earthquake Lokon Volcano in North Sulawesi period 1 January-17 March 2016

    NASA Astrophysics Data System (ADS)

    Hasanah, Intan; Syahbana, Devy Kamil; Santoso, Agus; Palupi, Indriati Retno

    2017-07-01

    Indonesia has 127 active volcanoes, giving the country very active seismicity. This study analyses the temporal variation of the complex frequencies of Tornillo earthquakes at Lokon Volcano, North Sulawesi, during the period from January 1 to March 17, 2016. The analysis uses the Sompi method, based on a homogeneous autoregressive (AR) equation, whose complex-frequency parameters are the oscillation frequency (f) and the quality factor (Q) characterising the decay of the wave coda. The purpose of the research is to understand the dynamics of fluids inside Lokon Volcano during this period. The results allow estimation of the fluid dynamics inside the volcano and identification of the fluid content and the dimensions of the crustal source. The Tornillo earthquakes in this period have Q values distributed below 200 and frequencies distributed between 3 and 4 Hz; they occurred at shallow depths of less than 2 km, clustered towards the Tompaluan Crater. From the complex frequency analysis, it is estimated that if an eruption were to occur at Lokon Volcano in this period, it would be a phreatic eruption, with an estimated fluid composition of misty gas with a gas mass fraction ranging between 0 and 100%. Another possible fluid contained in Lokon Volcano is water vapor with a gas volume fraction in the range 10-90%.
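
    The complex-frequency idea (an oscillation frequency f plus a quality factor Q describing the coda decay) can be sketched by fitting an AR model to a decaying sinusoid and reading f and Q off the dominant pole; this is a generic least-squares AR sketch with illustrative Tornillo-like values, not the Sompi implementation.

```python
import numpy as np

def complex_frequency(x, dt, order=2):
    """Least-squares AR fit; the dominant oscillatory pole z = e^{(sigma+i*omega)dt}
    gives the oscillation frequency f = omega/2pi and quality factor Q = -omega/(2*sigma)."""
    n = len(x)
    A = np.column_stack([x[order - 1 - k: n - 1 - k] for k in range(order)])
    a, *_ = np.linalg.lstsq(A, x[order:], rcond=None)
    poles = np.roots(np.concatenate(([1.0], -a)))
    z = poles[np.argmax(np.abs(poles.imag))]        # dominant oscillatory pole
    sigma = np.log(np.abs(z)) / dt                  # growth rate (negative: decay)
    omega = abs(np.angle(z)) / dt
    return omega / (2 * np.pi), -omega / (2 * sigma)   # (f, Q)

dt = 0.01
t = np.arange(0.0, 10.0, dt)
f0, Q0 = 3.5, 150.0                                 # illustrative Tornillo-like values
w0 = 2 * np.pi * f0
x = np.exp(-w0 / (2 * Q0) * t) * np.cos(w0 * t)     # decaying harmonic oscillation
f_est, Q_est = complex_frequency(x, dt)
```

Low Q (strong decay) points to a gas-rich resonating crack, which is why the f-Q distribution constrains the fluid content.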

  16. Complex mode indication function and its applications to spatial domain parameter estimation

    NASA Astrophysics Data System (ADS)

    Shih, C. Y.; Tsuei, Y. G.; Allemang, R. J.; Brown, D. L.

    1988-10-01

    This paper introduces the concept of the Complex Mode Indication Function (CMIF) and its application in spatial domain parameter estimation. The concept of CMIF is developed by performing singular value decomposition (SVD) of the Frequency Response Function (FRF) matrix at each spectral line. The CMIF is defined as the eigenvalues, which are the squares of the singular values, solved from the normal matrix formed from the FRF matrix, [H(jω)]^H[H(jω)], at each spectral line. The CMIF appears to be a simple and efficient method for identifying the modes of a complex system. The CMIF identifies modes by showing the physical magnitude of each mode and the damped natural frequency for each root. Since multiple-reference data are used in CMIF, repeated roots can be detected. The CMIF also gives global modal parameters, such as damped natural frequencies, mode shapes and modal participation vectors. Since CMIF works in the spatial domain, unevenly spaced frequency data, such as data from spatial sine testing, can be used. A second-stage procedure for accurate damped natural frequency and damping estimation, as well as mode shape scaling, is also discussed in this paper.
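
    The CMIF computation itself is compact: the squared singular values of the FRF matrix at each spectral line, which peak at the damped natural frequencies. The sketch below builds a synthetic two-mode, two-reference FRF matrix from illustrative modal values and evaluates the CMIF.

```python
import numpy as np

# Synthetic 2-input/2-output FRF matrix from two modes (illustrative values)
f = np.linspace(1.0, 50.0, 991)                      # spectral lines, Hz
w = 2 * np.pi * f
modes = [(2 * np.pi * 12.0, 0.02, np.array([1.0, 0.6])),    # (wn, zeta, shape)
         (2 * np.pi * 31.0, 0.03, np.array([0.5, -1.0]))]

H = np.zeros((w.size, 2, 2), dtype=complex)
for wn, zeta, phi in modes:
    # modal superposition: residue matrix over a damped resonance denominator
    H += np.outer(phi, phi) / (wn**2 - w[:, None, None]**2
                               + 2j * zeta * wn * w[:, None, None])

# CMIF: eigenvalues of [H]^H [H], i.e. squared singular values, per spectral line
cmif = np.linalg.svd(H, compute_uv=False)**2         # shape (991, 2), descending
```

Plotting the CMIF curves against frequency makes each mode visible as a peak; a crossing of the second curve above the first's tail is how repeated or close roots show up with multiple references.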

  17. The Gaussian atmospheric transport model and its sensitivity to the joint frequency distribution and parametric variability.

    PubMed

    Hamby, D M

    2002-01-01

    Reconstructed meteorological data are often used in some form of long-term wind trajectory models for estimating the historical impacts of atmospheric emissions. Meteorological data for the straight-line Gaussian plume model are put into a joint frequency distribution, a three-dimensional array describing atmospheric wind direction, speed, and stability. Methods using the Gaussian model and joint frequency distribution inputs provide reasonable estimates of downwind concentration and have been shown to be accurate to within a factor of four. We have used multiple joint frequency distributions and probabilistic techniques to assess the Gaussian plume model and determine concentration-estimate uncertainty and model sensitivity. We examine the straight-line Gaussian model while calculating both sector-averaged and annual-averaged relative concentrations at various downwind distances. The sector-average concentration model was found to be most sensitive to wind speed, followed by vertical dispersion (σz), the importance of which increases as stability increases. The Gaussian model is not sensitive to stack height uncertainty. Precision of the frequency data appears to be most important to meteorological inputs when calculations are made for near-field receptors, increasing as stack height increases.
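
    A sketch of the sector-averaged ground-level concentration used with joint frequency distributions: the crosswind-integrated Gaussian plume spread uniformly over one 22.5° sector arc. Parameter values are illustrative, and σz would normally come from a stability-dependent dispersion curve.

```python
import numpy as np

def sector_avg_concentration(Q, u, x, h, sigma_z, dtheta=np.radians(22.5)):
    """Sector-averaged ground-level concentration for a stack of height h:
    the crosswind-integrated Gaussian plume, spread uniformly over the
    sector arc x*dtheta (one 22.5-degree joint-frequency sector).
    Q: emission rate, u: wind speed, x: downwind distance, sigma_z: vertical
    dispersion at x. Consistent units are left to the caller."""
    return (np.sqrt(2.0 / np.pi) * Q / (sigma_z * u * x * dtheta)
            * np.exp(-h**2 / (2.0 * sigma_z**2)))

# The model's inverse dependence on wind speed (its most sensitive input):
c_slow = sector_avg_concentration(Q=1.0, u=2.0, x=1000.0, h=50.0, sigma_z=30.0)
c_fast = sector_avg_concentration(Q=1.0, u=4.0, x=1000.0, h=50.0, sigma_z=30.0)
```

Summing this expression over the joint frequency distribution's direction-speed-stability bins, weighted by their frequencies, gives the annual-average concentration in each sector.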

  18. Estimating flood magnitude and frequency for urban and small, rural streams in Georgia, South Carolina, and North Carolina, 2011

    USGS Publications Warehouse

    Feaster, Toby D.; Gotvald, Anthony J.; Weaver, J. Curtis

    2014-01-01

    Reliable estimates of the magnitude and frequency of floods are essential for the design of transportation and water-conveyance structures, flood insurance studies, and flood-plain management. Flood-frequency estimates are particularly important in densely populated urban areas. The U.S. Geological Survey (USGS) used a multistate approach to update methods for determining the magnitude and frequency of floods in urban and small, rural streams that are not substantially affected by regulation or tidal fluctuations in Georgia, South Carolina, and North Carolina (Feaster and others, 2014). The multistate approach has the advantage over a single-state approach of increasing the number of streamflow-gaging stations (streamgages) available for analysis, expanding the geographical coverage to allow application of regional regression equations across state boundaries, and building on a previous flood-frequency investigation of rural streamgages in the Southeastern United States. This investigation was funded as part of a cooperative program of water-resources investigations between the USGS, the South Carolina Department of Transportation, and the North Carolina Department of Transportation. In addition, much of the data and information for the Georgia streamgages was funded through a similar cooperative program with the Georgia Department of Transportation.

  19. A satellite-based radar wind sensor

    NASA Technical Reports Server (NTRS)

    Xin, Weizhuang

    1991-01-01

    The objective is to investigate the application of Doppler radar systems for global wind measurement. A model of the satellite-based radar wind sounder (RAWS) is discussed, and many critical problems in the design process, such as the antenna scan pattern, tracking the Doppler shift caused by satellite motion, and backscattering of radar signals from different types of clouds, are discussed along with their computer simulations. In addition, algorithms for measuring the mean frequency of radar echoes, such as the Fast Fourier Transform (FFT) estimator, the covariance estimator, and estimators based on autoregressive models, are discussed. Monte Carlo computer simulations were used to compare the performance of these algorithms. Anti-alias methods are discussed for the FFT and the autoregressive methods. Several algorithms for reducing radar ambiguity were studied, such as random phase coding methods and staggered pulse repetition frequency (PRF) methods. Computer simulations showed that these methods are not applicable to the RAWS because of the broad spectral widths of the radar echoes from clouds. A waveform modulation method using the concept of spread spectrum and correlation detection was developed to resolve the radar ambiguity. Radar ambiguity functions were used to analyze the effective signal-to-noise ratios for the waveform modulation method. The results showed that, with a suitable bandwidth product and modulation of the waveform, this method can achieve the desired maximum range and maximum frequency of the radar system.
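
    The covariance (pulse-pair) mean-frequency estimator mentioned above is compact enough to sketch; it also shows the ambiguity problem, since Doppler shifts beyond ±1/(2·PRT) alias. The PRT and Doppler frequencies below are illustrative values.

```python
import numpy as np

def pulse_pair_frequency(x, prt):
    """Covariance (pulse-pair) estimator: mean Doppler frequency from the
    phase of the lag-1 autocovariance of the complex echo sequence x.
    The unambiguous interval is +/- 1/(2*prt)."""
    r1 = np.sum(np.conj(x[:-1]) * x[1:])
    return np.angle(r1) / (2 * np.pi * prt)

prt = 1e-3                                    # pulse repetition time, s
n = np.arange(64)
echo = np.exp(2j * np.pi * 180.0 * n * prt)   # 180 Hz Doppler shift
f_est = pulse_pair_frequency(echo, prt)       # recovers 180 Hz

aliased = np.exp(2j * np.pi * 700.0 * n * prt)
f_alias = pulse_pair_frequency(aliased, prt)  # 700 Hz aliases to -300 Hz
```

The aliasing of 700 Hz onto -300 Hz is exactly the ambiguity that staggered-PRF, random phase coding, and the waveform modulation method attempt to resolve.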

  20. Frequency of Guns in the Households of High School Seniors

    ERIC Educational Resources Information Center

    Coker, Ann L.; Bush, Heather M.; Follingstad, Diane R.; Brancato, Candace J.

    2017-01-01

    Background: In 2013, President Obama lifted the federal ban on gun violence research. The current study provides one of the first reports to estimate household gun ownership as reported by youth. Methods: In this cohort study of 3,006 high school seniors from 24 schools, we examined the frequency of household gun ownership. Results: About 65%…

  1. Pulse-echo sound speed estimation using second order speckle statistics

    NASA Astrophysics Data System (ADS)

    Rosado-Mendez, Ivan M.; Nam, Kibo; Madsen, Ernest L.; Hall, Timothy J.; Zagzebski, James A.

    2012-10-01

    This work presents a phantom-based evaluation of a method for estimating soft-tissue speeds of sound using pulse-echo data. The method is based on the improvement of image sharpness as the sound speed value assumed during beamforming is systematically matched to the tissue sound speed. The novelty of this work is the quantitative assessment of image sharpness by measuring the resolution cell size from the autocovariance matrix for echo signals from a random distribution of scatterers, thus eliminating the need for strong reflectors. Envelope data were obtained from a fatty-tissue mimicking (FTM) phantom (sound speed = 1452 m/s) and a nonfatty-tissue mimicking (NFTM) phantom (1544 m/s) scanned with a linear array transducer on a clinical ultrasound system. Dependence on pulse characteristics was tested by varying the pulse frequency and amplitude. On average, sound speed estimation errors were -0.7% for the FTM phantom and -1.1% for the NFTM phantom. In general, no significant difference was found among errors from different pulse frequencies and amplitudes. The method is currently being optimized for the differentiation of diffuse liver diseases.

  2. A prevalence-based association test for case-control studies.

    PubMed

    Ryckman, Kelli K; Jiang, Lan; Li, Chun; Bartlett, Jacquelaine; Haines, Jonathan L; Williams, Scott M

    2008-11-01

    Genetic association is often determined in case-control studies by the differential distribution of alleles or genotypes. Recent work has demonstrated that association can also be assessed by deviations from the expected distributions of alleles or genotypes. Specifically, multiple methods motivated by the principles of Hardy-Weinberg equilibrium (HWE) have been developed. However, these methods do not take into account many of the assumptions of HWE. Therefore, we have developed a prevalence-based association test (PRAT) as an alternative method for detecting association in case-control studies. This method, also motivated by the principles of HWE, uses an estimated population allele frequency to generate expected genotype frequencies instead of using the case and control frequencies separately. Our method often has greater power, under a wide variety of genetic models, to detect association than genotypic, allelic or Cochran-Armitage trend association tests. Therefore, we propose PRAT as a powerful alternative method of testing for association.
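
    A sketch in the spirit of a prevalence-based test: pool cases and controls to estimate the population allele frequency, then measure each group's deviation from the HWE-expected genotype counts at that pooled frequency. This is an illustrative statistic, not the published PRAT.

```python
import numpy as np

def prevalence_style_stat(case, control):
    """Illustrative prevalence-based statistic (not the published PRAT):
    pool cases and controls to estimate the population allele frequency p,
    then sum chi-square deviations of each group's observed genotype counts
    [AA, Aa, aa] from the HWE-expected counts at the pooled p."""
    case = np.asarray(case, float)
    control = np.asarray(control, float)
    pooled = case + control
    p = (2 * pooled[0] + pooled[1]) / (2 * pooled.sum())   # pooled allele frequency
    stat = 0.0
    for grp in (case, control):
        expected = grp.sum() * np.array([p**2, 2*p*(1 - p), (1 - p)**2])
        stat += ((grp - expected)**2 / expected).sum()
    return stat

# Under the null the statistic is small; under association it grows:
null_stat = prevalence_style_stat([250, 500, 250], [250, 500, 250])
assoc_stat = prevalence_style_stat([400, 400, 200], [250, 500, 250])
```

Comparing observed genotypes to expectations at a pooled frequency, rather than contrasting cases against controls directly, is the design choice that distinguishes this family of tests from genotypic, allelic, or trend tests.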

  3. ARK: Aggregation of Reads by K-Means for Estimation of Bacterial Community Composition.

    PubMed

    Koslicki, David; Chatterjee, Saikat; Shahrivar, Damon; Walker, Alan W; Francis, Suzanna C; Fraser, Louise J; Vehkaperä, Mikko; Lan, Yueheng; Corander, Jukka

    2015-01-01

    Estimation of bacterial community composition from high-throughput sequenced 16S rRNA gene amplicons is a key task in microbial ecology. Since the sequence data from each sample typically consist of a large number of reads and are adversely impacted by different levels of biological and technical noise, accurate analysis of such large datasets is challenging. There has been a recent surge of interest in using compressed sensing inspired and convex-optimization based methods to solve the estimation problem for bacterial community composition. These methods typically rely on summarizing the sequence data by frequencies of low-order k-mers and matching this information statistically with a taxonomically structured database. Here we show that the accuracy of the resulting community composition estimates can be substantially improved by aggregating the reads from a sample with an unsupervised machine learning approach prior to the estimation phase. The aggregation of reads is a pre-processing approach in which a standard K-means clustering algorithm partitions a large set of reads into subsets at reasonable computational cost, providing several vectors of first-order statistics instead of a single statistical summary in terms of k-mer frequencies. The output of the clustering is then processed further to obtain the final estimate for each sample. The resulting method is called Aggregation of Reads by K-means (ARK), and it is based on a statistical argument via mixture density formulation. ARK is found to improve the fidelity and robustness of several recently introduced methods, with only a modest increase in computational complexity. An open source, platform-independent implementation of the method in the Julia programming language is freely available at https://github.com/dkoslicki/ARK. A Matlab implementation is available at http://www.ee.kth.se/ctsoftware.
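
    The pre-processing step can be sketched as k-mer frequency vectors clustered with a minimal K-means, whose per-cluster means provide the "several vectors of first-order statistics" described above; the toy reads and the deterministic farthest-first initialisation are illustrative, not the ARK implementation.

```python
import itertools
import numpy as np

def kmer_freqs(read, k=2):
    """Frequency vector of overlapping k-mers over the ACGT alphabet."""
    kmers = [''.join(p) for p in itertools.product('ACGT', repeat=k)]
    index = {m: i for i, m in enumerate(kmers)}
    v = np.zeros(len(kmers))
    for i in range(len(read) - k + 1):
        v[index[read[i:i + k]]] += 1
    return v / v.sum()

def kmeans(X, n_clusters, iters=50):
    """Minimal Lloyd's K-means with deterministic farthest-first seeding."""
    C = [X[0]]
    for _ in range(n_clusters - 1):
        d = np.min([((X - c)**2).sum(axis=1) for c in C], axis=0)
        C.append(X[np.argmax(d)])         # seed with the farthest remaining point
    C = np.array(C)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - C[None])**2).sum(axis=-1), axis=1)
        C = np.array([X[labels == j].mean(axis=0) if (labels == j).any() else C[j]
                      for j in range(n_clusters)])
    return labels, C

reads = ['A' * 30] * 3 + ['ACGT' * 8] * 3          # two toy read populations
X = np.array([kmer_freqs(r) for r in reads])
labels, centroids = kmeans(X, 2)                   # per-cluster first-order statistics
```

Each centroid is a k-mer frequency vector summarising one read subset; the downstream composition estimator is then run on these several vectors and the results combined.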

  4. A Method for Rapid Measurement of Contrast Sensitivity on Mobile Touch-Screens

    NASA Technical Reports Server (NTRS)

    Mulligan, Jeffrey B.

    2016-01-01

    Touch-screen displays in cell phones and tablet computers are now pervasive, making them an attractive option for vision testing outside of the laboratory or clinic. Here we describe a novel method in which subjects use a finger swipe to indicate the transition from visible to invisible on a grating which is swept in both contrast and frequency. Because a single image can be swiped in about a second, it is practical to use a series of images to zoom in on particular ranges of contrast or frequency, both to increase the accuracy of the measurements and to obtain an estimate of the reliability of the subject. Sensitivities to chromatic and spatio-temporal modulations are easily measured using the same method. A prototype has been developed for Apple Computer's iPad/iPod/iPhone family of devices, implemented using an open-source scripting environment known as QuIP (QUick Image Processing, http://hsi.arc.nasa.gov/groups/scanpath/research.php). Preliminary data show good agreement with estimates obtained from traditional psychophysical methods as well as newer rapid estimation techniques. Issues relating to device calibration are also discussed.

  5. Evaluation of Approaches to Deal with Low-Frequency Nuisance Covariates in Population Pharmacokinetic Analyses.

    PubMed

    Lagishetty, Chakradhar V; Duffull, Stephen B

    2015-11-01

    Clinical studies include occurrences of rare variables, like genotypes, which due to their frequency and strength render their effects difficult to estimate from a dataset. Variables that influence the estimated value of a model-based parameter are termed covariates. It is often difficult to determine if such an effect is significant, since type I error can be inflated when the covariate is rare. Their presence may have either an insubstantial effect on the parameters of interest, hence be ignorable, or conversely they may be influential and therefore non-ignorable. When these covariate effects cannot be estimated due to limited power yet are non-ignorable, they are considered nuisance covariates, in that they have to be considered but, due to type I error, are of limited interest. This study assesses methods of handling nuisance covariate effects. The specific objectives include (1) calibrating the frequency of a covariate that is associated with type I error inflation, (2) calibrating the strength that renders it non-ignorable and (3) evaluating methods for handling these non-ignorable covariates in a nonlinear mixed effects model setting. Type I error was determined for the Wald test. Methods considered for handling the nuisance covariate effects were case deletion, Box-Cox transformation and inclusion of a specific fixed-effects parameter. Non-ignorable nuisance covariates were found to be effectively handled through addition of a fixed-effect parameter.

  6. Euphausiid distribution along the Western Antarctic Peninsula—Part A: Development of robust multi-frequency acoustic techniques to identify euphausiid aggregations and quantify euphausiid size, abundance, and biomass

    NASA Astrophysics Data System (ADS)

    Lawson, Gareth L.; Wiebe, Peter H.; Stanton, Timothy K.; Ashjian, Carin J.

    2008-02-01

    Methods were refined and tested for identifying the aggregations of Antarctic euphausiids (Euphausia spp.) and then estimating euphausiid size, abundance, and biomass, based on multi-frequency acoustic survey data. A threshold level of volume backscattering strength for distinguishing euphausiid aggregations from other zooplankton was derived on the basis of published measurements of euphausiid visual acuity and estimates of the minimum density of animals over which an individual can maintain visual contact with its nearest neighbor. Differences in mean volume backscattering strength at 120 and 43 kHz further served to distinguish euphausiids from other sources of scattering. An inversion method was then developed to estimate simultaneously the mean length and density of euphausiids in these acoustically identified aggregations based on measurements of mean volume backscattering strength at four frequencies (43, 120, 200, and 420 kHz). The methods were tested at certain locations within an acoustically surveyed continental shelf region in and around Marguerite Bay, west of the Antarctic Peninsula, where independent evidence was also available from net and video systems. Inversion results at these test sites were similar to net samples for estimated length, but acoustic estimates of euphausiid density exceeded those from nets by one to two orders of magnitude, likely due primarily to avoidance and to a lesser extent to differences in the volumes sampled by the two systems. In a companion study, these methods were applied to the full acoustic survey data in order to examine the distribution of euphausiids in relation to aspects of the physical and biological environment [Lawson, G.L., Wiebe, P.H., Ashjian, C.J., Stanton, T.K., 2008. Euphausiid distribution along the Western Antarctic Peninsula—Part B: Distribution of euphausiid aggregations and biomass, and associations with environmental features. Deep-Sea Research II, this issue [doi:10.1016/j.dsr2.2007.11.014]].
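
    The two acoustic criteria described above (a backscattering threshold plus a 120-43 kHz dB-difference window) can be sketched as a simple mask; the threshold and window values below are illustrative placeholders, not the paper's derived values.

```python
import numpy as np

def euphausiid_mask(sv120, sv43, sv_thresh=-70.0, dsv_window=(2.0, 14.0)):
    """Flag acoustic cells as euphausiid aggregations: volume backscattering
    strength (Sv, dB) at 120 kHz above a threshold, and the 120-43 kHz dB
    difference inside a window expected for krill-like scatterers.
    Threshold and window values are illustrative."""
    dsv = sv120 - sv43
    return (sv120 > sv_thresh) & (dsv >= dsv_window[0]) & (dsv <= dsv_window[1])

sv120 = np.array([-65.0, -65.0, -80.0])    # dB: krill-like, fish-like, weak scatter
sv43  = np.array([-73.0, -66.0, -90.0])
mask = euphausiid_mask(sv120, sv43)        # [True, False, False]
```

Cells passing the mask would then be handed to the four-frequency inversion for simultaneous length and density estimation.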

  7. A Statistical Framework for the Functional Analysis of Metagenomes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharon, Itai; Pati, Amrita; Markowitz, Victor

    2008-10-01

    Metagenomic studies consider the genetic makeup of microbial communities as a whole, rather than their individual member organisms. The functional and metabolic potential of microbial communities can be analyzed by comparing the relative abundance of gene families in their collective genomic sequences (metagenome) under different conditions. Such comparisons require accurate estimation of gene family frequencies. The authors present a statistical framework for assessing these frequencies based on the Lander-Waterman theory, developed originally for Whole Genome Shotgun (WGS) sequencing projects. They also provide a novel method for assessing the reliability of the estimates, which can be used to remove seemingly unreliable measurements. They tested their method on a wide range of datasets, including simulated genomes and real WGS data from whole-genome sequencing projects. Results suggest that their framework corrects inherent biases in accepted methods and provides a good approximation to the true statistics of gene families in WGS projects.
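    The estimation problem can be illustrated with a deliberately simplified stand-in for the Lander-Waterman-based framework: a naive relative-abundance estimate plus a Poisson-model standard error used as a reliability score. The read counts below are hypothetical, and this is not the authors' actual estimator.

```python
import math

def family_frequency(reads_family, reads_total):
    """Naive relative-abundance estimate of a gene family, with a
    Poisson-based standard error as a crude reliability score.
    Simplified stand-in for the framework in the abstract."""
    p = reads_family / reads_total
    # Under a Poisson model for read counts, the SE of the count is sqrt(n),
    # so the SE of the frequency estimate is sqrt(n) / total.
    se = math.sqrt(reads_family) / reads_total if reads_family > 0 else float("inf")
    return p, se

p, se = family_frequency(400, 100000)
```

    A reliability filter in this spirit would drop families whose standard error is large relative to the estimate itself.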

  8. Low Frequency Error Analysis and Calibration for High-Resolution Optical Satellite's Uncontrolled Geometric Positioning

    NASA Astrophysics Data System (ADS)

    Wang, Mi; Fang, Chengcheng; Yang, Bo; Cheng, Yufeng

    2016-06-01

    The low frequency error is a key factor which has affected uncontrolled geometry processing accuracy of the high-resolution optical image. To guarantee the geometric quality of imagery, this paper presents an on-orbit calibration method for the low frequency error based on geometric calibration field. Firstly, we introduce the overall flow of low frequency error on-orbit analysis and calibration, which includes optical axis angle variation detection of star sensor, relative calibration among star sensors, multi-star sensor information fusion, low frequency error model construction and verification. Secondly, we use optical axis angle change detection method to analyze the law of low frequency error variation. Thirdly, we respectively use the method of relative calibration and information fusion among star sensors to realize the datum unity and high precision attitude output. Finally, we realize the low frequency error model construction and optimal estimation of model parameters based on DEM/DOM of geometric calibration field. To evaluate the performance of the proposed calibration method, a certain type satellite's real data is used. Test results demonstrate that the calibration model in this paper can well describe the law of the low frequency error variation. The uncontrolled geometric positioning accuracy of the high-resolution optical image in the WGS-84 Coordinate Systems is obviously improved after the step-wise calibration.

  9. Mass detection, localization and estimation for wind turbine blades based on statistical pattern recognition

    NASA Astrophysics Data System (ADS)

    Colone, L.; Hovgaard, M. K.; Glavind, L.; Brincker, R.

    2018-07-01

    A method for mass change detection on wind turbine blades using natural frequencies is presented. The approach is based on two statistical tests. The first test decides if there is a significant mass change and the second test is a statistical group classification based on Linear Discriminant Analysis. The frequencies are identified by means of Operational Modal Analysis using natural excitation. Based on the assumption of Gaussianity of the frequencies, a multi-class statistical model is developed by combining finite element model sensitivities in 10 classes of change location on the blade, the smallest area being 1/5 of the span. The method is experimentally validated for a full scale wind turbine blade in a test setup and loaded by natural wind. Mass change from natural causes was imitated with sand bags and the algorithm was observed to perform well with an experimental detection rate of 1, localization rate of 0.88 and mass estimation rate of 0.72.
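    The second of the two tests, classification by Linear Discriminant Analysis, can be sketched with plain numpy. The two well-separated synthetic classes of natural-frequency shifts below are illustrative stand-ins for the paper's ten classes derived from finite element model sensitivities.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical natural-frequency shifts (Hz) for two mass-location classes;
# the real method uses FE-model sensitivities for 10 classes.
class_a = rng.normal([0.10, 0.02], 0.01, size=(30, 2))
class_b = rng.normal([0.03, 0.12], 0.01, size=(30, 2))

X = np.vstack([class_a, class_b])
y = np.array([0] * 30 + [1] * 30)

# Linear Discriminant Analysis with a pooled covariance estimate.
mu = np.array([X[y == k].mean(axis=0) for k in (0, 1)])
pooled = sum(np.cov(X[y == k].T) for k in (0, 1)) / 2
w = np.linalg.solve(pooled, mu[1] - mu[0])       # discriminant direction
threshold = w @ (mu[0] + mu[1]) / 2

predict = (X @ w > threshold).astype(int)
accuracy = (predict == y).mean()
```

    With well-separated classes the linear discriminant assigns every sample correctly; the experimental localization rate of 0.88 reflects the much harder 10-class problem with overlapping classes.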

  10. Perception of fore-and-aft whole-body vibration intensity measured by two methods.

    PubMed

    Forta, Nazım Gizem; Schust, Marianne

    2015-01-01

    This experimental study investigated the perception of fore-and-aft whole-body vibration intensity using cross-modality matching (CM) and magnitude estimation (ME) methods. Thirteen subjects were seated on a rigid seat without a backrest and exposed to sinusoidal stimuli from 0.8 to 12.5 Hz and 0.4 to 1.6 m s⁻² r.m.s. The Stevens exponents did not significantly depend on vibration frequency or the measurement method. The ME frequency weightings depended significantly on vibration frequency, but the CM weightings did not. Using the CM and ME weightings would result in higher weighted exposures than those calculated using the ISO 2631-1 (1997) Wd. Compared with ISO Wk, the CM- and ME-weighted exposures would be greater at 1.6 Hz and smaller above that frequency. The CM and ME frequency weightings based on the median ratings for the reference vibration condition did not differ significantly. The lack of a method effect for weightings and for Stevens exponents suggests that the findings from the two methods are comparable. Frequency weighting curves for seated subjects for x-axis whole-body vibration were derived from an experiment using two different measurement methods and were compared with the Wd and Wk weighting curves in ISO 2631-1 (1997).
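    The Stevens exponent mentioned above comes from fitting the power law ψ = k·aᵉ between perceived intensity ψ and vibration magnitude a; on log-log axes this is a straight-line fit. A sketch with synthetic ratings (the exponent 0.9 and the magnitudes are illustrative, not the study's data):

```python
import numpy as np

# Vibration magnitudes (m/s^2 r.m.s.) and synthetic magnitude-estimation
# ratings following an assumed Stevens power law with exponent 0.9.
a = np.array([0.4, 0.566, 0.8, 1.131, 1.6])
ratings = 100 * (a / 0.8) ** 0.9

# Log-log linear regression: slope = Stevens exponent e, intercept = log k.
e, log_k = np.polyfit(np.log(a), np.log(ratings), 1)
```

    With real subject data, the scatter of the ratings around the fitted line determines the uncertainty of the exponent estimate.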

  11. Task-oriented comparison of power spectral density estimation methods for quantifying acoustic attenuation in diagnostic ultrasound using a reference phantom method.

    PubMed

    Rosado-Mendez, Ivan M; Nam, Kibo; Hall, Timothy J; Zagzebski, James A

    2013-07-01

    Reported here is a phantom-based comparison of methods for determining the power spectral density (PSD) of ultrasound backscattered signals. Those power spectral density values are then used to estimate parameters describing α(f), the frequency dependence of the acoustic attenuation coefficient. Phantoms were scanned with a clinical system equipped with a research interface to obtain radiofrequency echo data. Attenuation, modeled as a power law α(f) = α0 f^β, was estimated using a reference phantom method. The power spectral density was estimated using the short-time Fourier transform (STFT), Welch's periodogram, and Thomson's multitaper technique, and performance was analyzed when limiting the size of the parameter-estimation region. Errors were quantified by the bias and standard deviation of the α0 and β estimates, and by the overall power-law fit error (FE). For parameter estimation regions larger than ~34 pulse lengths (~1 cm for this experiment), an overall power-law FE of 4% was achieved with all spectral estimation methods. With smaller parameter estimation regions as in parametric image formation, the bias and standard deviation of the α0 and β estimates depended on the size of the parameter estimation region. Here, the multitaper method reduced the standard deviation of the α0 and β estimates compared with those using the other techniques. The results provide guidance for choosing methods for estimating the power spectral density in quantitative ultrasound methods.
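    The two compared PSD estimators easiest to reproduce, Welch's periodogram and a basic multitaper average over DPSS (Slepian) tapers, can be sketched with scipy. The test signal, time-bandwidth product NW, and taper count are assumptions for illustration, not the study's settings.

```python
import numpy as np
from scipy.signal import welch
from scipy.signal.windows import dpss

fs = 1000.0
t = np.arange(4096) / fs
rng = np.random.default_rng(1)
x = np.sin(2 * np.pi * 100 * t) + 0.5 * rng.standard_normal(t.size)

# Welch periodogram (scipy built-in, averaged modified periodograms).
f_w, p_w = welch(x, fs=fs, nperseg=1024)

# Simple multitaper estimate: average periodograms taken with
# orthogonal DPSS tapers; NW = 4 and 7 tapers are assumed values.
n = x.size
tapers = dpss(n, NW=4, Kmax=7)                 # shape (7, n)
specs = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
p_mt = specs.mean(axis=0) / fs
f_mt = np.fft.rfftfreq(n, d=1 / fs)

peak_welch = f_w[np.argmax(p_w)]
peak_mt = f_mt[np.argmax(p_mt)]
```

    Both estimators locate the 100 Hz line; the multitaper average trades a slightly broader peak for lower variance, which is the property the study exploited for small parameter-estimation regions.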

  12. Joint Entropy for Space and Spatial Frequency Domains Estimated from Psychometric Functions of Achromatic Discrimination

    PubMed Central

    Silveira, Vladímir de Aquino; Souza, Givago da Silva; Gomes, Bruno Duarte; Rodrigues, Anderson Raiol; Silveira, Luiz Carlos de Lima

    2014-01-01

    We used psychometric functions to estimate the joint entropy for space discrimination and spatial frequency discrimination. Space discrimination was taken as discrimination of spatial extent. Seven subjects were tested. Gábor functions comprising unidimensional sinusoidal gratings (0.4, 2, and 10 cpd) and bidimensional Gaussian envelopes (1°) were used as reference stimuli. The experiment comprised the comparison between reference and test stimuli that differed in the grating's spatial frequency or the envelope's standard deviation. We tested 21 different envelope standard deviations around the reference standard deviation to study spatial extent discrimination and 19 different grating spatial frequencies around the reference spatial frequency to study spatial frequency discrimination. Two series of psychometric functions were obtained for 2%, 5%, 10%, and 100% stimulus contrast. The psychometric function data points for spatial extent discrimination or spatial frequency discrimination were fitted with Gaussian functions using the least-squares method, and the spatial extent and spatial frequency entropies were estimated from the standard deviations of these Gaussian functions. Then, joint entropy was obtained by multiplying the square root of the space extent entropy times the spatial frequency entropy. We compared our results to the theoretical minimum for unidimensional Gábor functions, 1/4π or 0.0796. At low and intermediate spatial frequencies and high contrasts, joint entropy reached levels below the theoretical minimum, suggesting non-linear interactions between two or more visual mechanisms. We concluded that non-linear interactions of visual pathways, such as the M and P pathways, could explain joint entropy values below the theoretical minimum at low and intermediate spatial frequencies and high contrasts. These non-linear interactions might be at work at intermediate and high contrasts at all spatial frequencies, since there was a substantial decrease in joint entropy for these stimulus conditions when contrast was raised. PMID:24466158
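    The entropy-from-psychometric-function step can be illustrated as follows: fit a Gaussian to synthetic discrimination data and derive an entropy from its standard deviation. The Gaussian differential entropy used here is one common choice, not necessarily the exact expression the authors used, and the data are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, a, mu, sigma):
    return a * np.exp(-(x - mu) ** 2 / (2 * sigma ** 2))

# Hypothetical psychometric data: response rates peaking at the
# reference spatial frequency of 2 cpd, over 19 test frequencies.
x = np.linspace(1.0, 3.0, 19)
rng = np.random.default_rng(2)
y = gaussian(x, 1.0, 2.0, 0.25) + 0.02 * rng.standard_normal(x.size)

# Least-squares Gaussian fit, as in the abstract.
(a, mu, sigma), _ = curve_fit(gaussian, x, y, p0=[1.0, 2.0, 0.3])
sigma = abs(sigma)

# One entropy surrogate derived from sigma: the differential entropy
# of a Gaussian, used here purely as an illustration.
entropy = 0.5 * np.log(2 * np.pi * np.e * sigma ** 2)
```

    The fitted sigma, not the raw data spread, carries the discrimination precision; narrower psychometric functions give lower entropy.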

  13. Joint entropy for space and spatial frequency domains estimated from psychometric functions of achromatic discrimination.

    PubMed

    Silveira, Vladímir de Aquino; Souza, Givago da Silva; Gomes, Bruno Duarte; Rodrigues, Anderson Raiol; Silveira, Luiz Carlos de Lima

    2014-01-01

    We used psychometric functions to estimate the joint entropy for space discrimination and spatial frequency discrimination. Space discrimination was taken as discrimination of spatial extent. Seven subjects were tested. Gábor functions comprising unidimensional sinusoidal gratings (0.4, 2, and 10 cpd) and bidimensional Gaussian envelopes (1°) were used as reference stimuli. The experiment comprised the comparison between reference and test stimuli that differed in the grating's spatial frequency or the envelope's standard deviation. We tested 21 different envelope standard deviations around the reference standard deviation to study spatial extent discrimination and 19 different grating spatial frequencies around the reference spatial frequency to study spatial frequency discrimination. Two series of psychometric functions were obtained for 2%, 5%, 10%, and 100% stimulus contrast. The psychometric function data points for spatial extent discrimination or spatial frequency discrimination were fitted with Gaussian functions using the least-squares method, and the spatial extent and spatial frequency entropies were estimated from the standard deviations of these Gaussian functions. Then, joint entropy was obtained by multiplying the square root of the space extent entropy times the spatial frequency entropy. We compared our results to the theoretical minimum for unidimensional Gábor functions, 1/4π or 0.0796. At low and intermediate spatial frequencies and high contrasts, joint entropy reached levels below the theoretical minimum, suggesting non-linear interactions between two or more visual mechanisms. We concluded that non-linear interactions of visual pathways, such as the M and P pathways, could explain joint entropy values below the theoretical minimum at low and intermediate spatial frequencies and high contrasts. These non-linear interactions might be at work at intermediate and high contrasts at all spatial frequencies, since there was a substantial decrease in joint entropy for these stimulus conditions when contrast was raised.

  14. Automated Method of Frequency Determination in Software Metric Data Through the Use of the Multiple Signal Classification (MUSIC) Algorithm

    DTIC Science & Technology

    1998-06-26

    METHOD OF FREQUENCY DETERMINATION IN SOFTWARE METRIC DATA THROUGH THE USE OF THE MULTIPLE SIGNAL CLASSIFICATION (MUSIC) ALGORITHM ... STATEMENT OF ... graph showing the estimated power spectral density (PSD) generated by the multiple signal classification (MUSIC) algorithm from the data set used ... implemented in this module; however, it is preferred to use the Multiple Signal Classification (MUSIC) algorithm. The MUSIC algorithm is
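    A minimal numpy sketch of the MUSIC frequency estimator named in this record: build an autocorrelation matrix from overlapping snapshots, take the noise-subspace eigenvectors, and locate the peak of the pseudospectrum. The subspace dimension and test signal are illustrative choices.

```python
import numpy as np

fs = 100.0
n = 400
t = np.arange(n) / fs
rng = np.random.default_rng(3)
x = np.sin(2 * np.pi * 12.5 * t) + 0.5 * rng.standard_normal(n)

# Autocorrelation matrix from overlapping snapshots of length m.
m = 30                                   # subspace dimension (assumed)
snaps = np.lib.stride_tricks.sliding_window_view(x, m)
R = snaps.T @ snaps / snaps.shape[0]

# Eigendecomposition; one real sinusoid spans a 2-D signal subspace,
# so the noise subspace is the m-2 smallest-eigenvalue eigenvectors.
vals, vecs = np.linalg.eigh(R)           # eigenvalues in ascending order
noise = vecs[:, :-2]

# MUSIC pseudospectrum: reciprocal of the steering vector's projection
# onto the noise subspace; peaks mark the sinusoid frequencies.
freqs = np.linspace(0, fs / 2, 2000)
steer = np.exp(-2j * np.pi * np.outer(freqs / fs, np.arange(m)))
proj = np.linalg.norm(steer @ noise, axis=1) ** 2
pseudo = 1.0 / proj
f_est = freqs[np.argmax(pseudo)]
```

    Unlike periodogram methods, MUSIC's resolution is not limited by the record length, which is why it suits the short, noisy series typical of software metric data.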

  15. Detection and imaging of moving objects with SAR by a joint space-time-frequency processing

    NASA Astrophysics Data System (ADS)

    Barbarossa, Sergio; Farina, Alfonso

    This paper proposes a joint space-time-frequency processing scheme for the detection and imaging of moving targets by Synthetic Aperture Radars (SAR). The method is based on the availability of an array antenna. The signals received by the array elements are combined, in a space-time processor, to cancel the clutter. Then, they are analyzed in the time-frequency domain, by computing their Wigner-Ville Distribution (WVD), in order to estimate the instantaneous frequency used for the subsequent phase compensation necessary to produce a high-resolution image.

  16. Multiple Input Design for Real-Time Parameter Estimation in the Frequency Domain

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene

    2003-01-01

    A method for designing multiple inputs for real-time dynamic system identification in the frequency domain was developed and demonstrated. The designed inputs are mutually orthogonal in both the time and frequency domains, with reduced peak factors to provide good information content for relatively small amplitude excursions. The inputs are designed for selected frequency ranges, and therefore do not require a priori models. The experiment design approach was applied to identify linear dynamic models for the F-15 ACTIVE aircraft, which has multiple control effectors.
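    The key property, mutual orthogonality obtained by assigning disjoint frequency lines to each input, with Schroeder-type phases to reduce the peak factor, can be sketched as follows. The harmonic assignments and record length are illustrative, not the F-15 ACTIVE design.

```python
import numpy as np

fs = 50.0
T = 10.0
t = np.arange(int(fs * T)) / fs
df = 1.0 / T                                    # fundamental line spacing

# Disjoint harmonic lines per input guarantee orthogonality over the
# record length; the specific assignments are illustrative.
lines = {0: [5, 8, 11, 14], 1: [6, 9, 12, 15]}

def multisine(ks):
    # Schroeder-type phases spread the energy in time, reducing peak factor.
    phases = [np.pi * j * (j - 1) / len(ks) for j in range(len(ks))]
    return sum(np.cos(2 * np.pi * k * df * t + p) for k, p in zip(ks, phases))

u0, u1 = multisine(lines[0]), multisine(lines[1])

# Orthogonality over an integer number of periods: inner product ~ 0.
dot = float(np.dot(u0, u1)) / t.size
```

    Because each input occupies its own spectral lines, the responses to simultaneous excitation can be separated in the frequency domain without a priori models.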

  17. Time-frequency and advanced frequency estimation techniques for the investigation of bat echolocation calls.

    PubMed

    Kopsinis, Yannis; Aboutanios, Elias; Waters, Dean A; McLaughlin, Steve

    2010-02-01

    In this paper, techniques for time-frequency analysis and investigation of bat echolocation calls are studied. Particularly, enhanced resolution techniques are developed and/or used in this specific context for the first time. When compared to traditional time-frequency representation methods, the proposed techniques are more capable of showing previously unseen features in the structure of bat echolocation calls. It should be emphasized that although the study is focused on bat echolocation recordings, the results are more general and applicable to many other types of signal.

  18. Techniques to improve the accuracy of noise power spectrum measurements in digital x-ray imaging based on background trends removal.

    PubMed

    Zhou, Zhongxing; Gao, Feng; Zhao, Huijuan; Zhang, Lixin

    2011-03-01

    Noise characterization through estimation of the noise power spectrum (NPS) is a central component of the evaluation of digital x-ray systems. Extensive work has been conducted to achieve accurate and precise measurement of NPS. One approach to improve the accuracy of the NPS measurement is to reduce the statistical variance of the NPS results by involving more data samples. However, this method is based on the assumption that the noise in a radiographic image arises from stochastic processes. In practical data, artifacts always superimpose on the stochastic noise as low-frequency background trends and prevent accurate NPS estimation. The purpose of this study was to investigate an appropriate background detrending technique to improve the accuracy of NPS estimation for digital x-ray systems. To find the optimal background detrending technique for NPS estimation, four methods for artifact removal were quantitatively studied and compared: (1) Subtraction of a low-pass-filtered version of the image, (2) subtraction of a 2-D first-order fit to the image, (3) subtraction of a 2-D second-order polynomial fit to the image, and (4) subtracting two uniform exposure images. In addition, background trend removal was separately applied within the original region of interest or its partitioned sub-blocks for all four methods. The performance of the background detrending techniques was compared according to the statistical variance of the NPS results and low-frequency systematic rise suppression. Among the four methods, subtraction of a 2-D second-order polynomial fit to the image was most effective in low-frequency systematic rise suppression and variance reduction for the NPS estimate on the authors' digital x-ray system. Subtraction of a low-pass-filtered version of the image led to increased NPS variance at low-frequency components because of the side-lobe effects of the frequency response of the boxcar filter. 
Subtracting two uniform exposure images yielded the worst result for the smoothness of the NPS curve, although it was effective in low-frequency systematic rise suppression. Subtraction of a 2-D first-order fit to the image was also effective for background detrending, but less so than subtraction of a 2-D second-order polynomial fit on the authors' digital x-ray system. As a result of this study, the authors verified that it is necessary and feasible to obtain better NPS estimates by appropriate background trend removal. Subtraction of a 2-D second-order polynomial fit to the image was the most appropriate technique for background detrending when processing time is not a concern.
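    The method the abstract identifies as best, subtracting a 2-D second-order polynomial fit before computing the NPS, amounts to a least-squares fit over the six quadratic monomials. A sketch on a simulated flat-field (the noise and trend parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
ny, nx = 128, 128
yy, xx = np.mgrid[0:ny, 0:nx].astype(float)

# Simulated flat-field: white noise plus a smooth low-frequency trend.
noise = rng.standard_normal((ny, nx))
trend = 5e-4 * (xx - 64) ** 2 + 3e-4 * (yy - 64) ** 2
img = noise + trend

# Fit and subtract a 2-D second-order polynomial (method 3 in the abstract):
# least squares over the monomials 1, x, y, x^2, xy, y^2.
A = np.column_stack([np.ones(img.size), xx.ravel(), yy.ravel(),
                     xx.ravel() ** 2, xx.ravel() * yy.ravel(), yy.ravel() ** 2])
coef, *_ = np.linalg.lstsq(A, img.ravel(), rcond=None)
detrended = img - (A @ coef).reshape(ny, nx)

# 2-D NPS of the detrended ROI (pixel pitch taken as 1 for simplicity;
# a full NPS would average over many ROIs and apply pitch scaling).
nps = np.abs(np.fft.fft2(detrended)) ** 2 / (ny * nx)
```

    Removing the fitted surface suppresses the low-frequency rise in the NPS while leaving the stochastic noise spectrum essentially untouched.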

  19. Evaluation of Simultaneous Multisine Excitation of the Joined Wing SensorCraft Aeroelastic Wind Tunnel Model

    NASA Technical Reports Server (NTRS)

    Heeg, Jennifer; Morelli, Eugene A.

    2011-01-01

    Multiple mutually orthogonal signals comprise excitation data sets for aeroservoelastic system identification. A multisine signal is a sum of harmonic sinusoid components. A set of these signals is made orthogonal by distribution of the frequency content such that each signal contains unique frequencies. This research extends the range of application of an excitation method developed for stability and control flight testing to aeroservoelastic modeling from wind tunnel testing. Wind tunnel data for the Joined Wing SensorCraft model validates this method, demonstrating that these signals applied simultaneously reproduce the frequency response estimates achieved from one-at-a-time excitation.

  20. Biased estimation of forest log characteristics using intersect diameters

    Treesearch

    Lisa J. Bate; Torolf R. Torgersen; Michael J. Wisdom; Edward O. Garton

    2009-01-01

    Logs are an important structural feature of forest ecosystems, and their abundance affects many resources and forest processes, including fire regimes, soil productivity, silviculture, carbon cycling, and wildlife habitat. Consequently, logs are often sampled to estimate their frequency, percent cover, volume, and weight. The line-intersect method (LIM) is one of the...

  1. Frequency-dependent Lg-wave attenuation in northern Morocco

    NASA Astrophysics Data System (ADS)

    Noriega, Raquel; Ugalde, Arantza; Villaseñor, Antonio; Harnafi, Mimoun

    2015-11-01

    Frequency-dependent attenuation (Q⁻¹) in the crust of northern Morocco is estimated from Lg-wave spectral amplitude measurements every quarter octave in the frequency band 0.8 to 8 Hz. This study takes advantage of the improved broadband data coverage in the region provided by the deployment of the IberArray seismic network. Earthquake data consist of 71 crustal events with magnitudes 4 ≤ mb ≤ 5.5 recorded on 110 permanent and temporary seismic stations between January 2008 and December 2013 with hypocentral distances between 100 and 900 km. 1274 high-quality Lg waveforms provide dense path coverage of northern Morocco, crossing a region with a complex structure and heterogeneous tectonic setting as a result of continuous interactions between the African and Eurasian plates. We use two different methods: the coda normalization (CN) analysis, which allows removal of the source and site effects from the Lg spectra, and the spectral amplitude decay (SAD) method, which simultaneously inverts for source, site, and path attenuation terms. The CN and SAD methods return similar results, indicating that the Lg Q models are robust to differences in the methodologies. Larger errors and no significant frequency dependence are observed for frequencies lower than 1.5 Hz. For distances up to 400 km and the frequency band 1.5 ≤ f (Hz) ≤ 4.5, the model functions Q(f) = (529 −22/+23)(f/1.5)^(0.23±0.06) and Q(f) = (457 −7/+7)(f/1.5)^(0.44±0.02) are obtained using the CN and SAD methods, respectively. A change in the frequency dependence is observed above 4.5 Hz for both methods, which may be related to the influence of the Sn energy on the Lg window. The frequency-dependent Q⁻¹ estimates represent an average attenuation beneath a broad region including the Rif and Tell mountains, the Moroccan and Algerian mesetas, the Atlas Mountains, and the Sahara Platform structural domains, and correlate well with areas of moderate seismicity where intermediate Q values have been obtained.
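    Fitting the power-law form Q(f) = Q0 (f/f0)^n reduces to linear least squares in log-log coordinates. A sketch using synthetic data generated from the SAD-method parameters quoted above (noise-free, so the fit recovers them exactly):

```python
import numpy as np

# Synthetic attenuation data following Q(f) = Q0 * (f/f0)^n,
# with Q0 = 457 and n = 0.44 as in the SAD result quoted in the abstract.
f0, Q0, n = 1.5, 457.0, 0.44
f = np.array([1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5])
Q = Q0 * (f / f0) ** n

# Log-log least squares: slope = exponent n, intercept = log Q0.
slope, intercept = np.polyfit(np.log(f / f0), np.log(Q), 1)
Q0_est, n_est = np.exp(intercept), slope
```

    With real spectral-amplitude measurements, the scatter of log Q about the fitted line supplies the asymmetric uncertainty bounds quoted in the abstract.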

  2. Volterra series truncation and kernel estimation of nonlinear systems in the frequency domain

    NASA Astrophysics Data System (ADS)

    Zhang, B.; Billings, S. A.

    2017-02-01

    The Volterra series model is a direct generalisation of the linear convolution integral and is capable of displaying the intrinsic features of a nonlinear system in a simple and easy-to-apply way. Nonlinear system analysis using Volterra series is normally based on the analysis of its frequency-domain kernels and a truncated description. But the estimation of Volterra kernels and the truncation of the Volterra series are coupled with each other. In this paper, a novel complex-valued orthogonal least squares algorithm is developed. The new algorithm provides a powerful tool to determine which terms should be included in the Volterra series expansion and to estimate the kernels, and thus solves the two problems together. The estimated results are compared with those determined using the analytical expressions of the kernels to validate the method. To further evaluate the effectiveness of the method, the physical parameters of the system are also extracted from the measured kernels. Simulation studies demonstrate that the new approach not only can truncate the Volterra series expansion and estimate the kernels of a weakly nonlinear system, but also can indicate the applicability of Volterra series analysis in a severely nonlinear system case.

  3. Confidence intervals for expected moments algorithm flood quantile estimates

    USGS Publications Warehouse

    Cohn, Timothy A.; Lane, William L.; Stedinger, Jery R.

    2001-01-01

    Historical and paleoflood information can substantially improve flood frequency estimates if appropriate statistical procedures are properly applied. However, the Federal guidelines for flood frequency analysis, set forth in Bulletin 17B, rely on an inefficient “weighting” procedure that fails to take advantage of historical and paleoflood information. This has led researchers to propose several more efficient alternatives including the Expected Moments Algorithm (EMA), which is attractive because it retains Bulletin 17B's statistical structure (method of moments with the Log Pearson Type 3 distribution) and thus can be easily integrated into flood analyses employing the rest of the Bulletin 17B approach. The practical utility of EMA, however, has been limited because no closed‐form method has been available for quantifying the uncertainty of EMA‐based flood quantile estimates. This paper addresses that concern by providing analytical expressions for the asymptotic variance of EMA flood‐quantile estimators and confidence intervals for flood quantile estimates. Monte Carlo simulations demonstrate the properties of such confidence intervals for sites where a 25‐ to 100‐year streamgage record is augmented by 50 to 150 years of historical information. The experiments show that the confidence intervals, though not exact, should be acceptable for most purposes.

  4. Random Decrement Method and Modeling H/V Spectral Ratios: An Application for Soft Shallow Layers Characterization

    NASA Astrophysics Data System (ADS)

    Song, H.; Huerta-Lopez, C. I.; Martinez-Cruzado, J. A.; Rodriguez-Lozoya, H. E.; Espinoza-Barreras, F.

    2009-05-01

    Results of an ongoing study to estimate the ground response upon weak and moderate earthquake excitations are presented. A reliable site characterization in terms of soil properties and sub-soil layer configuration is required for a trustworthy estimation of the ground response upon dynamic loads. This study can be described by the following four steps: (1) Ambient noise measurements were collected at the study site, where a bridge was under construction between the cities of Tijuana and Ensenada in Mexico. The time series were collected using a six-channel recorder with a 16-bit ADC converter within a maximum voltage range of ±2.5 V; the recorder has optional settings for Butterworth/Bessel filters, gain, and sampling rate. The sensors were three-orthogonal-component (X, Y, Z) accelerometers with a sensitivity of 20 V/g, flat frequency response between DC and 200 Hz, and a total full range of ±0.25 g. (2) Experimental H/V spectral ratios were computed to estimate the fundamental vibration frequency at the site. (3) Using the time-domain experimental H/V spectral ratios as well as the original recorded time series, the random decrement method was applied to estimate the fundamental frequency and damping of the site (system). (4) Finally, the theoretical H/V spectral ratios were obtained by means of the stiffness-matrix wave propagation method. The interpretation of the obtained results was then compared with a geotechnical study available at the site.
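    Step (3), the random decrement method, averages signal segments that follow a trigger condition so that the random excitation cancels and a free-decay-like signature remains. A minimal sketch on a synthetic noisy sinusoid; the trigger level, segment length, and signal parameters are illustrative.

```python
import numpy as np

fs = 200.0
t = np.arange(int(20 * fs)) / fs
rng = np.random.default_rng(5)

# Synthetic ambient response: a 5 Hz sinusoidal component buried in noise.
fn = 5.0
x = np.sin(2 * np.pi * fn * t) + 0.8 * rng.standard_normal(t.size)

# Random decrement: average all segments starting at an up-crossing of the
# trigger level; the random part averages out, leaving a periodic signature
# whose frequency (and decay) characterize the system.
trig = x.std()
seg_len = int(2 * fs)
starts = np.where((x[:-1] < trig) & (x[1:] >= trig))[0] + 1
starts = starts[starts + seg_len <= x.size]
rd = np.mean([x[s:s + seg_len] for s in starts], axis=0)

# Estimate the fundamental frequency from the RD signature (DC bin excluded).
spec = np.abs(np.fft.rfft(rd))
freq_axis = np.fft.rfftfreq(seg_len, d=1 / fs)
f_est = freq_axis[1:][np.argmax(spec[1:])]
```

    On real data the decay rate of the RD signature additionally yields the damping ratio, e.g. via the logarithmic decrement of successive peaks.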

  5. Reliable clarity automatic-evaluation method for optical remote sensing images

    NASA Astrophysics Data System (ADS)

    Qin, Bangyong; Shang, Ren; Li, Shengyang; Hei, Baoqin; Liu, Zhiwen

    2015-10-01

    Image clarity, which reflects the degree of sharpness at the edges of objects in an image, is an important quality-evaluation index for optical remote sensing images. Scholars at home and abroad have done substantial work on the estimation of image clarity. At present, common clarity-estimation methods for digital images mainly include frequency-domain function methods, statistical parametric methods, gradient function methods, and edge-acutance methods. The frequency-domain function method is an accurate clarity measure, but its calculation process is complicated and cannot be carried out automatically. Statistical parametric methods and gradient function methods are both sensitive to image clarity, but their results are easily affected by the complexity of the image content. The edge-acutance method is an effective approach to clarity estimation, but it requires picking out the edges manually. Due to these limits in accuracy, consistency, or automation, the existing methods are not applicable to quality evaluation of optical remote sensing images. In this article, a new clarity-evaluation method based on the principle of the edge-acutance algorithm is proposed. In the new method, an edge-detection algorithm and a gradient-search algorithm are adopted to automatically locate object edges in images, and the calculation algorithm for edge sharpness is improved. The new method has been tested on several groups of optical remote sensing images. Compared with the existing automatic evaluation methods, the new method performs better in both accuracy and consistency. Thus, the new method is an effective clarity-evaluation method for optical remote sensing images.
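    A gradient-magnitude sharpness score in the spirit of the edge-acutance principle (not the authors' improved algorithm) can be sketched as follows; the step-edge test images are illustrative.

```python
import numpy as np

def sharpness(img):
    """Mean squared central-difference gradient magnitude, a simple
    gradient-based sharpness score (stand-in for edge acutance)."""
    img = img.astype(float)
    gx = (np.roll(img, -1, axis=1) - np.roll(img, 1, axis=1)) / 2
    gy = (np.roll(img, -1, axis=0) - np.roll(img, 1, axis=0)) / 2
    # Crop one pixel to avoid the wrap-around border of np.roll.
    return float(np.mean(gx[1:-1, 1:-1] ** 2 + gy[1:-1, 1:-1] ** 2))

# A sharp step edge scores higher than the same edge after blurring.
sharp = np.zeros((32, 32))
sharp[:, 16:] = 1.0
blurred = sharp.copy()
for _ in range(5):                      # crude box blur along x
    blurred = (np.roll(blurred, 1, axis=1) + blurred
               + np.roll(blurred, -1, axis=1)) / 3

score_sharp, score_blur = sharpness(sharp), sharpness(blurred)
```

    Because spreading an edge over more pixels lowers the squared-gradient sum, the score decreases monotonically with blur, which is the behaviour a clarity index needs.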

  6. Astronomical component estimation (ACE v.1) by time-variant sinusoidal modeling

    NASA Astrophysics Data System (ADS)

    Sinnesael, Matthias; Zivanovic, Miroslav; De Vleeschouwer, David; Claeys, Philippe; Schoukens, Johan

    2016-09-01

    Accurately deciphering periodic variations in paleoclimate proxy signals is essential for cyclostratigraphy. Classical spectral analysis often relies on methods based on (fast) Fourier transformation. This technique has no unique solution separating variations in amplitude and frequency. This characteristic can make it difficult to correctly interpret a proxy's power spectrum or to accurately evaluate simultaneous changes in amplitude and frequency in evolutionary analyses. This drawback is circumvented by using a polynomial approach to estimate instantaneous amplitude and frequency in orbital components. This approach was proven useful to characterize audio signals (music and speech), which are non-stationary in nature. Paleoclimate proxy signals and audio signals share similar dynamics; the only difference is the frequency relationship between the different components. A harmonic-frequency relationship exists in audio signals, whereas this relation is non-harmonic in paleoclimate signals. However, this difference is irrelevant for the problem of separating simultaneous changes in amplitude and frequency. Using an approach with overlapping analysis frames, the model (Astronomical Component Estimation, version 1: ACE v.1) captures time variations of an orbital component by modulating a stationary sinusoid centered at its mean frequency, with a single polynomial. Hence, the parameters that determine the model are the mean frequency of the orbital component and the polynomial coefficients. The first parameter depends on geologic interpretations, whereas the latter are estimated by means of linear least-squares. As output, the model provides the orbital component waveform, either in the depth or time domain. Uncertainty analyses of the model estimates are performed using Monte Carlo simulations. Furthermore, it allows for a unique decomposition of the signal into its instantaneous amplitude and frequency. 
Frequency modulation patterns reconstruct changes in accumulation rate, whereas amplitude modulation identifies eccentricity-modulated precession. The functioning of the time-variant sinusoidal model is illustrated and validated using a synthetic insolation signal. The new modeling approach is tested on two case studies: (1) a Pliocene-Pleistocene benthic δ18O record from Ocean Drilling Program (ODP) Site 846 and (2) a Danian magnetic susceptibility record from the Contessa Highway section, Gubbio, Italy.
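    The central idea, a stationary sinusoid at a fixed mean frequency modulated by a polynomial whose coefficients are found by linear least squares, can be sketched as follows. The mean frequency, polynomial degree, and synthetic component are illustrative; this is not the ACE v.1 code itself.

```python
import numpy as np

fs = 100.0
t = np.arange(1000) / fs
f0 = 2.0                                 # assumed mean frequency of the component

# Synthetic orbital-like component with slowly varying amplitude.
x = (1.0 + 0.3 * t / t[-1]) * np.cos(2 * np.pi * f0 * t)

# Model: a stationary sinusoid at f0 modulated by a polynomial (degree 2).
# With f0 fixed, the polynomial coefficients enter linearly, so ordinary
# linear least squares suffices, mirroring the ACE approach.
deg = 2
basis_c = [t ** j * np.cos(2 * np.pi * f0 * t) for j in range(deg + 1)]
basis_s = [t ** j * np.sin(2 * np.pi * f0 * t) for j in range(deg + 1)]
A = np.column_stack(basis_c + basis_s)
coef, *_ = np.linalg.lstsq(A, x, rcond=None)
recon = A @ coef

rms_err = float(np.sqrt(np.mean((x - recon) ** 2)))
```

    The cosine/sine pair lets the polynomial capture both amplitude and small frequency (phase) modulation around f0, which is how the model separates the two in evolutionary analyses.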

  7. Method of estimating flood-frequency parameters for streams in Idaho

    USGS Publications Warehouse

    Kjelstrom, L.C.; Moffatt, R.L.

    1981-01-01

    Skew coefficients for the log-Pearson type III distribution are generalized on the basis of some similarity of floods in the Snake River basin and other parts of Idaho. Generalized skew coefficients aid in shaping flood-frequency curves because skew coefficients computed from gaging stations having relatively short periods of peak flow records can be unreliable. Generalized skew coefficients can be obtained for a gaging station from one of three maps in this report. The map to be used depends on whether (1) snowmelt floods are dominant (generally when more than 20 percent of the drainage area is above 6,000 feet altitude), (2) rainstorm floods are dominant (generally when the mean altitude is less than 3,000 feet), or (3) either snowmelt or rainstorm floods can be the annual maximum discharge. For the latter case, frequency curves constructed using separate arrays of each type of runoff can be combined into one curve, which, for some stations, is significantly different from the frequency curve constructed using only annual maximum discharges. For 269 gaging stations, flood-frequency curves that include the generalized skew coefficients in the computation of the log-Pearson type III equation tend to fit the data better than previous analyses. Frequency curves for ungaged sites can be derived by estimating three statistics of the log-Pearson type III distribution. The mean and standard deviation of logarithms of annual maximum discharges are estimated by regression equations that use basin characteristics as independent variables. Skew coefficient estimates are the generalized skews. The log-Pearson type III equation is then applied with the three estimated statistics to compute the discharge at selected exceedance probabilities. Standard errors at the 2-percent exceedance probability range from 41 to 90 percent. (USGS)
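    The final step, computing a quantile from the three estimated log-Pearson Type III statistics, can be sketched with scipy's pearson3 distribution supplying the frequency factor. The mean, standard deviation, and skew below are illustrative values, not results from the Idaho report.

```python
from scipy.stats import pearson3

# Log-Pearson Type III quantile from the three statistics estimated for
# ungaged sites: mean, standard deviation, and generalized skew of the
# log10 annual peak discharges (illustrative values).
mean_log, std_log, skew = 3.0, 0.25, -0.2

# 2-percent exceedance probability (the 50-year flood): the standardized
# Pearson III deviate serves as the frequency factor K.
k = pearson3.ppf(1 - 0.02, skew)
q50 = 10 ** (mean_log + k * std_log)
```

    This is the same K-factor computation tabulated in flood-frequency guidelines; the generalized skew replaces the unreliable station skew in the K lookup.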

  8. Estimation of flood-frequency characteristics of small urban streams in North Carolina

    USGS Publications Warehouse

    Robbins, J.C.; Pope, B.F.

    1996-01-01

    A statewide study was conducted to develop methods for estimating the magnitude and frequency of floods of small urban streams in North Carolina. This type of information is critical in the design of bridges, culverts and water-control structures, establishment of flood-insurance rates and flood-plain regulation, and for other uses by urban planners and engineers. Concurrent records of rainfall and runoff data collected in small urban basins were used to calibrate rainfall-runoff models. Historic rainfall records were used with the calibrated models to synthesize a long-term record of annual peak discharges. The synthesized record of annual peak discharges was used in a statistical analysis to determine flood-frequency distributions. These frequency distributions were used with distributions from previous investigations to develop a database for 32 small urban basins in the Blue Ridge-Piedmont, Sand Hills, and Coastal Plain hydrologic areas. The study basins ranged in size from 0.04 to 41.0 square miles. Data describing the size and shape of the basin, level of urban development, and climate and rural flood characteristics also were included in the database. Estimation equations were developed by relating flood-frequency characteristics to basin characteristics in a generalized least-squares regression analysis. The most significant basin characteristics are drainage area, impervious area, and rural flood discharge. The model error and prediction errors for the estimating equations were less than those for the national flood-frequency equations previously reported. Resulting equations, which have prediction errors generally less than 40 percent, can be used to estimate flood-peak discharges for 2-, 5-, 10-, 25-, 50-, and 100-year recurrence intervals for small urban basins across the State assuming negligible, sustainable, in-channel detention or basin storage.
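    Estimation equations of the kind described can be illustrated with a small regression sketch. The data and the ordinary least-squares fit below are fabricated for illustration only; the study itself used generalized least-squares on 32 calibrated basins.

```python
import numpy as np

# Hypothetical basins: drainage area (mi^2), impervious area (%),
# rural 10-year discharge (cfs), observed urban 10-year discharge (cfs).
A  = np.array([0.5,  2.0,  8.0,  15.0,  41.0])
IA = np.array([20.0, 35.0, 15.0, 40.0,  25.0])
QR = np.array([120., 400., 900., 1500., 3200.])
QU = np.array([260., 950., 1400., 4100., 5200.])

# Fit log10(QU) = b0 + b1*log10(A) + b2*log10(IA) + b3*log10(QR).
X = np.column_stack([np.ones_like(A), np.log10(A), np.log10(IA), np.log10(QR)])
b, *_ = np.linalg.lstsq(X, np.log10(QU), rcond=None)

def predict_urban_peak(a, ia, qr):
    """Urban flood-peak estimate from the fitted (illustrative) equation."""
    return 10 ** (b @ np.array([1.0, np.log10(a), np.log10(ia), np.log10(qr)]))
```

    Fitting in log space gives the power-law form such regional equations usually take.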

  9. A new positive relationship between pCO2 and stomatal frequency in Quercus guyavifolia (Fagaceae): a potential proxy for palaeo-CO2 levels

    PubMed Central

    Hu, Jin-Jin; Xing, Yao-Wu; Turkington, Roy; Jacques, Frédéric M. B.; Su, Tao; Huang, Yong-Jiang; Zhou, Zhe-Kun

    2015-01-01

    Background and Aims The inverse relationship between atmospheric CO2 partial pressure (pCO2) and stomatal frequency in many species of plants has been widely used to estimate palaeoatmospheric CO2 (palaeo-CO2) levels; however, the results obtained have been quite variable. This study attempts to find a potential new proxy for palaeo-CO2 levels by analysing stomatal frequency in Quercus guyavifolia (Q. guajavifolia, Fagaceae), an extant dominant species of sclerophyllous forests in the Himalayas with abundant fossil relatives. Methods Stomatal frequency was analysed for extant samples of Q. guyavifolia collected from 17 field sites at altitudes ranging between 2493 and 4497 m. Herbarium specimens collected between 1926 and 2011 were also examined. Correlations of pCO2–stomatal frequency were determined using samples from both sources, and these were then applied to Q. preguyavaefolia fossils in order to estimate palaeo-CO2 concentrations for two late-Pliocene floras in south-western China. Key Results In contrast to the negative correlations detected for most other species that have been studied, a positive correlation between pCO2 and stomatal frequency was determined in Q. guyavifolia sampled from both extant field collections and historical herbarium specimens. Palaeo-CO2 concentrations were estimated to be approx. 180–240 ppm in the late Pliocene, which is consistent with most other previous estimates. Conclusions A new positive relationship between pCO2 and stomatal frequency in Q. guyavifolia is presented, which can be applied to the fossils closely related to this species that are widely distributed in the late-Cenozoic strata in order to estimate palaeo-CO2 concentrations. The results show that it is valid to use a positive relationship to estimate palaeo-CO2 concentrations, and the study adds to the variety of stomatal density/index relationships that are available for estimating pCO2. The physiological mechanisms underlying this positive response are unclear, however, and require further research. PMID:25681824
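    A stomatal-frequency transfer function of the kind described can be sketched as a simple linear calibration inverted for fossil samples. The numbers below are hypothetical, not the study's data; only the positive slope mirrors the reported relationship.

```python
import numpy as np

# Hypothetical calibration: pCO2 (Pa) versus stomatal density (mm^-2),
# with the positive slope reported for Q. guyavifolia.
pco2 = np.array([20.0, 24.0, 28.0, 32.0, 36.0])
sd   = np.array([310., 360., 395., 450., 500.])

slope, intercept = np.polyfit(pco2, sd, 1)   # sd ~ slope * pco2 + intercept

def estimate_pco2(fossil_sd):
    """Invert the calibration to estimate palaeo-pCO2 from the stomatal
    density measured on a fossil leaf."""
    return (fossil_sd - intercept) / slope
```

    Because the relationship is positive, a lower fossil stomatal density maps to a lower pCO2 estimate, the opposite of the usual inverse-proxy reading.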

  10. A seismic coherency method using spectral amplitudes

    NASA Astrophysics Data System (ADS)

    Sui, Jing-Kun; Zheng, Xiao-Dong; Li, Yan-Dong

    2015-09-01

    Seismic coherence is used to detect discontinuities in underground media. However, strata with steeply dipping structures often produce false low coherence estimates and thus incorrect discontinuity characterization results. It is important to eliminate or reduce the effect of dipping on coherence estimates. To solve this problem, time-domain dip scanning is typically used to improve estimation of coherence in areas with steeply dipping structures. However, the accuracy of the time-domain estimation of dip is limited by the sampling interval. In contrast, the spectral amplitude is not affected by the time delays in adjacent seismic traces caused by dipping structures. We propose a coherency algorithm that uses the spectral amplitudes of seismic traces within a predefined analysis window to construct the covariance matrix. The coherency estimate of the proposed algorithm is defined as the ratio between the dominant eigenvalue and the sum of all eigenvalues of the constructed covariance matrix. Thus, we eliminate the effect of dipping structures on coherency estimates. In addition, because different frequency bands of spectral amplitudes are used to estimate coherency, the proposed algorithm has multiscale features. Low frequencies are effective for characterizing large-scale faults, whereas high frequencies are better at characterizing small-scale faults. Applications to synthetic and real seismic data show that the proposed algorithm can eliminate the effect of dip and produce better coherence estimates than conventional coherency algorithms in areas with steeply dipping structures.
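    The eigenstructure coherency described above can be sketched in a few lines: build a covariance matrix from the spectral amplitudes of the traces in the analysis window, then take the ratio of the dominant eigenvalue to the eigenvalue sum. The synthetic test gathers below are assumptions for illustration.

```python
import numpy as np

def spectral_coherence(traces):
    """Ratio of dominant eigenvalue to eigenvalue sum of a covariance
    matrix built from spectral amplitudes (|FFT|), which are insensitive
    to the trace-to-trace time delays caused by dipping structures."""
    amps = np.abs(np.fft.rfft(traces, axis=1))  # spectral amplitudes
    cov = amps @ amps.T                         # trace-by-trace covariance
    eig = np.linalg.eigvalsh(cov)               # eigenvalues, ascending
    return eig[-1] / eig.sum()

t = np.linspace(0.0, 1.0, 256, endpoint=False)
flat    = np.vstack([np.sin(2 * np.pi * 30 * t) for _ in range(5)])
dipping = np.vstack([np.sin(2 * np.pi * 30 * (t - 0.002 * i)) for i in range(5)])
```

    A time-domain covariance would score the dipping gather below 1 because delayed traces are no longer scalar multiples of each other; the spectral-amplitude version rates both gathers as fully coherent.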

  11. Characterization of turbulence stability through the identification of multifractional Brownian motions

    NASA Astrophysics Data System (ADS)

    Lee, K. C.

    2013-02-01

    Multifractional Brownian motions have become popular as flexible models for describing real-life signals with high-frequency features in geoscience, microeconomics, and turbulence, to name a few. The time-changing Hurst exponent, which describes regularity levels depending on time measurements, and the variance, which relates to an energy level, are the two parameters that characterize multifractional Brownian motions. This research proposes a combined method for estimating the time-changing Hurst exponent and variance using the local variation of sampled signal paths. The method consists of two phases: first estimating the global variance and then accurately estimating the time-changing Hurst exponent. A simulation study demonstrates its performance in estimating the parameters. The proposed method is applied to the characterization of atmospheric stability, in which descriptive statistics from the estimated time-changing Hurst exponent and variance classify stable atmospheric flows from unstable ones.
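    The idea of estimating regularity from the local variation of a sampled path can be sketched with a two-lag moment estimator. This is a simplified global version (the paper estimates a time-changing exponent), and the Brownian test path is an assumption.

```python
import numpy as np

def hurst_estimate(path, lag1=1, lag2=4):
    """Global Hurst-exponent estimate from the local variation of a
    sampled path, using E|X(t+d) - X(t)|^2 ~ d^(2H) at two lags."""
    d1 = path[lag1:] - path[:-lag1]
    d2 = path[lag2:] - path[:-lag2]
    v1, v2 = np.mean(d1 ** 2), np.mean(d2 ** 2)
    return 0.5 * np.log(v2 / v1) / np.log(lag2 / lag1)

rng = np.random.default_rng(0)
bm = np.cumsum(rng.standard_normal(10000))   # ordinary Brownian motion, H = 0.5
h = hurst_estimate(bm)
```

    Applying the same estimator in a sliding window would give the time-changing exponent the paper targets.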

  12. State Estimation Using Dependent Evidence Fusion: Application to Acoustic Resonance-Based Liquid Level Measurement.

    PubMed

    Xu, Xiaobin; Li, Zhenghui; Li, Guo; Zhou, Zhe

    2017-04-21

    Estimating the state of a dynamic system from noisy sensor measurements is a common problem in sensor methods and applications. Most state estimation methods assume that measurement noise and state perturbations can be modeled as random variables with known statistical properties. However, in some practical applications, engineers can only obtain the range of the noises rather than their precise statistical distributions. Hence, within the framework of Dempster-Shafer (DS) evidence theory, a novel state estimation method is presented that fuses dependent evidence generated from the state equation, the observation equation, and the actual observations of the system states under bounded noises. It can be implemented iteratively to provide state estimates calculated from the fusion results at every time step. Finally, the proposed method is applied to a low-frequency acoustic resonance level gauge to obtain high-accuracy measurement results.

  13. Impact and quantification of the sources of error in DNA pooling designs.

    PubMed

    Jawaid, A; Sham, P

    2009-01-01

    The analysis of genome-wide variation offers the possibility of unravelling the genes involved in the pathogenesis of disease. Genome-wide association studies are also particularly useful for identifying and validating targets for therapeutic intervention as well as for detecting markers for drug efficacy and side effects. The cost of such large-scale genetic association studies may be reduced substantially by the analysis of pooled DNA from multiple individuals. However, experimental errors inherent in pooling studies lead to a potential increase in the false positive rate and a loss in power compared to individual genotyping. Here we quantify various sources of experimental error using empirical data from typical pooling experiments and corresponding individual genotyping counts using two statistical methods. We provide analytical formulas for calculating these different errors in the absence of complete information, such as replicate pool formation, and for adjusting for the errors in the statistical analysis. We demonstrate that DNA pooling has the potential of estimating allele frequencies accurately, and adjusting the pooled allele frequency estimates for differential allelic amplification considerably improves accuracy. Estimates of the components of error show that differential allelic amplification is the most important contributor to the error variance in absolute allele frequency estimation, followed by allele frequency measurement and pool formation errors. Our results emphasise the importance of minimising experimental errors and obtaining correct error estimates in genetic association studies.
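    The correction for differential allelic amplification can be made concrete with a small sketch. Here the correction factor k is taken as the mean allele-signal ratio observed in known heterozygotes; the formula and the numbers are an illustrative sketch, not the paper's exact estimator.

```python
def pooled_allele_freq(signal_a, signal_b, k=1.0):
    """Pooled allele-frequency estimate corrected for differential
    allelic amplification; k is the mean a/b signal ratio measured in
    known heterozygotes (k = 1 means both alleles amplify equally)."""
    return signal_a / (signal_a + k * signal_b)

# Uncorrected, a 1.5-fold over-amplification of allele A inflates the estimate:
raw       = pooled_allele_freq(60.0, 40.0)         # biased towards A
corrected = pooled_allele_freq(60.0, 40.0, k=1.5)  # true frequency 0.5
```

    The corrected call recovers 0.5 because dividing the B signal's weight by the heterozygote ratio undoes the amplification bias.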

  14. Induction machine bearing faults detection based on a multi-dimensional MUSIC algorithm and maximum likelihood estimation.

    PubMed

    Elbouchikhi, Elhoussin; Choqueuse, Vincent; Benbouzid, Mohamed

    2016-07-01

    Condition monitoring of electric drives is of paramount importance since it contributes to enhancing system reliability and availability. Moreover, knowledge about fault mode behavior is extremely important in order to improve system protection and fault-tolerant control. Fault detection and diagnosis in squirrel cage induction machines based on motor current signature analysis (MCSA) has been widely investigated. Several high-resolution spectral estimation techniques have been developed and used to detect induction machine abnormal operating conditions. This paper focuses on the application of MCSA for the detection of abnormal mechanical conditions that may lead to induction machine failure. In fact, this paper is devoted to the detection of single-point defects in bearings based on parametric spectral estimation. A multi-dimensional MUSIC (MD MUSIC) algorithm has been developed for bearing fault detection based on the bearing fault characteristic frequencies. This method has been used to estimate the fundamental frequency and the fault-related frequencies. Then, an amplitude estimator of the fault characteristic frequencies has been proposed, and a fault indicator has been derived for fault severity measurement. The proposed bearing fault detection approach is assessed using simulated stator current data, issued from a coupled electromagnetic circuits approach with air-gap eccentricity emulating bearing faults. Then, experimental data are used for validation purposes. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  15. Ultrasound shear wave simulation based on nonlinear wave propagation and Wigner-Ville Distribution analysis

    NASA Astrophysics Data System (ADS)

    Bidari, Pooya Sobhe; Alirezaie, Javad; Tavakkoli, Jahan

    2017-03-01

    This paper presents a method for modeling and simulating shear wave generation from a nonlinear Acoustic Radiation Force Impulse (ARFI), modeled as a distributed force applied at the focal region of a HIFU transducer radiating in the nonlinear regime. The shear wave propagation is simulated by solving Navier's equation with the distributed nonlinear ARFI as the source of the shear wave. Then, the Wigner-Ville Distribution (WVD), a time-frequency analysis method, is used to detect the shear wave at different local points in the region of interest. The WVD yields an estimate of the shear wave's time of arrival, its mean frequency, and the local attenuation, which can be utilized to estimate the medium's shear modulus and shear viscosity using the Voigt model.

  16. The application of the statistical theory of extreme values to gust-load problems

    NASA Technical Reports Server (NTRS)

    Press, Harry

    1950-01-01

    An analysis is presented which indicates that the statistical theory of extreme values is applicable to the problems of predicting the frequency of encountering the larger gust loads and gust velocities, both for specific test conditions and for commercial transport operations. The extreme-value theory provides an analytic form for the distributions of maximum values of gust load and velocity. Methods of fitting the distribution are given along with a method of estimating the reliability of the predictions. The theory of extreme values is applied to available load data from commercial transport operations. The results indicate that the estimates of the frequency of encountering the larger loads are more consistent with the data and more reliable than those obtained in previous analyses. (author)
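    The extreme-value fitting described can be sketched with a Gumbel (Type I extreme value) fit to annual maxima; the synthetic data and distribution parameters below are assumptions for illustration.

```python
import numpy as np
from scipy.stats import gumbel_r

rng = np.random.default_rng(1)
# Hypothetical annual-maximum gust loads (arbitrary units).
annual_maxima = gumbel_r.rvs(loc=2.0, scale=0.5, size=200, random_state=rng)

# Fit the extreme-value distribution and read off a rare-event quantile:
loc, scale = gumbel_r.fit(annual_maxima)
# Load expected to be exceeded on average once in 50 periods.
q50 = gumbel_r.ppf(1.0 - 1.0 / 50.0, loc, scale)
```

    The same quantile read from the empirical data alone would be far noisier; the analytic form is what makes the rare-load extrapolation stable.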

  17. Fluid-structure interaction in fast breeder reactors

    NASA Astrophysics Data System (ADS)

    Mitra, A. A.; Manik, D. N.; Chellapandi, P. A.

    2004-05-01

    A finite element model is proposed for the seismic analysis of a scaled-down model of a fast breeder reactor (FBR) main vessel. The reactor vessel, which is a large shell structure with a relatively thin wall, contains a large volume of sodium coolant. Therefore, fluid-structure interaction effects must be taken into account in the seismic design. As part of studying fluid-structure interaction, the fundamental frequency of vibration of a circular cylindrical shell partially filled with liquid has been estimated using Rayleigh's method. The bulging and sloshing frequencies of the first four modes of the aforementioned system have been estimated using the Rayleigh-Ritz method. The finite element formulation of the axisymmetric fluid element with the Fourier option (required due to seismic loading) is also presented.

  18. Water-Tree Modelling and Detection for Underground Cables

    NASA Astrophysics Data System (ADS)

    Chen, Qi

    In recent years, aging infrastructure has become a major concern for the power industry. Since its inception in the early 20th century, the electrical system has been the cornerstone of an industrial society. Stable and uninterrupted delivery of electrical power is now a base necessity for the modern world. As the times march on, however, the electrical infrastructure ages and there is the inevitable need to renew and replace the existing system. Unfortunately, due to time and financial constraints, many electrical systems today are forced to operate beyond their original design and power utilities must find ways to prolong the lifespan of older equipment. Thus, the concept of preventative maintenance arises. Preventative maintenance allows old equipment to operate longer and at better efficiency, but in order to implement preventative maintenance, the operators must know minute details of the electrical system, especially some of the harder-to-assess issues such as water-treeing. Water-tree induced insulation degradation is a problem typically associated with older cable systems. It is a very high impedance phenomenon and it is difficult to detect using traditional methods such as Tan-Delta or Partial Discharge. The proposed dissertation studies water-tree development in underground cables, potential methods to detect water-tree location and water-tree severity estimation. The dissertation begins by developing mathematical models of water-tree using finite element analysis. The method focuses on surface-originated vented trees, the most prominent type of water-tree fault in the field. Using the standard operation parameters of North American electrical systems, the water-tree boundary conditions are defined. By applying the finite element analysis technique, the complex water-tree structure is broken down into homogeneous components. The result is a generalized representation of water-tree capacitance at different stages of development.
    The result from the finite element analysis is used to model water-tree in a large system. Both empirical measurements and the mathematical model show that the impedance of early-stage water-tree is extremely large. As a result, traditional detection methods such as Tan-Delta or Partial Discharge are not effective due to the excessively high accuracy requirement. A high-frequency pulse detection method is developed instead. The water-tree impedance is capacitive in nature and it can be reduced to a manageable level by high-frequency inputs. The method is able to determine the location of early-stage water-tree in long-distance cables using economically feasible equipment. A pattern recognition method is developed to estimate the severity of water-tree using its pulse response from the high-frequency test method. The early-warning system for water-tree appearance is a tool developed to assist the practical implementation of the high-frequency pulse detection method. Although the equipment used by the detection method is economically feasible, it is still a specialized test and not designed for constant monitoring of the system. The test also places heavy stress on the cable and it is most effective when the cable is taken offline. As a result, utilities need a method to estimate the likelihood of water-tree presence before subjecting the cable to the specialized test. The early-warning system takes advantage of naturally occurring high-frequency events in the system and uses a deviation-comparison method to estimate the probability of water-tree presence on the cable. If the likelihood is high, then the utility can use the high-frequency pulse detection method to obtain accurate results. Specific pulse response patterns can be used to calculate the capacitance of water-tree. The calculated result, however, is subject to margins of error due to limitations of the real system. There are both long-term and short-term methods to improve the accuracy.
    Improvements to the computation algorithm immediately improve the accuracy of the capacitance estimation. The probability distribution of the calculated solutions shows that refining the waveform time-step measurement yields fundamental improvements to the overall result.
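    The capacitive argument for high-frequency testing can be made concrete with a one-line impedance model; the water-tree capacitance value below is a hypothetical order of magnitude, not a measured figure.

```python
import math

def capacitive_impedance(freq_hz, cap_farads):
    """Magnitude of the impedance of an ideal capacitance, the simple
    model used here for early-stage water-tree degradation."""
    return 1.0 / (2.0 * math.pi * freq_hz * cap_farads)

# A hypothetical early-stage water-tree capacitance of ~1 pF:
z_power = capacitive_impedance(60.0, 1e-12)   # power frequency: huge impedance
z_pulse = capacitive_impedance(10e6, 1e-12)   # high-frequency pulse: manageable
```

    The impedance falls in direct proportion to frequency, which is why a 10 MHz pulse sees the fault roughly five orders of magnitude more easily than the 60 Hz power waveform does.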

  19. Uranium resource assessment by the Geological Survey; methodology and plan to update the national resource base

    USGS Publications Warehouse

    Finch, Warren Irvin; McCammon, Richard B.

    1987-01-01

    Based on the Memorandum of Understanding (MOU) of September 20, 1984, between the U.S. Geological Survey of the U.S. Department of the Interior and the Energy Information Administration (EIA) of the U.S. Department of Energy (DOE), the U.S. Geological Survey began to make estimates of the undiscovered uranium endowment of selected areas of the United States in 1985. A modified NURE (National Uranium Resource Evaluation) method will be used in place of the standard NURE method of the DOE that was used for the national assessment reported in October 1980. The modified method, here named the 'deposit-size-frequency' (DSF) method, is presented for the first time, and calculations by the two methods are compared using an illustrative example based on preliminary estimates for the first area to be evaluated under the MOU. The results demonstrate that the estimate of the endowment using the DSF method is significantly larger and more uncertain than the estimate obtained by the NURE method. We believe that the DSF method produces a more realistic estimate because the principal factor estimated in the endowment equation is disaggregated into more parts and is more closely tied to specific geologic knowledge than by the NURE method. The DSF method consists of modifying the standard NURE estimation equation, U = A x F x T x G, by replacing the factors F x T by a single factor that represents the tonnage for the total number of deposits in all size classes. Use of the DSF method requires that the size frequency of deposits in a known or control area has been established and that the relation of the size-frequency distribution of deposits to probable controlling geologic factors has been determined. Using these relations, the principal scientist (PS) first estimates the number and range of size classes and then, for each size class, estimates the lower limit, most likely value, and upper limit of the numbers of deposits in the favorable area.
    Once these probable estimates have been refined by elicitation of the PS, they are entered into the DSF equation, and the probability distribution of estimates of undiscovered uranium endowment is calculated using a slight modification of the program by Ford and McLaren (1980). The EIA study of the viability of the domestic uranium industry requires an annual appraisal of the U.S. uranium resource situation. During DOE's NURE Program, which was terminated in 1983, a thorough assessment of the Nation's resources was completed. A comprehensive reevaluation of the uranium resource base for the entire United States is not possible for each annual appraisal. A few areas are in need of future study, however, because of new developments in either scientific knowledge, industry exploration, or both. Four geologic environments have been selected for study by the U.S. Geological Survey in the next several years: (1) surficial uranium deposits throughout the conterminous United States, (2) uranium in collapse-breccia pipes in the Grand Canyon region of Arizona, (3) uranium in Tertiary sedimentary rocks of the Northern Great Plains, and (4) uranium in metamorphic rocks of the Piedmont province in the eastern States. In addition to participation in the National uranium resource assessment, the U.S. Geological Survey will take part in activities of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development and those of the International Atomic Energy Agency.
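    The DSF calculation can be sketched as a Monte Carlo over triangular distributions of deposit counts per size class, multiplied by a class tonnage and an assumed grade; every number below is illustrative, not from the report.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical size classes: (lower limit, most likely, upper limit) of
# the number of undiscovered deposits, and a tonnage per deposit (t ore).
classes = [
    ((5, 12, 25), 1e4),   # small deposits
    ((1,  4, 10), 1e5),   # medium deposits
    ((0,  1,  3), 1e6),   # large deposits
]
grade = 0.002             # assumed average ore grade (fraction U3O8)

n_trials = 10000
endow = np.zeros(n_trials)
for (lo, mode, hi), tons in classes:
    # Draw deposit counts from the elicited triangular distribution.
    endow += rng.triangular(lo, mode, hi, n_trials) * tons * grade

# Report the endowment as a probability distribution, as the DSF method does.
p10, p50, p90 = np.percentile(endow, [10, 50, 90])
```

    Disaggregating the count-times-tonnage factor by size class is exactly what widens the spread relative to a single aggregate F x T factor.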

  20. EEG Characteristic Extraction Method of Listening Music and Objective Estimation Method Based on Latency Structure Model in Individual Characteristics

    NASA Astrophysics Data System (ADS)

    Ito, Shin-Ichi; Mitsukura, Yasue; Nakamura Miyamura, Hiroko; Saito, Takafumi; Fukumi, Minoru

    The EEG is characterized by unique, individual characteristics, yet little research has taken these individual characteristics into account when analyzing EEG signals. The EEG often has frequency components that describe most of its significant characteristics, and these components differ in importance; we regard this difference in importance as a reflection of individual characteristics. In this paper, we propose a new method for extracting an EEG characteristic vector using a latency structure model in individual characteristics (LSMIC). The LSMIC is a latency structure model, based on the normal distribution, that treats personal error as the individual characteristics. Real-coded genetic algorithms (RGA) are used to specify the personal error, which is an unknown parameter. Moreover, we propose an objective estimation method that plots the EEG characteristic vector in a visualization space. Finally, the performance of the proposed method is evaluated using a realistic simulation and applied to real EEG data. The results of our experiment show the effectiveness of the proposed method.

  1. A comparison of underwater hearing sensitivity in bottlenose dolphins (Tursiops truncatus) determined by electrophysiological and behavioral methods.

    PubMed

    Houser, Dorian S; Finneran, James J

    2006-09-01

    Variable stimulus presentation methods are used in auditory evoked potential (AEP) estimates of cetacean hearing sensitivity, each of which might affect stimulus reception and hearing threshold estimates. This study quantifies differences in underwater hearing thresholds obtained by AEP and behavioral means. For AEP estimates, a transducer embedded in a suction cup (jawphone) was coupled to the dolphin's lower jaw for stimulus presentation. Underwater AEP thresholds were obtained for three dolphins in San Diego Bay and for one dolphin in a quiet pool. Thresholds were estimated from the envelope following response at carrier frequencies ranging from 10 to 150 kHz. One animal, with an atypical audiogram, demonstrated significantly greater hearing loss in the right ear than in the left. Across test conditions, the range and average difference between AEP and behavioral threshold estimates were consistent with published comparisons between underwater behavioral and in-air AEP thresholds. AEP thresholds for one animal obtained in-air and in a quiet pool demonstrated a range of differences of -10 to 9 dB (mean = 3 dB). Results suggest that for the frequencies tested, the presentation of sound stimuli through a jawphone, underwater and in-air, results in acceptable differences to AEP threshold estimates.

  2. Evaluation of a dietary targets monitor.

    PubMed

    Lean, M E J; Anderson, A S; Morrison, C; Currall, J

    2003-05-01

    To evaluate a two-page food frequency list for use as a Dietary Targets Monitor in large-scale surveys to quantify consumption of the key food groups targeted in health promotion. Intakes of fruit and vegetables, starchy foods and fish estimated from a validated food frequency questionnaire (FFQ) were compared with a short food frequency list (the Dietary Targets Monitor) specifically designed to assess habitual frequency of consumption of foods in relation to dietary targets which form the basis of a National (Scottish) Food and Health Policy. A total of 1085 adults aged 25-64 y from the Glasgow MONICA Study. The two questionnaires both collected data on frequencies of food consumption for fruit and vegetables, starchy foods and fish. Comparing the two questionnaires, there were consistent biases, best expressed as ratios (FFQ:Dietary Targets Monitor) between the methods for fruit and vegetables (1.33, 95% CI 1.29, 1.38) and 'starchy foods' (1.08, 95% CI 1.05, 1.12), the DTM showing systematic under-reporting by men. For fish consumption, there was essentially no bias between the methods (0.99, 95% CI 0.94, 1.03). Using calibration factors to adjust for biases, the Dietary Targets Monitor indicated that 16% of the subjects were achieving the Scottish Diet food target (400 g/day) for fruit and vegetable consumption. Nearly one-third (32%) of the subjects were eating the recommended intakes of fish (three portions per week). The Dietary Targets Monitor measure of starchy foods consumption was calibrated using FFQ data to be able to make quantitative estimates: 20% of subjects were eating six or more portions of starchy food daily. A similar estimation of total fat intake and saturated fat intake (g/day) allowed the categorization of subjects as low, moderate or high fat consumers, with broad agreement between the methods.
    The levels of agreement, demonstrated by Bland-Altman analysis, were insufficient to permit use of the adjusted DTM to estimate quantitative consumption in smaller subgroups. The Dietary Targets Monitor provides a short, easily administered, dietary assessment tool with the capacity to monitor intakes for changes towards national dietary targets for several key foods and nutrients.

  3. Dispersion curve estimation via a spatial covariance method with ultrasonic wavefield imaging.

    PubMed

    Chong, See Yenn; Todd, Michael D

    2018-05-01

    Numerous Lamb wave dispersion curve estimation methods have been developed to support damage detection and localization strategies in non-destructive evaluation/structural health monitoring (NDE/SHM) applications. In this paper, the covariance matrix is used to extract features from an ultrasonic wavefield imaging (UWI) scan in order to estimate the phase and group velocities of S0 and A0 modes. A laser ultrasonic interrogation method based on a Q-switched laser scanning system was used to interrogate full-field ultrasonic signals in a 2-mm aluminum plate at five different frequencies. These full-field ultrasonic signals were processed in three-dimensional space-time domain. Then, the time-dependent covariance matrices of the UWI were obtained based on the vector variables in Cartesian and polar coordinate spaces for all time samples. A spatial covariance map was constructed to show spatial correlations within the full wavefield. It was observed that the variances may be used as a feature for S0 and A0 mode properties. The phase velocity and the group velocity were found using a variance map and an enveloped variance map, respectively, at five different frequencies. This facilitated the estimation of Lamb wave dispersion curves. The estimated dispersion curves of the S0 and A0 modes showed good agreement with the theoretical dispersion curves. Copyright © 2018 Elsevier B.V. All rights reserved.

  4. Estimate of higher order ionospheric errors in GNSS positioning

    NASA Astrophysics Data System (ADS)

    Hoque, M. Mainul; Jakowski, N.

    2008-10-01

    Precise navigation and positioning using GPS/GLONASS/Galileo require the ionospheric propagation errors to be accurately determined and corrected for. The current dual-frequency method of ionospheric correction ignores higher-order ionospheric errors, such as the second- and third-order ionospheric terms in the refractive index formula and errors due to bending of the signal. The total electron content (TEC) is assumed to be the same at the two GPS frequencies. All these assumptions lead to erroneous estimations and corrections of the ionospheric errors. In this paper a rigorous treatment of these problems is presented. Different approximation formulas have been proposed to correct errors due to excess path length in addition to the free space path length, the TEC difference at the two GNSS frequencies, and the third-order ionospheric term. The GPS dual-frequency residual range errors can be corrected to within millimeter-level accuracy using the proposed correction formulas.
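    The first-order term and the standard dual-frequency combination it motivates can be written out directly; this sketch removes exactly the 40.3·TEC/f² delay, leaving the higher-order (~1/f³ and bending) errors that the paper addresses. The TEC and range values are assumptions.

```python
F1, F2 = 1575.42e6, 1227.60e6        # GPS L1/L2 carrier frequencies (Hz)

def iono_delay_m(tec, f):
    """First-order ionospheric group delay (m) for TEC in electrons/m^2."""
    return 40.3 * tec / f**2

def iono_free(p1, p2):
    """Ionosphere-free pseudorange combination: cancels the 1/f^2 term."""
    g = (F1 / F2) ** 2
    return (g * p1 - p2) / (g - 1.0)

tec = 5e17                            # moderate ionosphere (electrons/m^2)
true_range = 2.2e7                    # assumed geometric range (m)
p1 = true_range + iono_delay_m(tec, F1)
p2 = true_range + iono_delay_m(tec, F2)
```

    Because both pseudoranges carry the same TEC in their 1/f² terms, the combination recovers the geometric range exactly in this first-order model; the residual millimeter-to-centimeter effects come only from the neglected higher-order terms.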

  5. Method of estimating pulse response using an impedance spectrum

    DOEpatents

    Morrison, John L; Morrison, William H; Christophersen, Jon P; Motloch, Chester G

    2014-10-21

    Electrochemical Impedance Spectrum data are used to predict pulse performance of an energy storage device. The impedance spectrum may be obtained in-situ. A simulation waveform includes a pulse wave with a period greater than or equal to the lowest frequency used in the impedance measurement. Fourier series coefficients of the pulse train can be obtained. The number of harmonic constituents in the Fourier series is selected so as to appropriately resolve the response, but the maximum frequency should be less than or equal to the highest frequency used in the impedance measurement. Using a current pulse as an example, the Fourier coefficients of the pulse are multiplied by the impedance spectrum at corresponding frequencies to obtain Fourier coefficients of the voltage response to the desired pulse. The Fourier coefficients of the response are then summed and reassembled to obtain the overall time domain estimate of the voltage using the Fourier series analysis.
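    The Fourier-series reassembly can be sketched as below. The simple R-C impedance standing in for a measured spectrum, and the square current pulse train, are assumptions for illustration, not the patent's data.

```python
import numpy as np

R, C = 0.05, 100.0                 # assumed cell resistance (ohm), capacitance (F)

def Z(f):
    """Hypothetical impedance spectrum standing in for measured data."""
    return R + 1.0 / (2j * np.pi * f * C)

period = 10.0                      # s; period >= 1 / lowest measured frequency
n_harm = 49                        # keep max frequency <= highest measured
amp = 2.0                          # A; zero-mean square current pulse train

t = np.linspace(0.0, period, 1000, endpoint=False)
v = np.zeros_like(t)
for k in range(1, n_harm + 1, 2):  # a square wave has only odd harmonics
    f = k / period
    ck = 4.0 * amp / (np.pi * k)   # Fourier coefficient of the current pulse
    vk = ck * Z(f)                 # voltage coefficient via the impedance
    v += np.abs(vk) * np.sin(2.0 * np.pi * f * t + np.angle(vk))
```

    Each harmonic of the current is scaled and phase-shifted by the impedance at its frequency; summing the shifted sinusoids reassembles the time-domain voltage estimate.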

  6. Research on Radar Micro-Doppler Feature Parameter Estimation of Propeller Aircraft

    NASA Astrophysics Data System (ADS)

    He, Zhihua; Tao, Feixiang; Duan, Jia; Luo, Jingsheng

    2018-01-01

    The micro-motion modulation effect of rotating propellers on the radar echo can serve as a stable feature for aircraft target recognition. Thus, micro-Doppler feature parameter estimation is key to accurate target recognition. In this paper, the radar echo of rotating propellers is modelled and simulated. Based on this model, the distribution characteristics of the micro-motion modulation energy in the time, frequency and time-frequency domains are analyzed. The micro-motion modulation energy produced by the scattering points of the rotating propellers is accumulated using the Inverse Radon (I-Radon) transform, which can be used to estimate the micro-motion modulation parameters. Finally, the proposed parameter estimation method is shown to be effective on measured data. The micro-motion parameters of aircraft can be used as features for radar target recognition.

  7. The use of historical information for regional frequency analysis of extreme skew surge

    NASA Astrophysics Data System (ADS)

    Frau, Roberto; Andreewsky, Marc; Bernardara, Pietro

    2018-03-01

    The design of effective coastal protections requires an adequate estimation of the annual occurrence probability of rare events associated with return periods of up to 10^3 years. Regional frequency analysis (RFA) has been proven to be an applicable way to estimate extreme events by sorting regional data into large, spatially distributed datasets. Nowadays, historical data are available to provide new insight into past events. The use of historical information would increase the precision and reliability of regional extreme quantile estimation. However, historical data come from significant extreme events that were not recorded by tide gauges; they are typically isolated observations, unlike the continuous data from systematic tide-gauge measurements. This complicates the definition of the duration of the observation period, which is crucial for the frequency estimation of extreme occurrences. For this reason, we introduce here the concept of credible duration. The proposed RFA method (hereinafter referred to as FAB, from the names of the authors) allows the use of historical data together with systematic data as a result of the credible duration concept.

  8. An at-site flood estimation method in the context of nonstationarity I. A simulation study

    NASA Astrophysics Data System (ADS)

    Gado, Tamer A.; Nguyen, Van-Thanh-Van

    2016-04-01

    The stationarity of annual flood peak records is the traditional assumption of flood frequency analysis. In some cases, however, as a result of land-use and/or climate change, this assumption is no longer valid. Therefore, new statistical models are needed to capture dynamically the change of probability density functions over time, in order to obtain reliable flood estimates. In this study, an innovative method for nonstationary flood frequency analysis is presented. The new method is based on detrending the flood series and applying the L-moments along with the GEV distribution to the transformed "stationary" series (hereafter called the LM-NS method). The LM-NS method was assessed through a comparative study with the maximum likelihood (ML) method for the nonstationary GEV model, as well as with the stationary (S) GEV model. The comparative study, based on Monte Carlo simulations, was carried out for three nonstationary GEV models: a linear dependence of the mean on time (GEV1), a quadratic dependence of the mean on time (GEV2), and a linear dependence of both the mean and the log standard deviation on time (GEV11). The simulation results indicated that the LM-NS method performs better than the ML method for most of the cases studied, whereas the stationary method provides the least accurate results. An additional advantage of the LM-NS method is that it avoids the numerical problems (e.g., convergence problems) that may occur with the ML method when estimating parameters from small data samples.
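
    The detrend-then-fit idea can be sketched with sample L-moments and Hosking's closed-form approximation for the GEV shape parameter. The detrending convention below (subtracting a fitted line and restoring the sample mean) is an assumed illustration, not necessarily the paper's exact procedure.

```python
import numpy as np
from math import gamma, log

def sample_lmoments(x):
    """First three sample L-moments via probability-weighted moments."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((i - 1) / (n - 1) * x) / n
    b2 = np.sum((i - 1) * (i - 2) / ((n - 1) * (n - 2)) * x) / n
    l1, l2, l3 = b0, 2 * b1 - b0, 6 * b2 - 6 * b1 + b0
    return l1, l2, l3 / l2          # mean, L-scale, L-skewness

def gev_fit_lmom(x):
    """GEV (location, scale, shape k; Hosking's sign convention)
    estimated from sample L-moments."""
    l1, l2, t3 = sample_lmoments(x)
    c = 2.0 / (3.0 + t3) - log(2) / log(3)
    k = 7.8590 * c + 2.9554 * c ** 2        # Hosking's approximation
    sigma = l2 * k / ((1 - 2.0 ** (-k)) * gamma(1 + k))
    mu = l1 - sigma * (1 - gamma(1 + k)) / k
    return mu, sigma, k

def lm_ns_fit(years, flows):
    """Sketch of the LM-NS idea: remove a linear trend in the mean,
    then fit a GEV to the detrended ('stationary') series."""
    a, b = np.polyfit(years, flows, 1)
    detrended = flows - (a * years + b) + flows.mean()
    return gev_fit_lmom(detrended)
```

    A quick check: sampling a GEV via its quantile function, x = mu + sigma*(1 - (-ln U)^k)/k, and refitting should approximately recover the parameters.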

  9. Comparing horizontal-to-vertical spectral ratios with sediment-to-bedrock spectral ratios in a region with a thin layer of unconsolidated sediments

    NASA Astrophysics Data System (ADS)

    Schleicher, L.; Pratt, T. L.

    2017-12-01

    Underlying sediment can amplify ground motions during earthquakes, making site response estimates key components in seismic evaluations for building infrastructure. The horizontal-to-vertical spectral ratio (HVSR) method, using either earthquake signals or ambient noise as input, is an appealing method for estimating site response because it uses only a single seismic station rather than the two or more seismometers traditionally used to compute a horizontal sediment-to-bedrock spectral ratio (SBSR). A number of studies have had mixed results when comparing the accuracy of the HVSR and SBSR methods for identifying the frequencies and amplitudes of the primary resonance peaks. Many of these studies have been carried out in areas of complex geology, such as basins with structures that can introduce 3D effects. Here we assess the effectiveness of the HVSR method by comparison with the SBSR method and with modeled transfer functions in an area dominated by a flat, thin, unconsolidated sediment layer over bedrock, which should be an ideal setting for the HVSR method. In this preliminary study, we analyze teleseismic and regional earthquake recordings from a temporary seismometer array deployed throughout Washington, DC, which is underlain by a wedge of unconsolidated Atlantic Coastal Plain sedimentary strata 0 to 270 m thick. At most sites, we find a close match in the amplitudes and frequencies of large resonance peaks in horizontal ground motions at frequencies of 0.7 to 5 Hz in site response estimates using the HVSR and SBSR methods. Amplitudes of the HVSRs tend to be slightly lower than SBSRs at 3 Hz and below, but the amplitudes of the fundamental resonance peaks often match closely. The results suggest that the HVSR method could be a successful approach for computing site response estimates in areas of simple shallow geology consisting of thin sedimentary layers with a strong reflector at the underlying bedrock surface.
This publication represents the views of the authors and does not necessarily represent the views of the Defense Nuclear Facilities Safety Board.
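
    A minimal version of the HVSR computation might look like the sketch below; the Hann taper, 5-bin spectral smoothing, and quadratic mean of the two horizontal components are common but assumed choices, not details from this study.

```python
import numpy as np

def smoothed_amplitude_spectrum(x, fs, smooth_bins=5):
    """One-sided amplitude spectrum, lightly smoothed with a moving average."""
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    kernel = np.ones(smooth_bins) / smooth_bins
    return np.convolve(spec, kernel, mode="same"), np.fft.rfftfreq(len(x), 1 / fs)

def hvsr(north, east, vertical, fs):
    """Horizontal-to-vertical spectral ratio from three-component records."""
    hn, f = smoothed_amplitude_spectrum(north, fs)
    he, _ = smoothed_amplitude_spectrum(east, fs)
    hv, _ = smoothed_amplitude_spectrum(vertical, fs)
    h = np.sqrt(0.5 * (hn ** 2 + he ** 2))     # quadratic mean of horizontals
    return f, h / hv
```

    On synthetic data with a resonance-like horizontal sinusoid riding on noise, the ratio peaks at the injected frequency.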

  10. Wavelet-based group and phase velocity measurements: Method

    NASA Astrophysics Data System (ADS)

    Yang, H. Y.; Wang, W. W.; Hung, S. H.

    2016-12-01

    Measurements of group and phase velocities of surface waves are often carried out by applying a series of narrow bandpass or stationary Gaussian filters localized at specific frequencies to wave packets and estimating the corresponding arrival times at the peak envelopes and phases of the Fourier spectra. However, it is known that seismic waves are inherently nonstationary and not well represented by a sum of sinusoids. Alternatively, a continuous wavelet transform (CWT), which decomposes a time series into a family of wavelets (translated and scaled copies of a generally fast oscillating and decaying function known as the mother wavelet), is capable of retaining localization in both the time and frequency domains and is well suited to the time-frequency analysis of nonstationary signals. Here we develop a wavelet-based method to measure frequency-dependent group and phase velocities, an essential dataset used in crust and mantle tomography. For a given time series, we employ the complex Morlet wavelet to obtain the scalogram of amplitude modulus |Wg| and phase φ on the time-frequency plane. The instantaneous frequency (IF) is then calculated by taking the derivative of phase with respect to time, i.e., (1/2π)dφ(f, t)/dt. Time windows comprising strong energy arrivals to be measured can be identified by those IFs close to the frequencies with the maximum modulus that vary smoothly and monotonically with time. The respective IFs in each selected time window are further interpolated to yield a smooth branch of ridge points, or representative IFs, at which the arrival time, tridge(f), and phase, φridge(f), after unwrapping and correcting cycle skipping based on a priori knowledge of the possible velocity range, are determined for group and phase velocity estimation. We will demonstrate our measurement method using both ambient noise cross correlation functions and multi-mode surface waves from earthquakes. 
The obtained dispersion curves will be compared with those by a conventional narrow bandpass method.
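
    The core step, estimating instantaneous frequency from the phase of complex Morlet wavelet coefficients, can be sketched at a single centre frequency (the paper works over a full scalogram; the 6-cycle wavelet width below is an assumption for illustration).

```python
import numpy as np

def morlet_coeffs(x, fs, f_c, cycles=6.0):
    """Convolve a signal with a complex Morlet wavelet centred at f_c (Hz)."""
    sigma = cycles / (2 * np.pi * f_c)          # envelope width in seconds
    t = np.arange(-4 * sigma, 4 * sigma, 1 / fs)
    psi = np.exp(2j * np.pi * f_c * t) * np.exp(-t ** 2 / (2 * sigma ** 2))
    psi /= np.sum(np.abs(psi))
    return np.convolve(x, psi, mode="same")

def instantaneous_frequency(w, fs):
    """IF = (1/2π) dφ/dt from the unwrapped phase of wavelet coefficients."""
    phase = np.unwrap(np.angle(w))
    return np.gradient(phase) * fs / (2 * np.pi)
```

    For a pure sinusoid the IF curve is flat at the signal frequency away from the edges, which is the property used to pick ridge points.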

  11. Measuring Ambiguity in HLA Typing Methods

    PubMed Central

    Madbouly, Abeer; Freeman, John; Maiers, Martin

    2012-01-01

    In hematopoietic stem cell transplantation, donor selection is based primarily on matching donor and patient HLA genes. These genes are highly polymorphic and their typing can result in exact allele assignment at each gene (the resolution at which patients and donors are matched), but it can also result in a set of ambiguous assignments, depending on the typing methodology used. To facilitate rapid identification of matched donors, registries employ statistical algorithms to infer HLA alleles from ambiguous genotypes. Linkage disequilibrium information encapsulated in haplotype frequencies is used to facilitate prediction of the most likely haplotype assignment. An HLA typing with less ambiguity produces fewer high-probability haplotypes and a more reliable prediction. We estimated ambiguity for several HLA typing methods across four continental populations using an information theory-based measure, Shannon's entropy. We used allele and haplotype frequencies to calculate entropy for different sets of 1,000 subjects with simulated HLA typing. Using allele frequencies we calculated an average entropy in Caucasians of 1.65 for serology, 1.06 for allele family level, 0.49 for a 2002-era SSO kit, and 0.076 for single-pass SBT. When using haplotype frequencies in entropy calculations, we found average entropies of 0.72 for serology, 0.73 for allele family level, 0.05 for SSO, and 0.002 for single-pass SBT. Application of haplotype frequencies further reduces HLA typing ambiguity. We also estimated expected confirmatory typing mismatch rates for simulated subjects. In a hypothetical registry with all donors typed using the same method, the entropy values based on haplotype frequencies correspond to confirmatory typing mismatch rates of 1.31% for SSO versus only 0.08% for SBT. Intermediate-resolution single-pass SBT contains the least ambiguity of the methods we evaluated and therefore the most certainty in allele prediction. 
The presented measure objectively evaluates HLA typing methods and can help define acceptable HLA typing for donor recruitment. PMID:22952712
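
    The ambiguity measure itself is ordinary Shannon entropy over the probabilities of the candidate assignments consistent with a typing; a toy version:

```python
import math

def typing_entropy(probs):
    """Shannon entropy (bits) of the probability distribution over the
    candidate allele (or haplotype) assignments consistent with a typing."""
    total = sum(probs)
    return -sum(p / total * math.log2(p / total) for p in probs if p > 0)
```

    An unambiguous typing (one candidate) has zero entropy; four equally likely candidates give 2 bits, and skewing the distribution toward one candidate, as haplotype frequencies do, lowers the entropy.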

  12. Method and System for Temporal Filtering in Video Compression Systems

    NASA Technical Reports Server (NTRS)

    Lu, Ligang; He, Drake; Jagmohan, Ashish; Sheinin, Vadim

    2011-01-01

    Three related innovations combine improved non-linear motion estimation, video coding, and video compression. The first system comprises a method in which side information is generated using an adaptive, non-linear motion model. This method enables extrapolating and interpolating a visual signal, including determining a first motion vector between a first pixel position in a first image and a second pixel position in a second image; determining a second motion vector between the second pixel position in the second image and a third pixel position in a third image; determining a third motion vector from the first pixel position in the first image, the second pixel position in the second image, and the third pixel position in the third image using a non-linear model; and determining the position of a fourth pixel in a fourth image based upon the third motion vector. For the video compression element, the video encoder has low computational complexity and high compression efficiency. The disclosed system comprises a video encoder and a decoder. The encoder converts the source frame into a space-frequency representation, estimates the conditional statistics of at least one vector of space-frequency coefficients with similar frequencies, and is conditioned on previously encoded data. It estimates an encoding rate based on the conditional statistics and applies a Slepian-Wolf code with the computed encoding rate. The method for decoding includes generating a side-information vector of frequency coefficients based on previously decoded source data, encoder statistics, and previous reconstructions of the source frequency vector. It also performs Slepian-Wolf decoding of a source frequency vector based on the generated side-information and the Slepian-Wolf code bits. 
The video coding element includes receiving a first reference frame having a first pixel value at a first pixel position, a second reference frame having a second pixel value at a second pixel position, and a third reference frame having a third pixel value at a third pixel position. It determines a first motion vector between the first pixel position and the second pixel position, a second motion vector between the second pixel position and the third pixel position, and a fourth pixel value for a fourth frame based upon a linear or nonlinear combination of the first pixel value, the second pixel value, and the third pixel value. A stationary filtering process determines the estimated pixel values. The parameters of the filter may be predetermined constants.

  13. Multitaper Spectral Analysis and Wavelet Denoising Applied to Helioseismic Data

    NASA Technical Reports Server (NTRS)

    Komm, R. W.; Gu, Y.; Hill, F.; Stark, P. B.; Fodor, I. K.

    1999-01-01

    Estimates of solar normal mode frequencies from helioseismic observations can be improved by using Multitaper Spectral Analysis (MTSA) to estimate spectra from the time series, then using wavelet denoising of the log spectra. MTSA leads to a power spectrum estimate with reduced variance and better leakage properties than the conventional periodogram. Under the assumption of stationarity and mild regularity conditions, the log multitaper spectrum has a statistical distribution that is approximately Gaussian, so wavelet denoising is asymptotically an optimal method to reduce the noise in the estimated spectra. We find that a single m-ν spectrum benefits greatly from MTSA followed by wavelet denoising, and that wavelet denoising by itself can be used to improve m-averaged spectra. We compare estimates using two different 5-taper estimates (Slepian and sine tapers) and the periodogram estimate, for GONG time series at selected angular degrees l. We compare those three spectra with and without wavelet denoising, both visually and in terms of the mode parameters estimated from the pre-processed spectra using the GONG peak-fitting algorithm. The two multitaper estimates give equivalent results. The number of modes fitted well by the GONG algorithm is 20% to 60% larger (depending on l and the temporal frequency) when applied to the multitaper estimates than when applied to the periodogram. The estimated mode parameters (frequency, amplitude and width) are comparable for the three power spectrum estimates, except for modes with very small mode widths (a few frequency bins), where the multitaper spectra broadened the modes compared with the periodogram. We tested the influence of the number of tapers used and found that narrow modes at low n values are broadened to the extent that they can no longer be fit if the number of tapers is too large. For helioseismic time series of this length and temporal resolution, the optimal number of tapers is less than 10.
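
    A compact multitaper estimate using sine tapers (one of the two taper families compared above; sine tapers have a simple closed form, whereas Slepian tapers require a dedicated solver) can be sketched as:

```python
import numpy as np

def sine_taper_spectrum(x, fs, n_tapers=5):
    """Multitaper power spectrum estimate: average the periodograms of
    the signal windowed by the first n_tapers sine tapers."""
    n = len(x)
    freqs = np.fft.rfftfreq(n, 1 / fs)
    psd = np.zeros(len(freqs))
    for k in range(1, n_tapers + 1):
        # k-th sine taper, unit energy by construction.
        taper = np.sqrt(2.0 / (n + 1)) * np.sin(np.pi * k * np.arange(1, n + 1) / (n + 1))
        psd += np.abs(np.fft.rfft(x * taper)) ** 2
    return freqs, psd / (n_tapers * fs)
```

    Averaging over tapers trades a slightly broadened peak (the effect noted above for narrow modes) for reduced variance relative to the periodogram.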

  14. Experimental Quasi-Microwave Whole-Body Averaged SAR Estimation Method Using Cylindrical-External Field Scanning

    NASA Astrophysics Data System (ADS)

    Kawamura, Yoshifumi; Hikage, Takashi; Nojima, Toshio

    The aim of this study is to develop a new whole-body averaged specific absorption rate (SAR) estimation method based on the external-cylindrical field scanning technique. This technique is adopted with the goal of simplifying the dosimetry estimation of human phantoms that have different postures or sizes. An experimental scaled model system is constructed. In order to examine the validity of the proposed method for realistic human models, we discuss the pros and cons of measurements and numerical analyses based on the finite-difference time-domain (FDTD) method. We consider the anatomical European human phantoms and plane-wave exposure in the 2 GHz mobile phone frequency band. The measured whole-body averaged SAR results obtained by the proposed method are compared with the results of the FDTD analyses.

  15. Subjective frequency estimates for 2,938 monosyllabic words.

    PubMed

    Balota, D A; Pilotti, M; Cortese, M J

    2001-06-01

    Subjective frequency estimates for a large sample of monosyllabic English words were collected from 574 young adults (undergraduate students) and from a separate group of 1,590 adults of varying ages and educational backgrounds. Estimates from the latter group were collected via the Internet. In addition, 90 healthy older adults provided estimates for a random sample of 480 of these words. All groups rated words with respect to the estimated frequency of encounters of each word on a 7-point scale, ranging from never encountered to encountered several times a day. The young and older groups also rated each word with respect to the frequency of encounters in different perceptual domains (e.g., reading, hearing, writing, or speaking). The results of regression analyses indicated that objective log frequency and meaningfulness accounted for most of the variance in subjective frequency estimates, whereas neighborhood size accounted for the least amount of variance in the ratings. The predictive power of log frequency and meaningfulness was dependent on the level of subjective frequency estimates. Meaningfulness was a better predictor of subjective frequency for uncommon words, whereas log frequency was a better predictor of subjective frequency for common words. Our discussion focuses on the utility of subjective frequency estimates compared with other estimates of familiarity. The raw subjective frequency data for all words are available at http://www.artsci.wustl.edu/dbalota/labpub.html.
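
    The kind of regression analysis described can be illustrated with synthetic data; the coefficients, predictors, and noise level below are invented for illustration and are not the study's values.

```python
import numpy as np

# Hypothetical data: predict mean subjective frequency ratings (1-7 scale)
# from objective log frequency and a meaningfulness score.
rng = np.random.default_rng(1)
n = 500
log_freq = rng.uniform(0, 5, n)
meaning = rng.uniform(1, 7, n)
rating = 1.0 + 0.8 * log_freq + 0.2 * meaning + rng.normal(0, 0.3, n)

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(n), log_freq, meaning])
beta, *_ = np.linalg.lstsq(X, rating, rcond=None)
pred = X @ beta
r2 = 1 - np.sum((rating - pred) ** 2) / np.sum((rating - rating.mean()) ** 2)
```

    The fitted coefficients recover the generating values, and comparing coefficient magnitudes (on standardized predictors, in practice) is how the relative predictive power of log frequency and meaningfulness would be assessed.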

  16. Accuracy of time-domain and frequency-domain methods used to characterize catchment transit time distributions

    NASA Astrophysics Data System (ADS)

    Godsey, S. E.; Kirchner, J. W.

    2008-12-01

    The mean residence time - the average time that it takes rainfall to reach the stream - is a basic parameter used to characterize catchment processes. Heterogeneities in these processes lead to a distribution of travel times around the mean residence time. By examining this travel time distribution, we can better predict catchment response to contamination events. A catchment system with shorter residence times or narrower distributions will respond quickly to contamination events, whereas systems with longer residence times or longer-tailed distributions will respond more slowly to those same contamination events. The travel time distribution of a catchment is typically inferred from time series of passive tracers (e.g., water isotopes or chloride) in precipitation and streamflow. Variations in the tracer concentration in streamflow are usually damped compared to those in precipitation, because precipitation inputs from different storms (with different tracer signatures) are mixed within the catchment. Mathematically, this mixing process is represented by the convolution of the travel time distribution and the precipitation tracer inputs to generate the stream tracer outputs. Because convolution in the time domain is equivalent to multiplication in the frequency domain, it is relatively straightforward to estimate the parameters of the travel time distribution in either domain. In the time domain, the parameters describing the travel time distribution are typically estimated by maximizing the goodness of fit between the modeled and measured tracer outputs. In the frequency domain, the travel time distribution parameters can be estimated by fitting a power-law curve to the ratio of precipitation spectral power to stream spectral power. Differences between the methods of parameter estimation in the time and frequency domain mean that these two methods may respond differently to variations in data quality, record length and sampling frequency. 
Here we evaluate how well these two methods of travel time parameter estimation respond to different sources of uncertainty and compare the methods to one another. We do this by generating synthetic tracer input time series of different lengths, and convolve these with specified travel-time distributions to generate synthetic output time series. We then sample both the input and output time series at various sampling intervals and corrupt the time series with realistic error structures. Using these 'corrupted' time series, we infer the apparent travel time distribution, and compare it to the known distribution that was used to generate the synthetic data in the first place. This analysis allows us to quantify how different record lengths, sampling intervals, and error structures in the tracer measurements affect the apparent mean residence time and the apparent shape of the travel time distribution.
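
    The convolution model at the heart of both estimation approaches is easy to state in code. The exponential travel-time distribution below is one commonly assumed form, used here only to make the sketch concrete.

```python
import numpy as np

def stream_tracer(precip_tracer, tau, dt=1.0):
    """Convolve a precipitation tracer input with an exponential
    travel-time distribution (mean residence time tau) to produce
    the stream tracer output."""
    n = len(precip_tracer)
    t = np.arange(n) * dt
    h = np.exp(-t / tau)
    h /= h.sum()                       # discrete TTD weights sum to one
    return np.convolve(precip_tracer, h)[:n]
```

    The damping of a sinusoidal input follows the TTD's transfer function, |H(f)| = 1/sqrt(1 + (2π f τ)^2) for the exponential case, which is exactly the spectral-ratio relationship exploited by the frequency-domain fitting described above.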

  17. Distributed processing of a GPS receiver network for a regional ionosphere map

    NASA Astrophysics Data System (ADS)

    Choi, Kwang Ho; Hoo Lim, Joon; Yoo, Won Jae; Lee, Hyung Keun

    2018-01-01

    This paper proposes a distributed processing method applicable to GPS receivers in a network to generate a regional ionosphere map accurately and reliably. For accuracy, the proposed method is operated by multiple local Kalman filters and Kriging estimators. Each local Kalman filter is applied to a dual-frequency receiver to estimate the receiver’s differential code bias and vertical ionospheric delays (VIDs) at different ionospheric pierce points. The Kriging estimator selects and combines several VID estimates provided by the local Kalman filters to generate the VID estimate at each ionospheric grid point. For reliability, the proposed method uses receiver fault detectors and satellite fault detectors. Each receiver fault detector compares the VID estimates of the same local area provided by different local Kalman filters. Each satellite fault detector compares the VID estimate of each local area with that projected from the other local areas. Compared with the traditional centralized processing method, the proposed method is advantageous in that it considerably reduces the computational burden of each single Kalman filter and enables flexible fault detection, isolation, and reconfiguration capability. To evaluate the performance of the proposed method, several experiments with field collected measurements were performed.
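
    A bare-bones ordinary Kriging estimator for the VID at a grid point might look like the following; the exponential covariance model and its parameters are placeholders, not the paper's choices, and the local-filter selection logic is omitted.

```python
import numpy as np

def ordinary_kriging(obs_xy, obs_val, grid_xy, sill=1.0, rng_km=1000.0):
    """Ordinary Kriging of vertical ionospheric delays at one grid point,
    using an assumed exponential covariance model."""
    obs_xy = np.asarray(obs_xy, dtype=float)
    obs_val = np.asarray(obs_val, dtype=float)
    cov = lambda d: sill * np.exp(-d / rng_km)
    n = len(obs_val)
    # Kriging system: covariances between observations, plus the
    # unbiasedness constraint (weights sum to one) via a Lagrange row.
    d = np.linalg.norm(obs_xy[:, None, :] - obs_xy[None, :, :], axis=-1)
    a = np.ones((n + 1, n + 1))
    a[:n, :n] = cov(d)
    a[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = cov(np.linalg.norm(obs_xy - np.asarray(grid_xy), axis=-1))
    w = np.linalg.solve(a, b)
    return float(w[:n] @ obs_val)
```

    With no nugget term, the estimator reproduces the observed VID exactly at an observation point, and interpolates smoothly between ionospheric pierce points elsewhere.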

  18. Fringe order correction for the absolute phase recovered by two selected spatial frequency fringe projections in fringe projection profilometry.

    PubMed

    Ding, Yi; Peng, Kai; Yu, Miao; Lu, Lei; Zhao, Kun

    2017-08-01

    The performance of the two selected spatial frequency phase unwrapping methods is limited by a phase error bound beyond which errors will occur in the fringe order, leading to a significant error in the recovered absolute phase map. In this paper, we propose a method to detect and correct the wrong fringe orders. Two constraints are introduced during the fringe order determination of two selected spatial frequency phase unwrapping methods. A strategy to detect and correct the wrong fringe orders is also described. Compared with the existing methods, we do not need to estimate the threshold associated with absolute phase values to determine the fringe order error, making the method more reliable and avoiding a search procedure when detecting and correcting successive fringe order errors. The effectiveness of the proposed method is validated by the experimental results.
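
    For context, the fringe order determination step that these corrections protect can be sketched as below. This is the standard two-frequency determination, not the paper's correction strategy, and it assumes the low-frequency phase is already absolute (a single fringe across the field).

```python
import numpy as np

def fringe_order(phi_low, phi_high, freq_ratio):
    """Integer fringe order of the high-frequency wrapped phase, given an
    absolute low-frequency phase and the ratio of the two spatial frequencies."""
    return np.round((freq_ratio * phi_low - phi_high) / (2 * np.pi))

def unwrap_two_freq(phi_low, phi_high, freq_ratio):
    """Absolute high-frequency phase from wrapped phase plus fringe order."""
    return phi_high + 2 * np.pi * fringe_order(phi_low, phi_high, freq_ratio)
```

    The rounding step is exactly where a phase error beyond the bound flips the fringe order by one, producing the 2π jumps the paper's method detects and corrects.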

  19. Resolution of Forces and Strain Measurements from an Acoustic Ground Test

    NASA Technical Reports Server (NTRS)

    Smith, Andrew M.; LaVerde, Bruce T.; Hunt, Ronald; Waldon, James M.

    2013-01-01

    The conservatism in typical vibration tests was demonstrated: vibration testing at the component level produced reaction forces approximately a factor of 4 (approx. 12 dB) larger than the integrated acoustic test in 2 out of 3 axes. Reaction forces estimated at the base of equipment using a finite-element-based method were validated: the FEM-based estimate of interface forces may be adequate to guide development of vibration test criteria with less conservatism. Element forces estimated in secondary-structure struts were validated: the finite element approach provided the best estimate of axial strut forces in the frequency range below 200 Hz, where a rigid lumped-mass assumption for the entire electronics box was valid. Models with enough fidelity to represent the diminishing apparent mass of equipment are better suited for estimating force reactions across the frequency range. Forward work: demonstrate the reduction in conservatism provided by the current force-limited approach and an FEM-guided approach, and validate the proposed CMS approach to estimate coupled response from uncoupled system characteristics for vibroacoustics.

  20. Copula-based assessment of the relationship between flood peaks and flood volumes using information on historical floods by Bayesian Monte Carlo Markov Chain simulations

    NASA Astrophysics Data System (ADS)

    Gaál, Ladislav; Szolgay, Ján; Bacigál, Tomáš; Kohnová, Silvia

    2010-05-01

    Copula-based estimation methods of hydro-climatological extremes have increasingly been gaining attention of researchers and practitioners in the last couple of years. Unlike the traditional estimation methods which are based on bivariate cumulative distribution functions (CDFs), copulas are a relatively flexible tool of statistics that allow for modelling dependencies between two or more variables such as flood peaks and flood volumes without making strict assumptions on the marginal distributions. The dependence structure and the reliability of the joint estimates of hydro-climatological extremes, mainly in the right tail of the joint CDF, depend not only on the particular copula adopted but also on the data available for the estimation of the marginal distributions of the individual variables. Generally, data samples for frequency modelling have limited temporal extent, which is a considerable drawback of frequency analyses in practice. Therefore, it is advised to deal with statistical methods that improve any part of the process of copula construction and result in more reliable design values of hydrological variables. The scarcity of the data sample mostly in the extreme tail of the joint CDF can be bypassed, e.g., by using a considerably larger amount of simulated data by rainfall-runoff analysis or by including historical information on the variables under study. The latter approach of data extension is used here to make the quantile estimates of the individual marginals of the copula more reliable. In the presented paper it is proposed to use historical information in the frequency analysis of the marginal distributions in the framework of Bayesian Monte Carlo Markov Chain (MCMC) simulations. Generally, a Bayesian approach allows for a straightforward combination of different sources of information on floods (e.g. 
flood data from systematic measurements and historical flood records, respectively) in terms of a product of the corresponding likelihood functions. On the other hand, the MCMC algorithm is a numerical approach for sampling from the likelihood distributions. The Bayesian MCMC methods therefore provide an attractive way to estimate the uncertainty in parameters and quantile metrics of frequency distributions. The applicability of the method is demonstrated in a case study of the hydroelectric power station Orlík on the Vltava River. This site has a key role in the flood prevention of Prague, the capital city of the Czech Republic. The record length of the available flood data is 126 years from the period 1877-2002, while the flood event observed in 2002 that caused extensive damages and numerous casualties is treated as a historic one. To estimate the joint probabilities of flood peaks and volumes, different copulas are fitted and their goodness-of-fit are evaluated by bootstrap simulations. Finally, selected quantiles of flood volumes conditioned on given flood peaks are derived and compared with those obtained by the traditional method used in the practice of water management specialists of the Vltava River.
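
    As a toy illustration of copula fitting for peak-volume dependence, a Clayton copula parameter can be estimated from Kendall's tau via the moment relation theta = 2*tau/(1 - tau). The copula family and estimator here are illustrative choices; the paper fits several families and evaluates them with bootstrap goodness-of-fit tests.

```python
import numpy as np
from itertools import combinations

def kendall_tau(x, y):
    """Sample Kendall's tau (pairwise concordance measure)."""
    s = 0.0
    for i, j in combinations(range(len(x)), 2):
        s += np.sign((x[i] - x[j]) * (y[i] - y[j]))
    return 2.0 * s / (len(x) * (len(x) - 1))

def fit_clayton_theta(peaks, volumes):
    """Moment estimate of the Clayton copula parameter from Kendall's tau."""
    tau = kendall_tau(peaks, volumes)
    return 2.0 * tau / (1.0 - tau)
```

    Sampling from a Clayton copula with known theta (via its conditional distribution) and refitting recovers the parameter, which is a useful sanity check before applying the estimator to flood data.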

  1. Estimating the delay-Doppler of target echo in a high clutter underwater environment using wideband linear chirp signals: Evaluation of performance with experimental data.

    PubMed

    Yu, Ge; Yang, T C; Piao, Shengchun

    2017-10-01

    A chirp signal is a signal with linearly varying instantaneous frequency over the signal bandwidth, also known as a linear frequency modulated (LFM) signal. It is widely used in communication, radar, active sonar, and other applications due to its Doppler tolerance property in signal detection using the matched filter (MF) processing. Modern sonar uses high-gain, wideband signals to improve the signal to reverberation ratio. High gain implies a high product of the signal bandwidth and duration. However, wideband and/or long duration LFM signals are no longer Doppler tolerant. The shortcoming of the standard MF processing is loss of performance and bias in range estimation. This paper uses the wideband ambiguity function and the fractional Fourier transform method to estimate the target velocity and restore the performance. Target velocity or Doppler provides a clue for differentiating the target from the background reverberation and clutter. The methods are applied to simulated and experimental data.
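
    The wideband-ambiguity idea, searching over Doppler compression factors rather than simple frequency shifts, can be sketched with a bank of time-scaled replicas. All parameters below are illustrative, and this replica-bank search stands in for the paper's wideband ambiguity function and fractional Fourier transform processing.

```python
import numpy as np

C = 1500.0  # assumed sound speed in water, m/s

def lfm(t, f0, f1, dur):
    """Linear FM chirp sweeping f0 to f1 Hz over dur seconds."""
    k = (f1 - f0) / dur
    return np.cos(2 * np.pi * (f0 * t + 0.5 * k * t ** 2))

def estimate_velocity(received, fs, f0, f1, dur, v_grid):
    """Correlate against time-scaled (Doppler-compressed) chirp replicas
    and pick the velocity whose replica gives the largest peak."""
    best_v, best_peak = 0.0, -np.inf
    for v in v_grid:
        eta = 1.0 + 2.0 * v / C                  # wideband compression factor
        t = np.arange(0, dur / eta, 1 / fs)
        replica = lfm(eta * t, f0, f1, dur)
        peak = np.abs(np.correlate(received, replica, mode="valid")).max()
        if peak > best_peak:
            best_v, best_peak = v, peak
    return best_v

fs, f0, f1, dur = 16000.0, 2000.0, 6000.0, 0.5
true_v = 5.0
eta = 1.0 + 2.0 * true_v / C
t = np.arange(0, dur / eta, 1 / fs)
rx = lfm(eta * t, f0, f1, dur)                   # noiseless echo from a 5 m/s target
v_hat = estimate_velocity(rx, fs, f0, f1, dur, np.arange(-10.0, 10.5, 1.0))
```

    The 1 m/s grid spacing is resolvable here because the chirp's bandwidth-time product (4 kHz x 0.5 s) makes the matched filter sensitive to compression mismatches of order 1/(B*T).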

  2. Estimation of Time-Varying Pilot Model Parameters

    NASA Technical Reports Server (NTRS)

    Zaal, Peter M. T.; Sweet, Barbara T.

    2011-01-01

    Human control behavior is rarely completely stationary over time due to fatigue or loss of attention. In addition, there are many control tasks for which human operators need to adapt their control strategy to vehicle dynamics that vary in time. In previous studies on the identification of time-varying pilot control behavior, wavelets were used to estimate the time-varying frequency response functions. However, the estimation of time-varying pilot model parameters was not considered. Estimating these parameters can be a valuable tool for the quantification of different aspects of human time-varying manual control. This paper presents two methods for the estimation of time-varying pilot model parameters: a two-step method using wavelets and a windowed maximum likelihood estimation method. The methods are evaluated using simulations of a closed-loop control task with time-varying pilot equalization and vehicle dynamics. Simulations are performed with and without remnant. Both methods give accurate results when no pilot remnant is present. The wavelet transform is very sensitive to measurement noise, resulting in inaccurate parameter estimates when considerable pilot remnant is present. Maximum likelihood estimation is less sensitive to pilot remnant, but cannot detect fast changes in pilot control behavior.

  3. Tracking Architecture Based on Dual-Filter with State Feedback and Its Application in Ultra-Tight GPS/INS Integration

    PubMed Central

    Zhang, Xi; Miao, Lingjuan; Shao, Haijun

    2016-01-01

    If a Kalman Filter (KF) is applied to Global Positioning System (GPS) baseband signal preprocessing, the estimates of signal phase and frequency can have low variance, even in highly dynamic situations. This paper presents a novel preprocessing scheme based on a dual-filter structure. Compared with the traditional model utilizing a single KF, this structure avoids carrier tracking being subjected to code tracking errors. Meanwhile, as the loop filters are completely removed, state feedback values are adopted to generate local carrier and code. Although local carrier frequency has a wide fluctuation, the accuracy of Doppler shift estimation is improved. In the ultra-tight GPS/Inertial Navigation System (INS) integration, the carrier frequency derived from the external navigation information is not viewed as the local carrier frequency directly. That facilitates retaining the design principle of state feedback. However, under harsh conditions, the GPS outputs may still bear large errors which can destroy the estimation of INS errors. Thus, an innovative integrated navigation filter is constructed by modeling the non-negligible errors in the estimated Doppler shifts, to ensure INS is properly calibrated. Finally, field test and semi-physical simulation based on telemetered missile trajectory validate the effectiveness of methods proposed in this paper. PMID:27144570
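
    A minimal sketch of the kind of KF used in baseband preprocessing: a two-state filter tracking (unwrapped) carrier phase and frequency. The state model, noise covariances, and direct phase measurement are illustrative assumptions rather than the paper's actual dual-filter design:

```python
import numpy as np

T = 1e-3                                  # update interval (s), assumed
F = np.array([[1.0, T], [0.0, 1.0]])      # state transition for [phase, frequency]
H = np.array([[1.0, 0.0]])                # we observe (unwrapped) phase only
Q = np.diag([1e-6, 1e-2])                 # process noise (assumed)
R = np.array([[1e-2]])                    # phase measurement noise (assumed)

rng = np.random.default_rng(2)
true_freq = 50.0                          # rad/s
n = 500
z = true_freq * T * np.arange(n) + 0.1 * rng.normal(size=n)

x = np.zeros(2)                           # state estimate [phase, frequency]
P = np.diag([1.0, 1e4])                   # uninformative prior on frequency
for zk in z:
    x, P = F @ x, F @ P @ F.T + Q                 # predict
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)  # Kalman gain
    x = x + K @ (np.array([zk]) - H @ x)
    P = (np.eye(2) - K @ H) @ P                   # update

freq_est = float(x[1])                    # low-variance frequency estimate
```

    Feeding `x` back to generate the local carrier, instead of running a loop filter, is the state-feedback idea the abstract describes.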

  5. Prevalence and Frequency of Heterosexual Anal Intercourse Among Young People: A Systematic Review and Meta-analysis.

    PubMed

    Owen, Branwen N; Brock, Patrick M; Butler, Ailsa R; Pickles, Michael; Brisson, Marc; Baggaley, Rebecca F; Boily, Marie-Claude

    2015-07-01

    We aim to assess whether heterosexual anal intercourse (AI) is commonly practiced by young people and how frequently. We searched PubMed for articles published from 1975 to July 2014 reporting data on the proportion of young people (mean age <25) practicing heterosexual AI (AI prevalence) and on the number of AI acts (AI frequency). Stratified random-effects meta-analysis and meta-regression were used to produce summary estimates and assess the influence of participant and study characteristics on AI prevalence. Eighty-three and thirteen of the 136 included articles reported data on lifetime AI prevalence and monthly AI frequency, respectively. Estimates were heterogeneous. The overall summary estimate of lifetime AI prevalence was 22% (95% confidence interval 20-24) among sexually active young people, with no statistically significant differences by gender, continent, or age. Prevalence increased significantly with the confidentiality of the interview method and, among males and in Europe, by survey year. Prevalence did not significantly differ by recall period. An estimated 3-24% of all reported sex acts were AI. Reported heterosexual AI is common but variable among young people worldwide. To fully understand its impact on STI spread, more and better-quality data on the frequency of unprotected AI and on trends over time are required.

  6. Analysis of fast and slow responses in AC conductance curves for p-type SiC MOS capacitors

    NASA Astrophysics Data System (ADS)

    Karamoto, Yuki; Zhang, Xufang; Okamoto, Dai; Sometani, Mitsuru; Hatakeyama, Tetsuo; Harada, Shinsuke; Iwamuro, Noriyuki; Yano, Hiroshi

    2018-06-01

    We used a conductance method to investigate the interface characteristics of a SiO2/p-type 4H-SiC MOS structure fabricated by dry oxidation. It was found that the measured equivalent parallel conductance–frequency (Gp/ω–f) curves were not symmetric, showing that there existed both high- and low-frequency signals. We attributed the high-frequency responses to fast interface states and the low-frequency responses to near-interface oxide traps. To analyze the fast interface states, Nicollian’s standard conductance method was applied in the high-frequency range. By extracting the high-frequency responses from the measured Gp/ω–f curves, the characteristics of the low-frequency responses were reproduced by Cooper’s model, which considers the effect of near-interface traps on the Gp/ω–f curves. The corresponding density distribution of slow traps as a function of energy level was estimated.
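
    For the fast-state analysis, the standard single-level trap expression Gp/ω = q·Dit·ωτ/(1+(ωτ)²) peaks at ωτ = 1 with value q·Dit/2, which is the basis of extracting Dit in the conductance method. A minimal numeric sketch (the Dit and τ values are illustrative assumptions):

```python
import numpy as np

q = 1.602e-19            # elementary charge (C)
D_it = 1e11              # interface state density (cm^-2 eV^-1), assumed
tau = 1e-5               # trap response time constant (s), assumed

omega = np.logspace(3, 7, 400)                       # angular frequency (rad/s)
Gp_over_w = q * D_it * omega * tau / (1 + (omega * tau) ** 2)

# the curve peaks at omega*tau = 1 with value q*D_it/2, so:
D_it_est = 2.0 * Gp_over_w.max() / q
```

    Sweeping gate bias moves the trap energy being probed, which is how the density distribution versus energy level is mapped out.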

  7. Change Detection of Remote Sensing Images by Dt-Cwt and Mrf

    NASA Astrophysics Data System (ADS)

    Ouyang, S.; Fan, K.; Wang, H.; Wang, Z.

    2017-05-01

    To address the significant loss of high-frequency information during noise reduction and the assumption of pixel independence in change detection of multi-scale remote sensing images, an unsupervised algorithm is proposed based on the combination of the Dual-tree Complex Wavelet Transform (DT-CWT) and the Markov Random Field (MRF) model. This method first performs multi-scale decomposition of the difference image by the DT-CWT and extracts the change characteristics in high-frequency regions by using an MRF-based segmentation algorithm. Then our method estimates the final maximum a posteriori (MAP) solution according to the iterated conditional modes (ICM) segmentation algorithm based on fuzzy c-means (FCM), after reconstructing the high-frequency and low-frequency sub-bands of each layer respectively. Finally, the method fuses the above segmentation results of each layer by using the proposed fusion rule to obtain the mask of the final change detection result. The experimental results show that the proposed method achieves higher precision and predominant robustness.

  8. Dynamic Modeling from Flight Data with Unknown Time Skews

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    2016-01-01

    A method for estimating dynamic model parameters from flight data with unknown time skews is described and demonstrated. The method combines data reconstruction, nonlinear optimization, and equation-error parameter estimation in the frequency domain to accurately estimate both dynamic model parameters and the relative time skews in the data. Data from a nonlinear F-16 aircraft simulation with realistic noise, instrumentation errors, and arbitrary time skews were used to demonstrate the approach. The approach was further evaluated using flight data from a subscale jet transport aircraft, where the measured data were known to have relative time skews. Comparison of modeling results obtained from time-skewed and time-synchronized data showed that the method accurately estimates both dynamic model parameters and relative time skew parameters from flight data with unknown time skews.

  9. Stochastic estimation of human arm impedance under nonlinear friction in robot joints: a model study.

    PubMed

    Chang, Pyung Hun; Kang, Sang Hoon

    2010-05-30

    The basic assumption of stochastic human arm impedance estimation methods is that the human arm and robot behave linearly for small perturbations. In the present work, we have identified the degree of influence of nonlinear friction in robot joints on stochastic human arm impedance estimation. Internal model based impedance control (IMBIC) is then proposed as a means to make the estimation accurate by compensating for the nonlinear friction. From simulations with a nonlinear LuGre friction model, it is observed that the reliability and accuracy of the estimation are severely degraded by nonlinear friction: below 2 Hz, multiple and partial coherence functions are far less than unity; estimated magnitudes and phases deviate severely from those of a real human arm throughout the frequency range of interest; and the accuracy is not enhanced by an increase in the magnitude of the force perturbations. In contrast, the combined use of stochastic estimation and IMBIC provides accurate estimation results even with large friction: the multiple coherence functions are larger than 0.9 throughout the frequency range of interest and the estimated magnitudes and phases match well with those of a real human arm. Furthermore, the performance of the suggested method is independent of human arm and robot posture, and of human arm impedance. Therefore, IMBIC will be useful in measuring human arm impedance with a conventional robot, as well as in designing a spatial impedance measuring robot, which requires gearing.
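
    The coherence-function check described above can be sketched with scipy: a linear system driven by stochastic perturbations has coherence near unity, while a memoryless friction-like nonlinearity pulls it down. The one-pole filter and sign-type distortion are illustrative stand-ins for the arm/robot dynamics and LuGre friction:

```python
import numpy as np
from scipy.signal import coherence, lfilter

rng = np.random.default_rng(3)
fs = 100.0
f = rng.normal(size=100_000)              # stochastic force perturbation
x_lin = lfilter([0.2], [1.0, -0.8], f)    # linear "arm" response (assumed model)
x_nl = x_lin + 0.5 * np.sign(x_lin)       # friction-like memoryless distortion

_, C_lin = coherence(f, x_lin, fs=fs, nperseg=1024)
_, C_nl = coherence(f, x_nl, fs=fs, nperseg=1024)
# C_lin stays near unity; C_nl drops, flagging an unreliable impedance estimate
```

    Low coherence is exactly the diagnostic the abstract uses to show that uncompensated friction invalidates the linearity assumption.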

  10. [Application of the life table method to the estimation of late complications of normal tissues after radiotherapy].

    PubMed

    Morita, K; Uchiyama, Y; Tominaga, S

    1987-06-01

    In order to evaluate the treatment results of radiotherapy, it is important to estimate the degree of complications of the surrounding normal tissues as well as the frequency of tumor control. In this report, the cumulative incidence rate of late radiation injuries of the normal tissues was calculated using the modified actuarial method (Cutler-Ederer method) or the Kaplan-Meier method, which are usually applied to the calculation of survival rates. With this method of calculation, an accurate cumulative incidence rate over time can easily be obtained and applied to the statistical evaluation of late radiation injuries.
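
    A minimal sketch of the Kaplan-Meier calculation applied to late-injury data, with hypothetical follow-up times (1 = injury observed, 0 = censored):

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier estimate of complication-free probability.

    times:  time to late injury or to censoring (e.g., months)
    events: 1 = injury observed, 0 = censored (lost to follow-up, etc.)
    """
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    surv, out_t, out_s = 1.0, [], []
    for t in np.unique(times):
        d = int(events[times == t].sum())   # injuries observed at time t
        n = int((times >= t).sum())         # patients still at risk at time t
        if d > 0:
            surv *= 1.0 - d / n
            out_t.append(t)
            out_s.append(surv)
    return out_t, out_s

# hypothetical follow-up data
t, s = kaplan_meier([2, 3, 3, 5, 8, 8, 12], [1, 1, 0, 1, 0, 1, 0])
cum_incidence = 1.0 - s[-1]   # cumulative incidence of late injury
```

    The product-limit form handles censored patients correctly, which is why it (or the Cutler-Ederer variant) is preferred over a raw proportion.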

  11. Bayesian-based estimation of acoustic surface impedance: Finite difference frequency domain approach.

    PubMed

    Bockman, Alexander; Fackler, Cameron; Xiang, Ning

    2015-04-01

    Acoustic performance for an interior requires an accurate description of the boundary materials' surface acoustic impedance. Analytical methods may be applied to a small class of test geometries, but inverse numerical methods provide greater flexibility. The parameter estimation problem requires minimizing the mismatch between predicted and observed acoustic field pressure. The Bayesian-network sampling approach presented here mitigates other methods' susceptibility to noise inherent in the experiment, model, and numerics. A geometry-agnostic method is developed here and its parameter estimation performance is demonstrated for an air-backed micro-perforated panel in an impedance tube. Good agreement is found with predictions from the ISO standard two-microphone impedance-tube method and a theoretical model for the material. Data by-products exclusive to a Bayesian approach are analyzed to assess the sensitivity of the method to nuisance parameters.

  12. Some comments on Anderson and Pospahala's correction of bias in line transect sampling

    USGS Publications Warehouse

    Anderson, D.R.; Burnham, K.P.; Chain, B.R.

    1980-01-01

    ANDERSON and POSPAHALA (1970) investigated the estimation of wildlife population size using the belt or line transect sampling method and devised a correction for bias, thus leading to an estimator with interesting characteristics. This work was given a uniform mathematical framework in BURNHAM and ANDERSON (1976). In this paper we show that the ANDERSON-POSPAHALA estimator is optimal in the sense of being the (unique) best linear unbiased estimator within the class of estimators which are linear combinations of cell frequencies, provided certain assumptions are met.

  13. Estimation of the whole-body averaged SAR of grounded human models for plane wave exposure at respective resonance frequencies.

    PubMed

    Hirata, Akimasa; Yanase, Kazuya; Laakso, Ilkka; Chan, Kwok Hung; Fujiwara, Osamu; Nagaoka, Tomoaki; Watanabe, Soichi; Conil, Emmanuelle; Wiart, Joe

    2012-12-21

    According to the international guidelines, the whole-body averaged specific absorption rate (WBA-SAR) is used as a metric of basic restriction for radio-frequency whole-body exposure. It is well known that the WBA-SAR largely depends on the frequency of the incident wave for a given incident power density. The frequency at which the WBA-SAR becomes maximal is called the 'resonance frequency'. Our previous study proposed a scheme for estimating the WBA-SAR at this resonance frequency based on an analogy between the power absorption characteristic of human models in free space and that of a dipole antenna. However, a scheme for estimating the WBA-SAR in a grounded human has not been discussed sufficiently, even though the WBA-SAR in a grounded human is larger than that in an ungrounded human. In this study, with the use of the finite-difference time-domain method, the grounded condition is confirmed to be the worst-case exposure for human body models in a standing posture. Then, WBA-SARs in grounded human models are calculated at their respective resonance frequencies. A formula for estimating the WBA-SAR of a human standing on the ground is proposed based on an analogy with a quarter-wavelength monopole antenna. First, homogenized human body models are shown to provide a conservative WBA-SAR as compared with anatomically based models. Based on the formula proposed here, the WBA-SARs in grounded human models are approximately 10% larger than those in free space. The variability of the WBA-SAR was shown to be ±30% even for humans of the same age, which is caused by differences in body shape.

  14. Mixture distributions of wind speed in the UAE

    NASA Astrophysics Data System (ADS)

    Shin, J.; Ouarda, T.; Lee, T. S.

    2013-12-01

    Wind speed probability distributions are commonly used to estimate potential wind energy. The 2-parameter Weibull distribution has been most widely used to characterize the distribution of wind speed. However, it is unable to properly model wind speed regimes whose distributions present bimodal and kurtotic shapes. Several studies have concluded that the Weibull distribution should not be used for frequency analysis of wind speed without first investigating the wind speed distribution. Due to these mixture distributional characteristics of wind speed data, the application of mixture distributions should be further investigated in the frequency analysis of wind speed. A number of studies have investigated the potential wind energy in different parts of the Arabian Peninsula, and mixture distributional characteristics of wind speed were detected in some of them. Nevertheless, mixture distributions have not been employed for wind speed modeling in the Arabian Peninsula. In order to improve our understanding of wind energy potential in the Arabian Peninsula, mixture distributions should be tested for the frequency analysis of wind speed. The aim of the current study is to assess the suitability of mixture distributions for the frequency analysis of wind speed in the UAE. Hourly mean wind speed data at 10-m height from 7 stations were used in the current study. The Weibull and Kappa distributions were employed as representatives of the conventional non-mixture distributions. Ten mixture distributions were constructed by mixing four probability distributions: the Normal, Gamma, Weibull, and Extreme Value type-one (EV-1) distributions. Three parameter estimation methods, the Expectation-Maximization algorithm, the Least Squares method, and the Meta-Heuristic Maximum Likelihood (MHML) method, were employed to estimate the parameters of the mixture distributions. In order to compare the goodness-of-fit of the tested distributions and parameter estimation methods for the sample wind data, the adjusted coefficient of determination, the Bayesian Information Criterion (BIC), and Chi-squared statistics were computed. Results indicate that MHML gives the best parameter estimation performance for the mixture distributions. At most of the 7 stations, mixture distributions give the best fit. When a wind speed regime shows mixture distributional characteristics, it usually also presents kurtotic statistical characteristics, and applications of mixture distributions at these stations show a significant improvement in explaining the whole wind speed regime. In addition, the Weibull-Weibull mixture distribution presents the best fit for the wind speed data in the UAE.
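
    The model-comparison step can be sketched as follows: on a bimodal sample, a two-component Weibull mixture attains a lower BIC than a single 2-parameter Weibull. The sample and the mixture parameters are illustrative assumptions; the mixture is evaluated at its generating parameters, standing in for an EM/MHML fit:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
# synthetic bimodal "wind speed" sample: two Weibull regimes (assumed parameters)
v = np.concatenate([
    stats.weibull_min.rvs(2.0, scale=3.0, size=3000, random_state=rng),
    stats.weibull_min.rvs(3.5, scale=9.0, size=3000, random_state=rng),
])

def bic(loglik, n_params, n_obs):
    return n_params * np.log(n_obs) - 2.0 * loglik

# conventional single 2-parameter Weibull fit (location fixed at zero)
c, loc, scale = stats.weibull_min.fit(v, floc=0)
bic_single = bic(stats.weibull_min.logpdf(v, c, loc, scale).sum(), 2, v.size)

# two-component Weibull mixture (5 free parameters: 2 x (shape, scale) + weight)
pdf_mix = 0.5 * stats.weibull_min.pdf(v, 2.0, scale=3.0) \
        + 0.5 * stats.weibull_min.pdf(v, 3.5, scale=9.0)
bic_mix = bic(np.log(pdf_mix).sum(), 5, v.size)
```

    BIC penalizes the extra mixture parameters, so the mixture wins only when the data are genuinely multimodal, which is the selection logic applied to the UAE stations.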

  15. Yield and depth Estimation of Selected NTS Nuclear and SPE Chemical Explosions Using Source Equalization by modeling Local and Regional Seismograms (Invited)

    NASA Astrophysics Data System (ADS)

    Saikia, C. K.; Roman-nieves, J. I.; Woods, M. T.

    2013-12-01

    Source parameters of nuclear and chemical explosions are often estimated by matching either the corner frequency and spectral level of a single event, or the spectral ratio when spectra from two events are available with known source parameters for one. In this study, we propose an alternative method in which waveforms from two or more events are simultaneously equalized by setting the differential of the processed seismograms at one station from any two individual events to zero. The method involves convolving the equivalent Mueller-Murphy displacement source time function (MMDSTF) of one event with the seismogram of the second event and vice versa, and then computing their difference seismogram. The MMDSTF is computed at the elastic radius, including both near-field and far-field terms. For this method to yield accurate source parameters, an inherent assumption is that the Green's functions for any paired events from the source to a receiver are the same. Within the frequency limit of the seismic data this is a reasonable assumption, as concluded from a comparison of Green's functions computed for flat-earth models at various source depths ranging from 100 m to 1 km. Frequency-domain analysis of the initial P wave is, however, sensitive to the depth-phase interaction, and if tracked meticulously it can help estimate the event depth. We applied this method to the local waveforms recorded from the three SPE shots and precisely determined their yields. These high-frequency seismograms exhibit significant lateral path effects in spectrogram analysis and 3D numerical computations, but the source equalization technique is independent of any such variation as long as the instrument characteristics are well preserved. We are currently estimating the uncertainty in the derived source parameters assuming the yields of the SPE shots are unknown. We also collected regional waveforms from 95 NTS explosions at regional stations ALQ, ANMO, CMB, COR, JAS, LON, PAS, PFO, and RSSD. We are currently employing a station-based analysis using the equalization technique to estimate depths and yields of many events relative to those of the announced explosions, and to develop their relationship with the Mw and Mo for the NTS explosions.

  16. Frequency-domain elastic full waveform inversion using encoded simultaneous sources

    NASA Astrophysics Data System (ADS)

    Jeong, W.; Son, W.; Pyun, S.; Min, D.

    2011-12-01

    Numerous studies have endeavored to develop robust full waveform inversion and migration algorithms. These processes require enormous computational costs because of the number of sources in the survey. To avoid this problem, Romero (2000) proposed the phase encoding technique for prestack migration and Krebs et al. (2009) proposed the encoded simultaneous-source inversion technique in the time domain. Ben-Hadj-Ali et al. (2011) demonstrated the robustness of frequency-domain full waveform inversion with simultaneous sources for noisy data by changing the source assembling. Although several studies on simultaneous-source inversion tried to estimate P-wave velocity based on the acoustic wave equation, seismic migration and waveform inversion based on the elastic wave equations are required to obtain more reliable subsurface information. In this study, we propose a 2-D frequency-domain elastic full waveform inversion technique using phase encoding methods. In our algorithm, the random phase encoding method is employed to calculate the gradients of the elastic parameters, the source signature estimate, and the diagonal entries of the approximate Hessian matrix. As with the gradients, the crosstalk in the estimated source signature and in the diagonal entries of the approximate Hessian matrix is suppressed with iteration. Our 2-D frequency-domain elastic waveform inversion algorithm is built on the back-propagation technique and the conjugate-gradient method, and the source signature is estimated using the full Newton method. We compare the simultaneous-source inversion with conventional waveform inversion for synthetic data sets of the Marmousi-2 model. The inverted results obtained with simultaneous sources are comparable to those obtained with individual sources, and the source signature is successfully estimated in the simultaneous-source technique. Comparing the inverted results obtained using the pseudo-Hessian matrix with previous results provided by the approximate Hessian matrix, the latter are better than the former for the deeper parts of the model. This work was financially supported by the Brain Korea 21 project of Energy System Engineering, by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (2010-0006155), and by the Energy Efficiency & Resources program of the Korea Institute of Energy Technology Evaluation and Planning (KETEP) grant funded by the Korea government Ministry of Knowledge Economy (No. 2010T100200133).

  17. The Effects of Population Size Histories on Estimates of Selection Coefficients from Time-Series Genetic Data

    PubMed Central

    Jewett, Ethan M.; Steinrücken, Matthias; Song, Yun S.

    2016-01-01

    Many approaches have been developed for inferring selection coefficients from time series data while accounting for genetic drift. These approaches have been motivated by the intuition that properly accounting for the population size history can significantly improve estimates of selective strengths. However, the improvement in inference accuracy that can be attained by modeling drift has not been characterized. Here, by comparing maximum likelihood estimates of selection coefficients that account for the true population size history with estimates that ignore drift by assuming allele frequencies evolve deterministically in a population of infinite size, we address the following questions: how much can modeling the population size history improve estimates of selection coefficients? How much can mis-inferred population sizes hurt inferences of selection coefficients? We conduct our analysis under the discrete Wright–Fisher model by deriving the exact probability of an allele frequency trajectory in a population of time-varying size and we replicate our results under the diffusion model. For both models, we find that ignoring drift leads to estimates of selection coefficients that are nearly as accurate as estimates that account for the true population history, even when population sizes are small and drift is high. This result is of interest because inference methods that ignore drift are widely used in evolutionary studies and can be many orders of magnitude faster than methods that account for population sizes. PMID:27550904
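
    The deterministic (infinite-population) benchmark used above has a simple recursion; a sketch under a genic-selection model, where the model choice and the values of s and p0 are illustrative assumptions:

```python
import numpy as np

def deterministic_trajectory(p0, s, n_gen):
    """Allele frequency under genic selection, ignoring drift (infinite N).

    Uses p' = p(1 + s) / (1 + s*p), i.e., the odds p/(1-p) grow by a factor
    (1 + s) each generation. The model choice is an assumption for illustration.
    """
    p = np.empty(n_gen + 1)
    p[0] = p0
    for t in range(n_gen):
        p[t + 1] = p[t] * (1.0 + s) / (1.0 + s * p[t])
    return p

traj = deterministic_trajectory(p0=0.1, s=0.05, n_gen=100)
```

    Fitting s by matching such a deterministic trajectory to observed frequencies is the fast, drift-free inference whose accuracy the study evaluates against full Wright-Fisher likelihoods.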

  18. Applying spectral data analysis techniques to aquifer monitoring data in Belvoir Ranch, Wyoming

    NASA Astrophysics Data System (ADS)

    Gao, F.; He, S.; Zhang, Y.

    2017-12-01

    This study uses spectral data analysis techniques to estimate hydraulic parameters from water level fluctuations due to tidal and barometric effects. All water level data used in this study were collected in Belvoir Ranch, Wyoming. The tide effect can be observed not only in coastal areas but also in inland confined aquifers: the force caused by the changing positions of the sun and moon affects not only the ocean but also the solid earth. The tide effect applies an oscillatory pumping or injection sequence to the aquifer and can be observed from dense water level monitoring. The Belvoir Ranch data are collected once per hour and are thus dense enough to capture the tide effect. First, the de-trended data are transformed from the time domain to the frequency domain with the Fourier transform. The storage coefficient is then estimated using the Bredehoeft-Jacob model. Next, the gain function, which expresses the amplification and attenuation of the output signal, is analyzed to derive the barometric efficiency, and the effective porosity is found from the storage coefficient and barometric efficiency with Jacob's model. Finally, aquifer transmissivity and hydraulic conductivity are estimated using Paul Hsieh's method. The estimated hydraulic parameters are compared with those from traditional pumping-test estimation. This study shows that hydraulic parameters can be estimated by analyzing water level data in the frequency domain alone. The approach has the advantages of low cost and environmental friendliness, and should therefore be considered for future hydraulic parameter estimation.
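
    The first step, detrending and moving to the frequency domain to isolate a tidal constituent, can be sketched as follows; the synthetic hourly record with an M2-only tidal response is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(5)
hours = np.arange(24 * 365)          # one year of hourly sampling
m2_period = 12.42                    # hours, principal lunar semidiurnal (M2) tide
level = (0.001 * hours                                   # slow trend
         + 0.05 * np.sin(2 * np.pi * hours / m2_period)  # tidal response
         + 0.01 * rng.normal(size=hours.size))           # noise

# remove the linear trend, then transform to the frequency domain
trend = np.polyval(np.polyfit(hours, level, 1), hours)
spec = np.abs(np.fft.rfft(level - trend))
freqs = np.fft.rfftfreq(hours.size, d=1.0)           # cycles per hour

peak = np.argmax(spec[1:]) + 1                       # skip the zero-frequency bin
peak_period = 1.0 / freqs[peak]                      # recovered tidal period (h)
```

    The amplitude and phase of the well response at this peak, relative to the known tidal forcing, are the inputs to the Bredehoeft-Jacob storage-coefficient estimate.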

  19. Assessment of aircraft crash frequency for the Hanford site 200 Area tank farms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    OBERG, B.D.

    2003-03-22

    Two factors, the near-airport crash frequency and the non-airport crash frequency, enter into the estimate of the annual aircraft crash frequency at a facility. The near-airport activities, i.e., takeoffs and landings from any of the airports in a 23-statute-mile (smi) (20-nautical-mile [nmi]) radius of the facilities, do not significantly contribute to the annual aircraft crash frequency for the 200 Area tank farms. However, using the methods of DOE-STD-3014-96, the total frequency of an aircraft crash for the 200 Area tank farms, all from non-airport operations, is calculated to be 7.10E-6/yr. Thus, DOE-STD-3014-96 requires a consequence analysis for aircraft crash. This total frequency consists of contributions from general aviation, helicopter activities, commercial air carriers and air taxis, and large and small military aircraft. The major contribution to this total is from general aviation, with a frequency of 6.77E-6/yr. All other types of aircraft have crash frequencies below 1E-6/yr. The two individual aboveground facilities are in the realm of 1E-7/yr crash frequencies: the 204-AR Waste Unloading Facility at 1.56E-7/yr, and the 242-T Evaporator at 8.62E-8/yr. DOE-STD-3009-94, ''Preparation Guide for U.S. Department of Energy Nonreactor Nuclear Facility Documented Safety Analyses'', states that external events such as aircraft crashes are referred to as design basis accidents (DBA) and analyzed as such ''if frequency of occurrence is estimated to exceed 10^-6/yr conservatively calculated''. DOE-STD-3014-96 considers its method for estimating aircraft crash frequency to be conservative. Therefore, DOE-STD-3009-94 requires a DBA analysis of an aircraft crash into the 200 Area tank farms. DOE-STD-3009-94 also states that beyond-DBAs are not evaluated for external events; thus, it requires only a DBA analysis of the effects of an aircraft crash into the 200 Area tank farms. There are two attributes of an aircraft crash into a Hanford waste storage tank which produce radiological and toxicological effects: the physical crash and tank-dome collapse, and the ensuing fire from the broken-up fuel.

  20. Multisensor signal denoising based on matching synchrosqueezing wavelet transform for mechanical fault condition assessment

    NASA Astrophysics Data System (ADS)

    Yi, Cancan; Lv, Yong; Xiao, Han; Huang, Tao; You, Guanghui

    2018-04-01

    Since it is difficult to obtain the accurate running status of mechanical equipment with only one sensor, multisensor measurement technology has attracted extensive attention. In the field of mechanical fault diagnosis and condition assessment based on vibration signal analysis, multisensor signal denoising has emerged as an important tool to improve the reliability of the measurement result. A reassignment technique termed the synchrosqueezing wavelet transform (SWT) has obvious superiority in slow time-varying signal representation and denoising for fault diagnosis applications. The SWT uses the time-frequency reassignment scheme, which can provide signal properties in 2D domains (time and frequency). However, when the measured signal contains strong noise components and fast varying instantaneous frequency, the performance of SWT-based analysis still depends on the accuracy of instantaneous frequency estimation. In this paper, a matching synchrosqueezing wavelet transform (MSWT) is investigated as a potential candidate to replace the conventional synchrosqueezing transform for the applications of denoising and fault feature extraction. The improved technology utilizes the comprehensive instantaneous frequency estimation by chirp rate estimation to achieve a highly concentrated time-frequency representation so that the signal resolution can be significantly improved. To exploit inter-channel dependencies, the multisensor denoising strategy is performed by using a modulated multivariate oscillation model to partition the time-frequency domain; then, the common characteristics of the multivariate data can be effectively identified. Furthermore, a modified universal threshold is utilized to remove noise components, while the signal components of interest can be retained. Thus, a novel MSWT-based multisensor signal denoising algorithm is proposed in this paper. 
The validity of this method is verified by numerical simulation and by experiments on a rolling bearing system and a gear system. The results show that the proposed multisensor matching synchrosqueezing wavelet transform (MMSWT) is superior to existing methods.
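
    The full MSWT is involved, but the instantaneous-frequency estimation at its core can be illustrated with a Hilbert-transform sketch on a slowly swept tone; the signal and sampling rate are illustrative assumptions, and SWT/MSWT sharpen exactly this kind of estimate in the time-frequency plane:

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000.0
t = np.arange(0, 2.0, 1 / fs)
# slow linear sweep from 50 to 60 Hz: instantaneous frequency is 50 + 5*t
x = np.sin(2 * np.pi * (50.0 * t + 2.5 * t ** 2))

analytic = hilbert(x)                          # analytic signal
phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(phase) * fs / (2 * np.pi)  # instantaneous frequency (Hz)
```

    Synchrosqueezing reassigns wavelet coefficients along exactly such an instantaneous-frequency estimate, which is why its accuracy governs the denoising performance discussed above.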
