A straightforward frequency-estimation technique for GPS carrier-phase time transfer.
Hackman, Christine; Levine, Judah; Parker, Thomas E; Piester, Dirk; Becker, Jürgen
2006-09-01
Although Global Positioning System (GPS) carrier-phase time transfer (GPSCPTT) offers frequency stability approaching 10^-15 at averaging times of 1 d, a discontinuity occurs in the time-transfer estimates between the end of one processing batch (1-3 d in length) and the beginning of the next. The average frequency over a multiday analysis period often has been computed by first estimating and removing these discontinuities, i.e., through concatenation. We present a new frequency-estimation technique in which frequencies are computed from the individual batches and then averaged to obtain the mean frequency for a multiday period. This allows the frequency to be computed without the uncertainty associated with the removal of the discontinuities and requires fewer computational resources. The new technique was tested by comparing the fractional frequency-difference values it yields to those obtained using a GPSCPTT concatenation method and those obtained using two-way satellite time-and-frequency transfer (TWSTFT). The clocks studied were located in Braunschweig, Germany, and in Boulder, CO. The frequencies obtained from the GPSCPTT measurements using either method agreed with those obtained from TWSTFT at several parts in 10^16. The frequency values obtained from the GPSCPTT data by use of the new method agreed with those obtained using the concatenation technique at 1-4 × 10^-16.
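The batch-averaging idea lends itself to a compact implementation. The sketch below is illustrative only (the function name, the least-squares slope fit, and the duration weighting are assumptions, not the authors' algorithm): the fractional frequency within each batch is taken as the slope of the clock time-offset series, and the per-batch slopes are then averaged.

```python
# Hypothetical sketch: average per-batch frequencies instead of
# concatenating batches across processing-batch boundaries.
import numpy as np

def mean_frequency_from_batches(times, offsets, batch_edges):
    """times, offsets: clock time-offset series x(t);
    batch_edges: (start, stop) index pairs, one per processing batch."""
    freqs, weights = [], []
    for i0, i1 in batch_edges:
        t, x = times[i0:i1], offsets[i0:i1]
        freqs.append(np.polyfit(t, x, 1)[0])  # slope = fractional frequency
        weights.append(t[-1] - t[0])          # weight by batch duration
    return np.average(freqs, weights=weights)
```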
A phase match based frequency estimation method for sinusoidal signals
NASA Astrophysics Data System (ADS)
Shen, Yan-Lin; Tu, Ya-Qing; Chen, Lin-Jun; Shen, Ting-Ao
2015-04-01
Accurate frequency estimation significantly affects the ranging precision of linear frequency modulated continuous wave (LFMCW) radars. To improve the ranging precision of LFMCW radars, a phase match based frequency estimation method is proposed, which exploits the linear prediction property, autocorrelation, and cross-correlation of sinusoidal signals. An analysis of the computational complexity shows that the computational load of the proposed method is smaller than those of two-stage autocorrelation (TSA) and maximum likelihood. Simulations and field experiments are performed to validate the proposed method, and the results demonstrate that it achieves better frequency estimation precision than Pisarenko harmonic decomposition, modified covariance, and TSA, which contributes to effectively improving the precision of LFMCW radars.
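The paper's exact phase-match algorithm is not reproduced here; as a point of reference, a classic lag-correlation estimator that is likewise built on the autocorrelation of sinusoidal signals can be sketched as follows (function name and test values are illustrative):

```python
# Baseline single-tone estimator: frequency from the phase of the lag-1
# autocorrelation of the analytic signal (not the authors' method).
import numpy as np
from scipy.signal import hilbert

def lag_correlation_freq(x, fs):
    z = hilbert(x)                          # complex analytic signal
    r1 = np.sum(z[1:] * np.conj(z[:-1]))    # lag-1 autocorrelation
    return fs * np.angle(r1) / (2 * np.pi)

# Example: 50.3 Hz tone at fs = 1 kHz with additive noise
fs = 1000.0
t = np.arange(2000) / fs
x = np.sin(2 * np.pi * 50.3 * t) + 0.1 * np.random.randn(t.size)
print(lag_correlation_freq(x, fs))          # ~50.3
```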
Oberg, Kevin A.; Mades, Dean M.
1987-01-01
Four techniques for estimating generalized skew in Illinois were evaluated: (1) a generalized skew map of the US; (2) an isoline map; (3) a prediction equation; and (4) a regional-mean skew. Peak-flow records at 730 gaging stations having 10 or more annual peaks were selected for computing station skews. Station skew values ranged from -3.55 to 2.95, with a mean of -0.11. Frequency curves computed for 30 gaging stations in Illinois using the variations of the regional-mean skew technique are similar to frequency curves computed using a skew map developed by the US Water Resources Council (WRC). Estimates of the 50-, 100-, and 500-yr floods computed for 29 of these gaging stations using the regional-mean skew techniques are within the 50% confidence limits of frequency curves computed using the WRC skew map. Although the three variations of the regional-mean skew technique were slightly more accurate than the WRC map, there is no appreciable difference between flood estimates computed using the variations of the regional-mean technique and flood estimates computed using the WRC skew map.
Rapid estimation of frequency response functions by close-range photogrammetry
NASA Technical Reports Server (NTRS)
Tripp, J. S.
1985-01-01
The accuracy of a rapid method for estimating frequency response functions from stereoscopic dynamic data is analyzed. It is shown that reversing the order of the operations of coordinate transformation and Fourier transformation, which provides a significant increase in computational speed, introduces error. A portion of the error, proportional to the perturbation components normal to the camera focal planes, cannot be eliminated. The remaining error may be eliminated by proper scaling of the frequency data prior to coordinate transformation. Methods are developed for least squares estimation of the full 3 x 3 frequency response matrix of a three-dimensional structure.
NASA Astrophysics Data System (ADS)
Gorthi, Sai Siva; Rajshekhar, Gannavarpu; Rastogi, Pramod
2010-06-01
Recently, a high-order instantaneous moments (HIM)-operator-based method was proposed for accurate phase estimation in digital holographic interferometry. The method relies on piecewise polynomial approximation of the phase and subsequent evaluation of the polynomial coefficients from the HIM operator using single-tone frequency estimation. This work presents a comparative analysis of the performance of different single-tone frequency estimation techniques, such as Fourier transform followed by optimization, estimation of signal parameters via rotational invariance techniques (ESPRIT), multiple signal classification (MUSIC), and iterative frequency estimation by interpolation on Fourier coefficients (IFEIF), in HIM-operator-based methods for phase estimation. Simulation and experimental results demonstrate the potential of the IFEIF technique with respect to computational efficiency and estimation accuracy.
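As context for the comparison, one widely used form of interpolation on Fourier coefficients (here Jacobsen's three-bin interpolator, not necessarily the IFEIF variant the paper tests) refines the FFT peak bin with a fractional correction:

```python
# Single-tone frequency from an FFT peak plus three-bin interpolation.
import numpy as np

def fft_peak_interpolated(x, fs):
    X = np.fft.rfft(x)
    k = np.argmax(np.abs(X[1:-1])) + 1            # peak bin (avoid edges)
    num = X[k - 1] - X[k + 1]
    den = 2 * X[k] - X[k - 1] - X[k + 1]
    delta = (num / den).real                      # fractional-bin correction
    return (k + delta) * fs / len(x)
```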
An empirical analysis of the distribution of overshoots in a stationary Gaussian stochastic process
NASA Technical Reports Server (NTRS)
Carter, M. C.; Madison, M. W.
1973-01-01
The frequency distribution of overshoots in a stationary Gaussian stochastic process is analyzed. The primary tools used in this analysis are computer simulation and statistical estimation. Computer simulation is used to generate stationary Gaussian stochastic processes with selected autocorrelation functions. An analysis of the simulation results reveals a frequency distribution for overshoots with a functional dependence on the mean and variance of the process. Statistical estimation is then used to estimate the mean and variance of a process. It is shown that, given an autocorrelation function and the mean and variance of the process, a frequency distribution for the number of overshoots can be estimated.
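A hedged numerical illustration of the simulate-then-estimate approach (the kernel, level, and parameters are assumptions): simulate a smooth stationary Gaussian process and compare the observed rate of upcrossings of a level u with the classical Rice rate, nu(u) = (1/2pi) sqrt(-R''(0)/R(0)) exp(-u^2/(2 sigma^2)).

```python
# For white noise convolved with a Gaussian kernel of width s, the unit-
# variance autocorrelation is R(tau) = exp(-tau^2/(4 s^2)), so
# nu(u) = exp(-u^2/2) / (2*pi*s*sqrt(2)).
import numpy as np
from scipy.ndimage import gaussian_filter1d

dt, s, n = 0.01, 0.2, 2_000_000          # step (s), kernel width (s), samples
x = gaussian_filter1d(np.random.randn(n), s / dt)
x /= x.std()                             # unit variance => R(0) = 1
u = 1.0
empirical = np.sum((x[:-1] < u) & (x[1:] >= u)) / (n * dt)
theory = np.exp(-u**2 / 2) / (2 * np.pi * s * np.sqrt(2))
print(empirical, theory)                 # rates should roughly agree
```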
Interpretation of the instantaneous frequency of phonocardiogram signals
NASA Astrophysics Data System (ADS)
Rey, Alexis B.
2005-06-01
Short-time Fourier transforms, the Wigner-Ville distribution (WVD), and wavelet transforms have commonly been used when dealing with non-stationary signals, and they are known as time-frequency distributions (TFDs). It is also common to investigate the behaviour of phonocardiogram (PCG) signals as a means of predicting some of the pathologies of the human heart. This paper therefore aims to analyze the relationship between the instantaneous frequency (IF) of a PCG signal and the aforementioned time-frequency distributions. Three algorithms using Matlab functions have been developed: the first estimates the IF using the normalized linear moment, the second estimates the IF using the periodic first moment, and the third computes the WVD; the STFT spectrogram is computed with a built-in Matlab function. Several simulations of the spectrogram for a set of PCG signals and the estimation of the IF are shown, and their relationship is validated through correlation. The second algorithm proves the better choice because its estimate is unbiased, whereas the WVD is very computationally demanding and offers no benefit: the IF estimated from this TFD is equivalent to that obtained from the derivative of the phase of the analytic signal, which is less computationally demanding.
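The comparison baseline the abstract mentions, IF as the derivative of the phase of the analytic signal, is compact enough to sketch (Python/SciPy here rather than the paper's Matlab):

```python
# Minimal IF estimate via the analytic signal.
import numpy as np
from scipy.signal import hilbert

def instantaneous_frequency(x, fs):
    phase = np.unwrap(np.angle(hilbert(x)))     # phase of analytic signal
    return np.diff(phase) * fs / (2 * np.pi)    # Hz, one sample shorter than x
```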
A Computationally Efficient Method for Polyphonic Pitch Estimation
NASA Astrophysics Data System (ADS)
Zhou, Ruohua; Reiss, Joshua D.; Mattavelli, Marco; Zoia, Giorgio
2009-12-01
This paper presents a computationally efficient method for polyphonic pitch estimation. The method employs the Fast Resonator Time-Frequency Image (RTFI) as the basic time-frequency analysis tool. The approach is composed of two main stages. First, a preliminary pitch estimate is obtained by means of a simple peak-picking procedure in the pitch energy spectrum. This spectrum is calculated from the original RTFI energy spectrum according to harmonic grouping principles. Incorrect estimates are then removed according to spectral irregularity and knowledge of the harmonic structures of notes played on commonly used musical instruments. The new approach is compared with a variety of other frame-based polyphonic pitch estimation methods, and results demonstrate the high performance and computational efficiency of the approach.
Pasma, Jantsje H.; Assländer, Lorenz; van Kordelaar, Joost; de Kam, Digna; Mergner, Thomas; Schouten, Alfred C.
2018-01-01
The Independent Channel (IC) model is a commonly used linear balance control model for analyzing human balance control in the frequency domain using system identification and parameter estimation. The IC model is a rudimentary and noise-free description of balance behavior in the frequency domain, where a stable model representation is not guaranteed. In this study, we conducted, first, time-domain simulations with added noise and, second, robot experiments implementing the IC model in a real-world robot (PostuRob II) to test the validity and stability of the model in the time domain and in real-world situations. Balance behavior of seven healthy participants was measured during upright stance by applying pseudorandom continuous support surface rotations. System identification and parameter estimation were used to describe the balance behavior with the IC model in the frequency domain. The IC model with the estimated parameters from human experiments was implemented in Simulink for computer simulations including noise in the time domain and for robot experiments using the humanoid robot PostuRob II. Again, system identification and parameter estimation were used to describe the simulated balance behavior. Time series, frequency response functions, and estimated parameters from human experiments, computer simulations, and robot experiments were compared with each other. The computer simulations showed balance behavior and estimated control parameters similar to the human experiments, in the time and frequency domains. The IC model was also able to control the humanoid robot, keeping it upright, but showed small differences from the human experiments in the time and frequency domains, especially at high frequencies. We conclude that the IC model, a descriptive model in the frequency domain, can imitate human balance behavior in the time domain as well, both in computer simulations with added noise and in real-world situations with a humanoid robot. This provides further evidence that the IC model is a valid description of human balance control. PMID:29615886
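The system-identification step can be illustrated generically (this is a textbook H1-style FRF estimate, an assumption rather than the authors' code; names are illustrative): with the support-surface rotation d(t) as input and body sway y(t) as output,

```python
# Frequency Response Function estimate H(f) = S_dy(f) / S_dd(f).
import numpy as np
from scipy.signal import csd, welch

def estimate_frf(d, y, fs, nperseg=4096):
    f, S_dy = csd(d, y, fs=fs, nperseg=nperseg)  # cross-spectral density
    _, S_dd = welch(d, fs=fs, nperseg=nperseg)   # input auto-spectrum
    return f, S_dy / S_dd                        # complex FRF
```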
Speech Enhancement, Gain, and Noise Spectrum Adaptation Using Approximate Bayesian Estimation
Hao, Jiucang; Attias, Hagai; Nagarajan, Srikantan; Lee, Te-Won; Sejnowski, Terrence J.
2010-01-01
This paper presents a new approximate Bayesian estimator for enhancing a noisy speech signal. The speech model is assumed to be a Gaussian mixture model (GMM) in the log-spectral domain, in contrast to most current models, which work in the frequency domain. Exact signal estimation is a computationally intractable problem. We derive three approximations to enhance the efficiency of signal estimation. The Gaussian approximation transforms the log-spectral domain GMM into the frequency domain using a minimum Kullback-Leibler (KL) divergence criterion. The frequency-domain Laplace method computes the maximum a posteriori (MAP) estimator for the spectral amplitude. Correspondingly, the log-spectral domain Laplace method computes the MAP estimator for the log-spectral amplitude. Further, gain and noise spectrum adaptation are implemented using the expectation-maximization (EM) algorithm within the GMM under the Gaussian approximation. The proposed algorithms are evaluated by applying them to enhance speech corrupted by speech-shaped noise (SSN). The experimental results demonstrate that the proposed algorithms offer improved signal-to-noise ratio, lower word recognition error rate, and less spectral distortion. PMID:20428253
NASA Technical Reports Server (NTRS)
Klein, V.
1980-01-01
A frequency domain maximum likelihood method is developed for the estimation of airplane stability and control parameters from measured data. The model of an airplane is represented by a discrete-type steady state Kalman filter with time variables replaced by their Fourier series expansions. The likelihood function of innovations is formulated, and by its maximization with respect to unknown parameters the estimation algorithm is obtained. This algorithm is then simplified to the output error estimation method with the data in the form of transformed time histories, frequency response curves, or spectral and cross-spectral densities. The development is followed by a discussion on the equivalence of the cost function in the time and frequency domains, and on advantages and disadvantages of the frequency domain approach. The algorithm developed is applied in four examples to the estimation of longitudinal parameters of a general aviation airplane using computer generated and measured data in turbulent and still air. The cost functions in the time and frequency domains are shown to be equivalent; therefore, both approaches are complementary and not contradictory. Despite some computational advantages of parameter estimation in the frequency domain, this approach is limited to linear equations of motion with constant coefficients.
Olson, Scott A.; Tasker, Gary D.; Johnston, Craig M.
2003-01-01
Estimates of the magnitude and frequency of streamflow are needed to safely and economically design bridges, culverts, and other structures in or near streams. These estimates also are used for managing floodplains, identifying flood-hazard areas, and establishing flood-insurance rates, but may be required at ungaged sites where no observed flood data are available for streamflow-frequency analysis. This report describes equations for estimating flow-frequency characteristics at ungaged, unregulated streams in Vermont. In the past, regression equations developed to estimate streamflow statistics required users to spend hours manually measuring basin characteristics for the stream site of interest. This report also describes the accompanying customized geographic information system (GIS) tool that automates the measurement of basin characteristics and calculation of corresponding flow statistics. The tool includes software that computes the accuracy of the results and adjustments for expected probability and for streamflow data of a nearby stream-gaging station that is either upstream or downstream and within 50 percent of the drainage area of the site where the flow-frequency characteristics are being estimated. The custom GIS can be linked to the National Flood Frequency program, adding the ability to plot peak-flow-frequency curves and synthetic hydrographs and to compute adjustments for urbanization.
A fast and accurate frequency estimation algorithm for sinusoidal signal with harmonic components
NASA Astrophysics Data System (ADS)
Hu, Jinghua; Pan, Mengchun; Zeng, Zhidun; Hu, Jiafei; Chen, Dixiang; Tian, Wugang; Zhao, Jianqiang; Du, Qingfa
2016-10-01
Frequency estimation is a fundamental problem in many applications, such as traditional vibration measurement, power system supervision, and microelectromechanical system sensor control. In this paper, a fast and accurate frequency estimation algorithm is proposed to deal with the low-efficiency problem of traditional methods. The proposed algorithm consists of coarse and fine frequency estimation steps, and we demonstrate that applying a modified zero-crossing technique is more efficient than conventional searching methods for the coarse frequency estimation (locating the peak of the FFT amplitude spectrum). Thus, the proposed estimation algorithm requires fewer hardware and software resources and achieves even higher efficiency as the volume of experimental data increases. Experimental results with a modulated magnetic signal show that the root mean square error of frequency estimation is below 0.032 Hz with the proposed algorithm, which has lower computational complexity and better global performance than conventional frequency estimation methods.
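One plausible reading of the two-stage scheme can be sketched as follows (illustrative only; the paper's "modified" zero-crossing technique is not reproduced): a cheap zero-crossing count gives a coarse frequency, which then restricts the FFT peak search to a narrow band.

```python
# Coarse-then-fine single-tone frequency estimation (assumed structure).
import numpy as np

def zero_crossing_freq(x, fs):
    """Coarse estimate from zero crossings of the zero-mean waveform."""
    x0 = x - np.mean(x)
    idx = np.nonzero(np.diff(np.signbit(x0)))[0]   # crossing positions
    periods = (len(idx) - 1) / 2.0                 # two crossings per cycle
    return periods * fs / (idx[-1] - idx[0])

def refine_with_fft(x, fs, f0, halfwidth=5.0):
    """Fine estimate: search the FFT peak only near the coarse guess f0."""
    X = np.abs(np.fft.rfft(x))
    f = np.fft.rfftfreq(len(x), 1.0 / fs)
    band = (f > f0 - halfwidth) & (f < f0 + halfwidth)
    return f[band][np.argmax(X[band])]
```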
Betowski, Don; Bevington, Charles; Allison, Thomas C
2016-01-19
Halogenated chemical substances are used in a broad array of applications, and new chemical substances are continually being developed and introduced into commerce. While recent research has considerably increased our understanding of the global warming potentials (GWPs) of multiple individual chemical substances, this research inevitably lags behind the development of new chemical substances. There are currently over 200 substances known to have high GWP. Schemes to estimate radiative efficiency (RE) based on computational chemistry are useful where no measured IR spectrum is available. This study assesses the reliability of RE values calculated using computational chemistry techniques for 235 chemical substances against the best available values. Computed vibrational frequency data are used to estimate RE values using several Pinnock-type models, and reasonable agreement with reported values is found. Significant improvement is obtained through scaling of both vibrational frequencies and intensities. The effect of varying the computational method and basis set used to calculate the frequency data is discussed. It is found that the vibrational intensities have a strong dependence on basis set and are largely responsible for differences in computed RE values.
Power system frequency estimation based on an orthogonal decomposition method
NASA Astrophysics Data System (ADS)
Lee, Chih-Hung; Tsai, Men-Shen
2018-06-01
In recent years, several frequency estimation techniques have been proposed by which to estimate the frequency variations in power systems. In order to properly identify power quality issues under asynchronously-sampled signals that are contaminated with noise, flicker, and harmonic and inter-harmonic components, a good frequency estimator that is able to estimate the frequency as well as the rate of frequency changes precisely is needed. However, accurately estimating the fundamental frequency becomes a very difficult task without a priori information about the sampling frequency. In this paper, a better frequency evaluation scheme for power systems is proposed. This method employs a reconstruction technique in combination with orthogonal filters, which may maintain the required frequency characteristics of the orthogonal filters and improve the overall efficiency of power system monitoring through two-stage sliding discrete Fourier transforms. The results showed that this method can accurately estimate the power system frequency under different conditions, including asynchronously sampled signals contaminated by noise, flicker, and harmonic and inter-harmonic components. The proposed approach also provides high computational efficiency.
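One standard building block of such schemes (the paper's two-stage design and orthogonal filters are not reproduced here) is the sliding DFT recurrence, which updates a single DFT bin as the analysis window advances by one sample:

```python
# Sliding DFT update for bin k of an N-point window.
import numpy as np

def sliding_dft_update(Xk, x_new, x_old, k, N):
    """Bin k of an N-point DFT, updated when x_old leaves the window
    and x_new enters it."""
    return (Xk + x_new - x_old) * np.exp(2j * np.pi * k / N)
```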
Fast focus estimation using frequency analysis in digital holography.
Oh, Seungtaik; Hwang, Chi-Young; Jeong, Il Kwon; Lee, Sung-Keun; Park, Jae-Hyeung
2014-11-17
A novel fast frequency-based method to estimate the focus distance of a digital hologram for a single object is proposed. The focus distance is computed by analyzing the distribution of intersections of smoothed rays. The smoothed rays are determined by the directions of energy flow, which are computed from the local spatial frequency spectrum based on the windowed Fourier transform. Our method thus uses only the intrinsic frequency information of the optical field on the hologram and therefore does not require any sequential numerical reconstructions or the focus detection techniques of conventional photography, both of which are essential parts of previous methods. To show the effectiveness of our method, numerical results and analysis are presented as well.
Accuracy and Precision of USNO GPS Carrier-Phase Time Transfer
2010-01-01
values. Comparison measures used include estimates obtained from two-way satellite time/frequency transfer (TWSTFT), and GPS-based estimates obtained... the IGS are used as a benchmark in the computation. Frequency values have a few times 10^-15 fractional frequency uncertainty. TWSTFT values confirm... obtained from two-way satellite time/frequency transfer (TWSTFT), BIPM Circular T, and the International GNSS Service (IGS). At present, it is known that
Slade, R.M.; Asquith, W.H.
1996-01-01
About 23,000 annual peak streamflows and about 400 historical peak streamflows exist for about 950 stations in the surface-water data-collection network of Texas. These data are presented on a computer diskette along with the corresponding dates, gage heights, and information concerning the basin, and nature or cause for the flood. Also on the computer diskette is a U.S. Geological Survey computer program that estimates peak-streamflow frequency based on annual and historical peak streamflow. The program estimates peak streamflow for 2-, 5-, 10-, 25-, 50-, and 100-year recurrence intervals and is based on guidelines established by the Interagency Advisory Committee on Water Data. Explanations are presented for installing the program, and an example is presented with discussion of its options.
Methods for estimating magnitude and frequency of peak flows for natural streams in Utah
Kenney, Terry A.; Wilkowske, Chris D.; Wright, Shane J.
2007-01-01
Estimates of the magnitude and frequency of peak streamflows are critical for the safe and cost-effective design of hydraulic structures and stream crossings, and for accurate delineation of flood plains. Engineers, planners, resource managers, and scientists need accurate estimates of peak-flow return frequencies for locations on streams with and without streamflow-gaging stations. The 2-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-year recurrence-interval flows were estimated for 344 unregulated U.S. Geological Survey streamflow-gaging stations in Utah and nearby in bordering states. These data, along with 23 basin and climatic characteristics computed for each station, were used to develop regional peak-flow frequency and magnitude regression equations for 7 geohydrologic regions of Utah. These regression equations can be used to estimate the magnitude and frequency of peak flows for natural streams in Utah within the presented range of predictor variables. Uncertainty, presented as the average standard error of prediction, was computed for each developed equation. Equations developed using data from more than 35 gaging stations had standard errors of prediction that ranged from 35 to 108 percent, and errors for equations developed using data from fewer than 35 gaging stations ranged from 50 to 357 percent.
Method of estimating flood-frequency parameters for streams in Idaho
Kjelstrom, L.C.; Moffatt, R.L.
1981-01-01
Skew coefficients for the log-Pearson type III distribution are generalized on the basis of some similarity of floods in the Snake River basin and other parts of Idaho. Generalized skew coefficients aid in shaping flood-frequency curves because skew coefficients computed from gaging stations having relatively short periods of peak-flow record can be unreliable. Generalized skew coefficients can be obtained for a gaging station from one of three maps in this report. The map to be used depends on whether (1) snowmelt floods are dominant (generally when more than 20 percent of the drainage area is above 6,000 feet altitude), (2) rainstorm floods are dominant (generally when the mean altitude is less than 3,000 feet), or (3) either snowmelt or rainstorm floods can be the annual maximum discharge. For the latter case, frequency curves constructed using separate arrays of each type of runoff can be combined into one curve, which, for some stations, is significantly different from the frequency curve constructed using only annual maximum discharges. For 269 gaging stations, flood-frequency curves that include the generalized skew coefficients in the computation of the log-Pearson type III equation tend to fit the data better than previous analyses. Frequency curves for ungaged sites can be derived by estimating three statistics of the log-Pearson type III distribution. The mean and standard deviation of logarithms of annual maximum discharges are estimated by regression equations that use basin characteristics as independent variables. Skew coefficient estimates are the generalized skews. The log-Pearson type III equation is then applied with the three estimated statistics to compute the discharge at selected exceedance probabilities. Standard errors at the 2-percent exceedance probability range from 41 to 90 percent.
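The log-Pearson type III quantile computation at the core of these flood-frequency reports can be sketched compactly (illustrative only; a full Bulletin 17-style analysis adds skew weighting, outlier tests, and historical-record adjustments):

```python
# Flow with annual exceedance probability `aep` from annual peaks; `g`
# lets a generalized (regional) skew replace the station skew.
import numpy as np
from scipy.stats import pearson3, skew

def lp3_flow(peaks, aep, g=None):
    logq = np.log10(np.asarray(peaks, dtype=float))
    if g is None:
        g = skew(logq, bias=False)                  # station skew
    return 10 ** pearson3.ppf(1 - aep, g, loc=logq.mean(),
                              scale=logq.std(ddof=1))

# e.g., the 1-percent AEP ("100-year") flood: lp3_flow(peaks, 0.01)
```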
NASA Astrophysics Data System (ADS)
Shima, Tomoyuki; Tomeba, Hiromichi; Adachi, Fumiyuki
Orthogonal multi-carrier direct sequence code division multiple access (orthogonal MC DS-CDMA) is a combination of time-domain spreading and orthogonal frequency division multiplexing (OFDM). In orthogonal MC DS-CDMA, a frequency diversity gain can be obtained by applying frequency-domain equalization (FDE) based on the minimum mean square error (MMSE) criterion to a block of OFDM symbols, improving the bit error rate (BER) performance in a severely frequency-selective fading channel. FDE requires an accurate estimate of the channel gain, which can be obtained by removing the pilot modulation in the frequency domain. In this paper, we propose a pilot-assisted channel estimation scheme suitable for orthogonal MC DS-CDMA with FDE and evaluate, by computer simulation, the BER performance in a frequency-selective Rayleigh fading channel.
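The pilot-based channel estimate and the MMSE FDE it feeds take a standard textbook form (a sketch under that assumption; the paper's MC DS-CDMA specifics are omitted):

```python
# Estimate the channel from a known pilot block, then apply MMSE FDE.
import numpy as np

def mmse_fde(rx_block, pilot_rx, pilot_tx, noise_var):
    """All arguments are length-N frequency-domain blocks (after FFT)."""
    H = pilot_rx / pilot_tx                         # remove pilot modulation
    W = np.conj(H) / (np.abs(H) ** 2 + noise_var)   # MMSE equalizer weights
    return W * rx_block
```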
Estimating population diversity with CatchAll
Bunge, John; Woodard, Linda; Böhning, Dankmar; Foster, James A.; Connolly, Sean; Allen, Heather K.
2012-01-01
Motivation: The massive data produced by next-generation sequencing require advanced statistical tools. We address estimating the total diversity or species richness in a population. To date, only relatively simple methods have been implemented in available software. There is a need for software employing modern, computationally intensive statistical analyses including error, goodness-of-fit and robustness assessments. Results: We present CatchAll, a fast, easy-to-use, platform-independent program that computes maximum likelihood estimates for finite-mixture models, weighted linear regression-based analyses and coverage-based non-parametric methods, along with outlier diagnostics. Given sample ‘frequency count’ data, CatchAll computes 12 different diversity estimates and applies a model-selection algorithm. CatchAll also derives discounted diversity estimates to adjust for possibly uncertain low-frequency counts. It is accompanied by an Excel-based graphics program. Availability: Free executable downloads for Linux, Windows and Mac OS, with manual and source code, at www.northeastern.edu/catchall. Contact: jab18@cornell.edu PMID:22333246
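CatchAll's parametric mixture machinery is more elaborate, but the same "frequency count" input also supports the classic nonparametric Chao1 lower bound, a useful baseline sketch:

```python
# Chao1 species-richness lower bound from per-species observation counts.
def chao1(counts):
    s_obs = sum(1 for c in counts if c > 0)
    f1 = sum(1 for c in counts if c == 1)     # singletons
    f2 = sum(1 for c in counts if c == 2)     # doubletons
    if f2 == 0:
        return s_obs + f1 * (f1 - 1) / 2.0    # bias-corrected form
    return s_obs + f1 * f1 / (2.0 * f2)
```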
Understanding auditory distance estimation by humpback whales: a computational approach.
Mercado, E; Green, S R; Schneider, J N
2008-02-01
Ranging, the ability to judge the distance to a sound source, depends on the presence of predictable patterns of attenuation. We measured long-range sound propagation in coastal waters to assess whether humpback whales might use frequency degradation cues to range singing whales. Two types of neural networks, a multi-layer and a single-layer perceptron, were trained to classify recorded sounds by distance traveled based on their frequency content. The multi-layer network successfully classified received sounds, demonstrating that the distorting effects of underwater propagation on frequency content provide sufficient cues to estimate source distance. Normalizing received sounds with respect to ambient noise levels increased the accuracy of distance estimates by single-layer perceptrons, indicating that familiarity with background noise can potentially improve a listening whale's ability to range. To assess whether frequency patterns predictive of source distance were likely to be perceived by whales, recordings were pre-processed using a computational model of the humpback whale's peripheral auditory system. Although signals processed with this model contained less information than the original recordings, neural networks trained with these physiologically based representations estimated source distance more accurately, suggesting that listening whales should be able to range singers using distance-dependent changes in frequency content.
Regularization of Instantaneous Frequency Attribute Computations
NASA Astrophysics Data System (ADS)
Yedlin, M. J.; Margrave, G. F.; Van Vorst, D. G.; Ben Horin, Y.
2014-12-01
We compare two different methods of computation of a temporally local frequency: (1) a stabilized instantaneous frequency using the theory of the analytic signal, and (2) a temporally variant centroid (or dominant) frequency estimated from a time-frequency decomposition. The first method derives from Taner et al. (1979), as modified by Fomel (2007), and utilizes the derivative of the instantaneous phase of the analytic signal. The second method computes the power centroid (Cohen, 1995) of the time-frequency spectrum, obtained using either the Gabor or Stockwell transform. Common to both methods is the necessity of division by a diagonal matrix, which requires appropriate regularization. We modify Fomel's (2007) method by explicitly penalizing the roughness of the estimate. Following Farquharson and Oldenburg (2004), we employ both the L-curve and GCV methods to obtain the smoothest model that fits the data in the L2 norm. Using synthetic data, quarry blasts, earthquakes, and the DPRK tests, our results suggest that the optimal method depends on the data. One of the main applications of this work is the discrimination between blast events and earthquakes.
References:
Fomel, Sergey. "Local seismic attributes." Geophysics 72.3 (2007): A29-A33.
Cohen, Leon. "Time Frequency Analysis: Theory and Applications." USA: Prentice Hall, 1995.
Farquharson, Colin G., and Douglas W. Oldenburg. "A comparison of automatic techniques for estimating the regularization parameter in non-linear inverse problems." Geophysical Journal International 156.3 (2004): 411-425.
Taner, M. Turhan, Fulton Koehler, and R. E. Sheriff. "Complex seismic trace analysis." Geophysics 44.6 (1979): 1041-1063.
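The roughness-penalized stabilization can be sketched as a Tikhonov-type smoothing step (a minimal sketch; the regularization weight lam would be picked by the L-curve or GCV as in the abstract):

```python
# Replace a raw, division-based instantaneous-frequency estimate f_raw by
# the solution of (I + lam * D2^T D2) f = f_raw, with D2 the
# second-difference (roughness) operator.
import numpy as np

def smooth_if(f_raw, lam):
    n = len(f_raw)
    D2 = np.diff(np.eye(n), n=2, axis=0)   # (n-2) x n second differences
    return np.linalg.solve(np.eye(n) + lam * (D2.T @ D2), f_raw)
```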
Lind, Greg D.; Stonewall, Adam J.
2018-02-13
In this study, “naturalized” daily streamflow records, created by the U.S. Army Corps of Engineers and the Bureau of Reclamation, were used to compute 1-, 3-, 7-, 10-, 15-, 30-, and 60-day annual maximum streamflow durations, which are running averages of daily streamflow for the number of days in each duration. Once the annual maximum durations were computed, the flood-duration frequencies could be estimated. The estimated flood-duration frequencies correspond to the 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent probabilities of occurring or being exceeded each year. For this report, the focus was on the Willamette River Basin in Oregon, which is a subbasin of the Columbia River Basin. This study is part of a larger one encompassing the entire Columbia Basin.
Speed-dependent collision effects on radar back-scattering from the ionosphere
NASA Technical Reports Server (NTRS)
Theimer, O.
1981-01-01
A computer code was developed to accurately compute the fluctuation spectrum for linearly speed-dependent collision frequencies. The effect of ignoring the speed dependence on estimates of ionospheric parameters was determined. It is shown that disagreements between the rocket and incoherent-scatter estimates could be partially resolved if the correct speed dependence of the ion-neutral (i-n) collision frequency is taken into account. This problem is also relevant to the study of ionospheric irregularities in the auroral E-region and their effects on radio communication with satellites.
NASA Astrophysics Data System (ADS)
Pfister, Olivier
2017-05-01
When it comes to practical quantum computing, the two main challenges are circumventing decoherence (devastating quantum errors due to interactions with the environmental bath) and achieving scalability (as many qubits as needed for a real-life, game-changing computation). We show that using, in lieu of qubits, the "qumodes" represented by the resonant fields of the quantum optical frequency comb of an optical parametric oscillator allows one to create bona fide, large scale quantum computing processors, pre-entangled in a cluster state. We detail our recent demonstration of 60-qumode entanglement (out of an estimated 3000) and present an extension to combining this frequency-tagged with time-tagged entanglement, in order to generate an arbitrarily large, universal quantum computing processor.
Radar modulation classification using time-frequency representation and nonlinear regression
NASA Astrophysics Data System (ADS)
De Luigi, Christophe; Arques, Pierre-Yves; Lopez, Jean-Marc; Moreau, Eric
1999-09-01
In a naval electronic environment, pulses emitted by radars are collected by ESM receivers. For most of them, the intrapulse signal is modulated by a particular law. To aid the classical identification process, classification and estimation of this modulation law is applied to the intrapulse signal measurements. To estimate the time-varying frequency of a signal corrupted by additive noise with good accuracy, one method has been chosen: the Wigner distribution is calculated, and the instantaneous frequency is then estimated from the peak location of the distribution. Bias and variance of the estimator are obtained by computer simulations. In an estimated sequence of frequencies, we assume the presence of both falsely and correctly estimated values, and the hypothesis of a Gaussian distribution is made on the errors. A robust nonlinear regression method, based on the Levenberg-Marquardt algorithm, is thus applied to these estimated frequencies using a maximum likelihood estimator. The performance of the method is tested using varied modulation laws and different signal-to-noise ratios.
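The regression stage might be sketched as follows (the linear-FM law and robust loss are assumptions; note that SciPy's Levenberg-Marquardt mode does not support robust losses, so its default trust-region solver stands in here):

```python
# Fit a linear FM law f(t) = a + b*t to per-sample IF estimates, with a
# robust loss to tolerate outlying Wigner-peak picks.
import numpy as np
from scipy.optimize import least_squares

def fit_lfm_law(t, f_est):
    residuals = lambda p: p[0] + p[1] * t - f_est
    sol = least_squares(residuals, x0=[np.mean(f_est), 0.0], loss="soft_l1")
    return sol.x  # (start frequency a, chirp rate b)
```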
NASA Technical Reports Server (NTRS)
Eren, K.
1980-01-01
The mathematical background in spectral analysis as applied to geodetic applications is summarized. The resolution (cut-off frequency) of the GEOS 3 altimeter data is examined by determining the shortest recoverable wavelength (corresponding to the cut-off frequency). Data from some 18 profiles are used. The total power (variance) in the sea surface topography with respect to the reference ellipsoid, as well as with respect to the GEM-9 surface, is computed. A fast inversion algorithm for simple and block Toeplitz matrices and its application to least squares collocation is explained. This algorithm yields a considerable gain in computer time and storage in comparison with conventional least squares collocation. Frequency domain least squares collocation techniques are also introduced and applied to estimating gravity anomalies from GEOS 3 altimeter data. These techniques substantially reduce the computer time and storage requirements associated with conventional least squares collocation. Numerical examples demonstrate the efficiency and speed of these techniques.
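For the stationary (Toeplitz-covariance) case, a gain of this kind over a dense solve is available directly in SciPy via Levinson recursion; a minimal collocation-style sketch (names are illustrative, not the paper's algorithm):

```python
# Collocation-style prediction c_cross @ C^{-1} @ obs with a Toeplitz
# data covariance C, solved in O(n^2) instead of O(n^3).
import numpy as np
from scipy.linalg import solve_toeplitz

def collocation_predict(c, obs, c_cross):
    """c: first column of the (symmetric) Toeplitz covariance C;
    obs: observed anomalies; c_cross: cross-covariance row for the
    prediction point."""
    return c_cross @ solve_toeplitz(c, obs)
```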
Ries, Kernell G. (compiler); with sections by Atkins, J.B.; Hummel, P.R.; Gray, Matthew J.; Dusenbury, R.; Jennings, M.E.; Kirby, W.H.; Riggs, H.C.; Sauer, V.B.; Thomas, W.O.
2007-01-01
The National Streamflow Statistics (NSS) Program is a computer program that should be useful to engineers, hydrologists, and others for planning, management, and design applications. NSS compiles all current U.S. Geological Survey (USGS) regional regression equations for estimating streamflow statistics at ungaged sites in an easy-to-use interface that operates on computers with Microsoft Windows operating systems. NSS expands on the functionality of the USGS National Flood Frequency Program, and replaces it. The regression equations included in NSS are used to transfer streamflow statistics from gaged to ungaged sites through the use of watershed and climatic characteristics as explanatory or predictor variables. Generally, the equations were developed on a statewide or metropolitan-area basis as part of cooperative study programs. Equations are available for estimating rural and urban flood-frequency statistics, such as the 100-year flood, for every state, for Puerto Rico, and for the island of Tutuila, American Samoa. Equations are available for estimating other statistics, such as the mean annual flow, monthly mean flows, flow-duration percentiles, and low-flow frequencies (such as the 7-day, 10-year low flow) for less than half of the states. All equations available for estimating streamflow statistics other than flood-frequency statistics assume rural (non-regulated, non-urbanized) conditions. The NSS output provides indicators of the accuracy of the estimated streamflow statistics. The indicators may include any combination of the standard error of estimate, the standard error of prediction, the equivalent years of record, or 90-percent prediction intervals, depending on what was provided by the authors of the equations. The program includes several other features that can be used only for flood-frequency estimation. These include the ability to generate flood-frequency plots and plots of typical flood hydrographs for selected recurrence intervals, estimates of the probable maximum flood, extrapolation of the 500-year flood when an equation for estimating it is not available, and weighting techniques to improve flood-frequency estimates for gaging stations and ungaged sites on gaged streams. This report describes the regionalization techniques used to develop the equations in NSS and provides guidance on the applicability and limitations of the techniques. The report also includes a users manual and a summary of equations available for estimating basin lagtime, which is needed by the program to generate flood hydrographs. The NSS software and accompanying database, and the documentation for the regression equations included in NSS, are available on the Web at http://water.usgs.gov/software/.
Method and system for efficient video compression with low-complexity encoder
NASA Technical Reports Server (NTRS)
Chen, Jun (Inventor); He, Dake (Inventor); Sheinin, Vadim (Inventor); Jagmohan, Ashish (Inventor); Lu, Ligang (Inventor)
2012-01-01
Disclosed are a method and system for video compression, wherein the video encoder has low computational complexity and high compression efficiency. The disclosed system comprises a video encoder and a video decoder, wherein the method for encoding includes the steps of converting a source frame into a space-frequency representation; estimating conditional statistics of at least one vector of space-frequency coefficients; estimating encoding rates based on the said conditional statistics; and applying Slepian-Wolf codes with the said computed encoding rates. The preferred method for decoding includes the steps of: generating a side-information vector of frequency coefficients based on previously decoded source data, encoder statistics, and previous reconstructions of the source frequency vector; and performing Slepian-Wolf decoding of at least one source frequency vector based on the generated side-information, the Slepian-Wolf code bits, and the encoder statistics.
Instantaneous and Frequency-Warped Signal Processing Techniques for Auditory Source Separation.
NASA Astrophysics Data System (ADS)
Wang, Avery Li-Chun
This thesis summarizes several contributions to the areas of signal processing and auditory source separation. The philosophy of Frequency-Warped Signal Processing is introduced as a means for separating the AM and FM contributions to the bandwidth of a complex-valued, frequency-varying sinusoid p(n), transforming it into a signal with slowly-varying parameters. This transformation facilitates the removal of p(n) from an additive mixture while minimizing the amount of damage done to other signal components. The average winding rate of a complex-valued phasor is explored as an estimate of the instantaneous frequency. Theorems are provided showing the robustness of this measure. To implement frequency tracking, a Frequency-Locked Loop algorithm is introduced which uses the complex winding error to update its frequency estimate. The input signal is dynamically demodulated and filtered to extract the envelope. This envelope may then be remodulated to reconstruct the target partial, which may be subtracted from the original signal mixture to yield a new, quickly-adapting form of notch filtering. Enhancements to the basic tracker are made which, under certain conditions, attain the Cramer-Rao bound for the instantaneous frequency estimate. To improve tracking, the novel idea of Harmonic-Locked Loop tracking, using N harmonically constrained trackers, is introduced for tracking signals, such as voices and certain musical instruments. The estimated fundamental frequency is computed from a maximum-likelihood weighting of the N tracking estimates, making it highly robust. The result is that harmonic signals, such as voices, can be isolated from complex mixtures in the presence of other spectrally overlapping signals. Additionally, since phase information is preserved, the resynthesized harmonic signals may be removed from the original mixtures with relatively little damage to the residual signal. Finally, a new methodology is given for designing linear-phase FIR filters which require a small fraction of the computational power of conventional FIR implementations. This design strategy is based on truncated and stabilized IIR filters. These signal-processing methods have been applied to the problem of auditory source separation, resulting in voice separation from complex music that is significantly better than previous results at far lower computational cost.
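The Frequency-Locked Loop idea is compact enough to sketch (illustrative structure and gain, not the thesis implementation): the complex winding error between successive samples, measured against the current estimate, drives the frequency update.

```python
# Track a slowly varying tone in a complex (analytic) signal x.
import numpy as np

def fll_track(x, fs, f0, mu=0.05):
    w = 2 * np.pi * f0 / fs                 # frequency estimate (rad/sample)
    freqs = np.empty(len(x) - 1)
    for n in range(len(x) - 1):
        err = np.angle(x[n + 1] * np.conj(x[n]) * np.exp(-1j * w))
        w += mu * err                       # winding error drives the update
        freqs[n] = w * fs / (2 * np.pi)     # running estimate in Hz
    return freqs
```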
2015-01-01
The recent availability of high frequency data has permitted more efficient ways of computing volatility. However, estimation of volatility from asset price observations is challenging because observed high frequency data are generally affected by microstructure noise. We address this issue by using the Fourier estimator of instantaneous volatility introduced by Malliavin and Mancino (2002). We prove a central limit theorem for this estimator with optimal rate and asymptotic variance. An extensive simulation study shows the accuracy of the spot volatility estimates obtained using the Fourier estimator and its robustness even in the presence of different microstructure noise specifications. An empirical analysis on high frequency data (U.S. S&P500 and FIB 30 indices) illustrates how the Fourier spot volatility estimates can be successfully used to study intraday variations of volatility and to predict intraday Value at Risk. PMID:26421617
NASA Technical Reports Server (NTRS)
Reddy, C. J.; Deshpande, M. D.; Cockrell, C. R.; Beck, F. B.
2004-01-01
The hybrid Finite Element Method (FEM)/Method of Moments (MoM) technique has become popular over the last few years due to its flexibility in handling arbitrarily shaped objects with complex materials. One of the disadvantages of this technique, however, is the computational cost involved in obtaining solutions over a frequency range, as computations are repeated for each frequency. In this paper, the application of the Model Based Parameter Estimation (MBPE) method[1] with the hybrid FEM/MoM technique is presented for fast computation of the frequency response of cavity-backed apertures[2,3]. In MBPE, the electric field is expanded as a rational function, a ratio of two polynomials. The coefficients of the rational function are obtained using the frequency derivatives of the integro-differential equation formed by the hybrid FEM/MoM technique. Using the rational function approximation, the electric field is calculated at different frequencies, from which the frequency response is obtained.
Gotvald, Anthony J.; Barth, Nancy A.; Veilleux, Andrea G.; Parrett, Charles
2012-01-01
Methods for estimating the magnitude and frequency of floods in California that are not substantially affected by regulation or diversions have been updated. Annual peak-flow data through water year 2006 were analyzed for 771 streamflow-gaging stations (streamgages) in California having 10 or more years of data. Flood-frequency estimates were computed for the streamgages by using the expected moments algorithm to fit a Pearson Type III distribution to logarithms of annual peak flows for each streamgage. Low-outlier and historic information were incorporated into the flood-frequency analysis, and a generalized Grubbs-Beck test was used to detect multiple potentially influential low outliers. Special methods for fitting the distribution were developed for streamgages in the desert region in southeastern California. Additionally, basin characteristics for the streamgages were computed by using a geographical information system. Regional regression analysis, using generalized least squares regression, was used to develop a set of equations for estimating flows with 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probabilities for ungaged basins in California that are outside of the southeastern desert region. Flood-frequency estimates and basin characteristics for 630 streamgages were combined to form the final database used in the regional regression analysis. Five hydrologic regions were developed for the area of California outside of the desert region. The final regional regression equations are functions of drainage area and mean annual precipitation for four of the five regions. In one region, the Sierra Nevada region, the final equations are functions of drainage area, mean basin elevation, and mean annual precipitation. Average standard errors of prediction for the regression equations in all five regions range from 42.7 to 161.9 percent. For the desert region of California, an analysis of 33 streamgages was used to develop regional estimates of all three parameters (mean, standard deviation, and skew) of the log-Pearson Type III distribution. The regional estimates were then used to develop a set of equations for estimating flows with 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probabilities for ungaged basins. The final regional regression equations are functions of drainage area. Average standard errors of prediction for these regression equations range from 214.2 to 856.2 percent. Annual peak-flow data through water year 2006 were analyzed for eight streamgages in California having 10 or more years of data considered to be affected by urbanization. Flood-frequency estimates were computed for the urban streamgages by fitting a Pearson Type III distribution to logarithms of annual peak flows for each streamgage. Regression analysis could not be used to develop flood-frequency estimation equations for urban streams because of the limited number of sites. Flood-frequency estimates for the eight urban sites were graphically compared to flood-frequency estimates for 630 non-urban sites. The regression equations developed from this study will be incorporated into the U.S. Geological Survey (USGS) StreamStats program. The StreamStats program is a Web-based application that provides streamflow statistics and basin characteristics for USGS streamgages and ungaged sites of interest. 
StreamStats can also compute basin characteristics and provide estimates of streamflow statistics for ungaged sites when users select the location of a site along any stream in California.
Feaster, Toby D.; Tasker, Gary D.
2002-01-01
Data from 167 streamflow-gaging stations in or near South Carolina with 10 or more years of record through September 30, 1999, were used to develop two methods for estimating the magnitude and frequency of floods in South Carolina for rural ungaged basins that are not significantly affected by regulation. Flood frequency estimates for 54 gaged sites in South Carolina were computed by fitting the water-year peak flows for each site to a log-Pearson Type III distribution. As part of the computation of flood-frequency estimates for gaged sites, new values for generalized skew coefficients were developed. Flood-frequency analyses also were made for gaging stations that drain basins from more than one physiographic province. The U.S. Geological Survey, in cooperation with the South Carolina Department of Transportation, updated these data from previous flood-frequency reports to aid officials who are active in floodplain management as well as those who design bridges, culverts, and levees, or other structures near streams where flooding is likely to occur. Regional regression analysis, using generalized least squares regression, was used to develop a set of predictive equations that can be used to estimate the 2-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-year recurrence-interval flows for rural ungaged basins in the Blue Ridge, Piedmont, upper Coastal Plain, and lower Coastal Plain physiographic provinces of South Carolina. The predictive equations are all functions of drainage area. Average errors of prediction for these regression equations ranged from -16 to 19 percent for the 2-year recurrence-interval flow in the upper Coastal Plain to -34 to 52 percent for the 500-year recurrence-interval flow in the lower Coastal Plain. A region-of-influence method also was developed that interactively estimates recurrence-interval flows for rural ungaged basins in the Blue Ridge of South Carolina. The region-of-influence method uses regression techniques to develop a unique relation between flow and basin characteristics for an individual watershed. This, then, can be used to estimate flows at ungaged sites. Because the computations required for this method are somewhat complex, a computer application was developed that performs the computations and compares the predictive errors for this method. The computer application includes the option of using the region-of-influence method or the generalized least squares regression equations from this report to compute estimated flows and errors of prediction specific to each ungaged site. From a comparison of predictive errors using the region-of-influence method with those computed using the regional regression method, the region-of-influence method performed systematically better only in the Blue Ridge and is, therefore, not recommended for use in the other physiographic provinces. Peak-flow data for the South Carolina stations used in the regionalization study are provided in appendix A, which contains gaging station information, log-Pearson Type III statistics, information on stage-flow relations, and water-year peak stages and flows. For informational purposes, water-year peak-flow data for stations on regulated streams in South Carolina also are provided in appendix D. Other information pertaining to the regulated streams is provided in the text of the report.
Transfer-function-parameter estimation from frequency response data: A FORTRAN program
NASA Technical Reports Server (NTRS)
Seidel, R. C.
1975-01-01
A FORTRAN computer program designed to fit a linear transfer function model to given frequency response magnitude and phase data is presented. A conjugate gradient search is used that minimizes the integral of the absolute value of the error squared between the model and the data. The search is constrained to insure model stability. A scaling of the model parameters by their own magnitude aids search convergence. Efficient computer algorithms result in a small and fast program suitable for a minicomputer. A sample problem with different model structures and parameter estimates is reported.
Adaptive OFDM Radar Waveform Design for Improved Micro-Doppler Estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sen, Satyabrata
Here we analyze the performance of a wideband orthogonal frequency division multiplexing (OFDM) signal in estimating the micro-Doppler frequency of a rotating target having multiple scattering centers. The use of a frequency-diverse OFDM signal enables us to independently analyze the micro-Doppler characteristics with respect to a set of orthogonal subcarrier frequencies. We characterize the accuracy of micro-Doppler frequency estimation by computing the Cramer-Rao bound (CRB) on the angular-velocity estimate of the target. Additionally, to improve the accuracy of the estimation procedure, we formulate and solve an optimization problem by minimizing the CRB on the angular-velocity estimate with respect to the OFDM spectral coefficients. We present several numerical examples to demonstrate the CRB variations with respect to the signal-to-noise ratios, number of temporal samples, and number of OFDM subcarriers. We also analyze numerically the improvement in estimation accuracy due to the adaptive waveform design. A grid-based maximum likelihood estimation technique is applied to evaluate the corresponding mean-squared error performance.
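The paper's CRB concerns the angular velocity of a multi-scatterer target under an OFDM waveform; as a simpler reference point only, the classical Rife-Boorstyn CRB for the frequency of a single complex tone in white Gaussian noise can be written directly:

```python
# Lower bound (Hz^2) on the variance of any unbiased single-tone frequency
# estimator; a reference point, not the paper's angular-velocity bound.
import numpy as np

def single_tone_freq_crb(snr_linear, n_samples, fs):
    N = n_samples
    return 3.0 * fs**2 / (np.pi**2 * snr_linear * N * (N**2 - 1))
```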
Online frequency estimation with applications to engine and generator sets
NASA Astrophysics Data System (ADS)
Manngård, Mikael; Böling, Jari M.
2017-07-01
Frequency and spectral analysis based on the discrete Fourier transform is a fundamental task in signal processing and machine diagnostics. This paper aims at presenting computationally efficient methods for real-time estimation of stationary and time-varying frequency components in signals. A brief survey of the sliding time window discrete Fourier transform and Goertzel filter is presented, and two filter banks consisting of: (i) sliding time window Goertzel filters (ii) infinite impulse response narrow bandpass filters are proposed for estimating instantaneous frequencies. The proposed methods show excellent results on both simulation studies and on a case study using angular speed data measurements of the crankshaft of a marine diesel engine-generator set.
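The Goertzel filter the survey builds on computes one DFT bin in O(N) with a real recurrence; a minimal standard-form implementation (a sliding-window variant would re-run or recursively update this per window):

```python
# Single-bin DFT via the Goertzel recurrence.
import numpy as np

def goertzel(x, k):
    """DFT coefficient X[k] of the length-N block x, in O(N) per bin."""
    N = len(x)
    w = 2.0 * np.pi * k / N
    coeff = 2.0 * np.cos(w)
    s_prev, s_prev2 = 0.0, 0.0
    for sample in x:
        s = sample + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev * np.exp(1j * w) - s_prev2
```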
NASA Technical Reports Server (NTRS)
Reddy C. J.
1998-01-01
Model Based Parameter Estimation (MBPE) is presented in conjunction with the hybrid Finite Element Method (FEM)/Method of Moments (MoM) technique for fast computation of the input characteristics of cavity-backed aperture antennas over a frequency range. The hybrid FEM/MoM technique is used to form an integro-partial-differential equation to compute the electric field distribution of a cavity-backed aperture antenna. In MBPE, the electric field is expanded as a rational function (a ratio of two polynomials). The coefficients of the rational function are obtained using the frequency derivatives of the integro-partial-differential equation formed by the hybrid FEM/MoM technique. Using the rational function approximation, the electric field is obtained over a frequency range. Using the electric field at different frequencies, the input characteristics of the antenna are obtained over a wide frequency range. Numerical results for an open coaxial line, a probe-fed coaxial cavity, and cavity-backed microstrip patch antennas are presented. Good agreement between MBPE and solutions computed at individual frequencies is observed.
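To illustrate the rational-function approximation step, here is a hedged Python sketch that fits a rational model to sampled frequency data by linearized least squares (Levi's formulation); MBPE proper builds the coefficients from frequency derivatives, which is not reproduced here, and the orders and data are arbitrary.

```python
import numpy as np

# Fit H(f) ~ P(f)/Q(f) with deg(P) = deg(Q) = 2 to sampled data by linear
# least squares, using the linearization P(f_i) - H_i * Q(f_i) = 0 with q0 = 1.
f = np.linspace(1.0, 2.0, 20)                     # sample frequencies (arbitrary units)
H = 1.0 / (1.0 - (f / 1.5)**2 + 0.05j * f)        # hypothetical resonant response

m, n = 2, 2                                       # numerator / denominator orders
A = np.hstack([np.vander(f, m + 1, increasing=True),
               -H[:, None] * np.vander(f, n + 1, increasing=True)[:, 1:]])
coef, *_ = np.linalg.lstsq(A, H, rcond=None)
p, q = coef[:m + 1], np.concatenate([[1.0], coef[m + 1:]])

# Evaluate the rational model densely across the band (the MBPE payoff:
# many frequency points from a handful of expensive solutions).
fd = np.linspace(1.0, 2.0, 200)
Hfit = (np.vander(fd, m + 1, increasing=True) @ p) / \
       (np.vander(fd, n + 1, increasing=True) @ q)
```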
High-frequency signal and noise estimates of CSR GRACE RL04
NASA Astrophysics Data System (ADS)
Bonin, Jennifer A.; Bettadpur, Srinivas; Tapley, Byron D.
2012-12-01
A sliding window technique is used to create daily-sampled Gravity Recovery and Climate Experiment (GRACE) solutions with the same background processing as the official CSR RL04 monthly series. By estimating over shorter time spans, more frequent solutions are made using uncorrelated data, allowing for higher frequency resolution in addition to daily sampling. Using these data sets, high-frequency GRACE errors are computed using two different techniques: assuming the GRACE high-frequency signal in a quiet area of the ocean is the true error, and computing the variance of differences between multiple high-frequency GRACE series from different centers. While the signal-to-noise ratios prove to be sufficiently high for confidence at annual and lower frequencies, at frequencies above 3 cycles/year the signal-to-noise ratios in the large hydrological basins examined here are near 1.0. Comparisons with the GLDAS hydrological model and high-frequency GRACE series developed at other centers confirm CSR GRACE RL04's poor ability to accurately and reliably measure hydrological signal above 3-9 cycles/year, owing to the low power of the large-scale hydrological signal typical at those frequencies compared to the GRACE errors.
A proposed technique for the Venus balloon telemetry and Doppler frequency recovery
NASA Technical Reports Server (NTRS)
Jurgens, R. F.; Divsalar, D.
1985-01-01
A technique is proposed to accurately estimate the Doppler frequency and demodulate the digitally encoded telemetry signal that contains the measurements from balloon instruments. Since the data are prerecorded, one can take advantage of noncausal estimators that are both simpler and more computationally efficient than the usual closed-loop or real-time estimators for signal detection and carrier tracking. Algorithms for carrier frequency estimation, subcarrier demodulation, and bit and frame synchronization are described. A Viterbi decoder algorithm using a branch indexing technique has been devised to decode the constraint-length-6, rate-1/2 convolutional code used by the balloon transmitter. These algorithms are memory efficient and can be implemented on microcomputer systems.
Principal axes estimation using the vibration modes of physics-based deformable models.
Krinidis, Stelios; Chatzis, Vassilios
2008-06-01
This paper addresses the issue of accurate, computationally efficient, and fully automated 2-D object orientation and scaling-factor estimation. The object orientation is calculated using object principal axes estimation. The approach relies on the object's frequency-based features, which are extracted by a 2-D physics-based deformable model that parameterizes the object's shape. The method was evaluated on synthetic and real images. The experimental results demonstrate the accuracy of the method in both orientation and scaling estimation.
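For intuition about principal-axis orientation estimation, the sketch below uses the classical moment-based baseline (PCA on pixel coordinates) rather than the paper's frequency-based deformable-model features.

```python
import numpy as np

def principal_axes(mask):
    """Classical moment-based principal-axis estimate for a binary object mask.

    This is the textbook baseline, not the paper's deformable-model method;
    it illustrates what orientation and scale from principal axes compute.
    """
    ys, xs = np.nonzero(mask)
    pts = np.column_stack([xs, ys]).astype(float)
    pts -= pts.mean(axis=0)                  # central moments
    cov = pts.T @ pts / len(pts)
    evals, evecs = np.linalg.eigh(cov)       # eigenvalues in ascending order
    major = evecs[:, -1]                     # axis of largest variance
    angle = np.degrees(np.arctan2(major[1], major[0]))
    scale = np.sqrt(evals[-1])               # spread along the major axis
    return angle, scale

# Toy example: an elongated rectangle aligned with the x axis -> angle ~ 0.
mask = np.zeros((100, 100), dtype=bool)
mask[40:60, 20:80] = True
print(principal_axes(mask))
```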
ERIC Educational Resources Information Center
Nelson, Peter M.; Van Norman, Ethan R.; Klingbeil, Dave A.; Parker, David C.
2017-01-01
Although extensive research exists on the use of curriculum-based measures for progress monitoring, little is known about using computer adaptive tests (CATs) for progress-monitoring purposes. The purpose of this study was to evaluate the impact of the frequency of data collection on individual and group growth estimates using a CAT. Data were…
2010-03-01
uses all available resources in some optimized manner. By further exploiting the design flexibility and computational efficiency of Orthogonal Frequency... in the following sections. 3.2.1 Estimation of PU Signal Statistics. The Estimate PU Signal Statistics function of Fig 3.4 is used to compute the... consecutive PU transmissions, and 4) the probability of transitioning from one transmission state to another. These statistics are then used to compute the
The set of chemical substances in commerce that may have significant global warming potential (GWP) is not well defined. Although there are currently over 200 chemicals with high GWP reported by the Intergovernmental Panel on Climate Change, World Meteorological Organization, or Environmental Protection Agency, there may be hundreds of additional chemicals that also have significant GWP. Evaluation of various approaches to estimate radiative efficiency (RE) and atmospheric lifetime will help to refine GWP estimates for compounds where no measured IR spectrum is available. This study compares values of RE calculated using computational chemistry techniques for 235 chemical compounds against the best available values. It is important to assess the reliability of the underlying computational methods for computing RE to understand the sources of deviations from the best available values. Computed vibrational frequency data are used to estimate RE values using several Pinnock-type models. The values derived using these models are found to be in reasonable agreement with reported RE values (though significant improvement is obtained through scaling). The effect of varying the computational method and basis set used to calculate the frequency data is also discussed. It is found that the vibrational intensities have a strong dependence on basis set and are largely responsible for differences in computed values of RE in this study. Deviations of
Time-frequency domain SNR estimation and its application in seismic data processing
NASA Astrophysics Data System (ADS)
Zhao, Yan; Liu, Yang; Li, Xuxuan; Jiang, Nansen
2014-08-01
Based on an approach for estimating the frequency-domain signal-to-noise ratio (FSNR), we propose a method to evaluate the time-frequency domain signal-to-noise ratio (TFSNR). This method adopts the short-time Fourier transform (STFT) to estimate the instantaneous power spectra of signal and noise, and uses their ratio to compute the TFSNR. Unlike the FSNR, which describes the variation of SNR with frequency only, the TFSNR depicts the variation of SNR with both time and frequency, and thus better handles non-stationary seismic data. Using the TFSNR, we develop methods to improve inverse Q filtering and high-frequency noise attenuation in seismic data processing. Inverse Q filtering that accounts for the TFSNR better controls the amplification of noise. The high-frequency noise attenuation method, unlike other de-noising methods, distinguishes and suppresses noise using an explicit criterion. Examples with synthetic and real seismic data illustrate the correctness and effectiveness of the proposed methods.
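A minimal Python sketch of the underlying ratio, assuming the noise power spectrum can be taken from a signal-free interval; the paper's instantaneous signal/noise estimation is more elaborate.

```python
import numpy as np
from scipy.signal import stft

fs = 500.0
t = np.arange(0, 4, 1 / fs)
trace = np.sin(2 * np.pi * 40 * t) * np.exp(-((t - 2) / 0.5)**2)  # wavelet at t = 2 s
trace += 0.3 * np.random.randn(t.size)                            # stationary noise

f, tau, Z = stft(trace, fs=fs, nperseg=128)
power = np.abs(Z)**2

# Noise PSD estimated from the first 0.5 s, assumed signal-free here.
noise_cols = tau < 0.5
noise_psd = power[:, noise_cols].mean(axis=1, keepdims=True)

tfsnr = power / (noise_psd + 1e-12)   # SNR as a function of (frequency, time)
```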
Generalized entropies and the similarity of texts
NASA Astrophysics Data System (ADS)
Altmann, Eduardo G.; Dias, Laércio; Gerlach, Martin
2017-01-01
We show how generalized Gibbs-Shannon entropies can provide new insights on the statistical properties of texts. The universal distribution of word frequencies (Zipf's law) implies that the generalized entropies, computed at the word level, are dominated by words in a specific range of frequencies. Here we show that this is the case not only for the generalized entropies but also for the generalized (Jensen-Shannon) divergences used to compute the similarity between different texts. This finding allows us to identify the contribution of specific words (and word frequencies) to the different generalized entropies and also to estimate the size of the databases needed to obtain a reliable estimation of the divergences. We test our results on large databases of books (from the Google n-gram database) and scientific papers (indexed by Web of Science).
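A small sketch of the word-level computation, assuming a Tsallis-type one-parameter family for the generalized entropy (the paper works with a family of this kind; the exact definition may differ).

```python
import numpy as np
from collections import Counter

def generalized_entropy(p, alpha):
    """Generalized entropy of order alpha; the Gibbs-Shannon entropy is the alpha -> 1 limit.

    The Tsallis-type form below is an assumption for illustration.
    """
    p = p[p > 0]
    if np.isclose(alpha, 1.0):
        return -np.sum(p * np.log(p))            # Shannon limit
    return (1.0 - np.sum(p**alpha)) / (alpha - 1.0)

def word_freqs(text):
    counts = Counter(text.lower().split())
    p = np.array(list(counts.values()), dtype=float)
    return p / p.sum()

p = word_freqs("the quick brown fox jumps over the lazy dog the fox")
for alpha in (0.5, 1.0, 2.0):
    # Small alpha is dominated by rare words; large alpha by frequent ones.
    print(alpha, generalized_entropy(p, alpha))
```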
RLS Channel Estimation with Adaptive Forgetting Factor for DS-CDMA Frequency-Domain Equalization
NASA Astrophysics Data System (ADS)
Kojima, Yohei; Tomeba, Hiromichi; Takeda, Kazuaki; Adachi, Fumiyuki
Frequency-domain equalization (FDE) based on the minimum mean square error (MMSE) criterion can improve the downlink bit error rate (BER) performance of DS-CDMA beyond that possible with conventional rake combining in a frequency-selective fading channel. FDE requires accurate channel estimation. Recently, we proposed a pilot-assisted channel estimation (CE) scheme based on the MMSE criterion. Using MMSE-CE, the channel estimation accuracy is almost insensitive to the pilot chip sequence, and a good BER performance is achieved. In this paper, we propose a channel estimation scheme using a one-tap recursive least squares (RLS) algorithm, in which the forgetting factor is adapted to the changing channel condition by the least mean square (LMS) algorithm, for DS-CDMA with FDE. We evaluate the BER performance using RLS-CE with an adaptive forgetting factor in a frequency-selective fast Rayleigh fading channel by computer simulation.
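A hedged one-tap RLS sketch follows; the forgetting-factor update below is a simplified heuristic stand-in (forget faster when the error grows) for the paper's LMS-based gradient adaptation, and the pilot and channel models are illustrative.

```python
import numpy as np

# One-tap RLS channel tracker with a crudely adapted forgetting factor.
rng = np.random.default_rng(0)
N = 2000
h_true = np.exp(1j * 0.002 * np.arange(N))        # slowly rotating channel tap
x = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)  # QPSK pilots
d = h_true * x + 0.05 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

h_hat, P, lam = 0.0 + 0.0j, 1.0, 0.99
mu, e2_avg = 0.05, 0.0
for n in range(N):
    e = d[n] - h_hat * x[n]                       # a priori estimation error
    k = P * np.conj(x[n]) / (lam + abs(x[n])**2 * P)
    h_hat += k * e                                # RLS tap update
    P = (1.0 - (k * x[n]).real) * P / lam         # scalar inverse-correlation update
    # Heuristic forgetting-factor adaptation: large errors -> forget faster.
    e2_avg = 0.95 * e2_avg + 0.05 * abs(e)**2
    lam = float(np.clip(0.999 - mu * e2_avg, 0.90, 0.999))
```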
Techniques for estimating flood-peak discharges of rural, unregulated streams in Ohio
Koltun, G.F.
2003-01-01
Regional equations for estimating 2-, 5-, 10-, 25-, 50-, 100-, and 500-year flood-peak discharges at ungaged sites on rural, unregulated streams in Ohio were developed by means of ordinary and generalized least-squares (GLS) regression techniques. One-variable, simple equations and three-variable, full-model equations were developed on the basis of selected basin characteristics and flood-frequency estimates determined for 305 streamflow-gaging stations in Ohio and adjacent states. The average standard errors of prediction ranged from about 39 to 49 percent for the simple equations, and from about 34 to 41 percent for the full-model equations. Flood-frequency estimates determined by means of log-Pearson Type III analyses are reported along with weighted flood-frequency estimates, computed as a function of the log-Pearson Type III estimates and the regression estimates. Values of explanatory variables used in the regression models were determined from digital spatial data sets by means of a geographic information system (GIS), with the exception of drainage area, which was determined by digitizing the area within basin boundaries manually delineated on topographic maps. Use of GIS-based explanatory variables represents a major departure in methodology from that described in previous reports on estimating flood-frequency characteristics of Ohio streams. Examples are presented illustrating application of the regression equations to ungaged sites on ungaged and gaged streams. A method is provided to adjust regression estimates for ungaged sites by use of weighted and regression estimates for a gaged site on the same stream. A region-of-influence method, which employs a computer program to estimate flood-frequency characteristics for ungaged sites based on data from gaged sites with similar characteristics, was also tested and compared to the GLS full-model equations. For all recurrence intervals, the GLS full-model equations had superior prediction accuracy relative to both the simple equations and the region-of-influence method and therefore are recommended for use.
NASA Astrophysics Data System (ADS)
Cara, Javier
2016-05-01
Modal parameters comprise natural frequencies, damping ratios, modal vectors, and modal masses. In a theoretical framework, these parameters are the basis for the solution of vibration problems using the theory of modal superposition. In practice, they can be computed from input-output vibration data: the usual procedure is to estimate a mathematical model from the data and then to compute the modal parameters from the estimated model. The most popular models for input-output data are based on the frequency response function, but in recent years the state space model in the time domain has become popular among researchers and practitioners of modal analysis with experimental data. In this work, the equations to compute the modal parameters from the state space model when input and output data are available (as in combined experimental-operational modal analysis) are derived in detail using invariants of the state space model. The equations needed to compute natural frequencies, damping ratios, and modal vectors are well known in the operational modal analysis framework, but the equation needed to compute the modal masses has not received much attention in the technical literature. These equations are applied to both a numerical simulation and an experimental study in the last part of the work.
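The frequency/damping part of these relations is standard and compact; here is a sketch using the eigenvalues of the state matrix (modal vectors and modal masses, which require the input and output matrices, are omitted).

```python
import numpy as np

def modal_parameters(A, dt=None):
    """Natural frequencies (rad/s) and damping ratios from a state matrix A.

    For a discrete-time model identified with sampling interval dt, the
    eigenvalues are first mapped from the z-plane to the s-plane.
    """
    lam = np.linalg.eigvals(A)
    if dt is not None:
        lam = np.log(lam) / dt         # z-plane -> s-plane
    wn = np.abs(lam)                   # natural frequencies
    zeta = -lam.real / np.abs(lam)     # damping ratios
    return wn, zeta

# Toy example: SDOF oscillator with wn = 2 rad/s and zeta = 0.05.
wn, zeta = 2.0, 0.05
A = np.array([[0.0, 1.0], [-wn**2, -2.0 * zeta * wn]])
print(modal_parameters(A))             # recovers (2.0, 0.05) for the complex pair
```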
Robust time and frequency domain estimation methods in adaptive control
NASA Technical Reports Server (NTRS)
Lamaire, Richard Orville
1987-01-01
A robust identification method was developed for use in an adaptive control system. The estimator is called robust because it is robust to the effects of both unmodeled dynamics and an unmeasurable disturbance. The development of the robust estimator was motivated by a need to provide guarantees in the identification part of an adaptive controller. To enable the design of a robust control system, a nominal model as well as a frequency-domain bounding function on the modeling uncertainty associated with this nominal model must be provided. Two estimation methods are presented for finding parameter estimates and, hence, a nominal model. The first method is based on the well-developed field of time-domain parameter estimation. The second method uses a type of weighted least-squares fit to a frequency-domain estimated model. The frequency-domain estimator is shown to perform better, in general, than the time-domain parameter estimator. In addition, a methodology for finding a frequency-domain bounding function on the disturbance is used to compute a frequency-domain bounding function on the additive modeling error due to the effects of the disturbance and the use of finite-length data. The performance of the robust estimator in both open-loop and closed-loop situations is examined through the use of simulations.
Tutorial: Asteroseismic Stellar Modelling with AIMS
NASA Astrophysics Data System (ADS)
Lund, Mikkel N.; Reese, Daniel R.
The goal of AIMS (Asteroseismic Inference on a Massive Scale) is to estimate stellar parameters and credible intervals/error bars in a Bayesian manner from a set of asteroseismic frequency data and so-called classical constraints. To achieve reliable parameter estimates and computational efficiency, it searches through a grid of pre-computed models using an MCMC algorithm. Interpolation within the grid of models is performed by first tessellating the grid using a Delaunay triangulation and then doing a linear barycentric interpolation on matching simplexes. Inputs for the modelling consist of individual frequencies from peak-bagging, which can be complemented with classical spectroscopic constraints. AIMS is mostly written in Python with a modular structure to facilitate contributions from the community. Only a few computationally intensive parts have been rewritten in Fortran in order to speed up calculations.
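The interpolation step maps directly onto SciPy primitives; in the sketch below the grid and values are arbitrary stand-ins for a pre-computed stellar model grid.

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.interpolate import LinearNDInterpolator

# Tessellate an irregular model grid with a Delaunay triangulation, then
# interpolate linearly (barycentric weights) inside each simplex.
rng = np.random.default_rng(1)
grid = rng.uniform(size=(200, 2))             # e.g. scaled (mass, metallicity)
values = np.sin(3 * grid[:, 0]) + grid[:, 1]  # stand-in model output

tri = Delaunay(grid)                          # tessellation built once
interp = LinearNDInterpolator(tri, values)    # barycentric linear interpolation

print(interp(0.4, 0.6))                       # interpolated "model" at a new point
```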
ERIC Educational Resources Information Center
Herdagdelen, Amaç; Marelli, Marco
2017-01-01
Corpus-based word frequencies are one of the most important predictors in language processing tasks. Frequencies based on conversational corpora (such as movie subtitles) are shown to better capture the variance in lexical decision tasks compared to traditional corpora. In this study, we show that frequencies computed from social media are…
Computation of rainfall erosivity from daily precipitation amounts.
Beguería, Santiago; Serrano-Notivoli, Roberto; Tomas-Burguera, Miquel
2018-10-01
Rainfall erosivity is an important parameter in many erosion models, and the EI30 defined by the Universal Soil Loss Equation is one of the best known erosivity indices. One issue with this and other erosivity indices is that they require continuous breakpoint, or high-frequency time interval, precipitation data. These data are rare in comparison to more common medium-frequency data, such as the daily precipitation data commonly recorded by many national and regional weather services. Devising methods for computing estimates of rainfall erosivity from daily precipitation data that are comparable to those obtained by using high-frequency data is, therefore, highly desirable. Here we present a method for producing such estimates, based on optimal regression tools such as the Gamma Generalised Linear Model and universal kriging. Unlike other methods, this approach produces unbiased estimates very close to the observed EI30, especially when these are aggregated at the annual level. We illustrate the method with a case study comprising more than 1500 high-frequency precipitation records across Spain. Although the original records have a short span (the mean length is around 10 years), computation of spatially distributed upscaling parameters offers the possibility of computing high-resolution climatologies of the EI30 index based on currently available, long-span, daily precipitation databases.
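A minimal sketch of the regression idea with statsmodels, assuming a single log-rainfall predictor and synthetic data; the paper's Gamma GLM and its spatially kriged parameters are richer than this.

```python
import numpy as np
import statsmodels.api as sm

# Relate event erosivity (EI30) to daily rainfall with a Gamma GLM (log link).
rng = np.random.default_rng(42)
daily_p = rng.gamma(shape=2.0, scale=8.0, size=500)           # daily rainfall, mm
ei30 = rng.gamma(shape=2.0, scale=0.05 * daily_p**1.5 + 0.1)  # synthetic EI30

X = sm.add_constant(np.log(daily_p))
model = sm.GLM(ei30, X, family=sm.families.Gamma(link=sm.families.links.Log()))
fit = model.fit()
print(fit.params)   # intercept and elasticity of EI30 with respect to rainfall
```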
Interactive rendering of acquired materials on dynamic geometry using frequency analysis.
Bagher, Mahdi Mohammad; Soler, Cyril; Subr, Kartic; Belcour, Laurent; Holzschuch, Nicolas
2013-05-01
Shading acquired materials under high-frequency illumination is computationally expensive. Estimating the shading integral requires multiple samples of the incident illumination. The number of samples required may vary across the image, and the image itself may have high- and low-frequency variations, depending on a combination of several factors. Adaptively distributing the computational budget across the pixels for shading is a challenging problem. In this paper, we depict complex materials such as acquired reflectances interactively, without any geometry-based precomputation. In each frame, we first estimate the frequencies in the local light field arriving at each pixel, as well as the variance of the shading integrand. Our frequency analysis accounts for combinations of a variety of factors: the reflectance of the object projecting to the pixel, the nature of the illumination, the local geometry, and the camera position relative to the geometry and lighting. We then exploit this frequency information (bandwidth and variance) to adaptively sample for reconstruction and integration. For example, fewer pixels per unit area are shaded for pixels projecting onto diffuse objects, and fewer samples are used for integrating illumination incident on specular objects.
NASA Astrophysics Data System (ADS)
Kojima, Yohei; Takeda, Kazuaki; Adachi, Fumiyuki
Frequency-domain equalization (FDE) based on the minimum mean square error (MMSE) criterion can provide better downlink bit error rate (BER) performance for direct sequence code division multiple access (DS-CDMA) than conventional rake combining in a frequency-selective fading channel. FDE requires accurate channel estimation. In this paper, we propose a new 2-step maximum likelihood channel estimation (MLCE) for DS-CDMA with FDE in a very slow frequency-selective fading environment. The 1st step uses the conventional pilot-assisted MMSE-CE and the 2nd step carries out the MLCE using decision feedback from the 1st step. The BER performance improvement achieved by 2-step MLCE over pilot-assisted MMSE-CE is confirmed by computer simulation.
An LFMCW detector with new structure and FRFT based differential distance estimation method.
Yue, Kai; Hao, Xinhong; Li, Ping
2016-01-01
This paper describes a linear frequency modulated continuous wave (LFMCW) detector designed for a collision avoidance radar. The detector estimates the distance between itself and pedestrians or vehicles, thereby helping to reduce the likelihood of traffic accidents. The detector consists of a transceiver and a signal processor. A novel structure based on the intermediate frequency signal (IFS) is designed for the transceiver, in contrast to the traditional LFMCW transceiver structure based on the beat frequency signal (BFS). In the signal processor, a novel fractional Fourier transform (FRFT) based differential distance estimation (DDE) method is used to detect the distance. The new IFS-based structure helps the FRFT-based DDE method reduce its computational complexity, because it does not require a scan over FRFT orders to find the optimum. Low computational complexity ensures the feasibility of practical applications. Simulations are carried out, and the results demonstrate the efficiency of the proposed detector.
Curran, Janet H.; Barth, Nancy A.; Veilleux, Andrea G.; Ourso, Robert T.
2016-03-16
Estimates of the magnitude and frequency of floods are needed across Alaska for engineering design of transportation and water-conveyance structures, flood-insurance studies, flood-plain management, and other water-resource purposes. This report updates methods for estimating flood magnitude and frequency in Alaska and conterminous basins in Canada. Annual peak-flow data through water year 2012 were compiled from 387 streamgages on unregulated streams with at least 10 years of record. Flood-frequency estimates were computed for each streamgage using the Expected Moments Algorithm to fit a Pearson Type III distribution to the logarithms of annual peak flows. A multiple Grubbs-Beck test was used to identify potentially influential low floods in the time series of peak flows for censoring in the flood-frequency analysis. For two new regional skew areas, flood-frequency estimates using station skew were computed for stations with at least 25 years of record for use in a Bayesian least-squares regression analysis to determine a regional skew value. The consideration of basin characteristics as explanatory variables for regional skew resulted in improvements in precision too small to warrant the additional model complexity, and a constant model was adopted. Regional Skew Area 1 in eastern-central Alaska had a regional skew of 0.54 and an average variance of prediction of 0.45, corresponding to an effective record length of 22 years. Regional Skew Area 2, encompassing coastal areas bordering the Gulf of Alaska, had a regional skew of 0.18 and an average variance of prediction of 0.12, corresponding to an effective record length of 59 years. Station flood-frequency estimates for study sites in regional skew areas were then recomputed using a weighted skew incorporating the station skew and regional skew. In a new regional skew exclusion area outside the regional skew areas, the density of long-record streamgages was too sparse for regional analysis, and station skew was used for all estimates. Final station flood-frequency estimates for all study streamgages are presented for the 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probabilities. Regional multiple-regression analysis was used to produce equations for estimating flood-frequency statistics from explanatory basin characteristics. Basin characteristics, including physical and climatic variables, were updated for all study streamgages using a geographic information system and geospatial source data. Screening for similar-sized nested basins eliminated hydrologically redundant sites, and screening for eligibility for analysis of explanatory variables eliminated regulated peaks, outburst peaks, and sites with indeterminate basin characteristics. An ordinary least-squares regression used flood-frequency statistics and basin characteristics for 341 streamgages (284 in Alaska and 57 in Canada) to determine the most suitable combination of basin characteristics for a flood-frequency regression model and to explore regional grouping of streamgages for explaining variability in flood-frequency statistics across the study area. The most suitable model for explaining flood frequency used drainage area and mean annual precipitation as explanatory variables for the entire study area as a single region. Final regression equations for estimating the 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probability discharges in Alaska and conterminous basins in Canada were developed using generalized least-squares regression.
The average standard error of prediction for the regression equations for the various annual exceedance probabilities ranged from 69 to 82 percent, and the pseudo-coefficient of determination (pseudo-R2) ranged from 85 to 91 percent. The regional regression equations from this study were incorporated into the U.S. Geological Survey StreamStats program for a limited area of the State, the Cook Inlet Basin. StreamStats is a national web-based geographic information system application that facilitates retrieval of streamflow statistics and associated information. StreamStats retrieves published data for gaged sites and, for user-selected ungaged sites, delineates drainage areas from topographic and hydrographic data, computes basin characteristics, and computes flood-frequency estimates using the regional regression equations.
Statistical plant set estimation using Schroeder-phased multisinusoidal input design
NASA Technical Reports Server (NTRS)
Bayard, D. S.
1992-01-01
A frequency-domain method is developed for plant set estimation. The estimation of a plant 'set', rather than a point estimate, is required to support many methods of modern robust control design. The approach here is based on a Schroeder-phased multisinusoid input design, which has the special property of placing input energy only at the discrete frequency points used in the computation. A detailed analysis of the statistical properties of the frequency-domain estimator is given, leading to exact expressions for the probability distribution of the estimation error and several other important properties. It is shown that, for any nominal parametric plant estimate, one can use these results to construct an overbound on the additive uncertainty to any prescribed statistical confidence. The 'soft' bound thus obtained can be used to replace the 'hard' bounds presently used in many robust control analysis and synthesis methods.
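A short sketch of the input design itself; the quadratic phase rule below is one common form of the Schroeder rule for a flat amplitude spectrum, and the bin selection is illustrative.

```python
import numpy as np

# Schroeder-phased multisine: equal-amplitude tones at chosen DFT bins, with
# quadratic phases to keep the crest factor low.
N = 1024                      # samples per period
bins = np.arange(1, 33)       # excited DFT bins: energy only at these points
K = bins.size

k_idx = np.arange(1, K + 1)
phases = -np.pi * k_idx * (k_idx - 1) / K       # one common Schroeder phase rule

n = np.arange(N)
x = sum(np.cos(2 * np.pi * b * n / N + ph) for b, ph in zip(bins, phases))
x /= np.max(np.abs(x))        # normalize the input amplitude

# Check: the DFT of one period has energy only at the excited bins.
X = np.fft.rfft(x)
print(np.count_nonzero(np.abs(X) > 1e-6 * np.abs(X).max()))  # equals K
```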
Acceleration and Velocity Sensing from Measured Strain
NASA Technical Reports Server (NTRS)
Pak, Chan-Gi; Truax, Roger
2016-01-01
A simple approach for computing the acceleration and velocity of a structure from measured strain is proposed in this study. First, the deflection and slope of the structure are computed from the strain using a two-step theory. Frequencies of the structure are computed from the time histories of strain using a parameter estimation technique together with an Autoregressive Moving Average model. From the deflection, slope, and frequencies of the structure, the acceleration and velocity can be obtained using the proposed approach. Keywords: shape sensing, fiber optic strain sensor, system equivalent reduction and expansion process.
NASA Technical Reports Server (NTRS)
Bogdanoff, J. L.; Kayser, K.; Krieger, W.
1977-01-01
The paper describes convergence and response studies of complex systems in the low-frequency range, particularly for low values of damping with different distributions, and reports on the modification of the relaxation procedure required under these conditions. A new method is presented for response estimation in complex lumped-parameter linear systems under random or deterministic steady-state excitation. The essence of the method is the use of relaxation procedures with a suitable error function to find the estimated response; natural frequencies and normal modes are not computed. For a 45-degree-of-freedom system and two relaxation procedures, convergence studies and frequency response estimates were performed. The low-frequency studies are considered in the framework of earlier studies (Kayser and Bogdanoff, 1975) involving the mid- to high-frequency range.
Optimal Window and Lattice in Gabor Transform. Application to Audio Analysis.
Lachambre, Helene; Ricaud, Benjamin; Stempfel, Guillaume; Torrésani, Bruno; Wiesmeyr, Christoph; Onchis-Moaca, Darian
2015-01-01
This article deals with the use of an optimal lattice and an optimal window in Discrete Gabor Transform computation. In the case of a generalized Gaussian window, extending earlier contributions, we introduce an additional local window adaptation technique for non-stationary signals. We illustrate our approach and the earlier one by addressing three time-frequency analysis problems to show the improvements achieved by the use of the optimal lattice and window: distinguishing close frequencies, frequency estimation, and SNR estimation. The results are presented, when possible, with real-world audio signals.
Multiple-Parameter Estimation Method Based on Spatio-Temporal 2-D Processing for Bistatic MIMO Radar
Yang, Shouguo; Li, Yong; Zhang, Kunhui; Tang, Weiping
2015-01-01
A novel spatio-temporal 2-dimensional (2-D) processing method that can jointly estimate the transmitting-receiving azimuth and Doppler frequency for bistatic multiple-input multiple-output (MIMO) radar in the presence of spatial colored noise and an unknown number of targets is proposed. In the temporal domain, the cross-correlation of the matched filters' outputs for different time-delay sampling is used to eliminate the spatial colored noise. In the spatial domain, the proposed method uses a diagonal loading method and subspace theory to estimate the direction of departure (DOD) and direction of arrival (DOA), and the Doppler frequency can then be accurately estimated through the estimation of the DOD and DOA. By skipping target number estimation and the eigenvalue decomposition (EVD) of the data covariance matrix and requiring only a one-dimensional search, the proposed method achieves low computational complexity. Furthermore, the proposed method is suitable for bistatic MIMO radar with an arbitrary transmitting and receiving geometrical configuration. The correctness and efficiency of the proposed method are verified by computer simulation results.
System and method for motor speed estimation of an electric motor
Lu, Bin [Kenosha, WI; Yan, Ting [Brookfield, WI; Luebke, Charles John [Sussex, WI; Sharma, Santosh Kumar [Viman Nagar, IN
2012-06-19
A system and method for a motor management system includes a computer-readable storage medium and a processing unit. The processing unit is configured to determine a voltage value of a voltage input to an alternating current (AC) motor, determine a frequency value of at least one of a voltage input and a current input to the AC motor, determine a load value from the AC motor, and access a set of motor nameplate data, where the set of motor nameplate data includes a rated power, a rated speed, a rated frequency, and a rated voltage of the AC motor. The processing unit is also configured to estimate a motor speed based on the voltage value, the frequency value, the load value, and the set of nameplate data, and to store the motor speed on the computer-readable storage medium.
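A textbook slip-based approximation of such a nameplate-driven estimate is sketched below; it is an assumed stand-in, not the patent's actual algorithm (slip taken proportional to load and inversely proportional to voltage squared, with synchronous speed scaled by frequency).

```python
def estimate_speed(v, f, load, rated):
    """Sketch of a nameplate-based induction motor speed estimate (rpm).

    rated: dict with keys power (W), speed (rpm), frequency (Hz), voltage (V).
    """
    # Infer the pole count from the nameplate: synchronous speed sits just
    # above rated speed, so round 120*f/n_rated to the nearest even pole count.
    poles = round(120.0 * rated["frequency"] / rated["speed"] / 2.0) * 2
    n_sync_rated = 120.0 * rated["frequency"] / poles
    slip_rated = (n_sync_rated - rated["speed"]) / n_sync_rated

    n_sync = n_sync_rated * f / rated["frequency"]        # scale with supply frequency
    # Textbook approximation: slip ~ load fraction, ~ 1/V^2.
    slip = slip_rated * (load / rated["power"]) * (rated["voltage"] / v) ** 2
    return n_sync * (1.0 - slip)

nameplate = {"power": 15e3, "speed": 1760.0, "frequency": 60.0, "voltage": 460.0}
print(estimate_speed(v=460.0, f=60.0, load=15e3, rated=nameplate))  # ~1760 rpm
```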
Inman, Ernest J.
1997-01-01
Flood-frequency relations were computed for 28 urban stations for 2-, 25-, and 100-year recurrence-interval floods, and the results were compared to the corresponding recurrence-interval floods computed from the estimating equations of a 1995 investigation. Two stations were excluded from further comparisons or analyses because neither station had a significant flood during the period of observed record. The comparisons, based on Student's t-test statistics at the 0.05 level of significance, indicate that the mean residuals of the 25- and 100-year floods were negatively biased by 26.2 percent and 31.6 percent, respectively, at the 26 stations. However, the mean residuals of the 2-year floods were 2.5 percent lower than the mean of the 2-year floods computed from the equations and were not significantly biased. The reason for this negative bias is that the period of observed record at the 26 stations was a relatively dry period. At 25 of the 26 stations, the two highest simulated peaks used to develop the estimating equations occurred many years before the observed record began. However, no attempt was made to adjust the estimating equations, because higher peaks could occur after the period of observed record and an adjustment to the equations would cause an underestimation of design floods.
Ries, Kernell G.; Crouse, Michele Y.
2002-01-01
For many years, the U.S. Geological Survey (USGS) has been developing regional regression equations for estimating flood magnitude and frequency at ungaged sites. These regression equations are used to transfer flood characteristics from gaged to ungaged sites through the use of watershed and climatic characteristics as explanatory or predictor variables. Generally, these equations have been developed on a Statewide or metropolitan-area basis as part of cooperative study programs with specific State Departments of Transportation. In 1994, the USGS released a computer program titled the National Flood Frequency Program (NFF), which compiled all the USGS available regression equations for estimating the magnitude and frequency of floods in the United States and Puerto Rico. NFF was developed in cooperation with the Federal Highway Administration and the Federal Emergency Management Agency. Since the initial release of NFF, the USGS has produced new equations for many areas of the Nation. A new version of NFF has been developed that incorporates these new equations and provides additional functionality and ease of use. NFF version 3 provides regression-equation estimates of flood-peak discharges for unregulated rural and urban watersheds, flood-frequency plots, and plots of typical flood hydrographs for selected recurrence intervals. The Program also provides weighting techniques to improve estimates of flood-peak discharges for gaging stations and ungaged sites. The information provided by NFF should be useful to engineers and hydrologists for planning and design applications. This report describes the flood-regionalization techniques used in NFF and provides guidance on the applicability and limitations of the techniques. The NFF software and the documentation for the regression equations included in NFF are available at http://water.usgs.gov/software/nff.html.
Herdağdelen, Amaç; Marelli, Marco
2017-05-01
Corpus-based word frequencies are one of the most important predictors in language processing tasks. Frequencies based on conversational corpora (such as movie subtitles) are shown to better capture the variance in lexical decision tasks compared to traditional corpora. In this study, we show that frequencies computed from social media are currently the best frequency-based estimators of lexical decision reaction times (up to 3.6% increase in explained variance). The results are robust (observed for Twitter- and Facebook-based frequencies on American English and British English datasets) and are still substantial when we control for corpus size.
Computation of Asteroid Proper Elements: Recent Advances
NASA Astrophysics Data System (ADS)
Knežević, Z.
2017-12-01
The recent advances in computation of asteroid proper elements are briefly reviewed. Although not representing real breakthroughs in the computation and stability assessment of proper elements, these advances can still be considered important improvements offering solutions to some practical problems encountered in the past. The problem of obtaining unrealistic values of the perihelion frequency for very-low-eccentricity orbits is solved by computing frequencies using the frequency-modified Fourier transform. The synthetic resonant proper elements adjusted to a given secular resonance helped to prove the existence of the Astraea asteroid family. The preliminary assessment of the stability with time of proper elements computed by means of the analytical theory provides a good indication of their poorer performance with respect to their synthetic counterparts, and argues in favor of ceasing their regular maintenance; the final decision should, however, be taken on the basis of a more comprehensive and reliable direct estimate of their individual and sample-average deviations from constancy.
Peak-flow frequency for tributaries of the Colorado River downstream of Austin, Texas
Asquith, William H.
1998-01-01
Peak-flow frequency for 38 stations with at least 8 years of data in natural (unregulated and nonurbanized) basins was estimated on the basis of annual peak-streamflow data through water year 1995. Peak-flow frequency represents the peak discharges for recurrence intervals of 2, 5, 10, 25, 50, 100, 250, and 500 years. The peak-flow frequency and drainage-basin characteristics for the stations were used to develop two sets of regression equations to estimate peak-flow frequency for tributaries of the Colorado River in the study area. One set of equations was developed for contributing drainage areas less than 32 square miles, and another set was developed for contributing drainage areas greater than 32 square miles. A procedure is presented to estimate the peak discharge at sites where both sets of equations are considered applicable. Additionally, procedures are presented to compute the 50-, 67-, and 90-percent prediction intervals for any estimate from the equations.
Ahearn, Elizabeth A.
2008-01-01
Flow durations, low-flow frequencies, and monthly median streamflows were computed for 91 continuous-record, streamflow-gaging stations in Connecticut with 10 or more years of record. Flow durations include the 99-, 98-, 97-, 95-, 90-, 85-, 80-, 75-, 70-, 60-, 50-, 40-, 30-, 25-, 20-, 10-, 5-, and 1-percent exceedances. Low-flow frequencies include the 7-day, 10-year (7Q10) low flow; 7-day, 2-year (7Q2) low flow; and 30-day, 2-year (30Q2) low flow. Streamflow estimates were computed for each station using data for the period of record through water year 2005. Estimates of low-flow statistics for 7 short-term (operated between 3 and 10 years) streamflow-gaging stations and 31 partial-record sites were computed. Low-flow estimates were made on the basis of the relation between base flows at a short-term station or partial-record site and concurrent daily mean streamflows at a nearby index station. The relation is defined by the Maintenance of Variance Extension, type 3 (MOVE.3) method. Several short-term stations and partial-record sites had poorly defined relations with nearby index stations; therefore, no low-flow statistics were derived for these sites. The estimated low-flow statistics for the short-term stations and partial-record sites include the 99-, 98-, 97-, 95-, 90-, and 85-percent flow durations; the 7-day, 10-year (7Q10) low flow; 7-day, 2-year (7Q2) low flow; and 30-day, 2-year (30Q2) low-flow frequencies; and the August median flow. Descriptive information on location and record length, measured basin characteristics, index stations correlated to the short-term station and partial-record sites, and estimated flow statistics are provided in this report for each station. Streamflow estimates from this study are stored on USGS's World Wide Web application 'StreamStats' (http://water.usgs.gov/osw/streamstats/connecticut.html).
Magnitude and Frequency of Floods for Urban and Small Rural Streams in Georgia, 2008
Gotvald, Anthony J.; Knaak, Andrew E.
2011-01-01
A study was conducted that updated methods for estimating the magnitude and frequency of floods in ungaged urban basins in Georgia that are not substantially affected by regulation or tidal fluctuations. Annual peak-flow data for urban streams through September 2008 were analyzed for 50 streamgaging stations (streamgages) in Georgia and 6 streamgages on adjacent urban streams in Florida and South Carolina having 10 or more years of data. Flood-frequency estimates were computed for the 56 urban streamgages by fitting the logarithms of annual peak flows for each streamgage to a Pearson Type III distribution. Additionally, basin characteristics for the streamgages were computed by using a geographical information system and computer algorithms. Regional regression analysis, using generalized least-squares regression, was used to develop a set of equations for estimating flows with 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probabilities for ungaged urban basins in Georgia. In addition to the 56 urban streamgages, 171 rural streamgages were included in the regression analysis to maintain continuity between flood estimates for urban and rural basins as the basin characteristics pertaining to urbanization approach zero. Because 21 of the rural streamgages have drainage areas less than 1 square mile, the set of equations developed for this study can also be used for estimating flows on small ungaged rural streams in Georgia. Flood-frequency estimates and basin characteristics for 227 streamgages were combined to form the final database used in the regional regression analysis. Four hydrologic regions were developed for Georgia. The final equations are functions of drainage area and percentage of impervious area for three of the regions, and drainage area, percentage of developed land, and mean basin slope for the fourth region. Average standard errors of prediction for these regression equations range from 20.0 to 74.5 percent.
Designing Estimator/Predictor Digital Phase-Locked Loops
NASA Technical Reports Server (NTRS)
Statman, J. I.; Hurd, W. J.
1988-01-01
Signal delays in equipment are compensated automatically. A new approach to the design of a digital phase-locked loop (DPLL) incorporates concepts from estimation theory and involves decomposition of the closed-loop transfer function into an estimator and a predictor. The estimator provides recursive estimates of phase, frequency, and higher-order derivatives of phase with respect to time, while the predictor compensates for the delay, called "transport lag," caused by the PLL equipment and by the DPLL computations.
Modified fast frequency acquisition via adaptive least squares algorithm
NASA Technical Reports Server (NTRS)
Kumar, Rajendra (Inventor)
1992-01-01
A method and the associated apparatus for estimating the amplitude, frequency, and phase of a signal of interest are presented. The method comprises the following steps: (1) inputting the signal of interest; (2) generating a reference signal with adjustable amplitude, frequency and phase at an output thereof; (3) mixing the signal of interest with the reference signal and a signal 90 deg out of phase with the reference signal to provide a pair of quadrature sample signals comprising respectively a difference between the signal of interest and the reference signal and a difference between the signal of interest and the signal 90 deg out of phase with the reference signal; (4) using the pair of quadrature sample signals to compute estimates of the amplitude, frequency, and phase of an error signal comprising the difference between the signal of interest and the reference signal employing a least squares estimation; (5) adjusting the amplitude, frequency, and phase of the reference signal from the numerically controlled oscillator in a manner which drives the error signal towards zero; and (6) outputting the estimates of the amplitude, frequency, and phase of the error signal in combination with the reference signal to produce a best estimate of the amplitude, frequency, and phase of the signal of interest. The preferred method includes the step of providing the error signal as a real time confidence measure as to the accuracy of the estimates wherein the closer the error signal is to zero, the higher the probability that the estimates are accurate. A matrix in the estimation algorithm provides an estimate of the variance of the estimation error.
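A much-simplified, open-loop Python sketch of the quadrature idea: mix the input with a reference oscillator, then least-squares-fit the baseband phase ramp. The patented method closes the loop and refines the reference recursively, which is not reproduced here.

```python
import numpy as np

fs = 1000.0
t = np.arange(2048) / fs
A, f0, ph = 1.3, 201.7, 0.4                     # unknowns to estimate
x = A * np.cos(2 * np.pi * f0 * t + ph) + 0.1 * np.random.randn(t.size)

f_ref = 200.0                                   # reference oscillator frequency
z = x * np.exp(-2j * np.pi * f_ref * t)         # quadrature mixing (I + jQ)

k = 32                                          # crude lowpass: moving average
z = np.convolve(z, np.ones(k) / k, mode='valid')
tz = t[(k - 1) // 2:(k - 1) // 2 + z.size]      # align for the filter delay

phase = np.unwrap(np.angle(z))
slope, intercept = np.polyfit(tz, phase, 1)     # least-squares phase ramp
print("frequency:", f_ref + slope / (2 * np.pi))   # ~201.7 Hz
print("phase:", intercept % (2 * np.pi))           # ~0.4 rad
print("amplitude:", 2 * np.abs(z).mean())          # ~1.3
```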
NASA Astrophysics Data System (ADS)
Kassem, M.; Soize, C.; Gagliardini, L.
2009-06-01
In this paper, an energy-density field approach applied to the vibroacoustic analysis of complex industrial structures in the low- and medium-frequency ranges is presented. This approach uses a statistical computational model. The analyzed system consists of an automotive vehicle structure coupled with its internal acoustic cavity. The objective of this paper is to make use of the statistical properties of the frequency response functions of the vibroacoustic system observed from previous experimental and numerical work. The frequency response functions are expressed in terms of a dimensionless matrix which is estimated using the proposed energy approach. Using this dimensionless matrix, a simplified vibroacoustic model is proposed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deupree, Robert G., E-mail: bdeupree@ap.smu.ca
2011-11-20
A rotating, two-dimensional stellar model is evolved to match the approximate conditions of α Oph. Both axisymmetric and nonaxisymmetric oscillation frequencies are computed for two-dimensional rotating models which approximate the properties of α Oph. These computed frequencies are compared to the observed frequencies. Oscillation calculations are made assuming the eigenfunction can be fitted with six Legendre polynomials, but comparison calculations with eight Legendre polynomials show the frequencies agree to within about 0.26% on average. The surface horizontal shape of the eigenfunctions for the two sets of assumed numbers of Legendre polynomials agrees less well, but all calculations show significant departures from that of a single Legendre polynomial. It is still possible to determine the large separation, although the small separation is more complicated to estimate. With the addition of the nonaxisymmetric modes with |m| ≤ 4, the frequency space becomes sufficiently dense that it is difficult to comment on the adequacy of the fit of the computed to the observed frequencies. While the nonaxisymmetric frequency mode splitting is no longer uniform, the frequency difference between the frequencies for positive and negative values of the same m remains 2m times the rotation rate.
An estimator for the standard deviation of a natural frequency. I.
NASA Technical Reports Server (NTRS)
Schiff, A. J.; Bogdanoff, J. L.
1971-01-01
A brief review of mean-square approximate systems is given. The case in which the masses are deterministic is considered first in the derivation of an estimator for the upper bound of the standard deviation of a natural frequency. Two examples are presented: a two-degree-of-freedom system and a case in which the disorder in the springs is perfectly correlated. For purposes of comparison, a Monte Carlo simulation was done on a digital computer.
Jennings, M.E.; Thomas, W.O.; Riggs, H.C.
1994-01-01
For many years, the U.S. Geological Survey (USGS) has been involved in the development of regional regression equations for estimating flood magnitude and frequency at ungaged sites. These regression equations are used to transfer flood characteristics from gaged to ungaged sites through the use of watershed and climatic characteristics as explanatory or predictor variables. Generally these equations have been developed on a statewide or metropolitan-area basis as part of cooperative study programs with specific State Departments of Transportation or specific cities. The USGS, in cooperation with the Federal Highway Administration and the Federal Emergency Management Agency, has compiled all the current (as of September 1993) statewide and metropolitan-area regression equations into a microcomputer program titled the National Flood Frequency Program. This program includes regression equations for estimating flood-peak discharges and techniques for estimating a typical flood hydrograph for a given recurrence-interval peak discharge for unregulated rural and urban watersheds. These techniques should be useful to engineers and hydrologists for planning and design applications. This report summarizes the statewide regression equations for rural watersheds in each State, summarizes the applicable metropolitan-area or statewide regression equations for urban watersheds, describes the National Flood Frequency Program for making these computations, and provides much of the reference information on the extrapolation variables needed to run the program.
Performance Bounds on Micro-Doppler Estimation and Adaptive Waveform Design Using OFDM Signals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sen, Satyabrata; Barhen, Jacob; Glover, Charles Wayne
We analyze the performance of a wideband orthogonal frequency division multiplexing (OFDM) signal in estimating the micro-Doppler frequency of a target having multiple rotating scatterers (e.g., rotor blades of a helicopter, propellers of a submarine). The presence of rotating scatterers introduces Doppler frequency modulation in the received signal by generating sidebands about the transmitted frequencies. This is called the micro-Doppler effect. The use of a frequency-diverse OFDM signal in this context enables us to independently analyze the micro-Doppler characteristics with respect to a set of orthogonal subcarrier frequencies. Therefore, to characterize the accuracy of micro-Doppler frequency estimation, we compute the Cramér-Rao Bound (CRB) on the angular-velocity estimate of the target while considering the scatterer responses as deterministic but unknown nuisance parameters. Additionally, to improve the accuracy of the estimation procedure, we formulate and solve an optimization problem by minimizing the CRB on the angular-velocity estimate with respect to the transmitted OFDM spectral coefficients. We present several numerical examples to demonstrate the CRB variations at different values of the signal-to-noise ratio (SNR) and the number of OFDM subcarriers. The CRB values not only decrease with the increase in the SNR values, but also reduce as we increase the number of subcarriers, implying the significance of frequency-diverse OFDM waveforms. The improvement in estimation accuracy due to the adaptive waveform design is also numerically analyzed. Interestingly, we find that the relative decrease in the CRBs on the angular-velocity estimate is more pronounced for a larger number of OFDM subcarriers.
Micro-Doppler Signal Time-Frequency Algorithm Based on STFRFT.
Pang, Cunsuo; Han, Yan; Hou, Huiling; Liu, Shengheng; Zhang, Nan
2016-09-24
This paper proposes a time-frequency algorithm based on the short-time fractional order Fourier transform (STFRFT) for identification of targets with complicated movements. The algorithm, consisting of an STFRFT order-changing and quick-selection method, is effective in reducing the computational load. A multi-order STFRFT time-frequency algorithm is also developed that makes use of the time-frequency feature of each micro-Doppler component signal. This algorithm improves the estimation accuracy of time-frequency curve fitting through multi-order matching. Finally, experimental data were used to demonstrate STFRFT's performance in micro-Doppler time-frequency analysis. The results validated the higher estimation accuracy of the proposed algorithm. It may be applied to LFM (linear frequency modulated) pulse radar, SAR (synthetic aperture radar), or ISAR (inverse synthetic aperture radar) for improving the probability of target recognition.
Real-Time Parameter Estimation in the Frequency Domain
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
2000-01-01
A method for real-time estimation of parameters in a linear dynamic state-space model was developed and studied. The application is aircraft dynamic model parameter estimation from measured data in flight. Equation error in the frequency domain was used with a recursive Fourier transform for the real-time data analysis. Linear and nonlinear simulation examples and flight test data from the F-18 High Alpha Research Vehicle were used to demonstrate that the technique produces accurate model parameter estimates with appropriate error bounds. Parameter estimates converged in less than one cycle of the dominant dynamic mode, using no a priori information, with control surface inputs measured in flight during ordinary piloted maneuvers. The real-time parameter estimation method has low computational requirements and could be implemented
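A sketch of the recursive Fourier transform ingredient, assuming the finite-Fourier-sum formulation in which each new sample updates the transform at a fixed set of analysis frequencies; the class name and frequencies are illustrative.

```python
import numpy as np

class RecursiveDFT:
    """Running Fourier transform at a fixed set of analysis frequencies.

    Each new sample costs O(K) work for K frequencies, so a frequency-domain
    regression can be refreshed at every time step in real time.
    """
    def __init__(self, freqs_hz, dt):
        self.w = 2 * np.pi * np.asarray(freqs_hz)
        self.dt = dt
        self.n = 0
        self.X = np.zeros(self.w.size, dtype=complex)

    def update(self, sample):
        # X_k <- X_k + x(n) * exp(-j * w_k * n * dt)  (finite Fourier sum)
        self.X += sample * np.exp(-1j * self.w * self.n * self.dt)
        self.n += 1
        return self.X * self.dt          # approximation of the Fourier integral

dt = 0.02
rdft = RecursiveDFT(freqs_hz=[0.5, 1.0, 1.5], dt=dt)   # band of interest
for n in range(500):
    x = np.sin(2 * np.pi * 1.0 * n * dt)                # measured signal
    X = rdft.update(x)
print(np.abs(X))    # energy concentrates at the 1.0 Hz analysis frequency
```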
Computationally efficient algorithm for high sampling-frequency operation of active noise control
NASA Astrophysics Data System (ADS)
Rout, Nirmal Kumar; Das, Debi Prasad; Panda, Ganapati
2015-05-01
In high sampling-frequency operation of an active noise control (ANC) system, the secondary-path estimate and the ANC filter are very long. This increases the computational complexity of the conventional filtered-x least mean square (FXLMS) algorithm. To reduce the computational complexity of long-order ANC systems using the FXLMS algorithm, frequency-domain block ANC algorithms have been proposed in the past. These full-block frequency-domain ANC algorithms suffer from disadvantages such as large block delay, quantization error due to the computation of large transforms, and implementation difficulties on existing low-end DSP hardware. To overcome these shortcomings, a partitioned block ANC algorithm is newly proposed, in which the long filters in the ANC system are divided into a number of equal partitions and suitably assembled to perform the FXLMS algorithm in the frequency domain. The complexity of the proposed frequency-domain partitioned block FXLMS (FPBFXLMS) algorithm is much reduced compared to the conventional FXLMS algorithm. It is further reduced by merging one fast Fourier transform (FFT)-inverse fast Fourier transform (IFFT) combination to derive the reduced-structure FPBFXLMS (RFPBFXLMS) algorithm. Computational complexity analysis for different filter orders and partition sizes is presented. Systematic computer simulations are carried out for both proposed partitioned block ANC algorithms to show their accuracy compared to the time-domain FXLMS algorithm.
NASA Technical Reports Server (NTRS)
Dunn, H. J.
1981-01-01
A computer program for performing frequency analysis of time history data is presented. The program uses circular convolution and the fast Fourier transform to calculate the power density spectrum (PDS) of time history data. The program interfaces with the Advanced Continuous Simulation Language (ACSL) so that a frequency analysis may be performed on ACSL-generated simulation variables. An example of the calculation of the PDS of a Van der Pol oscillator is presented.
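In modern terms, the program's task looks like the following sketch (simulation plus an FFT-based periodogram; the report's program computes the PDS via circular convolution, which is not reproduced here).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Simulate a Van der Pol oscillator, then estimate its power density spectrum.
mu = 1.0
def vdp(t, y):
    return [y[1], mu * (1 - y[0]**2) * y[1] - y[0]]

fs = 20.0
t = np.arange(0, 200, 1 / fs)
sol = solve_ivp(vdp, (t[0], t[-1]), [2.0, 0.0], t_eval=t, rtol=1e-8)

x = sol.y[0] - sol.y[0].mean()
X = np.fft.rfft(x * np.hanning(x.size))          # windowed FFT
psd = np.abs(X)**2 / (fs * x.size)               # periodogram estimate of the PDS
freqs = np.fft.rfftfreq(x.size, 1 / fs)
print(freqs[np.argmax(psd)])   # dominant frequency near 0.15 Hz for mu = 1
```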
Model-based spectral estimation of Doppler signals using parallel genetic algorithms.
Solano González, J; Rodríguez Vázquez, K; García Nocetti, D F
2000-05-01
Conventional spectral analysis methods use a fast Fourier transform (FFT) on consecutive or overlapping windowed data segments. For Doppler ultrasound signals, this approach suffers from an inadequate frequency resolution due to the time segment duration and the non-stationarity characteristics of the signals. Parametric or model-based estimators can give significant improvements in the time-frequency resolution at the expense of a higher computational complexity. This work describes an approach which implements in real-time a parametric spectral estimator method using genetic algorithms (GAs) in order to find the optimum set of parameters for the adaptive filter that minimises the error function. The aim is to reduce the computational complexity of the conventional algorithm by using the simplicity associated with GAs and exploiting their parallel characteristics. This allows the implementation of higher-order filters, increasing the spectral resolution and opening greater scope for using more complex methods.
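For reference, the model-based estimator at the core of such schemes can be sketched without the GA search: fit an autoregressive model by ordinary least squares (the covariance method) and evaluate its spectrum. The GA in the paper replaces this direct solve with a parallel parameter search; everything below is an illustrative assumption.

import numpy as np

def ar_spectrum(x, order, nfft=512, fs=1.0):
    # Least-squares (covariance-method) AR fit and its spectrum.
    N = len(x)
    A = np.column_stack([x[order - k - 1:N - k - 1] for k in range(order)])
    coeffs, *_ = np.linalg.lstsq(A, x[order:], rcond=None)
    sigma2 = (x[order:] - A @ coeffs).var()       # prediction-error power
    f = np.linspace(0, fs / 2, nfft)
    z = np.exp(-2j * np.pi * f / fs)
    denom = np.abs(1 - sum(c * z ** (k + 1) for k, c in enumerate(coeffs))) ** 2
    return f, sigma2 / denom

# Example: AR spectrum of two sinusoids in noise
rng = np.random.default_rng(1)
n = np.arange(2048)
x = (np.sin(0.2 * np.pi * n) + 0.5 * np.sin(0.3 * np.pi * n)
     + 0.2 * rng.standard_normal(n.size))
f, p = ar_spectrum(x, order=8, fs=1.0)
print("dominant peak near %.3f cycles/sample" % f[p.argmax()])  # ~0.100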
2007-01-01
and frequency transfer (TWSTFT) were performed along three transatlantic links over the 6-month period 29 January – 31 July 2006. The GPSCPFT and ... TWSTFT results were subtracted in order to estimate the combined uncertainty of the methods. The frequency values obtained from GPSCPFT and TWSTFT ... values were equal to or less than the frequency-stability values σ_{y(GPSCPFT)−y(TWSTFT)}(τ) (or TheoBR(τ)) computed for the corresponding averaging
Speech Enhancement Using Gaussian Scale Mixture Models
Hao, Jiucang; Lee, Te-Won; Sejnowski, Terrence J.
2011-01-01
This paper presents a novel probabilistic approach to speech enhancement. Instead of a deterministic logarithmic relationship, we assume a probabilistic relationship between the frequency coefficients and the log-spectra. The speech model in the log-spectral domain is a Gaussian mixture model (GMM). The frequency coefficients obey a zero-mean Gaussian whose covariance equals the exponential of the log-spectra. This results in a Gaussian scale mixture model (GSMM) for the speech signal in the frequency domain, since the log-spectra can be regarded as scaling factors. The probabilistic relation between frequency coefficients and log-spectra allows these to be treated as two random variables, both to be estimated from the noisy signals. Expectation-maximization (EM) was used to train the GSMM and Bayesian inference was used to compute the posterior signal distribution. Because exact inference of this full probabilistic model is computationally intractable, we developed two approaches to enhance the efficiency: the Laplace method and a variational approximation. The proposed methods were applied to enhance speech corrupted by Gaussian noise and speech-shaped noise (SSN). For both approximations, signals reconstructed from the estimated frequency coefficients provided a higher signal-to-noise ratio (SNR), and those reconstructed from the estimated log-spectra produced a lower word recognition error rate because the log-spectra fit the inputs to the recognizer better. Our algorithms effectively reduced the SSN, which algorithms based on spectral analysis were not able to suppress. PMID:21359139
A New Method for Estimating the Effective Population Size from Allele Frequency Changes
Pollak, Edward
1983-01-01
A new procedure is proposed for estimating the effective population size, given that information is available on changes in frequencies of the alleles at one or more independently segregating loci and the population is observed at two or more separate times. Approximate expressions are obtained for the variances of the new statistic, as well as others, also based on allele frequency changes, that have been discussed in the literature. This analysis indicates that the new statistic will generally have a smaller variance than the others. Estimates of effective population sizes and of the standard errors of the estimates are computed for data on two fly populations that have been discussed in earlier papers. In both cases, there is evidence that the effective population size is very much smaller than the minimum census size of the population. PMID:17246147
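A minimal sketch of the temporal method the paper builds on, under strong simplifying assumptions: an Fk-type standardized variance of allele-frequency change is averaged over loci and inverted through the drift expectation E[F] ≈ t/(2Ne). The finite-sample corrections whose variances the paper analyzes are deliberately omitted, and the numbers are illustrative.

import numpy as np

def fk_statistic(x, y):
    # Pollak-style standardized variance of allele-frequency change for one
    # locus; x, y are allele-frequency vectors at the two sample times.
    x, y = np.asarray(x, float), np.asarray(y, float)
    K = len(x)
    return np.sum((x - y) ** 2 / ((x + y) / 2)) / (K - 1)

def ne_estimate(loci_pairs, t):
    # Drift expectation E[F] ~ t/(2*Ne)  =>  Ne ~ t/(2*mean F).
    # Sampling-size corrections (terms in 1/(2S)) are omitted in this sketch.
    F = np.mean([fk_statistic(x, y) for x, y in loci_pairs])
    return t / (2 * F)

# Two loci observed t = 10 generations apart (illustrative numbers)
loci = [([0.60, 0.30, 0.10], [0.52, 0.33, 0.15]),
        ([0.50, 0.50],       [0.42, 0.58])]
print("Ne estimate: %.1f" % ne_estimate(loci, t=10))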
Polynomial Phase Estimation Based on Adaptive Short-Time Fourier Transform
Jing, Fulong; Zhang, Chunjie; Si, Weijian; Wang, Yu; Jiao, Shuhong
2018-01-01
Polynomial phase signals (PPSs) have numerous applications in many fields including radar, sonar, geophysics, and radio communication systems. Therefore, estimation of PPS coefficients is very important. In this paper, a novel approach for PPS parameters estimation based on adaptive short-time Fourier transform (ASTFT), called the PPS-ASTFT estimator, is proposed. Using the PPS-ASTFT estimator, both one-dimensional and multi-dimensional searches and error propagation problems, which widely exist in PPSs field, are avoided. In the proposed algorithm, the instantaneous frequency (IF) is estimated by S-transform (ST), which can preserve information on signal phase and provide a variable resolution similar to the wavelet transform (WT). The width of the ASTFT analysis window is equal to the local stationary length, which is measured by the instantaneous frequency gradient (IFG). The IFG is calculated by the principal component analysis (PCA), which is robust to the noise. Moreover, to improve estimation accuracy, a refinement strategy is presented to estimate signal parameters. Since the PPS-ASTFT avoids parameter search, the proposed algorithm can be computed in a reasonable amount of time. The estimation performance, computational cost, and implementation of the PPS-ASTFT are also analyzed. The conducted numerical simulations support our theoretical results and demonstrate an excellent statistical performance of the proposed algorithm. PMID:29438317
Univariate and Bivariate Loglinear Models for Discrete Test Score Distributions.
ERIC Educational Resources Information Center
Holland, Paul W.; Thayer, Dorothy T.
2000-01-01
Applied the theory of exponential families of distributions to the problem of fitting the univariate histograms and discrete bivariate frequency distributions that often arise in the analysis of test scores. Considers efficient computation of the maximum likelihood estimates of the parameters using Newton's Method and computationally efficient…
Robust Real-Time Wide-Area Differential GPS Navigation
NASA Technical Reports Server (NTRS)
Yunck, Thomas P. (Inventor); Bertiger, William I. (Inventor); Lichten, Stephen M. (Inventor); Mannucci, Anthony J. (Inventor); Muellerschoen, Ronald J. (Inventor); Wu, Sien-Chong (Inventor)
1998-01-01
The present invention provides a method and a device for providing superior differential GPS positioning data. The system includes a group of GPS receiving ground stations covering a wide area of the Earth's surface. Unlike other differential GPS systems, wherein the known position of each ground station is used to geometrically compute an ephemeris for each GPS satellite, the present system utilizes real-time computation of satellite orbits based on GPS data received from fixed ground stations through a Kalman-type filter/smoother whose output adjusts a real-time orbital model. The orbital model produces and outputs orbital corrections allowing satellite ephemerides to be known with considerably greater accuracy than from the GPS system broadcasts. The modeled orbits are propagated ahead in time and differenced with actual pseudorange data to compute clock offsets at rapid intervals to compensate for SA clock dither. The orbital and clock calculations are based on dual-frequency GPS data, which allow computation of the estimated signal delay at each ionospheric point. These delay data are used in real-time to construct and update an ionospheric shell map of total electron content, which is output as part of the orbital correction data, thereby allowing single-frequency users to estimate ionospheric delay with an accuracy approaching that of dual-frequency users.
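The dual-frequency step the patent relies on is standard GPS physics: ionospheric group delay scales as 1/f², so two pseudoranges at f1 and f2 isolate the delay and hence the total electron content along the path. The sketch below shows only that textbook relation, not the patent's filtering or shell-map construction; the pseudorange values are invented.

# GPS L1/L2 carrier frequencies (Hz)
F1, F2 = 1575.42e6, 1227.60e6

def iono_delay_L1(p1, p2):
    # Ionospheric delay (m) on L1 from pseudoranges p1, p2 (m):
    # I1 = (p2 - p1) * f2^2 / (f1^2 - f2^2)
    return (p2 - p1) * F2**2 / (F1**2 - F2**2)

def slant_tec(p1, p2):
    # Slant total electron content (electrons/m^2): I = 40.3 * TEC / f^2
    return iono_delay_L1(p1, p2) * F1**2 / 40.3

p1, p2 = 21_000_000.0, 21_000_003.2    # illustrative pseudoranges
print("L1 iono delay: %.2f m, TEC: %.2e el/m^2"
      % (iono_delay_L1(p1, p2), slant_tec(p1, p2)))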
Asquith, William H.; Slade, R.M.
1999-01-01
The U.S. Geological Survey, in cooperation with the Texas Department of Transportation, has developed a computer program to estimate peak-streamflow frequency for ungaged sites in natural basins in Texas. Peak-streamflow frequency refers to the peak streamflows for recurrence intervals of 2, 5, 10, 25, 50, and 100 years. Peak-streamflow frequency estimates are needed by planners, managers, and design engineers for flood-plain management; for objective assessment of flood risk; for cost-effective design of roads and bridges; and for the design of culverts, dams, levees, and other flood-control structures. The program estimates peak-streamflow frequency using a site-specific approach and a multivariate generalized least-squares linear regression. A site-specific approach differs from a traditional regional regression approach by developing unique equations to estimate peak-streamflow frequency specifically for the ungaged site. The stations included in the regression are selected using an informal cluster analysis that compares the basin characteristics of the ungaged site to the basin characteristics of all the stations in the data base. The program provides several choices for selecting the stations. Selecting the stations using cluster analysis ensures that the stations included in the regression will have the most pertinent information about the flooding characteristics of the ungaged site and therefore provide the basis for potentially improved peak-streamflow frequency estimation. An evaluation of the site-specific approach in estimating peak-streamflow frequency for gaged sites indicates that the site-specific approach is at least as accurate as a traditional regional regression approach.
Blade frequency program for nonuniform helicopter rotors, with automated frequency search
NASA Technical Reports Server (NTRS)
Sadler, S. G.
1972-01-01
A computer program for determining the natural frequencies and normal modes of a lumped-parameter model of a rotating, twisted beam with nonuniform mass and elastic properties was developed. The program is used to solve the conditions existing in a helicopter rotor, where the outboard end of the rotor has zero forces and moments. Three frequency search methods have been implemented, including an automatic search technique that allows the program to find up to the fifteen lowest natural frequencies without requiring input estimates of these frequencies.
Perry, Charles A.; Wolock, David M.; Artman, Joshua C.
2004-01-01
Streamflow statistics of flow duration and peak-discharge frequency were estimated for 4,771 individual locations on streams listed on the 1999 Kansas Surface Water Register. These statistics included the flow-duration values of 90, 75, 50, 25, and 10 percent, as well as the mean flow value. Peak-discharge frequency values were estimated for the 2-, 5-, 10-, 25-, 50-, and 100-year floods. Least-squares multiple regression techniques were used, along with Tobit analyses, to develop equations for estimating flow-duration values of 90, 75, 50, 25, and 10 percent and the mean flow for uncontrolled flow stream locations. The contributing-drainage areas of 149 U.S. Geological Survey streamflow-gaging stations in Kansas and parts of surrounding States that had flow uncontrolled by Federal reservoirs and used in the regression analyses ranged from 2.06 to 12,004 square miles. Logarithmic transformations of climatic and basin data were performed to yield the best linear relation for developing equations to compute flow durations and mean flow. In the regression analyses, the significant climatic and basin characteristics, in order of importance, were contributing-drainage area, mean annual precipitation, mean basin permeability, and mean basin slope. The analyses yielded a model standard error of prediction range of 0.43 logarithmic units for the 90-percent duration analysis to 0.15 logarithmic units for the 10-percent duration analysis. The model standard error of prediction was 0.14 logarithmic units for the mean flow. Regression equations used to estimate peak-discharge frequency values were obtained from a previous report, and estimates for the 2-, 5-, 10-, 25-, 50-, and 100-year floods were determined for this report. The regression equations and an interpolation procedure were used to compute flow durations, mean flow, and estimates of peak-discharge frequency for locations along uncontrolled flow streams on the 1999 Kansas Surface Water Register. Flow durations, mean flow, and peak-discharge frequency values determined at available gaging stations were used to interpolate the regression-estimated flows for the stream locations where available. Streamflow statistics for locations that had uncontrolled flow were interpolated using data from gaging stations weighted according to the drainage area and the bias between the regression-estimated and gaged flow information. On controlled reaches of Kansas streams, the streamflow statistics were interpolated between gaging stations using only gaged data weighted by drainage area.
Asquith, William H.; Barbie, Dana L.
2014-01-01
Selected summary statistics (L-moments) and estimates of respective sampling variances were computed for the 35 streamgages lacking statistically significant trends. From the L-moments and estimated sampling variances, weighted means or regional values were computed for each L-moment. An example application is included demonstrating how the L-moments could be used to evaluate the magnitude and frequency of annual mean streamflow.
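The L-moment statistics themselves are straightforward to compute from a sorted sample via probability-weighted moments. The sketch below uses the standard unbiased PWM construction, with synthetic data standing in for the annual mean flows; it illustrates the statistics summarized in the report rather than the report's own computation.

import numpy as np

def sample_l_moments(x):
    # First four sample L-moments via unbiased probability-weighted moments.
    x = np.sort(np.asarray(x, float))
    n = len(x)
    i = np.arange(n)
    b0 = x.mean()
    b1 = np.sum(i * x) / (n * (n - 1))
    b2 = np.sum(i * (i - 1) * x) / (n * (n - 1) * (n - 2))
    b3 = np.sum(i * (i - 1) * (i - 2) * x) / (n * (n - 1) * (n - 2) * (n - 3))
    l1 = b0
    l2 = 2 * b1 - b0
    l3 = 6 * b2 - 6 * b1 + b0
    l4 = 20 * b3 - 30 * b2 + 12 * b1 - b0
    return l1, l2, l3, l4

rng = np.random.default_rng(2)
flows = rng.gamma(shape=2.0, scale=50.0, size=60)   # stand-in annual mean flows
l1, l2, l3, l4 = sample_l_moments(flows)
print("mean=%.1f  L-scale=%.1f  L-skew=%.3f  L-kurtosis=%.3f"
      % (l1, l2, l3 / l2, l4 / l2))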
NASA Technical Reports Server (NTRS)
Su, Shin-Yi; Kessler, Donald J.
1991-01-01
The present study examines a very fast method of calculating the collision frequency between two low-eccentricity orbiting bodies for evaluating the evolution of earth-orbiting objects such as space debris. The results are very accurate and the required computer time is negligible. The method is now applied without modification to calculate the collision frequencies for moderately and highly eccentric orbits.
Peak-flow frequency estimates through 1994 for gaged streams in South Dakota
Burr, M.J.; Korkow, K.L.
1996-01-01
Annual peak-flow data are listed for 250 continuous-record and crest-stage gaging stations in South Dakota. Peak-flow frequency estimates for selected recurrence intervals ranging from 2 to 500 years are given for 234 of these 250 stations. The log-Pearson Type III procedure was used to compute the frequency relations for the 234 stations, which in 1994 included 105 active and 129 inactive stations. The log-Pearson Type III procedure is recommended by the Hydrology Subcommittee of the Interagency Advisory Committee on Water Data (1982), "Guidelines for Determining Flood Flow Frequency." No peak-flow frequency estimates are given for 16 of the 250 stations because of: (1) extreme variability in the data set; (2) more than 20 percent of years with no flow; (3) annual peak flows representing large outflow from a spring; (4) insufficient peak-flow record subsequent to reservoir regulation; or (5) peak-flow records combined with records from nearby stations.
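The log-Pearson Type III computation can be sketched briefly: fit the mean, standard deviation, and skew of the log-transformed peaks, then evaluate a quantile through the frequency factor. The version below uses the Wilson-Hilferty approximation for the frequency factor and omits the regional-skew weighting, outlier tests, and other adjustments the cited guidelines prescribe, so it is an illustration rather than a Bulletin-compliant implementation.

import numpy as np
from scipy.stats import norm

def lp3_quantile(peaks, return_period):
    # Method-of-moments log-Pearson Type III quantile (sketch).
    y = np.log10(np.asarray(peaks, float))
    n = len(y)
    m, s = y.mean(), y.std(ddof=1)
    g = n * np.sum((y - m) ** 3) / ((n - 1) * (n - 2) * s ** 3)  # station skew
    z = norm.ppf(1 - 1 / return_period)           # standard normal deviate
    if abs(g) > 1e-6:                             # Wilson-Hilferty approximation
        K = (2 / g) * ((1 + g * z / 6 - g ** 2 / 36) ** 3 - 1)
    else:
        K = z
    return 10 ** (m + K * s)

rng = np.random.default_rng(3)
peaks = rng.lognormal(mean=6.0, sigma=0.6, size=40)   # illustrative annual peaks
for T in (2, 10, 100):
    print("Q%d = %.0f" % (T, lp3_quantile(peaks, T)))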
Efficient methods for joint estimation of multiple fundamental frequencies in music signals
NASA Astrophysics Data System (ADS)
Pertusa, Antonio; Iñesta, José M.
2012-12-01
This study presents efficient techniques for multiple fundamental frequency estimation in music signals. The proposed methodology can infer harmonic patterns from a mixture considering interactions with other sources and evaluate them in a joint estimation scheme. For this purpose, a set of fundamental frequency candidates are first selected at each frame, and several hypothetical combinations of them are generated. Combinations are independently evaluated, and the most likely is selected taking into account the intensity and spectral smoothness of its inferred patterns. The method is extended considering adjacent frames in order to smooth the detection in time, and a pitch tracking stage is finally performed to increase the temporal coherence. The proposed algorithms were evaluated in MIREX contests yielding state of the art results with a very low computational burden.
Sun, Chao; Feng, Wenquan; Du, Songlin
2018-01-01
As multipath is one of the dominating error sources for high-accuracy Global Navigation Satellite System (GNSS) applications, multipath mitigation approaches are employed to minimize this hazardous error in receivers. Binary offset carrier (BOC) modulation, as a modernized signal structure, is adopted to achieve significant enhancement. However, because of its multi-peak autocorrelation function, conventional multipath mitigation techniques for the binary phase shift keying (BPSK) signal are not optimal. Currently, non-parametric and parametric approaches have been studied specifically aiming at multipath mitigation for BOC signals. Non-parametric techniques, such as Code Correlation Reference Waveforms (CCRW), usually have good feasibility with simple structures, but suffer from low universal applicability across different BOC signals. Parametric approaches can thoroughly eliminate multipath error by estimating multipath parameters; the problems with this category are high computational complexity and vulnerability to noise. To tackle the problem, we present a practical parametric multipath estimation method in the frequency domain for BOC signals. The received signal is transferred to the frequency domain to separate out the multipath channel transfer function for multipath parameter estimation. During this process, we take the operations of segmentation and averaging to reduce both the noise effect and the computational load. The performance of the proposed method is evaluated and compared with previous work in three scenarios. Results indicate that the proposed averaging-Fast Fourier Transform (averaging-FFT) method achieves good robustness in severe multipath environments with lower computational load for both low-order and high-order BOC signals. PMID:29495589
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Wuming, E-mail: yangwuming@bnu.edu.cn, E-mail: yangwuming@ynao.ac.cn
The determination of the size of the convective core of main-sequence stars is usually dependent on the construction of models of stars. Here we introduce a method to estimate the radius of the convective core of main-sequence stars with masses between about 1.1 and 1.5 M⊙ from observed frequencies of low-degree p-modes. A formula is proposed to achieve the estimation. The values of the radius of the convective core of four known stars are successfully estimated by the formula. The radius of the convective core of KIC 9812850 estimated by the formula is 0.140 ± 0.028 R⊙. In order to confirm this prediction, a grid of evolutionary models was computed. The value of the convective-core radius of the best-fit model of KIC 9812850 is 0.149 R⊙, which is in good agreement with that estimated by the formula from observed frequencies. The formula aids in understanding the interior structure of stars directly from observed frequencies. The understanding is not dependent on the construction of models.
Regional equations for estimation of peak-streamflow frequency for natural basins in Texas
Asquith, William H.; Slade, Raymond M.
1997-01-01
Peak-streamflow frequency for 559 Texas stations with natural (unregulated and rural or nonurbanized) basins was estimated with annual peak-streamflow data through 1993. The peak-streamflow frequency and drainage-basin characteristics for the Texas stations were used to develop 16 sets of equations to estimate peak-streamflow frequency for ungaged natural stream sites in each of 11 regions in Texas. The relation between peak-streamflow frequency and contributing drainage area for 5 of the 11 regions is curvilinear, requiring that one set of equations be developed for drainage areas less than 32 square miles and another set be developed for drainage areas greater than 32 square miles. These equations, developed through multiple-regression analysis using weighted least squares, are based on the relation between peak-streamflow frequency and basin characteristics for streamflow-gaging stations. The regions represent areas with similar flood characteristics. The use and limitations of the regression equations also are discussed. Additionally, procedures are presented to compute the 50-, 67-, and 90-percent confidence limits for any estimation from the equations. Also, supplemental peak-streamflow frequency and basin characteristics for 105 selected stations bordering Texas are included in the report. This supplemental information will aid in interpretation of flood characteristics for sites near the state borders of Texas.
Radar sensitivity and antenna scan pattern study for a satellite-based Radar Wind Sounder (RAWS)
NASA Technical Reports Server (NTRS)
Stuart, Michael A.
1992-01-01
Modeling global atmospheric circulations and forecasting the weather would improve greatly if worldwide information on winds aloft were available. Recognition of this led to the inclusion of the LAser Wind Sounder (LAWS) system, which measures Doppler shifts from aerosols, in the planned Earth Observation System (EOS). However, gaps will exist in LAWS coverage where heavy clouds are present. The RAdar Wind Sounder (RAWS) is an instrument that could fill these gaps by measuring Doppler shifts from clouds and rain. Previous studies conducted at the University of Kansas show RAWS to be a feasible instrument. This thesis pertains to the signal-to-noise ratio (SNR) sensitivity, transmit waveform, and limitations of the antenna scan pattern of the RAWS system. A drop-size distribution model is selected and applied to the radar range equation for the sensitivity analysis. Six frequencies are used in computing the SNR for several cloud types to determine the optimal transmit frequency. The results show that the use of two frequencies, a higher one (94 GHz) for sensitivity to thinner clouds and a lower one (24 GHz) for better penetration in rain, provides ample SNR. The waveform design supports covariance estimation processing. This estimator eliminates the Doppler ambiguities compounded by the selection of such high transmit frequencies, while providing an estimate of the mean frequency. The unambiguous-range and velocity computations show them to be within acceptable limits. The design goal for the RAWS system is to limit the wind-speed error to less than 1 m/s. Due to linear dependence between the vectors of a three-vector scan pattern, a reasonable wind-speed error is unattainable with that pattern. Only the two-vector scan pattern falls within the wind-error limits, for azimuth angles between 16 deg and 70 deg. However, this scan only allows two components of the wind to be determined. As a result, a technique is then shown, based on the Z-R-V relationships, that permits the vertical component (i.e., rain) to be computed. Thus the horizontal wind components may be obtained from the covariance estimator and the vertical component from the reflectivity factor. Finally, a new candidate system is introduced which summarizes the parameters taken from previous RAWS studies, or those modified in this thesis.
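The covariance estimation processing named above is the classic pulse-pair estimator: the mean Doppler frequency follows from the phase of the lag-one autocorrelation of the complex echo sequence. A minimal simulation, with invented radar parameters:

import numpy as np

PRT = 1e-3                                   # pulse repetition time (s)
f_dopp, n_pulses = 180.0, 64                 # simulated mean Doppler (Hz)
rng = np.random.default_rng(4)
n = np.arange(n_pulses)
v = (np.exp(2j * np.pi * f_dopp * n * PRT)   # echo plus complex noise
     + 0.3 * (rng.standard_normal(n_pulses) + 1j * rng.standard_normal(n_pulses)))

R1 = np.mean(v[1:] * np.conj(v[:-1]))        # lag-one autocorrelation estimate
f_hat = np.angle(R1) / (2 * np.pi * PRT)     # unambiguous within +/- 1/(2*PRT)
print("estimated mean Doppler: %.1f Hz" % f_hat)   # ~180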
NASA Technical Reports Server (NTRS)
Reddy, C. J.
1998-01-01
An implementation of the Model Based Parameter Estimation (MBPE) technique is presented for obtaining the frequency response of the Radar Cross Section (RCS) of arbitrarily shaped, three-dimensional perfect electric conductor (PEC) bodies. An Electric Field Integral Equation (EFIE) is solved using the Method of Moments (MoM) to compute the RCS. The electric current is expanded in a rational function and the coefficients of the rational function are obtained using the frequency derivatives of the EFIE. Using the rational function, the electric current on the PEC body is obtained over a frequency band. Using the electric current at different frequencies, the RCS of the PEC body is obtained over a wide frequency band. Numerical results for a square plate, a cube, and a sphere are presented over a bandwidth. Good agreement between MBPE and the exact solution over the bandwidth is observed.
Arevalillo-Herraez, Miguel; Cobos, Maximo; Garcia-Pineda, Miguel
2017-03-01
In this paper, we present an effective algorithm to reduce the number of wraps in a 2D phase signal provided as input. The technique is based on an accurate estimate of the fundamental frequency of a 2D complex signal with the phase given by the input, and the removal of a dependent additive term from the phase map. Unlike existing methods based on the discrete Fourier transform (DFT), the frequency is computed by using noise-robust estimates that are not restricted to integer values. Then, to deal with the problem of a non-integer shift in the frequency domain, an equivalent operation is carried out on the original phase signal. This consists of the subtraction of a tilted plane whose slope is computed from the frequency, followed by a re-wrapping operation. The technique has been exhaustively tested on fringe projection profilometry (FPP) and magnetic resonance imaging (MRI) signals. In addition, the performance of several frequency estimation methods has been compared. The proposed methodology is particularly effective on FPP signals, showing a higher performance than the state-of-the-art wrap reduction approaches. In this context, it contributes to canceling the carrier effect at the same time as it eliminates any potential slope that affects the entire signal. Its effectiveness on other carrier-free phase signals, e.g., MRI, is limited to the case that inherent slopes are present in the phase data.
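The mechanics of the wrap-reduction step can be sketched as follows. This toy version takes the dominant 2D frequency from the integer FFT peak of exp(j·phase); the paper's contribution is precisely to replace that integer estimate with noise-robust, non-integer ones and to handle the resulting non-integer shift, so treat this as a simplified illustration.

import numpy as np

def reduce_wraps(phase):
    # Estimate the dominant 2D frequency of exp(j*phase), subtract the
    # corresponding tilted plane, and re-wrap the result.
    z = np.exp(1j * phase)
    S = np.fft.fft2(z)
    ky, kx = np.unravel_index(np.argmax(np.abs(S)), S.shape)
    ny, nx = phase.shape
    fy = ky if ky <= ny // 2 else ky - ny      # signed integer frequencies
    fx = kx if kx <= nx // 2 else kx - nx
    yy, xx = np.mgrid[0:ny, 0:nx]
    plane = 2 * np.pi * (fy * yy / ny + fx * xx / nx)
    return np.angle(np.exp(1j * (phase - plane))), (fy, fx)

# Synthetic wrapped fringe pattern: strong carrier tilt plus a smooth bump
ny, nx = 128, 128
yy, xx = np.mgrid[0:ny, 0:nx]
true = (2 * np.pi * (12 * yy / ny + 5 * xx / nx)
        + 2 * np.exp(-((xx - 64) ** 2 + (yy - 64) ** 2) / 800))
wrapped = np.angle(np.exp(1j * true))
reduced, f = reduce_wraps(wrapped)
print("removed carrier (fy, fx) =", f)        # expect (12, 5)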
NASA Astrophysics Data System (ADS)
Bootsma, Gregory J.
X-ray scatter in cone-beam computed tomography (CBCT) is known to reduce image quality by introducing image artifacts, reducing contrast, and limiting computed tomography (CT) number accuracy. The extent of the effect of x-ray scatter on CBCT image quality is determined by the shape and magnitude of the scatter distribution in the projections. A method to allay the effects of scatter is imperative to enable application of CBCT to solve a wider domain of clinical problems. The work contained herein proposes such a method. A characterization of the scatter distribution through the use of a validated Monte Carlo (MC) model is carried out. The effects of imaging parameters and compensators on the scatter distribution are investigated. The spectral frequency components of the scatter distribution in CBCT projection sets are analyzed using Fourier analysis and found to reside predominately in the low frequency domain. The exact frequency extents of the scatter distribution are explored for different imaging configurations and patient geometries. Based on the Fourier analysis it is hypothesized the scatter distribution can be represented by a finite sum of sine and cosine functions. The fitting of MC scatter distribution estimates enables the reduction of the MC computation time by diminishing the number of photon tracks required by over three orders of magnitude. The fitting method is incorporated into a novel scatter correction method using an algorithm that simultaneously combines multiple MC scatter simulations. Running concurrent MC simulations while simultaneously fitting the results allows for the physical accuracy and flexibility of MC methods to be maintained while enhancing the overall efficiency. CBCT projection set scatter estimates, using the algorithm, are computed on the order of 1--2 minutes instead of hours or days. Resulting scatter corrected reconstructions show a reduction in artifacts and improvement in tissue contrast and voxel value accuracy.
Next-Generation MDAC Discrimination Procedure Using Multi-Dimensional Spectral Analyses
2007-09-01
explosions near the Lop Nor, Novaya Zemlya, Semipalatinsk, Nevada, and Indian test sites. We have computed regional phase spectra and are correcting ... test sites as mainly due to differences in explosion P and S corner frequencies. Fisk (2007) used source model fits to estimate Pn, Pg, and Lg corner ... frequencies for Nevada Test Site (NTS) explosions and found that Lg corner frequencies exhibit similar scaling with source size as for Pn and Pg
Interpolating Spherical Harmonics for Computing Antenna Patterns
2011-07-01
If g_{N_F} denotes the spline computed from the uniform partition of N_F + 1 frequency points, the splines converge as O[N_F^{-4}]: ‖g_{N_F} − g‖_∞ ≤ C_0‖g^{(4)} ... splines. There is the possibility of estimating the error ‖g − g_{N_F}‖_∞ even though the function g is unknown. Table 1 compares these unknown errors ‖g − g_{N_F}‖_∞ ... to the computable estimates ‖g_{N_F} − g_{2N_F}‖_∞. The latter is a strong predictor of the unknown error. The triple bar is the sup-norm error over all the
NASA Astrophysics Data System (ADS)
Saikia, C. K.; Roman-nieves, J. I.; Woods, M. T.
2013-12-01
Source parameters of nuclear and chemical explosions are often estimated by matching either the corner frequency and spectral level of a single event or the spectral ratio when spectra from two events are available with known source parameters for one. In this study, we propose an alternative method in which waveforms from two or more events can be simultaneously equalized by setting the differential of the processed seismograms at one station from any two individual events to zero. The method involves convolving the equivalent Mueller-Murphy displacement source time function (MMDSTF) of one event with the seismogram of the second event and vice-versa, and then computing their difference seismogram. The MMDSTF is computed at the elastic radius, including both near-field and far-field terms. For this method to yield accurate source parameters, an inherent assumption is that the Green's functions for any paired events from the source to a receiver are the same. In the frequency limit of the seismic data, this is a reasonable assumption, a conclusion based on the comparison of Green's functions computed for flat-earth models at source depths ranging from 100 m to 1 km. Frequency-domain analysis of the initial P wave is, however, sensitive to the depth-phase interaction, and if tracked meticulously can help estimate the event depth. We applied this method to the local waveforms recorded from the three SPE shots and precisely determined their yields. These high-frequency seismograms exhibit significant lateral path effects in spectrogram analysis and 3D numerical computations, but the source-equalization technique is independent of any such variation as long as the instrument characteristics are well preserved. We are currently estimating the uncertainty in the derived source parameters assuming the yields of the SPE shots are unknown. We also collected regional waveforms from 95 NTS explosions at regional stations ALQ, ANMO, CMB, COR, JAS, LON, PAS, PFO and RSSD. We are currently employing a station-based analysis using the equalization technique to estimate depths and yields of many events relative to those of the announced explosions, and to develop their relationship with the Mw and Mo for the NTS explosions.
NASA Astrophysics Data System (ADS)
Grecu, M.; Tian, L.; Heymsfield, G. M.
2017-12-01
A major challenge in deriving accurate estimates of physical properties of falling snow particles from single-frequency space- or airborne radar observations is that snow particles exhibit a large variety of shapes and their electromagnetic scattering characteristics are highly dependent on these shapes. Triple-frequency (Ku-Ka-W) radar observations are expected to facilitate the derivation of more accurate snow estimates because specific snow particle shapes tend to have specific signatures in the associated two-dimensional dual-reflectivity-ratio (DFR) space. However, the derivation of accurate snow estimates from triple-frequency radar observations is by no means a trivial task. This is because the radar observations can be subject to non-negligible attenuation (especially at W-band when supercooled water is present), which may significantly impact the interpretation of the information in the DFR space. Moreover, the electromagnetic scattering properties of snow particles are computationally expensive to derive, which makes the derivation of reliable parameterizations usable in estimation methodologies challenging. In this study, we formulate a two-step Expectation-Maximization (EM) methodology to derive accurate snow estimates in Extratropical Cyclones (ETCs) from triple-frequency airborne radar observations. The Expectation (E) step consists of a least-squares triple-frequency estimation procedure applied with given assumptions regarding the relationships between the density of snow particles and their sizes, while the Maximization (M) step consists of the optimization of the assumptions used in step E. The electromagnetic scattering properties of snow particles are derived using the Rayleigh-Gans approximation. The methodology is applied to triple-frequency radar observations collected during the Olympic Mountains Experiment (OLYMPEX). Results show that snowfall estimates above the freezing level in ETCs that are consistent with the triple-frequency radar observations, as well as with independent rainfall estimates below the freezing level, may be derived using the EM methodology formulated in the study.
Ravicz, Michael E.; Rosowski, John J.
2012-01-01
The middle-ear input admittance relates sound power into the middle ear (ME) and sound pressure at the tympanic membrane (TM). ME input admittance was measured in the chinchilla ear canal as part of a larger study of sound power transmission through the ME into the inner ear. The middle ear was open, and the inner ear was intact or modified with small sensors inserted into the vestibule near the cochlear base. A simple model of the chinchilla ear canal, based on ear canal sound pressure measurements at two points along the canal and an assumption of plane-wave propagation, enables reliable estimates of YTM, the ME input admittance at the TM, from the admittance measured relatively far from the TM. YTM appears valid at frequencies as high as 17 kHz, a much higher frequency than previously reported. The real part of YTM decreases with frequency above 2 kHz. Effects of the inner-ear sensors (necessary for inner ear power computation) were small and generally limited to frequencies below 3 kHz. Computed power reflectance was ∼0.1 below 3.5 kHz, lower than with an intact ME below 2.5 kHz, and nearly 1 above 16 kHz. PMID:23039439
NASA Astrophysics Data System (ADS)
Sourbier, F.; Operto, S.; Virieux, J.
2006-12-01
We present a distributed-memory parallel algorithm for 2D visco-acoustic full-waveform inversion of wide-angle seismic data. Our code is written in Fortran 90 and uses MPI for parallelism. The algorithm was applied to a real wide-angle data set recorded by 100 OBSs with a 1-km spacing in the eastern Nankai trough (Japan) to image the deep structure of the subduction zone. Full-waveform inversion is applied sequentially to discrete frequencies by proceeding from the low to the high frequencies. The inverse problem is solved with a classic gradient method. Full-waveform modeling is performed with a frequency-domain finite-difference method. In the frequency domain, solving the wave equation requires resolution of a large unsymmetric system of linear equations. We use the massively parallel direct solver MUMPS (http://www.enseeiht.fr/irit/apo/MUMPS) for distributed-memory computers to solve this system. The MUMPS solver is based on a multifrontal method for the parallel factorization. The MUMPS algorithm is subdivided into 3 main steps: first, a symbolic analysis step that performs re-ordering of the matrix coefficients to minimize fill-in during the subsequent factorization, together with an estimation of the assembly tree of the matrix. Second, the factorization is performed with dynamic scheduling to accommodate numerical pivoting, and provides the LU factors distributed over all the processors. Third, the resolution is performed for multiple sources. To compute the gradient of the cost function, 2 simulations per shot are required (one to compute the forward wavefield and one to back-propagate the residuals). The multi-source resolutions can be performed in parallel with MUMPS. In the end, each processor stores in core a sub-domain of all the solutions. These distributed solutions can be exploited to compute the gradient of the cost function in parallel. Since the gradient of the cost function is a weighted stack of the shot and residual solutions of MUMPS, each processor computes the corresponding sub-domain of the gradient. In the end, the gradient is centralized on the master processor using a collective communication. The gradient is scaled by the diagonal elements of the Hessian matrix. This scaling is computed only once per frequency, before the first iteration of the inversion. Estimation of the diagonal terms of the Hessian requires performing one simulation per non-redundant shot and receiver position. The same strategy as the one used for the gradient is used to compute the diagonal Hessian in parallel. This algorithm was applied to a dense wide-angle data set recorded by 100 OBSs in the eastern Nankai trough, offshore Japan. Thirteen frequencies ranging from 3 to 15 Hz were inverted. Twenty iterations per frequency were computed, leading to 260 tomographic velocity models of increasing resolution. The velocity model dimensions are 105 km x 25 km, corresponding to a finite-difference grid of 4201 x 1001 points with a 25-m grid interval. The number of shots was 1005 and the number of inverted OBS gathers was 93. The inversion requires 20 days on 6 32-bit bi-processor nodes with 4 Gbytes of RAM memory per node when only the LU factorization is performed in parallel. Preliminary estimates of the time required to perform the inversion with the fully parallelized code are 6 and 4 days using 20 and 50 processors, respectively.
Holland, Alexander; Aboy, Mateo
2009-07-01
We present a novel method to iteratively calculate discrete Fourier transforms for discrete-time signals with sample time intervals that may be widely nonuniform. The proposed recursive Fourier transform (RFT) does not require interpolation of the samples to uniform time intervals, and each iterative transform update of N frequencies has computational order N. Because of the inherent non-uniformity in the time between successive heart beats, an application particularly well suited for this transform is power spectral density (PSD) estimation for heart rate variability. We compare RFT-based spectrum estimation with Lomb-Scargle Transform (LST) based estimation. PSD estimation based on the LST also does not require uniform time samples, but the LST has a computational order greater than N log(N). We conducted an assessment study involving the analysis of quasi-stationary signals with various levels of randomly missing heart beats. Our results indicate that the RFT leads to comparable estimation performance to the LST with significantly less computational overhead and complexity for applications requiring iterative spectrum estimations.
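The core update is simple enough to sketch: each nonuniform sample (t_n, x_n) contributes x_n·e^{-j2πf·t_n} to every one of the N tracked bins, an O(N) step with no resampling. The sketch below omits the normalization and windowing details of the published RFT and uses invented heart-beat-like timing.

import numpy as np

freqs = np.linspace(0.05, 0.5, 100)          # analysis frequencies (Hz)
w = -2j * np.pi * freqs

rng = np.random.default_rng(5)
t = np.cumsum(rng.exponential(1.0, 800))     # Poisson-like beat times, ~1 Hz rate
x = np.sin(2 * np.pi * 0.25 * t)             # 0.25 Hz component ("HRV band")

X = np.zeros(len(freqs), dtype=complex)
for tn, xn in zip(t, x):                     # one O(N) update per new sample
    X += xn * np.exp(w * tn)

psd = np.abs(X) ** 2 / len(t)
print("peak at %.3f Hz" % freqs[psd.argmax()])   # expect ~0.250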
Power strain imaging based on vibro-elastography techniques
NASA Astrophysics Data System (ADS)
Wen, Xu; Salcudean, S. E.
2007-03-01
This paper describes a new ultrasound elastography technique, power strain imaging, based on vibro-elastography (VE) techniques. With this method, tissue is compressed by a vibrating actuator driven by low-pass or band-pass filtered white noise, typically in the 0-20 Hz range. Tissue displacements at different spatial locations are estimated by correlation-based approaches applied to the raw ultrasound radio-frequency signals and recorded as time sequences. The power spectra of these time sequences are computed by Fourier spectral analysis techniques. As the average of the power spectrum is proportional to the squared amplitude of the tissue motion, the square root of the average power over the range of excitation frequencies is used as a measure of the tissue displacement. Then tissue strain is determined by the least-squares estimation of the gradient of the displacement field. The computation of the power spectra of the time sequences can be implemented efficiently by using Welch's periodogram method with moving windows or with accumulative windows with a forgetting factor. Compared to the transfer-function estimation originally used in VE, the computation of cross-spectral densities is not needed, which saves both memory and computation time. Phantom experiments demonstrate that the proposed method produces stable and operator-independent strain images with high signal-to-noise ratio in real time. This approach has also been tested on patient data of the prostate region, and the results are encouraging.
Detection and imaging of moving objects with SAR by a joint space-time-frequency processing
NASA Astrophysics Data System (ADS)
Barbarossa, Sergio; Farina, Alfonso
This paper proposes a joint space-time-frequency processing scheme for the detection and imaging of moving targets by Synthetic Aperture Radars (SAR). The method is based on the availability of an array antenna. The signals received by the array elements are combined, in a space-time processor, to cancel the clutter. Then, they are analyzed in the time-frequency domain by computing their Wigner-Ville Distribution (WVD), in order to estimate the instantaneous frequency, to be used for the subsequent phase compensation necessary to produce a high-resolution image.
Accuracy of physician-estimated probability of brain injury in children with minor head trauma.
Daymont, Carrie; Klassen, Terry P; Osmond, Martin H
2015-07-01
To evaluate the accuracy of physician estimates of the probability of intracranial injury in children with minor head trauma. This is a subanalysis of a large prospective multicentre cohort study performed from July 2001 to November 2005. During data collection for the derivation of a clinical prediction rule for children with minor head trauma, physicians indicated their estimate of the probability of brain injury visible on computed tomography (P-Injury) and the probability of injury requiring intervention (P-Intervention) by choosing one of the following options: 0%, 1%, 2%, 3%, 4%, 5%, 10%, 20%, 30%, 40%, 50%, 75%, 90%, and 100%. We compared observed frequencies to expected frequencies of injury using Pearson's χ2-test in analyses stratified by the level of each type of predicted probability and by year of age. In 3771 eligible subjects, the mean predicted risk was 4.6% (P-Injury) and 1.4% (P-Intervention). The observed frequency of injury was 4.1% (any injury) and 0.6% (intervention). For all levels of P-Injury from 1% to 40%, the observed frequency of injury was consistent with the expected frequency. The observed frequencies for the 50%, 75%, and 90% levels were lower than expected (p<0.05). For estimates of P-Intervention, the observed frequency was consistently higher than the expected frequency. Physicians underestimated risk for infants (mean P-Intervention 6.2%, actual risk 12.3%, p<0.001). Physician estimates of probability of any brain injury in children were collectively accurate for children with low and moderate degrees of predicted risk. Risk was underestimated in infants.
A fast, robust algorithm for power line interference cancellation in neural recording.
Keshtkaran, Mohammad Reza; Yang, Zhi
2014-04-01
Power line interference may severely corrupt neural recordings at 50/60 Hz and harmonic frequencies. The interference is usually non-stationary and can vary in frequency, amplitude and phase. To retrieve the gamma-band oscillations at the contaminated frequencies, it is desired to remove the interference without compromising the actual neural signals at the interference frequency bands. In this paper, we present a robust and computationally efficient algorithm for removing power line interference from neural recordings. The algorithm includes four steps. First, an adaptive notch filter is used to estimate the fundamental frequency of the interference. Subsequently, based on the estimated frequency, harmonics are generated by using discrete-time oscillators, and then the amplitude and phase of each harmonic are estimated by using a modified recursive least squares algorithm. Finally, the estimated interference is subtracted from the recorded data. The algorithm does not require any reference signal, and can track the frequency, phase and amplitude of each harmonic. When benchmarked with other popular approaches, our algorithm performs better in terms of noise immunity, convergence speed and output signal-to-noise ratio (SNR). While minimally affecting the signal bands of interest, the algorithm consistently yields fast convergence (<100 ms) and substantial interference rejection (output SNR >30 dB) in different conditions of interference strengths (input SNR from -30 to 30 dB), power line frequencies (45-65 Hz) and phase and amplitude drifts. In addition, the algorithm features a straightforward parameter adjustment since the parameters are independent of the input SNR, input signal power and the sampling rate. A hardware prototype was fabricated in a 65 nm CMOS process and tested. Software implementation of the algorithm has been made available for open access at https://github.com/mrezak/removePLI. The proposed algorithm features a highly robust operation, fast adaptation to interference variations, significant SNR improvement, low computational complexity and memory requirement and straightforward parameter adjustment. These features render the algorithm suitable for wearable and implantable sensor applications, where reliable and real-time cancellation of the interference is desired.
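A reduced sketch of the estimate-and-subtract structure: with the fundamental taken as known (the published algorithm tracks it with an adaptive notch filter), sine/cosine references at each harmonic are adapted by a plain LMS loop standing in for the paper's modified recursive least squares. All signal scales below are invented.

import numpy as np

fs, f0, n_harm = 10000.0, 50.3, 3
rng = np.random.default_rng(6)
n = np.arange(int(fs))                               # 1 s of data
neural = 20e-6 * rng.standard_normal(n.size)         # stand-in neural signal
pli = sum((100e-6 / k) * np.sin(2 * np.pi * k * f0 * n / fs + 0.7 * k)
          for k in range(1, n_harm + 1))             # power-line interference
x = neural + pli

# Harmonic sine/cosine references at the (assumed known) fundamental
refs = np.vstack([g(2 * np.pi * k * f0 * n / fs)
                  for k in range(1, n_harm + 1) for g in (np.sin, np.cos)])
w = np.zeros(refs.shape[0])
mu = 1e-3
y = np.empty_like(x)
for i in range(n.size):                              # LMS: w tracks amp/phase
    e = x[i] - w @ refs[:, i]
    w += 2 * mu * e * refs[:, i]
    y[i] = e                                         # cleaned output

res = (y - neural)[n.size // 2:]                     # error after convergence
print("interference suppression: %.1f dB"
      % (10 * np.log10(np.mean(pli ** 2) / np.mean(res ** 2))))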
NASA Astrophysics Data System (ADS)
Lenoir, Guillaume; Crucifix, Michel
2018-03-01
We develop a general framework for the frequency analysis of irregularly sampled time series. It is based on the Lomb-Scargle periodogram, but extended to algebraic operators accounting for the presence of a polynomial trend in the model for the data, in addition to a periodic component and a background noise. Special care is devoted to the correlation between the trend and the periodic component. This new periodogram is then cast into the Welch overlapping segment averaging (WOSA) method in order to reduce its variance. We also design a test of significance for the WOSA periodogram, against the background noise. The model for the background noise is a stationary Gaussian continuous autoregressive-moving-average (CARMA) process, more general than the classical Gaussian white or red noise processes. CARMA parameters are estimated following a Bayesian framework. We provide algorithms that compute the confidence levels for the WOSA periodogram and fully take into account the uncertainty in the CARMA noise parameters. Alternatively, a theory using point estimates of CARMA parameters provides analytical confidence levels for the WOSA periodogram, which are more accurate than Markov chain Monte Carlo (MCMC) confidence levels and, below some threshold for the number of data points, less costly in computing time. We then estimate the amplitude of the periodic component with least-squares methods, and derive an approximate proportionality between the squared amplitude and the periodogram. This proportionality leads to a new extension for the periodogram: the weighted WOSA periodogram, which we recommend for most frequency analyses with irregularly sampled data. The estimated signal amplitude also permits filtering in a frequency band. Our results generalise and unify methods developed in the fields of geosciences, engineering, astronomy and astrophysics. They also constitute the starting point for an extension to the continuous wavelet transform developed in a companion article (Lenoir and Crucifix, 2018). All the methods presented in this paper are available to the reader in the Python package WAVEPAL.
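The WOSA step is easy to illustrate with an off-the-shelf Lomb-Scargle routine: split the irregular record into overlapping time segments and average the segment periodograms. The sketch below shows only this variance-reduction idea; the trend-aware operators, CARMA noise model, and significance levels of the paper (and its WAVEPAL package) are omitted.

import numpy as np
from scipy.signal import lombscargle

def wosa_lomb_scargle(t, x, freqs, nseg=8, overlap=0.5):
    # Average Lomb-Scargle periodograms over overlapping time segments.
    t0, t1 = t[0], t[-1]
    seg_len = (t1 - t0) / (1 + (nseg - 1) * (1 - overlap))
    step = seg_len * (1 - overlap)
    acc = np.zeros(len(freqs))
    for k in range(nseg):
        lo, hi = t0 + k * step, t0 + k * step + seg_len
        m = (t >= lo) & (t < hi)
        xs = x[m] - x[m].mean()
        acc += lombscargle(t[m], xs, 2 * np.pi * freqs)   # angular frequencies
    return acc / nseg

rng = np.random.default_rng(7)
t = np.sort(rng.uniform(0, 400, 2000))            # irregular sampling times
x = np.sin(2 * np.pi * 0.1 * t) + rng.standard_normal(t.size)
freqs = np.linspace(0.01, 0.5, 400)
p = wosa_lomb_scargle(t, x, freqs)
print("peak at %.3f cycles/unit time" % freqs[p.argmax()])   # expect ~0.100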
NASA Astrophysics Data System (ADS)
Wang, Pan-Pan; Yu, Qiang; Hu, Yong-Jun; Miao, Chang-Xin
2017-11-01
Current research in broken rotor bar (BRB) fault detection in induction motors is primarily focused on a high-frequency-resolution analysis of the stator current. Compared with a discrete Fourier transform, the parametric spectrum estimation technique has higher frequency accuracy and resolution. However, the existing detection methods based on parametric spectrum estimation cannot realize online detection, owing to the large computational cost. To improve the efficiency of BRB fault detection, a new detection method based on the min-norm algorithm and least-squares estimation is proposed in this paper. First, the stator current is filtered using a band-pass filter and divided into short overlapped data windows. The min-norm algorithm is then applied to determine the frequencies of the fundamental and fault characteristic components within each overlapped data window. Next, based on the frequency values obtained, a model of the fault current signal is constructed. Subsequently, a linear least-squares problem solved through singular value decomposition is designed to estimate the amplitudes and phases of the related components. Finally, the proposed method is applied to a simulated current and an actual motor; the results indicate that the method retains the accuracy of parametric spectrum estimation while reducing the computational cost enough to make online detection practical.
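The least-squares stage is compact enough to sketch: once the supply and sideband frequencies are known, amplitudes and phases follow from a linear model solved with the SVD-backed lstsq. The frequencies, slip-related sideband spacing, and amplitudes below are illustrative, and the min-norm frequency-estimation stage is not reproduced.

import numpy as np

# Known/estimated frequencies in, amplitudes and phases out via linear LS.
fs, dur = 2000.0, 1.0
t = np.arange(0.0, dur, 1 / fs)
f_list = [50.0, 48.6, 51.4]                 # supply and (1 -/+ 2s)f sidebands
rng = np.random.default_rng(8)
x = (10 * np.cos(2 * np.pi * 50.0 * t + 0.3)
     + 0.20 * np.cos(2 * np.pi * 48.6 * t + 1.1)
     + 0.15 * np.cos(2 * np.pi * 51.4 * t - 0.6)
     + 0.05 * rng.standard_normal(t.size))

A = np.column_stack([g(2 * np.pi * fk * t) for fk in f_list
                     for g in (np.cos, np.sin)])
coef, *_ = np.linalg.lstsq(A, x, rcond=None)          # SVD-based solve
for i, fk in enumerate(f_list):
    c, s = coef[2 * i], coef[2 * i + 1]               # a*cos(wt+phi) decomposed
    print("f=%4.1f Hz: amplitude %.3f, phase %+.2f rad"
          % (fk, np.hypot(c, s), np.arctan2(-s, c)))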
An automatic frequency control loop using overlapping DFTs (Discrete Fourier Transforms)
NASA Technical Reports Server (NTRS)
Aguirre, S.
1988-01-01
An automatic frequency control (AFC) loop is introduced and analyzed in detail. The new scheme is a generalization of the well known Cross Product AFC loop that uses running overlapping discrete Fourier transforms (DFTs) to create a discriminator curve. Linear analysis is included and supported with computer simulations. The algorithm is tested in a low carrier to noise ratio (CNR) dynamic environment, and the probability of loss of lock is estimated via computer simulations. The algorithm discussed is a suboptimum tracking scheme with a larger frequency error variance compared to an optimum strategy, but offers simplicity of implementation and a very low operating threshold CNR. This technique can be applied during the carrier acquisition and re-acquisition process in the Advanced Receiver.
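The cross-product discriminator at the heart of such loops can be sketched in a few lines: with X1 and X2 the outputs of two overlapping DFTs separated by a hop of dt seconds, the cross product Im{X2·conj(X1)} (i.e., I1·Q2 − Q1·I2) grows as sin(2π·Δf·dt), providing the frequency-error signal. Loop filtering, bin selection, and the lock detection of the Advanced Receiver are omitted, and the numbers are invented.

import numpy as np

# Frequency-error discriminator from two overlapping DFT windows.
fs, N, hop = 1000.0, 256, 64
df_true = 1.7                                  # residual carrier offset (Hz)
rng = np.random.default_rng(9)
m = np.arange(N + hop)
s = np.exp(2j * np.pi * df_true * m / fs) + 0.5 * (
    rng.standard_normal(m.size) + 1j * rng.standard_normal(m.size))

X1 = s[:N].sum()                # bin-0 DFT of first window (carrier near DC;
X2 = s[hop:hop + N].sum()       # a real loop would use the bin nearest the carrier)
dt = hop / fs
cross = np.imag(X2 * np.conj(X1))              # the "cross product" term
dot = np.real(X2 * np.conj(X1))
df_hat = np.arctan2(cross, dot) / (2 * np.pi * dt)   # unambiguous to +/- fs/(2*hop)
print("frequency error: %.2f Hz (true %.1f)" % (df_hat, df_true))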
Frequency domain system identification methods - Matrix fraction description approach
NASA Technical Reports Server (NTRS)
Horta, Luca G.; Juang, Jer-Nan
1993-01-01
This paper presents the use of matrix fraction descriptions for least-squares curve fitting of frequency spectra to compute two matrix polynomials. The matrix polynomials are an intermediate step toward obtaining a linearized representation of the experimental transfer function. Two approaches are presented: first, the matrix polynomials are identified using an estimated transfer function; second, the matrix polynomials are identified directly from the cross/auto spectra of the input and output signals. A set of Markov parameters is computed from the polynomials, and subsequently realization theory is used to recover a minimum-order state-space model. Unevenly spaced frequency response functions may be used. Results from a simple numerical example and an experiment are discussed to highlight some of the important aspects of the algorithm.
Joint Symbol Timing and CFO Estimation for OFDM/OQAM Systems in Multipath Channels
NASA Astrophysics Data System (ADS)
Fusco, Tilde; Petrella, Angelo; Tanda, Mario
2009-12-01
The problem of data-aided synchronization for orthogonal frequency division multiplexing (OFDM) systems based on offset quadrature amplitude modulation (OQAM) in multipath channels is considered. In particular, the joint maximum-likelihood (ML) estimator for carrier-frequency offset (CFO), amplitudes, phases, and delays, exploiting a short known preamble, is derived. The ML estimators for phases and amplitudes are in closed form. Moreover, under the assumption that the CFO is sufficiently small, a closed form approximate ML (AML) CFO estimator is obtained. By exploiting the obtained closed form solutions a cost function whose peaks provide an estimate of the delays is derived. In particular, the symbol timing (i.e., the delay of the first multipath component) is obtained by considering the smallest estimated delay. The performance of the proposed joint AML estimator is assessed via computer simulations and compared with that achieved by the joint AML estimator designed for AWGN channel and that achieved by a previously derived joint estimator for OFDM systems.
Wiley, Jeffrey B.; Curran, Janet H.
2003-01-01
Methods for estimating daily mean flow-duration statistics for seven regions in Alaska and low-flow frequencies for one region, southeastern Alaska, were developed from daily mean discharges for streamflow-gaging stations in Alaska and conterminous basins in Canada. The 15-, 10-, 9-, 8-, 7-, 6-, 5-, 4-, 3-, 2-, and 1-percent duration flows were computed for the October-through-September water year for 222 stations in Alaska and conterminous basins in Canada. The 98-, 95-, 90-, 85-, 80-, 70-, 60-, and 50-percent duration flows were computed for the individual months of July, August, and September for 226 stations in Alaska and conterminous basins in Canada. The 98-, 95-, 90-, 85-, 80-, 70-, 60-, and 50-percent duration flows were computed for the season July-through-September for 65 stations in southeastern Alaska. The 7-day, 10-year and 7-day, 2-year low-flow frequencies for the season July-through-September were computed for 65 stations for most of southeastern Alaska. Low-flow analyses were limited to particular months or seasons in order to omit winter low flows, when ice effects reduce the quality of the records and validity of statistical assumptions. Regression equations for estimating the selected high-flow and low-flow statistics for the selected months and seasons for ungaged sites were developed from an ordinary-least-squares regression model using basin characteristics as independent variables. Drainage area and precipitation were significant explanatory variables for high flows, and drainage area, precipitation, mean basin elevation, and area of glaciers were significant explanatory variables for low flows. The estimating equations can be used at ungaged sites in Alaska and conterminous basins in Canada where streamflow regulation, streamflow diversion, urbanization, and natural damming and releasing of water do not affect the streamflow data for the given month or season. Standard errors of estimate ranged from 15 to 56 percent for high-duration flow statistics, 25 to greater than 500 percent for monthly low-duration flow statistics, 32 to 66 percent for seasonal low-duration flow statistics, and 53 to 64 percent for low-flow frequency statistics.
Investigation of spectral analysis techniques for randomly sampled velocimetry data
NASA Technical Reports Server (NTRS)
Sree, Dave
1993-01-01
It is well known that laser velocimetry (LV) generates individual-realization velocity data that are randomly or unevenly sampled in time. Spectral analysis of such data to obtain the turbulence spectra, and hence turbulence scales information, requires special techniques. The 'slotting' technique of Mayo et al., also described by Roberts and Ajmani, and the 'direct transform' method of Gaster and Roberts are well known in the LV community. The slotting technique is computationally faster than the direct transform method. There are practical limitations, however, as to how high in frequency an accurate estimate can be made for a given mean sampling rate. These high-frequency estimates are important in obtaining the microscale information of turbulence structure. It was found from previous studies that reliable spectral estimates can be made up to about the mean sampling frequency (mean data rate) or less. If the data were evenly sampled, the frequency range would be half the sampling frequency (i.e., up to the Nyquist frequency); otherwise, an aliasing problem would occur. The mean data rate and the sample size (total number of points) basically limit the frequency range. Also, there are large variabilities or errors associated with the high-frequency estimates from randomly sampled signals. Roberts and Ajmani proposed certain pre-filtering techniques to reduce these variabilities, but at the cost of the low-frequency estimates. The prefiltering acts as a high-pass filter. Further, Shapiro and Silverman showed theoretically that, for Poisson-sampled signals, it is possible to obtain alias-free spectral estimates far beyond the mean sampling frequency. But the question is, how far? During his tenure under the 1993 NASA-ASEE Summer Faculty Fellowship Program, the author found from his studies of spectral analysis techniques for randomly sampled signals that the spectral estimates can be enhanced or improved up to about 4-5 times the mean sampling frequency by using a suitable prefiltering technique. However, this increased bandwidth comes at the cost of the lower-frequency estimates. The studies further showed that large data sets of the order of 100,000 points or more, high data rates, and Poisson sampling are crucial for obtaining reliable spectral estimates from randomly sampled data, such as LV data. Some of the results of the current study are presented.
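For readers unfamiliar with the slotting technique referred to above, the following minimal sketch (hypothetical names, Poisson-sampled test signal) accumulates lagged products of randomly timed samples into uniform lag bins to estimate the autocorrelation, from which a spectrum can then be obtained by a cosine transform.

```python
import numpy as np

def slotted_autocorrelation(t, u, dt, max_lag):
    """'Slotting' estimate of the autocorrelation of randomly sampled data:
    products u_i*u_j fall into lag bins ('slots') of width dt."""
    u = u - u.mean()                          # work with velocity fluctuations
    n_slots = int(max_lag / dt)
    acc, cnt = np.zeros(n_slots), np.zeros(n_slots)
    for i in range(len(t) - 1):
        k = ((t[i + 1:] - t[i]) / dt).astype(int)
        m = k < n_slots
        np.add.at(acc, k[m], u[i] * u[i + 1:][m])
        np.add.at(cnt, k[m], 1)
    R = np.where(cnt > 0, acc / np.maximum(cnt, 1), 0.0)
    return R / R[0]                           # normalized autocorrelation

# Poisson sampling at a 100 Hz mean data rate of a 10 Hz sine wave.
rng = np.random.default_rng(0)
t = np.cumsum(rng.exponential(1 / 100.0, 2000))
u = np.sin(2 * np.pi * 10.0 * t) + 0.1 * rng.standard_normal(t.size)
R = slotted_autocorrelation(t, u, dt=1e-3, max_lag=0.2)
```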
Estimating the vibration level of an L-shaped beam using power flow techniques
NASA Technical Reports Server (NTRS)
Cuschieri, J. M.; Mccollum, M.; Rassineux, J. L.; Gilbert, T.
1986-01-01
The response of one component of an L-shaped beam, with point force excitation on the other component, is estimated using the power flow method. The transmitted power from the source component to the receiver component is expressed in terms of the transfer and input mobilities at the excitation point and the joint. The response is estimated both in narrow frequency bands, using the exact geometry of the beams, and as a frequency averaged response using infinite beam models. The results using this power flow technique are compared to the results obtained using finite element analysis (FEA) of the L-shaped beam for the low frequency response and to results obtained using statistical energy analysis (SEA) for the high frequencies. The agreement between the FEA results and the power flow method results at low frequencies is very good. SEA results are in terms of frequency averaged levels and these are in perfect agreement with the results obtained using the infinite beam models in the power flow method. The narrow frequency band results from the power flow method also converge to the SEA results at high frequencies. The advantage of the power flow method is that detail of the response can be retained while reducing computation time, which will allow the narrow frequency band analysis of the response to be extended to higher frequencies.
Uncertainty Estimation in Elastic Full Waveform Inversion by Utilising the Hessian Matrix
NASA Astrophysics Data System (ADS)
Hagen, V. S.; Arntsen, B.; Raknes, E. B.
2017-12-01
Elastic Full Waveform Inversion (EFWI) is a computationally intensive iterative method for estimating elastic model parameters. A key element of EFWI is the numerical solution of the elastic wave equation, which lies as a foundation to quantify the mismatch between synthetic (modelled) and true (real) measured seismic data. The misfit between the modelled and true receiver data is used to update the parameter model to yield a better fit between the modelled and true receiver signal. A common approach to the EFWI model update problem is to use a conjugate gradient search method. In this approach the resolution and cross-coupling for the estimated parameter update can be found by computing the full Hessian matrix. Resolution of the estimated model parameters depends on the chosen parametrisation, acquisition geometry, and temporal frequency range. Although some understanding has been gained, it is still not clear which elastic parameters can be reliably estimated under which conditions. With few exceptions, previous analyses have been based on arguments using radiation pattern analysis. We use the known adjoint-state technique, with an expansion to compute the Hessian acting on a model perturbation, to conduct our study. The Hessian is used to infer parameter resolution and cross-coupling for different selections of models, acquisition geometries, and data types, including streamer and ocean bottom seismic recordings. Information about the model uncertainty is obtained from the exact Hessian, and is essential when evaluating the quality of estimated parameters due to the strong influence of source-receiver geometry and frequency content. Investigation is done on both a homogeneous model and the Gullfaks model, where we illustrate the influence of offset on parameter resolution and cross-coupling as a way of estimating uncertainty.
Determining the accuracy of maximum likelihood parameter estimates with colored residuals
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.; Klein, Vladislav
1994-01-01
An important part of building high fidelity mathematical models based on measured data is calculating the accuracy associated with statistical estimates of the model parameters. Indeed, without some idea of the accuracy of parameter estimates, the estimates themselves have limited value. In this work, an expression based on theoretical analysis was developed to properly compute parameter accuracy measures for maximum likelihood estimates with colored residuals. This result is important because experience from the analysis of measured data reveals that the residuals from maximum likelihood estimation are almost always colored. The calculations involved can be appended to conventional maximum likelihood estimation algorithms. Simulated data runs were used to show that the parameter accuracy measures computed with this technique accurately reflect the quality of the parameter estimates from maximum likelihood estimation without the need for analysis of the output residuals in the frequency domain or heuristically determined multiplication factors. The result is general, although the application studied here is maximum likelihood estimation of aerodynamic model parameters from flight test data.
Doppler-based motion compensation algorithm for focusing the signature of a rotorcraft.
Goldman, Geoffrey H
2013-02-01
A computationally efficient algorithm was developed and tested to compensate for the effects of motion on the acoustic signature of a rotorcraft. For target signatures with large spectral peaks that vary slowly in amplitude and have near constant frequency, the time-varying Doppler shift can be tracked and then removed from the data. The algorithm can be used to preprocess data for classification, tracking, and nulling algorithms. The algorithm was tested on rotorcraft data. The average instantaneous frequency of the first harmonic of a rotorcraft was tracked with a fixed-lag smoother. Then, state space estimates of the frequency were used to calculate a time warping that removed the effect of a time-varying Doppler shift from the data. The algorithm was evaluated by analyzing the increase in the amplitude of the harmonics in the spectrum of a rotorcraft. The results depended upon the frequency of the harmonics and the processing interval duration. Under good conditions, the results for the fundamental frequency of the target (~11 Hz) almost achieved an estimated upper bound. The results for higher frequency harmonics had larger increases in the amplitude of the peaks, but significantly lower than the estimated upper bounds.
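The time-warping step can be sketched compactly. In the toy example below (hypothetical names; the paper's fixed-lag smoother is replaced by an assumed-known frequency track), samples are re-interpolated onto a time axis stretched in proportion to the tracked instantaneous frequency, so the first harmonic becomes a constant-frequency tone.

```python
import numpy as np

def remove_doppler(x, fs, f_track, f_ref):
    """Time warping that removes a tracked time-varying Doppler shift:
    on the warped axis tau = (1/f_ref) * integral of f_track dt, the
    tracked harmonic has constant frequency f_ref."""
    tau = np.cumsum(f_track) / fs / f_ref        # warped time of each sample
    tau_uniform = np.arange(tau[0], tau[-1], 1 / fs)
    return np.interp(tau_uniform, tau, x)        # resample by interpolation

fs = 1000.0
t = np.arange(10000) / fs
f_inst = 11.0 * (1 + 0.02 * np.sin(2 * np.pi * 0.2 * t))  # wobbling ~11 Hz tone
x = np.cos(2 * np.pi * np.cumsum(f_inst) / fs)
y = remove_doppler(x, fs, f_inst, f_ref=11.0)    # ~pure 11 Hz tone
```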
Radio Wave Propagation over Salem
NASA Astrophysics Data System (ADS)
Jaiswal, R. S.; Uma, S.; Raj, M. V. A.
2007-07-01
In this paper, a study of rainfall has been carried out over Salem, a place in southern India. Rainfall-rate values have been recorded using a fast-response rain gauge installed at Sona College of Technology. The derived rainfall rates have been used to estimate attenuation in the 10-100 GHz frequency range. Using the estimated co-polar attenuation, cross-polar discrimination (XPD) values have been computed using the ITU-R (2002) model in the 10-35 GHz range. The study shows that attenuation and cross polarization vary with frequency, elevation angle, and rainfall rate. The study also depicts the cumulative distributions of rainfall rate, attenuation, and XPD.
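Specific attenuation in such studies is commonly computed from the rain rate with the ITU-R power-law relation. The snippet below shows the form of that computation only; the k and alpha coefficients are illustrative placeholders, not actual ITU-R table values, which depend on frequency and polarization.

```python
def specific_attenuation(rain_rate, k, alpha):
    """ITU-R style power law: gamma (dB/km) = k * R**alpha, R in mm/h."""
    return k * rain_rate ** alpha

gamma = specific_attenuation(50.0, k=0.05, alpha=1.1)  # dB/km at 50 mm/h
total_db = gamma * 5.0                                  # over a 5 km path
```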
Power flow prediction in vibrating systems via model reduction
NASA Astrophysics Data System (ADS)
Li, Xianhui
This dissertation focuses on power flow prediction in vibrating systems. Reduced order models (ROMs) are built based on rational Krylov model reduction, which preserves power flow information in the original systems over a specified frequency band. Stiffness and mass matrices of the ROMs are obtained by projecting the original system matrices onto the subspaces spanned by forced responses. A matrix-free algorithm is designed to construct ROMs directly from the power quantities at selected interpolation frequencies. Strategies for parallel implementation of the algorithm via the message passing interface are proposed. The quality of ROMs is iteratively refined according to an error estimate based on residual norms. Band capacity is proposed to provide an a priori estimate of the sizes of good-quality ROMs. Frequency averaging is recast as ensemble averaging, and a Cauchy distribution is used to simplify the computation. Besides model reduction for deterministic systems, details of constructing ROMs for parametric and nonparametric random systems are also presented. Case studies have been conducted on testbeds from the Harwell-Boeing collections. Input and coupling power flow are computed for the original systems and the ROMs. Good agreement is observed in all cases.
Multiscaling properties of coastal waters particle size distribution from LISST in situ measurements
NASA Astrophysics Data System (ADS)
Pannimpullath Remanan, R.; Schmitt, F. G.; Loisel, H.; Mériaux, X.
2013-12-01
A Eulerian high-frequency (1 Hz) sampling of particle size distribution (PSD) is performed during 5 tidal cycles (65 hours) in a coastal environment of the eastern English Channel. The particle data are recorded using a LISST-100x type C (Laser In Situ Scattering and Transmissometry, Sequoia Scientific), recording volume concentrations of particles having diameters ranging from 2.5 to 500 μm in 32 size classes on a logarithmic scale. This enables the estimation at each time step (every second) of the probability density function of particle sizes. At every time step, the pdf of the PSD is hyperbolic. We can thus estimate PSD slope time series. Power spectral analysis shows that the mean diameter of the suspended particles exhibits scaling at high frequencies (from 1 s to 1000 s). The scaling properties of particle sizes are studied by computing the moment function from the pdf of the size distribution. Moment functions at many different time scales (from 1 s to 1000 s) are computed and their scaling properties considered. The Shannon entropy at each time scale is also estimated and related to other parameters. The multiscaling properties of the turbidity (coefficient cp computed from the LISST) are also considered on the same time scales, using Empirical Mode Decomposition.
Reconstruction of neuronal input through modeling single-neuron dynamics and computations
NASA Astrophysics Data System (ADS)
Qin, Qing; Wang, Jiang; Yu, Haitao; Deng, Bin; Chan, Wai-lok
2016-06-01
Mathematical models provide a mathematical description of neuron activity, which can lead to a better understanding and quantification of neural computations and the corresponding biophysical mechanisms evoked by a stimulus. In this paper, based on the output spike train evoked by an acupuncture mechanical stimulus, we present two different levels of models to describe the input-output system and achieve the reconstruction of neuronal input. The reconstruction process is divided into two steps. First, the neuronal spiking events are considered as a Gamma stochastic process; the scale parameter and the shape parameter of the Gamma process are, respectively, defined as two spiking characteristics, which are estimated by a state-space method. Then, a leaky integrate-and-fire (LIF) model is used to mimic the response system, and the estimated spiking characteristics are transformed into two temporal input parameters of the LIF model through two conversion formulas. We test this reconstruction method on three different groups of simulation data. All three groups of estimates reconstruct the input parameters with fairly high accuracy. We then use this reconstruction method to estimate the non-measurable acupuncture input parameters. Results show that under three different frequencies of acupuncture stimulus, the estimated input parameters show obvious differences: the higher the frequency of the acupuncture stimulus, the higher the accuracy of the reconstruction.
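As a toy illustration of the first step, the sketch below fits the Gamma shape and scale parameters of the interspike intervals by simple moment matching (the paper itself uses a state-space estimator, so this is an illustrative stand-in with hypothetical names).

```python
import numpy as np

def gamma_spiking_characteristics(spike_times):
    """Moment-matching estimates of the Gamma shape and scale parameters
    of the interspike-interval distribution (illustrative stand-in for the
    state-space estimator used in the paper)."""
    isi = np.diff(spike_times)
    mean, var = isi.mean(), isi.var()
    return mean**2 / var, var / mean    # (shape, scale)

rng = np.random.default_rng(1)
spikes = np.cumsum(rng.gamma(shape=3.0, scale=0.01, size=500))
shape, scale = gamma_spiking_characteristics(spikes)   # ~ (3.0, 0.01)
```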
Zhao, Anbang; Ma, Lin; Ma, Xuefei; Hui, Juan
2017-01-01
In this paper, an improved azimuth angle estimation method with a single acoustic vector sensor (AVS) is proposed based on matched filtering theory. The proposed method is mainly applied in an active sonar detection system. Building on the conventional passive method based on complex acoustic intensity measurement, the mathematical and physical model of the proposed method is described in detail. The computer simulation and lake experiment results indicate that this method can realize azimuth angle estimation with high precision using only a single AVS. Compared with the conventional method, the proposed method achieves better estimation performance. Moreover, the proposed method does not require complex operations in the frequency domain and achieves a reduction in computational complexity. PMID:28230763
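The conventional complex-acoustic-intensity approach that this method builds on can be sketched in a few lines: with a single AVS measuring pressure p and particle-velocity components, the azimuth follows from the time-averaged intensity components (hypothetical names, synthetic plane wave).

```python
import numpy as np

def avs_azimuth(p, vx, vy):
    """Azimuth from time-averaged acoustic intensity components of a
    single acoustic vector sensor (conventional intensity approach)."""
    return np.degrees(np.arctan2(np.mean(p * vy), np.mean(p * vx)))

rng = np.random.default_rng(2)
t = np.arange(8000) / 8000.0
carrier = np.cos(2 * np.pi * 500 * t)
p = carrier + 0.1 * rng.standard_normal(t.size)
vx = np.cos(np.radians(30)) * carrier + 0.1 * rng.standard_normal(t.size)
vy = np.sin(np.radians(30)) * carrier + 0.1 * rng.standard_normal(t.size)
print(avs_azimuth(p, vx, vy))   # ~30 degrees
```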
Acceleration and Velocity Sensing from Measured Strain
NASA Technical Reports Server (NTRS)
Pak, Chan-Gi; Truax, Roger
2015-01-01
A simple approach for computing the acceleration and velocity of a structure from measured strain is proposed in this study. First, the deflection and slope of the structure are computed from the strain using a two-step theory. Frequencies of the structure are computed from the time histories of strain using a parameter estimation technique together with an autoregressive moving average model. From the deflection, slope, and frequencies of the structure, the acceleration and velocity can be obtained using the proposed approach. Simple harmonic motion is assumed for the acceleration computations, and the central difference equation with a linear autoregressive model is used for the computation of velocity. A cantilevered rectangular wing model is used to validate the approach. The quality of the computed deflection, acceleration, and velocity values is independent of the number of fibers. The central difference equation with a linear autoregressive model proposed in this study follows the target response with reasonable accuracy. Therefore, the handicap of the backward difference equation, phase shift, is successfully overcome.
Van Norman, Ethan R; Nelson, Peter M; Parker, David C
2017-09-01
Computer adaptive tests (CATs) hold promise to monitor student progress within multitiered systems of support. However, the relationship between how long and how often data are collected and the technical adequacy of growth estimates from CATs has not been explored. Given CAT administration times, it is important to identify optimal data collection schedules to minimize missed instructional time. We used simulation methodology to investigate how the duration and frequency of data collection influenced the reliability, validity, and precision of growth estimates from a math CAT. A progress monitoring dataset of 746 Grade 4, 664 Grade 5, and 400 Grade 6 students from 40 schools in the upper Midwest was used to generate model parameters. Across grades, 53% of students were female and 53% were White. Grade level was not as influential as the duration and frequency of data collection on the technical adequacy of growth estimates. Low-stakes decisions were possible after 14-18 weeks when data were collected weekly (420-540 min of assessment), 20-24 weeks when collected every other week (300-360 min of assessment), and 20-28 weeks (150-210 min of assessment) when data were collected once a month, depending on student grade level. The validity and precision of growth estimates improved when the duration and frequency of progress monitoring increased. Given the amount of time required to obtain technically adequate growth estimates in the present study, results highlight the importance of weighing the potential costs of missed instructional time relative to other types of assessments, such as curriculum-based measures. Implications for practice and research, as well as future directions, are also discussed.
Differential sampling for fast frequency acquisition via adaptive extended least squares algorithm
NASA Technical Reports Server (NTRS)
Kumar, Rajendra
1987-01-01
This paper presents a differential signal model along with appropriate sampling techniques for least squares estimation of the frequency and frequency derivatives, and possibly the phase and amplitude, of a sinusoid received in the presence of noise. The proposed algorithm is recursive in measurements, and thus the computational requirement increases only linearly with the number of measurements. The dimension of the state vector in the proposed algorithm does not depend upon the number of measurements and is quite small, typically around four. This is an advantage when compared to previous algorithms, wherein the dimension of the state vector increases monotonically with the product of the frequency uncertainty and the observation period. Such a computational simplification may possibly result in some loss of optimality. However, by applying the sampling techniques of the paper, such a possible loss in optimality can be made small.
Choudhuri, Samir; Bharadwaj, Somnath; Roy, Nirupam; Ghosh, Abhik; Ali, Sk Saiyad
2016-06-11
It is important to correctly subtract point sources from radio-interferometric data in order to measure the power spectrum of diffuse radiation like the Galactic synchrotron or the Epoch of Reionization 21-cm signal. It is computationally very expensive and challenging to image a very large area and accurately subtract all the point sources from the image. The problem is particularly severe at the sidelobes and the outer parts of the main lobe, where the antenna response is highly frequency dependent and the calibration also differs from that of the phase centre. Here, we show that it is possible to overcome this problem by tapering the sky response. Using simulated 150 MHz observations, we demonstrate that it is possible to suppress the contribution due to point sources from the outer parts by using the Tapered Gridded Estimator to measure the angular power spectrum C_ℓ of the sky signal. We also show from the simulation that this method can self-consistently compute the noise bias and accurately subtract it to provide an unbiased estimation of C_ℓ.
The discrete Laplace exponential family and estimation of Y-STR haplotype frequencies.
Andersen, Mikkel Meyer; Eriksen, Poul Svante; Morling, Niels
2013-07-21
Estimating haplotype frequencies is important in e.g. forensic genetics, where the frequencies are needed to calculate the likelihood ratio for the evidential weight of a DNA profile found at a crime scene. Estimation is naturally based on a population model, motivating the investigation of the Fisher-Wright model of evolution for haploid lineage DNA markers. An exponential family (a class of probability distributions that is well understood in probability theory such that inference is easily made by using existing software) called the 'discrete Laplace distribution' is described. We illustrate how well the discrete Laplace distribution approximates a more complicated distribution that arises by investigating the well-known population genetic Fisher-Wright model of evolution by a single-step mutation process. It is shown how the discrete Laplace distribution can be used to estimate haplotype frequencies for haploid lineage DNA markers (such as Y-chromosomal short tandem repeats), which in turn can be used to assess the evidential weight of a DNA profile found at a crime scene. This is done by making inference in a mixture of multivariate, marginally independent, discrete Laplace distributions, using the EM algorithm to estimate the probabilities of membership of a set of unobserved subpopulations. The discrete Laplace distribution can be used to estimate haplotype frequencies with lower prediction error than other existing estimators. Furthermore, the calculations can be performed on a normal computer. The method is implemented in the freely available open source software R, which is supported on Linux, MacOS and MS Windows.
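A minimal sketch of the per-locus building block, assuming the commonly used form of the discrete Laplace pmf, P(X = k) = (1-p)/(1+p) * p^|k-mu| (names and numbers are illustrative; the paper's full method fits a mixture of such distributions with the EM algorithm):

```python
import numpy as np

def discrete_laplace_pmf(k, p, mu=0):
    """P(X = k) = (1 - p)/(1 + p) * p**abs(k - mu), for 0 < p < 1."""
    return (1 - p) / (1 + p) * p ** np.abs(np.asarray(k) - mu)

# Haplotype probability under marginal independence across loci:
alleles = np.array([13, 17, 21])    # observed repeat numbers (illustrative)
centers = np.array([13, 16, 21])    # central haplotype of one subpopulation
ps      = np.array([0.3, 0.4, 0.2])
haplotype_prob = np.prod(discrete_laplace_pmf(alleles, ps, centers))
```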
Accurate ab initio quartic force fields for the ions HCO(+) and HOC(+)
NASA Technical Reports Server (NTRS)
Martin, J. M. L.; Taylor, Peter R.; Lee, Timothy J.
1993-01-01
The quartic force fields of HCO(+) and HOC(+) have been computed using augmented coupled cluster methods and basis sets of spdf and spdfg quality. Calculations on HCN, CO, and N2 have been performed to assist in calibrating the computed results. Going from an spdf to an spdfg basis shortens triple bonds by about 0.004 A, and increases the corresponding harmonic frequency by 10-20 cm(-1), leaving bond distances about 0.003 A too long and triple bond stretching frequencies about 5 cm(-1) too low. Accurate estimates for the bond distances, fundamental frequencies, and thermochemical quantities are given. HOC(+) lies 37.8 +/- 0.5 kcal/mol (0 K) above HCO(+); the classical barrier height for proton exchange is 76.7 +/- 1.0 kcal/mol.
NASA Astrophysics Data System (ADS)
Liu, Fushun; Liu, Chengcheng; Chen, Jiefeng; Wang, Bin
2017-08-01
The key concept of spectrum response estimation with commercial software, such as the SESAM software tool, typically includes two main steps: finding a suitable loading spectrum and computing the response amplitude operators (RAOs) subjected to a frequency-specified wave component. In this paper, we propose a nontraditional spectrum response estimation method that uses a numerical representation of the retardation functions. Based on estimated added mass and damping matrices of the structure, we decompose and replace the convolution terms with a series of poles and corresponding residues in the Laplace domain. Then, we estimate the power density corresponding to each frequency component using the improved periodogram method. The advantage of this approach is that the frequency-dependent motion equations in the time domain can be transformed into the Laplace domain without requiring Laplace-domain expressions for the added mass and damping. To validate the proposed method, we use a numerical semi-submerged pontoon from SESAM. The numerical results show that the responses of the proposed method match well with those obtained from the traditional method. Furthermore, the estimated spectrum also matches well, which indicates its potential application to deep-water floating structures.
Parameter estimation in Probabilistic Seismic Hazard Analysis: current problems and some solutions
NASA Astrophysics Data System (ADS)
Vermeulen, Petrus
2017-04-01
A typical Probabilistic Seismic Hazard Analysis (PSHA) comprises identification of seismic source zones, determination of hazard parameters for these zones, selection of an appropriate ground motion prediction equation (GMPE), and integration over probabilities according to the Cornell-McGuire procedure. Determination of hazard parameters often does not receive the attention it deserves, and, therefore, problems therein are often overlooked. Here, many of these problems are identified, and some of them addressed. The parameters that need to be identified are those associated with the frequency-magnitude law, those associated with the earthquake recurrence law in time, and the parameters controlling the GMPE. This study is concerned with the frequency-magnitude law and the temporal distribution of earthquakes, and not with GMPEs. The Gutenberg-Richter frequency-magnitude law is usually adopted for the frequency-magnitude law, and a Poisson process for earthquake recurrence in time. Accordingly, the parameters that need to be determined are the slope parameter of the Gutenberg-Richter frequency-magnitude law, i.e. the b-value, the maximum magnitude mmax up to which the Gutenberg-Richter law applies, and the mean recurrence frequency, λ, of earthquakes. If, instead of the Cornell-McGuire procedure, the "Parametric-Historic procedure" is used, these parameters do not have to be known before the PSHA computations; they are estimated directly during the PSHA computation. The resulting relation for the frequency of ground motion vibration parameters has an analogous functional form to the frequency-magnitude law, which is described by the parameters γ (analogous to the b-value of the Gutenberg-Richter law) and the maximum possible ground motion amax (analogous to mmax). Originally, the approach could be applied only to simple GMPEs; however, recently the method was extended to incorporate more complex forms of GMPEs. With regard to the parameter mmax, there are numerous methods of estimation, none of which is accepted as the standard one, and there is also much controversy surrounding this parameter. In practice, when estimating the above-mentioned parameters from a seismic catalogue, the magnitude mmin above which the catalogue is complete becomes important. Thus, the parameter mmin is also considered a parameter to be estimated in practice. Several methods are discussed in the literature, and no specific method is preferred. Methods usually aim at identifying the point where a frequency-magnitude plot starts to deviate from linearity due to data loss. Parameter estimation is clearly a rich field which deserves much attention and, possibly, standardization of methods. These methods should be sound and efficient, and a query into which methods are to be used - and, for that matter, which ones are not to be used - is in order.
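Of the parameters discussed above, the b-value has a classic closed-form maximum-likelihood estimator (Aki, 1965), which makes a compact example; mmax estimation, as the abstract notes, has no comparably settled method.

```python
import numpy as np

def b_value_aki(magnitudes, m_min):
    """Aki (1965) maximum-likelihood b-value for a catalogue complete
    above m_min: b = log10(e) / (mean(m) - m_min)."""
    m = np.asarray(magnitudes)
    m = m[m >= m_min]
    return np.log10(np.e) / (m.mean() - m_min)

# Synthetic catalogue with true b = 1 (magnitudes exponential above m_min).
rng = np.random.default_rng(3)
mags = 2.0 + rng.exponential(scale=np.log10(np.e), size=5000)
print(b_value_aki(mags, m_min=2.0))   # ~1.0
```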
Contrast computation methods for interferometric measurement of sensor modulation transfer function
NASA Astrophysics Data System (ADS)
Battula, Tharun; Georgiev, Todor; Gille, Jennifer; Goma, Sergio
2018-01-01
Accurate measurement of image-sensor frequency response over a wide range of spatial frequencies is very important for analyzing pixel array characteristics, such as modulation transfer function (MTF), crosstalk, and active pixel shape. Such analysis is especially significant in computational photography for the purposes of deconvolution, multi-image superresolution, and improved light-field capture. We use a lensless interferometric setup that produces high-quality fringes for measuring MTF over a wide range of frequencies (here, 37 to 434 line pairs per mm). We discuss the theoretical framework, involving Michelson and Fourier contrast measurement of the MTF, addressing phase alignment problems using a moiré pattern. We solidify the definition of Fourier contrast mathematically and compare it to Michelson contrast. Our interferometric measurement method shows high detail in the MTF, especially at high frequencies (above Nyquist frequency). We are able to estimate active pixel size and pixel pitch from measurements. We compare both simulation and experimental MTF results to a lens-free slanted-edge implementation using commercial software.
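The two contrast measures compared in the paper can be sketched as follows. Michelson contrast is the standard (I_max - I_min)/(I_max + I_min); the "Fourier contrast" shown here is one plausible reading (ratio of the fringe harmonic to the DC term of the profile's DFT), since the paper gives its own formal definition.

```python
import numpy as np

def michelson_contrast(profile):
    """C = (I_max - I_min) / (I_max + I_min) of a fringe intensity profile."""
    return (profile.max() - profile.min()) / (profile.max() + profile.min())

def fourier_contrast(profile, cycles):
    """Assumed form: twice the fringe-harmonic magnitude over the DC term."""
    X = np.fft.rfft(profile)
    return 2 * np.abs(X[cycles]) / np.abs(X[0])

x = np.linspace(0, 10 * np.pi, 1000, endpoint=False)  # exactly 5 fringe cycles
fringe = 1.0 + 0.6 * np.cos(x)
print(michelson_contrast(fringe))       # ~0.6
print(fourier_contrast(fringe, 5))      # ~0.6
```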
SPA- STATISTICAL PACKAGE FOR TIME AND FREQUENCY DOMAIN ANALYSIS
NASA Technical Reports Server (NTRS)
Brownlow, J. D.
1994-01-01
The need for statistical analysis often arises when data is in the form of a time series. This type of data is usually a collection of numerical observations made at specified time intervals. Two kinds of analysis may be performed on the data. First, the time series may be treated as a set of independent observations using a time domain analysis to derive the usual statistical properties, including the mean, variance, and distribution form. Secondly, the order and time intervals of the observations may be used in a frequency domain analysis to examine the time series for periodicities. In almost all practical applications, the collected data is actually a mixture of the desired signal and a noise signal which is collected over a finite time period with a finite precision. Therefore, any statistical calculations and analyses are actually estimates. The Spectrum Analysis (SPA) program was developed to perform a wide range of statistical estimation functions. SPA can provide the data analyst with a rigorous tool for performing time and frequency domain studies. In a time domain statistical analysis the SPA program will compute the mean, variance, standard deviation, mean square, and root mean square. It also lists the data maximum, data minimum, and the number of observations included in the sample. In addition, a histogram of the time domain data is generated, a normal curve is fit to the histogram, and a goodness-of-fit test is performed. These time domain calculations may be performed on both raw and filtered data. For a frequency domain statistical analysis the SPA program computes the power spectrum, cross spectrum, coherence, phase angle, amplitude ratio, and transfer function. The estimates of the frequency domain parameters may be smoothed with the use of Hann-Tukey, Hamming, Bartlett, or moving average windows. Various digital filters are available to isolate data frequency components. Frequency components with periods longer than the data collection interval are removed by least-squares detrending. As many as ten channels of data may be analyzed at one time. Both tabular and plotted output may be generated by the SPA program. This program is written in FORTRAN IV and has been implemented on a CDC 6000 series computer with a central memory requirement of approximately 142K (octal) of 60 bit words. This core requirement can be reduced by segmentation of the program. The SPA program was developed in 1978.
Blowers, Paul; Hollingshead, Kyle
2009-05-21
In this work, the global warming potential (GWP) of methylene fluoride (CH(2)F(2)), or HFC-32, is estimated through computational chemistry methods. We find our computational chemistry approach reproduces well all phenomena important for predicting global warming potentials. Geometries predicted using the B3LYP/6-311g** method were in good agreement with experiment, although some other computational methods performed slightly better. Frequencies needed for both partition function calculations in transition-state theory and infrared intensities needed for radiative forcing estimates agreed well with experiment compared to other computational methods. A modified CBS-RAD method used to obtain energies led to superior results to all other previous heat of reaction estimates and most barrier height calculations when the B3LYP/6-311g** optimized geometry was used as the base structure. Use of the small-curvature tunneling correction and a hindered rotor treatment where appropriate led to accurate reaction rate constants and radiative forcing estimates without requiring any experimental data. Atmospheric lifetimes from theory at 277 K were indistinguishable from experimental results, as were the final global warming potentials compared to experiment. This is the first time entirely computational methods have been applied to estimate a global warming potential for a chemical, and we have found the approach to be robust, inexpensive, and accurate compared to prior experimental results. This methodology was subsequently used to estimate GWPs for three additional species [methane (CH(4)); fluoromethane (CH(3)F), or HFC-41; and fluoroform (CHF(3)), or HFC-23], where estimations also compare favorably to experimental values.
B-2 Extremely High Frequency SATCOM and Computer Increment 1 (B-2 EHF Inc 1)
2015-12-01
Confidence level of cost estimate for current APB: 55%. This APB reflects cost and funding data based on the B-2 EHF Increment I SCP. This cost estimate was quantified at the mean (~55%) confidence level. [The remainder of this entry, a SAR-baseline-to-current-APB production cost-variance table (Econ, Qty, Sch, Eng, Est, Oth, Spt; baseline total 33.624 to current 28.335), was garbled in extraction.]
Submillimeter, millimeter, and microwave spectral line catalogue
NASA Technical Reports Server (NTRS)
Poynter, R. L.; Pickett, H. M.
1980-01-01
A computer-accessible catalogue of submillimeter, millimeter, and microwave spectral lines in the frequency range between 0 and 3000 GHz (i.e., wavelengths longer than 100 μm) is discussed. The catalogue was used as a planning guide and as an aid in the identification and analysis of observed spectral lines. The information listed for each spectral line includes the frequency and its estimated error, the intensity, the lower state energy, and the quantum number assignment. The catalogue was constructed by using theoretical least squares fits of published spectral lines to accepted molecular models. The associated predictions and their estimated errors are based upon the resultant fitted parameters and their covariances.
An analytical formula for the longitudinal resonance frequencies of a fluid-filled crack
NASA Astrophysics Data System (ADS)
Maeda, Y.; Kumagai, H.
2013-12-01
The fluid-filled crack model (Chouet, 1986, JGR) simulates the resonances of a rectangular crack filled with an inviscid fluid embedded in a homogeneous isotropic elastic medium. The model demonstrates the existence of a slow wave, known as the crack wave, that propagates along the solid-fluid interfaces. The wave velocity depends on the crack stiffness. The model has been used to interpret the peak frequencies of long-period (LP) and very long period (VLP) seismic events at various volcanoes (Chouet and Matoza, 2013, JVGR). Up to now, crack model simulations have been performed using the finite difference (Chouet, 1986) and boundary integral (Yamamoto and Kawakatsu, 2008, GJI) methods. These methods require computationally extensive procedures to estimate the complex frequencies of crack resonance modes. Establishing an easier way to calculate the frequencies of crack resonances would help understanding of the observed frequencies. In this presentation, we propose a simple analytical formula for the longitudinal resonance frequencies of a fluid-filled crack. We first evaluated the analytical expression proposed by Kumagai (2009, Encyc. Complex. Sys. Sci.) through a comparison of the expression with the peak frequencies computed by a 2D version of the FDM code of Chouet (1986). Our comparison revealed that the equation of Kumagai (2009) shows discrepancies with the resonant frequencies computed by the FDM. We then modified the formula as f_mL = (m-1)a / [2L(1 + 2ε_mL·C)^(1/2)], (1) where L is the crack length, a is the velocity of sound in the fluid, C is the crack stiffness, m is a positive integer defined such that the wavelength of the normal displacement on the crack surface is 2L/m, and ε_mL is a constant that depends on the longitudinal resonance mode. Excellent fits were obtained between the peak frequencies calculated by the FDM and by Eq. (1), suggesting that this equation is suitable for the resonant frequencies. We also performed 3D FDM computations of the longitudinal-mode resonances. The peak frequencies computed by the FDM are well fitted by Eq. (1). The best-fit ε_mL values differ from those in 2D and depend on W/L, where W is the crack width. Eq. (1) shows that f_mL is a simple analytical function of a/L and C given m and W/L. This enables simple and rapid interpretation of the source processes of LP events, including estimation of the fluid properties and crack geometries as well as identification of the resonance modes of the individual peak frequencies. LP events at volcanoes often exhibit peak frequency variations. In such cases, the frequency variations can be easily converted to variations in the fluid properties and crack geometries. We showed that Eq. (1) is consistent with the analytical solution for an infinite crack given by Ferrazzini and Aki (1987, JGR). Although a theoretical derivation of Eq. (1) has not yet been obtained, Eq. (1) is consistent with the frequencies expected from the wavelengths of the fluid pressure variation.
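Eq. (1) is simple enough to evaluate directly; the function below implements it with illustrative numbers (the ε_mL values in the study are fit to FDM results and depend on the mode and W/L, so the value here is a placeholder).

```python
import numpy as np

def crack_longitudinal_frequency(m, a, L, C, eps_mL):
    """Eq. (1): f_mL = (m - 1) * a / (2 * L * sqrt(1 + 2 * eps_mL * C))."""
    return (m - 1) * a / (2 * L * np.sqrt(1 + 2 * eps_mL * C))

# Illustrative values: 1500 m/s fluid sound speed, 100 m crack, stiffness 50.
f = crack_longitudinal_frequency(m=2, a=1500.0, L=100.0, C=50.0, eps_mL=0.5)
```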
Poisson point process modeling for polyphonic music transcription.
Peeling, Paul; Li, Chung-fai; Godsill, Simon
2007-04-01
Peaks detected in the frequency domain spectrum of a musical chord are modeled as realizations of a nonhomogeneous Poisson point process. When several notes are superimposed to make a chord, the processes for individual notes combine to give another Poisson process, whose likelihood is easily computable. This avoids a data association step linking individual harmonics explicitly with detected peaks in the spectrum. The likelihood function is ideal for Bayesian inference about the unknown note frequencies in a chord. Here, maximum likelihood estimation of fundamental frequencies shows very promising performance on real polyphonic piano music recordings.
Southard, Rodney E.
2013-01-01
The weather and precipitation patterns in Missouri vary considerably from year to year. In 2008, the statewide average rainfall was 57.34 inches and in 2012, the statewide average rainfall was 30.64 inches. This variability in precipitation and resulting streamflow in Missouri underlies the necessity for water managers and users to have reliable streamflow statistics and a means to compute select statistics at ungaged locations for a better understanding of water availability. Knowledge of surface-water availability is dependent on the streamflow data that have been collected and analyzed by the U.S. Geological Survey for more than 100 years at approximately 350 streamgages throughout Missouri. The U.S. Geological Survey, in cooperation with the Missouri Department of Natural Resources, computed streamflow statistics at streamgages through the 2010 water year, defined periods of drought and defined methods to estimate streamflow statistics at ungaged locations, and developed regional regression equations to compute selected streamflow statistics at ungaged locations. Streamflow statistics and flow durations were computed for 532 streamgages in Missouri and in neighboring States of Missouri. For streamgages with more than 10 years of record, Kendall’s tau was computed to evaluate for trends in streamflow data. If trends were detected, the variable length method was used to define the period of no trend. Water years were removed from the dataset from the beginning of the record for a streamgage until no trend was detected. Low-flow frequency statistics were then computed for the entire period of record and for the period of no trend if 10 or more years of record were available for each analysis. Three methods are presented for computing selected streamflow statistics at ungaged locations. The first method uses power curve equations developed for 28 selected streams in Missouri and neighboring States that have multiple streamgages on the same streams. Statistical estimates on one of these streams can be calculated at an ungaged location that has a drainage area that is between 40 percent of the drainage area of the farthest upstream streamgage and within 150 percent of the drainage area of the farthest downstream streamgage along the stream of interest. The second method may be used on any stream with a streamgage that has operated for 10 years or longer and for which anthropogenic effects have not changed the low-flow characteristics at the ungaged location since collection of the streamflow data. A ratio of drainage area of the stream at the ungaged location to the drainage area of the stream at the streamgage was computed to estimate the statistic at the ungaged location. The range of applicability is between 40- and 150-percent of the drainage area of the streamgage, and the ungaged location must be located on the same stream as the streamgage. The third method uses regional regression equations to estimate selected low-flow frequency statistics for unregulated streams in Missouri. This report presents regression equations to estimate frequency statistics for the 10-year recurrence interval and for the N-day durations of 1, 2, 3, 7, 10, 30, and 60 days. Basin and climatic characteristics were computed using geographic information system software and digital geospatial data. A total of 35 characteristics were computed for use in preliminary statewide and regional regression analyses based on existing digital geospatial data and previous studies. 
Spatial analyses for geographical bias in the predictive accuracy of the regional regression equations defined three low-flow regions within the State, representing the three major physiographic provinces in Missouri. Region 1 includes the Central Lowlands, Region 2 includes the Ozark Plateaus, and Region 3 includes the Mississippi Alluvial Plain. A total of 207 streamgages were used in the regression analyses for the regional equations. Of the 207 U.S. Geological Survey streamgages, 77 were located in Region 1, 120 were located in Region 2, and 10 were located in Region 3. Streamgages located outside of Missouri were selected to extend the range of data used for the independent variables in the regression analyses. Streamgages included in the regression analyses had 10 or more years of record and were considered to be affected minimally by anthropogenic activities or trends. Regional regression analyses identified three characteristics as statistically significant for the development of regional equations. For Region 1, drainage area, longest flow path, and streamflow-variability index were statistically significant. The range in the standard error of estimate for Region 1 is 79.6 to 94.2 percent. For Region 2, drainage area and streamflow-variability index were statistically significant, and the range in the standard error of estimate is 48.2 to 72.1 percent. For Region 3, drainage area and streamflow-variability index also were statistically significant, with a range in the standard error of estimate of 48.1 to 96.2 percent. Limitations on estimating low-flow frequency statistics at ungaged locations depend on the method used. The first method outlined for use in Missouri, power curve equations, was developed to estimate the selected statistics for ungaged locations on 28 selected streams with multiple streamgages located on the same stream. A second method uses a drainage-area ratio to compute statistics at an ungaged location using data from a single streamgage on the same stream with 10 or more years of record. Ungaged locations on these streams may use the ratio of the drainage area at an ungaged location to the drainage area at a streamgage location to scale the selected statistic value from the streamgage location to the ungaged location. This method can be used if the drainage area of the ungaged location is within 40 to 150 percent of the streamgage drainage area. The third method is the use of the regional regression equations. The limits for the use of these equations are based on the ranges of the characteristics used as independent variables and on the requirement that streams be affected minimally by anthropogenic activities.
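The report's second method, the drainage-area-ratio transfer, is simple enough to state as code; a direct proportional scaling is assumed here, and the function name and example numbers are hypothetical.

```python
def drainage_area_ratio_estimate(stat_gaged, area_ungaged, area_gaged):
    """Scale a flow statistic from a streamgage to an ungaged location on
    the same stream; valid when the ungaged drainage area is within 40 to
    150 percent of the gaged drainage area."""
    ratio = area_ungaged / area_gaged
    if not 0.40 <= ratio <= 1.50:
        raise ValueError("outside the 40-150 percent applicability range")
    return stat_gaged * ratio

# e.g. transfer a 7-day, 10-year low flow of 12 ft3/s from a 250 mi2 gage
# to a 180 mi2 ungaged site on the same stream:
q = drainage_area_ratio_estimate(12.0, 180.0, 250.0)   # 8.64 ft3/s
```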
Wigner-Hough/Radon Transform for GPS Post-Correlation Integration (Preprint)
2007-09-01
The Wigner-Ville distribution (WVD) is a well-known method to estimate instantaneous frequency, which appears as a ... [Barbarossa, 1996]. In this method, the WVD is used to represent the signal energy in the time-frequency plane. For a signal x(t), its Wigner-Ville distribution is computed as W(t, f) = ∫ x(t + τ/2) x*(t − τ/2) e^(−j2πfτ) dτ, (4) where the integral runs over all τ and * stands for complex conjugation.
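For reference, a direct O(N^2) implementation of a discrete WVD in the spirit of Eq. (4), for an analytic signal (a plain sketch, not optimized, with hypothetical names):

```python
import numpy as np

def wigner_ville(x):
    """Discrete Wigner-Ville distribution W[n, k] of an analytic signal:
    a DFT over lag tau of x[n+tau]*conj(x[n-tau]); with this kernel,
    bin k corresponds to normalized frequency k/N cycles per sample
    (meaningful up to k < N/2 for an analytic signal)."""
    N = len(x)
    W = np.zeros((N, N))
    k = np.arange(N)
    for n in range(N):
        h = min(n, N - 1 - n)                   # largest symmetric lag
        tau = np.arange(-h, h + 1)
        r = x[n + tau] * np.conj(x[n - tau])    # instantaneous autocorrelation
        W[n] = np.real(np.exp(-4j * np.pi * np.outer(k, tau) / N) @ r) / N
    return W

t = np.arange(128)
x = np.exp(1j * 2 * np.pi * (0.05 * t + 0.001 * t ** 2))  # linear chirp
W = wigner_ville(x)   # the chirp traces a straight line in the (n, k) plane
```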
A uniform technique for flood frequency analysis.
Thomas, W.O.
1985-01-01
This uniform technique consisted of fitting the logarithms of annual peak discharges to a Pearson Type III distribution using the method of moments. The objective was to adopt a consistent approach for the estimation of floodflow frequencies that could be used in computing average annual flood losses for project evaluation. In addition, a consistent approach was needed for defining equitable flood-hazard zones as part of the National Flood Insurance Program. -from ASCE Publications Information
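A minimal sketch of that fit (method-of-moments log-Pearson Type III, using scipy's pearson3 for the quantile; the record below is illustrative, and agency skew-weighting refinements are omitted):

```python
import numpy as np
from scipy.stats import pearson3

def log_pearson3_quantile(annual_peaks, return_period):
    """Fit a Pearson Type III distribution to log10 annual peaks by the
    method of moments and return the T-year flood quantile."""
    y = np.log10(np.asarray(annual_peaks))
    n, mean, std = len(y), y.mean(), y.std(ddof=1)
    skew = n**2 * ((y - mean)**3).mean() / ((n - 1) * (n - 2) * std**3)
    p = 1.0 - 1.0 / return_period            # non-exceedance probability
    return 10 ** pearson3.ppf(p, skew, loc=mean, scale=std)

peaks = [1200., 3400., 980., 2100., 5600., 1750.,
         2900., 4100., 1500., 2600., 3300., 1900.]   # illustrative record
q100 = log_pearson3_quantile(peaks, 100.0)            # 100-year flood estimate
```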
An estimate of field size distributions for selected sites in the major grain producing countries
NASA Technical Reports Server (NTRS)
Podwysocki, M. H.
1977-01-01
The field size distributions for the major grain producing countries of the world were estimated. LANDSAT-1 and 2 images were evaluated for two areas each in the United States, the People's Republic of China, and the USSR. One scene each was evaluated for France, Canada, and India. Grid sampling was done for representative sub-samples of each image, measuring the long and short axes of each field; area was then calculated. Each of the resulting data sets was computer-analyzed for its frequency distribution. Nearly all frequency distributions were highly peaked and skewed (shifted) toward small values, approaching either a Poisson or a log-normal distribution. The data were normalized by a log transformation, creating a Gaussian distribution whose moments are readily interpretable and useful for estimating the total population of fields. The resultant predictors of the field size estimates are discussed.
Fast and accurate read-out of interferometric optical fiber sensors
NASA Astrophysics Data System (ADS)
Bartholsen, Ingebrigt; Hjelme, Dag R.
2016-03-01
We present results from an evaluation of phase and frequency estimation algorithms for read-out instrumentation of interferometric sensors. Tests on interrogating a micro Fabry-Perot sensor made of semi-spherical stimuli-responsive hydrogel immobilized on a single mode fiber end face, shows that an iterative quadrature demodulation technique (IQDT) implemented on a 32-bit microcontroller unit can achieve an absolute length accuracy of ±50 nm and length change accuracy of ±3 nm using an 80 nm SLED source and a grating spectrometer for interrogation. The mean absolute error for the frequency estimator is a factor 3 larger than the theoretical lower bound for a maximum likelihood estimator. The corresponding factor for the phase estimator is 1.3. The computation time for the IQDT algorithm is reduced by a factor 1000 compared to the full QDT for the same accuracy requirement.
A satellite-based radar wind sensor
NASA Technical Reports Server (NTRS)
Xin, Weizhuang
1991-01-01
The objective is to investigate the application of Doppler radar systems for global wind measurement. A model of the satellite-based radar wind sounder (RAWS) is discussed, and many critical problems in the design process, such as the antenna scan pattern, tracking the Doppler shift caused by satellite motion, and backscattering of radar signals from different types of clouds, are discussed along with their computer simulations. In addition, algorithms for measuring the mean frequency of radar echoes, such as the fast Fourier transform (FFT) estimator, the covariance estimator, and estimators based on autoregressive models, are discussed. Monte Carlo computer simulations were used to compare the performance of these algorithms. Anti-aliasing methods are discussed for the FFT and the autoregressive methods. Several algorithms for reducing radar ambiguity were studied, such as random phase coding methods and staggered pulse repetition frequency (PRF) methods. Computer simulations showed that these methods are not applicable to the RAWS because of the broad spectral widths of the radar echoes from clouds. A waveform modulation method using the concept of spread spectrum and correlation detection was developed to resolve the radar ambiguity. Radar ambiguity functions were used to analyze the effective signal-to-noise ratios for the waveform modulation method. The results showed that, with a suitable bandwidth product and modulation of the waveform, this method can achieve the desired maximum range and maximum frequency of the radar system.
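Of the estimators compared above, the covariance (pulse-pair) estimator is the most compact; a sketch with synthetic echoes follows (hypothetical names).

```python
import numpy as np

def pulse_pair_mean_frequency(z, prf):
    """Covariance (pulse-pair) mean-frequency estimate from complex echoes
    sampled at the PRF: f = (PRF / 2*pi) * arg(lag-1 autocovariance)."""
    r1 = np.sum(z[1:] * np.conj(z[:-1]))
    return prf * np.angle(r1) / (2 * np.pi)

rng = np.random.default_rng(4)
prf, f_dopp = 4000.0, 300.0
n = np.arange(256)
noise = 0.3 * (rng.standard_normal(256) + 1j * rng.standard_normal(256))
z = np.exp(2j * np.pi * f_dopp * n / prf) + noise
print(pulse_pair_mean_frequency(z, prf))   # ~300 Hz
```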
A fast estimation of shock wave pressure based on trend identification
NASA Astrophysics Data System (ADS)
Yao, Zhenjian; Wang, Zhongyu; Wang, Chenchen; Lv, Jing
2018-04-01
In this paper, a fast method based on trend identification is proposed to accurately estimate the shock wave pressure in a dynamic measurement. Firstly, the collected output signal of the pressure sensor is reconstructed by discrete cosine transform (DCT) to reduce the computational complexity for the subsequent steps. Secondly, the empirical mode decomposition (EMD) is applied to decompose the reconstructed signal into several components with different frequency-bands, and the last few low-frequency components are chosen to recover the trend of the reconstructed signal. In the meantime, the optimal component number is determined based on the correlation coefficient and the normalized Euclidean distance between the trend and the reconstructed signal. Thirdly, with the areas under the gradient curve of the trend signal, the stable interval that produces the minimum can be easily identified. As a result, the stable value of the output signal is achieved in this interval. Finally, the shock wave pressure can be estimated according to the stable value of the output signal and the sensitivity of the sensor in the dynamic measurement. A series of shock wave pressure measurements are carried out with a shock tube system to validate the performance of this method. The experimental results show that the proposed method works well in shock wave pressure estimation. Furthermore, comparative experiments also demonstrate the superiority of the proposed method over the existing approaches in both estimation accuracy and computational efficiency.
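The trend-selection criterion in the second step can be sketched as follows: given the EMD components (computed elsewhere), candidate trends built from the last few low-frequency components are scored against the reconstructed signal; how the correlation coefficient and the normalized Euclidean distance are combined into one score is an assumption here.

```python
import numpy as np

def select_trend(imfs, signal):
    """Choose how many trailing (lowest-frequency) EMD components to sum
    as the trend, scoring candidates by correlation with, and normalized
    Euclidean distance to, the signal (score combination assumed)."""
    best_trend, best_score = None, -np.inf
    for k in range(1, len(imfs) + 1):
        trend = np.sum(imfs[-k:], axis=0)
        corr = np.corrcoef(trend, signal)[0, 1]
        dist = np.linalg.norm(trend - signal) / np.linalg.norm(signal)
        if corr - dist > best_score:
            best_trend, best_score = trend, corr - dist
    return best_trend
```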
Model synthesis in frequency analysis of Missouri floods
Hauth, Leland D.
1974-01-01
Synthetic flood records for 43 small-stream sites aided in definition of techniques for estimating the magnitude and frequency of floods in Missouri. The long-term synthetic flood records were generated by use of a digital computer model of the rainfall-runoff process. A relatively short period of concurrent rainfall and runoff data observed at each of the 43 sites was used to calibrate the model, and rainfall records covering from 66 to 78 years for four Missouri sites and pan-evaporation data were used to generate the synthetic records. Flood magnitude and frequency characteristics of both the synthetic records and observed long-term flood records available for 109 large-stream sites were used in a multiple-regression analysis to define relations for estimating future flood characteristics at ungaged sites. That analysis indicated that drainage basin size and slope were the most useful estimating variables. It also indicated that a more complex regression model than the commonly used log-linear one was needed for the range of drainage basin sizes available in this study.
Accurate Evaluation Method of Molecular Binding Affinity from Fluctuation Frequency
NASA Astrophysics Data System (ADS)
Hoshino, Tyuji; Iwamoto, Koji; Ode, Hirotaka; Ohdomari, Iwao
2008-05-01
Exact estimation of the molecular binding affinity is significantly important for drug discovery. The energy calculation is a direct method to compute the strength of the interaction between two molecules. This energetic approach is, however, not accurate enough to evaluate a slight difference in binding affinity when distinguishing a prospective substance from dozens of candidates for medicine. Hence more accurate estimation of drug efficacy in a computer is currently demanded. Previously we proposed a concept of estimating molecular binding affinity, focusing on the fluctuation at an interface between two molecules. The aim of this paper is to demonstrate the compatibility between the proposed computational technique and experimental measurements, through several examples for computer simulations of an association of human immunodeficiency virus type-1 (HIV-1) protease and its inhibitor (an example for a drug-enzyme binding), a complexation of an antigen and its antibody (an example for a protein-protein binding), and a combination of estrogen receptor and its ligand chemicals (an example for a ligand-receptor binding). The proposed affinity estimation has proven to be a promising technique in the advanced stage of the discovery and the design of drugs.
Chiu, Su-Chin; Lin, Te-Ming; Lin, Jyh-Miin; Chung, Hsiao-Wen; Ko, Cheng-Wen; Büchert, Martin; Bock, Michael
2017-09-01
To investigate possible errors in T1 and T2 quantification via MR fingerprinting with balanced steady-state free precession readout in the presence of intra-voxel phase dispersion and RF pulse profile imperfections, using computer simulations based on the Bloch equations. A pulse sequence with TR changing in a Perlin noise pattern and a nearly sinusoidal pattern of flip angle following an initial 180-degree inversion pulse was employed. Gaussian distributions of off-resonance frequency were assumed for intra-voxel phase dispersion effects. Slice profiles of sinc-shaped RF pulses were computed to investigate flip angle profile influences. Following identification of the best fit between the acquisition signals and those established in the dictionary based on known parameters, estimation errors were reported. In vivo experiments were performed at 3T to examine the results. Slight intra-voxel phase dispersion with standard deviations from 1 to 3 Hz resulted in prominent T2 underestimations, particularly at large T2 values. T1 and off-resonance frequencies were relatively unaffected. Slice profile imperfections led to underestimations of T1, which became greater as regional off-resonance frequencies increased, but could be corrected by including slice profile effects in the dictionary. Results from brain imaging experiments in vivo agreed with the simulation results qualitatively. MR fingerprinting using balanced SSFP readout in the presence of intra-voxel phase dispersion and an imperfect slice profile leads to inaccuracies in quantitative estimation of the relaxation times.
Time-frequency analysis of band-limited EEG with BMFLC and Kalman filter for BCI applications
2013-01-01
Background Time-frequency analysis of the electroencephalogram (EEG) during different mental tasks has received significant attention. As EEG is non-stationary, time-frequency analysis is essential to analyze brain states during different mental tasks. Further, the time-frequency information of the EEG signal can be used as a feature for classification in brain-computer interface (BCI) applications. Methods To accurately model the EEG, a band-limited multiple Fourier linear combiner (BMFLC), a linear combination of truncated multiple Fourier series models, is employed. A state-space model for BMFLC in combination with a Kalman filter/smoother is developed to obtain accurate adaptive estimation. By virtue of construction, BMFLC with a Kalman filter/smoother provides accurate time-frequency decomposition of the band-limited signal. Results The proposed method is computationally fast and is suitable for real-time BCI applications. To evaluate the proposed algorithm, a comparison with the short-time Fourier transform (STFT) and the continuous wavelet transform (CWT) for both synthesized and real EEG data is performed in this paper. The proposed method is applied to BCI Competition data IV for ERD detection in comparison with existing methods. Conclusions Results show that the proposed algorithm can provide optimal time-frequency resolution as compared to STFT and CWT. For ERD detection, BMFLC-KF outperforms STFT and BMFLC-KS in real-time applicability with low computational requirements. PMID:24274109
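A minimal sketch of the BMFLC-with-Kalman-filter idea may help: a grid of frequencies spans the band, the state holds the sin/cos weights under a random-walk assumption, and a scalar-measurement Kalman filter updates them each sample. This is an illustrative reconstruction, not the authors' implementation; q and r are assumed process/measurement noise levels.

```python
import numpy as np

def bmflc_kalman(y, fs, f_lo, f_hi, df, q=1e-4, r=1e-2):
    """Track per-frequency sin/cos weights of a band-limited multiple
    Fourier linear combiner with a random-walk Kalman filter; returns
    an amplitude time-frequency map over the band [f_lo, f_hi]."""
    f = np.arange(f_lo, f_hi + df, df)
    n, m = len(y), 2 * len(f)
    x = np.zeros(m)                  # state: [a_k ; b_k]
    P = np.eye(m)
    tf = np.zeros((len(f), n))
    for t in range(n):
        arg = 2 * np.pi * f * t / fs
        H = np.concatenate([np.sin(arg), np.cos(arg)])  # observation row
        P = P + q * np.eye(m)                           # predict (random walk)
        S = H @ P @ H + r                               # innovation variance
        K = P @ H / S                                   # Kalman gain
        x = x + K * (y[t] - H @ x)                      # measurement update
        P = P - np.outer(K, H @ P)
        a, b = x[:len(f)], x[len(f):]
        tf[:, t] = np.hypot(a, b)                       # amplitude per frequency bin
    return f, tf
```

Because the band is fixed and narrow, the state dimension stays small, which is what makes the filter fast enough for real-time use.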
Estimating wheat growth with radar vegetation indices
USDA-ARS?s Scientific Manuscript database
In this study, we computed the Radar Vegetation Index (RVI) using observations made with a ground based multi-frequency polarimetric scatterometer system over an entire wheat growth period. The temporal variations of the backscattering coefficients for L-, C-, and X-band, RVI, vegetation water conte...
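The abstract does not spell out the index; a widely used definition (due to Kim and van Zyl) computes RVI from the backscatter coefficients in linear power units. A short, hedged illustration:

```python
def radar_vegetation_index(sigma_hh, sigma_vv, sigma_hv):
    """Commonly used RVI definition (Kim & van Zyl), with backscatter in
    linear power units (not dB); ranges ~0 (bare soil) to ~1 (dense canopy)."""
    return 8.0 * sigma_hv / (sigma_hh + sigma_vv + 2.0 * sigma_hv)

def db_to_linear(sigma_db):
    """Convert a backscatter coefficient from dB to linear power units."""
    return 10.0 ** (sigma_db / 10.0)
```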
NASA Astrophysics Data System (ADS)
Onwuemeka, J.; Liu, Y.; Harrington, R. M.; Peña-Castro, A. F.; Rodriguez Padilla, A. M.; Darbyshire, F. A.
2017-12-01
The Charlevoix Seismic Zone (CSZ), located in eastern Canada, experiences a high rate of intraplate earthquakes, hosting more than six M > 6 events since the 17th century. The seismicity rate is similarly high in the Western Quebec seismic zone (WQSZ), where an MN 5.2 event was reported on May 17, 2013. A good understanding of seismicity and its relation to the St-Lawrence paleorift system requires information about event source properties, such as static stress drop and fault orientation (via focal mechanism solutions). In this study, we conduct a systematic estimation of event source parameters using 1) hypoDD to relocate event hypocenters, 2) spectral analysis to derive corner frequency, magnitude, and hence static stress drops, and 3) first-arrival polarities to derive focal mechanism solutions of selected events. We use a combined dataset of 817 earthquakes cataloged between June 2012 and May 2017 from the Canadian National Seismograph Network (CNSN), and temporary deployments from the QM-III Earthscope FlexArray and McGill seismic networks. We first relocate 450 events using P- and S-wave differential travel times refined with waveform cross-correlation, and compute focal mechanism solutions for all events with impulsive P-wave arrivals at a minimum of 8 stations using the hybridMT moment tensor inversion algorithm. We then determine corner frequency and seismic moment values by fitting S-wave spectra on transverse components at all stations for all events. We choose the final corner frequency and moment values for each event using the median estimate at all stations. We use the corner frequency and moment estimates to calculate moment magnitudes, static stress-drop values, and rupture radii, assuming a circular rupture model. We also investigate scaling relationships between parameters and directivity, and compute apparent source dimensions and source time functions of 15 M 2.4+ events from second-degree moment estimates. To first order, source dimension estimates from both methods generally agree. We observe higher corner frequencies and higher stress drops (ranging from 20 to 70 MPa), typical of intraplate seismicity in comparison with interplate seismicity. We follow similar approaches to study 25 MN 3+ events reported in the WQSZ using data recorded by the CNSN and USArray Transportable Array.
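For reference, the standard circular-rupture (Brune) relations connecting the quantities estimated above have a compact closed form; the constants and the shear-wave speed below are textbook values, not the study's specific choices:

```python
import numpy as np

def brune_source_params(M0, fc, beta=3500.0):
    """Moment magnitude, rupture radius, and static stress drop from seismic
    moment M0 [N*m] and corner frequency fc [Hz], assuming a circular Brune
    rupture and shear-wave speed beta [m/s]."""
    Mw = (2.0 / 3.0) * np.log10(M0) - 6.07       # Hanks & Kanamori (1979), SI units
    r = 2.34 * beta / (2.0 * np.pi * fc)         # Brune (1970) rupture radius [m]
    stress_drop = 7.0 * M0 / (16.0 * r**3)       # static stress drop [Pa]
    return Mw, r, stress_drop
```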
Multichannel Phase and Power Detector
NASA Technical Reports Server (NTRS)
Li, Samuel; Lux, James; McMaster, Robert; Boas, Amy
2006-01-01
An electronic signal-processing system determines the phases of input signals arriving in multiple channels, relative to the phase of a reference signal with which the input signals are known to be coherent in both phase and frequency. The system also gives an estimate of the power levels of the input signals. A prototype of the system has four input channels that handle signals at a frequency of 9.5 MHz, but the basic principles of design and operation are extensible to other signal frequencies and greater numbers of channels. The prototype system consists mostly of three parts: An analog-to-digital-converter (ADC) board, which coherently digitizes the input signals in synchronism with the reference signal and performs some simple processing; A digital signal processor (DSP) in the form of a field-programmable gate array (FPGA) board, which performs most of the phase- and power-measurement computations on the digital samples generated by the ADC board; and A carrier board, which allows a personal computer to retrieve the phase and power data. The DSP contains four independent phase-only tracking loops, each of which tracks the phase of one of the preprocessed input signals relative to that of the reference signal (see figure). The phase values computed by these loops are averaged over intervals, the length of which is chosen to obtain output from the DSP at a desired rate. In addition, a simple sum of squares is computed for each channel as an estimate of the power of the signal in that channel. The relative phases and the power level estimates computed by the DSP could be used for diverse purposes in different settings. For example, if the input signals come from different elements of a phased-array antenna, the phases could be used as indications of the direction of arrival of a received signal and/or as feedback for electronic or mechanical beam steering. The power levels could be used as feedback for automatic gain control in preprocessing of incoming signals. For another example, the system could be used to measure the phases and power levels of outputs of multiple power amplifiers to enable adjustment of the amplifiers for optimal power combining.
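The core phase and power computation can be sketched as quadrature mixing followed by averaging. The actual system uses per-channel phase tracking loops in an FPGA, so the snippet below is only a block-averaged software analogue of the same measurement:

```python
import numpy as np

def phase_and_power(x, ref_freq, fs):
    """Estimate the phase of x relative to a coherent reference at ref_freq,
    plus a simple sum-of-squares power estimate: mix with quadrature
    references, average, then take atan2 (phase) and mean-square (power)."""
    t = np.arange(len(x)) / fs
    i = np.mean(x * np.cos(2 * np.pi * ref_freq * t))    # in-phase component
    q = np.mean(x * -np.sin(2 * np.pi * ref_freq * t))   # quadrature component
    phase = np.arctan2(q, i)         # radians, relative to the reference
    power = np.mean(x**2)            # sum-of-squares power estimate
    return phase, power
```

For x = A*cos(2*pi*f*t + phi), the averages converge to (A/2)cos(phi) and (A/2)sin(phi), so atan2 recovers phi directly.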
Random vibration analysis of space flight hardware using NASTRAN
NASA Technical Reports Server (NTRS)
Thampi, S. K.; Vidyasagar, S. N.
1990-01-01
During liftoff and ascent flight phases, the Space Transportation System (STS) and payloads are exposed to the random acoustic environment produced by engine exhaust plumes and aerodynamic disturbances. The analysis of payloads for randomly fluctuating loads is usually carried out using the Miles' relationship. This approximation technique computes an equivalent load factor as a function of the natural frequency of the structure, the power spectral density of the excitation, and the magnification factor at resonance. Due to the assumptions inherent in Miles' equation, random load factors are often over-estimated by this approach. In such cases, the estimates can be refined using alternate techniques such as time domain simulations or frequency domain spectral analysis. Described here is the use of NASTRAN to compute more realistic random load factors through spectral analysis. The procedure is illustrated using Spacelab Life Sciences (SLS-1) payloads and certain unique features of this problem are described. The solutions are compared with Miles' results in order to establish trends at over or under prediction.
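Miles' relationship mentioned above has a simple closed form for a single-degree-of-freedom system; a short illustration (the example numbers are arbitrary):

```python
import numpy as np

def miles_grms(fn, Q, psd_at_fn):
    """Miles' approximation for the RMS acceleration response of a
    single-DOF system with natural frequency fn [Hz] and amplification Q,
    driven by a base-excitation acceleration PSD W(fn) [g^2/Hz]."""
    return np.sqrt((np.pi / 2.0) * fn * Q * psd_at_fn)

# Example: fn = 80 Hz, Q = 10, W(fn) = 0.04 g^2/Hz
# -> about 7.1 g RMS; a 3-sigma design load factor would be ~21.3 g.
print(miles_grms(80.0, 10.0, 0.04))
```

Because the formula assumes a flat input PSD and a lightly damped single mode, it tends to over-predict loads for real multi-modal structures, which is the motivation for the NASTRAN spectral analysis described above.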
Estimation of Flood Discharges at Selected Recurrence Intervals for Streams in New Hampshire
Olson, Scott A.
2009-01-01
This report provides estimates of flood discharges at selected recurrence intervals for streamgages in and adjacent to New Hampshire and equations for estimating flood discharges at the 2-, 5-, 10-, 25-, 50-, 100-, and 500-year recurrence intervals for ungaged, unregulated, rural streams in New Hampshire. The equations were developed using generalized least-squares regression. Flood-frequency and drainage-basin characteristics from 117 streamgages were used in developing the equations. The drainage-basin characteristics used as explanatory variables in the regression equations include drainage area, mean April precipitation, percentage of wetland area, and main channel slope. The average standard errors of prediction for estimating the 2-, 5-, 10-, 25-, 50-, 100-, and 500-year recurrence interval flood discharges with these equations are 30.0, 30.8, 32.0, 34.2, 36.0, 38.1, and 43.4 percent, respectively. Flood discharges at selected recurrence intervals for selected streamgages were computed following the guidelines in Bulletin 17B of the U.S. Interagency Advisory Committee on Water Data. To determine the flood-discharge exceedance probabilities at streamgages in New Hampshire, a new generalized skew coefficient map covering the State was developed. The standard error of the data on the new map is 0.298. To improve estimates of flood discharges at selected recurrence intervals for 20 streamgages with short-term records (10 to 15 years), record extension using the two-station comparison technique was applied. The two-station comparison method uses data from a streamgage with a long-term record to adjust the frequency characteristics at a streamgage with a short-term record. A technique for adjusting a flood-discharge frequency curve computed from a streamgage record with results from the regression equations is described in this report. Also, a technique is described for estimating flood discharge at a selected recurrence interval for an ungaged site upstream or downstream from a streamgage using a drainage-area adjustment; a sketch of this adjustment follows below. The final regression equations and the flood-discharge frequency data used in this study will be available in StreamStats, a World Wide Web application providing automated regression-equation solutions for user-selected sites on streams.
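In its simplest form, the drainage-area adjustment is a power-law scaling by area ratio. A hedged sketch; the exponent is basin-specific (often taken near the drainage-area exponent of the regional regression equations), and the value below is only a placeholder:

```python
def area_adjusted_discharge(q_gage, a_gage, a_ungaged, exponent=0.7):
    """Transfer a flood-quantile estimate from a streamgage to a nearby
    ungaged site on the same stream by scaling with the drainage-area
    ratio. The exponent 0.7 is a placeholder, not the report's value."""
    return q_gage * (a_ungaged / a_gage) ** exponent
```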
Ahearn, Elizabeth A.
2009-01-01
A spring nor'easter affected the East Coast of the United States from April 15 to 18, 2007. In Connecticut, rainfall varied from 3 inches to more than 7 inches. The combined effects of heavy rainfall over a short duration, high winds, and high tides led to widespread flooding, storm damage, power outages, evacuations, and disruptions to traffic and commerce. The storm caused at least 18 fatalities (none in Connecticut). A Presidential Disaster Declaration was issued on May 11, 2007, for two counties in western Connecticut - Fairfield and Litchfield. This report documents hydrologic and meteorologic aspects of the April 2007 flood and includes estimates of the magnitude of the peak discharges and peak stages during the flood at 28 streamflow-gaging stations in western Connecticut. These data were used to perform flood-frequency analyses. Flood-frequency estimates provided in this report are expressed in terms of exceedance probabilities (the probability of a flood reaching or exceeding a particular magnitude in any year). Flood-frequency estimates for the 0.50, 0.20, 0.10, 0.04, 0.02, 0.01, and 0.002 exceedance probabilities (also expressed as 50-, 20-, 10-, 4-, 2-, 1-, and 0.2- percent exceedance probability, respectively) were computed for 24 of the 28 streamflow-gaging stations. Exceedance probabilities can further be expressed in terms of recurrence intervals (2-, 5-, 10-, 25-, 50-, 100-, and 500-year recurrence interval, respectively). Flood-frequency estimates computed in this study were compared to the flood-frequency estimates used to derive the water-surface profiles in previously published Federal Emergency Management Agency (FEMA) Flood Insurance Studies. The estimates in this report update and supersede previously published flood-frequency estimates for streamflowgaging stations in Connecticut by incorporating additional years of annual peak discharges, including the peaks for the April 2007 flood. In the southwest coastal region of Connecticut, the April 2007 peak discharges for streamflow-gaging stations with records extending back to 1955 were the second highest peak discharges on record; the 1955 annual peak discharges are the highest peak discharges in the station records. In the Housatonic and South Central Coast Basins, the April 2007 peak discharges for streamflow-gaging stations with records extending back to 1930 or earlier ranked between the fourth and eighth highest discharges on record, with the 1936, 1938, and 1955 floods as the largest floods in the station records. The peak discharges for the April 2007 flood have exceedance probabilities ranging between 0.10 to 0.02 (a 10- to 2-percent chance of being exceeded in a given year, respectively) with the majority (80 percent) of the stations having exceedance probabilities between 0.10 to 0.04. At three stations - Norwalk River at South Wilton, Pootatuck River at Sandy Hook, and Still River at Robertsville - the April 2007 peak discharges have an exceedance probability of 0.02. Flood-frequency estimates made after the April 2007 flood were compared to flood-frequency estimates used to derive the water-surface profiles (also called flood profiles) in FEMA Flood Insurance Studies developed for communities. In general, the comparison indicated that at the 0.10 exceedance probability (a 10-percent change of being exceeded in a given year), the discharges from the current (2007) flood-frequency analysis are larger than the discharges in the FEMA Flood Insurance Studies, with a median change of about +10 percent. 
In contrast, at the 0.01 exceedance probability (a 1-percent change of being exceeded in a year), the discharges from the current flood-frequency analysis are smaller than the discharges in the FEMA Flood Insurance Studies, with a median change of about -13 percent. Several stations had more than + 25 percent change in discharges at the 0.10 exceedance probability and are in the following communities: Winchester (Still River at Robertsv
Comparison of methods for the detection of gravitational waves from unknown neutron stars
NASA Astrophysics Data System (ADS)
Walsh, S.; Pitkin, M.; Oliver, M.; D'Antonio, S.; Dergachev, V.; Królak, A.; Astone, P.; Bejger, M.; Di Giovanni, M.; Dorosh, O.; Frasca, S.; Leaci, P.; Mastrogiovanni, S.; Miller, A.; Palomba, C.; Papa, M. A.; Piccinni, O. J.; Riles, K.; Sauter, O.; Sintes, A. M.
2016-12-01
Rapidly rotating neutron stars are promising sources of continuous gravitational wave radiation for the LIGO and Virgo interferometers. The majority of neutron stars in our galaxy have not been identified with electromagnetic observations. All-sky searches for isolated neutron stars offer the potential to detect gravitational waves from these unidentified sources. The parameter space of these blind all-sky searches, which also cover a large range of frequencies and frequency derivatives, presents a significant computational challenge. Different methods have been designed to perform these searches within acceptable computational limits. Here we describe the first benchmark in a project to compare the search methods currently available for the detection of unknown isolated neutron stars. The five methods compared here are individually referred to as the PowerFlux, sky Hough, frequency Hough, Einstein@Home, and time domain F-statistic methods. We employ a mock data challenge to compare the ability of each search method to recover signals simulated assuming a standard signal model. We find similar performance among the four quick-look search methods, while the more computationally intensive search method, Einstein@Home, achieves up to a factor of two higher sensitivity. We find that the absence of a second derivative frequency in the search parameter space does not degrade search sensitivity for signals with physically plausible second derivative frequencies. We also report on the parameter estimation accuracy of each search method, and the stability of the sensitivity in frequency and frequency derivative and in the presence of detector noise.
Security screening via computational imaging using frequency-diverse metasurface apertures
NASA Astrophysics Data System (ADS)
Smith, David R.; Reynolds, Matthew S.; Gollub, Jonah N.; Marks, Daniel L.; Imani, Mohammadreza F.; Yurduseven, Okan; Arnitz, Daniel; Pedross-Engel, Andreas; Sleasman, Timothy; Trofatter, Parker; Boyarsky, Michael; Rose, Alec; Odabasi, Hayrettin; Lipworth, Guy
2017-05-01
Computational imaging is a proven strategy for obtaining high-quality images with fast acquisition rates and simpler hardware. Metasurfaces provide exquisite control over electromagnetic fields, enabling the radiated field to be molded into unique patterns. The fusion of these two concepts can bring about revolutionary advances in the design of imaging systems for security screening. In the context of computational imaging, each field pattern serves as a single measurement of a scene; imaging a scene can then be interpreted as estimating the reflectivity distribution of a target from a set of measurements. As with any computational imaging system, the key challenge is to arrive at a minimal set of measurements from which a diffraction-limited image can be resolved. Here, we show that the information content of a frequency-diverse metasurface aperture can be maximized by design, and used to construct a complete millimeter-wave imaging system spanning a 2 m by 2 m area, consisting of 96 metasurfaces, capable of producing diffraction-limited images of human-scale targets. The metasurface-based frequency-diverse system presented in this work represents an inexpensive, but tremendously flexible alternative to traditional hardware paradigms, offering the possibility of low-cost, real-time, and ubiquitous screening platforms.
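In this formulation, the measurements stack into g = H f + noise, where each row of H is the scene-projected field pattern of one measurement. A minimal sketch of two standard estimators (matched filter and ridge-regularized least squares), not the authors' reconstruction pipeline:

```python
import numpy as np

def reconstruct_scene(H, g, ridge=1e-3):
    """Estimate a scene reflectivity vector f from frequency-diverse
    measurements g = H f + noise. Returns a matched-filter estimate and
    a Tikhonov-regularized least-squares estimate."""
    f_mf = H.conj().T @ g                         # matched filter
    m = H.shape[1]
    f_ls = np.linalg.solve(H.conj().T @ H + ridge * np.eye(m),
                           H.conj().T @ g)        # regularized inversion
    return f_mf, f_ls
```

Maximizing the information content of the aperture amounts to making the rows of H as mutually distinct (low-correlated) as possible, which conditions this inversion well.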
PoMo: An Allele Frequency-Based Approach for Species Tree Estimation
De Maio, Nicola; Schrempf, Dominik; Kosiol, Carolin
2015-01-01
Incomplete lineage sorting can cause incongruencies of the overall species-level phylogenetic tree with the phylogenetic trees for individual genes or genomic segments. If these incongruencies are not accounted for, it is possible to incur several biases in species tree estimation. Here, we present a simple maximum likelihood approach that accounts for ancestral variation and incomplete lineage sorting. We use a POlymorphisms-aware phylogenetic MOdel (PoMo) that we have recently shown to efficiently estimate mutation rates and fixation biases from within and between-species variation data. We extend this model to perform efficient estimation of species trees. We test the performance of PoMo in several different scenarios of incomplete lineage sorting using simulations and compare it with existing methods both in accuracy and computational speed. In contrast to other approaches, our model does not use coalescent theory but is allele frequency based. We show that PoMo is well suited for genome-wide species tree estimation and that on such data it is more accurate than previous approaches. PMID:26209413
A Method for Rapid Measurement of Contrast Sensitivity on Mobile Touch-Screens
NASA Technical Reports Server (NTRS)
Mulligan, Jeffrey B.
2016-01-01
Touch-screen displays in cell phones and tablet computers are now pervasive, making them an attractive option for vision testing outside of the laboratory or clinic. Here we describe a novel method in which subjects use a finger swipe to indicate the transition from visible to invisible on a grating which is swept in both contrast and frequency. Because a single image can be swiped in about a second, it is practical to use a series of images to zoom in on particular ranges of contrast or frequency, both to increase the accuracy of the measurements and to obtain an estimate of the reliability of the subject. Sensitivities to chromatic and spatio-temporal modulations are easily measured using the same method. A prototype has been developed for Apple Computer's iPad/iPod/iPhone family of devices, implemented using an open-source scripting environment known as QuIP (QUick Image Processing, http://hsi.arc.nasa.gov/groups/scanpath/research.php). Preliminary data show good agreement with estimates obtained from traditional psychophysical methods as well as newer rapid estimation techniques. Issues relating to device calibration are also discussed.
Precession feature extraction of ballistic missile warhead with high velocity
NASA Astrophysics Data System (ADS)
Sun, Huixia
2018-04-01
This paper establishes the precession model of ballistic missile warhead, and derives the formulas of micro-Doppler frequency induced by the target with precession. In order to obtain micro-Doppler feature of ballistic missile warhead with precession, micro-Doppler bandwidth estimation algorithm, which avoids velocity compensation, is presented based on high-resolution time-frequency transform. The results of computer simulations confirm the effectiveness of the proposed method even with low signal-to-noise ratio.
Granata, Massimo; Craig, Kieran; Cagnoli, Gianpietro; Carcy, Cécile; Cunningham, William; Degallaix, Jérôme; Flaminio, Raffaele; Forest, Danièle; Hart, Martin; Hennig, Jan-Simon; Hough, James; MacLaren, Ian; Martin, Iain William; Michel, Christophe; Morgado, Nazario; Otmani, Salim; Pinard, Laurent; Rowan, Sheila
2013-12-15
We report on low-frequency measurements of the mechanical loss of a high-quality (transmissivity T < 5 ppm at λ0 = 1064 nm, absorption loss < 0.5 ppm) multilayer dielectric coating of ion-beam-sputtered fused silica and titanium-doped tantala in the 10-300 K temperature range. A useful parameter for the computation of coating thermal noise on different substrates is derived as a function of temperature and frequency.
Estimates of the solar internal angular velocity obtained with the Mt. Wilson 60-foot solar tower
NASA Technical Reports Server (NTRS)
Rhodes, Edward J., Jr.; Cacciani, Alessandro; Woodard, Martin; Tomczyk, Steven; Korzennik, Sylvain
1987-01-01
Estimates are obtained of the solar internal angular velocity from measurements of the frequency splittings of p-mode oscillations. A 16-day time series of full-disk Dopplergrams obtained during July and August 1984 at the 60-foot tower telescope of the Mt. Wilson Observatory is analyzed. Power spectra were computed for all of the zonal, tesseral, and sectoral p-modes from l = 0 to 89 and for all of the sectoral p-modes from l = 90 to 200. A mean power spectrum was calculated for each degree up to 89. The frequency differences of all of the different nonzonal modes were calculated for these mean power spectra.
Salinet, João L; Masca, Nicholas; Stafford, Peter J; Ng, G André; Schlindwein, Fernando S
2016-03-08
Areas with high-frequency activity within the atrium are thought to be 'drivers' of the rhythm in patients with atrial fibrillation (AF), and ablation of these areas seems to be an effective therapy in eliminating the dominant frequency (DF) gradient and restoring sinus rhythm. Clinical groups have applied the traditional FFT-based approach to generate three-dimensional dominant frequency (3D DF) maps during electrophysiology (EP) procedures, but the literature on alternative spectral estimation techniques, which can have better frequency resolution than FFT-based spectral estimation, is limited. Autoregressive (AR) model-based spectral estimation techniques, with emphasis on selection of an appropriate sampling rate and AR model order, were implemented to generate high-density 3D DF maps of atrial electrograms (AEGs) in persistent atrial fibrillation (persAF). For each patient, 2048 simultaneous AEGs were recorded for 20.478-s-long segments in the left atrium (LA) and exported for analysis, together with their anatomical locations. After the DFs were identified using AR-based spectral estimation, they were colour coded to produce sequential 3D DF maps. These maps were systematically compared with maps found using the Fourier-based approach. 3D DF maps can be obtained using AR-based spectral estimation after AEG downsampling (DS), and the resulting maps are very similar to those obtained using FFT-based spectral estimation (mean agreement 90.23 %). There were no significant differences between AR techniques (p = 0.62). The processing time for the AR-based approach was considerably shorter (from 5.44 to 5.05 s) when lower sampling frequencies and model order values were used. Higher levels of DS presented higher rates of DF agreement (sampling frequency of 37.5 Hz). We have demonstrated the feasibility of using AR spectral estimation methods for producing 3D DF maps and characterised their differences from the maps produced using the FFT technique, offering an alternative approach for 3D DF computation in human persAF studies.
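A minimal Yule-Walker version of AR-based dominant-frequency estimation follows; the model order and any prior downsampling are data-dependent choices, and the values below are placeholders only:

```python
import numpy as np

def ar_dominant_frequency(x, fs, order=12, nfft=4096):
    """Dominant frequency from an AR (Yule-Walker) power spectrum:
    solve the normal equations on the sample autocorrelation, evaluate
    the AR spectrum on a frequency grid, and take the argmax."""
    x = x - x.mean()
    r = np.correlate(x, x, "full")[len(x) - 1:] / len(x)   # autocorrelation
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:order + 1])                 # AR coefficients
    sigma2 = r[0] - a @ r[1:order + 1]                     # innovation variance
    f = np.linspace(0, fs / 2, nfft)
    z = np.exp(-2j * np.pi * f / fs)
    denom = 1 - sum(a[k] * z**(k + 1) for k in range(order))
    psd = sigma2 / np.abs(denom)**2                        # AR power spectrum
    return f[np.argmax(psd)], f, psd
```

Because the AR spectrum is a smooth rational function, a coarse grid (and hence downsampled data) still localizes the peak, which is consistent with the shorter processing times reported above.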
Implementing a Digital Phasemeter in an FPGA
NASA Technical Reports Server (NTRS)
Rao, Shanti R.
2008-01-01
Firmware for implementing a digital phasemeter within a field-programmable gate array (FPGA) has been devised. In the original application of this firmware, the phase that one seeks to measure is the difference between the phases of two nominally-equal-frequency heterodyne signals generated by two interferometers. In that application, zero-crossing detectors convert the heterodyne signals to trains of rectangular pulses, the two pulse trains are fed to a fringe counter (the major part of the phasemeter) controlled by a clock signal having a frequency greater than the heterodyne frequency, and the fringe counter computes a time-averaged estimate of the difference between the phases of the two pulse trains. The firmware also does the following: Causes the FPGA to compute the frequencies of the input signals; Causes the FPGA to implement an Ethernet (or equivalent) transmitter for readout of phase and frequency values; and Provides data for use in diagnosis of communication failures. The readout rate can be set, by programming, to a value between 250 Hz and 1 kHz. Network addresses can be programmed by the user.
Estimation of Unsteady Aerodynamic Models from Dynamic Wind Tunnel Data
NASA Technical Reports Server (NTRS)
Murphy, Patrick; Klein, Vladislav
2011-01-01
Demanding aerodynamic modelling requirements for military and civilian aircraft have motivated researchers to improve computational and experimental techniques and to pursue closer collaboration in these areas. Model identification and validation techniques are key components for this research. This paper presents mathematical model structures and identification techniques that have been used successfully to model more general aerodynamic behaviours in single-degree-of-freedom dynamic testing. Model parameters, characterizing aerodynamic properties, are estimated using linear and nonlinear regression methods in both time and frequency domains. Steps in identification including model structure determination, parameter estimation, and model validation, are addressed in this paper with examples using data from one-degree-of-freedom dynamic wind tunnel and water tunnel experiments. These techniques offer a methodology for expanding the utility of computational methods in application to flight dynamics, stability, and control problems. Since flight test is not always an option for early model validation, time history comparisons are commonly made between computational and experimental results and model adequacy is inferred by corroborating results. An extension is offered to this conventional approach where more general model parameter estimates and their standard errors are compared.
Deurenberg, P; Andreoli, A; de Lorenzo, A
1996-01-01
Total body water and extracellular water were measured by deuterium oxide and bromide dilution respectively in 23 healthy males and 25 healthy females. In addition, total body impedance was measured at 17 frequencies, ranging from 1 kHz to 1350 kHz. Modelling programs were used to extrapolate impedance values to frequency zero (extracellular resistance) and frequency infinity (total body water resistance). Impedance indexes (height²/Zf) were computed at all 17 frequencies. The estimation errors of extracellular resistance and total body water resistance were 1% and 3%, respectively. Impedance and impedance index at low frequency were correlated with extracellular water, independent of the amount of total body water. Total body water showed the greatest correlation with impedance and impedance index at high frequencies. Extrapolated impedance values did not show a higher correlation compared to measured values. Prediction formulas from the literature applied to fixed frequencies showed the best mean and individual predictions for both extracellular water and total body water. It is concluded that, at least in healthy individuals with normal body water distribution, modelling impedance data has no advantage over impedance values measured at fixed frequencies, probably due to estimation errors in the modelled data.
Low-flow characteristics of Indiana streams
Fowler, K.K.; Wilson, J.T.
1996-01-01
Knowledge of low-flow characteristics of streams is essential for management of water resources. Low-flow characteristics are presented for 229 continuous-record, streamflow-gaging stations and 285 partial-record stations in Indiana. Low-flow-frequency characteristics were computed for 210 continuous-record stations that had at least 10 years of record, and flow-duration curves were computed for all continuous-record stations. Low-flow-frequency and flow-duration analyses are based on available streamflow records through September 1993. Selected low-flow-frequency curves were computed for annual low flows and seasonal low flows. The four seasons are represented by the 3-month groups of March-May, June-August, September-November, and December-February. The 7-day, 10-year and the 7-day, 2-year low flows were estimated for 285 partial-record stations, which are ungaged sites where streamflow measurements were made at base flow. The same low-flow characteristics were estimated for 19 continuous-record stations where less than 10 years of record were available. Precipitation and geology directly influence the streams in Indiana. Streams in the northern, glaciated part of the State tend to have higher sustained base flows than those in the nonglaciated southern part. Flow at several of the continuous-record gaging stations is affected by some form of regulation or diversion. Low-flow characteristics for continuous-record stations at which flow is affected by regulation are determined using the period of record affected by regulation; natural flows prior to regulation are not used.
A comparison of the wavelet and short-time fourier transforms for Doppler spectral analysis.
Zhang, Yufeng; Guo, Zhenyu; Wang, Weilian; He, Side; Lee, Ting; Loew, Murray
2003-09-01
Doppler spectrum analysis provides a non-invasive means to measure blood flow velocity and to diagnose arterial occlusive disease. The time-frequency representation of the Doppler blood flow signal is normally computed by using the short-time Fourier transform (STFT). This transform requires stationarity of the signal during a finite time interval, and thus imposes some constraints on the representation estimate. In addition, the STFT has a fixed time-frequency window, making it inaccurate to analyze signals having relatively wide bandwidths that change rapidly with time. In the present study, wavelet transform (WT), having a flexible time-frequency window, was used to investigate its advantages and limitations for the analysis of the Doppler blood flow signal. Representations computed using the WT with a modified Morlet wavelet were investigated and compared with the theoretical representation and those computed using the STFT with a Gaussian window. The time and frequency resolutions of these two approaches were compared. Three indices, the normalized root-mean-squared errors of the minimum, the maximum and the mean frequency waveforms, were used to evaluate the performance of the WT. Results showed that the WT can not only be used as an alternative signal processing tool to the STFT for Doppler blood flow signals, but can also generate a time-frequency representation with better resolution than the STFT. In addition, the WT method can provide both satisfactory mean frequencies and maximum frequencies. This technique is expected to be useful for the analysis of Doppler blood flow signals to quantify arterial stenoses.
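The flexible time-frequency window of the WT comes from scaling the analysis wavelet with frequency. A small numpy-only Morlet CWT makes the contrast with a fixed-window STFT concrete; w0 = 6 is a conventional choice, not the paper's modified-Morlet parameterization, and the signal is assumed longer than the kernel at the lowest analysis frequency:

```python
import numpy as np

def morlet_cwt(x, fs, freqs, w0=6.0):
    """Continuous wavelet transform with a Morlet wavelet, one convolution
    per analysis frequency. The effective window length scales as 1/f,
    giving fine time resolution at high f and fine frequency resolution
    at low f."""
    out = np.empty((len(freqs), len(x)), dtype=complex)
    for i, f in enumerate(freqs):
        s = w0 * fs / (2 * np.pi * f)                # scale in samples
        t = np.arange(-int(4 * s), int(4 * s) + 1)   # +/- 4 standard deviations
        psi = np.exp(1j * w0 * t / s) * np.exp(-0.5 * (t / s) ** 2)
        psi /= np.sqrt(s)                            # per-scale normalization
        out[i] = np.convolve(x, np.conj(psi[::-1]), mode="same")
    return out
```

Taking np.abs of the result gives the time-frequency magnitude map from which mean and maximum frequency waveforms can be extracted.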
Multivariable frequency domain identification via 2-norm minimization
NASA Technical Reports Server (NTRS)
Bayard, David S.
1992-01-01
The author develops a computational approach to multivariable frequency domain identification, based on 2-norm minimization. In particular, a Gauss-Newton (GN) iteration is developed to minimize the 2-norm of the error between frequency domain data and a matrix fraction transfer function estimate. To improve the global performance of the optimization algorithm, the GN iteration is initialized using the solution to a particular sequentially reweighted least squares problem, denoted as the SK iteration. The least squares problems which arise from both the SK and GN iterations are shown to involve sparse matrices with identical block structure. A sparse matrix QR factorization method is developed to exploit the special block structure, and to efficiently compute the least squares solution. A numerical example involving the identification of a multiple-input multiple-output (MIMO) plant having 286 unknown parameters is given to illustrate the effectiveness of the algorithm.
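For the scalar case, the SK iteration reduces to repeated weighted linear least squares, reweighting by the previous denominator estimate. The sketch below illustrates that idea only; the paper's multivariable matrix-fraction formulation and sparse QR machinery are not reproduced here:

```python
import numpy as np

def sk_fit(H, w, nb, na, iters=10):
    """Sanathanan-Koerner iteration: fit H(jw) ~ B(jw)/A(jw) with A monic.
    Each pass solves the linearized problem  W*(B - H*(A-1)) = W*H  in
    least squares, with weight W = 1/A_prev."""
    s = 1j * w
    A_prev = np.ones_like(s)
    for _ in range(iters):
        W = 1.0 / A_prev
        cols = [W * s**k for k in range(nb + 1)]            # numerator columns
        cols += [-W * H * s**k for k in range(1, na + 1)]   # denominator columns
        M = np.column_stack(cols)
        rhs = W * H
        Mri = np.vstack([M.real, M.imag])                   # real-valued LS
        rri = np.concatenate([rhs.real, rhs.imag])
        theta, *_ = np.linalg.lstsq(Mri, rri, rcond=None)
        b = theta[:nb + 1]
        a = np.concatenate([[1.0], theta[nb + 1:]])
        A_prev = sum(a[k] * s**k for k in range(na + 1))
    B = sum(b[k] * s**k for k in range(nb + 1))
    return b, a, B / A_prev
```

As in the paper, the SK solution is a natural initializer for a subsequent Gauss-Newton refinement of the true 2-norm cost.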
Computational expressions for signals in frequency-modulation spectroscopy.
Di Rosa, Michael D; Reiten, M T
2015-06-01
General expressions for the signals in frequency-modulation spectroscopy (FMS) appear in the literature but are often reduced to simple analytical equations following the assumption of a weak modulation index. This is of little help to the experimentalist who wants to predict signals for modulation depths of the order of unity or greater, where strong FMS signals reside. Here, we develop general formulas for FMS signals in the case of an absorber with a Voigt line shape and then link these expressions to an example and existing numerical code for the line shape. The resulting computational recipe is easy to implement and is exercised here to show where the larger FMS signals are found over the coordinates of modulation index and modulation frequency. One can also estimate from provided curves the in-phase FMS signal over a wide range of modulation parameters at either the Lorentzian-broadening or Doppler-broadening limit, or anywhere in between by interpolation.
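The Voigt line-shape evaluation that such a recipe rests on is commonly done with the Faddeeva function. A standard formulation (illustrative, not the authors' code):

```python
import numpy as np
from scipy.special import wofz

def voigt(nu, nu0, sigma, gamma):
    """Area-normalized Voigt profile: a Gaussian of standard deviation
    sigma convolved with a Lorentzian of half-width gamma, evaluated via
    the Faddeeva function wofz."""
    z = ((nu - nu0) + 1j * gamma) / (sigma * np.sqrt(2.0))
    return wofz(z).real / (sigma * np.sqrt(2.0 * np.pi))
```

Setting gamma to zero recovers the Doppler (Gaussian) limit and sigma to (nearly) zero the Lorentzian limit, the two ends of the interpolation mentioned above.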
NASA Astrophysics Data System (ADS)
Hasan, Mohammed A.
1997-11-01
In this dissertation, we present several novel approaches for detection and identification of targets of arbitrary shapes from acoustic backscattered data using the incident waveform. This problem is formulated as time-delay estimation and sinusoidal frequency estimation problems, both of which have applications in many other important areas of signal processing. Solving the time-delay estimation problem allows the identification of the specular components in the backscattered signal from elastic and non-elastic targets. Thus, accurate estimation of these time delays would help in determining the existence of certain clues for detecting targets. Several new methods for solving these two problems in the time, frequency, and wavelet domains are developed. In the time domain, a new block fast transversal filter (BFTF) is proposed for a fast implementation of the least squares (LS) method. This BFTF algorithm is derived by using a data-related constrained block-LS cost function to guarantee global optimality. The new soft-constrained algorithm provides an efficient way of transferring weight information between blocks of data, and thus it is computationally very efficient compared with other LS-based schemes. Additionally, the tracking ability of the algorithm can be controlled by varying the block length and/or the soft-constraint parameter. The effectiveness of this algorithm is tested on several underwater acoustic backscattered data sets for elastic targets and non-elastic (cement chunk) objects. In the frequency domain, the time-delay estimation problem is converted to a sinusoidal frequency estimation problem by using the discrete Fourier transform. Then, the lagged sample covariance matrices of the resulting signal are computed and studied in terms of their eigenstructure. These matrices are shown to be robust and effective in extracting bases for the signal and noise subspaces. New MUSIC and matrix pencil-based methods are derived from these subspaces. The effectiveness of the method is demonstrated on the problem of detection of multiple specular components in the acoustic backscattered data. Finally, a method for the estimation of time delays using wavelet decomposition is derived. The sub-band adaptive filtering uses the discrete wavelet transform for multi-resolution or sub-band decomposition. Joint time-delay estimation for identifying multi-specular components and subsequent adaptive filtering processes are performed on the signal in each sub-band. This provides multiple 'looks' at the signal at different resolution scales, which results in more accurate estimates for delays associated with the specular components. Simulation results on simulated and real shallow-water data are provided, which show the promise of this new scheme for target detection in a heavily cluttered environment.
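A generic MUSIC pseudospectrum for sinusoidal frequency estimation, of the kind referred to above, can be sketched in a few lines. This is illustrative only; the dissertation's lagged-covariance and matrix-pencil variants differ in detail:

```python
import numpy as np

def music_pseudospectrum(x, p, m=32, nfft=8192):
    """MUSIC for p real sinusoids: form an m x m sample covariance from
    sliding snapshots, eigendecompose it, project steering vectors onto
    the noise subspace, and return the pseudospectrum (peaks mark the
    sinusoid frequencies, in cycles/sample)."""
    n = len(x)
    X = np.array([x[i:i + m] for i in range(n - m + 1)])   # snapshot matrix
    R = X.T @ X / X.shape[0]                               # sample covariance
    vals, vecs = np.linalg.eigh(R)                         # ascending eigenvalues
    En = vecs[:, :m - 2 * p]         # noise subspace (2p signal dims for real tones)
    f = np.linspace(0, 0.5, nfft)
    a = np.exp(-2j * np.pi * np.outer(np.arange(m), f))    # steering vectors
    denom = np.sum(np.abs(En.conj().T @ a) ** 2, axis=0)
    return f, 1.0 / denom
```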
A novel cost-effective parallel narrowband ANC system with local secondary-path estimation
NASA Astrophysics Data System (ADS)
Delegà, Riccardo; Bernasconi, Giancarlo; Piroddi, Luigi
2017-08-01
Many noise reduction applications are targeted at multi-tonal disturbances. Active noise control (ANC) solutions for such problems are generally based on the combination of multiple adaptive notch filters. Both the performance and the computational cost are negatively affected by an increase in the number of controlled frequencies. In this work we study a different modeling approach for the secondary path, based on the estimation of various small local models in adjacent frequency subbands, that greatly reduces the impact of reference-filtering operations in the ANC algorithm. Furthermore, in combination with a frequency-specific step size tuning method it provides a balanced attenuation performance over the whole controlled frequency range (and particularly in the high end of the range). Finally, the use of small local models is greatly beneficial for the reactivity of the online secondary path modeling algorithm when the characteristics of the acoustic channels are time-varying. Several simulations are provided to illustrate the positive features of the proposed method compared to other well-known techniques.
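For a single controlled tone, the parallel narrowband ANC structure reduces to two adaptive weights on a quadrature reference, with the reference filtered through the secondary-path response at that frequency. A minimal sketch, assuming that local per-band response (gain and phase at f0) is known, which is exactly the kind of small local model the approach above advocates:

```python
import numpy as np

def narrowband_fxlms(d, f0, fs, sec_gain, sec_phase, mu=0.01):
    """Single-tone filtered-x LMS canceller. d is the disturbance at the
    error microphone; the secondary path is modeled only by its response
    at f0. Returns the residual error and the converged weights."""
    w = np.zeros(2)
    e = np.zeros(len(d))
    for t in range(len(d)):
        ph = 2 * np.pi * f0 * t / fs
        xf = sec_gain * np.array([np.cos(ph + sec_phase),   # reference filtered
                                  np.sin(ph + sec_phase)])  # through secondary path
        e[t] = d[t] - w @ xf          # residual at the error microphone
        w += mu * e[t] * xf           # FxLMS weight update
    return e, w
```

Running one such cell per controlled frequency, each with its own local secondary-path response and step size, is the parallel structure whose cost and balance the paper addresses.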
Alonso-Valerdi, Luz María
2016-01-01
A brain-computer interface (BCI) aims to establish communication between the human brain and a computing system so as to enable the interaction between an individual and his environment without using the brain output pathways. Individuals control a BCI system by modulating their brain signals through mental tasks (e.g., motor imagery or mental calculation) or sensory stimulation (e.g., auditory, visual, or tactile). As users modulate their brain signals at different frequencies and at different levels, the appropriate characterization of those signals is necessary. The modulation of brain signals through mental tasks is furthermore a skill that requires training. Unfortunately, not all users acquire this skill. A practical solution to this problem is to assess the user's probability of controlling a BCI system. Another possible solution is to set the bandwidth of the brain oscillations, which is highly sensitive to the user's age, sex, and anatomy. With this in mind, NeuroIndeX, a Python executable script, estimates a neurophysiological prediction index and the individual alpha frequency (IAF) of the user in question. These two parameters are useful to characterize the user's EEG signals and to decide how to go through the complex process of adapting the human brain and the computing system on the basis of previously proposed methods. NeuroIndeX not only implements those methods but also makes them complement one another, and it provides an alternative way to obtain the prediction parameter. However, an important limitation of this application is its dependency on the IAF value, and some results should be interpreted with caution. The script, along with some electroencephalographic datasets, is available on a GitHub repository in order to corroborate the functionality and usability of this application. PMID:27445783
Seismpol: a Visual-Basic computer program for interactive and automatic earthquake waveform analysis
NASA Astrophysics Data System (ADS)
Patanè, Domenico; Ferrari, Ferruccio
1997-11-01
A Microsoft Visual-Basic computer program for waveform analysis of seismic signals is presented. The program combines interactive and automatic processing of digital signals using data recorded by three-component seismic stations. The analysis procedure can be used in either interactive earthquake analysis or automatic on-line processing of seismic recordings. The algorithm works in the time domain using the Covariance Matrix Decomposition (CMD) method, so that polarization characteristics may be computed continuously in real time and seismic phases can be identified and discriminated. Visual inspection of the particle motion in orthogonal planes of projection (hodograms) reduces the danger of misinterpretation derived from the application of the polarization filter. The choice of time window and frequency intervals improves the quality of the extracted polarization information. In fact, the program uses a band-pass Butterworth filter to process the signals in the frequency domain by analyzing a selected signal window in a series of narrow frequency bands. Significant results, supported by well-defined polarizations and source azimuth estimates for P and S phases, are also obtained for short-period seismic events (local microearthquakes).
Meta-analysis of the effect of natural frequencies on Bayesian reasoning.
McDowell, Michelle; Jacobs, Perke
2017-12-01
The natural frequency facilitation effect describes the finding that people are better able to solve descriptive Bayesian inference tasks when represented as joint frequencies obtained through natural sampling, known as natural frequencies, than as conditional probabilities. The present meta-analysis reviews 20 years of research seeking to address when, why, and for whom natural frequency formats are most effective. We review contributions from research associated with the 2 dominant theoretical perspectives, the ecological rationality framework and nested-sets theory, and test potential moderators of the effect. A systematic review of relevant literature yielded 35 articles representing 226 performance estimates. These estimates were statistically integrated using a bivariate mixed-effects model that yields summary estimates of average performances across the 2 formats and estimates of the effects of different study characteristics on performance. These study characteristics range from moderators representing individual characteristics (e.g., numeracy, expertise), to methodological differences (e.g., use of incentives, scoring criteria) and features of problem representation (e.g., short menu format, visual aid). Short menu formats (less computationally complex representations showing joint-events) and visual aids demonstrated some of the strongest moderation effects, improving performance for both conditional probability and natural frequency formats. A number of methodological factors (e.g., exposure to both problem formats) were also found to affect performance rates, emphasizing the importance of a systematic approach. We suggest how research on Bayesian reasoning can be strengthened by broadening the definition of successful Bayesian reasoning to incorporate choice and process and by applying different research methodologies.
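A worked example makes the facilitation effect concrete. In conditional-probability format: prevalence 1%, sensitivity 80%, false-positive rate 9.6%; what is P(sick | positive)? In natural frequencies, the same problem becomes a ratio of counts. The numbers below are the classic textbook mammography example, not data from this meta-analysis:

```python
# Out of 1000 people: 10 are sick (1%); 8 of those test positive (80%);
# of the 990 healthy, about 95 test positive (9.6%).
n = 1000
sick = 10
sick_pos = 8
healthy_pos = round(0.096 * (n - sick))   # 95

p_sick_given_pos = sick_pos / (sick_pos + healthy_pos)
print(p_sick_given_pos)   # ~0.078: about 8 of 103 positives are actually sick
```

The natural-frequency version requires no explicit application of Bayes' theorem, which is the computational simplification both theoretical camps agree underlies the effect.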
Brain-computer interface for alertness estimation and improving
NASA Astrophysics Data System (ADS)
Hramov, Alexander; Maksimenko, Vladimir; Hramova, Marina
2018-02-01
Using wavelet analysis of electrical brain activity (EEG) signals, we study the neural processes associated with the perception of visual stimuli. We demonstrate that the brain can process visual stimuli in two scenarios: (i) perception is characterized by destruction of the alpha waves and an increase in high-frequency (beta) activity, or (ii) the beta rhythm is not well pronounced while the alpha-wave energy remains unchanged. Special experiments show that motivation initiates the first scenario, which is explained by increased alertness. Based on the obtained results, we build a brain-computer interface and demonstrate how the degree of alertness can be estimated and controlled in a real experiment.
Structural Inference in the Art of Violin Making.
NASA Astrophysics Data System (ADS)
Morse-Fortier, Leonard Joseph
The "secrets" of success of early Italian violins have long been sought. Among their many efforts to reproduce the results of Stradiveri, Guarneri, and Amati, luthiers have attempted to order and match natural resonant frequencies in the free violin plates. This tap-tone plate tuning technique is simply an eigenvalue extraction scheme. In the final stages of carving, the violin maker complements considerable intuitive knowledge of violin plate structure and of modal attributes with tap-tone frequency estimates to better understand plate structure and to inform decisions about plate carving and completeness. Examining the modal attributes of violin plates, this work develops and incorporates an impulse-response scheme for modal inference, measures resonant frequencies and modeshapes for a pair of violin plates, and presents modeshapes through a unique computer visualization scheme developed specifically for this purpose. The work explores, through simple examples questions of how plate modal attributes reflect underlying structure, and questions about the so -called evolution of modeshapes and frequencies through assembly of the violin. Separately, the work develops computer code for a carved, anisotropic, plate/shell finite element. Solutions are found to the static displacement and free-vibration eigenvalue problems for an orthotropic plate, and used to verify element accuracy. Finally, a violin back plate is modelled with full consideration of plate thickness and arching. Model estimates for modal attributes compare very well against experimentally acquired values. Finally, the modal synthesis technique is applied to predicting the modal attributes of the violin top plate with ribs attached from those of the top plate alone, and with an estimate of rib mass and stiffness. This last analysis serves to verify the modal synthesis method, and to quantify its limits of applicability in attempting to solve problems with severe structural modification. Conclusions emphasize the importance of better understanding the underlying structure, improved understanding of its relationship to modal attributes, and better estimates of wood elasticity.
Maximum Likelihood Estimation of Linkage Disequilibrium in Half-Sib Families
Gomez-Raya, L.
2012-01-01
Maximum likelihood methods for the estimation of linkage disequilibrium between biallelic DNA-markers in half-sib families (half-sib method) are developed for single and multifamily situations. Monte Carlo computer simulations were carried out for a variety of scenarios regarding sire genotypes, linkage disequilibrium, recombination fraction, family size, and number of families. A double heterozygote sire was simulated with recombination fraction of 0.00, linkage disequilibrium among dams of δ = 0.10, and alleles at both markers segregating at intermediate frequencies for a family size of 500. The average estimates of δ were 0.17, 0.25, and 0.10 for Excoffier and Slatkin (1995), maternal informative haplotypes, and the half-sib method, respectively. A multifamily EM algorithm was tested at intermediate frequencies by computer simulation. The range of the absolute difference between estimated and simulated δ was between 0.000 and 0.008. A cattle half-sib family was genotyped with the Illumina 50K BeadChip. There were 314,730 SNP pairs for which the sire was a homo-heterozygote with average estimates of r2 of 0.115, 0.067, and 0.111 for half-sib, Excoffier and Slatkin (1995), and maternal informative haplotypes methods, respectively. There were 208,872 SNP pairs for which the sire was double heterozygote with average estimates of r2 across the genome of 0.100, 0.267, and 0.925 for half-sib, Excoffier and Slatkin (1995), and maternal informative haplotypes methods, respectively. Genome analyses for all possible sire genotypes with 829,042 tests showed that ignoring half-sib family structure leads to upward biased estimates of linkage disequilibrium. Published inferences on population structure and evolution of cattle should be revisited after accommodating existing half-sib family structure in the estimation of linkage disequilibrium. PMID:22377635
Estimation of oxygen consumption during cycling and rowing.
Baig, Dur-e-Zehra; Savkin, Andrey V; Celler, Branko G
2012-01-01
The aim of this paper is to develop estimators that can predict oxygen consumption (VO2) during cycling and rowing exercises by using non-invasive and easily measurable quantities such as heart rate (HR), respiratory rate (RespR), and the frequency of the exercising activity. The frequency of exercise is quantified as a universal measure of exercise intensity and is known as exercise rate (ER). This ER is responsible for deviations of VO2 (ΔVO2), HR (ΔHR), and RespR (ΔRespR) from their respective baseline measurements during exercise. Therefore, ΔVO2 can be estimated from ΔHR, ΔRespR, and ER. The resting value of VO2 is referred to as VO2rest; this is computed from the physical fitness of the individual. The Hammerstein model is adopted for the estimation of ΔVO2. Results in this study demonstrate that the developed estimators for each type of exercise are capable of estimating VO2 by adding VO2rest and ΔVO2 at various intensities during cycling and rowing.
Magnitude and frequency of floods in Washington
Cummans, J.E.; Collings, Michael R.; Nasser, Edmund George
1975-01-01
Relations are provided to estimate the magnitude and frequency of floods on Washington streams. Annual-peak-flow data from stream-gaging stations on unregulated streams having 10 years or more of record were used to determine a log-Pearson Type III frequency curve for each station. Flood magnitudes having recurrence intervals of 2, 5, 10, 25, 50, and 100 years were then related to physical and climatic indices of the drainage basins by multiple-regression analysis using the Biomedical Computer Program BMDO2R. These regression relations are useful for estimating flood magnitudes of the specified recurrence intervals at ungaged or short-record sites. Separate sets of regression equations were defined for the western and eastern parts of the State, and the State was further subdivided into 12 regions in which the annual floods exhibit similar flood characteristics. Peak flows are related most significantly in western Washington to drainage-area size and mean annual precipitation. In eastern Washington, they are related most significantly to drainage-area size, mean annual precipitation, and percentage of forest cover. Standard errors of estimate of the estimating relations range from 25 to 129 percent, and the smallest errors are generally associated with the more humid regions.
Determining XV-15 aeroelastic modes from flight data with frequency-domain methods
NASA Technical Reports Server (NTRS)
Acree, C. W., Jr.; Tischler, Mark B.
1993-01-01
The XV-15 tilt-rotor wing has six major aeroelastic modes that are close in frequency. To precisely excite individual modes during flight test, dual flaperon exciters with automatic frequency-sweep controls were installed. The resulting structural data were analyzed in the frequency domain (Fourier transformed). All spectral data were computed using chirp z-transforms. Modal frequencies and damping were determined by fitting curves to frequency-response magnitude and phase data. The results given in this report are for the XV-15 with its original metal rotor blades. Also, frequency and damping values are compared with theoretical predictions made using two different programs, CAMRAD and ASAP. The frequency-domain data-analysis method proved to be very reliable and adequate for tracking aeroelastic modes during flight-envelope expansion. This approach required less flight-test time and yielded mode estimations that were more repeatable, compared with the exponential-decay method previously used.
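A small sketch of the zoomed spectral analysis this describes, assuming SciPy's chirp z-transform (scipy.signal.czt, available in recent SciPy) and a textbook half-power estimate of modal damping; the test signal and band edges are invented for illustration:

```python
import numpy as np
from scipy.signal import czt

def zoom_spectrum(x, fs, f1, f2, m=2048):
    """Evaluate the z-transform on the unit circle between f1 and f2
    (a zoomed DFT) using the chirp z-transform."""
    step = (f2 - f1) / (m - 1)
    w = np.exp(-2j * np.pi * step / fs)   # per-bin angular step
    a = np.exp(2j * np.pi * f1 / fs)      # starting point on the circle
    f = f1 + step * np.arange(m)
    return f, czt(x, m=m, w=w, a=a)

def half_power_damping(f, mag):
    """Rough modal frequency/damping from one well-resolved resonance:
    zeta ~ bandwidth at |peak|/sqrt(2) divided by twice the peak frequency."""
    k = np.argmax(mag)
    half = mag[k] / np.sqrt(2.0)
    lo = f[:k][mag[:k] <= half][-1]        # last crossing below the peak
    hi = f[k:][mag[k:] <= half][0]         # first crossing above the peak
    return f[k], (hi - lo) / (2.0 * f[k])  # (f_n, zeta)

# toy usage: decaying 6.2 Hz mode sampled at 100 Hz
fs = 100.0
t = np.arange(0, 30, 1 / fs)
x = np.exp(-0.02 * 2 * np.pi * 6.2 * t) * np.sin(2 * np.pi * 6.2 * t)
f, X = zoom_spectrum(x, fs, 5.0, 7.5)
fn, zeta = half_power_damping(f, np.abs(X))
```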
Yun, Sungdae; Kyriakos, Walid E; Chung, Jun-Young; Han, Yeji; Yoo, Seung-Schik; Park, Hyunwook
2007-03-01
To develop a novel approach for calculating the accurate sensitivity profiles of phased-array coils, resulting in correction of nonuniform intensity in parallel MRI. The proposed intensity-correction method estimates the accurate sensitivity profile of each channel of the phased-array coil. The sensitivity profile is estimated by fitting a nonlinear curve to every projection view through the imaged object. The nonlinear curve-fitting efficiently obtains the low-frequency sensitivity profile by eliminating the high-frequency image contents. Filtered back-projection (FBP) is then used to compute the estimates of the sensitivity profile of each channel. The method was applied to both phantom and brain images acquired from the phased-array coil. Intensity-corrected images from the proposed method had more uniform intensity than those obtained by the commonly used sum-of-squares (SOS) approach. With the use of the proposed correction method, the intensity variation was reduced to 6.1% from 13.1% of the SOS. When the proposed approach was applied to the computation of the sensitivity maps during sensitivity encoding (SENSE) reconstruction, it outperformed the SOS approach in terms of the reconstructed image uniformity. The proposed method is more effective at correcting the intensity nonuniformity of phased-array surface-coil images than the conventional SOS method. In addition, the method was shown to be resilient to noise and was successfully applied for image reconstruction in parallel imaging.
Population genetics inference for longitudinally-sampled mutants under strong selection.
Lacerda, Miguel; Seoighe, Cathal
2014-11-01
Longitudinal allele frequency data are becoming increasingly prevalent. Such samples permit statistical inference of the population genetics parameters that influence the fate of mutant variants. To infer these parameters by maximum likelihood, the mutant frequency is often assumed to evolve according to the Wright-Fisher model. For computational reasons, this discrete model is commonly approximated by a diffusion process that requires the assumption that the forces of natural selection and mutation are weak. This assumption is not always appropriate. For example, mutations that impart drug resistance in pathogens may evolve under strong selective pressure. Here, we present an alternative approximation to the mutant-frequency distribution that does not make any assumptions about the magnitude of selection or mutation and is much more computationally efficient than the standard diffusion approximation. Simulation studies are used to compare the performance of our method to that of the Wright-Fisher and Gaussian diffusion approximations. For large populations, our method is found to provide a much better approximation to the mutant-frequency distribution when selection is strong, while all three methods perform comparably when selection is weak. Importantly, maximum-likelihood estimates of the selection coefficient are severely attenuated when selection is strong under the two diffusion models, but not when our method is used. This is further demonstrated with an application to mutant-frequency data from an experimental study of bacteriophage evolution. We therefore recommend our method for estimating the selection coefficient when the effective population size is too large to utilize the discrete Wright-Fisher model. Copyright © 2014 by the Genetics Society of America.
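For context, a minimal simulation of the discrete Wright-Fisher model with selection and drift that the paper approximates; this is a sketch only, not the authors' likelihood machinery, and all parameter values are arbitrary:

```python
import numpy as np

def wright_fisher(n_gen, pop_size, p0, s, mu=0.0, rng=None):
    """Discrete Wright-Fisher trajectory of a mutant allele under
    selection coefficient s and optional one-way mutation rate mu."""
    rng = np.random.default_rng(rng)
    p = np.empty(n_gen + 1)
    p[0] = p0
    for t in range(n_gen):
        w = p[t] * (1 + s) / (p[t] * (1 + s) + (1 - p[t]))  # selection
        w = w + mu * (1 - w)                                # mutation
        p[t + 1] = rng.binomial(pop_size, w) / pop_size     # drift
    return p

# strong selection in a large population: the regime where diffusion
# approximations that assume weak selection break down
traj = wright_fisher(n_gen=50, pop_size=10**6, p0=0.01, s=0.5, rng=1)
```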
Sensitivity of Spacebased Microwave Radiometer Observations to Ocean Surface Evaporation
NASA Technical Reports Server (NTRS)
Liu, Timothy W.; Li, Li
2000-01-01
Ocean surface evaporation and the latent heat it carries are the major components of the hydrologic and thermal forcing on the global oceans. However, there are practically no direct in situ measurements. Evaporation estimated from bulk parameterization methods depends on the quality and distribution of volunteer-ship reports, which are far less than satisfactory. The only way to monitor evaporation with sufficient temporal and spatial resolutions to study global environment changes is by spaceborne sensors. The estimation of seasonal-to-interannual variation of ocean evaporation, using spacebased measurements of wind speed, sea surface temperature (SST), and integrated water vapor through bulk parameterization methods, was achieved with reasonable success over most of the global ocean in the past decade. Because all three geophysical parameters can be retrieved from the radiance at the frequencies measured by the Scanning Multichannel Microwave Radiometer (SMMR) on Nimbus-7, the feasibility of retrieving evaporation directly from the measured radiance was suggested and demonstrated, using coincident brightness temperatures observed by SMMR and latent heat flux computed from ship data, on the monthly time scale. However, the operational microwave radiometers that followed SMMR, the Special Sensor Microwave/Imager (SSM/I), lack the low frequency channels which are sensitive to SST. These low frequency channels are again included in the microwave imager (TMI) of the recently launched Tropical Rain Measuring Mission (TRMM). The radiance at the frequencies observed by both TMI and SSM/I were simulated through an atmospheric radiative transfer model using ocean surface parameters and atmospheric temperature and humidity profiles produced by the reanalysis of the European Center for Medium Range Weather Forecast (ECMWF). From the same ECMWF data set, coincident evaporation is computed using a surface layer turbulent transfer model. The sensitivity of the radiance to evaporation over various seasons and geographic locations is examined. The microwave frequencies whose radiance is significantly correlated with evaporation are identified, and the capability of estimating evaporation directly from TMI is discussed.
Detecting Nano-Scale Vibrations in Rotating Devices by Using Advanced Computational Methods
del Toro, Raúl M.; Haber, Rodolfo E.; Schmittdiel, Michael C.
2010-01-01
This paper presents a computational method for detecting vibrations related to eccentricity in ultra precision rotation devices used for nano-scale manufacturing. The vibration is indirectly measured via a frequency domain analysis of the signal from a piezoelectric sensor attached to the stationary component of the rotating device. The algorithm searches for particular harmonic sequences associated with the eccentricity of the device rotation axis. The detected sequence is quantified and serves as input to a regression model that estimates the eccentricity. A case study presents the application of the computational algorithm during precision manufacturing processes. PMID:22399918
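A toy illustration of the harmonic-sequence search described here, assuming a simple FFT amplitude spectrum and a known nominal rotation frequency; the function harmonic_energy and all parameters are hypothetical, not the paper's algorithm:

```python
import numpy as np

def harmonic_energy(x, fs, f_rot, n_harm=5, tol=0.5):
    """Sum spectral amplitudes near integer multiples of the rotation
    frequency; the total can serve as the input to an eccentricity
    regression model."""
    spec = np.abs(np.fft.rfft(x)) / len(x)
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    total = 0.0
    for k in range(1, n_harm + 1):
        band = np.abs(freqs - k * f_rot) <= tol  # tolerance window (Hz)
        total += spec[band].max()
    return total

# toy signal: rotation at 25 Hz with one harmonic plus sensor noise
fs = 5000.0
t = np.arange(0, 2.0, 1 / fs)
x = (0.8 * np.sin(2 * np.pi * 25 * t)
     + 0.3 * np.sin(2 * np.pi * 50 * t)
     + 0.1 * np.random.default_rng(0).normal(size=t.size))
feature = harmonic_energy(x, fs, f_rot=25.0)
```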
On splice site prediction using weight array models: a comparison of smoothing techniques
NASA Astrophysics Data System (ADS)
Taher, Leila; Meinicke, Peter; Morgenstern, Burkhard
2007-11-01
In most eukaryotic genes, protein-coding exons are separated by non-coding introns which are removed from the primary transcript by a process called "splicing". The positions where introns are cut and exons are spliced together are called "splice sites". Thus, computational prediction of splice sites is crucial for gene finding in eukaryotes. Weight array models are a powerful probabilistic approach to splice site detection. Parameters for these models are usually derived from m-tuple frequencies in trusted training data and subsequently smoothed to avoid zero probabilities. In this study we compare three different ways of parameter estimation for m-tuple frequencies, namely (a) non-smoothed probability estimation, (b) standard pseudo counts and (c) a Gaussian smoothing procedure that we recently developed.
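As a concrete illustration of variant (b), a small sketch of position-dependent m-tuple probability estimation with additive pseudocounts; the sequences and pseudocount value are invented, and setting alpha=0 reproduces the non-smoothed estimates of variant (a):

```python
import numpy as np
from itertools import product

def wam_probs(seqs, m=2, alpha=1.0):
    """Position-dependent m-tuple probabilities with pseudocounts.

    seqs: aligned candidate splice-site windows of equal length.
    alpha: pseudocount added to every m-tuple count (alpha=0 gives the
    non-smoothed estimates, which may contain zero probabilities).
    """
    L = len(seqs[0])
    tuples = ["".join(p) for p in product("ACGT", repeat=m)]
    idx = {t: i for i, t in enumerate(tuples)}
    counts = np.full((L - m + 1, len(tuples)), alpha)
    for s in seqs:
        for pos in range(L - m + 1):
            counts[pos, idx[s[pos:pos + m]]] += 1
    # normalize each position to a probability distribution over m-tuples
    return counts / counts.sum(axis=1, keepdims=True)

probs = wam_probs(["AAGGTAAG", "ACGGTGAG", "AAGGTAAT"], m=2, alpha=1.0)
```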
The measurement of linear frequency drift in oscillators
NASA Astrophysics Data System (ADS)
Barnes, J. A.
1985-04-01
A linear drift in frequency is an important element in most stochastic models of oscillator performance. Quartz crystal oscillators often have drifts in excess of a part in ten to the tenth power per day. Even commercial cesium beam devices often show drifts of a few parts in ten to the thirteenth per year. There are many ways to estimate the drift rates from data samples (e.g., regress the phase on a quadratic; regress the frequency on a linear; compute the simple mean of the first difference of frequency; use Kalman filters with a drift term as one element in the state vector; and others). Although most of these estimators are unbiased, they vary in efficiency (i.e., confidence intervals). Further, the estimation of confidence intervals using the standard analysis of variance (typically associated with the specific estimating technique) can give amazingly optimistic results. The source of these problems is not an error in, say, the regressions techniques, but rather the problems arise from correlations within the residuals. That is, the oscillator model is often not consistent with constraints on the analysis technique or, in other words, some specific analysis techniques are often inappropriate for the task at hand. The appropriateness of a specific analysis technique is critically dependent on the oscillator model and can often be checked with a simple whiteness test on the residuals.
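A compact numerical comparison of three of the drift estimators listed here, under the simplest model of linear drift plus white FM noise (all values arbitrary); for this noise model each estimator is unbiased, while their confidence intervals differ once the residuals are correlated:

```python
import numpy as np

rng = np.random.default_rng(42)
tau, n = 3600.0, 1000          # sample interval (s), number of samples
drift = 1e-14                  # fractional-frequency drift per sample
t = np.arange(n) * tau

# simulated fractional-frequency data: linear drift plus white FM noise
y = drift * t / tau + 1e-13 * rng.normal(size=n)

# (a) regress the frequency on a straight line (slope per sample)
d_regress = np.polyfit(t / tau, y, 1)[0]

# (b) simple mean of the first difference of the frequency
d_meandiff = np.mean(np.diff(y))

# (c) regress the phase on a quadratic: x(t) = integral of y,
#     so x ~ drift * t**2 / (2 * tau) and drift = 2 * c2 * tau
x = np.cumsum(y) * tau
c2 = np.polyfit(t, x, 2)[0]
d_quad = 2.0 * c2 * tau
```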
User's Manual for Program PeakFQ, Annual Flood-Frequency Analysis Using Bulletin 17B Guidelines
Flynn, Kathleen M.; Kirby, William H.; Hummel, Paul R.
2006-01-01
Estimates of flood flows having given recurrence intervals or probabilities of exceedance are needed for design of hydraulic structures and floodplain management. Program PeakFQ provides estimates of instantaneous annual-maximum peak flows having recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years (annual-exceedance probabilities of 0.50, 0.20, 0.10, 0.04, 0.02, 0.01, 0.005, and 0.002, respectively). As implemented in program PeakFQ, the Pearson Type III frequency distribution is fit to the logarithms of instantaneous annual peak flows following Bulletin 17B guidelines of the Interagency Advisory Committee on Water Data. The parameters of the Pearson Type III frequency curve are estimated by the logarithmic sample moments (mean, standard deviation, and coefficient of skewness), with adjustments for low outliers, high outliers, historic peaks, and generalized skew. This documentation provides an overview of the computational procedures in program PeakFQ, provides a description of the program menus, and provides an example of the output from the program.
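A bare-bones sketch of the moment-fitting step, assuming scipy.stats.pearson3 applied to log10 peak flows; unlike PeakFQ it applies no low-outlier, high-outlier, historic-peak, or generalized-skew adjustments, and the peak-flow record is invented:

```python
import numpy as np
from scipy.stats import pearson3, skew as samp_skew

def lp3_quantiles(peaks, probs):
    """Fit a log-Pearson Type III distribution to annual peaks by the
    method of (logarithmic) sample moments and return flood quantiles
    for the given annual exceedance probabilities."""
    logs = np.log10(peaks)
    m = logs.mean()
    s = logs.std(ddof=1)
    g = samp_skew(logs, bias=False)   # sample coefficient of skewness
    return 10 ** pearson3.ppf(1.0 - np.asarray(probs), g, loc=m, scale=s)

peaks = np.array([3200, 4100, 2800, 5100, 3900, 6100, 2500,
                  4700, 3600, 5900, 4200, 3100, 7300, 3800])  # cfs, toy record
q = lp3_quantiles(peaks, probs=[0.50, 0.10, 0.02, 0.01])      # 2-, 10-, 50-, 100-yr
```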
Multi-ball and one-ball geolocation and location verification
NASA Astrophysics Data System (ADS)
Nelson, D. J.; Townsend, J. L.
2017-05-01
We present analysis methods that may be used to geolocate emitters using one or more moving receivers. While some of the methods we present may apply to a broader class of signals, our primary interest is locating and tracking ships from short pulsed transmissions, such as the maritime Automatic Identification System (AIS). The AIS signal is difficult to process and track since the pulse duration is only 25 milliseconds, and the pulses may only be transmitted every six to ten seconds. Several fundamental problems are addressed, including demodulation of AIS/GMSK signals, verification of the emitter location, accurate frequency and delay estimation, and identification of pulse trains from the same emitter. In particular, we present several new correlation methods, including cross-cross correlation, which greatly improves correlation accuracy over conventional methods, and cross-TDOA and cross-FDOA functions that make it possible to estimate time and frequency delay without the need of computing a two-dimensional cross-ambiguity surface. By isolating pulses from the same emitter and accurately tracking the received signal frequency, we are able to accurately estimate the emitter location from the received Doppler characteristics.
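A minimal sketch of conventional delay estimation for such pulses (not the paper's cross-correlation variants): cross-correlate two received copies and refine the peak by parabolic interpolation; the signals and offsets are invented:

```python
import numpy as np

def tdoa(x1, x2, fs):
    """Coarse time-difference-of-arrival between two receivers from the
    peak of the cross-correlation, refined by parabolic interpolation."""
    r = np.correlate(x1, x2, mode="full")
    k = np.argmax(np.abs(r))
    # three-point parabolic refinement around the peak
    y0, y1, y2 = np.abs(r[k - 1:k + 2])
    delta = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
    lag = k - (len(x2) - 1) + delta
    return lag / fs

# toy: a 25 ms pulse received with a 40-sample relative offset
fs = 48000.0
rng = np.random.default_rng(3)
pulse = rng.normal(size=int(0.025 * fs))
x1 = np.concatenate([np.zeros(100), pulse]) + 0.05 * rng.normal(size=100 + pulse.size)
x2 = np.concatenate([np.zeros(60), pulse, np.zeros(40)]) + 0.05 * rng.normal(size=100 + pulse.size)
delay = tdoa(x1, x2, fs)   # approximately +40/fs seconds
```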
Ming, Y; Peiwen, Q
2001-03-01
The understanding of ultrasonic motor performance as a function of input parameters, such as the voltage amplitude, driving frequency, and the preload on the rotor, is key to many applications and to the control of ultrasonic motors. This paper presents performance estimation of the piezoelectric rotary traveling wave ultrasonic motor as a function of input voltage amplitude, driving frequency, and preload. The Love equation is used to derive the traveling wave amplitude on the stator surface. With the contact model of the distributed spring-rigid body between the stator and rotor, a two-dimensional analytical model of the rotary traveling wave ultrasonic motor is constructed. Then the performances of steady rotation speed and stall torque are deduced. With the MATLAB computational language and an iteration algorithm, we estimate the performance of rotation speed and stall torque versus the input parameters, respectively. The same experiments are completed with an optoelectronic tachometer and stand weight. Both estimation and experiment results reveal the pattern of performance variation as a function of the input parameters.
NASA Astrophysics Data System (ADS)
Fan, Tong-liang; Wen, Yu-cang; Kadri, Chaibou
Orthogonal frequency-division multiplexing (OFDM) is robust against frequency selective fading because of the increase of the symbol duration. However, the time-varying nature of the channel causes inter-carrier interference (ICI), which destroys the orthogonality of sub-carriers and degrades the system performance severely. To alleviate the detrimental effect of ICI, there is a need for ICI mitigation within one OFDM symbol. We propose an iterative inter-carrier interference (ICI) estimation and cancellation technique for OFDM systems based on regularized constrained total least squares. In the proposed scheme, ICI is not treated as additional additive white Gaussian noise (AWGN). The effect of ICI and inter-symbol interference (ISI) on channel estimation is regarded as a perturbation of the channel. We propose a novel algorithm for channel estimation based on regularized constrained total least squares. Computer simulations show that significant improvement can be obtained by the proposed scheme in fast fading channels.
Wideband Direction of Arrival Estimation in the Presence of Unknown Mutual Coupling
Li, Weixing; Zhang, Yue; Lin, Jianzhi; Guo, Rui; Chen, Zengping
2017-01-01
This paper investigates a subarray based algorithm for direction of arrival (DOA) estimation of a wideband uniform linear array (ULA) in the presence of frequency-dependent mutual coupling effects. Based on the Toeplitz structure of mutual coupling matrices, the whole array is divided into the middle subarray and the auxiliary subarray. Then two-sided correlation transformation is applied to the correlation matrix of the middle subarray instead of the whole array. In this way, the mutual coupling effects can be eliminated. Finally, the multiple signal classification (MUSIC) method is utilized to derive the DOAs. For the condition in which blind angles exist, we refine the DOA estimation by using a simple approach based on the frequency-dependent mutual coupling matrices (MCMs). The proposed method can achieve high estimation accuracy without any calibration sources. It has a low computational complexity because iterative processing is not required. Simulation results validate the effectiveness and feasibility of the proposed algorithm. PMID:28178177
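For reference, a minimal narrowband MUSIC sketch for a ULA, the core spectral step that the wideband scheme reduces to after its focusing transformation; it ignores mutual coupling entirely, and the array, sources, and noise levels are invented:

```python
import numpy as np
from scipy.signal import find_peaks

def music_doa(X, n_src, n_grid=721, d=0.5):
    """Narrowband MUSIC for a ULA with element spacing d in wavelengths.
    X: snapshot matrix of shape (n_elements, n_snapshots)."""
    M = X.shape[0]
    R = X @ X.conj().T / X.shape[1]        # sample covariance
    _, vecs = np.linalg.eigh(R)            # eigenvalues in ascending order
    En = vecs[:, :M - n_src]               # noise subspace
    angles = np.linspace(-90, 90, n_grid)
    m = np.arange(M)[:, None]
    A = np.exp(2j * np.pi * d * m * np.sin(np.radians(angles)))
    P = 1.0 / np.linalg.norm(En.conj().T @ A, axis=0) ** 2
    return angles, P

# toy: two sources at -20 and 35 degrees, 8-element half-wavelength ULA
rng = np.random.default_rng(0)
M, N = 8, 200
doas = np.radians([-20.0, 35.0])
A = np.exp(2j * np.pi * 0.5 * np.arange(M)[:, None] * np.sin(doas))
S = (rng.normal(size=(2, N)) + 1j * rng.normal(size=(2, N))) / np.sqrt(2)
X = A @ S + 0.1 * (rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N)))
angles, P = music_doa(X, n_src=2)
pk, _ = find_peaks(P)
est = angles[pk[np.argsort(P[pk])[-2:]]]   # two strongest spectrum peaks
```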
Image Processing and Computer Aided Diagnosis in Computed Tomography of the Breast
2007-10-01
Brian Harrawood, Ronald Pedroni, Alexander Crowell, Robert Macri, Mathew Kiser, Richard Walter, Werner Tornow, Neutron Stimulated Emission... The MLE estimate is known to increase high frequency image noise. To overcome this, some... Contrast-to-noise ratio results for the three images shown in Figure 5: RSF 11% (with grid), 45% (w/o grid), 10% (w/o grid, scatter reduction); CNR 7.04 (with grid), 6.99 (w/o grid).
Algorithms for the detection of chewing behavior in dietary monitoring applications
NASA Astrophysics Data System (ADS)
Schmalz, Mark S.; Helal, Abdelsalam; Mendez-Vasquez, Andres
2009-08-01
The detection of food consumption is key to the implementation of successful behavior modification in support of dietary monitoring and therapy, for example, during the course of controlling obesity, diabetes, or cardiovascular disease. Since the vast majority of humans consume food via mastication (chewing), we have designed an algorithm that automatically detects chewing behaviors in surveillance video of a person eating. Our algorithm first detects the mouth region, then computes the spatiotemporal frequency spectrum of a small perioral region (including the mouth). Spectral data are analyzed to determine the presence of periodic motion that characterizes chewing. A classifier is then applied to discriminate different types of chewing behaviors. Our algorithm was tested on seven volunteers, whose behaviors included chewing with mouth open, chewing with mouth closed, talking, static face presentation (control case), and moving face presentation. Early test results show that the chewing behaviors induce a temporal frequency peak at 0.5 Hz to 2.5 Hz, which is readily detected using a distance-based classifier. Computational cost is analyzed for implementation on embedded processing nodes, for example, in a healthcare sensor network. Complexity analysis emphasizes the relationship between the work and space estimates of the algorithm, and its estimated error. It is shown that chewing detection is possible within a computationally efficient, accurate, and subject-independent framework.
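A toy sketch of the band-limited periodicity check described here, assuming a 1-D motion signal extracted from the perioral region and a Welch power spectrum; the 30 fps rate, signals, and chewing_score feature are invented:

```python
import numpy as np
from scipy.signal import welch

def chewing_score(motion, fs, band=(0.5, 2.5)):
    """Welch PSD of a perioral-motion signal; returns the fraction of
    power inside the chewing band, a simple feature a distance-based
    classifier could threshold."""
    f, p = welch(motion, fs=fs, nperseg=min(len(motion), 256))
    in_band = (f >= band[0]) & (f <= band[1])
    return p[in_band].sum() / p.sum()

# toy: 1.5 Hz chewing motion versus random facial motion, 30 fps video
fs = 30.0
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(1)
chew = np.sin(2 * np.pi * 1.5 * t) + 0.3 * rng.normal(size=t.size)
idle = rng.normal(size=t.size)
s_chew = chewing_score(chew, fs)   # near 1: power concentrated in band
s_idle = chewing_score(idle, fs)   # small: power spread across spectrum
```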
Power allocation and range performance considerations for a dual-frequency EBPSK/MPPSK system
NASA Astrophysics Data System (ADS)
Yao, Yu; Wu, Lenan; Zhao, Junhui
2017-12-01
Extended binary phase shift keying/M-ary position phase shift keying (EBPSK/MPPSK)-MODEM provides radar and communication functions on a single hardware platform with a single waveform. However, its range estimation accuracy is worse than that of continuous-wave (CW) radar because of the imbalance of power between the two carrier frequencies. In this article, a power allocation method for dual-frequency EBPSK/MPPSK modulated systems is presented. The power of the two signal transmitters is allocated to ensure that the power in the two carrier frequencies is equal. The power allocation ratios for two types of modulation systems are obtained. Moreover, considerations regarding the range of operation of the dual-frequency system are analysed. In addition to theoretical considerations, computer simulations are provided to illustrate the performance.
Weaver, J. Curtis; Feaster, Toby D.; Gotvald, Anthony J.
2009-01-01
Reliable estimates of the magnitude and frequency of floods are required for the economical and safe design of transportation and water-conveyance structures. A multistate approach was used to update methods for estimating the magnitude and frequency of floods in rural, ungaged basins in North Carolina, South Carolina, and Georgia that are not substantially affected by regulation, tidal fluctuations, or urban development. In North Carolina, annual peak-flow data through September 2006 were available for 584 sites; 402 of these sites had a total of 10 or more years of systematic record that is required for at-site flood-frequency analysis. Following data reviews and the computation of 20 physical and climatic basin characteristics for each station as well as at-site flood-frequency statistics, annual peak-flow data were identified for 363 sites in North Carolina suitable for use in this analysis. Among these 363 sites, 19 sites had records that could be divided into unregulated and regulated/channelized annual peak discharges, which means peak-flow records were identified for a total of 382 cases in North Carolina. Considering the 382 cases, at-site flood-frequency statistics are provided for 333 unregulated cases (also used for the regression database) and 49 regulated/channelized cases. The flood-frequency statistics for the 333 unregulated sites were combined with data for sites from South Carolina, Georgia, and adjacent parts of Alabama, Florida, Tennessee, and Virginia to create a database of 943 sites considered for use in the regional regression analysis. Flood-frequency statistics were computed by fitting logarithms (base 10) of the annual peak flows to a log-Pearson Type III distribution. As part of the computation process, a new generalized skew coefficient was developed by using a Bayesian generalized least-squares regression model. Exploratory regression analyses using ordinary least-squares regression completed on the initial database of 943 sites resulted in defining five hydrologic regions for North Carolina, South Carolina, and Georgia. Stations with drainage areas less than 1 square mile were removed from the database, and a procedure to examine for basin redundancy (based on drainage area and periods of record) also resulted in the removal of some stations from the regression database. Flood-frequency estimates and basin characteristics for 828 gaged stations were combined to form the final database that was used in the regional regression analysis. Regional regression analysis, using generalized least-squares regression, was used to develop a set of predictive equations that can be used for estimating the 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent chance exceedance flows for rural, ungaged basins in North Carolina, South Carolina, and Georgia. The final predictive equations are all functions of drainage area and the percentage of drainage basin within each of the five hydrologic regions. Average errors of prediction for these regression equations range from 34.0 to 47.7 percent. Discharge estimates determined from the systematic records for the current study are, on average, larger in magnitude than those from a previous study for the highest percent chance exceedances (50 and 20 percent) and tend to be smaller than those from the previous study for the lower percent chance exceedances when all sites are considered as a group.
For example, mean differences for sites in the Piedmont hydrologic region range from positive 0.5 percent for the 50-percent chance exceedance flow to negative 4.6 percent for the 0.2-percent chance exceedance flow when stations are grouped by hydrologic region. Similarly for the same hydrologic region, median differences range from positive 0.9 percent for the 50-percent chance exceedance flow to negative 7.1 percent for the 0.2-percent chance exceedance flow. However, mean and median percentage differences between the estimates from the previous and curre
Computer considerations for real time simulation of a generalized rotor model
NASA Technical Reports Server (NTRS)
Howe, R. M.; Fogarty, L. E.
1977-01-01
Scaled equations were developed to meet requirements for real time computer simulation of the rotor system research aircraft. These equations form the basis for consideration of both digital and hybrid mechanization for real time simulation. For all-digital simulation, estimates of the required speed in terms of equivalent operations per second are developed based on the complexity of the equations and the required integration frame rates. For both conventional hybrid simulation and hybrid simulation using time-shared analog elements, the amount of required equipment is estimated, along with a consideration of the dynamic errors. Conventional hybrid mechanization using analog simulation of those rotor equations which involve rotor-spin frequencies (these constitute the bulk of the equations) requires too much analog equipment. Hybrid simulation using time-sharing techniques for the analog elements appears possible with a reasonable amount of analog equipment. All-digital simulation with affordable general-purpose computers is not possible because of speed limitations, but specially configured digital computers do have the required speed and constitute the recommended approach.
Asquith, William H.; Roussel, Meghan C.
2009-01-01
Annual peak-streamflow frequency estimates are needed for flood-plain management; for objective assessment of flood risk; for cost-effective design of dams, levees, and other flood-control structures; and for design of roads, bridges, and culverts. Annual peak-streamflow frequency represents the peak streamflow for nine recurrence intervals of 2, 5, 10, 25, 50, 100, 200, 250, and 500 years. Common methods for estimation of peak-streamflow frequency for ungaged or unmonitored watersheds are regression equations for each recurrence interval developed for one or more regions; such regional equations are the subject of this report. The method is based on analysis of annual peak-streamflow data from U.S. Geological Survey streamflow-gaging stations (stations). Beginning in 2007, the U.S. Geological Survey, in cooperation with the Texas Department of Transportation and in partnership with Texas Tech University, began a 3-year investigation concerning the development of regional equations to estimate annual peak-streamflow frequency for undeveloped watersheds in Texas. The investigation focuses primarily on 638 stations with 8 or more years of data from undeveloped watersheds and other criteria. The general approach is explicitly limited to the use of L-moment statistics, which are used in conjunction with a technique of multi-linear regression referred to as PRESS minimization. The approach used to develop the regional equations, which was refined during the investigation, is referred to as the 'L-moment-based, PRESS-minimized, residual-adjusted approach'. For the approach, seven unique distributions are fit to the sample L-moments of the data for each of 638 stations, and trimmed means of the seven results of the distributions for each recurrence interval are used to define the station-specific peak-streamflow frequency. As a first iteration of regression, nine weighted-least-squares, PRESS-minimized, multi-linear regression equations are computed using the watershed characteristics of drainage area, dimensionless main-channel slope, and mean annual precipitation. The residuals of the nine equations are spatially mapped, and residuals for the 10-year recurrence interval are selected for generalization to 1-degree latitude and longitude quadrangles. The generalized residual is referred to as the OmegaEM parameter and represents a generalized terrain and climate index that expresses peak-streamflow potential not otherwise represented in the three watershed characteristics. The OmegaEM parameter was assigned to each station, and using OmegaEM, nine additional regression equations are computed. Because of favorable diagnostics, the OmegaEM equations are expected to be generally reliable estimators of peak-streamflow frequency for undeveloped and ungaged stream locations in Texas. The mean residual standard error, adjusted R-squared, and percentage reduction of PRESS by use of OmegaEM are 0.30 log10, 0.86, and -21 percent, respectively. Inclusion of the OmegaEM parameter provides a substantial reduction in the PRESS statistic of the regression equations and removes considerable spatial dependency in regression residuals. Although the OmegaEM parameter requires interpretation on the part of analysts and the potential exists that different analysts could estimate different values for a given watershed, the authors suggest that typical uncertainty in the OmegaEM estimate might be about ±0.10 log10.
Finally, given the two ensembles of equations reported herein and those in previous reports, hydrologic design engineers and other analysts have several different methods, which represent different analytical tracks, to make comparisons of peak-streamflow frequency estimates for ungaged watersheds in the study area.
A Carrier Estimation Method Based on MLE and KF for Weak GNSS Signals.
Zhang, Hongyang; Xu, Luping; Yan, Bo; Zhang, Hua; Luo, Liyan
2017-06-22
Maximum likelihood estimation (MLE) has been researched for some acquisition and tracking applications of global navigation satellite system (GNSS) receivers and shows high performance. However, all current methods are derived and operated based on the sampling data, which results in a large computation burden. This paper proposes a low-complexity MLE carrier tracking loop for weak GNSS signals which processes the coherent integration results instead of the sampling data. First, the cost function of the MLE of signal parameters such as signal amplitude, carrier phase, and Doppler frequency are used to derive a MLE discriminator function. The optimal value of the cost function is searched by an efficient Levenberg-Marquardt (LM) method iteratively. Its performance including Cramér-Rao bound (CRB), dynamic characteristics and computation burden are analyzed by numerical techniques. Second, an adaptive Kalman filter is designed for the MLE discriminator to obtain smooth estimates of carrier phase and frequency. The performance of the proposed loop, in terms of sensitivity, accuracy and bit error rate, is compared with conventional methods by Monte Carlo (MC) simulations both in pedestrian-level and vehicle-level dynamic circumstances. Finally, an optimal loop which combines the proposed method and conventional method is designed to achieve the optimal performance both in weak and strong signal circumstances.
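A small sketch of the parameter-search step, assuming the coherent-integration outputs are modeled as a noisy complex sinusoid in amplitude, phase, and Doppler and fitted with SciPy's Levenberg-Marquardt driver; the data and starting point are invented, and this is not the paper's discriminator or Kalman loop:

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(theta, t, z):
    """Model the coherent-integration outputs as a complex sinusoid with
    amplitude a, carrier phase phi, and Doppler frequency fd (Hz)."""
    a, phi, fd = theta
    model = a * np.exp(1j * (phi + 2 * np.pi * fd * t))
    r = z - model
    return np.concatenate([r.real, r.imag])  # LM needs a real vector

# toy: 20 coherent integrations of 20 ms each
t = np.arange(20) * 0.02
rng = np.random.default_rng(7)
z = 1.3 * np.exp(1j * (0.4 + 2 * np.pi * 10.5 * t))
z += 0.2 * (rng.normal(size=t.size) + 1j * rng.normal(size=t.size))

# Levenberg-Marquardt iteration from a nearby starting point
fit = least_squares(residuals, x0=[1.0, 0.0, 10.0], args=(t, z), method="lm")
a_hat, phi_hat, fd_hat = fit.x
```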
Louwerse, Max M; Benesh, Nick
2012-01-01
Spatial mental representations can be derived from linguistic and non-linguistic sources of information. This study tested whether these representations could be formed from statistical linguistic frequencies of city names, and to what extent participants differed in their performance when they estimated spatial locations from language or maps. In a computational linguistic study, we demonstrated that co-occurrences of cities in Tolkien's Lord of the Rings trilogy and The Hobbit predicted the authentic longitude and latitude of those cities in Middle Earth. In a human study, we showed that human spatial estimates of the location of cities were very similar regardless of whether participants read Tolkien's texts or memorized a map of Middle Earth. However, text-based location estimates obtained from statistical linguistic frequencies better predicted the human text-based estimates than the human map-based estimates. These findings suggest that language encodes spatial structure of cities, and that human cognitive map representations can come from implicit statistical linguistic patterns, from explicit non-linguistic perceptual information, or from both. Copyright © 2012 Cognitive Science Society, Inc.
Certainty Equivalence M-MRAC for Systems with Unmatched Uncertainties
NASA Technical Reports Server (NTRS)
Stepanyan, Vahram; Krishnakumar, Kalmanje
2012-01-01
The paper presents a certainty equivalence state feedback indirect adaptive control design method for the systems of any relative degree with unmatched uncertainties. The approach is based on the parameter identification (estimation) model, which is completely separated from the control design and is capable of producing parameter estimates as fast as the computing power allows without generating high frequency oscillations. It is shown that the system's input and output tracking errors can be systematically decreased by the proper choice of the design parameters.
The MusIC method: a fast and quasi-optimal solution to the muscle forces estimation problem.
Muller, A; Pontonnier, C; Dumont, G
2018-02-01
The present paper aims at presenting a fast and quasi-optimal method of muscle forces estimation: the MusIC method. It consists of interpolating a first estimate from a database generated offline with a classical optimization problem, and then correcting it to respect the motion dynamics. Three different cost functions - two polynomial criteria and a min/max criterion - were tested on a planar musculoskeletal model. The MusIC method provides a computation frequency approximately 10 times higher than that of a classical optimization approach, with a relative mean error of 4% on cost function evaluation.
Unsteady wind loads for TMT: replacing parametric models with CFD
NASA Astrophysics Data System (ADS)
MacMartin, Douglas G.; Vogiatzis, Konstantinos
2014-08-01
Unsteady wind loads due to turbulence inside the telescope enclosure result in image jitter and higher-order image degradation due to M1 segment motion. Advances in computational fluid dynamics (CFD) allow unsteady simulations of the flow around realistic telescope geometry, in order to compute the unsteady forces due to wind turbulence. These simulations can then be used to understand the characteristics of the wind loads. Previous estimates used a parametric model based on a number of assumptions about the wind characteristics, such as a von Karman spectrum and frozen-flow turbulence across M1, and relied on CFD only to estimate parameters such as mean wind speed and turbulent kinetic energy. Using the CFD-computed forces avoids the need for assumptions regarding the flow. We discuss here both the loads on the telescope that lead to image jitter, and the spatially-varying force distribution across the primary mirror, using simulations with the Thirty Meter Telescope (TMT) geometry. The amplitude, temporal spectrum, and spatial distribution of wind disturbances are all estimated; these are then used to compute the resulting image motion and degradation. There are several key differences relative to our earlier parametric model. First, the TMT enclosure provides sufficient wind reduction at the top end (near M2) to render the larger cross-sectional structural areas further inside the enclosure (including M1) significant in determining the overall image jitter. Second, the temporal spectrum is not von Karman as the turbulence is not fully developed; this applies both in predicting image jitter and M1 segment motion. And third, for loads on M1, the spatial characteristics are not consistent with propagating a frozen-flow turbulence screen across the mirror: Frozen flow would result in a relationship between temporal frequency content and spatial frequency content that does not hold in the CFD predictions. Incorporating the new estimates of wind load characteristics into TMT response predictions leads to revised estimates of the response of TMT to wind turbulence, and validates the aerodynamic design of the enclosure.
Estimation of the vortex length scale and intensity from two-dimensional samples
NASA Technical Reports Server (NTRS)
Reuss, D. L.; Cheng, W. P.
1992-01-01
A method is proposed for estimating flow features that influence flame wrinkling in reciprocating internal combustion engines, where traditional statistical measures of turbulence are suspect. Candidate methods were tested in a computed channel flow where traditional turbulence measures are valid and performance can be rationally evaluated. Two concepts are tested. First, spatial filtering is applied to the two-dimensional velocity distribution and found to reveal structures corresponding to the vorticity field. Decreasing the spatial-frequency cutoff of the filter locally changes the character and size of the flow structures that are revealed by the filter. Second, the vortex length scale and intensity are estimated by computing the ensemble-average velocity distribution conditionally sampled on the vorticity peaks. The resulting conditionally sampled 'average vortex' has a peak velocity less than half the rms velocity and a size approximately equal to the two-point-correlation integral-length scale.
NASA Astrophysics Data System (ADS)
Pioldi, Fabio; Rizzi, Egidio
2017-07-01
Output-only structural identification is developed by a refined Frequency Domain Decomposition ( rFDD) approach, towards assessing current modal properties of heavy-damped buildings (in terms of identification challenge), under strong ground motions. Structural responses from earthquake excitations are taken as input signals for the identification algorithm. A new dedicated computational procedure, based on coupled Chebyshev Type II bandpass filters, is outlined for the effective estimation of natural frequencies, mode shapes and modal damping ratios. The identification technique is also coupled with a Gabor Wavelet Transform, resulting in an effective and self-contained time-frequency analysis framework. Simulated response signals generated by shear-type frames (with variable structural features) are used as a necessary validation condition. In this context use is made of a complete set of seismic records taken from the FEMA P695 database, i.e. all 44 "Far-Field" (22 NS, 22 WE) earthquake signals. The modal estimates are statistically compared to their target values, proving the accuracy of the developed algorithm in providing prompt and accurate estimates of all current strong ground motion modal parameters. At this stage, such analysis tool may be employed for convenient application in the realm of Earthquake Engineering, towards potential Structural Health Monitoring and damage detection purposes.
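As a small illustration of the filtering ingredient named here, a zero-phase Chebyshev Type II band-pass built with scipy.signal.cheby2; the single band, order, attenuation, and test signal are invented, and the actual rFDD procedure couples a bank of such filters:

```python
import numpy as np
from scipy.signal import cheby2, sosfiltfilt

def bandpass_cheby2(x, fs, f_lo, f_hi, order=8, stop_atten_db=40.0):
    """Zero-phase Chebyshev Type II band-pass: a plausible single-band
    stand-in for one element of the coupled filter bank."""
    sos = cheby2(order, stop_atten_db, [f_lo, f_hi], btype="bandpass",
                 fs=fs, output="sos")
    return sosfiltfilt(sos, x)   # forward-backward filtering: zero phase

# toy: isolate a 1.2 Hz fundamental from a noisy seismic response
fs = 100.0
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(5)
resp = (np.sin(2 * np.pi * 1.2 * t) + 0.5 * np.sin(2 * np.pi * 3.8 * t)
        + 0.4 * rng.normal(size=t.size))
mode1 = bandpass_cheby2(resp, fs, 0.9, 1.5)
```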
Psychometric functions for pure-tone frequency discrimination.
Dai, Huanping; Micheyl, Christophe
2011-07-01
The form of the psychometric function (PF) for auditory frequency discrimination is of theoretical interest and practical importance. In this study, PFs for pure-tone frequency discrimination were measured for several standard frequencies (200-8000 Hz) and levels [35-85 dB sound pressure level (SPL)] in normal-hearing listeners. The proportion-correct data were fitted using a cumulative-Gaussian function of the sensitivity index, d', computed as a power transformation of the frequency difference, Δf. The exponent of the power function corresponded to the slope of the PF on log(d')-log(Δf) coordinates. The influence of attentional lapses on PF-slope estimates was investigated. When attentional lapses were not taken into account, the estimated PF slopes on log(d')-log(Δf) coordinates were found to be significantly lower than 1, suggesting a nonlinear relationship between d' and Δf. However, when lapse rate was included as a free parameter in the fits, PF slopes were found not to differ significantly from 1, consistent with a linear relationship between d' and Δf. This was the case across the wide ranges of frequencies and levels tested in this study. Therefore, spectral and temporal models of frequency discrimination must account for a linear relationship between d' and Δf across a wide range of frequencies and levels. © 2011 Acoustical Society of America
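A hedged sketch of such a fit, assuming a two-interval forced-choice link P(c) = Φ(d'/√2) with d' = (Δf/α)^β and a free lapse rate; the data points are invented and the link function is an assumption, not necessarily the authors' exact formulation:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def pc_2afc(df, alpha, beta, lam):
    """Proportion correct in 2AFC: d' is a power transformation of the
    frequency difference df, and a lapse rate lam mixes in chance (0.5)."""
    dprime = (df / alpha) ** beta
    return lam * 0.5 + (1 - lam) * norm.cdf(dprime / np.sqrt(2.0))

# toy data: frequency difference in Hz, proportion correct per point
df = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
pc = np.array([0.55, 0.63, 0.75, 0.88, 0.94, 0.96])
popt, _ = curve_fit(pc_2afc, df, pc, p0=[2.0, 1.0, 0.02],
                    bounds=([1e-3, 0.2, 0.0], [50.0, 3.0, 0.1]))
alpha_hat, beta_hat, lam_hat = popt   # beta near 1: d' linear in df
```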
Unified tensor model for space-frequency spreading-multiplexing (SFSM) MIMO communication systems
NASA Astrophysics Data System (ADS)
de Almeida, André LF; Favier, Gérard
2013-12-01
This paper presents a unified tensor model for space-frequency spreading-multiplexing (SFSM) multiple-input multiple-output (MIMO) wireless communication systems that combine space- and frequency-domain spreadings, followed by a space-frequency multiplexing. Spreading across space (transmit antennas) and frequency (subcarriers) adds resilience against deep channel fades and provides space and frequency diversities, while orthogonal space-frequency multiplexing enables multi-stream transmission. We adopt a tensor-based formulation for the proposed SFSM MIMO system that incorporates space, frequency, time, and code dimensions by means of the parallel factor model. The developed SFSM tensor model unifies the tensorial formulation of some existing multiple-access/multicarrier MIMO signaling schemes as special cases, while revealing interesting tradeoffs due to combined space, frequency, and time diversities which are of practical relevance for joint symbol-channel-code estimation. The performance of the proposed SFSM MIMO system using either a zero forcing receiver or a semi-blind tensor-based receiver is illustrated by means of computer simulation results under realistic channel and system parameters.
Comparisons of primary frequency standards via GPS.
NASA Astrophysics Data System (ADS)
Uhrich, P. J.-M.
The new primary frequency standard of the BNM-LPTF, LPTF FO1, exhibits a frequency accuracy estimated at 3×10-15. Comparison with other primary frequency standards therefore requires a method whose stability remains better than 10-15 between ten hours, during which the standard generally remains in continuous operation, and a couple of days, over which the local oscillator against which LPTF FO1 is estimated keeps its frequency at a level of 2×10-15. The well known GPS common-view method no longer suffices when using a single channel receiver: the clock comparison measurements exhibit a frequency stability of a few parts in 10-14 over one day, depending on the distance between the clocks, and the intrinsic best stability level limited by the GPS signal currently used can be calculated at 7.7×10-15. But it can be shown that a 4-channel receiver, performing as many regular common views as possible over each day, would allow 10-15 to be reached on actual measurements. That should also be the case for another option: the use of the carrier phase of the GPS signal, associated with global geodetic computing.
Diffraction studies applicable to 60-foot microwave research facilities
NASA Technical Reports Server (NTRS)
Schmidt, R. F.
1973-01-01
The principal features of this document are the analysis of a large dual-reflector antenna system by vector Kirchhoff theory, the evaluation of subreflector aperture-blocking, determination of the diffraction and blockage effects of a subreflector mounting structure, and an estimate of strut-blockage effects. Most of the computations are for a frequency of 15.3 GHz, and were carried out using the IBM 360/91 and 360/95 systems at Goddard Space Flight Center. The FORTRAN 4 computer program used to perform the computations is of a general and modular type so that various system parameters such as frequency, eccentricity, diameter, focal-length, etc. can be varied at will. The parameters of the 60-foot NRL Ku-band installation at Waldorf, Maryland, were entered into the program for purposes of this report. Similar calculations could be performed for the NELC installation at La Posta, California, the NASA Wallops Station facility in Virginia, and other antenna systems, by a simple change in IBM control cards. A comparison is made between secondary radiation patterns of the NRL antenna measured by DOD Satellite and those obtained by analytical/numerical methods at a frequency of 7.3 GHz.
Spatial frequency spectrum of the x-ray scatter distribution in CBCT projections
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bootsma, G. J.; Verhaegen, F.; Department of Oncology, Medical Physics Unit, McGill University, Montreal, Quebec H3G 1A4
2013-11-15
Purpose: X-ray scatter is a source of significant image quality loss in cone-beam computed tomography (CBCT). The use of Monte Carlo (MC) simulations separating primary and scattered photons has allowed the structure and nature of the scatter distribution in CBCT to become better elucidated. This work seeks to quantify the structure and determine a suitable basis function for the scatter distribution by examining its spectral components using Fourier analysis. Methods: The scatter distribution projection data were simulated using a CBCT MC model based on the EGSnrc code. CBCT projection data, with separated primary and scatter signal, were generated for a 30.6 cm diameter water cylinder [single angle projection with varying axis-to-detector distance (ADD) and bowtie filters] and two anthropomorphic phantoms (head and pelvis, 360 projections sampled every 1°, with and without a compensator). The Fourier transform of the resulting scatter distributions was computed and analyzed both qualitatively and quantitatively. A novel metric called the scatter frequency width (SFW) is introduced to determine the scatter distribution's frequency content. The frequency content results are used to determine a set of basis functions, consisting of low-frequency sine and cosine functions, to fit and denoise the scatter distribution generated from MC simulations using a reduced number of photons and projections. The signal recovery is implemented using Fourier filtering (low-pass Butterworth filter) and interpolation. Estimates of the scatter distribution are used to correct and reconstruct simulated projections. Results: The spatial and angular frequencies are contained within a maximum frequency of 0.1 cm⁻¹ and 7/(2π) rad⁻¹ for the imaging scenarios examined, with these values varying depending on the object and imaging setup (e.g., ADD and compensator). These data indicate spatial and angular sampling every 5 cm and π/7 rad (∼25°) can be used to properly capture the scatter distribution, with reduced sampling possible depending on the imaging scenario. Using a low-pass Butterworth filter, tuned with the SFW values, to denoise the scatter projection data generated from MC simulations using 10⁶ photons resulted in an error reduction of greater than 85% for estimating scatter in single and multiple projections. Analysis showed that the use of a compensator helped reduce the error in estimating the scatter distribution from limited photon simulations by more than 37% when compared to the case without a compensator for the head and pelvis phantoms. Reconstructions of simulated head phantom projections corrected by the filtered and interpolated scatter estimates showed improvements in overall image quality. Conclusions: The spatial frequency content of the scatter distribution in CBCT is found to be contained within the low frequency domain. The frequency content is modulated both by object and imaging parameters (ADD and compensator). The low-frequency nature of the scatter distribution allows a limited set of sine and cosine basis functions to be used to accurately represent the scatter signal in the presence of noise and reduced data sampling, decreasing MC based scatter estimation time. Compensator induced modulation of the scatter distribution reduces the frequency content and improves the fitting results.
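A small sketch of the denoising idea, assuming a 1-D detector-row scatter profile smoothed with a zero-phase low-pass Butterworth filter whose cutoff mirrors the reported ~0.1 cm⁻¹ content; the profile, detector pitch, and noise level are invented:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def denoise_scatter(profile, pitch_cm, f_cut=0.1, order=4):
    """Low-pass a noisy 1-D scatter profile (one detector row) with a
    zero-phase Butterworth filter; f_cut in cycles/cm."""
    fs = 1.0 / pitch_cm                        # spatial sampling rate
    sos = butter(order, f_cut, btype="low", fs=fs, output="sos")
    return sosfiltfilt(sos, profile)

# toy: smooth scatter hump plus Monte Carlo noise, 0.1 cm detector pitch
x = np.arange(-20, 20, 0.1)                    # cm across the detector
truth = 1e3 * np.exp(-(x / 12.0) ** 2)         # slowly varying scatter
noisy = truth + 40 * np.random.default_rng(2).normal(size=x.size)
smooth = denoise_scatter(noisy, pitch_cm=0.1)
```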
Computational expressions for signals in frequency-modulation spectroscopy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Di Rosa, Michael D.; Reiten, M. T.
2015-05-25
General expressions for the signals in frequency-modulation spectroscopy (FMS) appear in the literature but are often reduced to simple analytical equations under the assumption of a weak modulation index. This is little help to the experimentalist who wants to predict signals for modulation depths of the order of unity or greater, where strong FMS signals reside. Here, we develop general formulas for FMS signals in the case of an absorber with a Voigt line shape and then link these expressions to an example and existing numerical code for the line shape. The resulting computational recipe is easy to implement and exercised here to show where the larger FMS signals are found over the coordinates of modulation index and modulation frequency. One can also estimate from provided curves the in-phase FMS signal over a wide range of modulation parameters at either the Lorentzian-broadening or Doppler-broadening limit, or anywhere in between by interpolation.
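A minimal numeric sketch of how such signals can be computed without a weak-modulation expansion: carry the full Bessel sideband sum through a complex line shape and demodulate. For brevity this uses the Lorentzian limit rather than the paper's Voigt treatment (a Voigt version would swap in scipy.special.wofz), and all parameters are illustrative:

```python
import numpy as np
from scipy.special import jv

def fms_signal(delta_c, M, omega_m, gamma, delta0=1.0, n_max=6):
    """In-phase/quadrature FMS signal for a Lorentzian absorber.

    delta_c: carrier detuning from line center; M: modulation index;
    omega_m: modulation frequency (same units as gamma). Sideband n
    carries amplitude J_n(M) and sees the complex Lorentzian
    transmission; the demodulated component at omega_m is
    sum over n of a_n * conj(a_{n-1}).
    """
    n = np.arange(-n_max, n_max + 1)
    x = (delta_c + n * omega_m) / gamma
    absorption = delta0 / (1.0 + x**2)        # Lorentzian delta_n
    dispersion = delta0 * x / (1.0 + x**2)    # associated phase phi_n
    a = jv(n, M) * np.exp(-absorption - 1j * dispersion)
    c = np.sum(a[1:] * np.conj(a[:-1]))
    return c.real, c.imag                     # in-phase, quadrature

# scan the carrier across the line at a modulation depth of order unity
detunings = np.linspace(-5, 5, 201)
inphase = [fms_signal(d, M=1.0, omega_m=2.0, gamma=1.0)[0] for d in detunings]
```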
Saha, Dibakar; Alluri, Priyanka; Gan, Albert
2017-01-01
The Highway Safety Manual (HSM) presents statistical models to quantitatively estimate an agency's safety performance. The models were developed using data from only a few U.S. states. To account for the effects of local attributes and temporal factors on crash occurrence, agencies are required to calibrate the HSM-default models for crash predictions. The manual suggests updating calibration factors every two to three years, or preferably on an annual basis. Given that the calibration process involves substantial time, effort, and resources, a comprehensive analysis of the required calibration factor update frequency is valuable to the agencies. Accordingly, the objective of this study is to evaluate the HSM's recommendation and determine the required frequency of calibration factor updates. A robust Bayesian estimation procedure is used to assess the variation between calibration factors computed annually, biennially, and triennially using data collected from over 2400 miles of segments and over 700 intersections on urban and suburban facilities in Florida. The Bayesian model yields a posterior distribution of the model parameters that gives credible information to infer whether the difference between calibration factors computed at specified intervals is credibly different from the null value, which represents unaltered calibration factors between the comparison years, or in other words, zero difference. The concept of the null value is extended to include the range of values that are practically equivalent to zero. Bayesian inference shows that calibration factors based on total crash frequency are required to be updated every two years in cases where the variations between calibration factors are not greater than 0.01. When the variations are between 0.01 and 0.05, calibration factors based on total crash frequency could be updated every three years. Copyright © 2016 Elsevier Ltd. All rights reserved.
Site response and attenuation in the Puget Lowland, Washington State
Pratt, T.L.; Brocher, T.M.
2006-01-01
Simple spectral ratio (SSR) and horizontal-to-vertical (H/V) site-response estimates at 47 sites in the Puget Lowland of Washington State document significant attenuation of 1.5- to 20-Hz shear waves within sedimentary basins there. Amplitudes of the horizontal components of shear-wave arrivals from three local earthquakes were used to compute SSRs with respect to the average of two bedrock sites and H/V spectral ratios with respect to the vertical component of the shear-wave arrivals at each site. SSR site-response curves at thick basin sites show peak amplifications of 2 to 6 at frequencies of 3 to 6 Hz, and decreasing spectral amplification with increasing frequency above 6 Hz. SSRs at nonbasin sites show a variety of shapes and larger resonance peaks. We attribute the spectral decay at frequencies above the amplification peak at basin sites to attenuation within the basin strata. Computing the frequency-independent, depth-dependent attenuation factor (Qs,int) from the SSR spectral decay between 2 and 20 Hz gives values of 5 to 40 for shallow sedimentary deposits and about 250 for the deepest sedimentary strata (7 km depth). H/V site responses show less spectral decay than the SSR responses but contain many of the same resonance peaks. We hypothesize that the H/V method yields a flatter response across the frequency spectrum than SSRs because the H/V reference signal (the vertical component of the shear-wave arrivals) has undergone a degree of attenuation similar to the horizontal component recordings. Correcting the SSR site responses for attenuation within the basins by removing the spectral decay improves agreement between SSR and H/V estimates.
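A toy version of the decay-slope calculation, assuming the standard model in which the amplitude ratio falls off as exp(−π f t*) with t* = T/Q (T the shear-wave travel time through the attenuating section); the spectrum and travel time are invented:

```python
import numpy as np

def q_from_decay(freqs, ssr, travel_time, band=(2.0, 20.0)):
    """Frequency-independent Q from the high-frequency decay of a
    spectral ratio: ln SSR(f) ~ const - pi * t_star * f, t_star = T/Q."""
    sel = (freqs >= band[0]) & (freqs <= band[1])
    slope = np.polyfit(freqs[sel], np.log(ssr[sel]), 1)[0]
    t_star = -slope / np.pi
    return travel_time / t_star

# toy: 1 s of shear-wave travel through basin fill with Q = 30
f = np.linspace(0.5, 25.0, 200)
ssr = 4.0 * np.exp(-np.pi * f * (1.0 / 30.0))
q_est = q_from_decay(f, ssr, travel_time=1.0)   # recovers about 30
```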
Heshmat, Ramin; Qorbani, Mostafa; Mozaffarian, Nafiseh; Djalalinia, Shirin; Sheidaei, Ali; Motlagh, Mohammad Esmaeil; Safiri, Saeid; Gohari, Kimia; Ataie-Jafari, Asal; Ardalan, Gelayol; Asayesh, Hamid; Mansourian, Morteza; Kelishadi, Roya
2018-02-01
This study aimed to assess the socioeconomic inequality and determinants of screen time (ST) frequency in Iranian children and adolescents. This nationwide study was conducted as part of a national school-based surveillance program among 36,486 students consisting of 50.79% boys and 74.23% urban inhabitants, aged 6-18 years, living in urban and rural areas of 30 provinces of Iran. Socioeconomic inequality in ST, including the time spent on ST, watching TV, and leisure-time working with computers, was assessed across quintiles of SES using the concentration index (C) and the slope index of inequality (SII). Overall, 36,486 students completed the study (response rate 91.25%). Their mean (SD) age was 12.14 (3.36) years. The national estimate of the frequency of ST was 31.66% (95% CI 31.16-32.17), with an ascending change from 20.80% (95% CI 19.81-21.82) to 36.66% (95% CI 35.47-37.87) from the first to the last quintile of SES. The estimated C value at the national level was positive (0.08), which indicates that inequality was in favor of low SES groups. Considering the SII values at the national level [- 0.16 (- 0.39, 0.06)], the absolute difference in ST frequency between the bottom and top of the socioeconomic groups had a descending trend. In the multivariate logistic regression model, family history of obesity, generalized obesity, and age were the main significant determinants of prolonged ST, watching TV, and computer working (P < 0.001). Socioeconomic inequality in ST frequency was in favor of low SES groups. These findings are useful for health policies, better programming, and future complementary analyses.
Veronesi, G; Maisonneuve, P; Rampinelli, C; Bertolotti, R; Petrella, F; Spaggiari, L; Bellomi, M
2013-12-01
It is unclear how long low-dose computed tomographic (LDCT) screening should continue in populations at high risk of lung cancer. We assessed outcomes and the predictive ability of the COSMOS prediction model in volunteers screened for 10 years. Smokers and former smokers (>20 pack-years), >50 years, were enrolled over one year (2000-2001), receiving annual LDCT for 10 years. The frequency of screening-detected lung cancers was compared with COSMOS and Bach risk model estimates. Among 1035 recruited volunteers (71% men, mean age 58 years) compliance was 65% at study end. Seventy-one (6.95%) lung cancers were diagnosed, 12 at baseline. Disease stage was: IA in 48 (66.6%); IB in 6; IIA in 5; IIB in 2; IIIA in 5; IIIB in 1; IV in 5; and limited small cell cancer in 3. Five- and ten-year survival were 64% and 57%, respectively, 84% and 65% for stage I. Ten (12.1%) received surgery for a benign lesion. The number of lung cancers detected during the first two screening rounds was close to that predicted by the COSMOS model, while the Bach model accurately predicted frequency from the third year on. Neither cancer frequency nor proportion at stage I decreased over 10 years, indicating that screening should not be discontinued. Most cancers were early stage, and overall survival was high. Only a limited number of invasive procedures for benign disease were performed. The Bach model - designed to predict symptomatic cancers - accurately predicted cancer frequency from the third year, suggesting that overdiagnosis is a minor problem in lung cancer screening. The COSMOS model - designed to estimate screening-detected lung cancers - accurately predicted cancer frequency at baseline and second screening round. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Salin, M. B.; Dosaev, A. S.; Konkov, A. I.; Salin, B. M.
2014-07-01
Numerical simulation methods are described for the spectral characteristics of an acoustic signal scattered by multiscale surface waves. The methods include the algorithms for calculating the scattered field by the Kirchhoff method and with the use of an integral equation, as well as the algorithms of surface waves generation with allowance for nonlinear hydrodynamic effects. The paper focuses on studying the spectrum of Bragg scattering caused by surface waves whose frequency exceeds the fundamental low-frequency component of the surface waves by several octaves. The spectrum broadening of the backscattered signal is estimated. The possibility of extending the range of applicability of the computing method developed under small perturbation conditions to cases characterized by a Rayleigh parameter of ≥1 is estimated.
A comparison of three approaches to non-stationary flood frequency analysis
NASA Astrophysics Data System (ADS)
Debele, S. E.; Strupczewski, W. G.; Bogdanowicz, E.
2017-08-01
Non-stationary flood frequency analysis (FFA) is applied to the statistical analysis of seasonal flow maxima from Polish and Norwegian catchments. Three non-stationary estimation methods, namely maximum likelihood (ML), two-stage (WLS/TS) and GAMLSS (generalized additive model for location, scale and shape parameters), are compared in the context of capturing the effect of non-stationarity on the estimation of time-dependent moments and design quantiles. A multimodel approach is recommended to reduce the errors in the magnitude of quantiles due to model misspecification. The results of calculations based on observed seasonal daily flow maxima and computer simulation experiments showed that GAMLSS gave the best results with respect to the relative bias and root mean square error in the estimates of the trend in the standard deviation and the constant shape parameter, while WLS/TS provided better accuracy in the estimates of the trend in the mean value. Of the three compared methods, WLS/TS is recommended for dealing with non-stationarity in short time series. Some practical aspects of the GAMLSS package application are also presented. A detailed discussion of general issues related to the consequences of climate change in FFA is presented in the second part of the article, entitled "Around and about an application of the GAMLSS package in non-stationary flood frequency analysis".
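To make the estimation step concrete, below is a minimal sketch of the maximum-likelihood variant for one common special case: a Gumbel distribution whose location parameter drifts linearly in time. The distribution choice, parameterization, and data are illustrative assumptions, not the paper's exact setup.

    # Minimal sketch: ML fit of a non-stationary Gumbel model with a
    # linear trend in the location parameter, mu(t) = m0 + m1*t.
    import numpy as np
    from scipy.optimize import minimize

    def neg_log_lik(theta, t, x):
        m0, m1, log_s = theta
        s = np.exp(log_s)                       # scale kept positive
        z = (x - (m0 + m1 * t)) / s
        return np.sum(np.log(s) + z + np.exp(-z))   # Gumbel NLL

    t = np.arange(50.0)                              # years
    rng = np.random.default_rng(0)
    x = 100 + 0.5 * t + 20 * rng.gumbel(size=t.size) # synthetic maxima

    res = minimize(neg_log_lik, x0=[x.mean(), 0.0, np.log(x.std())],
                   args=(t, x), method="Nelder-Mead")
    m0, m1, log_s = res.x
    # Time-dependent 100-year quantile (1% annual exceedance):
    q100 = (m0 + m1 * t) - np.exp(log_s) * np.log(-np.log(0.99))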
Schultz-Coulon, H J
1975-07-01
The applicability of a newly developed fundamental frequency analyzer to diagnosis in phoniatrics is reviewed. During routine voice examination, the analyzer allows a quick and accurate measurement of the fundamental frequency and sound level of the speaking voice, and of the vocal range and maximum phonation time. By computing fundamental frequency histograms, the median fundamental frequency and the total pitch range can be better determined and compared. Objective studies of certain technical faculties of the singing voice, which usually are assessed subjectively by the speech therapist, may now be done by means of this analyzer. Several examples demonstrate the differences between correct and incorrect phonation. These studies compare the pitch perturbations during the crescendo and decrescendo of a swell-tone, and show typical traces of staccato, trill and yodel. The study concludes that fundamental frequency analysis is a valuable supplemental method for objective voice examination.
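For illustration, the histogram-based statistics described above can be computed from any fundamental-frequency track as follows; the array names and the semitone reference are assumptions, not details of the original analyzer.

    # Median fundamental frequency and total pitch range from an F0
    # track, via a histogram on a semitone scale.
    import numpy as np

    def f0_statistics(f0_hz, ref_hz=55.0):
        f0 = np.asarray(f0_hz, dtype=float)
        f0 = f0[f0 > 0]                          # drop unvoiced frames
        semitones = 12 * np.log2(f0 / ref_hz)    # perceptual pitch scale
        hist, edges = np.histogram(semitones, bins=48)
        median_hz = ref_hz * 2 ** (np.median(semitones) / 12)
        range_semitones = semitones.max() - semitones.min()
        return median_hz, range_semitones, hist, edges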
ARK: Aggregation of Reads by K-Means for Estimation of Bacterial Community Composition.
Koslicki, David; Chatterjee, Saikat; Shahrivar, Damon; Walker, Alan W; Francis, Suzanna C; Fraser, Louise J; Vehkaperä, Mikko; Lan, Yueheng; Corander, Jukka
2015-01-01
Estimation of bacterial community composition from high-throughput sequenced 16S rRNA gene amplicons is a key task in microbial ecology. Since the sequence data from each sample typically consist of a large number of reads and are adversely impacted by different levels of biological and technical noise, accurate analysis of such large datasets is challenging. There has been a recent surge of interest in using compressed-sensing-inspired and convex-optimization-based methods to solve the estimation problem for bacterial community composition. These methods typically rely on summarizing the sequence data by frequencies of low-order k-mers and matching this information statistically with a taxonomically structured database. Here we show that the accuracy of the resulting community composition estimates can be substantially improved by aggregating the reads from a sample with an unsupervised machine learning approach prior to the estimation phase. The aggregation of reads is a pre-processing approach in which a standard K-means clustering algorithm partitions a large set of reads into subsets at reasonable computational cost, providing several vectors of first-order statistics instead of a single statistical summary in terms of k-mer frequencies. The output of the clustering is then processed further to obtain the final estimate for each sample. The resulting method is called Aggregation of Reads by K-means (ARK), and it is based on a statistical argument via a mixture density formulation. ARK is found to improve the fidelity and robustness of several recently introduced methods, with only a modest increase in computational complexity. An open source, platform-independent implementation of the method in the Julia programming language is freely available at https://github.com/dkoslicki/ARK. A Matlab implementation is available at http://www.ee.kth.se/ctsoftware.
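The pre-processing idea can be sketched compactly: compute a k-mer frequency vector per read, cluster the vectors with K-means, and keep one weighted first-order summary per cluster for the downstream estimator. This is a minimal illustration with synthetic reads and assumed parameters, not the released ARK code.

    from itertools import product
    import numpy as np
    from sklearn.cluster import KMeans

    K = 4
    KMERS = {"".join(p): i for i, p in enumerate(product("ACGT", repeat=K))}

    def kmer_freq(read):
        v = np.zeros(len(KMERS))
        for i in range(len(read) - K + 1):
            v[KMERS[read[i:i + K]]] += 1
        return v / max(v.sum(), 1)

    rng = np.random.default_rng(0)
    reads = ["".join(rng.choice(list("ACGT"), size=150)) for _ in range(1000)]
    X = np.vstack([kmer_freq(r) for r in reads])

    n_clusters = 8
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X)
    # One first-order summary per cluster, weighted by cluster size,
    # replaces the single global k-mer histogram:
    summaries = [(X[labels == c].mean(axis=0), np.mean(labels == c))
                 for c in range(n_clusters)]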
Gui, Guan; Chen, Zhang-xin; Xu, Li; Wan, Qun; Huang, Jiyan; Adachi, Fumiyuki
2014-01-01
The channel estimation problem is one of the key technical issues in sparse frequency-selective fading multiple-input multiple-output (MIMO) communication systems using the orthogonal frequency division multiplexing (OFDM) scheme. To estimate sparse MIMO channels, sparse invariable step-size normalized least mean square (ISS-NLMS) algorithms were applied to adaptive sparse channel estimation (ASCE). It is well known that the step-size is a critical parameter that controls three aspects: algorithm stability, estimation performance, and computational cost. However, traditional methods risk estimation performance loss because an invariable step-size cannot balance the three aspects simultaneously. In this paper, we propose two stable sparse variable step-size NLMS (VSS-NLMS) algorithms to improve the accuracy of MIMO channel estimators. First, ASCE is formulated in MIMO-OFDM systems. Second, different sparse penalties are introduced to the VSS-NLMS algorithm for ASCE. In addition, the difference between sparse ISS-NLMS algorithms and sparse VSS-NLMS ones is explained and their lower bounds are derived. Finally, to verify the effectiveness of the proposed algorithms for ASCE, selected simulation results show that the proposed sparse VSS-NLMS algorithms achieve better estimation performance than the conventional methods in terms of mean square error (MSE) and bit error rate (BER) metrics.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-09
... responses. Estimated Time Per Response: 20 minutes (0.3 hours). Frequency of Response: Recordkeeping...), Transmitter Power Standards. No station may transmit with an effective radiated power (ERP) exceeding 50 W PEP on the 60 m band. For the purpose of computing ERP, the transmitter PEP will be multiplied by the...
Chen, Shigao; Fatemi, Mostafa; Greenleaf, James F
2002-09-01
A quantitative model is presented for a sphere vibrated by two ultrasound beams of frequencies omega1 and omega2. Due to the interference of the two sound beams, the radiation force has a dynamic component at the difference frequency omega2-omega1. The radiation impedance and mechanical impedance of the sphere are then used to compute the vibration speed of the sphere. Vibration speed versus vibration frequency is measured by laser vibrometer on several spheres, both in water and in gel phantom. These experimental results are used to verify the model. This method can be used to estimate the material properties of the medium (e.g., shear modulus) surrounding the sphere.
Estimating sediment discharge: Appendix D
Gray, John R.; Simões, Francisco J. M.
2008-01-01
Sediment-discharge measurements usually are available on a discrete or periodic basis. However, estimates of sediment transport often are needed for unmeasured periods, such as when daily or annual sediment-discharge values are sought, or when estimates of transport rates for unmeasured or hypothetical flows are required. Selected methods for estimating suspended-sediment, bed-load, bed-material-load, and total-load discharges have been presented in some detail elsewhere in this volume. The purposes of this contribution are to present some limitations and potential pitfalls associated with obtaining and using the requisite data and equations to estimate sediment discharges and to provide guidance for selecting appropriate estimating equations. Records of sediment discharge are derived from data collected with sufficient frequency to obtain reliable estimates for the computational interval and period. Most sediment-discharge records are computed at daily or annual intervals based on periodically collected data, although some partial records represent discrete or seasonal intervals such as those for flood periods. The method used to calculate sediment-discharge records is dependent on the types and frequency of available data. Records for suspended-sediment discharge computed by methods described by Porterfield (1972) are most prevalent, in part because measurement protocols and computational techniques are well established and because suspended sediment composes the bulk of sediment discharges for many rivers. Discharge records for bed load, total load, or in some cases bed-material load plus wash load are less common. Reliable estimation of sediment discharges presupposes that the data on which the estimates are based are comparable and reliable. Unfortunately, data describing a selected characteristic of sediment were not necessarily derived (collected, processed, analyzed, or interpreted) in a consistent manner. For example, bed-load data collected with different types of bed-load samplers may not be comparable (Gray et al. 1991; Childers 1999; Edwards and Glysson 1999). The total suspended solids (TSS) analytical method tends to produce concentration data from open-channel flows that are biased low with respect to their paired suspended-sediment concentration values, particularly when sand-size material composes more than about a quarter of the material in suspension. Instantaneous sediment-discharge values based on TSS data may differ from the more reliable product of suspended-sediment concentration values and the same water-discharge data by an order of magnitude (Gray et al. 2000; Bent et al. 2001; Glysson et al. 2000; 2001). An assessment of data comparability and reliability is an important first step in the estimation of sediment discharges. There are two approaches to obtaining values describing sediment loads in streams. One is based on direct measurement of the quantities of interest, and the other on relations developed between hydraulic parameters and sediment-transport potential. In the next sections, the most common techniques for both approaches are briefly addressed.
Truck acoustic data analyzer system
Haynes, Howard D.; Akerman, Alfred; Ayers, Curtis W.
2006-07-04
A passive vehicle acoustic data analyzer system having at least one microphone disposed in the acoustic field of a moving vehicle and a computer in electronic communication with the microphone(s). The computer detects and measures the frequency shift in the acoustic signature emitted by the vehicle as it approaches and passes the microphone(s). The acoustic signature of a truck driving by a microphone can provide enough information to estimate the truck speed in miles-per-hour (mph), engine speed in rotations-per-minute (RPM), turbocharger speed in RPM, and vehicle weight.
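The speed estimate can be illustrated with the classic Doppler relation: a spectral line at rest frequency f0 is heard at f0*c/(c-v) on approach and f0*c/(c+v) on recession, so the speed follows from the measured pair without knowing f0. A minimal sketch with assumed names, not the patented system's algorithm:

    # Estimate vehicle speed from the Doppler shift of one acoustic line
    # (e.g., an engine-order tone) measured while approaching vs. receding.
    # v = c * (f_app - f_rec) / (f_app + f_rec); f0 cancels out.
    C_SOUND = 343.0                      # speed of sound, m/s (assumed 20 C)

    def speed_from_doppler(f_app_hz, f_rec_hz):
        v = C_SOUND * (f_app_hz - f_rec_hz) / (f_app_hz + f_rec_hz)
        return v, v * 2.23694            # m/s and mph

    # Example: a tone at 101.5 Hz approaching and 98.5 Hz receding
    v_ms, v_mph = speed_from_doppler(101.5, 98.5)   # ~5.1 m/s, ~11.5 mph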
Bayesian Non-Stationary Index Gauge Modeling of Gridded Precipitation Extremes
NASA Astrophysics Data System (ADS)
Verdin, A.; Bracken, C.; Caldwell, J.; Balaji, R.; Funk, C. C.
2017-12-01
We propose a Bayesian non-stationary model to generate watershed scale gridded estimates of extreme precipitation return levels. The Climate Hazards Group Infrared Precipitation with Stations (CHIRPS) dataset is used to obtain gridded seasonal precipitation extremes over the Taylor Park watershed in Colorado for the period 1981-2016. For each year, grid cells within the Taylor Park watershed are aggregated to a representative "index gauge," which is input to the model. Precipitation-frequency curves for the index gauge are estimated for each year, using climate variables with significant teleconnections as proxies. Such proxies enable short-term forecasting of extremes for the upcoming season. Disaggregation ratios of the index gauge to the grid cells within the watershed are computed for each year and preserved to translate the index gauge precipitation-frequency curve to gridded precipitation-frequency maps for select return periods. Gridded precipitation-frequency maps are of the same spatial resolution as CHIRPS (0.05° x 0.05°). We verify that the disaggregation method preserves spatial coherency of extremes in the Taylor Park watershed. Validation of the index gauge extreme precipitation-frequency method consists of ensuring extreme value statistics are preserved on a grid cell basis. To this end, a non-stationary extreme precipitation-frequency analysis is performed on each grid cell individually, and the resulting frequency curves are compared to those produced by the index gauge disaggregation method.
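A minimal sketch of the aggregation and disaggregation bookkeeping is given below with synthetic data; the array names and the placeholder return-level estimate are assumptions, and the Bayesian non-stationary fitting itself is not shown.

    import numpy as np

    rng = np.random.default_rng(1)
    # precip[y, i, j]: seasonal maxima per year and grid cell, 1981-2016
    precip = rng.gamma(4.0, 10.0, size=(36, 20, 20))

    index_gauge = precip.mean(axis=(1, 2))            # one value per year
    ratios = precip / index_gauge[:, None, None]      # per-year ratios
    ratio_map = ratios.mean(axis=0)                   # preserved ratio field

    # Translate an index-gauge return level (in practice, from the fitted
    # non-stationary frequency curve) to a gridded return-level map:
    q100_index = np.quantile(index_gauge, 0.99)       # placeholder estimate
    q100_map = q100_index * ratio_map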
A Space-Time Signal Decomposition Algorithm for Downlink MIMO DS-CDMA Receivers
NASA Astrophysics Data System (ADS)
Wang, Yung-Yi; Fang, Wen-Hsien; Chen, Jiunn-Tsair
We propose a dimension reduction algorithm for the receiver of the downlink of direct-sequence code-division multiple access (DS-CDMA) systems in which both the transmitters and the receivers employ antenna arrays of multiple elements. To estimate the high-order channel parameters, we develop a layered architecture using dimension-reduced parameter estimation algorithms to estimate the frequency-selective multipath channels. In the proposed architecture, to exploit the space-time geometric characteristics of multipath channels, spatial beamformers and constrained (or unconstrained) temporal filters are adopted for clustered-multipath grouping and path isolation. In conjunction with multiple access interference (MAI) suppression techniques, the proposed architecture jointly estimates the directions of arrival, propagation delays, and fading amplitudes of the downlink fading multipaths. With the outputs of the proposed architecture, the signals of interest can then be naturally detected by path-wise maximum ratio combining. Compared to traditional techniques, such as the Joint-Angle-and-Delay-Estimation (JADE) algorithm for DOA-delay joint estimation and the space-time minimum mean square error (ST-MMSE) algorithm for signal detection, computer simulations show that the proposed algorithm substantially mitigates the computational complexity at the expense of only slight performance degradation.
Lesion contrast and detection using sonoelastographic shear velocity imaging: preliminary results
NASA Astrophysics Data System (ADS)
Hoyt, Kenneth; Parker, Kevin J.
2007-03-01
This paper assesses lesion contrast and detection using sonoelastographic shear velocity imaging. Shear wave interference patterns, termed crawling waves, for a two-phase medium were simulated assuming plane wave conditions. Shear velocity estimates were computed using a spatial autocorrelation algorithm that operates in the direction of shear wave propagation for a given kernel size. Contrast was determined by analyzing the shear velocity estimate transition between mediums. Experimental results were obtained using heterogeneous phantoms with spherical inclusions (5 or 10 mm in diameter) characterized by elevated shear velocities. Two vibration sources were applied to opposing phantom edges and scanned (orthogonal to shear wave propagation) with an ultrasound scanner equipped for sonoelastography. Demodulated data were saved and transferred to an external computer for processing shear velocity images. Simulation results demonstrate that the shear velocity transition between contrasting mediums is governed by both the estimator kernel size and the source vibration frequency. Experimental results from phantoms further indicate that decreasing the estimator kernel size produces a corresponding decrease in the shear velocity estimate transition between background and inclusion material, albeit with an increase in estimator noise. Overall, results demonstrate the ability to generate high contrast shear velocity images using sonoelastographic techniques and detect millimeter-sized lesions.
Babiloni, F; Babiloni, C; Carducci, F; Fattorini, L; Onorati, P; Urbano, A
1996-04-01
This paper presents a realistic Laplacian (RL) estimator based on a tensorial formulation of the surface Laplacian (SL) that uses the 2-D thin plate spline function to obtain a mathematical description of a realistic scalp surface. Because of this tensorial formulation, the RL does not need an orthogonal reference frame placed on the realistic scalp surface. In simulation experiments the RL was estimated with an increasing number of "electrodes" (up to 256) on a mathematical scalp model, the analytic Laplacian being used as a reference. Second and third order spherical spline Laplacian estimates were examined for comparison. Noise of increasing magnitude and spatial frequency was added to the simulated potential distributions. Movement-related potentials and somatosensory evoked potentials sampled with 128 electrodes were used to estimate the RL on a realistically shaped, MR-constructed model of the subject's scalp surface. The RL was also estimated on a mathematical spherical scalp model computed from the real scalp surface. Simulation experiments showed that the performances of the RL estimator were similar to those of the second and third order spherical spline Laplacians. Furthermore, the information content of scalp-recorded potentials was clearly better when the RL estimator computed the SL of the potential on an MR-constructed scalp surface model.
Incorporating Auditory Models in Speech/Audio Applications
NASA Astrophysics Data System (ADS)
Krishnamoorthi, Harish
2011-12-01
Following the success in incorporating perceptual models in audio coding algorithms, their application in other speech/audio processing systems is expanding. In general, all perceptual speech/audio processing algorithms involve minimization of an objective function that directly or indirectly incorporates properties of human perception. This dissertation primarily investigates the problems associated with directly embedding an auditory model in the objective function formulation and proposes possible solutions to overcome high complexity issues for use in real-time speech/audio algorithms. Specific problems addressed in this dissertation include: 1) the development of approximate but computationally efficient auditory model implementations that are consistent with the principles of psychoacoustics, 2) the development of a mapping scheme that allows synthesizing a time/frequency domain representation from its equivalent auditory model output. The first problem is aimed at addressing the high computational complexity involved in solving perceptual objective functions that require repeated application of the auditory model to evaluate different candidate solutions. In this dissertation, frequency-pruning and detector-pruning algorithms are developed that efficiently implement the various auditory model stages. The performance of the pruned model is compared to that of the original auditory model for different types of test signals in the SQAM database. Experimental results indicate only a 4-7% relative error in loudness while attaining up to 80-90% reduction in computational complexity. Similarly, a hybrid algorithm is developed specifically for use with sinusoidal signals; it employs the proposed auditory pattern combining technique together with a look-up table to store representative auditory patterns. The second problem obtains an estimate of the auditory representation that minimizes a perceptual objective function and transforms the auditory pattern back to its equivalent time/frequency representation. This avoids the repeated application of auditory model stages to test different candidate time/frequency vectors in minimizing perceptual objective functions. In this dissertation, a constrained mapping scheme is developed by linearizing certain auditory model stages; this ensures obtaining a time/frequency mapping corresponding to the estimated auditory representation. This paradigm was successfully incorporated in a perceptual speech enhancement algorithm and a sinusoidal component selection task.
Two-voice fundamental frequency estimation
NASA Astrophysics Data System (ADS)
de Cheveigné, Alain
2002-05-01
An algorithm is presented that estimates the fundamental frequencies of two concurrent voices or instruments. The algorithm models each voice as a periodic function of time, and jointly estimates both periods by cancellation according to a previously proposed method [de Cheveigné and Kawahara, Speech Commun. 27, 175-185 (1999)]. The new algorithm improves on the old in several respects: it allows an unrestricted search range, effectively avoids harmonic and subharmonic errors, is more accurate (it uses two-dimensional parabolic interpolation), and is computationally less costly. It remains subject to unavoidable errors when the periods are in certain simple ratios and the task is inherently ambiguous. The algorithm is evaluated on a small database including speech, singing voice, and instrumental sounds. It can be extended in several ways: to decide the number of voices, to handle amplitude variations, and to estimate more than two voices (at the expense of increased processing cost and decreased reliability). It makes no use of instrument models, learned or otherwise, although it could usefully be combined with such models. [Work supported by the Cognitique programme of the French Ministry of Research and Technology.]
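The cancellation principle is easy to sketch: a delay-and-subtract comb filter nulls any signal of period T, so cascading two such filters and searching the period pairs for minimum residual power jointly estimates both periods. The brute-force, integer-period sketch below captures only the core idea; the published algorithm adds the interpolation and error-avoidance refinements noted above.

    import numpy as np

    def two_voice_periods(x, pmin=25, pmax=400):
        """Joint period estimates by cascaded cancellation (brute force)."""
        best, best_pow = None, np.inf
        for p1 in range(pmin, pmax):
            y = x[p1:] - x[:-p1]                 # cancel voice 1
            for p2 in range(p1, pmax):
                r = y[p2:] - y[:-p2]             # cancel voice 2
                pw = np.mean(r * r)              # residual power
                if pw < best_pow:
                    best, best_pow = (p1, p2), pw
        return best

    fs = 16000
    t = np.arange(4096) / fs
    x = np.sin(2 * np.pi * 110 * t) + 0.8 * np.sin(2 * np.pi * 155 * t)
    p1, p2 = two_voice_periods(x)   # ~103 and ~145 samples (155 and 110 Hz)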
NASA Astrophysics Data System (ADS)
Tau Siesakul, Bamrung; Gkoktsi, Kyriaki; Giaralis, Agathoklis
2015-05-01
Motivated by the need to reduce the monetary and energy consumption costs of wireless sensor networks in undertaking output-only/operational modal analysis of engineering structures, this paper considers a multi-coset analog-to-information converter for structural system identification from acceleration response signals of white-noise-excited linear damped structures sampled at sub-Nyquist rates. The underlying natural frequencies, peak gains in the frequency domain, and critical damping ratios of the vibrating structures are estimated directly from the sub-Nyquist measurements; the computationally demanding signal reconstruction step is therefore bypassed. This is accomplished by first employing a power spectrum blind sampling (PSBS) technique for multi-band wide-sense stationary stochastic processes in conjunction with deterministic non-uniform multi-coset sampling patterns derived from solving a weighted least squares optimization problem. Next, modal properties are derived by the standard frequency domain peak picking algorithm. Special attention is focused on assessing the potential of the adopted PSBS technique, which poses no sparsity requirements on the sensed signals, to derive accurate estimates of modal structural system properties from noisy sub-Nyquist measurements. To this aim, sub-Nyquist sampled acceleration response signals corrupted by various levels of additive white noise, pertaining to a benchmark space truss structure with closely spaced natural frequencies, are obtained within an efficient Monte Carlo simulation-based framework. Accurate estimates of natural frequencies and reasonable estimates of local peak spectral ordinates and critical damping ratios are derived from measurements sampled at about 70% below the Nyquist rate and for SNR as low as 0 dB, demonstrating that the adopted approach enjoys noise immunity.
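The final peak-picking step is simple enough to sketch; the PSBS sub-Nyquist spectrum estimation that precedes it is the substantive contribution and is not reproduced here. A Welch estimate of a synthetic two-mode response stands in for the blindly sampled spectrum, with assumed parameters throughout.

    import numpy as np
    from scipy.signal import welch, find_peaks

    fs = 200.0
    t = np.arange(0, 60, 1 / fs)
    rng = np.random.default_rng(2)
    # Synthetic two-mode acceleration response in white noise:
    x = (np.sin(2 * np.pi * 12.0 * t) + 0.6 * np.sin(2 * np.pi * 13.5 * t)
         + 0.5 * rng.standard_normal(t.size))

    f, pxx = welch(x, fs=fs, nperseg=4096)
    peaks, props = find_peaks(pxx, height=10 * np.median(pxx), distance=10)
    natural_freqs = f[peaks]            # ~12.0 and 13.5 Hz
    peak_gains = props["peak_heights"]  # local peak spectral ordinates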
Method and System for Temporal Filtering in Video Compression Systems
NASA Technical Reports Server (NTRS)
Lu, Ligang; He, Drake; Jagmohan, Ashish; Sheinin, Vadim
2011-01-01
Three related innovations combine improved non-linear motion estimation, video coding, and video compression. The first system comprises a method in which side information is generated using an adaptive, non-linear motion model. This method enables extrapolating and interpolating a visual signal, including determining the first motion vector between the first pixel position in a first image and a second pixel position in a second image; determining a second motion vector between the second pixel position in the second image and a third pixel position in a third image; determining a third motion vector from the first pixel position in the first image, the second pixel position in the second image, and the third pixel position in the third image using a non-linear model; and determining a position of the fourth pixel in a fourth image based upon the third motion vector. For the video compression element, the video encoder has low computational complexity and high compression efficiency. The disclosed system comprises a video encoder and a decoder. The encoder converts the source frame into a space-frequency representation and estimates the conditional statistics of at least one vector of space-frequency coefficients with similar frequencies, conditioned on previously encoded data. It estimates an encoding rate based on the conditional statistics and applies a Slepian-Wolf code with the computed encoding rate. The method for decoding includes generating a side-information vector of frequency coefficients based on previously decoded source data, encoder statistics, and previous reconstructions of the source frequency vector. It also performs Slepian-Wolf decoding of a source frequency vector based on the generated side-information and the Slepian-Wolf code bits. The video coding element includes receiving a first reference frame having a first pixel value at a first pixel position, a second reference frame having a second pixel value at a second pixel position, and a third reference frame having a third pixel value at a third pixel position. It determines a first motion vector between the first pixel position and the second pixel position, a second motion vector between the second pixel position and the third pixel position, and a fourth pixel value for a fourth frame based upon a linear or nonlinear combination of the first pixel value, the second pixel value, and the third pixel value. A stationary filtering process determines the estimated pixel values. The parameters of the filter may be predetermined constants.
The use of multiwavelets for uncertainty estimation in seismic surface wave dispersion.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Poppeliers, Christian
This report describes a new single-station analysis method to estimate the dispersion and uncertainty of seismic surface waves using the multiwavelet transform. Typically, when estimating the dispersion of a surface wave using only a single seismic station, the seismogram is decomposed into a series of narrow-band realizations using a bank of narrow-band filters. By then enveloping and normalizing the filtered seismograms and identifying the maximum power as a function of frequency, the group velocity can be estimated if the source-receiver distance is known. However, using the filter bank method, there is no robust way to estimate uncertainty. In this report, I introduce a new method of estimating the group velocity that includes an estimate of uncertainty. The method is similar to the conventional filter bank method, but uses a class of functions, called Slepian wavelets, to compute a series of wavelet transforms of the data. Each wavelet transform is mathematically similar to a filter bank; however, the time-frequency tradeoff is optimized. By taking multiple wavelet transforms, I form a population of dispersion estimates from which standard statistical methods can be used to estimate uncertainty. I demonstrate the utility of this new method by applying it to synthetic data as well as ambient-noise surface-wave cross-correlograms recorded by the University of Nevada Seismic Network.
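The construction can be sketched as follows: discrete prolate spheroidal (Slepian) tapers modulated to each analysis frequency act as a family of near-optimal narrow-band filters, and the spread of the per-taper envelope-peak arrival times furnishes the uncertainty. The function and parameter names are assumptions, not the report's implementation.

    import numpy as np
    from scipy.signal import windows, fftconvolve

    def group_velocity(seis, fs, dist_km, freqs, m=1001, nw=4.0, k=5):
        tapers = windows.dpss(m, nw, k)              # k Slepian tapers
        n = np.arange(m)
        results = []
        for f in freqs:
            carrier = np.exp(2j * np.pi * f * n / fs)
            picks = []
            for tap in tapers:
                wavelet = tap * carrier              # modulated taper
                env = np.abs(fftconvolve(seis, wavelet, mode="same"))
                picks.append(np.argmax(env) / fs)    # group arrival time
            vels = dist_km / np.array(picks)
            results.append((f, vels.mean(), vels.std()))  # value + spread
        return results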
NASA Astrophysics Data System (ADS)
Fulani, Olatunji T.
Development of electric drive systems for transportation and industrial applications is rapidly seeing the use of wide-bandgap (WBG) based power semiconductor devices. These devices, such as SiC MOSFETs, enable high switching frequencies and are becoming the preferred choice in inverters because of their lower switching losses and higher allowable operating temperatures. Due to the much shorter turn-on and turn-off times and correspondingly larger output voltage edge rates, traditional models and methods previously used to estimate inverter and motor power losses, based upon a triangular power loss waveform, are no longer justifiable from a physical perspective. In this thesis, more appropriate models and a power loss calculation approach are described with the goal of more accurately estimating the power losses in WBG-based electric drive systems. Sine-triangle modulation with third harmonic injection is used to control the switching of the inverter. The motor and inverter models are implemented using Simulink and computer studies are shown illustrating the application of the new approach.
Prediction of resource volumes at untested locations using simple local prediction models
Attanasi, E.D.; Coburn, T.C.; Freeman, P.A.
2006-01-01
This paper shows how local spatial nonparametric prediction models can be applied to estimate volumes of recoverable gas resources at individual undrilled sites, at multiple sites on a regional scale, and to compute confidence bounds for regional volumes based on the distribution of those estimates. An approach that combines cross-validation, the jackknife, and bootstrap procedures is used to accomplish this task. Simulation experiments show that cross-validation can be applied beneficially to select an appropriate prediction model. The cross-validation procedure worked well for a wide range of different states of nature and levels of information. Jackknife procedures are used to compute individual prediction estimation errors at undrilled locations. The jackknife replicates also are used with a bootstrap resampling procedure to compute confidence bounds for the total volume. The method was applied to data (partitioned into a training set and target set) from the Devonian Antrim Shale continuous-type gas play in the Michigan Basin in Otsego County, Michigan. The analysis showed that the model estimate of total recoverable volumes at prediction sites is within 4 percent of the total observed volume. The model predictions also provide frequency distributions of the cell volumes at the production unit scale. Such distributions are the basis for subsequent economic analyses. © Springer Science+Business Media, LLC 2007.
Wave front sensing for next generation earth observation telescope
NASA Astrophysics Data System (ADS)
Delvit, J.-M.; Thiebaut, C.; Latry, C.; Blanchet, G.
2017-09-01
High-resolution observation systems are highly dependent on optics quality and are usually designed to be nearly diffraction limited. Such performance allows the Nyquist frequency to be set closer to the cut-off frequency, or equivalently the pupil diameter to be minimized for a given ground sampling distance target. Up to now, defocus is the only aberration that is allowed to evolve slowly and that may be corrected in flight, using an open-loop correction based upon ground estimation and upload of a refocusing command. For instance, the defocus of the Pleiades satellites is assessed from star acquisitions and refocusing is done with a thermal actuation of the M2 mirror. Next-generation systems under study at CNES should include active optics in order to accommodate evolving aberrations not limited to defocus, due for instance to variable in-orbit thermal conditions. Active optics relies on aberration estimation through an onboard Wave Front Sensor (WFS). One option is a Shack-Hartmann sensor, which can be used on extended scenes (unknown landscapes). A wave-front computation algorithm should then be implemented on board the satellite to provide the wave-front error measure for the control loop. In the worst-case scenario, this measure should be computed before each image acquisition. A robust and fast shift estimation algorithm between Shack-Hartmann images is then needed to fulfill this last requirement. A fast gradient-based algorithm using optical flows with a Lucas-Kanade method has been studied and implemented on an electronic device developed by CNES. Measurement accuracy depends on the Wave Front Error (WFE), the landscape frequency content, the number of searched aberrations, the a priori knowledge of high-order aberrations and the characteristics of the sensor. CNES has carried out a full-scale sensitivity analysis on the whole parameter set with its internally developed algorithm.
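A minimal sketch of the gradient-based shift estimation between two sub-aperture images follows (global shift only, sub-pixel regime; a flight implementation would add iteration and windowing). The names are assumptions.

    import numpy as np

    def lk_shift(im0, im1):
        """Lucas-Kanade estimate of the global (dx, dy) shift im0 -> im1."""
        ix = 0.5 * (np.roll(im0, -1, axis=1) - np.roll(im0, 1, axis=1))
        iy = 0.5 * (np.roll(im0, -1, axis=0) - np.roll(im0, 1, axis=0))
        it = im1 - im0
        # trim the border where the rolled gradients wrap around
        ix, iy, it = (a[1:-1, 1:-1].ravel() for a in (ix, iy, it))
        A = np.array([[ix @ ix, ix @ iy],
                      [ix @ iy, iy @ iy]])           # normal equations
        b = -np.array([ix @ it, iy @ it])
        return np.linalg.solve(A, b)                 # valid for small shifts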
Chen, Xi Lin; De Santis, Valerio; Umenei, Aghuinyue Esai
2014-07-07
In this study, the maximum received power obtainable through wireless power transfer (WPT) by a small receiver (Rx) coil from a relatively large transmitter (Tx) coil is numerically estimated in the frequency range from 100 kHz to 10 MHz based on human body exposure limits. Analytical calculations were first conducted to determine the worst-case coupling between a homogeneous cylindrical phantom with a radius of 0.65 m and a Tx coil positioned 0.1 m away with the radius ranging from 0.25 to 2.5 m. Subsequently, three high-resolution anatomical models were employed to compute the peak induced field intensities with respect to various Tx coil locations and dimensions. Based on the computational results, scaling factors which correlate the cylindrical phantom and anatomical model results were derived. Next, the optimal operating frequency, at which the highest transmitter source power can be utilized without exceeding the exposure limits, is found to be around 2 MHz. Finally, a formulation is proposed to estimate the maximum obtainable power of WPT in a typical room scenario while adhering to the human body exposure compliance mandates.
Bayesian sparse channel estimation
NASA Astrophysics Data System (ADS)
Chen, Chulong; Zoltowski, Michael D.
2012-05-01
In Orthogonal Frequency Division Multiplexing (OFDM) systems, the technique used to estimate and track the time-varying multipath channel is critical to ensure reliable, high data rate communications. It is recognized that wireless channels often exhibit a sparse structure, especially for wideband and ultra-wideband systems. In order to exploit this sparse structure to reduce the number of pilot tones and increase the channel estimation quality, the application of compressed sensing to channel estimation is proposed. In this article, to make compressed channel estimation more feasible for practical applications, it is investigated from the perspective of Bayesian learning. Under the Bayesian learning framework, the large-scale compressed sensing problem, as well as the large time delay involved in estimating the doubly selective channel over multiple consecutive OFDM symbols, can be avoided. Simulation studies show a significant improvement in channel estimation MSE and less computing time compared to conventional compressed channel estimation techniques.
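To make the measurement model concrete, here is a minimal pilot-based sparse recovery sketch using orthogonal matching pursuit; the article's Bayesian-learning estimator is different, and all dimensions below are assumptions.

    import numpy as np

    def omp(A, y, k):
        """Greedy recovery of a k-sparse x from y = A x."""
        residual, support = y.copy(), []
        for _ in range(k):
            j = int(np.argmax(np.abs(A.conj().T @ residual)))
            support.append(j)
            x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ x_s
        x = np.zeros(A.shape[1], dtype=complex)
        x[support] = x_s
        return x

    n_fft, n_pilots, L, k = 256, 32, 64, 4           # assumed sizes
    rng = np.random.default_rng(3)
    pilots = np.sort(rng.choice(n_fft, n_pilots, replace=False))
    F = np.exp(-2j * np.pi * np.outer(pilots, np.arange(L)) / n_fft)
    h = np.zeros(L, dtype=complex)                   # k-sparse channel taps
    h[rng.choice(L, k, replace=False)] = (rng.standard_normal(k)
                                          + 1j * rng.standard_normal(k))
    noise = 0.01 * (rng.standard_normal(n_pilots)
                    + 1j * rng.standard_normal(n_pilots))
    y = F @ h + noise                                # pilot observations
    h_hat = omp(F, y, k)                             # sparse estimate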
Measuring sperm movement within the female reproductive tract using Fourier analysis.
Nicovich, Philip R; Macartney, Erin L; Whan, Renee M; Crean, Angela J
2015-02-01
The adaptive significance of variation in sperm phenotype is still largely unknown, in part due to the difficulties of observing and measuring sperm movement in its natural, selective environment (i.e., within the female reproductive tract). Computer-assisted sperm analysis systems allow objective and accurate measurement of sperm velocity, but rely on being able to track individual sperm, and are therefore unable to measure sperm movement in species where sperm move in trains or bundles. Here we describe a newly developed computational method for measuring sperm movement using Fourier analysis to estimate sperm tail beat frequency. High-speed time-lapse videos of sperm movement within the female tract of the neriid fly Telostylinus angusticollis were recorded, and a map of beat frequencies generated by converting the periodic signal of an intensity versus time trace at each pixel to the frequency domain using the Fourier transform. We were able to detect small decreases in sperm tail beat frequency over time, indicating the method is sensitive enough to identify consistent differences in sperm movement. Fourier analysis can be applied to a wide range of species and contexts, and should therefore facilitate novel exploration of the causes and consequences of variation in sperm movement.
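The per-pixel computation is compact enough to sketch: transform each pixel's intensity-versus-time trace and keep the dominant frequency, yielding a beat-frequency map. The array names and frame rate are assumptions.

    import numpy as np

    def beat_frequency_map(stack, fps):
        """stack: (n_frames, height, width) time-lapse intensity array."""
        x = stack - stack.mean(axis=0)            # remove DC at each pixel
        spec = np.abs(np.fft.rfft(x, axis=0))     # one-sided spectrum/pixel
        freqs = np.fft.rfftfreq(stack.shape[0], d=1.0 / fps)
        return freqs[np.argmax(spec[1:], axis=0) + 1]  # skip the DC bin

    # e.g. fmap = beat_frequency_map(video, fps=200); tracking
    # np.median(fmap) across successive time windows exposes slow
    # changes in tail beat frequency.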
NASA Astrophysics Data System (ADS)
Bellan, Diego; Pignari, Sergio A.
2016-07-01
This work deals with the statistical characterization of real-time digital measurement of the amplitude of harmonics affected by frequency instability. In modern power systems, both the presence of harmonics and frequency instability are well-known and widespread phenomena, mainly due to nonlinear loads and distributed generation, respectively. As a result, real-time monitoring of voltage/current frequency spectra is of paramount importance as far as power quality issues are addressed. Within this framework, a key point is that in many cases real-time continuous monitoring prevents the application of sophisticated algorithms to extract all the information from the digitized waveforms because of the required computational burden. In those cases only simple evaluations, such as a peak search of the discrete Fourier transform, are implemented. It is well known, however, that a slight change in waveform frequency results in a loss of sampling synchronism and uncertainty in the amplitude estimate. Of course, the impact of this phenomenon increases with the order of the harmonic to be measured. In this paper an approximate analytical approach is proposed in order to describe the statistical properties of the measured magnitude of harmonics affected by frequency instability. By providing a simplified description of the frequency behavior of the windows used against spectral leakage, analytical expressions for the mean value, variance, cumulative distribution function, and probability density function of the measured harmonic magnitude are derived in closed form as functions of the waveform frequency treated as a random variable.
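The basic effect is easy to reproduce numerically: the DFT-peak magnitude of a windowed tone drops as the tone drifts off the bin center (scalloping). A minimal sketch with a Hann window and assumed parameters:

    import numpy as np

    n, fs, bin_k = 1024, 1024.0, 100          # 1 Hz bins for simplicity
    win = np.hanning(n)
    cg = win.sum() / n                        # coherent gain of the window
    t = np.arange(n) / fs

    for delta in (0.0, 0.1, 0.25, 0.5):       # frequency offset in bins
        x = np.cos(2 * np.pi * (bin_k + delta) * t)
        mag = np.abs(np.fft.rfft(win * x)).max() / (n * cg / 2)
        print(f"offset {delta:.2f} bin -> peak magnitude {mag:.3f}")
        # ~1.000, 0.993, 0.960, 0.849: the bias grows with the offset,
        # and a fixed frequency error spans a larger fraction of a bin
        # for higher-order harmonics.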
An Estimate of the North Atlantic Basin Tropical Cyclone Activity for the 2011 Hurricane Season
NASA Technical Reports Server (NTRS)
Wilson, Robert M.
2011-01-01
Estimates are presented for the expected level of tropical cyclone activity for the 2011 North Atlantic Basin hurricane season. It is anticipated that the frequency of tropical cyclones for the North Atlantic Basin during the 2011 hurricane season will be near to above the post-1995 means. Based on the Poisson distribution of tropical cyclone frequencies for the current more active interval 1995-2010, one computes P(r) = 63.7% for the expected frequency of the number of tropical cyclones during the 2011 hurricane season to be 14 plus or minus 3; P(r) = 62.4% for the expected frequency of the number of hurricanes to be 8 plus or minus 2; P(r) = 79.3% for the expected frequency of the number of major hurricanes to be 3 plus or minus 2; and P(r) = 72.5% for the expected frequency of the number of strikes by a hurricane along the coastline of the United States to be 1 plus or minus 1. Because El Nino is not expected to recur during the 2011 hurricane season, clearly, the possibility exists that these seasonal frequencies could easily be exceeded. Also examined are the effects of the El Nino-Southern Oscillation phase and climatic change (global warming) on tropical cyclone seasonal frequencies, the variation of the seasonal centroid (latitude and longitude) location of tropical cyclone onsets, and the variation of the seasonal peak wind speed and lowest pressure for tropical cyclones.
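The quoted probabilities can be checked directly from the Poisson model; the 1995-2010 mean rates used below are assumed round figures for illustration, not the report's exact values.

    from scipy.stats import poisson

    cases = [("tropical cyclones", 14.5, 11, 17),   # 14 +/- 3
             ("hurricanes",         7.5,  6, 10),   # 8 +/- 2
             ("major hurricanes",   3.8,  1,  5),   # 3 +/- 2
             ("U.S. strikes",       1.8,  0,  2)]   # 1 +/- 1
    for label, lam, lo, hi in cases:
        p = poisson.cdf(hi, lam) - poisson.cdf(lo - 1, lam)
        print(f"P({lo} <= N <= {hi} | lambda={lam}) = {p:.1%}  ({label})")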
Characterization of Meta-Materials Using Computational Electromagnetic Methods
NASA Technical Reports Server (NTRS)
Deshpande, Manohar; Shin, Joon
2005-01-01
An efficient and powerful computational method is presented to synthesize a meta-material with specified electromagnetic properties. Using the periodicity of meta-materials, a Finite Element Method (FEM) is developed to estimate the reflection and transmission through the meta-material structure for normal plane wave incidence. For efficient computation of the reflection and transmission through a meta-material over a wide frequency band, a Finite-Difference Time-Domain (FDTD) approach is also developed. Using the Nicholson-Ross method and genetic algorithms, a robust procedure to extract the electromagnetic properties of a meta-material from the knowledge of its reflection and transmission coefficients is described. A few numerical examples are also presented to validate the present approach.
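The extraction step can be sketched for the simplest case, a homogeneous slab in free space: impedance and refractive index follow from S11 and S21, and the material parameters from their ratio and product. The naive complex-log branch choice below is exactly the ambiguity a robust procedure (e.g., the genetic algorithm mentioned above) must resolve; this is an illustration, not the paper's code.

    import numpy as np

    C0 = 2.998e8                                    # speed of light, m/s

    def extract_params(s11, s21, d, freq):
        """Effective eps_r, mu_r of a slab of thickness d from S-params."""
        k0 = 2 * np.pi * freq / C0                  # free-space wavenumber
        z = np.sqrt(((1 + s11) ** 2 - s21 ** 2) /
                    ((1 - s11) ** 2 - s21 ** 2))    # normalized impedance
        x = s21 / (1 - s11 * (z - 1) / (z + 1))     # = exp(1j*n*k0*d)
        n = np.log(x) / (1j * k0 * d)               # naive branch choice
        return n / z, n * z                         # eps_r, mu_r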
Weck, Philippe F; Kim, Eunja
2014-12-07
The structure of dehydrated schoepite, α-UO2(OH)2, was investigated using computational approaches that go beyond standard density functional theory and include van der Waals dispersion corrections (DFT-D). Thermal properties of α-UO2(OH)2 were also obtained from phonon frequencies calculated with density functional perturbation theory (DFPT) including van der Waals dispersion corrections. While the isobaric heat capacity computed from first principles reproduces available calorimetric data to within 5% up to 500 K, some entropy estimates based on calorimetric measurements for UO3·0.85H2O were found to overestimate the values computed in this study by up to 23%.
An 'unconditional-like' structure for the conditional estimator of odds ratio from 2 x 2 tables.
Hanley, James A; Miettinen, Olli S
2006-02-01
In the estimation of the odds ratio (OR), the conditional maximum-likelihood estimate (cMLE) is preferred to the more readily computed unconditional one (uMLE). However, the exact cMLE does not have a closed form to help divine it from the uMLE or to understand in what circumstances the difference between the two is appreciable. Here, the cMLE is shown to have the same 'ratio of cross-products' structure as its unconditional counterpart, but with two of the cell frequencies augmented, so as to shrink the unconditional estimator towards unity. The augmentation involves a factor, similar to the finite population correction, derived from the minimum of the marginal totals.
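For a concrete comparison, the sketch below computes the unconditional cross-product estimate and the exact conditional MLE by maximizing the noncentral hypergeometric likelihood of cell a given the margins (assumed cell labels a, b, c, d in the usual 2 x 2 layout):

    import numpy as np
    from scipy.optimize import minimize_scalar
    from scipy.special import gammaln

    def log_comb(n, k):
        return gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)

    def cmle_or(a, b, c, d):
        n1, n2, m1 = a + b, c + d, a + c            # margins
        lo, hi = max(0, m1 - n2), min(n1, m1)
        u = np.arange(lo, hi + 1)                   # support of cell a
        w = log_comb(n1, u) + log_comb(n2, m1 - u)
        wa = w[int(a - lo)]

        def neg_loglik(log_psi):                    # conditional likelihood
            return np.logaddexp.reduce(w + u * log_psi) - (wa + a * log_psi)

        res = minimize_scalar(neg_loglik, bounds=(-10, 10), method="bounded")
        return np.exp(res.x)

    a, b, c, d = 10, 5, 3, 12
    print((a * d) / (b * c))     # uMLE: 8.0
    print(cmle_or(a, b, c, d))   # cMLE: shrunk towards unity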
Peak-flow frequency relations and evaluation of the peak-flow gaging network in Nebraska
Soenksen, Philip J.; Miller, Lisa D.; Sharpe, Jennifer B.; Watton, Jason R.
1999-01-01
Estimates of peak-flow magnitude and frequency are required for the efficient design of structures that convey flood flows or occupy floodways, such as bridges, culverts, and roads. The U.S. Geological Survey, in cooperation with the Nebraska Department of Roads, conducted a study to update peak-flow frequency analyses for selected streamflow-gaging stations, develop a new set of peak-flow frequency relations for ungaged streams, and evaluate the peak-flow gaging-station network for Nebraska. Data from stations located in or within about 50 miles of Nebraska were analyzed using guidelines of the Interagency Advisory Committee on Water Data in Bulletin 17B. New generalized skew relations were developed for use in frequency analyses of unregulated streams. Thirty-three drainage-basin characteristics related to morphology, soils, and precipitation were quantified using a geographic information system, related computer programs, and digital spatial data. For unregulated streams, eight sets of regional regression equations relating drainage-basin to peak-flow characteristics were developed for seven regions of the state using a generalized least squares procedure. Two sets of regional peak-flow frequency equations were developed for basins with average soil permeability greater than 4 inches per hour, and six sets of equations were developed for specific geographic areas, usually based on drainage-basin boundaries. Standard errors of estimate for the 100-year frequency equations (1-percent probability) ranged from 12.1 to 63.8 percent. For regulated reaches of nine streams, graphs of peak flow for standard frequencies and distance upstream of the mouth were estimated. The regional networks of streamflow-gaging stations on unregulated streams were analyzed to evaluate how additional data might affect the average sampling errors of the newly developed peak-flow equations for the 100-year frequency occurrence. Results indicated that data from new stations, rather than more data from existing stations, probably would produce the greatest reduction in average sampling errors of the equations.
NASA Astrophysics Data System (ADS)
Kim, R. S.; Durand, M. T.; Li, D.; Baldo, E.; Margulis, S. A.; Dumont, M.; Morin, S.
2017-12-01
This paper presents a newly proposed snow depth retrieval approach for mountainous deep snow using airborne multifrequency passive microwave (PM) radiance observations. In contrast to previous snow depth estimation using satellite PM radiance assimilation, the newly proposed method utilizes single-flight observations and deploys snow hydrologic models. This method is promising since satellite-based retrieval methods have difficulty estimating snow depth because of their coarse resolution and computational effort. The approach consists of a particle filter using combinations of multiple PM frequencies and a multi-layer snow physical model (i.e., Crocus) to resolve melt-refreeze crusts. The method was applied over the NASA Cold Land Processes Experiment (CLPX) area in Colorado during 2002 and 2003. Results showed a significant improvement over the prior snow depth estimates and a capability to reduce the prior snow depth biases. When applying our snow depth retrieval algorithm using a combination of four PM frequencies (10.7, 18.7, 37.0 and 89.0 GHz), the RMSE values were reduced by 48% at the snow depth transect sites where forest density was less than 5%, despite deep snow conditions. The method displayed sensitivity to different combinations of frequencies, model stratigraphy (i.e., different numbers of layers in the snow physical model) and estimation methods (particle filter and Kalman filter). The prior RMSE values at the forest-covered areas were reduced by 37-42% even in the presence of forest cover.
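The particle-filter update at the core of the approach can be sketched generically: prior snow-depth particles are reweighted by how well forward-modeled brightness temperatures match the airborne multifrequency observations. Here tb_forward stands in for the snow-model plus radiative-transfer chain and, like the other names, is a hypothetical placeholder.

    import numpy as np

    def pf_update(depth_particles, tb_obs, tb_forward, sigma_tb=2.0):
        """depth_particles: (N,); tb_obs: (F,) for F PM frequencies."""
        tb_sim = np.array([tb_forward(d) for d in depth_particles])  # (N, F)
        resid = tb_sim - tb_obs
        logw = -0.5 * np.sum((resid / sigma_tb) ** 2, axis=1)
        w = np.exp(logw - logw.max())
        w /= w.sum()                               # normalized weights
        rng = np.random.default_rng(4)
        idx = rng.choice(len(w), size=len(w), p=w) # multinomial resampling
        return depth_particles[idx]                # posterior ensemble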
The effect of different methods to compute N on estimates of mixing in stratified flows
NASA Astrophysics Data System (ADS)
Fringer, Oliver; Arthur, Robert; Venayagamoorthy, Subhas; Koseff, Jeffrey
2017-11-01
The background stratification is typically well defined in idealized numerical models of stratified flows, although it is more difficult to define in observations. This may have important ramifications for estimates of mixing which rely on knowledge of the background stratification against which turbulence must work to mix the density field. Using direct numerical simulation data of breaking internal waves on slopes, we demonstrate a discrepancy in ocean mixing estimates depending on the method in which the background stratification is computed. Two common methods are employed to calculate the buoyancy frequency N, namely a three-dimensionally resorted density field (often used in numerical models) and a locally-resorted vertical density profile (often used in the field). We show that how N is calculated has a significant effect on the flux Richardson number Rf, which is often used to parameterize turbulent mixing, and the turbulence activity number Gi, which leads to errors when estimating the mixing efficiency using Gi-based parameterizations. Supported by ONR Grant N00014-08-1-0904 and LLNL Contract DE-AC52-07NA27344.
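The two definitions are contrasted in the sketch below for a density field rho(z, y, x) on a uniform grid with z increasing upward (names and conventions assumed):

    import numpy as np

    G, RHO0 = 9.81, 1000.0

    def n2_background(rho, dz, local=False):
        """Buoyancy frequency squared from a resorted background state."""
        if local:
            # resort each vertical profile separately (field practice)
            rho_prof = np.sort(rho, axis=0)[::-1]       # heaviest at bottom
            return -(G / RHO0) * np.gradient(rho_prof, dz, axis=0)
        # full 3-D resorting of the density field (model practice)
        rho_sorted = np.sort(rho.ravel())[::-1].reshape(rho.shape)
        rho_prof = rho_sorted.mean(axis=(1, 2))         # background profile
        return -(G / RHO0) * np.gradient(rho_prof, dz)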
Measuring rainwater content by radar using propagation differential phase shift
NASA Technical Reports Server (NTRS)
Jameson, A. R.
1994-01-01
While radars measure several quantities closely coupled to the rainfall rate, for frequencies less than 15 GHz, estimates of the rainwater content W are traditionally computed from the radar reflectivity factor Z or the rate of attenuation A--quantities only weakly related to W. Consequently, instantaneous point estimates of W using Z and A are often erroneous. A more natural, alternative parameter for estimating W at these frequencies is the specific polarization propagation differential phase shift phi(sub DP), which is a measure of the change in the difference between phases of vertically (V) and horizontally (H) polarized waves with increasing distance from a radar. It is now well known that W is nearly linearly related to phi(sub DP) divided by (1 - r̄), where r̄ is the mass-weighted mean axis ratio of the raindrops. Unfortunately, such relations are not widely used, in part because measurements of phi(sub DP) are scarce but also because one must determine r̄. In this work it is shown that this parameter can be estimated using the differential reflectivity (Z(sub H)/Z(sub V)) at 3 GHz. An alternative technique is suggested for higher frequencies when the differential reflectivity becomes degraded by attenuation. While theory indicates that it should be possible using phi(sub DP) to estimate W quite accurately, measurement errors increase the uncertainty to +/- 18%-35% depending on r̄.
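The estimator itself is a one-liner once phi(sub DP) and r̄ are in hand; the proportionality constant below is a placeholder to be calibrated from drop-shape and scattering models and is not taken from the paper.

    def rainwater_content(phi_dp_deg_km, r_bar, c_w=1.0):
        """W ~ c_w * phi_DP / (1 - r_bar); c_w is a placeholder constant."""
        return c_w * phi_dp_deg_km / (1.0 - r_bar)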
NASA Astrophysics Data System (ADS)
Ling, Jun
Achieving reliable underwater acoustic communications (UAC) has long been recognized as a challenging problem owing to the scarce bandwidth available and the reverberant spread in both time and frequency domains. To pursue high data rates, we consider a multi-input multi-output (MIMO) UAC system, and our focus is placed on two main issues regarding a MIMO UAC system: (1) channel estimation, which involves the design of the training sequences and the development of a reliable channel estimation algorithm, and (2) symbol detection, which requires interference cancelation schemes due to simultaneous transmission from multiple transducers. To enhance channel estimation performance, we present a cyclic approach for designing training sequences with good auto- and cross-correlation properties, and a channel estimation algorithm called the iterative adaptive approach (IAA). Sparse channel estimates can be obtained by combining IAA with the Bayesian information criterion (BIC). Moreover, we present sparse learning via iterative minimization (SLIM) and demonstrate that SLIM gives similar performance to IAA but at a much lower computational cost. Furthermore, an extension of the SLIM algorithm is introduced to estimate the sparse and frequency modulated acoustic channels. The extended algorithm is referred to as generalization of SLIM (GoSLIM). Regarding symbol detection, a linear minimum mean-squared error based detection scheme, called RELAX-BLAST, which is a combination of vertical Bell Labs layered space-time (V-BLAST) algorithm and the cyclic principle of the RELAX algorithm, is presented and it is shown that RELAX-BLAST outperforms V-BLAST. We show that RELAX-BLAST can be implemented efficiently by making use of the conjugate gradient method and diagonalization properties of circulant matrices. This fast implementation approach requires only simple fast Fourier transform operations and facilitates parallel implementations. The effectiveness of the proposed MIMO schemes is verified by both computer simulations and experimental results obtained by analyzing the measurements acquired in multiple in-water experiments.
Smeared spectrum jamming suppression based on generalized S transform and threshold segmentation
NASA Astrophysics Data System (ADS)
Li, Xin; Wang, Chunyang; Tan, Ming; Fu, Xiaolong
2018-04-01
Smeared Spectrum (SMSP) jamming is effective in countering linear frequency modulation (LFM) radar. Based on the difference between the time-frequency distributions of the jamming and the echo, a jamming suppression method using the Generalized S transform (GST) and threshold segmentation is proposed. The sub-pulse period is first estimated from the autocorrelation function. Second, the time-frequency image and the related gray-scale image are obtained with the GST. Finally, the Tsallis cross entropy is used to compute the optimized segmentation threshold, and the jamming suppression filter is constructed based on that threshold. Simulation results show that the proposed method performs well in suppressing the false targets produced by SMSP jamming.
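Threshold selection of this kind can be illustrated with a standard Tsallis-entropy criterion on the normalized gray-scale time-frequency image (a simplified stand-in for the paper's Tsallis cross-entropy criterion); pixels above the returned threshold would be masked as jamming.

    import numpy as np

    def tsallis_threshold(img, q=0.8, bins=256):
        """img: gray-scale array scaled to [0, 1]."""
        hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
        p = hist / hist.sum()
        best_t, best_s = 0, -np.inf
        for t in range(1, bins):
            pa, pb = p[:t].sum(), p[t:].sum()
            if pa == 0 or pb == 0:
                continue
            sa = (1 - np.sum((p[:t] / pa) ** q)) / (q - 1)
            sb = (1 - np.sum((p[t:] / pb) ** q)) / (q - 1)
            s = sa + sb + (1 - q) * sa * sb   # pseudo-additive total entropy
            if s > best_s:
                best_t, best_s = t, s
        return best_t / bins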
Eash, David A.; Barnes, Kimberlee K.; O'Shea, Padraic S.
2016-09-19
A statewide study was led to develop regression equations for estimating three selected spring and three selected fall low-flow frequency statistics for ungaged stream sites in Iowa. The estimation equations developed for the six low-flow frequency statistics include spring (April through June) 1-, 7-, and 30-day mean low flows for a recurrence interval of 10 years and fall (October through December) 1-, 7-, and 30-day mean low flows for a recurrence interval of 10 years. Estimates of the three selected spring statistics are provided for 241 U.S. Geological Survey continuous-record streamgages, and estimates of the three selected fall statistics are provided for 238 of these streamgages, using data through June 2014. Because only 9 years of fall streamflow record were available, three streamgages included in the development of the spring regression equations were not included in the development of the fall regression equations. Because of regulation, diversion, or urbanization, 30 of the 241 streamgages were not included in the development of the regression equations. The study area includes Iowa and adjacent areas within 50 miles of the Iowa border. Because trend analyses indicated statistically significant positive trends when considering the period of record for most of the streamgages, the longest, most recent period of record without a significant trend was determined for each streamgage for use in the study. Geographic information system software was used to measure 63 selected basin characteristics for each of the 211streamgages used to develop the regional regression equations. The study area was divided into three low-flow regions that were defined in a previous study for the development of regional regression equations.Because several streamgages included in the development of regional regression equations have estimates of zero flow calculated from observed streamflow for selected spring and fall low-flow frequency statistics, the final equations for the three low-flow regions were developed using two types of regression analyses—left-censored and generalized-least-squares regression analyses. A total of 211 streamgages were included in the development of nine spring regression equations—three equations for each of the three low-flow regions. A total of 208 streamgages were included in the development of nine fall regression equations—three equations for each of the three low-flow regions. A censoring threshold was used to develop 15 left-censored regression equations to estimate the three fall low-flow frequency statistics for each of the three low-flow regions and to estimate the three spring low-flow frequency statistics for the southern and northwest regions. For the northeast region, generalized-least-squares regression was used to develop three equations to estimate the three spring low-flow frequency statistics. For the northeast region, average standard errors of prediction range from 32.4 to 48.4 percent for the spring equations and average standard errors of estimate range from 56.4 to 73.8 percent for the fall equations. For the northwest region, average standard errors of estimate range from 58.9 to 62.1 percent for the spring equations and from 83.2 to 109.4 percent for the fall equations. 
For the southern region, average standard errors of estimate range from 43.2 to 64.0 percent for the spring equations and from 78.1 to 78.7 percent for the fall equations. The regression equations are applicable only to stream sites in Iowa with low flows not substantially affected by regulation, diversion, or urbanization and with basin characteristics within the range of those used to develop the equations. The regression equations will be implemented within the U.S. Geological Survey StreamStats Web-based geographic information system application. StreamStats allows users to click on any ungaged stream site and compute estimates of the six selected spring and fall low-flow statistics; in addition, 90-percent prediction intervals and the measured basin characteristics for the ungaged site are provided. StreamStats also allows users to click on any Iowa streamgage to obtain computed estimates for the six selected spring and fall low-flow statistics.
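For readers unfamiliar with how such regional equations are applied, the sketch below shows the typical USGS log-linear form with invented coefficients and basin characteristics; the report's actual equations, variables, and applicability limits differ by region and statistic.

```python
import math

# Hypothetical illustration of a regional low-flow regression equation of the
# usual log-linear form: log10(Q) = b0 + b1*log10(DA) + b2*X. All numbers
# below are invented for illustration only.
b0, b1, b2 = -1.5, 1.05, 0.02   # hypothetical regression coefficients
DA = 150.0                      # drainage area [mi^2] (example basin characteristic)
X = 12.0                        # a second, hypothetical basin characteristic

log10_Q = b0 + b1 * math.log10(DA) + b2 * X
Q = 10 ** log10_Q               # estimated low-flow statistic [ft^3/s]
print(f"estimated low-flow statistic: {Q:.2f} ft^3/s")
```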
NASA Astrophysics Data System (ADS)
Kinefuchi, K.; Funaki, I.; Shimada, T.; Abe, T.
2012-10-01
Under certain conditions during rocket flights, ionized exhaust plumes from solid rocket motors may interfere with radio frequency transmissions. To understand the relevant physical processes involved in this phenomenon and establish a prediction process for in-flight attenuation levels, we attempted to measure microwave attenuation caused by rocket exhaust plumes in a sea-level static firing test for a full-scale solid propellant rocket motor. The microwave attenuation level was calculated by a coupling simulation of the inviscid-frozen-flow computational fluid dynamics of an exhaust plume and detailed analysis of microwave transmissions by applying a frequency-dependent finite-difference time-domain method with the Drude dispersion model. The calculated microwave attenuation level agreed well with the experimental results, except in the case of interference downstream of the Mach disk in the exhaust plume. It was concluded that the coupling estimation method based on the physics of the frozen plasma flow with Drude dispersion would be suitable for actual flight conditions, although the mixing and afterburning in the plume should be considered depending on the flow condition.
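A minimal sketch of the Drude dispersion model named above is given below; the electron density, collision frequency, and transmission frequency are assumptions chosen for illustration, not values from the firing test.

```python
import numpy as np

# Drude model for the ionized plume: eps(w) = 1 - wp^2 / (w^2 - 1j*w*nu),
# with wp the plasma angular frequency and nu the electron collision
# frequency. Plume parameters below are hypothetical.
e, me, eps0 = 1.602e-19, 9.109e-31, 8.854e-12
ne = 1e17                  # electron number density [m^-3] (assumed)
nu = 5e9                   # collision frequency [1/s] (assumed)
wp = np.sqrt(ne * e**2 / (me * eps0))    # plasma angular frequency

f = 2.3e9                  # transmission frequency [Hz] (assumed S-band)
w = 2 * np.pi * f
eps = 1 - wp**2 / (w**2 - 1j * w * nu)   # complex relative permittivity

# attenuation constant from the complex refractive index n = sqrt(eps)
n = np.sqrt(eps)
alpha = w / 3e8 * abs(n.imag)            # [Np/m]
print(f"attenuation: {8.686 * alpha:.1f} dB/m")
```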
NASA Technical Reports Server (NTRS)
Huang, Xinchuan; Taylor, Peter R.; Lee, Timothy J.
2011-01-01
High levels of theory have been used to compute quartic force fields (QFFs) for the cyclic and linear forms of the C3H3+ molecular cation, referred to as c-C3H3+ and l-C3H3+. Specifically, the singles and doubles coupled-cluster method that includes a perturbational estimate of connected triple excitations, CCSD(T), has been used in conjunction with extrapolation to the one-particle basis set limit, and corrections for scalar relativity and core correlation have been included. The QFFs have been used to compute highly accurate fundamental vibrational frequencies and other spectroscopic constants using both vibrational 2nd-order perturbation theory and variational methods to solve the nuclear Schroedinger equation. Agreement between our best computed fundamental vibrational frequencies and recent infrared photodissociation experiments is reasonable for most bands, but there are a few exceptions. Possible sources for the discrepancies are discussed. We determine the energy difference between the cyclic and linear forms of C3H3+, obtaining 27.9 kcal/mol at 0 K, which should be the most reliable available. It is expected that the fundamental vibrational frequencies and spectroscopic constants presented here for c-C3H3+ and l-C3H3+ are the most reliable available for the free gas-phase species, and it is hoped that these will be useful in the assignment of future high-resolution laboratory experiments or astronomical observations.
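The link between force constants and vibrational frequencies that a QFF encodes can be seen on a single harmonic oscillator; this is only a scale check, since a real QFF treatment couples many modes and adds the cubic and quartic constants handled by VPT2 or variational methods.

```python
import math

# Toy scale check: for one harmonic oscillator the wavenumber is
# nu~ = sqrt(k/mu) / (2*pi*c).
c = 2.998e10                                  # speed of light [cm/s]
k = 500.0                                     # force constant [N/m] (assumed, C-H stretch scale)
amu = 1.6605e-27
mu = (12.0 * 1.008) / (12.0 + 1.008) * amu    # reduced mass of a C-H pair [kg]

nu_tilde = math.sqrt(k / mu) / (2 * math.pi * c)
print(f"harmonic wavenumber: {nu_tilde:.0f} cm^-1")   # ~3000 cm^-1
```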
The effect of beat frequency on eye movements during free viewing.
Maróti, Emese; Knakker, Balázs; Vidnyánszky, Zoltán; Weiss, Béla
2017-02-01
External periodic stimuli entrain brain oscillations and affect perception and attention. It has been shown that background music can change oculomotor behavior and facilitate detection of visual objects occurring on the musical beat. However, whether musical beats in different tempi modulate information sampling differently during natural viewing remains to be explored. Here we addressed this question by investigating how listening to naturalistic drum grooves in two different tempi affects eye movements of participants viewing natural scenes on a computer screen. We found that the beat frequency of the drum grooves modulated the rate of eye movements: fixation durations were increased at the lower beat frequency (1.7 Hz) as compared to the higher beat frequency (2.4 Hz) and no music conditions. Correspondingly, estimated visual sampling frequency decreased as fixation durations increased with lower beat frequency. These results imply that slow musical beats can retard sampling of visual information during natural viewing by increasing fixation durations. Copyright © 2016 Elsevier Ltd. All rights reserved.
Watson, Kara M.; McHugh, Amy R.
2014-01-01
Regional regression equations were developed for estimating monthly flow-duration and monthly low-flow frequency statistics for ungaged streams in Coastal Plain and non-coastal regions of New Jersey for baseline and current land- and water-use conditions. The equations were developed to estimate 87 different streamflow statistics, which include the monthly 99-, 90-, 85-, 75-, 50-, and 25-percentile flow-durations of the minimum 1-day daily flow; the August–September 99-, 90-, and 75-percentile minimum 1-day daily flow; and the monthly 7-day, 10-year (M7D10Y) low-flow frequency. These 87 streamflow statistics were computed for 41 continuous-record streamflow-gaging stations (streamgages) with 20 or more years of record and 167 low-flow partial-record stations in New Jersey with 10 or more streamflow measurements. The regression analyses used to develop equations to estimate selected streamflow statistics were performed by testing the relation between flow-duration statistics and low-flow frequency statistics for 32 basin characteristics (physical characteristics, land use, surficial geology, and climate) at the 41 streamgages and 167 low-flow partial-record stations. The regression analyses determined drainage area, soil permeability, average April precipitation, average June precipitation, and percent storage (water bodies and wetlands) were the significant explanatory variables for estimating the selected flow-duration and low-flow frequency statistics. Streamflow estimates were computed for two land- and water-use conditions in New Jersey—land- and water-use during the baseline period of record (defined as the years a streamgage had little to no change in development and water use) and current land- and water-use conditions (1989–2008)—for each selected station using data collected through water year 2008. The baseline period of record is representative of a period when the basin was unaffected by change in development. The current period is representative of the increased development of the last 20 years (1989–2008). The two different land- and water-use conditions were used as surrogates for development to determine whether there have been changes in low-flow statistics as a result of changes in development over time. The State was divided into two low-flow regression regions, the Coastal Plain and the non-coastal region, in order to improve the accuracy of the regression equations. The left-censored parametric survival regression method was used for the analyses to account for streamgages and partial-record stations that had zero flow values for some of the statistics. The average standard error of estimate for the 348 regression equations ranged from 16 to 340 percent. These regression equations and basin characteristics are presented in the U.S. Geological Survey (USGS) StreamStats Web-based geographic information system application. This tool allows users to click on an ungaged site on a stream in New Jersey and get the estimated flow-duration and low-flow frequency statistics. Additionally, the user can click on a streamgage or partial-record station and get the “at-site” streamflow statistics. The low-flow characteristics of a stream ultimately affect the use of the stream by humans. Specific information on the low-flow characteristics of streams is essential to water managers who deal with problems related to municipal and industrial water supply, fish and wildlife conservation, and dilution of wastewater.
Estimation of the mechanical properties of the eye through the study of its vibrational modes
Aloy, M Á; Adsuara, J E; Cerdá-Durán, P; Obergaulinger, M; Esteve-Taboada, J J; Ferrer-Blasco, T; Montés-Micó, R
2017-01-01
Measuring the eye's mechanical properties in vivo and with minimally invasive techniques can be the key for individualized solutions to a number of eye pathologies. The development of such techniques largely relies on computational modelling of the eyeball, and it optimally requires the synergic interplay between experimentation and numerical simulation. In astrophysics and geophysics, the remote measurement of structural properties of the systems of their realm is performed on the basis of (helio-)seismic techniques. As a biomechanical system, the eyeball possesses normal vibrational modes encompassing rich information about its structure and mechanical properties. However, the integral analysis of the eyeball vibrational modes has not been performed yet. Here we develop a new finite difference method to compute both the spheroidal and, especially, the toroidal eigenfrequencies of the human eye. Using this numerical model, we show that the vibrational eigenfrequencies of the human eye fall in the interval 100 Hz–10 MHz. We find that compressible vibrational modes may leave a trace in high-frequency changes of the intraocular pressure, while incompressible normal modes could be registered by analyzing the scattering pattern that the motions of the vitreous humour leave on the retina. Existing contact lenses with embedded devices operating at high sampling frequency could be used to register the microfluctuations of the eyeball shape we obtain. We advance that an inverse problem to obtain the mechanical properties of a given eye (e.g., Young's modulus, Poisson ratio) by measuring its normal frequencies is doable. These measurements can be done using non-invasive techniques, opening very interesting perspectives to estimate the mechanical properties of eyes in vivo. Future research might relate various ocular pathologies with anomalies in measured vibrational frequencies of the eye. PMID:28922351
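A toy version of the finite-difference eigenfrequency idea described above is sketched below, using a 1D vibrating string as a stand-in for the spheroidal/toroidal operators of the eyeball model; the grid size, domain, and wave speed are illustrative assumptions.

```python
import numpy as np

# Discretize a 1D wave operator with fixed ends and read off normal-mode
# frequencies as eigenvalues of -c^2 * D2. Values are illustrative only.
N = 200                       # interior grid points
L = 0.024                     # domain size [m], roughly an eyeball diameter
c = 1540.0                    # wave speed [m/s] (soft-tissue scale, assumed)
h = L / (N + 1)

D2 = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
      + np.diag(np.ones(N - 1), -1)) / h**2
eigvals = np.linalg.eigvalsh(-c**2 * D2)      # omega^2 values, ascending
freqs = np.sqrt(eigvals) / (2 * np.pi)
print("first three mode frequencies [Hz]:", np.round(freqs[:3]))
```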
The functional significance of velocity storage and its dependence on gravity.
Laurens, Jean; Angelaki, Dora E
2011-05-01
Research in the vestibular field has revealed the existence of a central process, called 'velocity storage', that is activated by both visual and vestibular rotation cues and is modified by gravity, but whose functional relevance during natural motion has often been questioned. In this review, we explore spatial orientation in the context of a Bayesian model of vestibular information processing. In this framework, deficiencies/ambiguities in the peripheral vestibular sensors are compensated for by central processing to more accurately estimate rotation velocity, orientation relative to gravity, and inertial motion. First, an inverse model of semicircular canal dynamics is used to reconstruct rotation velocity by integrating canal signals over time. However, its low-frequency bandwidth is limited to avoid accumulation of noise in the integrator. A second internal model uses this reconstructed rotation velocity to compute an internal estimate of tilt and inertial acceleration. The bandwidth of this second internal model is also restricted at low frequencies to avoid noise accumulation and drift of the tilt/translation estimator over time. As a result, low-frequency translation can be erroneously misinterpreted as tilt. The time constants of these two integrators (internal models) can be conceptualized as two Bayesian priors of zero rotation velocity and zero linear acceleration, respectively. The model replicates empirical observations like 'velocity storage' and 'frequency segregation' and explains spatial orientation (e.g., 'somatogravic') illusions. Importantly, the functional significance of this network, including velocity storage, is found during short-lasting, natural head movements, rather than at low frequencies with which it has been traditionally studied.
Monopole and dipole estimation for multi-frequency sky maps by linear regression
NASA Astrophysics Data System (ADS)
Wehus, I. K.; Fuskeland, U.; Eriksen, H. K.; Banday, A. J.; Dickinson, C.; Ghosh, T.; Górski, K. M.; Lawrence, C. R.; Leahy, J. P.; Maino, D.; Reich, P.; Reich, W.
2017-01-01
We describe a simple but efficient method for deriving a consistent set of monopole and dipole corrections for multi-frequency sky map data sets, allowing robust parametric component separation with the same data set. The computational core of this method is linear regression between pairs of frequency maps, often called T-T plots. Individual contributions from monopole and dipole terms are determined by performing the regression locally in patches on the sky, while the degeneracy between different frequencies is lifted whenever the dominant foreground component exhibits a significant spatial spectral index variation. Based on this method, we present two different, but each internally consistent, sets of monopole and dipole coefficients for the nine-year WMAP, Planck 2013, SFD 100 μm, Haslam 408 MHz and Reich & Reich 1420 MHz maps. The two sets have been derived with different analysis assumptions and data selection, and provide an estimate of residual systematic uncertainties. In general, our values are in good agreement with previously published results. Among the most notable results are a relative dipole between the WMAP and Planck experiments of 10–15 μK (depending on frequency), an estimate of the 408 MHz map monopole of 8.9 ± 1.3 K, and a non-zero dipole in the 1420 MHz map of 0.15 ± 0.03 K pointing towards Galactic coordinates (l,b) = (308°,-36°) ± 14°. These values represent the sum of any instrumental and data processing offsets, as well as any Galactic or extra-Galactic component that is spectrally uniform over the full sky.
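The computational core, regression between pairs of frequency maps, can be sketched in a few lines; the maps below are simulated stand-ins, and the intercept recovers the offset combination o_b - slope*o_a, which is what constrains the relative monopole.

```python
import numpy as np

# Toy T-T plot: two "frequency maps" share a common foreground but have
# different gains and offsets. Regressing one against the other over a patch
# gives slope = gain ratio and intercept = o_b - slope * o_a.
rng = np.random.default_rng(0)
fg = rng.gamma(2.0, 50.0, size=2000)                 # common sky signal (toy)
map_a = 1.0 * fg + 10.0 + rng.normal(0, 5, 2000)     # offset o_a = +10
map_b = 0.7 * fg - 4.0 + rng.normal(0, 5, 2000)      # offset o_b = -4

slope, intercept = np.polyfit(map_a, map_b, 1)
print(f"T-T slope (spectral ratio): {slope:.3f}")           # ~0.7
print(f"T-T intercept (o_b - slope*o_a): {intercept:.1f}")  # ~ -11
```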
Modulation frequency as a cue for auditory speed perception.
Senna, Irene; Parise, Cesare V; Ernst, Marc O
2017-07-12
Unlike vision, the mechanisms underlying auditory motion perception are poorly understood. Here we describe an auditory motion illusion revealing a novel cue to auditory speed perception: the temporal frequency of amplitude modulation (AM-frequency), typical for rattling sounds. Naturally, corrugated objects sliding across each other generate rattling sounds whose AM-frequency tends to directly correlate with speed. We found that AM-frequency modulates auditory speed perception in a highly systematic fashion: moving sounds with higher AM-frequency are perceived as moving faster than sounds with lower AM-frequency. Even more interestingly, sounds with higher AM-frequency also induce stronger motion aftereffects. This reveals the existence of specialized neural mechanisms for auditory motion perception, which are sensitive to AM-frequency. Thus, in spatial hearing, the brain successfully capitalizes on the AM-frequency of rattling sounds to estimate the speed of moving objects. This tightly parallels previous findings in motion vision, where spatio-temporal frequency of moving displays systematically affects both speed perception and the magnitude of the motion aftereffects. Such an analogy with vision suggests that motion detection may rely on canonical computations, with similar neural mechanisms shared across the different modalities. © 2017 The Author(s).
Peak-flow characteristics of Virginia streams
Austin, Samuel H.; Krstolic, Jennifer L.; Wiegand, Ute
2011-01-01
Peak-flow annual exceedance probabilities, also called probability-percent chance flow estimates, and regional regression equations are provided describing the peak-flow characteristics of Virginia streams. Statistical methods are used to evaluate peak-flow data. Analysis of Virginia peak-flow data collected from 1895 through 2007 is summarized. Methods are provided for estimating unregulated peak flow of gaged and ungaged streams. Station peak-flow characteristics identified by fitting the logarithms of annual peak flows to a Log Pearson Type III frequency distribution yield annual exceedance probabilities of 0.5, 0.4292, 0.2, 0.1, 0.04, 0.02, 0.01, 0.005, and 0.002 for 476 streamgaging stations. Stream basin characteristics computed using spatial data and a geographic information system are used as explanatory variables in regional regression model equations for six physiographic regions to estimate regional annual exceedance probabilities at gaged and ungaged sites. Weighted peak-flow values that combine annual exceedance probabilities computed from gaging station data and from regional regression equations provide improved peak-flow estimates. Text, figures, and lists are provided summarizing selected peak-flow sites, delineated physiographic regions, peak-flow estimates, basin characteristics, regional regression model equations, error estimates, definitions, data sources, and candidate regression model equations. This study supersedes previous studies of peak flows in Virginia.
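A hedged sketch of the station-level fitting step described above (log-Pearson Type III fitted by the method of moments on log flows) is given below; the annual peaks are synthetic, not Virginia data.

```python
import numpy as np
from scipy import stats

# Fit LP3 to annual peak flows via moments of log10 flows, then read off
# quantiles at selected annual exceedance probabilities (AEPs).
peaks = np.array([1200., 2300., 980., 4100., 1750., 3300., 2650.,
                  1500., 5200., 2900., 2100., 3800., 1650., 2450.])
logq = np.log10(peaks)
mean, std = logq.mean(), logq.std(ddof=1)
skew = stats.skew(logq, bias=False)

for aep in (0.5, 0.1, 0.01):
    # Pearson III quantile of the log flows at non-exceedance prob 1 - aep
    q_log = stats.pearson3.ppf(1 - aep, skew, loc=mean, scale=std)
    print(f"AEP {aep:>5}: Q = {10**q_log:,.0f} ft^3/s")
```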
Submillimeter, millimeter, and microwave spectral line catalogue
NASA Technical Reports Server (NTRS)
Poynter, R. L.; Pickett, H. M.
1984-01-01
This report describes a computer accessible catalogue of submillimeter, millimeter, and microwave spectral lines in the frequency range between 0 and 10000 GHz (i.e., wavelengths longer than 30 micrometers). The catalogue can be used as a planning guide or as an aid in the identification and analysis of observed spectral lines. The information listed for each spectral line includes the frequency and its estimated error, the intensity, lower state energy, and quantum number assignment. The catalogue has been constructed using theoretical least squares fits of published spectral lines to accepted molecular models. The associated predictions and their estimated errors are based upon the resultant fitted parameters and their covariances. Future versions of this catalogue will add more atoms and molecules and update the present listings (151 species) as new data appear. The catalogue is available from the authors as a magnetic tape recorded in card images and as a set of microfiche records.
Submillimeter, millimeter, and microwave spectral line catalogue
NASA Technical Reports Server (NTRS)
Poynter, R. L.; Pickett, H. M.
1981-01-01
A computer accessible catalogue of submillimeter, millimeter, and microwave spectral lines in the frequency range between 0 and 3000 GHz (i.e., wavelengths longer than 100 micrometers) is presented which can be used as a planning guide or as an aid in the identification and analysis of observed spectral lines. The information listed for each spectral line includes the frequency and its estimated error, the intensity, lower state energy, and quantum number assignment. The catalogue was constructed by using theoretical least squares fits of published spectral lines to accepted molecular models. The associated predictions and their estimated errors are based upon the resultant fitted parameters and their covariances. Future versions of this catalogue will add more atoms and molecules and update the present listings (133 species) as new data appear. The catalogue is available as a magnetic tape recorded in card images and as a set of microfiche records.
Generalized Processing Tree Models: Jointly Modeling Discrete and Continuous Variables.
Heck, Daniel W; Erdfelder, Edgar; Kieslich, Pascal J
2018-05-24
Multinomial processing tree models assume that discrete cognitive states determine observed response frequencies. Generalized processing tree (GPT) models extend this conceptual framework to continuous variables such as response times, process-tracing measures, or neurophysiological variables. GPT models assume finite-mixture distributions, with weights determined by a processing tree structure, and continuous components modeled by parameterized distributions such as Gaussians with separate or shared parameters across states. We discuss identifiability, parameter estimation, model testing, a modeling syntax, and the improved precision of GPT estimates. Finally, a GPT version of the feature comparison model of semantic categorization is applied to computer-mouse trajectories.
Toroidal transformer design program with application to inverter circuitry
NASA Technical Reports Server (NTRS)
Dayton, J. A., Jr.
1972-01-01
Estimates of temperature, weight, efficiency, regulation, and final dimensions are included in the output of the computer program for the design of transformers for use in the basic parallel inverter. The program, written in FORTRAN 4, selects a tape wound toroidal magnetic core and, taking temperature, materials, core geometry, skin depth, and ohmic losses into account, chooses the appropriate wire sizes and number of turns for the center tapped primary and single secondary coils. Using the program, 2- and 4-kilovolt-ampere transformers are designed for frequencies from 200 to 3200 Hz and the efficiency of a basic transistor inverter is estimated.
NASA Astrophysics Data System (ADS)
Zhang, Y.; Paulson, K. V.
For audio-frequency magnetotelluric surveys where the signals are lightning-stroke transients, the conventional Fourier transform method often fails to produce a high quality impedance tensor. An alternative approach is to use the wavelet transform method, which is capable of localizing target information simultaneously in both the temporal and frequency domains. Unlike Fourier analysis, which yields an average amplitude and phase, the wavelet transform produces an instantaneous estimate of the amplitude and phase of a signal. In this paper a complex well-localized wavelet, the Morlet wavelet, has been used to transform and analyze audio-frequency magnetotelluric data. With the Morlet wavelet, the magnetotelluric impedance tensor can be computed directly in the wavelet transform domain. The lightning-stroke transients are easily identified on the dilation-translation plane. By choosing those wavelet transform values where the signals are located, an estimate of the impedance tensor with a higher signal-to-noise ratio can be obtained. In a test using real data, the wavelet transform showed a significant improvement in the signal-to-noise ratio over the conventional Fourier transform.
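A minimal sketch of localizing a transient with a complex Morlet wavelet, in the spirit of the processing described above, follows; the signal is a synthetic damped sinusoid in noise, and the impedance-tensor step is omitted.

```python
import numpy as np

# Synthetic "sferic": a damped 300 Hz burst at t = 0.4 s buried in noise.
fs = 4096.0
t = np.arange(0, 1.0, 1 / fs)
sig = np.random.default_rng(1).normal(0, 0.3, t.size)
sig += (t > 0.4) * np.exp(-(t - 0.4) * 40) * np.sin(2 * np.pi * 300 * (t - 0.4))

def morlet_cwt(x, fs, freqs, w0=6.0):
    """Complex Morlet CWT evaluated by direct convolution at given frequencies."""
    out = np.empty((len(freqs), x.size), complex)
    for i, f in enumerate(freqs):
        s = w0 * fs / (2 * np.pi * f)          # scale in samples
        n = int(10 * s) | 1                    # odd, symmetric support
        u = (np.arange(n) - n // 2) / s
        psi = np.pi**-0.25 * np.exp(1j * w0 * u - u**2 / 2) / np.sqrt(s)
        out[i] = np.convolve(x, np.conj(psi[::-1]), mode="same")
    return out

freqs = np.linspace(100, 600, 26)
W = morlet_cwt(sig, fs, freqs)
i, j = np.unravel_index(np.argmax(np.abs(W)), W.shape)
print(f"transient localized near t = {t[j]:.3f} s, f = {freqs[i]:.0f} Hz")
```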
Loss Factor Estimation Using the Impulse Response Decay Method on a Stiffened Structure
NASA Technical Reports Server (NTRS)
Cabell, Randolph; Schiller, Noah; Allen, Albert; Moeller, Mark
2009-01-01
High-frequency vibroacoustic modeling is typically performed using energy-based techniques such as Statistical Energy Analysis (SEA). Energy models require an estimate of the internal damping loss factor. Unfortunately, the loss factor is difficult to estimate analytically, and experimental methods such as the power injection method can require extensive measurements over the structure of interest. This paper discusses the implications of estimating damping loss factors using the impulse response decay method (IRDM) from a limited set of response measurements. An automated procedure for implementing IRDM is described and then evaluated using data from a finite element model of a stiffened, curved panel. Estimated loss factors are compared with loss factors computed using a power injection method and a manual curve fit. The paper discusses the sensitivity of the IRDM loss factor estimates to damping of connected subsystems and the number and location of points in the measurement ensemble.
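The decay-rate step of IRDM can be sketched on a synthetic single-mode response, as below; the conversion eta = 2.2/(f*T60) and the -5 to -25 dB fitting range are common conventions assumed here, not details taken from the paper.

```python
import numpy as np

# Synthesize a band response with known loss factor, build the Schroeder-style
# backward-integrated energy decay curve, fit its slope, and recover eta.
fs = 8192.0
f_band = 500.0                       # band center frequency [Hz]
eta_true = 0.02                      # loss factor used to synthesize the decay
t = np.arange(0, 2.0, 1 / fs)
h = np.exp(-np.pi * f_band * eta_true * t) * np.sin(2 * np.pi * f_band * t)

edc = np.cumsum(h[::-1] ** 2)[::-1]  # backward integration of squared response
db = 10 * np.log10(edc / edc[0])

mask = (db < -5) & (db > -25)        # fit the -5 to -25 dB decay range
slope, _ = np.polyfit(t[mask], db[mask], 1)   # [dB/s]
T60 = -60.0 / slope
eta_est = 2.2 / (f_band * T60)
print(f"estimated loss factor: {eta_est:.4f} (true {eta_true})")
```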
Szyperski, Piotr D
2018-06-01
The purpose of this research was to evaluate the applicability of fractal dimension (FD) estimators to assess lateral shearing interferometric (LSI) measurements of tear film surface quality. Retrospective recordings of tear film measured with LSI were used: 69 from healthy subjects and 41 from patients diagnosed with dry eye syndrome. Five surface quality descriptors were considered: four based on FD and a previously reported descriptor operating in the spatial frequency domain (M2), presenting temporal kinetics of post-blink tear film. A set of 12 regression parameters was extracted and analyzed for classification purposes. The classifiers are assessed in terms of receiver operating characteristics and areas under their curves (AUC). Also, the computational loads are estimated. The maximum AUC of 82.4% was achieved for M2, closely followed by the binary box-counting (BBC) FD estimator with AUC=78.6%. For all descriptors, statistically significant differences between the subject groups were found (p<0.05). The BBC FD estimator was characterized by the highest empirical computational efficiency, about 30% faster than that of M2, while the estimator based on differential box-counting exhibited the lowest efficiency (4.5 times slower than the best one). In conclusion, FD estimators can be utilized for quantitative assessment of tear film kinetics. They provide a viable alternative to the previously used spectral parameters while allowing higher computational efficiency.
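A minimal sketch of a binary box-counting FD estimator of the kind compared above follows; it is applied to a random binary image rather than an interferometric recording.

```python
import numpy as np

# Count occupied boxes at dyadic box sizes and fit the slope of
# log(count) versus log(1/size); the slope estimates the fractal dimension.
def box_count_fd(img):
    n = 2 ** int(np.log2(min(img.shape)))    # crop to a power-of-two square
    img = img[:n, :n]
    sizes, counts = [], []
    size = n
    while size >= 1:
        k = n // size
        blocks = img.reshape(k, size, k, size).any(axis=(1, 3))
        sizes.append(size)
        counts.append(blocks.sum())
        size //= 2
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)),
                          np.log(np.array(counts)), 1)
    return slope

rng = np.random.default_rng(2)
image = rng.random((256, 256)) > 0.7         # toy binary map
print(f"box-counting dimension: {box_count_fd(image):.2f}")
```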
Distributed control system for demand response by servers
NASA Astrophysics Data System (ADS)
Hall, Joseph Edward
Within the broad topical designation of smart grid, research in demand response, or demand-side management, focuses on investigating possibilities for electrically powered devices to adapt their power consumption patterns to better match generation and more efficiently integrate intermittent renewable energy sources, especially wind. Devices such as battery chargers, heating and cooling systems, and computers can be controlled to change the time, duration, and magnitude of their power consumption while still meeting workload constraints such as deadlines and rate of throughput. This thesis presents a system by which a computer server, or multiple servers in a data center, can estimate the power imbalance on the electrical grid and use that information to dynamically change the power consumption as a service to the grid. Implementation on a testbed demonstrates the system with a hypothetical but realistic usage case scenario of an online video streaming service in which there are workloads with deadlines (high-priority) and workloads without deadlines (low-priority). The testbed is implemented with real servers, estimates the power imbalance from the grid frequency with real-time measurements of the live outlet, and uses a distributed, real-time algorithm to dynamically adjust the power consumption of the servers based on the frequency estimate and the throughput of video transcoder workloads. Analysis of the system explains and justifies multiple design choices, compares the significance of the system in relation to similar publications in the literature, and explores the potential impact of the system.
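The control idea can be sketched as a simple frequency-droop law; the gain, power limits, and nominal values below are hypothetical illustrations, not the thesis's tested parameters.

```python
# Estimate grid imbalance from the deviation of measured line frequency and
# adjust server power proportionally, within workload-imposed bounds.
f_nominal = 60.0              # nominal grid frequency [Hz] (US)
k = 200.0                     # droop gain [W per Hz] (hypothetical)
p_min, p_max = 150.0, 400.0   # allowable server power envelope [W] (hypothetical)

def target_power(f_measured, p_nominal=300.0):
    """Lower consumption when frequency sags (undersupply), raise it when high."""
    p = p_nominal + k * (f_measured - f_nominal)
    return min(max(p, p_min), p_max)

print(target_power(59.98))    # grid undersupplied -> shed load
print(target_power(60.02))    # grid oversupplied -> absorb load
```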
Component separation of an isotropic Gravitational Wave Background
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parida, Abhishek; Jhingan, Sanjay; Mitra, Sanjit, E-mail: abhishek@jmi.ac.in, E-mail: sanjit@iucaa.in, E-mail: sjhingan@jmi.ac.in
2016-04-01
A Gravitational Wave Background (GWB) is expected in the universe from the superposition of a large number of unresolved astrophysical sources and phenomena in the early universe. Each component of the background (e.g., from primordial metric perturbations, binary neutron stars, milli-second pulsars etc.) has its own spectral shape. Many ongoing experiments aim to probe GWB at a variety of frequency bands. In the last two decades, using data from ground-based laser interferometric gravitational wave (GW) observatories, upper limits on GWB were placed in the frequency range of ∼50–100 Hz, considering one spectral shape at a time. However, one strong component can significantly enhance the estimated strength of another component. Hence, estimation of the amplitudes of the components with different spectral shapes should be done jointly. Here we propose a method for 'component separation' of a statistically isotropic background that can, for the first time, jointly estimate the amplitudes of many components and place upper limits. The method is rather straightforward and needs a negligible amount of computation. It utilises the linear relationship between the measurements and the amplitudes of the actual components, alleviating the need for a sampling based method, e.g., Markov Chain Monte Carlo (MCMC) or matched filtering, which are computationally intensive and cumbersome in a multi-dimensional parameter space. Using this formalism we could also study how many independent components can be separated using a given dataset from a network of current and upcoming ground based interferometric detectors.
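The linear relationship the method exploits can be sketched as a single least-squares solve over component spectral shapes; the shapes, amplitudes, and noise level below are illustrative, not the paper's.

```python
import numpy as np

# Each frequency-bin measurement is a linear combination of component
# amplitudes with known spectral shapes, so all amplitudes follow from one
# linear least-squares solve.
f = np.linspace(20, 200, 200)                    # analysis band [Hz]
shapes = np.vstack([(f / 100) ** 0,              # flat (cosmological-like)
                    (f / 100) ** (2 / 3),        # compact-binary-like
                    (f / 100) ** 3]).T           # steep example shape
true_amps = np.array([1.0, 0.5, 0.1])
data = shapes @ true_amps + np.random.default_rng(3).normal(0, 0.05, f.size)

amps, *_ = np.linalg.lstsq(shapes, data, rcond=None)
print("recovered amplitudes:", np.round(amps, 3))
```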
NASA Astrophysics Data System (ADS)
Fransen, S.; Yamawaki, T.; Akagi, H.; Eggens, M.; van Baren, C.
2014-06-01
After a first estimation based on statistics, the design loads for instruments are generally estimated by coupled spacecraft/instrument sine analysis once an FE-model of the spacecraft is available. When the design loads for the instrument have been derived, the next step in the process is to estimate the random vibration environment at the instrument base and to compute the RMS load at the centre of gravity of the instrument by means of vibro-acoustic analysis. Finally the design loads of the light-weight sub-units of the instrument can be estimated through random vibration analysis at instrument level, taking into account the notches required to protect the instrument interfaces in the hard- mounted random vibration test. This paper presents the aforementioned steps of instrument and sub-units loads derivation in the preliminary design phase of the spacecraft and identifies the problems that may be encountered in terms of design load consistency between low-frequency and high-frequency environments. The SpicA FAR-infrared Instrument (SAFARI) which is currently developed for the Space Infrared Telescope for Cosmology and Astrophysics (SPICA) will be used as a guiding example.
An eigenfunction method for reconstruction of large-scale and high-contrast objects.
Waag, Robert C; Lin, Feng; Varslot, Trond K; Astheimer, Jeffrey P
2007-07-01
A multiple-frequency inverse scattering method that uses eigenfunctions of a scattering operator is extended to image large-scale and high-contrast objects. The extension uses an estimate of the scattering object to form the difference between the scattering by the object and the scattering by the estimate of the object. The scattering potential defined by this difference is expanded in a basis of products of acoustic fields. These fields are defined by eigenfunctions of the scattering operator associated with the estimate. In the case of scattering objects for which the estimate is radial, symmetries in the expressions used to reconstruct the scattering potential greatly reduce the amount of computation. The range of parameters over which the reconstruction method works well is illustrated using calculated scattering by different objects. The method is applied to experimental data from a 48-mm diameter scattering object with tissue-like properties. The image reconstructed from measurements has, relative to a conventional B-scan formed using a low f-number at the same center frequency, significantly higher resolution and less speckle, implying that small, high-contrast structures can be demonstrated clearly using the extended method.
Olson, Scott A.; with a section by Veilleux, Andrea G.
2014-01-01
This report provides estimates of flood discharges at selected annual exceedance probabilities (AEPs) for streamgages in and adjacent to Vermont and equations for estimating flood discharges at AEPs of 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent (recurrence intervals of 2-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-years, respectively) for ungaged, unregulated, rural streams in Vermont. The equations were developed using generalized least-squares regression. Flood-frequency and drainage-basin characteristics from 145 streamgages were used in developing the equations. The drainage-basin characteristics used as explanatory variables in the regression equations include drainage area, percentage of wetland area, and the basin-wide mean of the average annual precipitation. The average standard errors of prediction for estimating the flood discharges at the 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent AEP with these equations are 34.9, 36.0, 38.7, 42.4, 44.9, 47.3, 50.7, and 55.1 percent, respectively. Flood discharges at selected AEPs for streamgages were computed by using the Expected Moments Algorithm. To improve estimates of the flood discharges for given exceedance probabilities at streamgages in Vermont, a new generalized skew coefficient was developed. The new generalized skew for the region is a constant, 0.44. The mean square error of the generalized skew coefficient is 0.078. This report describes a technique for using results from the regression equations to adjust an AEP discharge computed from a streamgage record. This report also describes a technique for using a drainage-area adjustment to estimate flood discharge at a selected AEP for an ungaged site upstream or downstream from a streamgage. The final regression equations and the flood-discharge frequency data used in this study will be available in StreamStats. StreamStats is a World Wide Web application providing automated regression-equation solutions for user-selected sites on streams.
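The drainage-area adjustment mentioned above commonly takes a ratio-to-a-power form; the sketch below uses a hypothetical exponent and discharges, whereas the report prescribes its own procedure and applicability limits.

```python
# Transfer a streamgage's AEP discharge to an ungaged site on the same
# stream by scaling with the drainage-area ratio. All values hypothetical.
Q_gage = 5000.0        # 1-percent AEP discharge at the streamgage [ft^3/s]
A_gage = 120.0         # drainage area at the streamgage [mi^2]
A_ungaged = 95.0       # drainage area at the ungaged site [mi^2]
b = 0.8                # area-ratio exponent (hypothetical)

Q_ungaged = Q_gage * (A_ungaged / A_gage) ** b
print(f"adjusted 1-percent AEP discharge: {Q_ungaged:.0f} ft^3/s")
```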
Perry, Charles A.
2008-01-01
Precipitation-frequency and discharge-frequency relations for small drainage basins with areas less than 32 square miles in Kansas were evaluated to reduce the uncertainty of discharge-frequency estimates. Gaged-discharge records were used to develop discharge-frequency equations for the ratio of discharge to drainage area (Q/A) values using data from basins with variable soil permeability, channel slope, and mean annual precipitation. Soil permeability and mean annual precipitation are the dominant basin characteristics in the multiple linear regression analyses. In addition, 28 discharge measurements at ungaged sites by indirect surveying methods and by velocity meters also were used in this analysis to relate precipitation-recurrence interval to discharge-recurrence interval. Precipitation-recurrence intervals for each of these discharge measurements were estimated from weather-radar estimates of precipitation and from nearby raingages. Time of concentration for each basin for each of the ungaged sites was computed and used to determine the precipitation-recurrence interval based on precipitation depth and duration. The ratio of discharge/drainage area (Q/A) value for each event was then assigned to that precipitation-recurrence interval. The relation between the ratio of discharge/drainage area (Q/A) and precipitation-recurrence interval for all 28 measured events resulted in a correlation coefficient of 0.79. Using basins less than 5.4 mi² only, the correlation decreases to 0.74. However, when basins greater than 5.4 and less than 32 mi² are examined, the relation improves to a correlation coefficient of 0.95. There was a sufficient number of discharge and radar-measured precipitation events for both the 5-year (8 events) and the 100-year (11 events) recurrence intervals to examine the effect of basin characteristics on the Q/A values for basins less than 32 mi². At the 5-year precipitation-/discharge-recurrence interval, channel slope was a significant predictor (r=0.99) of Q/A. Permeability (r=0.68) also had a significant effect on Q/A values for the 5-year recurrence interval. At the 100-year recurrence interval, permeability, channel slope, and mean annual precipitation did not have a significant effect on Q/A; however, time of concentration was a significant factor in determining Q/A for the 100-year events, with greater times of concentration resulting in lower Q/A values. Additional high-recurrence interval (5-, 10-, 25-, 50-, and 100-year) precipitation/discharge data are needed to confirm the relations suggested above. Discharge data with attendant basin-wide precipitation data from precipitation-radar estimates provide a unique opportunity to study the effects of basin characteristics on the relation between precipitation-recurrence interval and discharge-recurrence interval. Discharge-frequency values from the Q/A equations, the rational method, and the Kansas discharge-frequency equations (KFFE) were compared to 28 measured weather-radar precipitation-/discharge-frequency values. The association between precipitation frequency from weather-radar estimates and the frequency of the resulting discharge was shown in these comparisons. The measured and Q/A equation computed discharges displayed the best equality from low to high discharges of the three methods. Here the slope of the line was nearly 1:1 (y = 0.9844x^0.9677).
Comparisons with the rational method produced a slope greater than 1:1 (y = 0.0722x^1.235), and the KFFE equations produced a slope less than 1:1 (y = 5.9103x^0.7475). The Q/A equation standard error of prediction averaged 0.1346 log units for the 5.4- to 32-square-mile group and 0.0944 log units for the less-than-5.4-square-mile group. The KFFE standard error averaged 0.2107 log units for the less-than-30-square-mile equations. Using the Q/A equations for determining discharge-frequency values for ungaged sites thus appears to be a good alternative to the other two methods because of this smaller standard error of prediction.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vecchio, Alberto; Wickham, Elizabeth D.L.
The Laser Interferometer Space Antenna (LISA) is expected to provide the largest observational sample of binary systems of faint subsolar mass compact objects, in particular, white dwarfs, whose radiation is monochromatic over most of the LISA observational window. Current astrophysical estimates suggest that the instrument will be able to resolve ~10^4 such systems, with a large fraction of them at frequencies ≳3 mHz, where the wavelength of gravitational waves becomes comparable to or shorter than the LISA armlength. This affects the structure of the so-called LISA transfer function, which cannot be treated as constant in this frequency range: it introduces characteristic phase and amplitude modulations that depend on the source location in the sky and the emission frequency. Here we investigate the effect of the LISA transfer function on detection and parameter estimation for monochromatic sources. For signal detection we show that filters constructed by approximating the transfer function as a constant (long-wavelength approximation) introduce a negligible loss of signal-to-noise ratio (the fitting factor always exceeds 0.97) for f ≤ 10 mHz, therefore in a frequency range where one would actually expect the approximation to fail. For parameter estimation, we conclude that in the range 3 mHz ≲ f ≲ 30 mHz the errors associated with parameter measurements differ by between ≈5% and a factor of ~10 (depending on the actual source parameters and emission frequency) with respect to those computed using the long-wavelength approximation.
Guidelines for determining flood flow frequency—Bulletin 17C
England, John F.; Cohn, Timothy A.; Faber, Beth A.; Stedinger, Jery R.; Thomas, Wilbert O.; Veilleux, Andrea G.; Kiang, Julie E.; Mason, Robert R.
2018-03-29
Accurate estimates of flood frequency and magnitude are a key component of any effective nationwide flood risk management and flood damage abatement program. In addition to accuracy, methods for estimating flood risk must be uniformly and consistently applied because management of the Nation’s water and related land resources is a collaborative effort involving multiple actors including most levels of government and the private sector. Flood frequency guidelines have been published in the United States since 1967, and have undergone periodic revisions. In 1967, the U.S. Water Resources Council presented a coherent approach to flood frequency with Bulletin 15, “A Uniform Technique for Determining Flood Flow Frequencies.” The method it recommended involved fitting the log-Pearson Type III distribution to annual peak flow data by the method of moments. The first extension and update of Bulletin 15 was published in 1976 as Bulletin 17, “Guidelines for Determining Flood Flow Frequency” (Guidelines). It extended the Bulletin 15 procedures by introducing methods for dealing with outliers, historical flood information, and regional skew. Bulletin 17A was published the following year to clarify the computation of weighted skew. The next revision, Bulletin 17B, provided a host of improvements and new techniques designed to address situations that often arise in practice, including better methods for estimating and using regional skew, weighting station and regional skew, detection of outliers, and use of the conditional probability adjustment. The current version of these Guidelines is presented in this document, denoted Bulletin 17C. It incorporates changes motivated by four of the items listed as “Future Work” in Bulletin 17B and 30 years of post-17B research on flood processes and statistical methods. The updates include: adoption of a generalized representation of flood data that allows for interval and censored data types; a new method, called the Expected Moments Algorithm, which extends the method of moments so that it can accommodate interval data; a generalized approach to identification of low outliers in flood data; and an improved method for computing confidence intervals. Federal agencies are requested to use these Guidelines in all planning activities involving water and related land resources. State, local, and private organizations are encouraged to use these Guidelines to assure uniformity in the flood frequency estimates that all agencies concerned with flood risk should use for Federal planning decisions. This revision is adopted with the knowledge and understanding that review of these procedures will be ongoing. Updated methods will be adopted when warranted by experience and by examination and testing of new techniques.
Identification of boiler inlet transfer functions and estimation of system parameters
NASA Technical Reports Server (NTRS)
Miles, J. H.
1972-01-01
An iterative computer method is described for identifying boiler transfer functions using frequency response data. An objective penalized performance measure and a nonlinear minimization technique are used to cause the locus of points generated by a transfer function to resemble the locus of points obtained from frequency response measurements. Different transfer functions can be tried until a satisfactory empirical transfer function of the system is found. To illustrate the method, some examples and some results from a study of a set of data consisting of measurements of the inlet impedance of a single tube forced flow boiler with inserts are given.
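A hedged sketch of the identification loop follows: propose a candidate transfer function, then adjust its parameters so its frequency-response locus matches the measured one. The first-order-lag-plus-dead-time candidate and the synthetic data are assumptions for illustration, not the boiler model of the report.

```python
import numpy as np
from scipy.optimize import least_squares

w = np.logspace(-1, 2, 60)                       # angular frequencies [rad/s]

def model(p, w):
    K, tau, Td = p                               # gain, lag, dead time (assumed form)
    return K * np.exp(-1j * w * Td) / (1 + 1j * w * tau)

rng = np.random.default_rng(4)
meas = model([2.0, 0.5, 0.05], w) + 0.02 * (rng.normal(size=w.size)
                                            + 1j * rng.normal(size=w.size))

def residuals(p):
    r = model(p, w) - meas                       # complex locus mismatch
    return np.concatenate([r.real, r.imag])      # least_squares needs reals

fit = least_squares(residuals, x0=[1.0, 1.0, 0.0])
print("estimated K, tau, Td:", np.round(fit.x, 3))
```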
Qu, Zhiyu; Qu, Fuxin; Hou, Changbo; Jing, Fulong
2018-05-19
In an inverse synthetic aperture radar (ISAR) imaging system for targets with complex motion, the azimuth echo signals of the target are always modeled as multicomponent quadratic frequency modulation (QFM) signals. The chirp rate (CR) and quadratic chirp rate (QCR) estimation of QFM signals is very important to solve the ISAR image defocus problem. For multicomponent QFM (multi-QFM) signals, the conventional CR and QCR estimation algorithms suffer from the cross-term and poor anti-noise ability. This paper proposes a novel estimation algorithm called a two-dimensional product modified parameterized chirp rate-quadratic chirp rate distribution (2D-PMPCRD) for QFM signals parameter estimation. The 2D-PMPCRD employs a multi-scale parametric symmetric self-correlation function and modified nonuniform fast Fourier transform-Fast Fourier transform to transform the signals into the chirp rate-quadratic chirp rate (CR-QCR) domains. It can greatly suppress the cross-terms while strengthening the auto-terms by multiplying different CR-QCR domains with different scale factors. Compared with high order ambiguity function-integrated cubic phase function and modified Lv's distribution, the simulation results verify that the 2D-PMPCRD acquires higher anti-noise performance and obtains better cross-term suppression performance for multi-QFM signals with reasonable computation cost.
Magnitude and frequency of floods in the United States. Part 13. Snake River basin
Thomas, C.A.; Broom, H.C.; Cummans, J.E.
1963-01-01
The magnitude of a flood of any selected frequency up to 50 years for any site on any stream in the Snake River basin can be determined by methods outlined in this report, with some limitations. The methods are not applicable for regulated streams, for drainage basins smaller than 10 or larger than 5,000 square miles, for streams fed by large springs, or for streams that have flow characteristics materially different from the regional pattern. The magnitude of a flood for a selected frequency at a given site is determined by using the appropriate composite frequency curve and the mean annual flood for the given site. The mean annual flood is computed from either a formula or a nomograph in which drainage area, mean annual precipitation, and a geographic factor are used as independent variables. The standard error of estimate for the computation of mean annual floods is plus 17 percent and minus 15 percent. Nine flood-frequency regions (A-I) are defined. In all except regions B and I, frequency relations vary with the mean altitude of the basin as well as with the geographic location; therefore, families of curves are required for 7 of the 9 flood-frequency regions. The report includes a brief description of the physiography and climate of the Snake River basin to explain the reason for the large variation in mean annual floods, which range from zero to about 27 cubic feet per second per square mile. Composite frequency curves and formulas for computing mean annual floods are based on all suitable flood data collected in the Snake River basin. Tables show the data used to derive the formula. Following the analysis of data are station descriptions and lists of peak stages and discharges for 295 gaging stations at which 5 or more years of annual flood records were collected prior to Sept. 30, 1957. Many flood peak data are not usable in defining the frequency curves and deriving the formula because of large diversions and regulation upstream from the gaging stations.
Techniques for Computation of Frequency Limited H∞ Norm
NASA Astrophysics Data System (ADS)
Haider, Shafiq; Ghafoor, Abdul; Imran, Muhammad; Fahad Mumtaz, Malik
2018-01-01
The traditional H∞ norm depicts peak system gain over an infinite frequency range, but many applications, such as filter design, model order reduction, and controller design, require computation of the peak system gain over a specific frequency interval rather than the infinite range. In the present work, new computationally efficient techniques for computation of the H∞ norm over a limited frequency interval are proposed. The proposed techniques link norm computation with the maximum singular value of the system in the limited frequency interval. Numerical examples are incorporated to validate the proposed concept.
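The quantity being computed can be approximated by brute force as the peak largest singular value of G(jω) on a dense grid over the interval, which is exactly the costly approach such techniques aim to avoid; the state-space system below is an arbitrary example, not one from the paper.

```python
import numpy as np

# Frequency-limited H-inf norm estimate: peak of the largest singular value
# of G(jw) = C (jwI - A)^-1 B + D over a finite interval [w1, w2].
A = np.array([[0.0, 1.0], [-4.0, -0.4]])   # lightly damped example system
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

w_grid = np.linspace(1.0, 3.0, 2000)       # limited frequency interval
I = np.eye(A.shape[0])
peak = 0.0
for w in w_grid:
    G = C @ np.linalg.solve(1j * w * I - A, B) + D
    peak = max(peak, np.linalg.svd(G, compute_uv=False)[0])
print(f"frequency-limited H-inf norm estimate: {peak:.3f}")
```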
Finding the Secret of Image Saliency in the Frequency Domain.
Li, Jia; Duan, Ling-Yu; Chen, Xiaowu; Huang, Tiejun; Tian, Yonghong
2015-12-01
There are two sides to every story of visual saliency modeling in the frequency domain. On the one hand, image saliency can be effectively estimated by applying simple operations to the frequency spectrum. On the other hand, it is still unclear which part of the frequency spectrum contributes the most to popping-out targets and suppressing distractors. Toward this end, this paper tentatively explores the secret of image saliency in the frequency domain. From the results obtained in several qualitative and quantitative experiments, we find that the secret of visual saliency may mainly hide in the phases of intermediate frequencies. To explain this finding, we reinterpret the concept of discrete Fourier transform from the perspective of template-based contrast computation and thus develop several principles for designing the saliency detector in the frequency domain. Following these principles, we propose a novel approach to design the saliency detector under the assistance of prior knowledge obtained through both unsupervised and supervised learning processes. Experimental results on a public image benchmark show that the learned saliency detector outperforms 18 state-of-the-art approaches in predicting human fixations.
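One way to see the role of phase at intermediate frequencies is a phase-only reconstruction restricted to a mid-frequency band; the sketch below is an illustrative detector in that spirit, not the learned detector proposed in the paper, and the band limits are assumptions.

```python
import numpy as np

# Keep the phase spectrum, flatten the amplitude within an intermediate
# frequency band, invert, and square: salient regions tend to pop out.
rng = np.random.default_rng(6)
img = rng.normal(0, 0.1, (128, 128))
img[48:80, 48:80] += 1.0                         # a "target" patch

F = np.fft.fft2(img)
fy = np.fft.fftfreq(128)[:, None]
fx = np.fft.fftfreq(128)[None, :]
r = np.hypot(fx, fy)
band = (r > 0.05) & (r < 0.25)                   # intermediate frequencies (assumed)
sal = np.abs(np.fft.ifft2(np.exp(1j * np.angle(F)) * band)) ** 2
iy, ix = np.unravel_index(sal.argmax(), sal.shape)
print(f"saliency peak near pixel ({iy}, {ix})")
```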
Long term estimations of low frequency noise levels over water from an off-shore wind farm.
Bolin, Karl; Almgren, Martin; Ohlsson, Esbjörn; Karasalo, Ilkka
2014-03-01
This article focuses on computations of low frequency sound propagation from an off-shore wind farm. Two different methods for sound propagation calculations are combined with meteorological data for every 3 hours in the year 2010 to examine the varying noise levels at a reception point at 13 km distance. It is shown that sound propagation conditions play a vital role in the noise impact from the off-shore wind farm and ordinary assessment methods can become inaccurate at longer propagation distances over water. Therefore, this paper suggests that methodologies to calculate noise immission with realistic sound speed profiles need to be combined with meteorological data over extended time periods to evaluate the impact of low frequency noise from modern off-shore wind farms.
Two-Dimensional Ffowcs Williams/Hawkings Equation Solver
NASA Technical Reports Server (NTRS)
Lockard, David P.
2005-01-01
FWH2D is a Fortran 90 computer program that solves a two-dimensional (2D) version of the equation, derived by J. E. Ffowcs Williams and D. L. Hawkings, for sound generated by turbulent flow. FWH2D was developed especially for estimating noise generated by airflows around such approximately 2D airframe components as slats. The user provides input data on fluctuations of pressure, density, and velocity on some surface. These data are combined with information about the geometry of the surface to calculate histories of thickness and loading terms. These histories are fast-Fourier-transformed into the frequency domain. For each frequency of interest and each observer position specified by the user, kernel functions are integrated over the surface by use of the trapezoidal rule to calculate a pressure signal. The resulting frequency-domain signals are inverse-fast-Fourier-transformed back into the time domain. The output of the code consists of the time- and frequency-domain representations of the pressure signals at the observer positions. Because of its approximate nature, FWH2D overpredicts the noise from a finite-length (3D) component. The advantage of FWH2D is that it requires a fraction of the computation time of a 3D Ffowcs Williams/Hawkings solver.
Experimental and theoretical studies of near-ground acoustic radiation propagation in the atmosphere
NASA Astrophysics Data System (ADS)
Belov, Vladimir V.; Burkatovskaya, Yuliya B.; Krasnenko, Nikolai P.; Rakov, Aleksandr S.; Rakov, Denis S.; Shamanaeva, Liudmila G.
2017-11-01
Results are presented from experimental and theoretical studies of near-ground propagation of monochromatic acoustic radiation on atmospheric paths from a source to a receiver, taking into account the contribution of multiple scattering from fluctuations of atmospheric temperature and wind velocity, refraction of sound on the wind velocity and temperature gradients, and its reflection by the underlying surface, for different models of the atmosphere, depending on the sound frequency, the coefficient of reflection from the underlying surface, the propagation distance, and the source and receiver altitudes. Calculations were performed by the Monte Carlo method using the local estimation algorithm with the computer program developed by the authors. Results of experimental investigations under controllable conditions are compared with theoretical estimates and with results of analytical calculations for the Delany-Bazley impedance model. Satisfactory agreement of the data confirms the correctness of the suggested computer program.
On-Line Robust Modal Stability Prediction using Wavelet Processing
NASA Technical Reports Server (NTRS)
Brenner, Martin J.; Lind, Rick
1998-01-01
Wavelet analysis for filtering and system identification has been used to improve the estimation of aeroservoelastic stability margins. The conservatism of the robust stability margins is reduced with parametric and nonparametric time-frequency analysis of flight data in the model validation process. Nonparametric wavelet processing of data is used to reduce the effects of external disturbances and unmodeled dynamics. Parametric estimates of modal stability are also extracted using the wavelet transform. Computation of robust stability margins for stability boundary prediction depends on uncertainty descriptions derived from the data for model validation. The F-18 High Alpha Research Vehicle aeroservoelastic flight test data demonstrates improved robust stability prediction by extension of the stability boundary beyond the flight regime. Guidelines and computation times are presented to show the efficiency and practical aspects of these procedures for on-line implementation. Feasibility of the method is shown for processing flight data from time-varying nonstationary test points.
T7 lytic phage-displayed peptide libraries: construction and diversity characterization.
Krumpe, Lauren R H; Mori, Toshiyuki
2014-01-01
In this chapter, we describe the construction of T7 bacteriophage (phage)-displayed peptide libraries and the diversity analyses of random amino acid sequences obtained from the libraries. We used commercially available reagents, Novagen's T7Select system, to construct the libraries. Using a combination of biotinylated extension primer and streptavidin-coupled magnetic beads, we were able to prepare library DNA without applying gel purification, resulting in extremely high ligation efficiencies. Further, we describe the use of bioinformatics tools to characterize library diversity. Amino acid frequency and positional amino acid diversity and hydropathy are estimated using the REceptor LIgand Contacts website http://relic.bio.anl.gov. Peptide net charge analysis and peptide hydropathy analysis are conducted using the Genetics Computer Group Wisconsin Package computational tools. A comprehensive collection of the estimated number of recombinants and titers of T7 phage-displayed peptide libraries constructed in our lab is included.
NASA Astrophysics Data System (ADS)
Gaci, Said; Hachay, Olga; Zaourar, Naima
2017-04-01
One of the key elements in hydrocarbon reservoir characterization is the S-wave velocity (Vs). Since traditional estimation methods often fail to predict this physical parameter accurately, a new approach that takes into account its non-stationary and non-linear properties is needed. To this end, a prediction model based on complete ensemble empirical mode decomposition (CEEMD) and a multilayer perceptron artificial neural network (MLP ANN) is suggested to compute Vs from P-wave velocity (Vp). Using a fine-to-coarse reconstruction algorithm based on CEEMD, the Vp log data are decomposed into a high-frequency (HF) component, a low-frequency (LF) component, and a trend component. Different combinations of these components are then used as inputs to the MLP ANN for estimating the Vs log. Applications to well logs from different geological settings show that the Vs values predicted by the MLP ANN with combinations of HF, LF, and trend as inputs are more accurate than those obtained with traditional estimation methods. Keywords: S-wave velocity, CEEMD, multilayer perceptron neural networks.
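A minimal sketch of this kind of workflow is shown below. It assumes the third-party PyEMD package (for CEEMDAN decomposition) and scikit-learn's MLPRegressor; the split of the IMFs into HF, LF, and trend components is an illustrative choice, not necessarily the authors' exact fine-to-coarse reconstruction.

```python
import numpy as np
from PyEMD import CEEMDAN                        # assumed: PyEMD package
from sklearn.neural_network import MLPRegressor  # assumed: scikit-learn

def predict_vs(vp, vs_train):
    """Illustrative CEEMD + MLP pipeline: decompose Vp, regress Vs."""
    imfs = CEEMDAN()(np.asarray(vp, float))  # IMFs ordered fine to coarse
    hf = imfs[:2].sum(axis=0)                # high-frequency part (first IMFs)
    trend = imfs[-1]                         # residual/trend component
    lf = vp - hf - trend                     # low-frequency remainder
    X = np.column_stack([hf, lf, trend])     # one combination of components
    model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000)
    model.fit(X, vs_train)                   # train where a Vs log exists
    return model.predict(X)
```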
Reconstruction of Haplotype-Blocks Selected during Experimental Evolution.
Franssen, Susanne U; Barton, Nicholas H; Schlötterer, Christian
2017-01-01
The genetic analysis of experimentally evolving populations typically relies on short reads from pooled individuals (Pool-Seq). While this method provides reliable allele frequency estimates, the underlying haplotype structure remains poorly characterized. With small population sizes and adaptive variants that start from low frequencies, the interpretation of selection signatures in most Evolve and Resequencing studies remains challenging. To facilitate the characterization of selection targets, we propose a new approach that reconstructs selected haplotypes from replicated time series, using Pool-Seq data. We identify selected haplotypes through the correlated frequencies of alleles carried by them. Computer simulations indicate that selected haplotype-blocks of several Mb can be reconstructed with high confidence and low error rates, even when allele frequencies change only by 20% across three replicates. Applying this method to real data from D. melanogaster populations adapting to a hot environment, we identify a selected haplotype-block of 6.93 Mb. We confirm the presence of this haplotype-block in evolved populations by experimental haplotyping, demonstrating the power and accuracy of our haplotype reconstruction from Pool-Seq data. We propose that the combination of allele frequency estimates with haplotype information will provide the key to understanding the dynamics of adaptive alleles. © The Author 2016. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Optimized Next-Generation Sequencing Genotype-Haplotype Calling for Genome Variability Analysis
Navarro, Javier; Nevado, Bruno; Hernández, Porfidio; Vera, Gonzalo; Ramos-Onsins, Sebastián E
2017-01-01
The accurate estimation of nucleotide variability using next-generation sequencing data is challenged by the high number of sequencing errors produced by new sequencing technologies, especially for nonmodel species, where reference sequences may not be available and the read depth may be low due to limited budgets. The most popular single-nucleotide polymorphism (SNP) callers are designed to obtain a high SNP recovery and a low false discovery rate, but are not designed to account appropriately for the frequency of the variants. Instead, algorithms designed to account for the frequency of SNPs give precise results for estimating the levels and patterns of variability. These algorithms focus on the unbiased estimation of variability rather than on high SNP recovery. Here, we implemented a fast and optimized parallel algorithm that includes the method developed by Roesti et al. and Lynch, which estimates the genotype of each individual at each site, considering the possibility of calling both bases of the genotype, a single one, or none. This algorithm does not consider the reference and is therefore independent of biases related to the reference nucleotide specified. The pipeline starts from a BAM file converted to pileup or mpileup format, and the software outputs a FASTA file. The new program not only reduces running times but also, given the improved use of resources, allows usage on smaller computers as well as large parallel computers, expanding its benefits to a wider range of researchers. The output file can be analyzed using software for population genetics analysis, such as the R library PopGenome, the software VariScan, and the program mstatspop for analysis considering positions with missing data. PMID:28894353
Kappa0 (κ0) estimates for hard rock SED stations in Switzerland using a high-frequency approach
NASA Astrophysics Data System (ADS)
Ktenidou, O. J.; Van Houtte, C.; Cotton, F.; Abrahamson, N. A.
2013-12-01
At high frequencies the acceleration spectrum decays rapidly. This attenuation is typically modeled by kappa (κ), the S-wave spectral decay parameter introduced by Anderson and Hough (1984). Its site-specific, zero-distance component (κ0) is crucial in the creation and adjustment of GMPEs and in the simulation of high-frequency ground motion. Two groups of approaches for measuring κ have been identified in the literature (Ktenidou et al., 2013): the high-frequency group is based on the original definition and measures κ from the high-frequency decay of the data, while the broadband group uses the entire frequency band of the data to invert for κ, source, and path parameters. Within the PEGASOS Refinement Project, κ0 values were recently computed for the 9 hardest-rock stations of the Swiss Seismological Service (SED), with Vs30 values between 1000 and 3000 m/s. The task was performed using both groups of approaches for measuring κ. In this study we present results for the high-frequency approach. We use 2000 records of events with magnitudes between 2.0 and 5.5 at distances out to 200 km. We are interested not only in the mean values of κ0 at each station but also in their variability. Thus we follow 14 different 'scenarios', which are variations of the same basic approach. Each scenario consists of different criteria in terms of frequency bands used, event magnitudes, constraints on regional Q, etc. These criteria are applied when treating individual κ measurements in order to derive the overall κ0 site values. Through the scenarios we quantify the epistemic uncertainty stemming from the different possible choices made within a single approach. We find that the between-scenario uncertainty can be larger than the within-scenario uncertainty, meaning that the final estimate of κ0 depends on the choices made in the computation process. For a single station, our κ0 values can vary by a factor of 2. We infer Q values that are higher than the current regional estimates for crustal attenuation. The overall scatter of the results across all stations is large, but we see that κ0 scales with Vs30, i.e., harder rock formations have lower κ0. However, when comparing our measured κ0 values with predictions based on existing empirical κ0-Vs30 correlations, we find that the former are generally higher. This supports the notion that such correlations should be used with care, preferably accounting for the region and measurement method at hand. Furthermore, it shows that site- or region-specific measurements of κ0 should be preferred over empirical inference.
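The high-frequency approach lends itself to a compact implementation: per Anderson and Hough (1984), above the source corner frequency the S-wave amplitude spectrum behaves as A(f) ≈ A0·exp(−πκf), so κ is the slope of ln A(f) over an analyst-chosen band divided by −π, and κ0 follows from extrapolating the per-record κ values to zero distance. The NumPy sketch below shows both steps; the band limits and the linear distance model are exactly the kinds of 'scenario' choices the abstract discusses, not fixed prescriptions.

```python
import numpy as np

def kappa_from_spectrum(freqs, amp, f1, f2):
    """Anderson-Hough kappa for one record: ln|A(f)| is fit linearly
    over the analyst-chosen band [f1, f2]; kappa = -slope / pi."""
    band = (freqs >= f1) & (freqs <= f2)
    slope, _ = np.polyfit(freqs[band], np.log(amp[band]), 1)
    return -slope / np.pi

def kappa0_zero_distance(distances, kappas):
    """Extrapolate per-record kappa values to zero epicentral distance:
    the intercept is the site term kappa0; the slope reflects path Q."""
    slope, intercept = np.polyfit(distances, kappas, 1)
    return intercept, slope
```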
NASA Astrophysics Data System (ADS)
Liao, Yuhe; Sun, Peng; Wang, Baoxiang; Qu, Lei
2018-05-01
The appearance of repetitive transients in a vibration signal is one typical feature of faulty rolling-element bearings. However, accurate extraction of these fault-related characteristic components has always been a challenging task, especially when there is interference from large-amplitude impulsive noise. A frequency-domain multipoint kurtosis (FDMK)-based fault diagnosis method is proposed in this paper. The multipoint kurtosis is redefined in the frequency domain and the computational accuracy is improved. An envelope autocorrelation function is also presented to estimate the fault characteristic frequency, which is used to set the frequency hunting zone of the FDMK. Then the FDMK, instead of kurtosis, is utilized to generate a fast kurtogram, and only the optimal band with the maximum FDMK value is selected for envelope analysis. Negative interference from both large-amplitude impulsive noise and harmonic components related to shaft rotational speed is therefore greatly reduced. The analysis results of simulation and experimental data verify the capability and feasibility of this FDMK-based method.
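The envelope-autocorrelation idea is straightforward to sketch: demodulate the signal with the Hilbert transform, autocorrelate the envelope, and read the fault period off the first dominant lag. A minimal NumPy/SciPy illustration follows; it shows the general technique, not the authors' implementation, and the minimum-lag guard is an assumed tuning parameter.

```python
import numpy as np
from scipy.signal import hilbert

def fault_frequency_estimate(x, fs, min_lag_s=1e-3):
    """Estimate a fault characteristic frequency from the envelope
    autocorrelation of a vibration signal x sampled at fs Hz."""
    env = np.abs(hilbert(x))                  # amplitude envelope
    env = env - env.mean()                    # remove DC before correlating
    ac = np.correlate(env, env, mode='full')[env.size - 1:]
    min_lag = int(min_lag_s * fs)             # skip the zero-lag peak
    lag = min_lag + np.argmax(ac[min_lag:])   # first dominant periodicity
    return fs / lag                           # period (samples) -> frequency (Hz)
```

For long records an FFT-based autocorrelation would be preferable to `np.correlate`, which costs O(n²) here.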
An analysis of the magnitude and frequency of floods on Oahu, Hawaii
Nakahara, R.H.
1980-01-01
An analysis of available peak-flow data for the island of Oahu, Hawaii, was made by using multiple regression techniques which related flood-frequency data to basin and climatic characteristics for 74 gaging stations on Oahu. In the analysis, several different groupings of stations were investigated, including divisions by geographic location and size of drainage area. The grouping consisting of two leeward divisions and one windward division produced the best results. Drainage basins ranged in area from 0.03 to 45.7 square miles. Equations relating flood magnitudes of selected frequencies to basin characteristics were developed for the three divisions of Oahu. These equations can be used to estimate the magnitude and frequency of floods for any site, gaged or ungaged, for any desired recurrence interval from 2 to 100 years. Data on basin characteristics, flood magnitudes for various recurrence intervals from individual station-frequency curves, and computed flood magnitudes by use of the regression equation are tabulated to provide the needed data. (USGS)
Single frequency GPS measurements in real-time artificial satellite orbit determination
NASA Astrophysics Data System (ADS)
Chiaradia, A. P. M.; Kuga, H. K.; Prado, A. F. B. A.
2003-07-01
A simplified and compact algorithm with low computational cost, providing an accuracy of around tens of meters for real-time, on-board artificial satellite orbit determination, is developed in this work. The state estimation method is the extended Kalman filter. Cowell's method is used to propagate the state vector, through a simple fourth-order Runge-Kutta numerical integrator with fixed step size. The modeled forces are due to the geopotential up to order and degree 50 of the JGM-2 model. To time-update the state error covariance matrix, a simplified force model is considered; in other words, in computing the state transition matrix, the effect of J2 (Earth flattening) is treated analytically, which dramatically unloads the processing time. In the measurement model, the single-frequency GPS pseudorange is used, considering the effects of the ionospheric delay, clock offsets of the GPS and user satellites, and relativistic effects. To validate this model, real data from the Topex/Poseidon satellite are used, and the results are compared with the Topex/Poseidon Precision Orbit Ephemeris (POE) generated by NASA/JPL, for several test cases. It is concluded that this compact algorithm enables accuracies of tens of meters with such a simplified force model, an analytical approach for computing the transition matrix, and a cheap GPS receiver providing single-frequency pseudorange measurements.
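For readers unfamiliar with the filter itself, one predict/update cycle of a generic extended Kalman filter looks as follows. This is a minimal sketch only: the paper's implementation propagates the state with Cowell/RK4 and uses an analytic J2 state transition matrix, which this generic form does not reproduce.

```python
import numpy as np

def ekf_step(x, P, f, F, h, H, Q, R, z):
    """One generic EKF cycle.

    f, h : nonlinear propagation and measurement functions
    F, H : their Jacobians evaluated at the current estimate
    Q, R : process and measurement noise covariances
    z    : measurement, e.g. a GPS pseudorange
    """
    # time update: propagate state and covariance
    x_pred = f(x)
    P_pred = F @ P @ F.T + Q
    # measurement update from the pseudorange residual
    y = z - h(x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```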
Mechanisms of Neurofeedback: A Computation-theoretic Approach.
Davelaar, Eddy J
2018-05-15
Neurofeedback training is a form of brain training in which information about a neural measure is fed back to the trainee who is instructed to increase or decrease the value of that particular measure. This paper focuses on electroencephalography (EEG) neurofeedback in which the neural measures of interest are the brain oscillations. To date, the neural mechanisms that underlie successful neurofeedback training are still unexplained. Such an understanding would benefit researchers, funding agencies, clinicians, regulatory bodies, and insurance firms. Based on recent empirical work, an emerging theory couched firmly within computational neuroscience is proposed that advocates a critical role of the striatum in modulating EEG frequencies. The theory is implemented as a computer simulation of peak alpha upregulation, but in principle any frequency band at one or more electrode sites could be addressed. The simulation successfully learns to increase its peak alpha frequency and demonstrates the influence of threshold setting - the threshold that determines whether positive or negative feedback is provided. Analyses of the model suggest that neurofeedback can be likened to a search process that uses importance sampling to estimate the posterior probability distribution over striatal representational space, with each representation being associated with a distribution of values of the target EEG band. The model provides an important proof of concept to address pertinent methodological questions about how to understand and improve EEG neurofeedback success. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Leaci, Paola; Astone, Pia; D'Antonio, Sabrina; Frasca, Sergio; Palomba, Cristiano; Piccinni, Ornella; Mastrogiovanni, Simone
2017-06-01
We describe a novel, very fast and robust, directed-search incoherent method (meaning that the phase information is lost) for periodic gravitational waves from neutron stars in binary systems. As a directed search, we assume the source sky position to be known with sufficient accuracy, but all other parameters (including orbital ones) are assumed unknown. We exploit the frequency modulation due to source orbital motion to unveil the signal signature, starting from a collection of time and frequency peaks (the so-called "peakmap"). We validate our algorithm (pipeline) by adding 131 artificial continuous-wave signals from pulsars in binary systems to simulated detector Gaussian noise, characterized by a power spectral density Sh^(1/2) = 4×10⁻²⁴ Hz⁻¹/² in the frequency interval [70, 200] Hz, which is overall commensurate with the advanced detector design sensitivities. The pipeline detected 128 signals, and the weakest signal injected (added) and detected has a gravitational-wave strain amplitude of ~10⁻²⁴, assuming one month of gapless data collected by a single advanced detector. We also provide sensitivity estimates, which show that, for single-detector data covering one month of observation time, depending on the source orbital Doppler modulation, we can detect signals with an amplitude of ~7×10⁻²⁵. By using three detectors and one year of data, we would easily gain a factor of 3 in sensitivity, translating into the ability to detect weaker signals. We also discuss the parameter-estimation performance of our method, as well as its computational budget: sifting one month of single-detector data over a 131 Hz-wide frequency range takes roughly 2.4 CPU hours. Hence, the current procedure can be readily applied in all-sky schemes, sieving in parallel as many sky positions as permitted by the available computational power. Finally, we introduce (ongoing and future) approaches to attain sensitivity improvements and better accuracy on parameter estimates in view of the use on real advanced detector data.
Estimating short-period dynamics using an extended Kalman filter
NASA Technical Reports Server (NTRS)
Bauer, Jeffrey E.; Andrisani, Dominick
1990-01-01
An extended Kalman filter (EKF) is used to estimate the parameters of a low-order model from aircraft transient response data. The low-order model is a state space model derived from the short-period approximation of the longitudinal aircraft dynamics. The model corresponds to the pitch rate to stick force transfer function currently used in flying qualities analysis. Because of the model chosen, handling qualities information is also obtained. The parameters are estimated from flight data as well as from a six-degree-of-freedom, nonlinear simulation of the aircraft. These two estimates are then compared and the discrepancies noted. The low-order model is able to satisfactorily match both flight data and simulation data from a high-order computer simulation. The parameters obtained from the EKF analysis of flight data are compared to those obtained using frequency response analysis of the flight data. Time delays and damping ratios are compared and are in agreement. This technique demonstrates the potential to determine, in near real time, the extent of differences between computer models and the actual aircraft. Precise knowledge of these differences can help to determine the flying qualities of a test aircraft and lead to more efficient envelope expansion.
PAPR-Constrained Pareto-Optimal Waveform Design for OFDM-STAP Radar
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sen, Satyabrata
We propose a peak-to-average power ratio (PAPR) constrained Pareto-optimal waveform design approach for an orthogonal frequency division multiplexing (OFDM) radar signal to detect a target using the space-time adaptive processing (STAP) technique. The use of an OFDM signal not only increases the frequency diversity of our system, but also enables us to adaptively design the OFDM coefficients in order to further improve the system performance. First, we develop a parametric OFDM-STAP measurement model by considering the effects of signal-dependent clutter and colored noise. Then, we observe that the resulting STAP performance can be improved by maximizing the output signal-to-interference-plus-noise ratio (SINR) with respect to the signal parameters. However, in practical scenarios, the computation of output SINR depends on the estimated values of the spatial and temporal frequencies and target scattering responses. Therefore, we formulate a PAPR-constrained multi-objective optimization (MOO) problem to design the OFDM spectral parameters by simultaneously optimizing four objective functions: maximizing the output SINR, minimizing two separate Cramér-Rao bounds (CRBs) on the normalized spatial and temporal frequencies, and minimizing the trace of the CRB matrix on the target scattering coefficient estimates. We present several numerical examples to demonstrate the achieved performance improvement due to the adaptive waveform design.
NASA Astrophysics Data System (ADS)
Mousavi Anzehaee, Mohammad; Adib, Ahmad; Heydarzadeh, Kobra
2015-10-01
The manner of microtremor data collection and filtering, and also the processing method used, have a considerable effect on the accuracy of estimated dynamic soil parameters. In this paper, a running-variance method was used to improve the automatic detection of data sections contaminated by local perturbations. In this method, the running variance of the microtremor data is computed using a sliding window; the resulting signal is then used to remove the perturbation-affected ranges from the original data. Additionally, to determine the fundamental frequency of a site, this study proposes a method based on statistical characteristics. Specifically, statistical characteristics such as the probability density graph and the average and standard deviation of all the frequencies corresponding to the maximum peaks in the H/V spectra of all data windows are used to differentiate the real peaks from false peaks resulting from perturbations. The methods have been applied to the data recorded for the city of Meybod in central Iran. Experimental results show that the applied methods successfully reduce the effects of extensive local perturbations on microtremor data and ultimately estimate the fundamental frequency more accurately than other common methods.
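The sliding-window rejection step can be written in a few lines. The NumPy sketch below computes a running variance from cumulative sums and masks the perturbed samples; the rejection rule of "k times the median variance" is an assumption for illustration, since the abstract does not state how the threshold is set.

```python
import numpy as np

def perturbation_mask(x, window, k=3.0):
    """Flag samples whose running variance exceeds k times its median.

    x      : 1-D microtremor record
    window : sliding-window length in samples
    """
    pad = window // 2
    xp = np.pad(x, pad, mode='edge')
    # running mean and variance via cumulative sums of x and x**2
    c1 = np.cumsum(np.insert(xp, 0, 0.0))
    c2 = np.cumsum(np.insert(xp**2, 0, 0.0))
    mean = (c1[window:] - c1[:-window]) / window
    var = (c2[window:] - c2[:-window]) / window - mean**2
    var = var[:len(x)]
    return var > k * np.median(var)   # True where data look perturbed
```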
Preliminary development of digital signal processing in microwave radiometers
NASA Technical Reports Server (NTRS)
Stanley, W. D.
1980-01-01
Topics covered involve a number of closely related tasks including: the development of several control loop and dynamic noise model computer programs for simulating microwave radiometer measurements; computer modeling of an existing stepped frequency radiometer in an effort to determine its optimum operational characteristics; investigation of the classical second order analog control loop to determine its ability to reduce the estimation error in a microwave radiometer; investigation of several digital signal processing unit designs; initiation of efforts to develop required hardware and software for implementation of the digital signal processing unit; and investigation of the general characteristics and peculiarities of digital processing noiselike microwave radiometer signals.
Computation of acoustic pressure fields produced in feline brain by high-intensity focused ultrasound
NASA Astrophysics Data System (ADS)
Omidi, Nazanin
In 1975, Dunn et al. (JASA 58:512-514) showed that a simple relation describes the ultrasonic threshold for cavitation-induced changes in the mammalian brain. The thresholds for tissue damage were estimated for a variety of acoustic parameters in exposed feline brain. The goal of this study was to improve the estimates of the acoustic pressures and intensities present in vivo during those experimental exposures by computing them with nonlinear rather than linear theory. In the current project, the acoustic pressure waveforms produced in the brains of anesthetized felines were numerically simulated for a spherically focused, nominally f/1 transducer (focal length = 13 cm) at increasing values of the source pressure at frequencies of 1, 3, and 9 MHz. The corresponding focal intensities were correlated with the experimental data of Dunn et al. The focal pressure waveforms were also computed at the location of the true maximum. For low source pressures, the computed waveforms were the same as those determined using linear theory, and the focal intensities matched experimentally determined values. For higher source pressures, the focal pressure waveforms became increasingly distorted, with the compressional amplitude of the wave becoming greater, and the rarefactional amplitude becoming lower, than the values calculated using linear theory. The implications of these results for clinical exposures are discussed.
Channel Training for Analog FDD Repeaters: Optimal Estimators and Cramér-Rao Bounds
NASA Astrophysics Data System (ADS)
Wesemann, Stefan; Marzetta, Thomas L.
2017-12-01
For frequency division duplex channels, a simple pilot loop-back procedure has been proposed that allows the estimation of the UL & DL channels at an antenna array without relying on any digital signal processing at the terminal side. For this scheme, we derive the maximum likelihood (ML) estimators for the UL & DL channel subspaces, formulate the corresponding Cramér-Rao bounds and show the asymptotic efficiency of both (SVD-based) estimators by means of Monte Carlo simulations. In addition, we illustrate how to compute the underlying (rank-1) SVD with quadratic time complexity by employing the power iteration method. To enable power control for the data transmission, knowledge of the channel gains is needed. Assuming that the UL & DL channels have on average the same gain, we formulate the ML estimator for the channel norm, and illustrate its robustness against strong noise by means of simulations.
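Power iteration recovers the dominant singular triplet using only matrix-vector products, which is what gives the quadratic (O(mn) per sweep) complexity mentioned above, versus cubic for a full SVD. A generic NumPy sketch follows; the iteration count is an arbitrary choice here, and a production version would test for convergence instead.

```python
import numpy as np

def rank1_svd(A, n_iter=50):
    """Dominant singular triplet of A by power iteration.

    Each sweep costs two matrix-vector products, i.e. O(m*n) work.
    """
    rng = np.random.default_rng(0)
    v = rng.standard_normal(A.shape[1])
    v /= np.linalg.norm(v)
    for _ in range(n_iter):
        u = A @ v                     # left singular direction
        u /= np.linalg.norm(u)
        v = A.conj().T @ u            # right singular direction
        sigma = np.linalg.norm(v)     # singular value estimate
        v /= sigma
    return u, sigma, v                # A ~ sigma * np.outer(u, v.conj())
```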
Thermal Conductivities in Solids from First Principles: Accurate Computations and Rapid Estimates
NASA Astrophysics Data System (ADS)
Carbogno, Christian; Scheffler, Matthias
In spite of significant research efforts, a first-principles determination of the thermal conductivity κ at high temperatures has remained elusive. Boltzmann transport techniques that account for anharmonicity perturbatively become inaccurate under such conditions. Ab initio molecular dynamics (MD) techniques using the Green-Kubo (GK) formalism capture the full anharmonicity, but can become prohibitively costly to converge in time and size. We developed a formalism that accelerates such GK simulations by several orders of magnitude and thus enables their application within the limited time and length scales accessible in ab initio MD. For this purpose, we determine the effective harmonic potential occurring during the MD, and the associated temperature-dependent phonon properties and lifetimes. Interpolation in reciprocal and frequency space then allows extrapolation to the macroscopic scale. For both force-field and ab initio MD, we validate this approach by computing κ for Si and ZrO2, two materials known for their particularly harmonic and anharmonic character, respectively. Eventually, we demonstrate how these techniques facilitate reasonable estimates of κ from existing MD calculations at virtually no additional computational cost.
Time-dependent, multimode interaction analysis of the gyroklystron amplifier
DOE Office of Scientific and Technical Information (OSTI.GOV)
Swati, M. V., E-mail: swati.mv.ece10@iitbhu.ac.in; Chauhan, M. S.; Jain, P. K.
2016-08-15
In this paper, a time-dependent multimode nonlinear analysis for the gyroklystron amplifier has been developed by extending the analysis of gyrotron oscillators using the self-consistent approach. The nonlinear analysis developed here has been validated against the reported experimental results for a 32.3 GHz, three-cavity, second-harmonic gyroklystron operating in the TE02 mode. The analysis has been used to estimate the temporal RF growth in the operating mode as well as in the nearby competing modes. Device gain and bandwidth have been computed for different drive powers and frequencies. The effect of various beam parameters, such as beam voltage, beam current, and pitch factor, has also been studied. The computational results estimate a gyroklystron saturated RF power of ~319 kW at 32.3 GHz with ~23% efficiency and ~26.3 dB gain, with a device bandwidth of ~0.027% (8 MHz), for a 70 kV, 20 A electron beam. The computed results are found to agree with the experimental values within 10%.
Heuristic Modeling for TRMM Lifetime Predictions
NASA Technical Reports Server (NTRS)
Jordan, P. S.; Sharer, P. J.; DeFazio, R. L.
1996-01-01
Analysis time for computing the expected mission lifetimes of proposed frequently maneuvering, tightly altitude-constrained, Earth-orbiting spacecraft has been significantly reduced by means of a heuristic modeling method implemented in a commercial off-the-shelf spreadsheet product (QuattroPro) running on a personal computer (PC). The method uses a look-up table to estimate the maneuver frequency per month as a function of the spacecraft ballistic coefficient and the solar flux index, then computes the associated fuel use with a simple engine model. Maneuver-frequency data points are produced by means of a single 1-month run of traditional mission analysis software for each of the 12 to 25 data points required for the table. As the data-point computations are required only at mission-design start-up and on the occasion of significant mission redesigns, the dependence on time-consuming traditional modeling methods is dramatically reduced. Results to date have agreed with traditional methods to within 1 to 1.5 percent. The spreadsheet approach is applicable to a wide variety of Earth-orbiting spacecraft with tight altitude constraints. It will be particularly useful to such missions as the Tropical Rainfall Measurement Mission scheduled for launch in 1997, whose mission lifetime calculations are heavily dependent on frequently revised solar flux predictions.
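The heuristic reduces to a 2-D table interpolation plus a running fuel budget. A Python stand-in for the spreadsheet logic is sketched below; all axis values, table entries, and the per-burn fuel cost are hypothetical placeholders, not values from the TRMM analysis.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical look-up table: maneuvers per month as a function of
# ballistic coefficient and solar flux index (F10.7).
bc_axis = np.array([50.0, 100.0, 150.0])        # kg/m^2 (placeholder)
flux_axis = np.array([70.0, 150.0, 230.0])      # F10.7  (placeholder)
maneuvers_per_month = np.array([[1.0, 3.0, 8.0],
                                [0.5, 1.5, 4.0],
                                [0.3, 1.0, 2.5]])
freq_table = RegularGridInterpolator((bc_axis, flux_axis), maneuvers_per_month)

def months_until_fuel_exhausted(fuel_kg, bc, flux_forecast, fuel_per_burn_kg):
    """Walk a monthly flux forecast, spending fuel per predicted maneuver."""
    months = 0
    for flux in flux_forecast:
        fuel_kg -= freq_table([[bc, flux]])[0] * fuel_per_burn_kg
        if fuel_kg <= 0:
            break
        months += 1
    return months
```

Re-running the lifetime estimate for a revised solar flux forecast then takes seconds, which is the whole point of the heuristic.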
Ishii, Audrey L.; Soong, David T.; Sharpe, Jennifer B.
2010-01-01
Illinois StreamStats (ILSS) is a Web-based application for computing selected basin characteristics and flood-peak quantiles, based on the most recently (2010) published regional flood-frequency equations (Soong and others, 2004), at any rural stream location in Illinois. Limited streamflow statistics, including general statistics, flow durations, and base flows, also are available for U.S. Geological Survey (USGS) streamflow-gaging stations. ILSS can be accessed on the Web at http://streamstats.usgs.gov/ by selecting the State Applications hyperlink and choosing Illinois from the pull-down menu. ILSS was implemented for Illinois by obtaining and projecting ancillary geographic information system (GIS) coverages; populating the StreamStats database with streamflow-gaging station data; hydroprocessing the 30-meter digital elevation model (DEM) for Illinois to conform to streams represented in the National Hydrographic Dataset 1:100,000 stream coverage; and customizing the Web-based Extensible Markup Language (XML) programs for computing basin characteristics for Illinois. The basin characteristics computed by ILSS were then compared to the basin characteristics used in the published study, and adjustments were applied to the XML algorithms for slope and basin length. Testing of ILSS was accomplished by comparing flood quantiles computed by ILSS at an approximately random sample of 170 streamflow-gaging stations with the published flood-quantile estimates. Differences between the log-transformed flood quantiles were not statistically significant at the 95-percent confidence level for the State as a whole, nor by the regions determined by each equation, except for region 1, in the northwest corner of the State. In region 1, the average difference in flood-quantile estimates ranged from 3.76 percent for the 2-year flood quantile to 4.27 percent for the 500-year flood quantile. The total number of stations in region 1 was small (21), and the mean difference is not large (less than one-tenth of the average prediction error for the regression-equation estimates). The sensitivity of the flood-quantile estimates to differences in the computed basin characteristics was determined and is presented in tables. A test of usage consistency was conducted by having at least 7 new users compute flood-quantile estimates at 27 locations. The average maximum deviation of the estimate from the mode value at each site was 1.31 percent after four mislocated sites were removed. A comparison of manual 100-year flood-quantile computations with ILSS at 34 sites indicated no statistically significant difference. ILSS appears to be an accurate, reliable, and effective tool for flood-quantile estimates.
A Computational Approach to Estimating Nondisjunction Frequency in Saccharomyces cerevisiae
Chu, Daniel B.; Burgess, Sean M.
2016-01-01
Errors segregating homologous chromosomes during meiosis result in aneuploid gametes and are the largest contributing factor to birth defects and spontaneous abortions in humans. Saccharomyces cerevisiae has long served as a model organism for studying the gene network supporting normal chromosome segregation. Measuring homolog nondisjunction frequencies is laborious, and involves dissecting thousands of tetrads to detect missegregation of individually marked chromosomes. Here we describe a computational method (TetFit) to estimate the relative contributions of meiosis I nondisjunction and random-spore death to spore inviability in wild type and mutant strains. These values are based on finding the best-fit distribution of 4, 3, 2, 1, and 0 viable-spore tetrads to an observed distribution. Using TetFit, we found that meiosis I nondisjunction is an intrinsic component of spore inviability in wild-type strains. We show proof-of-principle that the calculated average meiosis I nondisjunction frequency determined by TetFit closely matches empirically determined values in mutant strains. Using these published data sets, TetFit uncovered two classes of mutants: Class A mutants skew toward increased nondisjunction death, and include those with known defects in establishing pairing, recombination, and/or synapsis of homologous chromosomes. Class B mutants skew toward random spore death, and include those with defects in sister-chromatid cohesion and centromere function. Epistasis analysis using TetFit is facilitated by the low numbers of tetrads (as few as 200) required to compare the contributions to spore death in different mutant backgrounds. TetFit analysis does not require any special strain construction, and can be applied to previously observed tetrad distributions. PMID:26747203
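The core of the approach can be illustrated with a toy fit: pick a meiosis I nondisjunction fraction and a random-spore-death rate, predict the expected distribution of 4-, 3-, 2-, 1-, and 0-viable-spore tetrads, and minimize the misfit to the observed counts. The sketch below assumes, for illustration only, that an MI-nondisjunction tetrad yields at most two viable spores (the two nullosomic spores dying); the published TetFit model may differ in its details.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import binom

def tetfit_like(observed):
    """Fit (f_ndj, p_rsd) to observed counts of [4,3,2,1,0]-viable tetrads.

    Illustrative model, not the published TetFit: NDJ tetrads start with
    two dead (nullosomic) spores; all surviving spores are additionally
    subject to independent random spore death at rate p_rsd.
    """
    obs = np.asarray(observed, float)
    obs /= obs.sum()

    def predicted(f_ndj, p_rsd):
        # normal tetrads: 4 spores, each survives with prob 1 - p_rsd
        p_norm = binom.pmf([4, 3, 2, 1, 0], 4, 1 - p_rsd)
        # NDJ tetrads: only 2 spores can live -> at most 2 viable
        p_ndj = np.zeros(5)
        p_ndj[2:5] = binom.pmf([2, 1, 0], 2, 1 - p_rsd)
        return (1 - f_ndj) * p_norm + f_ndj * p_ndj

    def sse(params):
        return np.sum((predicted(*params) - obs) ** 2)

    res = minimize(sse, x0=[0.05, 0.05], bounds=[(0, 1), (0, 1)])
    return res.x  # best-fit (f_ndj, p_rsd)
```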
Theoretical Interpretation of the Fluorescence Spectra of Toluene and P- Cresol
1994-07-01
[Extraction residue from the report documentation page and table of contents; the recoverable entries list tables of computed and experimental ground-state frequencies of toluene and p-cresol, correction factors for computed ground-state vibrational frequencies, and computed and corrected excited-state frequencies of toluene.]
Kodera, Sachiko; Gomez-Tames, Jose; Hirata, Akimasa; Masuda, Hiroshi; Arima, Takuji; Watanabe, Soichi
2017-01-01
The rapid development of wireless technology has led to widespread concerns regarding adverse human health effects caused by exposure to electromagnetic fields. Temperature elevation in biological bodies is an important factor that can adversely affect health. A thermophysiological model is desired to quantify microwave (MW) induced temperature elevations. In this study, parameters related to thermophysiological responses for MW exposures were estimated using an electromagnetic-thermodynamics simulation technique. To the authors’ knowledge, this is the first study in which parameters related to regional cerebral blood flow in a rat model were extracted at a high degree of accuracy through experimental measurements for localized MW exposure at frequencies exceeding 6 GHz. The findings indicate that the improved modeling parameters yield computed results that match well with the measured quantities during and after exposure in rats. It is expected that the computational model will be helpful in estimating the temperature elevation in the rat brain at multiple observation points (that are difficult to measure simultaneously) and in explaining the physiological changes in the local cortex region. PMID:28358345
Flood Frequency Curves - Use of information on the likelihood of extreme floods
NASA Astrophysics Data System (ADS)
Faber, B.
2011-12-01
Investment in the infrastructure that reduces flood risk for flood-prone communities must incorporate information on the magnitude and frequency of flooding in that area. Traditionally, that information has been a probability distribution of annual maximum streamflows developed from the historical gaged record at a stream site. Practice in the United States fits a log-Pearson Type III distribution to the annual maximum flows of an unimpaired streamflow record, using the method of moments to estimate the distribution parameters. The procedure assumes that annual peak streamflow events are (1) independent, (2) identically distributed, and (3) form a representative sample of the overall probability distribution. Each of these assumptions can be challenged. We rarely have enough data to form a representative sample, and therefore must compute and display the uncertainty in the estimated flood distribution. But is there a wet/dry cycle that makes precipitation less than independent between successive years? Are the peak flows caused by different types of events from different statistical populations? How does the watershed or climate changing over time (non-stationarity) affect the probability distribution of floods? Potential approaches to avoid these assumptions vary from estimating trend and shift and removing them from early data (thus forming a homogeneous data set), to methods that estimate statistical parameters that vary with time. A further issue in estimating a probability distribution of flood magnitude (the flood frequency curve) is whether a purely statistical approach can accurately capture the range and frequency of floods that are of interest. A meteorologically based analysis produces a "probable maximum precipitation" (PMP) and subsequently a "probable maximum flood" (PMF) that attempts to describe an upper bound on flood magnitude in a particular watershed. This analysis can help constrain the upper tail of the probability distribution, well beyond the range of gaged data or even historical or paleo-flood data, which can be very important in risk analyses performed for flood risk management and dam and levee safety studies.
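The method-of-moments fit mentioned above is compact enough to show directly: take base-10 logarithms of the annual peaks, compute their mean, standard deviation, and skew, and evaluate the Pearson Type III quantile function. A SciPy sketch follows; it omits the regional-skew weighting, low-outlier screening, and confidence limits that full U.S. guidance requires.

```python
import numpy as np
from scipy.stats import pearson3, skew

def lp3_quantile(annual_peaks, return_period):
    """Log-Pearson Type III flood quantile by the method of moments.

    annual_peaks  : series of annual maximum flows
    return_period : e.g. 100 for the '100-year flood'
    """
    logq = np.log10(np.asarray(annual_peaks, float))
    g = skew(logq, bias=False)                 # station skew of the logs
    dist = pearson3(g, loc=logq.mean(), scale=logq.std(ddof=1))
    p = 1.0 - 1.0 / return_period              # non-exceedance probability
    return 10 ** dist.ppf(p)                   # back-transform to flow units

# e.g. q100 = lp3_quantile(peaks, 100)
```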
NASA Astrophysics Data System (ADS)
Orus, R.; Prieto-Cerdeira, R.
2012-12-01
As the next Solar Maximum peak approaches, forecast for late 2013, it is a good opportunity to study ionospheric behaviour in such conditions and how this behaviour can be estimated and corrected by existing climatological models, e.g. NeQuick and the International Reference Ionosphere (IRI), as well as by GNSS-driven models such as Klobuchar, NeQuick Galileo, SBAS MOPS (EGNOS and WAAS corrections), and near-real-time Global Ionospheric Maps (GIMs) or regional maps computed by different institutions. In this framework, technology advances are increasing the computational and radio-frequency channel capabilities of low-cost receivers embedded in handheld devices (such as mobile phones, pads, trekking watches, photo-cameras, etc.). This may enable the active use of received ionospheric data or correction parameters from different data sources. The study is centred on understanding the ionosphere, focusing on its impact on the position error of low-cost single-frequency receivers. This study tests optimal ways to take advantage of the large amount of real-time or near-real-time ionospheric information available, and ways to combine various corrections in order to reach a better navigation solution. In this context, real-time vTEC estimates from EGNOS or WAAS, or near-real-time GIMs, are used to feed the standard GPS single-frequency ionospheric correction model (Klobuchar) and obtain enhanced ionospheric corrections with minor changes to the navigation software. This is done by using a Taylor expansion over the 8 coefficients sent by GPS. Moreover, the same datasets are assimilated into NeQuick, for broadcast coefficients as well as for grid assimilation. As a side product, electron density profiles could be estimated in near real time with data assimilated from different ionospheric sources. Finally, the ionospheric delay estimation for multi-constellation receivers could benefit from a common and more accurate ionospheric model, reducing the position error due to the ionosphere. A performance study of the different models for GNSS navigation will therefore be presented for different ionospheric conditions and different sources for the model adjustment, keeping the real-time capability of the receivers.
NASA Astrophysics Data System (ADS)
Schleicher, L.; Pratt, T. L.
2017-12-01
Underlying sediment can amplify ground motions during earthquakes, making site-response estimates key components in seismic evaluations for building infrastructure. The horizontal-to-vertical spectral ratio (HVSR) method, using either earthquake signals or ambient noise as input, is an appealing method for estimating site response because it uses only a single seismic station, rather than requiring the two or more seismometers traditionally used to compute a horizontal sediment-to-bedrock spectral ratio (SBSR). A number of studies have had mixed results when comparing the accuracy of the HVSR and SBSR methods for identifying the frequencies and amplitudes of the primary resonance peaks. Many of these studies were carried out in areas of complex geology, such as basins with structures that can introduce 3D effects. Here we assess the effectiveness of the HVSR method by comparison with the SBSR method and with modeled transfer functions in an area dominated by a flat, thin, unconsolidated sediment layer over bedrock, which should be an ideal setting for the HVSR method. In this preliminary study, we analyze teleseismic and regional earthquake recordings from a temporary seismometer array deployed throughout Washington, DC, which is underlain by a wedge of unconsolidated Atlantic Coastal Plain sedimentary strata 0 to 270 m thick. At most sites, we find a close match in the amplitudes and frequencies of large resonance peaks in horizontal ground motions at frequencies of 0.7 to 5 Hz in site-response estimates from the HVSR and SBSR methods. Amplitudes of the HVSRs tend to be slightly lower than the SBSRs at 3 Hz and below, but the amplitudes of the fundamental resonance peaks often match closely. The results suggest that the HVSR method could be a successful approach for computing site-response estimates in areas of simple shallow geology consisting of thin sedimentary layers with a strong reflector at the underlying bedrock surface. [This publication represents the views of the authors and does not necessarily represent the views of the Defense Nuclear Facilities Safety Board.]
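The HVSR computation itself is simple: window the three components, take amplitude spectra, combine the horizontals, and divide by the vertical. A NumPy sketch for one window follows; the boxcar smoothing is an illustrative simplification (Konno-Ohmachi smoothing is common in practice), and real processing averages the ratio over many windows.

```python
import numpy as np

def hvsr(north, east, vert, fs, smooth=9):
    """Single-window H/V spectral ratio from three components.

    north, east, vert : equal-length component records
    fs                : sampling rate (Hz)
    smooth            : boxcar smoothing length in frequency bins
    """
    def amp(x):
        a = np.abs(np.fft.rfft(x * np.hanning(len(x))))  # tapered spectrum
        return np.convolve(a, np.ones(smooth) / smooth, mode='same')
    h = np.sqrt(0.5 * (amp(north) ** 2 + amp(east) ** 2))  # mean horizontal
    v = amp(vert)
    freqs = np.fft.rfftfreq(len(north), d=1.0 / fs)
    return freqs, h / v   # peak frequency ~ fundamental site resonance
```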
Heikkilä, Janne; Hynynen, Kullervo
2006-04-01
Many noninvasive ultrasound techniques have been developed to explore the mechanical properties of soft tissues. One of these methods, Localized Harmonic Motion Imaging (LHMI), has been proposed for ultrasound surgery monitoring. In LHMI, dynamic ultrasound radiation-force stimulation induces displacements in a target that can be measured using pulse-echo imaging and used to estimate the elastic properties of the target. In this initial simulation study, the use of a one-dimensional phased array is explored for the induction of tissue motion. The study compares three different dual-frequency and amplitude-modulated single-frequency methods for inducing tissue motion. Simulations were computed in a homogeneous soft-tissue volume. The Rayleigh integral was used in the simulations of the ultrasound fields, and the tissue displacements were computed using a finite-element method (FEM). The simulations showed that amplitude-modulated sonication using a single frequency produced the largest vibration amplitude of the target tissue. These simulations demonstrate that the properties of the tissue motion are highly dependent on the sonication method and that it is important to consider the full three-dimensional distribution of the ultrasound field for controlling the induction of tissue motion.
Sensitivity of LES results from turbine rim seals to changes in grid resolution and sector size
NASA Astrophysics Data System (ADS)
O'Mahoney, T.; Hills, N.; Chew, J.
2012-07-01
Large-Eddy Simulations (LES) were carried out for a turbine rim seal, and the sensitivity of the results to changes in grid resolution and the size of the computational domain is investigated. Ingestion of hot annulus gas into the rotor-stator cavity is compared between the LES results, experiments, and Unsteady Reynolds-Averaged Navier-Stokes (URANS) calculations. The LES calculations show greater ingestion than the URANS calculation and better agreement with experiments. Increased grid resolution yields a small improvement in ingestion predictions, whereas increasing the sector model size has little effect on the results. The contrast between the different CFD models is most stark in the inner cavity, where the URANS shows almost no ingestion. Particular attention is also paid to the presence of low-frequency oscillations in the disc cavity. URANS calculations show such low-frequency oscillations at different frequencies than the LES. The oscillations also take a very long time to develop in the LES. The results show that the difficult problem of estimating ingestion through rim seals could be overcome by using LES, but that the computational requirements are still restrictive.
Computationally Efficient Radio Frequency Source Localization for Radio Interferometric Arrays
NASA Astrophysics Data System (ADS)
Steeb, J.-W.; Davidson, David B.; Wijnholds, Stefan J.
2018-03-01
Radio frequency interference (RFI) is an ever-increasing problem for remote sensing and radio astronomy, with radio telescope arrays especially vulnerable to RFI. Localizing the RFI source is the first step to dealing with the culprit system. In this paper, a new localization algorithm for interferometric arrays with low array beam sidelobes is presented. The algorithm has been adapted to work both in the near field and in the far field (only the direction of arrival can be recovered when the source is in the far field). In the near field, the computational complexity of the algorithm is linear in the search grid size, compared to the cubic scaling of the state-of-the-art 3-D MUltiple SIgnal Classification (MUSIC) method. The new method is as accurate as 3-D MUSIC. The trade-off is that the proposed algorithm requires a once-off a priori calculation and the storing of weighting matrices. The accuracy of the algorithm is validated using data generated by a low-frequency array while a hexacopter was flying around it broadcasting a continuous-wave signal. For the flight, the mean distance between the differential GPS positions and the corresponding estimated positions of the hexacopter is 2 m at a wavelength of 6.7 m.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kinefuchi, K.; Funaki, I.; Shimada, T.
Under certain conditions during rocket flights, ionized exhaust plumes from solid rocket motors may interfere with radio frequency transmissions. To understand the relevant physical processes involved in this phenomenon and to establish a prediction process for in-flight attenuation levels, we attempted to measure microwave attenuation caused by rocket exhaust plumes in a sea-level static firing test of a full-scale solid-propellant rocket motor. The microwave attenuation level was calculated by coupling an inviscid-frozen-flow computational fluid dynamics simulation of the exhaust plume with a detailed analysis of microwave transmission using a frequency-dependent finite-difference time-domain method with the Drude dispersion model. The calculated microwave attenuation level agreed well with the experimental results, except in the case of interference downstream of the Mach disk in the exhaust plume. It was concluded that the coupled estimation method based on the physics of the frozen plasma flow with Drude dispersion would be suitable for actual flight conditions, although the mixing and afterburning in the plume should be considered depending on the flow conditions.
Synthetic aperture radar target detection, feature extraction, and image formation techniques
NASA Technical Reports Server (NTRS)
Li, Jian
1994-01-01
This report presents new algorithms for target detection, feature extraction, and image formation with the synthetic aperture radar (SAR) technology. For target detection, we consider target detection with SAR and coherent subtraction. We also study how the image false alarm rates are related to the target template false alarm rates when target templates are used for target detection. For feature extraction from SAR images, we present a computationally efficient eigenstructure-based 2D-MODE algorithm for two-dimensional frequency estimation. For SAR image formation, we present a robust parametric data model for estimating high resolution range signatures of radar targets and for forming high resolution SAR images.
Sonar Performance Estimation Model with Seismo-Acoustic Effects on Underwater Sound Propagation
1989-06-27
[Fragmentary extraction excerpt; the recoverable text discusses the acoustic properties of bottom sediments and the use of ray theory, noting that ray theory is satisfactory for high-frequency sound propagation but can yield erroneous transmission-loss computations where acoustic interference occurs.]
Gilles, Luc; Massioni, Paolo; Kulcsár, Caroline; Raynaud, Henri-François; Ellerbroek, Brent
2013-05-01
This paper discusses the performance and cost of two computationally efficient Fourier-based tomographic wavefront reconstruction algorithms for wide-field laser guide star (LGS) adaptive optics (AO). The first algorithm is the iterative Fourier domain preconditioned conjugate gradient (FDPCG) algorithm developed by Yang et al. [Appl. Opt.45, 5281 (2006)], combined with pseudo-open-loop control (POLC). FDPCG's computational cost is proportional to N log(N), where N denotes the dimensionality of the tomography problem. The second algorithm is the distributed Kalman filter (DKF) developed by Massioni et al. [J. Opt. Soc. Am. A28, 2298 (2011)], which is a noniterative spatially invariant controller. When implemented in the Fourier domain, DKF's cost is also proportional to N log(N). Both algorithms are capable of estimating spatial frequency components of the residual phase beyond the wavefront sensor (WFS) cutoff frequency thanks to regularization, thereby reducing WFS spatial aliasing at the expense of more computations. We present performance and cost analyses for the LGS multiconjugate AO system under design for the Thirty Meter Telescope, as well as DKF's sensitivity to uncertainties in wind profile prior information. We found that, provided the wind profile is known to better than 10% wind speed accuracy and 20 deg wind direction accuracy, DKF, despite its spatial invariance assumptions, delivers a significantly reduced wavefront error compared to the static FDPCG minimum variance estimator combined with POLC. Due to its nonsequential nature and high degree of parallelism, DKF is particularly well suited for real-time implementation on inexpensive off-the-shelf graphics processing units.
The effect of rare alleles on estimated genomic relationships from whole genome sequence data.
Eynard, Sonia E; Windig, Jack J; Leroy, Grégoire; van Binsbergen, Rianne; Calus, Mario P L
2015-03-12
Relationships between individuals and inbreeding coefficients are commonly used for breeding decisions, but may be affected by the type of data used for their estimation. The proportion of variants with low Minor Allele Frequency (MAF) is larger in whole genome sequence (WGS) data compared to Single Nucleotide Polymorphism (SNP) chips. Therefore, WGS data provide true relationships between individuals and may influence breeding decisions and prioritisation for conservation of genetic diversity in livestock. This study identifies differences between relationships and inbreeding coefficients estimated using pedigree, SNP or WGS data for 118 Holstein bulls from the 1000 Bull genomes project. To determine the impact of rare alleles on the estimates we compared three scenarios of MAF restrictions: variants with a MAF higher than 5%, variants with a MAF higher than 1% and variants with a MAF between 1% and 5%. We observed significant differences between estimated relationships and, although less significantly, inbreeding coefficients from pedigree, SNP or WGS data, and between MAF restriction scenarios. Computed correlations between pedigree and genomic relationships, within groups with similar relationships, ranged from negative to moderate for both estimated relationships and inbreeding coefficients, but were high between estimates from SNP and WGS (0.49 to 0.99). Estimated relationships from genomic information exhibited higher variation than from pedigree. Inbreeding coefficients analysis showed that more complete pedigree records lead to higher correlation between inbreeding coefficients from pedigree and genomic data. Finally, estimates and correlations between additive genetic (A) and genomic (G) relationship matrices were lower, and variances of the relationships were larger when accounting for allele frequencies than without accounting for allele frequencies. Using pedigree data or genomic information, and including or excluding variants with a MAF below 5% showed significant differences in relationship and inbreeding coefficient estimates. Estimated relationships and inbreeding coefficients are the basis for selection decisions. Therefore, it can be expected that using WGS instead of SNP can affect selection decision. Inclusion of rare variants will give access to the variation they carry, which is of interest for conservation of genetic diversity.
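The MAF-window comparison is easy to reproduce for any genotype matrix once a relationship estimator is fixed. The sketch below uses one common estimator (VanRaden's method 1, G = ZZ'/(2Σp(1−p)); the study does not necessarily use this exact estimator) with a configurable MAF window mirroring the three scenarios above.

```python
import numpy as np

def vanraden_grm(genotypes, maf_min=0.0, maf_max=0.5):
    """Genomic relationship matrix with an optional MAF window.

    genotypes : (n_individuals, n_variants) array of 0/1/2 allele counts
    maf_min, maf_max : e.g. (0.05, 0.5), (0.01, 0.5), or (0.01, 0.05)
    """
    p = genotypes.mean(axis=0) / 2.0                 # allele frequencies
    maf = np.minimum(p, 1.0 - p)
    keep = (maf > maf_min) & (maf <= maf_max)        # apply the MAF window
    G, pk = genotypes[:, keep], p[keep]
    Z = G - 2.0 * pk                                 # center by 2p per variant
    return Z @ Z.T / (2.0 * np.sum(pk * (1.0 - pk)))
```

Diagonal elements of the result relate to inbreeding coefficients (F = G_ii − 1 under this parameterization), so the same matrix supports both comparisons discussed in the abstract.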
A computational method for estimating the PCR duplication rate in DNA and RNA-seq experiments.
Bansal, Vikas
2017-03-14
PCR amplification is an important step in the preparation of DNA sequencing libraries prior to high-throughput sequencing. PCR amplification introduces redundant reads in the sequence data and estimating the PCR duplication rate is important to assess the frequency of such reads. Existing computational methods do not distinguish PCR duplicates from "natural" read duplicates that represent independent DNA fragments and therefore, over-estimate the PCR duplication rate for DNA-seq and RNA-seq experiments. In this paper, we present a computational method to estimate the average PCR duplication rate of high-throughput sequence datasets that accounts for natural read duplicates by leveraging heterozygous variants in an individual genome. Analysis of simulated data and exome sequence data from the 1000 Genomes project demonstrated that our method can accurately estimate the PCR duplication rate on paired-end as well as single-end read datasets which contain a high proportion of natural read duplicates. Further, analysis of exome datasets prepared using the Nextera library preparation method indicated that 45-50% of read duplicates correspond to natural read duplicates likely due to fragmentation bias. Finally, analysis of RNA-seq datasets from individuals in the 1000 Genomes project demonstrated that 70-95% of read duplicates observed in such datasets correspond to natural duplicates sampled from genes with high expression and identified outlier samples with a 2-fold greater PCR duplication rate than other samples. The method described here is a useful tool for estimating the PCR duplication rate of high-throughput sequence datasets and for assessing the fraction of read duplicates that correspond to natural read duplicates. An implementation of the method is available at https://github.com/vibansal/PCRduplicates .
NASA Astrophysics Data System (ADS)
Moore, Andrew M.; Jacox, Michael G.; Crawford, William J.; Laughlin, Bruce; Edwards, Christopher A.; Fiechter, Jérôme
2017-08-01
Data assimilation is now used routinely in oceanography on both regional and global scales for computing ocean circulation estimates and for making ocean forecasts. Regional ocean observing systems are also expanding rapidly, and observations from a wide array of different platforms and sensor types are now available. Evaluation of the impact of the observing system on ocean circulation estimates (and forecasts) is therefore of considerable interest to the oceanographic community. In this paper, we quantify the impact of different observing platforms on estimates of the California Current System (CCS) spanning a three-decade period (1980-2010). Specifically, we focus attention on several dynamically related aspects of the circulation (coastal upwelling, the transport of the California Current and the California Undercurrent, thermocline depth and eddy kinetic energy) which in many ways describe defining characteristics of the CCS. The circulation estimates were computed using a 4-dimensional variational (4D-Var) data assimilation system, and our analyses also focus on the impact of the different elements of the control vector (i.e., the initial conditions, surface forcing, and open boundary conditions) on the circulation. While the influence of each component of the control vector varies between different metrics of the circulation, the impact of each observing system across metrics is very robust. In addition, the mean amplitude of the circulation increments (i.e., the difference between the analysis and background) remains relatively stable throughout the three-decade period despite the addition of new observing platforms, whose impact is redistributed according to the relative uncertainty of observations from each platform. We also consider the impact of each observing platform on CCS circulation variability associated with low-frequency climate variability. The low-frequency nature of the dominant climate modes in this region allows us to track through time the impact of each observation on the circulation, and illustrates how observations from some platforms can influence the circulation up to a decade into the future.
NASA Technical Reports Server (NTRS)
Pei, Jing; Wall, John
2013-01-01
This paper describes the techniques involved in determining the aerodynamic stability derivatives for the frequency-domain analysis of the Space Launch System (SLS) vehicle. Generally for launch vehicles, determination of the derivatives is fairly straightforward, since the aerodynamic data are usually linear through a moderate range of angle of attack. However, if the wind tunnel data lack proper corrections, then nonlinearities and asymmetric behavior may appear in the aerodynamic database coefficients. In this case, computing the derivatives becomes a non-trivial task. Errors in computing the nominal derivatives could lead to improper interpretation regarding the natural stability of the system and tuning of the controller parameters, which would impact both stability and performance. The aerodynamic derivatives are also provided at off-nominal operating conditions used for dispersed frequency-domain Monte Carlo analysis. Finally, results are shown to illustrate that the effects of aerodynamic cross-axis coupling can be neglected for the SLS configuration studied.
An unsteady aerodynamic formulation for efficient rotor tonal noise prediction
NASA Astrophysics Data System (ADS)
Gennaretti, M.; Testa, C.; Bernardini, G.
2013-12-01
An aerodynamic/aeroacoustic solution methodology for prediction of tonal noise emitted by helicopter rotors and propellers is presented. It is particularly suited for configurations dominated by localized, high-frequency inflow velocity fields, such as those generated by blade-vortex interactions. The unsteady pressure distributions are determined by the sectional, frequency-domain Küssner-Schwarz formulation, with a downwash including the wake inflow velocity predicted by a three-dimensional, unsteady, panel-method formulation suited for the analysis of rotors operating in complex aerodynamic environments. The radiated noise is predicted through solution of the Ffowcs Williams-Hawkings equation. The proposed approach yields a computationally efficient solution procedure that may be particularly useful in preliminary design/multidisciplinary optimization applications. It is validated through comparisons with solutions that apply the airloads directly evaluated by the time-marching, panel-method formulation. The results are provided in terms of blade loads, noise signatures and sound pressure level contours. An estimate of the computational efficiency of the proposed solution process is also presented.
Soares, Ana Paula; Medeiros, José Carlos; Simões, Alberto; Machado, João; Costa, Ana; Iriarte, Álvaro; de Almeida, José João; Pinheiro, Ana P; Comesaña, Montserrat
2014-03-01
In this article, we introduce ESCOLEX, the first European Portuguese children's lexical database with grade-level-adjusted word frequency statistics. Computed from a 3.2-million-word corpus, ESCOLEX provides 48,381 word forms extracted from 171 elementary and middle school textbooks for 6- to 11-year-old children attending the first six grades in the Portuguese educational system. Like other children's grade-level databases (e.g., Carroll, Davies, & Richman, 1971; Corral, Ferrero, & Goikoetxea, Behavior Research Methods, 41, 1009-1017, 2009; Lété, Sprenger-Charolles, & Colé, Behavior Research Methods, Instruments, & Computers, 36, 156-166, 2004; Zeno, Ivens, Millard, Duvvuri, 1995), ESCOLEX provides four frequency indices for each grade: overall word frequency (F), index of dispersion across the selected textbooks (D), estimated frequency per million words (U), and standard frequency index (SFI). It also provides a new measure, contextual diversity (CD). In addition, the number of letters in the word and its part(s) of speech, number of syllables, syllable structure, and adult frequencies taken from P-PAL (a European Portuguese corpus-based lexical database; Soares, Comesaña, Iriarte, Almeida, Simões, Costa, …, Machado, 2010; Soares, Iriarte, Almeida, Simões, Costa, França, …, Comesaña, in press) are provided. ESCOLEX will be a useful tool both for researchers interested in language processing and development and for professionals in need of verbal materials adjusted to children's developmental stages. ESCOLEX can be downloaded along with this article or from http://p-pal.di.uminho.pt/about/databases .
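For readers unfamiliar with these indices, the sketch below computes a dispersion index and the standard frequency index from per-textbook counts. The SFI definition (SFI = 10·(log10 U + 4)) is standard; the dispersion shown is Carroll's entropy-based D2, and U is simplified here to raw frequency per million, whereas the database's U additionally adjusts for dispersion following the cited references.

```python
import numpy as np

def carroll_D(subfreqs):
    """Carroll's D2: entropy of a word's distribution across subcorpora
    (e.g., textbooks or grades), normalised to [0, 1] by log2(k)."""
    f = np.asarray(subfreqs, dtype=float)
    p = f / f.sum()
    p = p[p > 0]                                   # zero counts contribute nothing
    return float(-(p * np.log2(p)).sum() / np.log2(f.size))

def u_and_sfi(total_count, corpus_size):
    """Simplified frequency per million (U) and SFI = 10 * (log10(U) + 4),
    so a word occurring once per million words gets SFI = 40."""
    U = 1e6 * total_count / corpus_size
    return U, 10.0 * (np.log10(U) + 4.0)

counts = [12, 0, 7, 3, 9, 5]                       # hypothetical counts in 6 textbooks
print(carroll_D(counts), u_and_sfi(sum(counts), 3_200_000))
```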
A phase coherence approach to estimating the spatial extent of earthquakes
NASA Astrophysics Data System (ADS)
Hawthorne, Jessica C.; Ampuero, Jean-Paul
2016-04-01
We present a new method for estimating the spatial extent of seismic sources. The approach takes advantage of an inter-station phase coherence computation that can identify co-located sources (Hawthorne and Ampuero, 2014). Here, however, we note that the phase coherence calculation can eliminate the Green's function and give high values only if both earthquakes are point sources, i.e., if their dimensions are much smaller than the wavelengths of the propagating seismic waves. By examining the decrease in coherence at higher frequencies (shorter wavelengths), we can estimate the spatial extents of the earthquake ruptures. The approach can to some extent be seen as a simple way of identifying directivity or variations in the apparent source time functions recorded at various stations. We apply this method to a set of well-recorded earthquakes near Parkfield, CA. We show that when the signal-to-noise ratio is high, the phase coherence remains high well above 50 Hz for closely spaced M<1.5 earthquakes. The high-frequency phase coherence is smaller for larger earthquakes, suggesting larger spatial extents. The implied radii scale roughly as expected from typical magnitude-corner frequency scalings. We also examine a second source of high-frequency decoherence: spatial variation in the shape of the Green's functions. This spatial decoherence appears to occur on wavelengths similar to those of the decoherence associated with the apparent source time functions. However, the variation in Green's functions can be normalized away to some extent by comparing observations at multiple components of a single station, which see the same apparent source time functions.
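A stripped-down version of the inter-station phase coherence statistic can be computed as follows: for each station, take the cross-spectral phase between the aligned records of the two earthquakes, then average the unit phasors across stations. Co-located point sources share the same Green's function, so the phase difference is station-independent and the coherence stays near 1; the array shapes and FFT length below are assumptions.

```python
import numpy as np

def interstation_phase_coherence(event1, event2, fs, nfft=256):
    """event1, event2: (n_stations, n_samples) aligned waveforms of two
    earthquakes recorded on a common set of stations.

    Returns frequencies and the coherence |<exp(i*dphi)>_stations|, which
    drops at frequencies whose wavelengths resolve the source dimensions.
    """
    X1 = np.fft.rfft(event1, n=nfft, axis=1)
    X2 = np.fft.rfft(event2, n=nfft, axis=1)
    dphi = np.angle(X1 * np.conj(X2))                 # per-station phase difference
    coherence = np.abs(np.mean(np.exp(1j * dphi), axis=0))
    return np.fft.rfftfreq(nfft, d=1.0 / fs), coherence
```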
Robust electroencephalogram phase estimation with applications in brain-computer interface systems.
Seraj, Esmaeil; Sameni, Reza
2017-03-01
In this study, a robust method is developed for frequency-specific electroencephalogram (EEG) phase extraction using the analytic representation of the EEG. Based on recent theoretical findings in this area, it is shown that some of the phase variations previously attributed to the brain response are systematic side-effects of the methods used for EEG phase calculation, especially during low-analytical-amplitude segments of the EEG. With this insight, the proposed method generates randomized ensembles of the EEG phase using minor perturbations in the zero-pole loci of narrow-band filters, followed by phase estimation using the signal's analytic form and ensemble averaging over the randomized ensembles to obtain a robust EEG phase and frequency. This Monte Carlo estimation method is shown to be very robust to noise and to minor changes of the filter parameters, and it reduces the effect of spurious EEG phase jumps that do not have a cerebral origin. As proof of concept, the proposed method is used for extracting EEG phase features for a brain-computer interface (BCI) application. The results show significant improvement in classification rates using rather simple phase-related features and standard K-nearest-neighbors and random-forest classifiers on a standard BCI dataset. The average performance improved by 4-7% (in the absence of additive noise) and by 8-12% (in the presence of additive noise). The significance of these improvements was statistically confirmed by a paired-sample t-test, with p-values of 0.01 and 0.03, respectively. The proposed method for EEG phase calculation is very generic and may be applied to other EEG phase-based studies.
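A minimal sketch of the ensemble idea, assuming a Butterworth narrow-band filter whose band edges (rather than the zero-pole loci directly, as in the paper) are randomly perturbed before analytic-signal phase extraction and phasor averaging:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def robust_eeg_phase(eeg, fs, band=(8.0, 12.0), n_ensemble=50, jitter=0.05):
    """Monte Carlo EEG phase: average analytic-signal phasors over an
    ensemble of slightly perturbed narrow-band filters."""
    rng = np.random.default_rng(0)
    phasors = np.zeros(len(eeg), dtype=complex)
    for _ in range(n_ensemble):
        lo = band[0] * (1.0 + jitter * rng.standard_normal())
        hi = band[1] * (1.0 + jitter * rng.standard_normal())
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        phasors += np.exp(1j * np.angle(hilbert(filtfilt(b, a, eeg))))
    return np.angle(phasors / n_ensemble)              # robust instantaneous phase

# Example on a noisy 10 Hz oscillation; fs = 250 Hz is an assumption.
fs = 250.0
t = np.arange(0, 4, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.default_rng(1).standard_normal(t.size)
phase = robust_eeg_phase(eeg, fs)
```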
Technical note: Design flood under hydrological uncertainty
NASA Astrophysics Data System (ADS)
Botto, Anna; Ganora, Daniele; Claps, Pierluigi; Laio, Francesco
2017-07-01
Planning and verification of hydraulic infrastructures require a design estimate of hydrologic variables, usually provided by frequency analysis in which hydrologic uncertainty is neglected. However, when hydrologic uncertainty is accounted for, the design flood value for a specific return period is no longer a unique value, but is represented by a distribution of values. As a consequence, the design flood is no longer univocally defined, making the design process undetermined. The Uncertainty Compliant Design Flood Estimation (UNCODE) procedure is a novel approach that, starting from a range of possible design flood estimates obtained in uncertain conditions, converges to a single design value. This is obtained through a cost-benefit criterion with additional constraints, solved numerically in a simulation framework. This paper contributes to promoting a practical use of the UNCODE procedure without resorting to numerical computation. A modified procedure is proposed that uses a correction coefficient to modify the standard (i.e., uncertainty-free) design value on the basis of sample length and return period only. The procedure is robust and parsimonious, as it does not require additional parameters with respect to the traditional uncertainty-free analysis. Simple equations to compute the correction term are provided for a number of probability distributions commonly used to represent the flood frequency curve. The UNCODE procedure, when coupled with this simple correction factor, provides a robust way to manage hydrologic uncertainty and to go beyond the use of traditional safety factors. With all other parameters being equal, an increase in the sample length reduces the correction factor, and thus the construction costs, while still keeping the same safety level.
Eash, David A.; Barnes, Kimberlee K.
2017-01-01
A statewide study was conducted to develop regression equations for estimating six selected low-flow frequency statistics and harmonic mean flows for ungaged stream sites in Iowa. The estimation equations developed for the six low-flow frequency statistics include: the annual 1-, 7-, and 30-day mean low flows for a recurrence interval of 10 years, the annual 30-day mean low flow for a recurrence interval of 5 years, and the seasonal (October 1 through December 31) 1- and 7-day mean low flows for a recurrence interval of 10 years. Estimation equations also were developed for the harmonic-mean-flow statistic. Estimates of these seven selected statistics are provided for 208 U.S. Geological Survey continuous-record streamgages using data through September 30, 2006. The study area comprises streamgages located within Iowa and 50 miles beyond the State's borders. Because trend analyses indicated statistically significant positive trends when considering the entire period of record for the majority of the streamgages, the longest, most recent period of record without a significant trend was determined for each streamgage for use in the study. The median number of years of record used to compute each of these seven selected statistics was 35. Geographic information system software was used to measure 54 selected basin characteristics for each streamgage. Following the removal of two streamgages from the initial data set, data collected for 206 streamgages were compiled to investigate three approaches for regionalization of the seven selected statistics. Regionalization, a process using statistical regression analysis, provides a relation for efficiently transferring information from a group of streamgages in a region to ungaged sites in the region. The three regionalization approaches tested included statewide, regional, and region-of-influence regressions. For the regional regression, the study area was divided into three low-flow regions on the basis of hydrologic characteristics, landform regions, and soil regions. A comparison of root mean square errors and average standard errors of prediction for the statewide, regional, and region-of-influence regressions determined that the regional regression provided the best estimates of the seven selected statistics at ungaged sites in Iowa. Because a significant number of streams in Iowa reach zero flow as their minimum flow during low-flow years, four different types of regression analyses were used: left-censored, logistic, generalized-least-squares, and weighted-least-squares regression. A total of 192 streamgages were included in the development of 27 regression equations for the three low-flow regions. For the northeast and northwest regions, a censoring threshold was used to develop 12 left-censored regression equations to estimate the 6 low-flow frequency statistics for each region. For the southern region a total of 12 regression equations were developed; 6 logistic regression equations were developed to estimate the probability of zero flow for the 6 low-flow frequency statistics and 6 generalized least-squares regression equations were developed to estimate the 6 low-flow frequency statistics, if nonzero flow is estimated first by use of the logistic equations. A weighted-least-squares regression equation was developed for each region to estimate the harmonic-mean-flow statistic. 
Average standard errors of estimate for the left-censored equations for the northeast region range from 64.7 to 88.1 percent and for the northwest region range from 85.8 to 111.8 percent. Misclassification percentages for the logistic equations for the southern region range from 5.6 to 14.0 percent. Average standard errors of prediction for generalized least-squares equations for the southern region range from 71.7 to 98.9 percent and pseudo coefficients of determination for the generalized-least-squares equations range from 87.7 to 91.8 percent. Average standard errors of prediction for weighted-least-squares equations developed for estimating the harmonic-mean-flow statistic for each of the three regions range from 66.4 to 80.4 percent. The regression equations are applicable only to stream sites in Iowa with low flows not significantly affected by regulation, diversion, or urbanization and with basin characteristics within the range of those used to develop the equations. If the equations are used at ungaged sites on regulated streams, or on streams affected by water-supply and agricultural withdrawals, then the estimates will need to be adjusted by the amount of regulation or withdrawal to estimate the actual flow conditions if that is of interest. Caution is advised when applying the equations for basins with characteristics near the applicable limits of the equations and for basins located in karst topography. A test of two drainage-area ratio methods using 31 pairs of streamgages, for the annual 7-day mean low-flow statistic for a recurrence interval of 10 years, indicates a weighted drainage-area ratio method provides better estimates than regional regression equations for an ungaged site on a gaged stream in Iowa when the drainage-area ratio is between 0.5 and 1.4. These regression equations will be implemented within the U.S. Geological Survey StreamStats web-based geographic-information-system tool. StreamStats allows users to click on any ungaged site on a river and compute estimates of the seven selected statistics; in addition, 90-percent prediction intervals and the measured basin characteristics for the ungaged sites also are provided. StreamStats also allows users to click on any streamgage in Iowa and estimates computed for these seven selected statistics are provided for the streamgage.
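As an illustration of the drainage-area ratio transfer mentioned above, a plain (unweighted) version is sketched below; the exponent is an assumption, since the weighted variant evaluated in the report blends the ratio estimate with the regression estimate in a way not reproduced here.

```python
def drainage_area_ratio_estimate(q_gaged, area_gaged, area_ungaged, b=1.0):
    """Transfer a low-flow statistic from a gaged to an ungaged site on the
    same stream: Q_ungaged = Q_gaged * (A_ungaged / A_gaged)**b.

    The report found this type of method preferable to regional regression
    when the drainage-area ratio falls between 0.5 and 1.4.
    """
    ratio = area_ungaged / area_gaged
    if not 0.5 <= ratio <= 1.4:
        raise ValueError("drainage-area ratio outside the 0.5-1.4 range")
    return q_gaged * ratio ** b

print(drainage_area_ratio_estimate(12.0, 250.0, 300.0))   # illustrative values
```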
NASA Astrophysics Data System (ADS)
Chen, Xiaogang; Wang, Yijun; Gao, Shangkai; Jung, Tzyy-Ping; Gao, Xiaorong
2015-08-01
Objective. Recently, canonical correlation analysis (CCA) has been widely used in steady-state visual evoked potential (SSVEP)-based brain-computer interfaces (BCIs) due to its high efficiency, robustness, and simple implementation. However, a method with which to make use of harmonic SSVEP components to enhance the CCA-based frequency detection has not been well established. Approach. This study proposed a filter bank canonical correlation analysis (FBCCA) method to incorporate fundamental and harmonic frequency components to improve the detection of SSVEPs. A 40-target BCI speller based on frequency coding (frequency range: 8-15.8 Hz, frequency interval: 0.2 Hz) was used for performance evaluation. To optimize the filter bank design, three methods (M1: sub-bands with equally spaced bandwidths; M2: sub-bands corresponding to individual harmonic frequency bands; M3: sub-bands covering multiple harmonic frequency bands) were proposed for comparison. Classification accuracy and information transfer rate (ITR) of the three FBCCA methods and the standard CCA method were estimated using an offline dataset from 12 subjects. Furthermore, an online BCI speller adopting the optimal FBCCA method was tested with a group of 10 subjects. Main results. The FBCCA methods significantly outperformed the standard CCA method. The method M3 achieved the highest classification performance. At a spelling rate of ~33.3 characters/min, the online BCI speller obtained an average ITR of 151.18 ± 20.34 bits/min. Significance. By incorporating the fundamental and harmonic SSVEP components in target identification, the proposed FBCCA method significantly improves the performance of the SSVEP-based BCI, and thereby facilitates its practical applications such as high-speed spelling.
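The core of FBCCA can be sketched compactly: band-pass the EEG into sub-bands, run CCA between each sub-band and a sine-cosine reference at the candidate frequency and its harmonics, and combine the squared correlations with weights of the form k^(-a) + b as in the paper. The sub-band design below follows the M3-style bands spanning 8k-88 Hz; the sampling rate (assumed at least 250 Hz) and filter order are illustrative.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.cross_decomposition import CCA

def fbcca_score(eeg, fs, freq, n_bands=5, n_harm=4, a=1.25, b=0.25):
    """FBCCA score for one candidate SSVEP frequency.

    eeg: (n_channels, n_samples). The detected target is the candidate
    frequency maximising this score.
    """
    t = np.arange(eeg.shape[1]) / fs
    ref = np.vstack([f(2 * np.pi * freq * h * t)
                     for h in range(1, n_harm + 1)
                     for f in (np.sin, np.cos)]).T
    score = 0.0
    for k in range(1, n_bands + 1):
        bb, ab = butter(4, [8.0 * k / (fs / 2), 88.0 / (fs / 2)], btype="band")
        xk = filtfilt(bb, ab, eeg, axis=1).T            # (n_samples, n_channels)
        u, v = CCA(n_components=1).fit(xk, ref).transform(xk, ref)
        rho = np.corrcoef(u[:, 0], v[:, 0])[0, 1]       # canonical correlation
        score += (k ** -a + b) * rho ** 2               # sub-band weighting
    return score
```

Classification then reduces to evaluating this score over the 40 stimulus frequencies and picking the argmax.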
Instantaneous Frequency Attribute Comparison
NASA Astrophysics Data System (ADS)
Yedlin, M. J.; Margrave, G. F.; Ben Horin, Y.
2013-12-01
The instantaneous seismic data attribute provides a different means of seismic interpretation for all types of seismic data. It first came to the fore in exploration seismology in the classic paper of Taner et al. (1979), entitled "Complex seismic trace analysis". Subsequently a vast literature has accumulated on the subject, which has been given an excellent review by Barnes (1992). In this research we compare two different methods of computation of the instantaneous frequency. The first method is based on the original idea of Taner et al. (1979) and utilizes the derivative of the instantaneous phase of the analytic signal. The second method is based on the computation of the power centroid of the time-frequency spectrum, obtained using either the Gabor transform as computed by Margrave et al. (2011) or the Stockwell transform as described by Stockwell et al. (1996). We apply both methods to exploration seismic data and to the DPRK events recorded in 2006 and 2013. In applying the classical analytic-signal technique, which is known to be unstable due to the division by the square of the envelope, we incorporate the stabilization and smoothing method proposed in the two papers of Fomel (2007). This method employs linear inverse-theory regularization coupled with the application of an appropriate data smoother. The centroid method is straightforward to apply and is based on the very complete theoretical analysis provided in elegant fashion by Cohen (1995). While the results of the two methods are very similar, noticeable differences are seen at the data edges. This is most likely due to the edge effects of the smoothing operator in the Fomel method, which is more computationally intensive when an optimal search of the regularization parameter is done. An advantage of the centroid method is the intrinsic smoothing of the data, which is inherent in the sliding-window application used in all short-time Fourier transform methods. The Fomel technique has a larger CPU run-time, resulting from the necessary matrix inversion. References: Barnes, Arthur E. "The calculation of instantaneous frequency and instantaneous bandwidth." Geophysics 57.11 (1992): 1520-1524. Fomel, Sergey. "Local seismic attributes." Geophysics 72.3 (2007): A29-A33. Fomel, Sergey. "Shaping regularization in geophysical-estimation problems." Geophysics 72.2 (2007): R29-R36. Stockwell, Robert Glenn, Lalu Mansinha, and R. P. Lowe. "Localization of the complex spectrum: the S transform." IEEE Transactions on Signal Processing 44.4 (1996): 998-1001. Taner, M. Turhan, Fulton Koehler, and R. E. Sheriff. "Complex seismic trace analysis." Geophysics 44.6 (1979): 1041-1063. Cohen, Leon. Time Frequency Analysis: Theory and Applications. USA: Prentice Hall, 1995. Margrave, Gary F., Michael P. Lamoureux, and David C. Henley. "Gabor deconvolution: Estimating reflectivity by nonstationary deconvolution of seismic data." Geophysics 76.3 (2011): W15-W30.
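The two estimators compared here are easy to state side by side. The sketch below computes the instantaneous frequency of a linear chirp both as the (unstabilized) phase derivative of the analytic signal and as the power centroid of a short-time Fourier spectrogram; the Fomel-style regularization and the Gabor/Stockwell transforms are not reproduced, so this shows only the two underlying definitions.

```python
import numpy as np
from scipy.signal import hilbert, spectrogram

fs = 500.0
t = np.arange(0, 4.0, 1.0 / fs)
x = np.cos(2 * np.pi * (10.0 * t + 5.0 * t ** 2))    # true f(t) = 10 + 10 t Hz

# Method 1: derivative of the instantaneous phase of the analytic signal.
phase = np.unwrap(np.angle(hilbert(x)))
f_analytic = np.gradient(phase) * fs / (2 * np.pi)

# Method 2: power centroid (first moment) of the time-frequency spectrum.
f, tau, S = spectrogram(x, fs=fs, nperseg=128, noverlap=120)
f_centroid = (f[:, None] * S).sum(axis=0) / S.sum(axis=0)
```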
Royston, Thomas J.; Dai, Zoujun; Chaunsali, Rajesh; Liu, Yifei; Peng, Ying; Magin, Richard L.
2011-01-01
Previous studies of the first author and others have focused on low audible frequency (<1 kHz) shear and surface wave motion in and on a viscoelastic material comprised of or representative of soft biological tissue. A specific case considered has been surface (Rayleigh) wave motion caused by a circular disk located on the surface and oscillating normal to it. Different approaches to identifying the type and coefficients of a viscoelastic model of the material based on these measurements have been proposed. One approach has been to optimize coefficients in an assumed viscoelastic model type to match measurements of the frequency-dependent Rayleigh wave speed. Another approach has been to optimize coefficients in an assumed viscoelastic model type to match the complex-valued frequency response function (FRF) between the excitation location and points at known radial distances from it. In the present article, the relative merits of these approaches are explored theoretically, computationally, and experimentally. It is concluded that matching the complex-valued FRF may provide a better estimate of the viscoelastic model type and parameter values; though, as the studies herein show, there are inherent limitations to identifying viscoelastic properties based on surface wave measurements. PMID:22225067
Subjective frequency estimates for 2,938 monosyllabic words.
Balota, D A; Pilotti, M; Cortese, M J
2001-06-01
Subjective frequency estimates for a large sample of monosyllabic English words were collected from 574 young adults (undergraduate students) and from a separate group of 1,590 adults of varying ages and educational backgrounds. Estimates from the latter group were collected via the Internet. In addition, 90 healthy older adults provided estimates for a random sample of 480 of these words. All groups rated each word with respect to its estimated frequency of encounter on a 7-point scale, ranging from never encountered to encountered several times a day. The young and older groups also rated each word with respect to the frequency of encounters in different perceptual domains (e.g., reading, hearing, writing, or speaking). The results of regression analyses indicated that objective log frequency and meaningfulness accounted for most of the variance in subjective frequency estimates, whereas neighborhood size accounted for the least amount of variance in the ratings. The predictive power of log frequency and meaningfulness depended on the level of subjective frequency estimates: meaningfulness was a better predictor of subjective frequency for uncommon words, whereas log frequency was a better predictor for common words. Our discussion focuses on the utility of subjective frequency estimates compared with other estimates of familiarity. The raw subjective frequency data for all words are available at http://www.artsci.wustl.edu/dbalota/labpub.html.
A tool for the estimation of the distribution of landslide area in R
NASA Astrophysics Data System (ADS)
Rossi, M.; Cardinali, M.; Fiorucci, F.; Marchesini, I.; Mondini, A. C.; Santangelo, M.; Ghosh, S.; Riguer, D. E. L.; Lahousse, T.; Chang, K. T.; Guzzetti, F.
2012-04-01
We have developed a tool in R (the free software environment for statistical computing, http://www.r-project.org/) to estimate the probability density and the frequency density of landslide area. The tool implements parametric and non-parametric approaches, including: (i) Histogram Density Estimation (HDE), (ii) Kernel Density Estimation (KDE), and (iii) Maximum Likelihood Estimation (MLE). The tool is available as a standard Open Geospatial Consortium (OGC) Web Processing Service (WPS), and is accessible through the web using different GIS software clients. We tested the tool to compare Double Pareto and Inverse Gamma models for the probability density of landslide area in different geological, morphological and climatological settings, and to compare landslides shown in inventory maps prepared using different mapping techniques, including (i) field mapping, (ii) visual interpretation of monoscopic and stereoscopic aerial photographs, (iii) visual interpretation of monoscopic and stereoscopic VHR satellite images, and (iv) semi-automatic detection and mapping from VHR satellite images. Results show that both models are applicable in different geomorphological settings, and in most cases the two models provided very similar results. The non-parametric estimation methods (HDE and KDE) provided reasonable results for all the tested landslide datasets, whereas for some of the datasets MLE failed to provide a result owing to convergence problems. The two tested models (Double Pareto and Inverse Gamma) yielded very similar results for large and very large datasets (> 150 samples); differences in the modeling results were observed for small datasets affected by systematic biases. A distinct rollover was observed in all analyzed landslide datasets, except for a few datasets obtained from landslide inventories prepared through field mapping or by semi-automatic mapping from VHR satellite imagery. The tool can also be used to evaluate the probability density and the frequency density of landslide volume.
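Although the tool itself is an R web service, the estimators it wraps are standard; a minimal Python sketch using the Inverse Gamma model (scipy has no double Pareto distribution) and simulated areas looks like this:

```python
import numpy as np
from scipy import stats

# Simulated landslide areas (km^2); Inverse Gamma is one of the two models tested.
areas = stats.invgamma.rvs(a=1.4, scale=1e-3, size=500,
                           random_state=np.random.default_rng(1))

# MLE of the Inverse Gamma probability density (location fixed at zero).
a_hat, _, scale_hat = stats.invgamma.fit(areas, floc=0)

# Non-parametric alternatives implemented by the tool: HDE and KDE.
hde, edges = np.histogram(areas, bins="auto", density=True)
kde = stats.gaussian_kde(np.log(areas))       # log-space KDE suits heavy tails
```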
Robust detection, isolation and accommodation for sensor failures
NASA Technical Reports Server (NTRS)
Emami-Naeini, A.; Akhter, M. M.; Rock, S. M.
1986-01-01
The objective is to extend the recent advances in robust control system design of multivariable systems to sensor failure detection, isolation, and accommodation (DIA), and estimator design. This effort provides analysis tools to quantify the trade-off between performance robustness and DIA sensitivity, which are to be used to achieve higher levels of performance robustness for given levels of DIA sensitivity. An innovations-based DIA scheme is used. Estimators, which depend upon a model of the process and process inputs and outputs, are used to generate these innovations. Thresholds used to determine failure detection are computed based on bounds on modeling errors, noise properties, and the class of failures. The applicability of the newly developed tools is demonstrated on a multivariable aircraft turbojet engine example. A new concept called the threshold selector was developed; it represents a significant and innovative tool for the analysis and synthesis of DIA algorithms. The estimators were made robust by introduction of an internal model and by frequency shaping. The internal model provides asymptotically unbiased filter estimates. The incorporation of frequency shaping of the Linear Quadratic Gaussian cost functional modifies the estimator design to make it suitable for sensor failure DIA. The results are compared with previous studies which used thresholds that were selected empirically. Comparison of these two techniques on a nonlinear dynamic engine simulation shows improved performance of the new method compared to previous techniques.
Methodology for Estimation of Flood Magnitude and Frequency for New Jersey Streams
Watson, Kara M.; Schopp, Robert D.
2009-01-01
Methodologies were developed for estimating flood magnitudes at the 2-, 5-, 10-, 25-, 50-, 100-, and 500-year recurrence intervals for unregulated or slightly regulated streams in New Jersey. Regression equations that incorporate basin characteristics were developed to estimate flood magnitude and frequency for streams throughout the State by use of a generalized least-squares regression analysis. Relations between flood-frequency estimates based on streamflow-gaging-station discharge and basin characteristics were determined by multiple regression analysis and weighted by effective years of record. The State was divided into five hydrologically similar regions to refine the regression equations. The regression analysis indicated that flood discharge, as determined by the streamflow-gaging-station annual peak flows, is related to the drainage area, main channel slope, percentage of lake and wetland areas in the basin, population density, and the flood-frequency region, at the 95-percent confidence level. The standard errors of estimate for the various recurrence-interval floods ranged from 48.1 to 62.7 percent. Annual-maximum peak flows observed at streamflow-gaging stations through water year 2007 and basin characteristics determined using geographic information system techniques for 254 streamflow-gaging stations were used for the regression analysis. Drainage areas of the streamflow-gaging stations range from 0.18 to 779 mi2. Peak-flow data and basin characteristics for 191 streamflow-gaging stations located in New Jersey were used, along with peak-flow data for stations located in adjoining States, including 25 stations in Pennsylvania, 17 stations in New York, 16 stations in Delaware, and 5 stations in Maryland. Streamflow records for selected stations outside of New Jersey were included in the present study because hydrologic, physiographic, and geologic boundaries commonly extend beyond political boundaries. The StreamStats web application was developed cooperatively by the U.S. Geological Survey and the Environmental Systems Research Institute, Inc., and was designed for national implementation; it has recently been implemented for use in New Jersey. This program, used in conjunction with a geographic information system, provides computed values for selected basin characteristics, estimates of flood magnitudes and frequencies, and statistics for stream locations in New Jersey chosen by the user, whether the site is gaged or ungaged.
Computerized image analysis: estimation of breast density on mammograms
NASA Astrophysics Data System (ADS)
Zhou, Chuan; Chan, Heang-Ping; Petrick, Nicholas; Sahiner, Berkman; Helvie, Mark A.; Roubidoux, Marilyn A.; Hadjiiski, Lubomir M.; Goodsitt, Mitchell M.
2000-06-01
An automated image analysis tool is being developed for estimation of mammographic breast density, which may be useful for risk estimation or for monitoring breast density change in a prevention or intervention program. A mammogram is digitized using a laser scanner and the resolution is reduced to a pixel size of 0.8 mm X 0.8 mm. Breast density analysis is performed in three stages. First, the breast region is segmented from the surrounding background by an automated breast boundary-tracking algorithm. Second, an adaptive dynamic range compression technique is applied to the breast image to reduce the range of the gray level distribution in the low frequency background and to enhance the differences in the characteristic features of the gray level histogram for breasts of different densities. Third, rule-based classification is used to classify the breast images into several classes according to the characteristic features of their gray level histogram. For each image, a gray level threshold is automatically determined to segment the dense tissue from the breast region. The area of segmented dense tissue as a percentage of the breast area is then estimated. In this preliminary study, we analyzed the interobserver variation of breast density estimation by two experienced radiologists using BI-RADS lexicon. The radiologists' visually estimated percent breast densities were compared with the computer's calculation. The results demonstrate the feasibility of estimating mammographic breast density using computer vision techniques and its potential to improve the accuracy and reproducibility in comparison with the subjective visual assessment by radiologists.
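The final percentage computation is simple once the boundary tracking and threshold selection have run; a sketch under the assumption that a breast mask and a gray-level threshold are already available (the rule-based threshold selection is not reproduced):

```python
import numpy as np

def percent_breast_density(image, breast_mask, threshold):
    """Area of segmented dense tissue as a percentage of the breast area.

    image: 2-D gray-level mammogram; breast_mask: boolean array from the
    boundary-tracking step; threshold: gray level separating dense tissue.
    """
    breast_pixels = image[breast_mask]
    return 100.0 * np.count_nonzero(breast_pixels >= threshold) / breast_pixels.size
```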
Cohn, T.A.; Lane, W.L.; Baier, W.G.
1997-01-01
This paper presents the expected moments algorithm (EMA), a simple and efficient method for incorporating historical and paleoflood information into flood frequency studies. EMA can utilize three types of at-site flood information: systematic stream gage record; information about the magnitude of historical floods; and knowledge of the number of years in the historical period when no large flood occurred. EMA employs an iterative procedure to compute method-of-moments parameter estimates. Initial parameter estimates are calculated from systematic stream gage data. These moments are then updated by including the measured historical peaks and the expected moments, given the previously estimated parameters, of the below-threshold floods from the historical period. The updated moments result in new parameter estimates, and the last two steps are repeated until the algorithm converges. Monte Carlo simulations compare EMA, Bulletin 17B's [United States Water Resources Council, 1982] historically weighted moments adjustment, and maximum likelihood estimators when fitting the three parameters of the log-Pearson type III distribution. These simulations demonstrate that EMA is more efficient than the Bulletin 17B method, and that it is nearly as efficient as maximum likelihood estimation (MLE). The experiments also suggest that EMA has two advantages over MLE when dealing with the log-Pearson type III distribution: It appears that EMA estimates always exist and that they are unique, although neither result has been proven. EMA can be used with binomial or interval-censored data and with any distributional family amenable to method-of-moments estimation.
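To show the shape of the iteration, here is a deliberately simplified EMA for a normal distribution fitted to log-peaks with a single historical perception threshold; the log-Pearson type III case of the paper adds a skew parameter but updates moments in the same way. The truncated-moment formulas follow from standard normal results.

```python
import numpy as np
from scipy import stats

def ema_normal(systematic, historical_peaks, n_below, threshold, iters=100):
    """Expected moments algorithm, simplified to a normal model.

    systematic: log-peaks from the gaged record; historical_peaks:
    measured log-peaks exceeding `threshold`; n_below: historical years
    known only to have stayed below `threshold`.
    """
    obs = np.concatenate([systematic, historical_peaks])
    n_total = obs.size + n_below
    mu, sigma = obs.mean(), obs.std(ddof=1)        # initial estimates
    for _ in range(iters):
        z = (threshold - mu) / sigma
        lam = stats.norm.pdf(z) / stats.norm.cdf(z)
        e_x = mu - sigma * lam                     # E[X | X < T]
        e_x2 = mu ** 2 + sigma ** 2 - sigma * (threshold + mu) * lam
        m1 = (obs.sum() + n_below * e_x) / n_total          # updated moments
        m2 = ((obs ** 2).sum() + n_below * e_x2) / n_total
        mu, sigma = m1, np.sqrt(m2 - m1 ** 2)
    return mu, sigma
```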
Estimating the magnitude and frequency of floods in urban basins in Missouri
Southard, Rodney E.
2010-01-01
Streamgage flood-frequency analyses were done for 35 streamgages on urban streams in and adjacent to Missouri for estimation of the magnitude and frequency of floods in urban areas of Missouri. A log-Pearson Type-III distribution was fitted to the annual series of peak-flow data retrieved from the U.S. Geological Survey National Water Information System. For this report, the flood-frequency estimates are expressed in terms of annual exceedance probabilities of 50, 20, 10, 4, 2, 1, and 0.2 percent. Of the 35 streamgages, 30 are located in Missouri. The remaining five non-Missouri streamgages were added to the dataset to improve the range and applicability of the regression analyses. Ordinary least-squares regression was used to determine the best set of independent variables for the regression equations. Basin characteristics considered as independent variables in the ordinary least-squares regression analyses were selected on the basis of their theoretical relation to flood flows, a literature review of possible basin characteristics, and the ability to measure the basin characteristics using digital datasets and geographic information system technology. Results of the ordinary least-squares regressions were evaluated on the basis of Mallows' Cp statistic, the adjusted coefficient of determination, and the statistical significance of the independent variables. The independent variables of drainage area and percent impervious area were determined to be statistically significant and readily determined from existing digital datasets. The drainage-area variable was computed using the best elevation data available, either from a statewide 10-meter grid or from high-resolution elevation data in urban areas. The impervious-area variable was computed from the National Land Cover Dataset 2001 impervious area dataset. The National Land Cover Dataset 2001 impervious area data for each basin were compared to historical imagery and 7.5-minute topographic maps to verify that the national dataset represented the urbanization of the basin at the time streamgage data were collected. Eight streamgages had less urbanization during the period of time streamflow data were collected than was shown on the 2001 dataset. The impervious-area values for these eight urban basins were adjusted downward by as much as 23 percent to account for the additional urbanization since the streamflow data were collected. Weighted least-squares regression techniques were used to determine the final regression equations for the statewide urban flood-frequency equations. Weighted least-squares techniques improve regression equations by adjusting for different and varying lengths of streamflow records. The final flood-frequency equations for the 50-, 20-, 10-, 4-, 2-, 1-, and 0.2-percent annual exceedance probability floods for Missouri provide a technique for estimating peak flows on urban streams at gaged and ungaged sites. The applicability of the equations is limited by the range in basin characteristics used to develop the regression equations: the range in drainage area is 0.28 to 189 square miles, and the range in impervious area is 2.3 to 46.0 percent. Seven of the 35 selected streamgages were used to compare the results of the existing rural and urban equations to the urban equations presented in this report for the 1-percent annual exceedance probability. Results of the comparison indicate that the estimated peak flows from the urban equations in this report ranged from 3 to 52 percent higher than the results from the rural equations.
Comparing the estimated urban peak flows from this report to those from the existing urban equation developed in 1986 indicated that estimates ranged from 255 percent lower to 10 percent higher. The overall comparison between the current (2010) and 1986 urban equations indicates a reduction in estimated peak-flow values for the 1-percent annual exceedance probability flood.
Bravo, Hector R.; Jiang, Feng; Hunt, Randall J.
2002-01-01
Parameter estimation is a powerful way to calibrate models. While head data alone are often insufficient to estimate unique parameters due to model nonuniqueness, flow‐and‐heat‐transport modeling can constrain estimation and allow simultaneous estimation of boundary fluxes and hydraulic conductivity. In this work, synthetic and field models that did not converge when head data were used did converge when head and temperature were used. Furthermore, frequency domain analyses of head and temperature data allowed selection of appropriate modeling timescales. Inflows in the Wilton, Wisconsin, wetlands could be estimated over periods such as a growing season and over periods of a few days when heads were nearly steady and groundwater temperature varied during the day. While this methodology is computationally more demanding than traditional head calibration, the results gained are unobtainable using the traditional approach. These results suggest that temperature can efficiently supplement head data in systems where accurate flux calibration targets are unavailable.
NASA Technical Reports Server (NTRS)
Palumbo, Dan
2008-01-01
The lifetimes of coherent structures are derived from data correlated over a three-sensor array sampling streamwise sidewall pressure at high Reynolds number (>10^8). The data were acquired at subsonic, transonic and supersonic speeds aboard a Tupolev Tu-144. The lifetimes are computed from a variant of the correlation length termed the lifelength. Characteristic lifelengths are estimated by fitting a Gaussian distribution to the sensors' cross spectra and are shown to compare favorably with Efimtsov's prediction of correlation space scales. Lifelength distributions are computed in the time/frequency domain using an interval correlation technique on the continuous wavelet transform of the original time data. The median values of the lifelength distributions are found to be very close to the frequency-averaged result. The interval correlation technique is shown to allow the retrieval and inspection of the original time data of each event in the lifelength distributions, thus providing a means to locate and study the nature of the coherent structure in the turbulent boundary layer. The lifelength data are converted to lifetimes using the convection velocity. The lifetimes of events in the time/frequency domain are displayed in Lifetime Maps. The primary purpose of the paper is to validate these new analysis techniques so that they can be used with confidence to further characterize the behavior of coherent structures in the turbulent boundary layer.
Potential of neuro-fuzzy methodology to estimate noise level of wind turbines
NASA Astrophysics Data System (ADS)
Nikolić, Vlastimir; Petković, Dalibor; Por, Lip Yee; Shamshirband, Shahaboddin; Zamani, Mazdak; Ćojbašić, Žarko; Motamedi, Shervin
2016-01-01
Wind turbine noise has become a significant problem as the number of wind farms grows and renewable energy becomes one of the most influential energy sources. However, wind turbine noise generation and propagation are not fully understood. Mechanical noise of wind turbines can largely be neglected, since the aerodynamic noise of the blades is the main source of noise generation. Numerical simulation of the noise effects of a wind turbine can be a very challenging task. Therefore, in this article a soft-computing method is used to evaluate the noise level of wind turbines. The main goal of the study is to estimate wind turbine noise as a function of wind speed at different heights and of sound frequency. An adaptive neuro-fuzzy inference system (ANFIS) is used to estimate the wind turbine noise levels.
Carrier frequency offset estimation for an acoustic-electric channel using 16 QAM modulation
NASA Astrophysics Data System (ADS)
Cunningham, Michael T.; Anderson, Leonard A.; Wilt, Kyle R.; Chakraborty, Soumya; Saulnier, Gary J.; Scarton, Henry A.
2016-05-01
Acoustic-electric channels can be used to send data through metallic barriers, enabling communications where electromagnetic signals are ineffective. This paper considers an acoustic-electric channel that is formed by mounting piezoelectric transducers (PZTs) on metallic barriers that are separated by a thin water layer. The transducers are coupled to the barriers using epoxy, and the barriers are positioned to axially align the PZTs, maximizing energy transfer efficiency. The electrical signals are converted by the transmitting transducers into acoustic waves, which propagate through the elastic walls and water medium to the receiving transducers. The reverberation of the acoustic signals in these channels can produce multipath distortion with a significant delay spread that introduces inter-symbol interference (ISI) into the received signal. While the multipath effects can be severe, the channel does not change rapidly, which makes equalization easier. Here we implement a 16-QAM system on this channel, including a method for obtaining accurate carrier frequency offset (CFO) estimates in the presence of the quasi-static multipath propagation. A raised-power approach is considered but found to suffer from excessive data noise resulting from the ISI. An alternative approach that utilizes a pilot tone burst at the start of a data packet is used for CFO estimation and found to be effective; the autocorrelation method is used to estimate the frequency of the received burst. A real-time prototype of the 16-QAM system that uses a Texas Instruments MSP430 microcontroller-based transmitter and a personal computer-based receiver is presented along with performance results.
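The pilot-tone CFO estimator reduces to the classic lag-autocorrelation frequency estimate: the phase of the autocorrelation of complex baseband samples at lag L, scaled by fs / (2πL). A sketch with illustrative pilot and sampling parameters (the actual system constants are not given in the abstract):

```python
import numpy as np

def autocorr_frequency(x, fs, lag=1):
    """Frequency of a complex baseband tone from its lag autocorrelation:
    f = fs * angle(sum_n x[n] * conj(x[n - lag])) / (2 * pi * lag)."""
    r = np.sum(x[lag:] * np.conj(x[:-lag]))
    return fs * np.angle(r) / (2.0 * np.pi * lag)

fs = 48_000.0                                    # assumed sampling rate
n = np.arange(2048)
rng = np.random.default_rng(0)
pilot = (np.exp(2j * np.pi * 1_000.0 * n / fs)   # 1 kHz residual offset
         + 0.05 * (rng.standard_normal(n.size) + 1j * rng.standard_normal(n.size)))
print(autocorr_frequency(pilot, fs))             # ~1000 Hz, drives the CFO correction
```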
NASA Astrophysics Data System (ADS)
Song, H.; Huerta-Lopez, C. I.; Martinez-Cruzado, J. A.; Rodriguez-Lozoya, H. E.; Espinoza-Barreras, F.
2009-05-01
Results of an ongoing study to estimate the ground response upon weak and moderate earthquake excitations are presented. A reliable site characterization in terms of its soil properties and sub-soil layer configuration provides parameters required for a trustworthy estimation of the ground response upon dynamic loads. This study can be described by the following four steps: (1) Ambient noise measurements were collected at the study site, where a bridge was under construction between the cities of Tijuana and Ensenada in Mexico. The time series were collected using a six-channel recorder with a 16-bit ADC over a maximum voltage range of ±2.5 V; the recorder has optional settings for Butterworth/Bessel filters, gain, and sampling rate. The sensors were three-component (X, Y, Z) accelerometers with a sensitivity of 20 V/g, flat frequency response from DC to 200 Hz, and a full range of ±0.25 g. (2) Experimental H/V spectral ratios were computed to estimate the fundamental vibration frequency at the site. (3) Using the time-domain experimental H/V spectral ratios as well as the original recorded time series, the random decrement method was applied to estimate the fundamental frequency and damping of the site (system). (4) Finally, the theoretical H/V spectral ratios were obtained by means of the stiffness-matrix wave-propagation method. The interpretation of the obtained results was then compared with a geotechnical study available at the site.
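Step (2) amounts to a ratio of smoothed spectra; a minimal sketch of the experimental H/V computation from three-component ambient noise, using Welch averaging as the (assumed) spectral estimator:

```python
import numpy as np
from scipy.signal import welch

def hv_spectral_ratio(x, y, z, fs, nperseg=1024):
    """H/V ratio from two horizontal (x, y) and one vertical (z) ambient
    noise records; the H/V peak estimates the site fundamental frequency."""
    f, pxx = welch(x, fs, nperseg=nperseg)
    _, pyy = welch(y, fs, nperseg=nperseg)
    _, pzz = welch(z, fs, nperseg=nperseg)
    h = np.sqrt((pxx + pyy) / 2.0)          # quadratic mean of horizontals
    return f, h / np.sqrt(pzz)
```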
Kim, Bernard Y.; Huber, Christian D.; Lohmueller, Kirk E.
2017-01-01
The distribution of fitness effects (DFE) has considerable importance in population genetics. To date, estimates of the DFE come from studies using a small number of individuals. Thus, estimates of the proportion of moderately to strongly deleterious new mutations may be unreliable because such variants are unlikely to be segregating in the data. Additionally, the true functional form of the DFE is unknown, and estimates of the DFE differ significantly between studies. Here we present a flexible and computationally tractable method, called Fit∂a∂i, to estimate the DFE of new mutations using the site frequency spectrum from a large number of individuals. We apply our approach to the frequency spectrum of 1300 Europeans from the Exome Sequencing Project ESP6400 data set, 1298 Danes from the LuCamp data set, and 432 Europeans from the 1000 Genomes Project to estimate the DFE of deleterious nonsynonymous mutations. We infer significantly fewer (0.38–0.84 fold) strongly deleterious mutations with selection coefficient |s| > 0.01 and more (1.24–1.43 fold) weakly deleterious mutations with selection coefficient |s| < 0.001 compared to previous estimates. Furthermore, a DFE that is a mixture distribution of a point mass at neutrality plus a gamma distribution fits better than a gamma distribution in two of the three data sets. Our results suggest that nearly neutral forces play a larger role in human evolution than previously thought. PMID:28249985
Fliege, Herbert; Grimm, Anne; Eckhardt-Henn, Annegret; Gieler, Uwe; Martin, Katharina; Klapp, Burghard F
2007-01-01
The authors surveyed physicians for frequency estimates of factitious disorder among their patients. Twenty-six physicians in independent practice and 83 senior hospital consultants in internal medicine, surgery, neurology, and dermatology participated. They completed a questionnaire including the estimated 1-year prevalence of factitious disorder among their patients. Frequency estimates averaged 1.3% (0.0001%-15%). The number of patients treated correlated negatively with frequency estimates. Dermatologists and neurologists gave the highest estimates. One-third of the physicians rated themselves as insufficiently informed. Frequency estimates did not differ by information level. The estimated frequency is substantial and comparable to earlier findings. The authors discuss clinical implications.
2002-01-01
1-hour and proposed 8-hour National Ambient Air Quality Standards. Reactive biogenic (natural) volatile organic compounds emitted from plants have...uncertainty in predicting plant species composition and frequency. Isoprene emissions computed for the study area from the project’s high-resolution...Landcover Database (BELD 2), while monoterpene and other reactive volatile organic compound emission rates were almost 26% and 28% lower, respectively
Linear and non-linear interdependence of EEG and HRV frequency bands in human sleep.
Chaparro-Vargas, Ramiro; Dissanayaka, P Chamila; Patti, Chanakya Reddy; Schilling, Claudia; Schredl, Michael; Cvetkovic, Dean
2014-01-01
The characterisation of functional interdependencies of the autonomic nervous system (ANS) is of ever-growing interest for unveiling electroencephalographic (EEG) and Heart Rate Variability (HRV) interactions. This paper presents a biosignal processing approach as a supportive computational resource for the estimation of sleep dynamics. The application of linear and non-linear methods and statistical tests to 10 overnight polysomnographic (PSG) recordings allowed the computation of wavelet coherence and phase-locking values, in order to identify discerning features amongst the clinically healthy subjects. Our findings showed that neuronal oscillations θ, α and σ interact with cardiac power bands at mid-to-high levels of coherence and phase locking, particularly during NREM sleep stages.
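Of the two interdependence measures, the phase-locking value is the more compact to write down; a sketch for two series already resampled to a common rate, with the band and filter order as assumptions:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def phase_locking_value(sig1, sig2, fs, band):
    """PLV between two band-limited signals (e.g., an EEG band and an
    HRV series resampled to the EEG rate): 0 = no locking, 1 = perfect."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    p1 = np.angle(hilbert(filtfilt(b, a, sig1)))
    p2 = np.angle(hilbert(filtfilt(b, a, sig2)))
    return np.abs(np.mean(np.exp(1j * (p1 - p2))))
```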
EEG-based workload estimation across affective contexts
Mühl, Christian; Jeunet, Camille; Lotte, Fabien
2014-01-01
Workload estimation from electroencephalographic signals (EEG) offers a highly sensitive tool to adapt the human–computer interaction to the user state. To create systems that reliably work in the complexity of the real world, a robustness against contextual changes (e.g., mood), has to be achieved. To study the resilience of state-of-the-art EEG-based workload classification against stress we devise a novel experimental protocol, in which we manipulated the affective context (stressful/non-stressful) while the participant solved a task with two workload levels. We recorded self-ratings, behavior, and physiology from 24 participants to validate the protocol. We test the capability of different, subject-specific workload classifiers using either frequency-domain, time-domain, or both feature varieties to generalize across contexts. We show that the classifiers are able to transfer between affective contexts, though performance suffers independent of the used feature domain. However, cross-context training is a simple and powerful remedy allowing the extraction of features in all studied feature varieties that are more resilient to task-unrelated variations in signal characteristics. Especially for frequency-domain features, across-context training is leading to a performance comparable to within-context training and testing. We discuss the significance of the result for neurophysiology-based workload detection in particular and for the construction of reliable passive brain–computer interfaces in general. PMID:24971046
Broad-band seismic analysis and modeling of the 2015 Taan Fjord, Alaska landslide using Instaseis
NASA Astrophysics Data System (ADS)
Gualtieri, Lucia; Ekström, Göran
2018-06-01
We carry out a broad-band analysis of the seismic signals generated by a massive landslide that occurred near Icy Bay (Alaska) on 2015 October 17. The event generated seismic signals recorded globally. Using Instaseis, a recently developed tool for rapid computation of complete broad-band synthetic seismograms, we simulate the seismic wave propagation between the event and five seismic stations located around the landslide. By modeling the broad-band seismograms in the period band 5-200 s, we reconstruct by inversion a time-varying point force to characterize the landslide time history. We compute the broad-band spectrum of the landslide force history and find that it has a corner period of about 100 s, corresponding to the duration of sliding. In contrast with standard earthquakes, the landslide force spectrum below the corner frequency decays toward low frequencies as ω, while the spectral amplitudes at higher frequencies are proportional to ω^-2, similar to the rate of spectral decay seen in earthquakes. From the inverted force history and an estimate of the final run-out distance, we deduce the mass, the trajectory and the characteristics of the landslide dynamics associated with the centre of mass, such as acceleration, velocity, displacement and friction. Inferring an effective run-out distance of ~900 m from a satellite image, we estimate a landslide mass of ~150 million metric tons.
Arica, Sami; Firat Ince, N; Bozkurt, Abdi; Tewfik, Ahmed H; Birand, Ahmet
2011-07-01
Pharmacological measurement of baroreflex sensitivity (BRS) is widely accepted and used in clinical practice. Following the introduction of pharmacologically induced BRS (p-BRS), alternative assessment methods that eliminate the use of drugs became a central interest of the cardiovascular research community. In this study we investigated whether p-BRS using phenylephrine injection can be predicted from non-pharmacological time- and frequency-domain indices computed from electrocardiogram (ECG) and blood pressure (BP) data acquired during deep breathing. In this scheme, ECG and BP data were recorded from 16 subjects in a two-phase experiment. In the first phase the subjects performed irregular deep breaths, and in the second phase the subjects received a phenylephrine injection. From the first phase of the experiment, a large pool of predictors describing the local characteristics of the beat-to-beat interval tachogram (RR) and systolic blood pressure (SBP) was extracted in the time and frequency domains. A subset of these indices was selected on twelve subjects using an exhaustive search fused with a leave-one-subject-out cross-validation procedure. The selected indices were used to predict the p-BRS on the remaining four test subjects. Multivariate regression was used in all prediction steps. The algorithm achieved its best prediction accuracy with only two features extracted from the deep breathing data, one from the frequency domain and the other from the time domain. The normalized L2-norm error was computed as 22.9% and the correlation coefficient was 0.97 (p=0.03). These results suggest that p-BRS can be estimated from non-pharmacological indices computed from ECG and invasive BP data related to deep breathing. Copyright © 2011 Elsevier Ltd. All rights reserved.
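For illustration only, the selection step described above (exhaustive search over small feature subsets fused with leave-one-subject-out cross-validation, scored by multivariate regression) could be sketched as follows; the function name, subset-size cap, and error metric are assumptions, not the authors' exact procedure.

```python
import numpy as np
from itertools import combinations
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneGroupOut

def select_indices(X, y, groups, max_size=2):
    """Exhaustive search over feature subsets of size <= max_size,
    scored by leave-one-subject-out (LOSO) mean squared error."""
    logo = LeaveOneGroupOut()
    best_subset, best_err = None, np.inf
    for k in range(1, max_size + 1):
        for subset in combinations(range(X.shape[1]), k):
            errs = []
            for tr, te in logo.split(X, y, groups):
                model = LinearRegression().fit(X[np.ix_(tr, subset)], y[tr])
                pred = model.predict(X[np.ix_(te, subset)])
                errs.append(np.mean((pred - y[te]) ** 2))
            if np.mean(errs) < best_err:
                best_subset, best_err = subset, np.mean(errs)
    return best_subset, best_err
```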
Estimating the effective spatial resolution of an AVHRR time series
Meyer, D.J.
1996-01-01
A method is proposed to estimate the spatial degradation of geometrically rectified AVHRR data resulting from misregistration and off-nadir viewing, and to infer the cumulative effect of these degradations over time. Misregistrations are measured using high-resolution imagery as a geometric reference, and pixel sizes are computed directly from satellite zenith angles. The influence of neighbouring features on a nominal 1 km by 1 km pixel over a given site is estimated from the above information and expressed as a spatial distribution whose spatial frequency response is used to define an effective field-of-view (EFOV) for a time series. In a demonstration of the technique applied to images from the Conterminous U.S. AVHRR data set, an EFOV of 3.1 km in the east-west dimension and 19 km in the north-south dimension was estimated for a time series accumulated over a grasslands test site.
Relationships between electronic game play, obesity, and psychosocial functioning in young men.
Wack, Elizabeth; Tantleff-Dunn, Stacey
2009-04-01
Most estimates suggest that American youth spend a large amount of time playing video and computer games, spurring researchers to examine the impact these media have on various aspects of health and psychosocial functioning. The current study investigated relationships between frequency of electronic game play and obesity, the social/emotional context of electronic game play, and academic performance among 219 college-aged males. Current game players reported a weekly average of 9.73 hours of game play, with almost 10% of current players reporting an average of 35 hours of play per week. Results indicated that frequency of play was not significantly related to body mass index or grade point average. However, there was a significant positive correlation between frequency of play and self-reported frequency of playing when bored, lonely, or stressed. As opposed to the general conception of electronic gaming as detrimental to functioning, the results suggest that gaming among college-aged men may provide a healthy source of socialization, relaxation, and coping.
Probing the Quantum States of a Single Atom Transistor at Microwave Frequencies.
Tettamanzi, Giuseppe Carlo; Hile, Samuel James; House, Matthew Gregory; Fuechsle, Martin; Rogge, Sven; Simmons, Michelle Y
2017-03-28
The ability to apply gigahertz frequencies to control the quantum state of a single P atom is an essential requirement for the fast gate pulsing needed for qubit control in donor-based silicon quantum computation. Here, we demonstrate this with nanosecond accuracy in an all epitaxial single atom transistor by applying excitation signals at frequencies up to ≈13 GHz to heavily phosphorus-doped silicon leads. These measurements allow the differentiation between the excited states of the single atom and the density of states in the one-dimensional leads. Our pulse spectroscopy experiments confirm the presence of an excited state at an energy ≈9 meV, consistent with the first excited state of a single P donor in silicon. The relaxation rate of this first excited state to the ground state is estimated to be larger than 2.5 GHz, consistent with theoretical predictions. These results represent a systematic investigation of how an atomically precise single atom transistor device behaves under radio frequency excitations.
High frequency estimation of 2-dimensional cavity scattering
NASA Astrophysics Data System (ADS)
Dering, R. S.
1984-12-01
This thesis develops a simple ray tracing approximation for the high frequency scattering from a two-dimensional cavity. Whereas many other cavity scattering algorithms are very time consuming, this method is very swift. The analytical development of the ray tracing approach is performed in great detail, and it is shown how the radar cross section (RCS) depends on the cavity's length and width along with the radar wave's angle of incidence. This explains why the cavity's RCS oscillates as a function of incident angle. The RCS of a two dimensional cavity was measured experimentally, and these results were compared to computer calculations based on the high frequency ray tracing theory. The comparison was favorable in the sense that angular RCS minima and maxima were exactly predicted even though accuracy of the RCS magnitude decreased for incident angles far off-axis. Overall, once this method is extended to three dimensions, the technique shows promise as a fast first approximation of high frequency cavity scattering.
NASA Astrophysics Data System (ADS)
Prikner, K.
1996-07-01
Three series of simultaneous pulsation measurements (f < 0.06 Hz) on the Freja satellite and at the Budkov Observatory have been spectrally processed (FFT) in 6-min intervals of Freja's transits near the local Budkov field line. Doppler-shifted, weighted spectral-peak frequencies, determined in both transverse magnetic components in the mean field-aligned coordinate system on Freja, allowed the estimation, by comparison with the stable frequency at Budkov, of fundamental frequencies of the local magnetic-field-line resonance, which ranged from 13 to 17 mHz in the two pulsation events analyzed, with Kp = 2+ to 0+. The ratio of total amplitudes of the spectral-pulsation components on the ground and on Freja at an altitude of ~1700 km (values <0.7) characterizes the transmissivity of the ionosphere. In the Pc3 frequency range this correlates well with simulation computations using models of the ionosphere under low solar activity.
Geodynamic Effects of Ocean Tides: Progress and Problems
NASA Technical Reports Server (NTRS)
Ray, Richard
1999-01-01
Satellite altimetry, particularly Topex/Poseidon, has markedly improved our knowledge of global tides, thereby allowing significant progress on some longstanding problems in geodynamics. This paper reviews some of that progress. Emphasis is given to global-scale problems, particularly those falling within the mandate of the new IERS Special Bureau for Tides: angular momentum, gravitational field, geocenter motion. For this discussion I use primarily the new ocean tide solutions GOT99.2, CSR4.0, and TPXO.4 (for which G. Egbert has computed inverse-theoretic error estimates), and I concentrate on new results in angular momentum and gravity and their solid-earth implications. One example is a new estimate of the effective tidal Q at the M_2 frequency, based on combining these ocean models with tidal estimates from satellite laser ranging. Three especially intractable problems are also addressed: (1) determining long-period tides in the Arctic [large unknown effect on the inertia tensor, particularly for Mf]; (2) determining the global psi_1 tide [large unknown effect on interpretations of gravimetry for the near-diurnal free wobble]; and (3) determining radiational tides [large unknown temporal variations at important frequencies]. Problems (2) and (3) are related.
Bayesian inference on EMRI signals using low frequency approximations
NASA Astrophysics Data System (ADS)
Ali, Asad; Christensen, Nelson; Meyer, Renate; Röver, Christian
2012-07-01
Extreme mass ratio inspirals (EMRIs) are thought to be one of the most exciting gravitational wave sources to be detected with LISA. Due to their complicated nature and weak amplitudes, the detection and parameter estimation of such sources are challenging tasks. In this paper we present a statistical methodology based on Bayesian inference in which the estimation of parameters is carried out by advanced Markov chain Monte Carlo (MCMC) algorithms such as parallel tempering MCMC. We analysed high- and medium-mass EMRI systems that fall well inside the low-frequency range of LISA. In the context of the Mock LISA Data Challenges, our investigation and results are also the first instance in which a fully Markovian algorithm is applied for EMRI searches. Results show that our algorithm worked well in recovering EMRI signals from different (simulated) LISA data sets having single and multiple EMRI sources and holds great promise for posterior computation under more realistic conditions. The search and estimation methods presented in this paper are general in their nature, and can be applied in any other scenario such as AdLIGO, AdVIRGO and Einstein Telescope with their respective response functions.
Pappachan, Bobby K; Caesarendra, Wahyu; Tjahjowidodo, Tegoeh; Wijaya, Tomi
2017-01-01
Process monitoring using indirect methods relies on the usage of sensors. Using sensors to acquire vital process-related information also presents the problem of big-data management and analysis. Due to uncertainty in the frequency of events occurring, a higher sampling rate is often used in real-time monitoring applications to increase the chances of capturing and understanding all possible events related to the process. Advanced signal processing methods are used to further decipher meaningful information from the acquired data. In this research work, the power spectral density (PSD) of sensor data acquired at sampling rates between 40 and 51.2 kHz was calculated, and the correlation between PSD and the completed number of cycles/passes is presented. Here, progress in the number of cycles/passes is the event this research work intends to classify, and the algorithm used to compute the PSD is Welch's estimate method. A comparison between Welch's estimate method and statistical methods is also discussed. A clear correlation was observed using Welch's estimate to classify the number of cycles/passes. The paper also succeeds in distinguishing the vibration signal generated by the spindle from the vibration signal acquired during the finishing process. PMID:28556809
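As a minimal sketch of the PSD computation named above: Welch's method averages windowed periodograms of overlapping segments, trading frequency resolution for reduced variance. The signal and parameters below are illustrative stand-ins for the sensor data, not the authors' settings.

```python
import numpy as np
from scipy.signal import welch

fs = 51_200  # Hz, upper end of the sampling rates quoted above
t = np.arange(0, 1.0, 1 / fs)
# Stand-in for a vibration record: a spindle tone plus broadband noise
x = np.sin(2 * np.pi * 500 * t) + 0.5 * np.random.randn(t.size)

# Welch's estimate: average periodograms of overlapping Hann-windowed segments
f, pxx = welch(x, fs=fs, window="hann", nperseg=4096, noverlap=2048)
print(f"Dominant frequency: {f[np.argmax(pxx)]:.1f} Hz")
```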
Respiratory rate estimation from the built-in cameras of smartphones and tablets.
Nam, Yunyoung; Lee, Jinseok; Chon, Ki H
2014-04-01
This paper presents a method for respiratory rate estimation using the camera of a smartphone, an MP3 player or a tablet. The iPhone 4S, iPad 2, iPod 5, and Galaxy S3 were used to estimate respiratory rates from the pulse signal derived from a finger placed on the camera lens of these devices. Prior to estimation of respiratory rates, we systematically investigated the optimal signal quality of these 4 devices by dividing the video camera's resolution into 12 different pixel regions. We also investigated the optimal signal quality among the red, green and blue color bands for each of these 12 pixel regions for all four devices. It was found that the green color band provided the best signal quality for all 4 devices and that the left half VGA pixel region was found to be the best choice only for iPhone 4S. For the other three devices, smaller 50 × 50 pixel regions were found to provide better or equally good signal quality than the larger pixel regions. Using the green signal and the optimal pixel regions derived from the four devices, we then investigated the suitability of the smartphones, the iPod 5 and the tablet for respiratory rate estimation using three different computational methods: the autoregressive (AR) model, variable-frequency complex demodulation (VFCDM), and continuous wavelet transform (CWT) approaches. Specifically, these time-varying spectral techniques were used to identify the frequency and amplitude modulations as they contain respiratory rate information. To evaluate the performance of the three computational methods and the pixel regions for the optimal signal quality, data were collected from 10 healthy subjects. It was found that the VFCDM method provided good estimates of breathing rates that were in the normal range (12-24 breaths/min). Both CWT and VFCDM methods provided reasonably good estimates for breathing rates that were higher than 26 breaths/min but their accuracy degraded concomitantly with increased respiratory rates. Overall, the VFCDM method provided the best results for accuracy (smaller median error), consistency (smaller interquartile range of the median value), and computational efficiency (less than 0.5 s on 1 min of data using a MATLAB implementation) to extract breathing rates that varied from 12 to 36 breaths/min. The AR method provided the least accurate respiratory rate estimation among the three methods. This work illustrates that both heart rates and normal breathing rates can be accurately derived from a video signal obtained from smartphones, an MP3 player and tablets with or without a flashlight.
NASA Astrophysics Data System (ADS)
Sun, Xiucong; Han, Chao; Chen, Pei
2017-10-01
Spaceborne Global Positioning System (GPS) receivers are widely used for orbit determination of low-Earth-orbiting (LEO) satellites. With the improvement of measurement accuracy, single-frequency receivers are recently considered for low-cost small satellite missions. In this paper, a Schmidt-Kalman filter which processes single-frequency GPS measurements and broadcast ephemerides is proposed for real-time precise orbit determination of LEO satellites. The C/A code and L1 phase are linearly combined to eliminate the first-order ionospheric effects. Systematic errors due to ionospheric delay residual, group delay variation, phase center variation, and broadcast ephemeris errors, are lumped together into a noise term, which is modeled as a first-order Gauss-Markov process. In order to reduce computational complexity, the colored noise is considered rather than estimated in the orbit determination process. This ensures that the covariance matrix accurately represents the distribution of estimation errors without increasing the dimension of the state vector. The orbit determination algorithm is tested with actual flight data from the single-frequency GPS receiver onboard China's small satellite Shi Jian-9A (SJ-9A). Preliminary results using a 7-h data arc on October 25, 2012 show that the Schmidt-Kalman filter performs better than the standard Kalman filter in terms of accuracy.
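Two ingredients of the filter are stated plainly enough to sketch: the code/phase combination that cancels the first-order ionosphere (often called GRAPHIC), and the first-order Gauss-Markov model for the lumped systematic errors. The following is a hedged illustration; the time constant and noise level are placeholders, not the paper's values.

```python
import numpy as np

def graphic(code_m, phase_m):
    """GRAPHIC combination: averaging the C/A code (+I delay) and the L1
    carrier phase (-I advance) cancels the first-order ionospheric term,
    at the cost of a float ambiguity in the result."""
    return 0.5 * (code_m + phase_m)

def gauss_markov_step(x, dt, tau, sigma, rng):
    """One propagation step of a first-order Gauss-Markov process used to
    lump the remaining systematic errors. tau (s) and sigma (m) here are
    illustrative, not the paper's tuned values."""
    phi = np.exp(-dt / tau)
    q = sigma**2 * (1.0 - phi**2)   # keeps the steady-state variance sigma^2
    return phi * x + rng.standard_normal() * np.sqrt(q)
```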
Transmit beamforming for optimal second-harmonic generation.
Hoilund-Kaupang, Halvard; Masoy, Svein-Erik
2011-08-01
A simulation study of transmit ultrasound beams from several transducer configurations is conducted to compare second-harmonic imaging at 3.5 MHz and 11 MHz. Second-harmonic generation and the ability to suppress near-field echoes are compared. Each transducer configuration is defined by a chosen f-number and focal depth, and the transmit pressure is estimated to not exceed a mechanical index of 1.2. The medium resembles homogeneous muscle tissue with nonlinear elasticity and power-law attenuation. To improve computational efficiency, the KZK equation is utilized, and all transducers are circular-symmetric. Previous literature shows that second-harmonic generation is proportional to the square of the transmit pressure, and that transducer configurations with different transmit frequencies, but equal aperture and focal depth in terms of wavelengths, generate identical second-harmonic fields in terms of shape. Results verify this for a medium with attenuation proportional to f^1. For attenuation proportional to f^1.1, deviations are found, and the high frequency subsequently performs worse than the low frequency. The results suggest that high frequencies are less able to suppress near-field echoes in the presence of a heterogeneous body wall than low frequencies.
A multimodal approach to estimating vigilance using EEG and forehead EOG.
Zheng, Wei-Long; Lu, Bao-Liang
2017-04-01
Covert aspects of ongoing user mental states provide key context information for user-aware human computer interactions. In this paper, we focus on the problem of estimating the vigilance of users using EEG and EOG signals. The PERCLOS index as vigilance annotation is obtained from eye tracking glasses. To improve the feasibility and wearability of vigilance estimation devices for real-world applications, we adopt a novel electrode placement for forehead EOG and extract various eye movement features, which contain the principal information of traditional EOG. We explore the effects of EEG from different brain areas and combine EEG and forehead EOG to leverage their complementary characteristics for vigilance estimation. Considering that the vigilance of users is a dynamically changing process, because the intrinsic mental states of users involve temporal evolution, we introduce continuous conditional neural field and continuous conditional random field models to capture dynamic temporal dependency. We propose a multimodal approach to estimating vigilance by combining EEG and forehead EOG and incorporating the temporal dependency of vigilance into model training. The experimental results demonstrate that modality fusion can improve the performance compared with a single modality, that EOG and EEG contain complementary information for vigilance estimation, and that the temporal dependency-based models can enhance the performance of vigilance estimation. From the experimental results, we observe that theta and alpha frequency activities increase, while gamma frequency activities decrease, in drowsy states in contrast to awake states. The forehead setup allows for the simultaneous collection of EEG and EOG and achieves comparable performance using only four shared electrodes relative to the temporal and posterior sites.
Tape recorder effects on jitter and shimmer extraction.
Doherty, E T; Shipp, T
1988-09-01
To test for possible contamination of acoustic analyses by record/reproduce systems, five sine waves of fixed frequency and amplitude were sampled directly by a computer and recorded simultaneously on four different tape formats (audio and FM reel-to-reel, audio cassette, and video cassette using pulse code modulation). Recordings were digitized on playback and, together with the direct samples, analyzed for fundamental frequency, amplitude, jitter, and shimmer using a zero-crossing interpolation scheme. Distortion introduced by any of the data acquisition systems is negligible when extracting average fundamental frequency or average amplitude. For jitter and shimmer estimation, direct sampling or the use of a video cassette recorder with pulse code modulation is clearly superior. FM recorders, although not quite as accurate, provide a satisfactory alternative to those methods. Audio reel-to-reel recordings are marginally adequate for jitter analysis, whereas audio cassette recorders can introduce jitter and shimmer values that are greater than some reported values for normal talkers.
Demodulation Algorithms for the Ofdm Signals in the Time- and Frequency-Scattering Channels
NASA Astrophysics Data System (ADS)
Bochkov, G. N.; Gorokhov, K. V.; Kolobkov, A. V.
2016-06-01
We consider a method based on the generalized maximum-likelihood rule for solving the problem of reception of signals with orthogonal frequency division multiplexing of their harmonic components (OFDM signals) in time- and frequency-scattering channels. Coherent and incoherent demodulators that effectively use the time scattering due to fast fading of the signal are developed. Using computer simulation, we performed a comparative analysis of the proposed algorithms and well-known signal-reception algorithms with equalizers. The proposed symbol-by-symbol detector with decision feedback and restriction of the number of searched variants is shown to have the best bit-error-rate performance. It is shown that under conditions of limited accuracy in estimating the communication-channel parameters, the incoherent OFDM-signal detectors with differential phase-shift keying can ensure a better bit-error-rate performance than the coherent OFDM-signal detectors with absolute phase-shift keying.
Hallmann, Kirstin; Breuer, Christoph
2014-01-01
This article analyses sport participation using a demographic-economic model which was extended by the construct 'social recognition'. Social recognition was integrated into the model on the understanding that it is the purpose of each individual to maximise his or her utility. A computer-assisted telephone interview survey was conducted in the city of Rheinberg, Germany, producing an overall sample of n=1934. Regression analyses were performed to estimate the impact of socio-demographic and economic determinants and social recognition on sport participation. The results suggest that various socio-economic factors and social recognition are important determinants of sport participation on the one hand and of sport frequency on the other. Social recognition plays a significant yet different role for both sport participation and sport frequency. While friends' involvement with sport influences one's sport participation, parents' involvement with sport influences one's sport frequency.
Mousa-Pasandi, Mohammad E; Zhuge, Qunbi; Xu, Xian; Osman, Mohamed M; El-Sahn, Ziad A; Chagnon, Mathieu; Plant, David V
2012-07-02
We experimentally investigate the performance of a low-complexity non-iterative phase-noise-induced inter-carrier interference (ICI) compensation algorithm in reduced-guard-interval dual-polarization coherent-optical orthogonal-frequency-division-multiplexing (RGI-DP-CO-OFDM) transport systems. This interpolation-based ICI compensator estimates the time-domain phase noise samples by a linear interpolation between the CPE estimates of consecutive OFDM symbols. We experimentally study the performance of this scheme for a 28 Gbaud QPSK RGI-DP-CO-OFDM system employing a low-cost distributed feedback (DFB) laser. Experimental results using a DFB laser with a linewidth of 2.6 MHz demonstrate 24% and 13% improvement in transmission reach with respect to the conventional equalizer (CE) in the presence of weak and strong dispersion-enhanced phase noise (DEPN), respectively. A brief analysis of the computational complexity of this scheme in terms of the number of required complex multiplications is provided. This practical approach does not suffer from error propagation while enjoying low computational complexity.
NASA Astrophysics Data System (ADS)
Ishihara, Koichi; Asai, Yusuke; Kudo, Riichi; Ichikawa, Takeo; Takatori, Yasushi; Mizoguchi, Masato
2013-12-01
Multiuser multiple-input multiple-output (MU-MIMO) has been proposed as a means to improve spectrum efficiency for various future wireless communication systems. This paper reports indoor experimental results obtained for a newly developed and implemented downlink (DL) MU-MIMO orthogonal frequency division multiplexing (OFDM) transceiver for gigabit wireless local area network systems in the microwave band. In the transceiver, the channel state information (CSI) is estimated at each user and fed back to an access point (AP) on a real-time basis. At the AP, the estimated CSI is used to calculate the transmit beamforming weight for DL MU-MIMO transmission. This paper also proposes a recursive inverse matrix computation scheme for computing the transmit weight in real time. Experiments with the developed transceiver demonstrate its feasibility in a number of indoor scenarios. The experimental results clarify that DL MU-MIMO-OFDM transmission can achieve a 972-Mbit/s transmission data rate with simple digital signal processing of single-antenna users in an indoor environment.
Wavelet Filtering to Reduce Conservatism in Aeroservoelastic Robust Stability Margins
NASA Technical Reports Server (NTRS)
Brenner, Marty; Lind, Rick
1998-01-01
Wavelet analysis for filtering and system identification was used to improve the estimation of aeroservoelastic stability margins. The conservatism of the robust stability margins was reduced with parametric and nonparametric time-frequency analysis of flight data in the model validation process. Nonparametric wavelet processing of data was used to reduce the effects of external disturbances and unmodeled dynamics. Parametric estimates of modal stability were also extracted using the wavelet transform. Computation of robust stability margins for stability boundary prediction depends on uncertainty descriptions derived from the data for model validation. F-18 High Alpha Research Vehicle aeroservoelastic flight test data demonstrated improved robust stability prediction by extension of the stability boundary beyond the flight regime.
Koyama, Kento; Hokunan, Hidekazu; Hasegawa, Mayumi; Kawamura, Shuso; Koseki, Shigenobu
2016-12-01
We investigated a bacterial sample preparation procedure for single-cell studies. In the present study, we examined whether single bacterial cells obtained via 10-fold dilution followed a theoretical Poisson distribution. Four serotypes of Salmonella enterica, three serotypes of enterohaemorrhagic Escherichia coli and one serotype of Listeria monocytogenes were used as sample bacteria. An inoculum of each serotype was prepared via a 10-fold dilution series to obtain bacterial cell counts with mean values of one or two. To determine whether the experimentally obtained bacterial cell counts follow a theoretical Poisson distribution, a likelihood ratio test was conducted between the experimentally obtained cell counts and a Poisson distribution whose parameter was estimated by maximum likelihood estimation (MLE). The bacterial cell counts of each serotype followed a Poisson distribution sufficiently well. Furthermore, to examine the validity of the Poisson distribution parameters obtained from the experimental bacterial cell counts, we compared them with the parameters of a Poisson distribution estimated using random number generation via computer simulation. The Poisson distribution parameters experimentally obtained from bacterial cell counts were within the range of the parameters estimated using a computer simulation. These results demonstrate that the bacterial cell counts of each serotype obtained via 10-fold dilution followed a Poisson distribution. The fact that the frequency of bacterial cell counts follows a Poisson distribution at low numbers can be applied to single-cell studies with a few bacterial cells. In particular, the procedure presented in this study enables us to develop an inactivation model at the single-cell level that can estimate the variability of surviving bacterial numbers during the bacterial death process. Copyright © 2016 Elsevier Ltd. All rights reserved.
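A minimal sketch of such a goodness-of-fit check, assuming the G-test form of the likelihood ratio statistic and SciPy; the example counts are made up, and the degrees-of-freedom bookkeeping follows the usual categories-minus-one-minus-fitted-parameters rule.

```python
import numpy as np
from scipy.stats import poisson, chi2

def poisson_lrt(counts):
    """Likelihood-ratio (G) test of observed cell-count frequencies
    against a Poisson distribution with MLE parameter (the sample mean)."""
    counts = np.asarray(counts)
    lam = counts.mean()                      # Poisson MLE
    values, observed = np.unique(counts, return_counts=True)
    expected = poisson.pmf(values, lam) * counts.size
    g = 2.0 * np.sum(observed * np.log(observed / expected))
    dof = len(values) - 1 - 1                # categories - 1 - fitted params
    return g, chi2.sf(g, dof)

g, p = poisson_lrt([0, 1, 1, 2, 0, 1, 3, 2, 1, 0, 2, 1])
print(f"G = {g:.2f}, p = {p:.2f}")  # large p: no evidence against Poisson
```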
Multidisciplinary optimization of an HSCT wing using a response surface methodology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Giunta, A.A.; Grossman, B.; Mason, W.H.
1994-12-31
Aerospace vehicle design is traditionally divided into three phases: conceptual, preliminary, and detailed. Each of these design phases entails a particular level of accuracy and computational expense. While there are several computer programs which perform inexpensive conceptual-level aircraft multidisciplinary design optimization (MDO), aircraft MDO remains prohibitively expensive using preliminary- and detailed-level analysis tools. This occurs due to the expense of computational analyses and because gradient-based optimization requires the analysis of hundreds or thousands of aircraft configurations to estimate design sensitivity information. A further hindrance to aircraft MDO is the problem of numerical noise which occurs frequently in engineering computations. Computer models produce numerical noise as a result of the incomplete convergence of iterative processes, round-off errors, and modeling errors. Such numerical noise is typically manifested as a high frequency, low amplitude variation in the results obtained from the computer models. Optimization attempted using noisy computer models may result in the erroneous calculation of design sensitivities and may slow or prevent convergence to an optimal design.
Nugis, V Yu; Khvostunov, I K; Goloub, E V; Kozlova, M G; Nadejinal, N M; Galstian, I A
2015-01-01
A method for retrospective dose assessment, based on the analysis of the distribution of cells by the number of dicentrics and unstable aberrations using a special computer program, was developed earlier from data on persons irradiated as a result of the accident at the Chernobyl nuclear power plant. This method was applied, for the same purpose, to data from repeated cytogenetic studies of patients exposed to γ-, γ-β- or γ-neutron radiation in various situations. As a whole, this group was followed up over more distant periods (17-50 years) after exposure than the Chernobyl patients (up to 25 years). The use for retrospective dose assessment of the multiple regression equations obtained for the Chernobyl cohort showed that the equation which includes the computer-recovered estimate of the dose and the time elapsed after irradiation was generally unsatisfactory (r = 0.069 at p = 0.599). Similar equations with the recovered dose estimate and the frequency of abnormal chromosomes in a distant period, or with all three parameters as variables, gave better results (r = 0.686 at p = 0.000000001 and r = 0.542 at p = 0.000008, respectively).
On-line, adaptive state estimator for active noise control
NASA Technical Reports Server (NTRS)
Lim, Tae W.
1994-01-01
Dynamic characteristics of airframe structures are expected to vary as aircraft flight conditions change. Accurate knowledge of the changing dynamic characteristics is crucial to enhancing the performance of an active noise control system using feedback control. This research investigates the development of an adaptive, on-line state estimator using a neural network concept to conduct active noise control. In this research, an algorithm has been developed that can be used to estimate displacement and velocity responses at any location on the structure from a limited number of acceleration measurements and input force information. The algorithm employs band-pass filters to extract from the measurement signal the frequency contents corresponding to a desired mode. The filtered signal is then used to train a neural network which consists of a linear neuron with three weights. The structure of the neural network is designed to be as simple as possible to increase the sampling frequency as much as possible. The weights obtained through neural network training are then used to construct the transfer function of a mode in the z-domain and to identify modal properties of each mode. By using the identified transfer function and interpolating the mode shape obtained at sensor locations, the displacement and velocity responses are estimated with reasonable accuracy at any location on the structure. The accuracy of the response estimates depends on the number of modes incorporated in the estimates and the number of sensors employed to conduct mode shape interpolation. Computer simulation demonstrates that the algorithm is capable of adapting to the varying dynamic characteristics of structural properties. Experimental implementation of the algorithm on a DSP (digital signal processing) board for a plate structure is underway. The algorithm is expected to reach the sampling frequency range of about 10 kHz to 20 kHz, which needs to be maintained for a typical active noise control application.
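The per-mode identification step lends itself to a short sketch: band-pass filter around one mode, then train a three-weight linear neuron with a least-mean-squares (LMS) update. The ARX structure and the gains below are a plausible reading of the abstract, not the paper's exact algorithm.

```python
import numpy as np
from scipy.signal import butter, lfilter

def lms_mode_id(u, y, fs, band, mu=0.01):
    """Identify one structural mode with a 3-weight linear neuron (LMS).

    Force input u and acceleration y are band-pass filtered around the
    mode, then the neuron learns w in
        y[k] ~ w0*y[k-1] + w1*y[k-2] + w2*u[k],
    i.e. a second-order ARX model whose denominator carries the modal
    pole in the z-domain. Step size mu is illustrative and assumes
    roughly unit-scaled signals.
    """
    b, a = butter(4, [band[0], band[1]], btype="band", fs=fs)
    uf, yf = lfilter(b, a, u), lfilter(b, a, y)
    w = np.zeros(3)
    for k in range(2, len(yf)):
        x = np.array([yf[k - 1], yf[k - 2], uf[k]])
        e = yf[k] - w @ x          # prediction error
        w += mu * e * x            # LMS weight update
    return w                       # [a1, a2, b0] of the modal model
```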
Agreement Between Computed Tomography and Pathologic Nodule Counts in Colorectal Lung Metastases.
Marron, M Carmen; Lora, David; Gamez, Pablo; Rivas, Juan J; Embun, Raul; Molins, Laureano; de la Cruz, Javier
2016-01-01
Computed tomography is the most common technique used to estimate the number of pulmonary metastases and their resectability. A lack of agreement between radiologic and surgical pathologic findings could potentially lead to incomplete resection or to rejection of patients for potentially curative treatments. The objective of this study was to estimate the disagreement between the number of radiologic lesions and the number of histologically confirmed malignant lesions excised from patients with pulmonary metastases from colorectal cancer. This was a multicenter longitudinal study using a national registry. All patients underwent open surgery for pulmonary metastasectomy. Radiologic unilateral involvement was documented in 345 of 404 patients (85%); 253 (73%) presented with single nodules. The radiologic and malignant pathologic findings were concordant in 316 (78%) patients. The two independent predictors of discordance between computed tomography and the number of pathologic metastases were bilateral involvement and the number of radiologic nodules. This model explained 28% of the variability in the disagreement frequency and discriminated between agreement and disagreement in 85% of the patients. Discrepancies increased with the nodule count, with an odds ratio of 6.17 (95% confidence interval, 4.08 to 9.33) per additional nodule. For similar nodule counts, a lower disagreement frequency was observed among bilateral cases (odds ratio, 0.2; 95% confidence interval, 0.07 to 0.55). Differences between the radiologic and pathologic findings were documented in 1 of every 5 patients. The correlation was very accurate in patients with single radiologic nodules. However, half of the patients with more nodules showed discrepancies. Copyright © 2016 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.
Solving large tomographic linear systems: size reduction and error estimation
NASA Astrophysics Data System (ADS)
Voronin, Sergey; Mikesell, Dylan; Slezak, Inna; Nolet, Guust
2014-10-01
We present a new approach to reduce a sparse, linear system of equations associated with tomographic inverse problems. We begin by making a modification to the commonly used compressed sparse-row format, whereby our format is tailored to the sparse structure of finite-frequency (volume) sensitivity kernels in seismic tomography. Next, we cluster the sparse matrix rows to divide a large matrix into smaller subsets representing ray paths that are geographically close. Singular value decomposition of each subset allows us to project the data onto a subspace associated with the largest eigenvalues of the subset. After projection we reject those data that have a signal-to-noise ratio (SNR) below a chosen threshold. Clustering in this way assures that the sparse nature of the system is minimally affected by the projection. Moreover, our approach allows for a precise estimation of the noise affecting the data while also giving us the ability to identify outliers. We illustrate the method by reducing large matrices computed for global tomographic systems with cross-correlation body wave delays, as well as with surface wave phase velocity anomalies. For a massive matrix computed for 3.7 million Rayleigh wave phase velocity measurements, imposing a threshold of 1 for the SNR, we condensed the matrix size from 1103 to 63 Gbyte. For a global data set of multiple-frequency P wave delays from 60 well-distributed deep earthquakes we obtain a reduction to 5.9 per cent. This type of reduction allows one to avoid loss of information due to underparametrizing models. Alternatively, if data have to be rejected to fit the system into computer memory, it assures that the most important data are preserved.
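A rough sketch of the per-cluster reduction step (SVD projection of clustered rows, followed by an SNR cut on the projected data), with a simplified thresholding rule standing in for the paper's exact criterion; all names are illustrative.

```python
import numpy as np

def project_cluster(A, d, sigma, snr_min=1.0):
    """Project one cluster of matrix rows A (and its data vector d) onto
    the subspace of the leading singular vectors, then discard projected
    data whose signal-to-noise ratio falls below snr_min.

    sigma holds per-datum noise levels; independence of the noise is
    assumed when propagating it into the rotated basis.
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    d_proj = U.T @ d                          # data rotated into SVD basis
    noise_proj = np.sqrt((U.T**2) @ (sigma**2))
    keep = np.abs(d_proj) / noise_proj >= snr_min
    A_red = s[keep, None] * Vt[keep]          # reduced, still sparse-friendly
    return A_red, d_proj[keep]
```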
Spherical Pendulum Small Oscillations for Slewing Crane Motion
Perig, Alexander V.; Stadnik, Alexander N.; Deriglazov, Alexander I.
2014-01-01
The present paper focuses on the Lagrange mechanics-based description of small oscillations of a spherical pendulum with a uniformly rotating suspension center. The analytical solution of the natural frequencies' problem has been derived for the case of uniform rotation of a crane boom. The payload paths have been found in the inertial reference frame fixed on earth and in the noninertial reference frame, which is connected with the rotating crane boom. The numerical amplitude-frequency characteristics of the relative payload motion have been found. The mechanical interpretation of the terms in Lagrange equations has been outlined. The analytical expression and numerical estimation for cable tension force have been proposed. The numerical computational results, which correlate very accurately with the experimental observations, have been shown. PMID:24526891
Novelo-Casanova, D. A.; Lee, W.H.K.
1991-01-01
Using simulated coda waves, the resolution of the single-scattering model to extract coda Q (Qc) and its power-law frequency dependence was tested. The back-scattering model of Aki and Chouet (1975) and the single isotropic-scattering model of Sato (1977) were examined. The results indicate that: (1) the input Qc models are reasonably well approximated by the two methods; (2) almost equal Qc values are recovered when the techniques sample the same coda windows; (3) low-Qc models are well estimated in the frequency domain from the early and late parts of the coda; and (4) models with high Qc values are more accurately extracted from late coda measurements. © 1991 Birkhäuser Verlag.
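Under the Aki-Chouet back-scattering model the coda envelope obeys A(t) = S(f) t^-1 exp(-π f t / Qc), so Qc follows from a straight-line fit to ln(A·t) versus lapse time. A hedged sketch with synthetic numbers, not the paper's simulations:

```python
import numpy as np

def coda_q(t, env, f):
    """Estimate coda Q at centre frequency f from a smoothed coda
    envelope env(t): ln(A*t) is linear in t with slope -pi*f/Qc."""
    slope, _ = np.polyfit(t, np.log(env * t), 1)
    return -np.pi * f / slope

# Synthetic check: an envelope built with Qc = 200 at 6 Hz is recovered.
t = np.linspace(20, 60, 200)          # coda window after the S-wave (s)
qc_true, f = 200.0, 6.0
env = 5.0 * t**-1 * np.exp(-np.pi * f * t / qc_true)
print(f"Recovered Qc = {coda_q(t, env, f):.0f}")
```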
Decision-aided ICI mitigation with time-domain average approximation in CO-OFDM
NASA Astrophysics Data System (ADS)
Ren, Hongliang; Cai, Jiaxing; Ye, Xin; Lu, Jin; Cao, Quanjun; Guo, Shuqin; Xue, Lin-lin; Qin, Yali; Hu, Weisheng
2015-07-01
We introduce and investigate the feasibility of a novel iterative blind phase-noise inter-carrier interference (ICI) mitigation scheme for coherent optical orthogonal frequency division multiplexing (CO-OFDM) systems. The ICI mitigation scheme is performed through the combination of frequency-domain symbol-decision-aided estimation and a time-average approximation of the ICI phase noise. An additional initial decision process with a suitable threshold is introduced in order to suppress decision-error symbols. Our proposed ICI mitigation scheme proves effective in removing the ICI for a simulated CO-OFDM system with a 16-QAM modulation format. At slightly higher computational complexity, it outperforms the time-domain average blind ICI (Avg-BL-ICI) algorithm at relatively wide laser linewidths and high OSNR.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anand, Sampurn; Mohanty, Subhendra; Dey, Ujjal Kumar, E-mail: sampurn@prl.res.in, E-mail: ujjal@cts.iitkgp.ernet.in, E-mail: mohanty@prl.res.in
Cosmological phase transitions can be a source of the Stochastic Gravitational Wave (SGW) background. Apart from the dynamics of the phase transition, the characteristic frequency and the fractional energy density Ω_gw of the SGW depend upon the temperature of the transition. In this article, we compute the SGW spectrum in light of the QCD equation of state provided by the lattice results. We find that the inclusion of the trace anomaly from lattice QCD enhances the SGW signal generated during the QCD phase transition by ∼50%, and the peak frequency of the QCD-era SGW is shifted higher by ∼25% as compared to earlier estimates without the trace anomaly. This result is extremely significant for testing the phase transition dynamics near the QCD epoch.
Measurement methods and algorithms for comparison of local and remote clocks
NASA Technical Reports Server (NTRS)
Levine, Judah
1993-01-01
Several methods for characterizing the performance of clocks, with special emphasis on using calibration information that is acquired via an unreliable or noisy channel, are discussed. Time-domain variance estimators and frequency-domain techniques such as cross-spectral analysis are discussed. Each of these methods has advantages and limitations that will be illustrated using data obtained via GPS, ACTS, and other methods. No one technique will be optimum for all of these analyses, and some of these problems cannot be completely characterized by any of the techniques discussed. The inverse problem of communicating frequency and time corrections to a real-time steered clock is also discussed. Methods were developed to mitigate the disastrous problems of data corruption and loss of computer control.
NASA Astrophysics Data System (ADS)
Majewski, Kurt
2018-03-01
Exact solutions of the Bloch equations with T1- and T2-relaxation terms for piecewise constant magnetic fields are numerically challenging. We therefore investigate an approximation for the achieved magnetization in which rotations and relaxations are split into separate operations. We develop an estimate for its accuracy and explicit first- and second-order derivatives with respect to the complex excitation radio-frequency voltages. In practice, the deviation between an exact solution of the Bloch equations and this rotation-relaxation splitting approximation seems negligible. Its computation times are similar to those of exact solutions without relaxation terms. We apply the developed theory to numerically optimize radio-frequency excitation waveforms with T1- and T2-relaxation in several examples.
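One splitting step is easy to sketch: rotate the magnetization about the effective field, then apply the relaxation operator. This illustrative fragment assumes a constant effective field over the step and one common sign convention, which varies across texts; it is not the paper's implementation.

```python
import numpy as np

def bloch_split_step(m, b_eff, dt, t1, t2, m0=1.0):
    """One step of the rotation-relaxation splitting approximation:
    rotate m about the effective field b_eff (rad/s), then relax."""
    # Rotation by angle |b_eff|*dt about the unit axis (Rodrigues formula)
    theta = np.linalg.norm(b_eff) * dt
    if theta > 0:
        k = b_eff / np.linalg.norm(b_eff)
        m = (m * np.cos(theta) + np.cross(k, m) * np.sin(theta)
             + k * (k @ m) * (1 - np.cos(theta)))
    # Relaxation: transverse decay toward 0, longitudinal recovery to m0
    e1, e2 = np.exp(-dt / t1), np.exp(-dt / t2)
    return np.array([m[0] * e2, m[1] * e2, m[2] * e1 + m0 * (1 - e1)])
```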
Force Limited Vibration Testing: Computation C2 for Real Load and Probabilistic Source
NASA Astrophysics Data System (ADS)
Wijker, J. J.; de Boer, A.; Ellenbroek, M. H. M.
2014-06-01
To prevent over-testing of the test item during random vibration testing, Scharton proposed and discussed force limited random vibration testing (FLVT) in a number of publications, in which the factor C2 is, besides the random vibration specification, the total mass and the turnover frequency of the load (test item), a very important parameter. A number of computational methods to estimate C2 are described in the literature, i.e. the simple and the complex two-degrees-of-freedom systems, STDFS and CTDFS, respectively. Both the STDFS and the CTDFS describe in a very reduced (simplified) manner the load and the source (the adjacent structure transferring the excitation forces to the test item, e.g. a spacecraft supporting an instrument). The motivation of this work is to establish a method for the computation of a realistic value of C2 to perform a representative random vibration test based on force limitation, when the description of the adjacent structure (source) is more or less unknown. Marchand formulated a conservative estimate of C2 based on the maximum modal effective mass and damping of the test item (load), when no description of the supporting structure (source) is available [13]. Marchand also discussed the formal derivation of C2 using the maximum PSD of the acceleration and the maximum PSD of the force, both at the interface between load and source, in combination with the apparent mass and the total mass of the load. This method is very convenient for computing the factor C2; however, finite element models are needed to compute the PSD spectra of both the acceleration and the force at the interface between load and source. Stevens presented the coupled systems modal approach (CSMA), where simplified asparagus-patch models (parallel-oscillator representations) of load and source are connected, consisting of modal effective masses and the spring stiffnesses associated with the natural frequencies. When the random acceleration vibration specification is given, the CSMA method is suitable for computing the value of the parameter C2. When no mathematical model of the source can be made available, estimates of the value of C2 can be found in the literature. In this paper a probabilistic mathematical representation of the unknown source is proposed, such that the asparagus-patch model of the source can be approximated. The computation of the value of C2 can then be done in conjunction with the CSMA method, knowing the apparent mass of the load and the random acceleration specification at the interface between load and source. Strength and stiffness design rules for spacecraft, instrumentation, units, etc., as mentioned in ECSS standards and handbooks, launch vehicle user's manuals, papers, books, etc., will be practiced. A probabilistic description of the design parameters is foreseen. As an example, a simple experiment has been worked out.
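For orientation, the role C2 plays can be seen in the semi-empirical force-limit specification used in force-limiting practice: the interface force PSD is C2·M0²·S_AA(f) up to the turnover frequency and rolls off above it. The roll-off exponent below is a common choice, not a value from this paper, and the function is a hedged sketch rather than the authors' method.

```python
import numpy as np

def force_limit_psd(f, s_aa, c2, m0, f0, n=2):
    """Semi-empirical force-limit spec: below the turnover frequency f0
    the force PSD is C2 * M0**2 * S_AA(f); above f0 it rolls off as
    (f/f0)**(-2n). f and s_aa are arrays over the test band."""
    s_ff = c2 * m0**2 * np.asarray(s_aa, dtype=float)
    above = f > f0
    s_ff[above] *= (f[above] / f0) ** (-2 * n)
    return s_ff
```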
Boitard, Simon; Rodríguez, Willy; Jay, Flora; Mona, Stefano; Austerlitz, Frédéric
2016-01-01
Inferring the ancestral dynamics of effective population size is a long-standing question in population genetics, which can now be tackled much more accurately thanks to the massive genomic data available in many species. Several promising methods that take advantage of whole-genome sequences have been recently developed in this context. However, they can only be applied to rather small samples, which limits their ability to estimate recent population size history. Besides, they can be very sensitive to sequencing or phasing errors. Here we introduce a new approximate Bayesian computation approach named PopSizeABC that allows estimating the evolution of the effective population size through time, using a large sample of complete genomes. This sample is summarized using the folded allele frequency spectrum and the average zygotic linkage disequilibrium at different bins of physical distance, two classes of statistics that are widely used in population genetics and can be easily computed from unphased and unpolarized SNP data. Our approach provides accurate estimations of past population sizes, from the very first generations before present back to the expected time to the most recent common ancestor of the sample, as shown by simulations under a wide range of demographic scenarios. When applied to samples of 15 or 25 complete genomes in four cattle breeds (Angus, Fleckvieh, Holstein and Jersey), PopSizeABC revealed a series of population declines, related to historical events such as domestication or modern breed creation. We further highlight that our approach is robust to sequencing errors, provided summary statistics are computed from SNPs with common alleles. PMID:26943927
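Of the two summary-statistic classes named, the folded allele frequency spectrum is simple to compute from unphased, unpolarized genotypes; a minimal sketch follows (the 0/1/2 coding and names are assumptions, not PopSizeABC's actual implementation).

```python
import numpy as np

def folded_afs(genotypes):
    """Folded allele frequency spectrum from an (individuals x SNPs)
    matrix of diploid genotypes coded 0/1/2.

    Entry k of the result counts SNPs whose minor-allele count is k;
    folding removes the need to know the ancestral allele."""
    g = np.asarray(genotypes)
    n_chrom = 2 * g.shape[0]
    counts = g.sum(axis=0)                        # alt-allele count per SNP
    minor = np.minimum(counts, n_chrom - counts)  # fold the spectrum
    return np.bincount(minor, minlength=n_chrom // 2 + 1)
```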
Acharya, Ashith B
2014-05-01
Dentin translucency measurement is an easy yet relatively accurate approach to postmortem age estimation. Translucency area represents a two-dimensional change and may reflect age variations better than length. Manually measuring area is challenging, and this paper proposes a new digital method using commercially available computer hardware and software. Area and length were measured on 100 tooth sections (age range, 19-82 years) of 250 μm thickness. Regression analysis revealed a lower standard error of estimate and a higher correlation with age for length than for area (R = 0.62 vs. 0.60). However, testing the regression formulae on a control sample (n = 33, 21-85 years) showed a smaller mean absolute difference (8.3 vs. 8.8 years) and a greater frequency of smaller errors (73% vs. 67% of age estimates ≤ ± 10 years) for area than for length. These results suggest that digital area measurements of root translucency may be used as an alternative to length in forensic age estimation. © 2014 American Academy of Forensic Sciences.
Quantification of peripheral and central blood pressure variability using a time-frequency method.
Kouchaki, Z; Butlin, M; Qasem, A; Avolio, A P
2016-08-01
Systolic blood pressure variability (BPV) is associated with cardiovascular events. As the beat-to-beat variation of blood pressure is due to the interaction of several cardiovascular control systems operating with different response times, assessment of BPV by spectral analysis of the continuous measurement of arterial pressure in the finger is used to differentiate the contribution of these systems in regulating blood pressure. However, as baroreceptors are centrally located, this study considered applying a continuous aortic pressure signal estimated noninvasively from finger pressure for assessment of systolic BPV by a time-frequency method using the Short Time Fourier Transform (STFT). The average ratio of the low-frequency to high-frequency power bands (LF_PB/HF_PB) was computed by time-frequency decomposition of peripheral systolic pressure (pSBP) and derived central aortic systolic blood pressure (cSBP) in 30 healthy subjects (25-62 years), as a marker of the balance between the cardiovascular control systems contributing to low- and high-frequency blood pressure variability. The results showed that the BPV assessed from finger pressure (pBPV) overestimated the BPV values compared to that assessed from central aortic pressure (cBPV) for identical cardiac cycles (P < 0.001), with the overestimation being greater at higher power.
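A hedged sketch of the time-frequency band-power ratio, assuming a beat series resampled to a uniform rate and the conventional LF/HF band edges; the windowing below is illustrative, not the paper's.

```python
import numpy as np
from scipy.signal import stft

def lf_hf_ratio(sbp, fs, lf=(0.04, 0.15), hf=(0.15, 0.40)):
    """Average LF/HF power-band ratio of a uniformly resampled systolic
    BP series via the STFT. fs is the resampling rate in Hz (e.g. 4)."""
    f, t, z = stft(sbp, fs=fs, nperseg=int(60 * fs))  # ~1-min windows
    p = np.abs(z) ** 2                                # spectrogram power
    lf_p = p[(f >= lf[0]) & (f < lf[1])].sum(axis=0)  # LF power per window
    hf_p = p[(f >= hf[0]) & (f < hf[1])].sum(axis=0)  # HF power per window
    return np.mean(lf_p / hf_p)
```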
A Modified Normalization Technique for Frequency-Domain Full Waveform Inversion
NASA Astrophysics Data System (ADS)
Hwang, J.; Jeong, G.; Min, D. J.; KIM, S.; Heo, J. Y.
2016-12-01
Full waveform inversion (FWI) is a technique to estimate subsurface material properties by minimizing a misfit function built from residuals between field and modeled data. To achieve computational efficiency, FWI has been performed in the frequency domain by carrying out modeling in the frequency domain, whereas the observed data (time series) are Fourier-transformed. One of the main drawbacks of seismic FWI is that it easily gets stuck in local minima because of the lack of low-frequency data. To compensate for this limitation, damped wavefields are used, as in Laplace-domain waveform inversion. Using damped wavefields in FWI plays a role in generating low-frequency components and helps recover long-wavelength structures. With these newly generated low-frequency components, we propose a modified frequency-normalization technique, which has the effect of boosting the contribution of low-frequency components to the model parameter update. In this study, we introduce the modified frequency-normalization technique, which effectively amplifies the low-frequency components of damped wavefields. Our method is demonstrated on synthetic data for the SEG/EAGE salt model. Acknowledgements: This work was supported by the Korea Institute of Energy Technology Evaluation and Planning (KETEP) and the Ministry of Trade, Industry & Energy (MOTIE) of the Republic of Korea (No. 20168510030830) and by the Dual Use Technology Program, granted financial resources from the Ministry of Trade, Industry & Energy, Republic of Korea.
Nunes, J M; Riccio, M E; Buhler, S; Di, D; Currat, M; Ries, F; Almada, A J; Benhamamouch, S; Benitez, O; Canossi, A; Fadhlaoui-Zid, K; Fischer, G; Kervaire, B; Loiseau, P; de Oliveira, D C M; Papasteriades, C; Piancatelli, D; Rahal, M; Richard, L; Romero, M; Rousseau, J; Spiroski, M; Sulcebe, G; Middleton, D; Tiercy, J-M; Sanchez-Mazas, A
2010-07-01
During the 15th International Histocompatibility and Immunogenetics Workshop (IHIWS), 14 human leukocyte antigen (HLA) laboratories participated in the Analysis of HLA Population Data (AHPD) project where 18 new population samples were analyzed statistically and compared with data available from previous workshops. To that aim, an original methodology was developed and used (i) to estimate frequencies by taking into account ambiguous genotypic data, (ii) to test for Hardy-Weinberg equilibrium (HWE) by using a nested likelihood ratio test involving a parameter accounting for HWE deviations, (iii) to test for selective neutrality by using a resampling algorithm, and (iv) to provide explicit graphical representations including allele frequencies and basic statistics for each series of data. A total of 66 data series (1-7 loci per population) were analyzed with this standard approach. Frequency estimates were compliant with HWE in all but one population of mixed stem cell donors. Neutrality testing confirmed the observation of heterozygote excess at all HLA loci, although a significant deviation was established in only a few cases. Population comparisons showed that HLA genetic patterns were mostly shaped by geographic and/or linguistic differentiations in Africa and Europe, but not in America where both genetic drift in isolated populations and gene flow in admixed populations led to a more complex genetic structure. Overall, a fruitful collaboration between HLA typing laboratories and population geneticists allowed finding useful solutions to the problem of estimating gene frequencies and testing basic population diversity statistics on highly complex HLA data (high numbers of alleles and ambiguities), with promising applications in either anthropological, epidemiological, or transplantation studies.
Gutenkunst, Ryan N.; Hernandez, Ryan D.; Williamson, Scott H.; Bustamante, Carlos D.
2009-01-01
Demographic models built from genetic data play important roles in illuminating prehistorical events and serving as null models in genome scans for selection. We introduce an inference method based on the joint frequency spectrum of genetic variants within and between populations. For candidate models we numerically compute the expected spectrum using a diffusion approximation to the one-locus, two-allele Wright-Fisher process, involving up to three simultaneous populations. Our approach is a composite likelihood scheme, since linkage between neutral loci alters the variance but not the expectation of the frequency spectrum. We thus use bootstraps incorporating linkage to estimate uncertainties for parameters and significance values for hypothesis tests. Our method can also incorporate selection on single sites, predicting the joint distribution of selected alleles among populations experiencing a bevy of evolutionary forces, including expansions, contractions, migrations, and admixture. We model human expansion out of Africa and the settlement of the New World, using 5 Mb of noncoding DNA resequenced in 68 individuals from 4 populations (YRI, CHB, CEU, and MXL) by the Environmental Genome Project. We infer divergence between West African and Eurasian populations 140 thousand years ago (95% confidence interval: 40–270 kya). This is earlier than other genetic studies, in part because we incorporate migration. We estimate the European (CEU) and East Asian (CHB) divergence time to be 23 kya (95% c.i.: 17–43 kya), long after archeological evidence places modern humans in Europe. Finally, we estimate divergence between East Asians (CHB) and Mexican-Americans (MXL) of 22 kya (95% c.i.: 16.3–26.9 kya), and our analysis yields no evidence for subsequent migration. Furthermore, combining our demographic model with a previously estimated distribution of selective effects among newly arising amino acid mutations accurately predicts the frequency spectrum of nonsynonymous variants across three continental populations (YRI, CHB, CEU). PMID:19851460
Groth, Kevin M; Granata, Kevin P
2008-06-01
Due to the mathematical complexity of current musculoskeletal spine models, there is a need for computationally efficient models of the intervertebral disk (IVD). The aim of this study is to develop a mathematical model that adequately describes the motion of the IVD under axial cyclic loading while maintaining computational efficiency for use in future musculoskeletal spine models. Several studies have successfully modeled the creep characteristics of the IVD using the three-parameter viscoelastic standard linear solid (SLS) model. However, when the SLS model is subjected to cyclic loading, it underestimates the load relaxation, the cyclic modulus, and the hysteresis of the human lumbar IVD. A viscoelastic standard nonlinear solid (SNS) model was used to predict the response of the human lumbar IVD subjected to low-frequency vibration. Nonlinear behavior of the SNS model was simulated by adding a strain-dependent elastic modulus to the SLS model. Parameters of the SNS model were estimated from experimental load-deformation and stress-relaxation curves obtained from the literature. The SNS model was able to predict the cyclic modulus of the IVD at frequencies of 0.01 Hz, 0.1 Hz, and 1 Hz. Furthermore, the SNS model was able to quantitatively predict the load relaxation at a frequency of 0.01 Hz. However, model performance was unsatisfactory when predicting load relaxation and hysteresis at higher frequencies (0.1 Hz and 1 Hz). The SLS model of the lumbar IVD may require strain-dependent elastic and viscous behavior to represent the dynamic response to compressive strain.
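A compact way to see the structure: a Maxwell arm in parallel with a strain-dependent spring, driven by a sinusoidal strain. The linear strain dependence and all parameter values below are illustrative assumptions, not the fitted values of the study.

```python
import numpy as np
from scipy.integrate import solve_ivp

def sns_response(t_end, freq, amp, e1a, e1b, e2, eta):
    """Cyclic response of a standard nonlinear solid: a Maxwell arm
    (spring e2 in series with dashpot eta) in parallel with a
    strain-dependent spring e1(strain) = e1a + e1b*strain."""
    strain = lambda t: amp * np.sin(2 * np.pi * freq * t)
    dstrain = lambda t: amp * 2 * np.pi * freq * np.cos(2 * np.pi * freq * t)

    def rhs(t, y):            # y[0]: stress carried by the Maxwell arm
        return [e2 * dstrain(t) - (e2 / eta) * y[0]]

    sol = solve_ivp(rhs, (0, t_end), [0.0], dense_output=True, max_step=1e-2)
    t = np.linspace(0, t_end, 2000)
    eps = strain(t)
    sigma = (e1a + e1b * eps) * eps + sol.sol(t)[0]  # parallel combination
    return t, eps, sigma
```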
Motion estimation of magnetic resonance cardiac images using the Wigner-Ville and Hough transforms
NASA Astrophysics Data System (ADS)
Carranza, N.; Cristóbal, G.; Bayerl, P.; Neumann, H.
2007-12-01
Myocardial motion analysis and quantification are of utmost importance for analyzing contractile heart abnormalities, which can be a symptom of coronary artery disease. A fundamental problem in processing sequences of images is the computation of the optical flow, which is an approximation of the real image motion. This paper presents a new algorithm for optical flow estimation based on a spatiotemporal-frequency (STF) approach. More specifically, it relies on the computation of the Wigner-Ville distribution (WVD) and the Hough Transform (HT) of the motion sequences. The latter is a well-known line and shape detection method that is highly robust against incomplete data and noise. The rationale for using the HT in this context is that it provides a value of the displacement field from the STF representation. In addition, a probabilistic approach based on Gaussian mixtures has been implemented in order to improve the accuracy of the motion detection. Experimental results on synthetic sequences are compared with an implementation of the variational technique for local and global motion estimation, showing that the results are accurate and robust to noise degradations. Results obtained with real cardiac magnetic resonance images are also presented.
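The discrete pseudo Wigner-Ville step can be sketched compactly: for each time index, form the instantaneous autocorrelation over lags and Fourier transform along the lag axis. This is a generic 1D illustration on a chirp (whose WVD is a straight line a Hough transform could detect), not the authors' 2D implementation (Python):

    import numpy as np

    def pseudo_wigner_ville(x):
        # FFT over the lag axis of the instantaneous autocorrelation
        # x[t+k] * conj(x[t-k]); x should be an analytic signal.
        n = len(x)
        wvd = np.zeros((n, n))
        for t in range(n):
            kmax = min(t, n - 1 - t, n//2 - 1)
            acf = np.zeros(n, dtype=complex)
            for k in range(-kmax, kmax + 1):
                acf[k % n] = x[t + k] * np.conj(x[t - k])
            wvd[:, t] = np.fft.fft(acf).real
        return wvd

    # A linear chirp concentrates WVD energy along a straight line in the
    # time-frequency plane, which is what the Hough transform picks up.
    t = np.arange(256)
    x = np.exp(1j * 2*np.pi * (0.05*t + 0.0005*t**2))
    W = pseudo_wigner_ville(x)
    print(W.shape)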
NASA Astrophysics Data System (ADS)
Carranza, N.; Cristóbal, G.; Sroubek, F.; Ledesma-Carbayo, M. J.; Santos, A.
2006-08-01
Myocardial motion analysis and quantification are of utmost importance for analyzing contractile heart abnormalities, which can be a symptom of coronary artery disease. A fundamental problem in processing sequences of images is the computation of the optical flow, which is an approximation to the real image motion. This paper presents a new algorithm for optical flow estimation based on a spatiotemporal-frequency (STF) approach, more specifically on the computation of the Wigner-Ville distribution (WVD) and the Hough Transform (HT) of the motion sequences. The latter is a well-known line and shape detection method that is highly robust against incomplete data and noise. The rationale for using the HT in this context is that it provides a value of the displacement field from the STF representation. In addition, a probabilistic approach based on Gaussian mixtures has been implemented in order to improve the accuracy of the motion detection. Experimental results with synthetic sequences are compared against an implementation of the variational technique for local and global motion estimation, where it is shown that the results obtained here are accurate and robust to noise degradations. The method has also been tested and evaluated on real cardiac magnetic resonance images.
Inverse Force Determination on a Small Scale Launch Vehicle Model Using a Dynamic Balance
NASA Technical Reports Server (NTRS)
Ngo, Christina L.; Powell, Jessica M.; Ross, James C.
2017-01-01
A launch vehicle can experience large unsteady aerodynamic forces in the transonic regime that, while usually lasting only tens of seconds during launch, could be devastating if structural components and electronic hardware are not designed to account for them. These aerodynamic loads are difficult to measure experimentally and even harder to estimate computationally. The current method for estimating buffet loads relies on a few hundred unsteady pressure transducers and a wind tunnel test. Even with a large number of point measurements, the computed integrated load is not an accurate enough representation of the total load caused by buffeting. This paper discusses an attempt at using a dynamic balance to experimentally determine buffet loads on a generic-scale hammerhead launch vehicle model tested at NASA Ames Research Center's 11' x 11' transonic wind tunnel. To use a dynamic balance, the structural characteristics of the model needed to be identified so that the natural modal response could be removed from the aerodynamic forces. A finite element model was created of a simplified version of the model to evaluate the natural modes of the balance flexures, assist in model design, and compare to experimental data. Several modal tests were conducted on the model in two different configurations to check for nonlinearity and to estimate the dynamic characteristics of the model. The experimental results were used in an inverse force determination technique with a pseudo-inverse frequency response function. Due to the nonlinearity, the model not being axisymmetric, and inconsistent data between the two shake tests in different mounting configurations, it was difficult to create a frequency response matrix that satisfied all input and output conditions for the wind tunnel configuration and accurately predicted the unsteady aerodynamic loads.
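At a single frequency line, the inverse force step reduces to a least-squares solve with the pseudo-inverse of the frequency response matrix. The sketch below uses random placeholder values for the FRF and forces, purely to show the shape of the computation (Python):

    import numpy as np

    rng = np.random.default_rng(1)
    n_out, n_in = 6, 2                    # response channels, force inputs (hypothetical)
    # FRF matrix H at one frequency, mapping forces to measured responses
    H = rng.normal(size=(n_out, n_in)) + 1j*rng.normal(size=(n_out, n_in))
    f_true = np.array([1.0 + 0.5j, -0.3 + 2.0j])
    x = H @ f_true + 0.01*(rng.normal(size=n_out) + 1j*rng.normal(size=n_out))

    f_est = np.linalg.pinv(H) @ x         # least-squares force estimate
    print(np.abs(f_est - f_true))         # small residual errors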
NASA Astrophysics Data System (ADS)
Castellarin, A.; Montanari, A.; Brath, A.
2002-12-01
The study derives Regional Depth-Duration-Frequency (RDDF) equations for a wide region of northern-central Italy (37,200 km²) by following an adaptation of the approach originally proposed by Alila [WRR, 36(7), 2000]. The proposed RDDF equations have a rather simple structure and allow estimation of the design storm, defined as the rainfall depth expected for a given storm duration and recurrence interval, at any location of the study area for storm durations from 1 to 24 hours and for recurrence intervals up to 100 years. The reliability of the proposed RDDF equations is the main concern of the study, and it is assessed at two different levels. The first level considers the gauged sites and compares estimates of the design storm obtained with the RDDF equations with at-site estimates based upon the observed annual maximum series of rainfall depth and with design storm estimates resulting from a regional estimator recently developed for the study area through a Hierarchical Regional Approach (HRA) [Gabriele and Arnell, WRR, 27(6), 1991]. The second level performs a reliability assessment of the RDDF equations for ungauged sites by means of a jack-knife procedure. Using the HRA estimator as a reference, the jack-knife procedure assesses the reliability of design storm estimates provided by the RDDF equations for a given location in the complete absence of pluviometric information. The results of the analysis show that the proposed RDDF equations represent a practical and effective computational means of producing a first guess of the design storm at the available raingauges and reliable design storm estimates for ungauged locations. The first author gratefully acknowledges D.H. Burn for sponsoring the submission of the present abstract.
Blind source separation and localization using microphone arrays
NASA Astrophysics Data System (ADS)
Sun, Longji
The blind source separation and localization problem for audio signals is studied using microphone arrays. Pure delay mixtures of source signals typically encountered in outdoor environments are considered. Our proposed approach utilizes subspace methods, including the multiple signal classification (MUSIC) and estimation of signal parameters via rotational invariance techniques (ESPRIT) algorithms, to estimate the directions of arrival (DOAs) of the sources from the collected mixtures. Since audio signals are generally considered broadband, the DOA estimates at frequencies with the largest sums of squared amplitudes are combined to obtain the final DOA estimates. Using the estimated DOAs, the corresponding mixing and demixing matrices are computed, and the source signals are recovered using the inverse short time Fourier transform. Subspace methods take advantage of the spatial covariance matrix of the collected mixtures to achieve robustness to noise. While the subspace methods have been studied for localizing radio frequency signals, audio signals have special properties: they are nonstationary, naturally broadband, and analog, all of which make separation and localization more challenging. Moreover, our algorithm is essentially equivalent to the beamforming technique, which suppresses the signals in unwanted directions and recovers only the signals in the estimated DOAs. Several crucial issues related to our algorithm and their solutions are discussed, including source number estimation, spatial aliasing, artifact filtering, different ways of mixture generation, and source coordinate estimation using multiple arrays. Additionally, comprehensive simulations and experiments have been conducted to examine various aspects of the algorithm. Unlike the existing blind source separation and localization methods, which are generally time consuming, our algorithm needs signal mixtures of only a short duration and therefore supports real-time implementation.
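A minimal MUSIC sketch for a uniform linear array shows the subspace step the abstract relies on; the array geometry, source directions, and noise level are hypothetical (Python):

    import numpy as np

    rng = np.random.default_rng(2)
    m, snapshots = 8, 400                          # sensors, time snapshots
    doas = np.deg2rad([-20.0, 35.0])               # true source directions

    def steering(theta):                           # half-wavelength spacing
        return np.exp(1j*np.pi*np.arange(m)*np.sin(theta))

    A = np.stack([steering(th) for th in doas], axis=1)
    S = rng.normal(size=(2, snapshots)) + 1j*rng.normal(size=(2, snapshots))
    X = A @ S + 0.1*(rng.normal(size=(m, snapshots)) + 1j*rng.normal(size=(m, snapshots)))

    R = X @ X.conj().T / snapshots                 # spatial covariance matrix
    w, V = np.linalg.eigh(R)                       # ascending eigenvalues
    En = V[:, :m-2]                                # noise subspace (2 sources assumed)

    grid = np.deg2rad(np.linspace(-90, 90, 721))
    p = np.array([1.0/np.linalg.norm(En.conj().T @ steering(th))**2 for th in grid])
    peaks = [i for i in range(1, len(p)-1) if p[i] > p[i-1] and p[i] > p[i+1]]
    top2 = sorted(peaks, key=lambda i: p[i])[-2:]
    print(np.rad2deg(grid[top2]))                  # estimates near -20 and 35 degrees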
NASA Technical Reports Server (NTRS)
Wu, Andy
1995-01-01
Allan Deviation computations of linear frequency synthesizer systems have been reported previously using real-time simulations. Even though this takes less time than actual measurement, it is still very time consuming to compute the Allan Deviation for long sample times with the desired confidence level. Also, noise processes such as flicker phase noise and flicker frequency noise cannot be simulated precisely. The use of frequency domain techniques can overcome these drawbacks. In this paper, the system error model of a fictitious linear frequency synthesizer is developed, and its performance using a Cesium (Cs) atomic frequency standard (AFS) as a reference is evaluated using frequency domain techniques. For a linear timing system, the power spectral density at the system output can be computed from known system transfer functions and known power spectral densities of the input noise sources. The resulting power spectral density can then be used to compute the Allan Variance at the system output. Sensitivities of the Allan Variance at the system output to each of its independent input noises are obtained; these are valuable for design trade-offs and troubleshooting.
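The PSD-to-Allan-variance step the abstract describes has a closed-form kernel; a numerical sketch for white frequency noise, where the exact answer sigma_y^2 = h0/(2 tau) is known, is below (parameters illustrative) (Python):

    import numpy as np

    def allan_variance_from_psd(f, Sy, tau):
        # sigma_y^2(tau) = 2 * integral_0^inf Sy(f) * sin(pi f tau)^4 / (pi f tau)^2 df
        x = np.pi * f * tau
        integrand = Sy * np.sin(x)**4 / x**2
        # trapezoidal integration over the frequency grid
        return 2.0 * np.sum(0.5*(integrand[1:] + integrand[:-1]) * np.diff(f))

    h0, tau = 1e-22, 1.0                      # white FM noise level, averaging time
    f = np.linspace(1e-4, 1e4, 2_000_000)     # dense grid, avoiding f = 0
    sigma2 = allan_variance_from_psd(f, np.full_like(f, h0), tau)
    print(sigma2, h0/(2*tau))                 # numerical vs. exact, should agree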
Bearings fault detection in helicopters using frequency readjustment and cyclostationary analysis
NASA Astrophysics Data System (ADS)
Girondin, Victor; Pekpe, Komi Midzodzi; Morel, Herve; Cassar, Jean-Philippe
2013-07-01
The objective of this paper is to propose a vibration-based automated framework for dealing with local faults occurring on bearings in the transmission of a helicopter. Knowledge of the shaft speed and kinematic computation provides theoretical frequencies that reveal deteriorations on the inner and outer races, on the rolling elements, or on the cage. In practice, the theoretical frequencies of bearing faults may be shifted. They may also be masked by parasitical frequencies, because the numerous noisy vibrations and the complexity of the transmission mechanics make the signal spectrum very dense. Consequently, detection methods based on monitoring the theoretical frequencies may lead to wrong decisions. To deal with this drawback, we propose to readjust the fault frequencies from the theoretical frequencies using the redundancy introduced by the harmonics. The proposed method provides a confidence index for the readjusted frequency. Minor variations in shaft speed may induce random jitters, and changes of the contact surface or of the transmission path also introduce a random component in amplitude and phase. These random components destroy the spectral localization of frequencies and thus hide the fault occurrence in the spectrum. Under the hypothesis that these random signals can be modeled as cyclostationary signals, the envelope spectrum can reveal those hidden patterns. To provide an indicator estimating fault severity, statistics are proposed under the hypothesis that the harmonics at the readjusted frequency are corrupted with additive, normally distributed noise. In this case, the statistics computed from the spectra are chi-square distributed, and a signal-to-noise indicator is proposed. The algorithms are then tested with data from two test benches and from flight conditions. The bearing type and the radial load are the main differences between the experiments on the benches. The fault is mainly visible in the spectrum for the radially constrained bearing and only visible in the envelope spectrum for the "load-free" bearing. Concerning results in flight conditions, frequency readjustment demonstrates good performance when applied to the spectrum, showing that a fully automated bearing decision procedure is applicable for operational helicopter monitoring.
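The envelope-spectrum step can be illustrated with a synthetic bearing-like signal in which a fault frequency amplitude-modulates a structural resonance: the raw spectrum hides the fault line, while the envelope spectrum recovers it. All frequencies below are illustrative (Python):

    import numpy as np
    from scipy.signal import hilbert

    fs, T = 20000, 2.0
    t = np.arange(0, T, 1/fs)
    f_fault, f_res = 87.0, 3500.0              # fault and resonance frequencies
    rng = np.random.default_rng(3)
    x = (1 + 0.8*np.cos(2*np.pi*f_fault*t)) * np.cos(2*np.pi*f_res*t)
    x += 0.5*rng.normal(size=t.size)

    env = np.abs(hilbert(x))                   # amplitude envelope
    env -= env.mean()
    spec = np.abs(np.fft.rfft(env))
    freqs = np.fft.rfftfreq(env.size, 1/fs)
    print(freqs[np.argmax(spec)])              # ~87 Hz, the hidden fault frequency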
Sando, Roy; Sando, Steven K.; McCarthy, Peter M.; Dutton, DeAnn M.
2016-04-05
The U.S. Geological Survey (USGS), in cooperation with the Montana Department of Natural Resources and Conservation, completed a study to update methods for estimating peak-flow frequencies at ungaged sites in Montana based on peak-flow data at streamflow-gaging stations through water year 2011. The methods allow estimation of peak-flow frequencies (that is, peak-flow magnitudes, in cubic feet per second, associated with annual exceedance probabilities of 66.7, 50, 42.9, 20, 10, 4, 2, 1, 0.5, and 0.2 percent) at ungaged sites. The annual exceedance probabilities correspond to 1.5-, 2-, 2.33-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-year recurrence intervals, respectively. Regional regression analysis is a primary focus of Chapter F of this Scientific Investigations Report, and regression equations for estimating peak-flow frequencies at ungaged sites in eight hydrologic regions in Montana are presented. The regression equations are based on analysis of peak-flow frequencies and basin characteristics at 537 streamflow-gaging stations in or near Montana and were developed using generalized least squares regression or weighted least squares regression. All of the data used in calculating basin characteristics that were included as explanatory variables in the regression equations were developed for and are available through the USGS StreamStats application (http://water.usgs.gov/osw/streamstats/) for Montana. StreamStats is a Web-based geographic information system application that was created by the USGS to provide users with access to an assortment of analytical tools that are useful for water-resource planning and management. The primary purpose of the Montana StreamStats application is to provide estimates of basin characteristics and streamflow characteristics for user-selected ungaged sites on Montana streams. The regional regression equations presented in this report chapter can be conveniently solved using the Montana StreamStats application. Selected results from this study were compared with results of previous studies. For most hydrologic regions, the regression equations reported for this study had lower mean standard errors of prediction (in percent) than the previously reported regression equations for Montana. The equations presented for this study are considered to be an improvement on the previously reported equations primarily because this study (1) included 13 more years of peak-flow data; (2) included 35 more streamflow-gaging stations than previous studies; (3) used a detailed geographic information system (GIS)-based definition of the regulation status of streamflow-gaging stations, which allowed better determination of the unregulated peak-flow records that are appropriate for use in the regional regression analysis; (4) included advancements in GIS and remote-sensing technologies, which allowed more convenient calculation of basin characteristics and investigation of many more candidate basin characteristics; and (5) included advancements in computational and analytical methods, which allowed more thorough and consistent data analysis. This report chapter also presents other methods for estimating peak-flow frequencies at ungaged sites. Two methods for estimating peak-flow frequencies at ungaged sites located on the same streams as streamflow-gaging stations are described.
Additionally, envelope curves relating maximum recorded annual peak flows to contributing drainage area for each of the eight hydrologic regions in Montana are presented and compared to a national envelope curve. In addition to providing general information on characteristics of large peak flows, the regional envelope curves can be used to assess the reasonableness of peak-flow frequency estimates determined using the regression equations.
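A weighted least squares fit of the kind used for such regional equations can be sketched in a few lines; the synthetic data below (log peak flow vs. log drainage area, weights derived from record length) are placeholders, not Montana values (Python):

    import numpy as np

    rng = np.random.default_rng(4)
    n = 40
    log_area = rng.uniform(1.0, 3.5, n)            # log10(drainage area, sq. mi.)
    log_q = 1.2 + 0.75*log_area + rng.normal(0, 0.15, n)   # log10(peak flow)
    record_years = rng.integers(10, 80, n)
    w = record_years / record_years.max()          # longer records weighted more

    X = np.column_stack([np.ones(n), log_area])
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ log_q)
    print(beta)                                    # [intercept, slope] of the equation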
System and method for controlling power consumption in a computer system based on user satisfaction
Yang, Lei; Dick, Robert P; Chen, Xi; Memik, Gokhan; Dinda, Peter A; Shy, Alex; Ozisikyilmaz, Berkin; Mallik, Arindam; Choudhary, Alok
2014-04-22
Systems and methods for controlling power consumption in a computer system. For each of a plurality of interactive applications, the method changes a frequency at which a processor of the computer system runs, receives an indication of user satisfaction, determines a relationship between the changed frequency and the user satisfaction of the interactive application, and stores the determined relationship information. The determined relationship can distinguish between different users and different interactive applications. A frequency may be selected from the discrete frequencies at which the processor of the computer system runs based on the determined relationship information for a particular user and a particular interactive application running on the processor of the computer system. The processor may be adapted to run at the selected frequency.
Spectral performance of Square Kilometre Array Antennas - II. Calibration performance
NASA Astrophysics Data System (ADS)
Trott, Cathryn M.; de Lera Acedo, Eloy; Wayth, Randall B.; Fagnoni, Nicolas; Sutinjo, Adrian T.; Wakley, Brett; Punzalan, Chris Ivan B.
2017-09-01
We test the bandpass smoothness performance of two prototype Square Kilometre Array (SKA) SKA1-Low log-periodic dipole antennas, SKALA2 and SKALA3 ('SKA Log-periodic Antenna'), and the current dipole from the Murchison Widefield Array (MWA) precursor telescope. Throughout this paper, we refer to the output complex-valued voltage response of an antenna connected to a low-noise amplifier as the dipole bandpass. In Paper I, the bandpass spectral response of the log-periodic antenna being developed for the SKA1-Low was estimated using numerical electromagnetic simulations, analysed using low-order polynomial fittings, and compared with the HERA antenna against the delay spectrum metric. In this work, realistic simulations of the SKA1-Low instrument, including frequency-dependent primary beam shapes and array configuration, are used with a weighted least-squares polynomial estimator to assess the ability of a given prototype antenna to perform the SKA Epoch of Reionisation (EoR) statistical experiments. This work complements the ideal estimator tolerances computed for the proposed EoR science experiments in Trott & Wayth with the realized performance of an optimal and standard estimation (calibration) procedure. With a sufficient sky calibration model at higher frequencies, all antennas have bandpasses that are sufficiently smooth to meet the tolerances described in Trott & Wayth to perform the EoR statistical experiments, and these are primarily limited by an adequate sky calibration model and the thermal noise level in the calibration data. At frequencies of the Cosmic Dawn, which is of principal interest to SKA as one of the first next-generation telescopes capable of accessing higher redshifts, the MWA dipole and SKALA3 antenna have adequate performance, while the SKALA2 design will impede the ability to explore this era.
Beat frequency interference pattern characteristics study
NASA Technical Reports Server (NTRS)
Ott, J. H.; Rice, J. S.
1981-01-01
The frequency spectra and corresponding beat frequencies created by the relative motions between multiple Solar Power Satellites due to solar wind, lunar gravity, etc., were analyzed. The results were derived mathematically and verified through computer simulation. Frequency spectra plots were computer generated. Detailed computations were made for the following seven locations in the continental US: Houston, Tx.; Seattle, Wa.; Miami, Fl.; Chicago, Il.; New York, NY; Los Angeles, Ca.; and Barberton, Oh.
Searching for periodic sources with LIGO. II. Hierarchical searches
NASA Astrophysics Data System (ADS)
Brady, Patrick R.; Creighton, Teviet
2000-04-01
The detection of quasi-periodic sources of gravitational waves requires the accumulation of signal to noise over long observation times. This represents the most difficult data analysis problem facing experimenters with detectors such as those at LIGO. If not removed, Earth-motion induced Doppler modulations and intrinsic variations of the gravitational-wave frequency make the signals impossible to detect. These effects can be corrected (removed) using a parametrized model for the frequency evolution. In a previous paper, we introduced such a model and computed the number of independent parameter space points for which corrections must be applied to the data stream in a coherent search. Since this number increases with the observation time, the sensitivity of a search for continuous gravitational-wave signals is computationally bound when data analysis proceeds at a similar rate to data acquisition. In this paper, we extend the formalism developed by Brady et al. [Phys. Rev. D 57, 2101 (1998)], and we compute the number of independent corrections Np(ΔT,N) required for incoherent search strategies. These strategies rely on the method of stacked power spectra: a demodulated time series is divided into N segments of length ΔT, each segment is Fourier transformed, a power spectrum is computed, and the N spectra are summed up. This method is incoherent; phase information is lost from segment to segment. Nevertheless, power from a signal with fixed frequency (in the corrected time series) is accumulated in a single frequency bin, and amplitude signal to noise accumulates as ~N^(1/4) (assuming the segment length ΔT is held fixed). For fixed available computing power, there are optimal values for N and ΔT which maximize the sensitivity of a search in which data analysis takes a total time NΔT. We estimate that the optimal sensitivity of an all-sky search that uses incoherent stacks is a factor of 2-4 better than achieved using coherent Fourier transforms, assuming the same available computing power; incoherent methods are computationally efficient at exploring large parameter spaces. We also consider a two-stage hierarchical search in which candidate events from a search using short data segments are followed up in a search using longer data segments. This hierarchical strategy yields a further 20-60% improvement in sensitivity in all-sky (or directed) searches for old (≥1000 yr) slow (≤200 Hz) pulsars, and for young (≥40 yr) fast (≤1000 Hz) pulsars. Assuming enhanced LIGO detectors (LIGO-II) and 10^12 flops of effective computing power, we examine the sensitivity to sources in three specialized classes. A limited area search for pulsars in the Galactic core would detect objects with gravitational ellipticities of ε ≳ 5×10^-6 at 200 Hz; such limits provide information about the strength of the crust in neutron stars. Gravitational waves emitted by unstable r-modes of newborn neutron stars would be detected out to distances of ~8 Mpc, if the r-modes saturate at a dimensionless amplitude of order unity and an optical supernova provides the position of the source on the sky. In searches targeting low-mass x-ray binary systems (in which accretion-driven spin up is balanced by gravitational-wave spin down), it is important to use information from electromagnetic observations to determine the orbital parameters as accurately as possible.
An estimate of the difficulty of these searches suggests that objects with x-ray fluxes exceeding 2×10^-8 erg cm^-2 s^-1 would be detected using the enhanced interferometers in their broadband configuration. This puts Sco X-1 on the verge of detectability in a broadband search; the amplitude signal to noise would be increased by a factor of order ~5-10 by operating the interferometer in a signal-recycled, narrow-band configuration. Further work is needed to determine the optimal search strategy when limited information is available about the frequency evolution of a source in a targeted search.
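The stack-slide idea in the abstract is easy to demonstrate: split a time series into N segments, compute one power spectrum per segment, and sum. Phase coherence is lost, but a fixed-frequency signal accumulates in one bin. The sampling rate, signal amplitude, and segment counts below are arbitrary (Python):

    import numpy as np

    rng = np.random.default_rng(5)
    fs, seg_len, n_seg = 1024, 4096, 32
    f0, amp = 100.0, 0.05                      # weak monochromatic signal
    t = np.arange(seg_len * n_seg) / fs
    x = amp*np.sin(2*np.pi*f0*t) + rng.normal(size=t.size)

    segs = x.reshape(n_seg, seg_len)
    power = np.abs(np.fft.rfft(segs, axis=1))**2
    stacked = power.sum(axis=0)                # incoherent sum of N spectra
    freqs = np.fft.rfftfreq(seg_len, 1/fs)
    print(freqs[np.argmax(stacked)])           # recovers ~100 Hz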
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhan, Xianyuan; Aziz, H. M. Abdul; Ukkusuri, Satish V.
2015-11-19
Our study investigates the Multivariate Poisson-lognormal (MVPLN) model that jointly models crash frequency and severity accounting for correlations. Ordinary univariate count models analyze crashes of different severity levels separately, ignoring the correlations among severity levels. The MVPLN model is capable of incorporating a general correlation structure and accounts for the overdispersion in the data, which leads to a superior fit. However, the traditional estimation approach for the MVPLN model is computationally expensive, which often limits its use in practice. In this work, a parallel sampling scheme is introduced to improve the original Markov Chain Monte Carlo (MCMC) estimation approach of the MVPLN model, which significantly reduces the model estimation time. Two MVPLN models are developed using pedestrian-vehicle crash data collected in New York City from 2002 to 2006 and highway-injury data from Washington State (5-year data from 1990 to 1994). The Deviance Information Criterion (DIC) is used to evaluate the model fitting. The estimation results show that the MVPLN models provide a superior fit over univariate Poisson-lognormal (PLN), univariate Poisson, and Negative Binomial models. Moreover, the correlations among the latent effects of different severity levels are found to be significant in both datasets, which justifies the importance of jointly modeling crash frequency and severity accounting for correlations.
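The data model behind the MVPLN can be sketched by simulation: counts for two severity levels share correlated lognormal latent effects, producing overdispersed, correlated crash counts. The parameters are illustrative, and full MCMC estimation is beyond this sketch (Python):

    import numpy as np

    rng = np.random.default_rng(6)
    n_sites = 500
    mu = np.array([1.0, 0.2])                  # log-means for two severity levels
    cov = np.array([[0.30, 0.18],              # latent covariance: positive
                    [0.18, 0.40]])             # correlation across severities
    eps = rng.multivariate_normal(mu, cov, size=n_sites)
    counts = rng.poisson(np.exp(eps))          # overdispersed, correlated counts

    print(counts.mean(axis=0))                 # marginal mean counts
    print(np.corrcoef(counts.T)[0, 1])         # induced cross-severity correlation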
Sando, Steven K.; Sando, Roy; McCarthy, Peter M.; Dutton, DeAnn M.
2016-04-05
The climatic conditions of the specific time period during which peak-flow data were collected at a given streamflow-gaging station (hereinafter referred to as gaging station) can substantially affect how well the peak-flow frequency (hereinafter referred to as frequency) results represent long-term hydrologic conditions. Differences in the timing of the periods of record can result in substantial inconsistencies in frequency estimates for hydrologically similar gaging stations. Potential for inconsistency increases with decreasing peak-flow record length. The representativeness of the frequency estimates for a short-term gaging station can be adjusted by various methods, including weighting the at-site results in association with frequency estimates from regional regression equations (RREs) by using the Weighted Independent Estimates (WIE) program. Also, for gaging stations that cannot be adjusted by using the WIE program because of regulation or drainage areas too large for application of RREs, frequency estimates might be improved by using record extension procedures, including a mixed-station analysis using the maintenance of variance type I (MOVE.1) procedure. The U.S. Geological Survey, in cooperation with the Montana Department of Transportation and the Montana Department of Natural Resources and Conservation, completed a study to provide adjusted frequency estimates for selected gaging stations through water year 2011. The purpose of Chapter D of this Scientific Investigations Report is to present adjusted frequency estimates for 504 selected streamflow-gaging stations in or near Montana based on data through water year 2011. Estimates of peak-flow magnitudes for the 66.7-, 50-, 42.9-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probabilities are reported. These annual exceedance probabilities correspond to the 1.5-, 2-, 2.33-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-year recurrence intervals, respectively. The at-site frequency estimates were adjusted by weighting with frequency estimates from RREs using the WIE program for 438 selected gaging stations in Montana. These 438 selected gaging stations (1) had periods of record less than or equal to 40 years, (2) represented unregulated or minor regulation conditions, and (3) had drainage areas less than about 2,750 square miles. The weighted-average frequency estimates obtained by weighting with RREs generally are considered to provide improved frequency estimates. In some cases, there are substantial differences among the at-site frequency estimates, the regression-equation frequency estimates, and the weighted-average frequency estimates. In these cases, thoughtful consideration should be applied when selecting the appropriate frequency estimate. Some factors that might be considered when selecting the appropriate frequency estimate include (1) whether the specific gaging station has peak-flow characteristics that distinguish it from most other gaging stations used in developing the RREs for the hydrologic region; and (2) the length of the peak-flow record and the general climatic characteristics during the period when the peak-flow data were collected.
For critical structure-design applications, a conservative approach would be to select the higher of the at-site frequency estimate and the weighted-average frequency estimate. The mixed-station MOVE.1 procedure generally was applied in cases where three or more gaging stations were located on the same large river and some of the gaging stations could not be adjusted using the weighted-average method because of regulation or drainage areas too large for application of RREs. The mixed-station MOVE.1 procedure was applied to 66 selected gaging stations on 19 large rivers. The general approach for using mixed-station record extension procedures to adjust at-site frequencies involved (1) determining appropriate base periods for the gaging stations on the large rivers, (2) synthesizing peak-flow data for the gaging stations with incomplete peak-flow records during the base periods by using the mixed-station MOVE.1 procedure, and (3) conducting frequency analysis on the combined recorded and synthesized peak-flow data for each gaging station. Frequency estimates for the combined recorded and synthesized datasets for 66 gaging stations with incomplete peak-flow records during the base periods are presented. The uncertainties in the mixed-station record extension results are difficult to directly quantify; thus, it is important to understand the intended use of the estimated frequencies based on analysis of the combined recorded and synthesized datasets. The estimated frequencies are considered general estimates of frequency relations among gaging stations on the same stream channel that might be expected if the gaging stations had been gaged during the same long-term base period. However, because the mixed-station record extension procedures involve secondary statistical analysis with accompanying errors, the uncertainty of the frequency estimates is larger than would be obtained by collecting systematic records for the same number of years in the base period.
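The weighting step resembles standard inverse-variance combination; a sketch in that spirit, with purely illustrative numbers rather than output of the WIE program, is below (Python):

    # Combine an at-site estimate with a regional-regression estimate of
    # log10(Q) using weights proportional to inverse variances.
    at_site, var_site = 3.80, 0.012     # log10(cfs) and its variance
    regional, var_reg = 3.62, 0.025
    w_site = (1/var_site) / (1/var_site + 1/var_reg)
    weighted = w_site*at_site + (1 - w_site)*regional
    var_weighted = 1.0 / (1/var_site + 1/var_reg)  # smaller than either input
    print(10**weighted, var_weighted)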
Mitigating leakage errors due to cavity modes in a superconducting quantum computer
NASA Astrophysics Data System (ADS)
McConkey, T. G.; Béjanin, J. H.; Earnest, C. T.; McRae, C. R. H.; Pagel, Z.; Rinehart, J. R.; Mariantoni, M.
2018-07-01
A practical quantum computer requires quantum bit (qubit) operations with low error probabilities in extensible architectures. We study a packaging method that makes it possible to address hundreds of superconducting qubits by means of coaxial Pogo pins. A qubit chip is housed in a superconducting box, where both box and chip dimensions lead to unwanted modes that can interfere with qubit operations. We analyze these interference effects in the context of qubit coherent leakage and qubit decoherence induced by damped modes. We propose two methods, half-wave fencing and antinode pinning, to mitigate the resulting errors by detuning the resonance frequency of the modes from the qubit frequency. We perform electromagnetic field simulations indicating that the resonance frequency of the modes increases with the number of installed pins and can be engineered to be significantly higher than the highest qubit frequency. We estimate that the error probabilities and decoherence rates due to suitably shifted modes in realistic scenarios can be up to two orders of magnitude lower than the state-of-the-art superconducting qubit error and decoherence rates. Our methods can be extended to different types of packages that do not rely on Pogo pins. Conductive bump bonds, for example, can serve the same purpose in qubit architectures based on flip chip technology. Metalized vias, instead, can be used to mitigate modes due to the increasing size of the dielectric substrate on which qubit arrays are patterned.
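As a rough guide to why package dimensions set the interfering mode frequencies, the ideal rectangular-cavity formula f(m,n,l) = (c/2)*sqrt((m/a)^2 + (n/b)^2 + (l/d)^2) can be evaluated directly; the dimensions here are illustrative, not the paper's package (Python):

    import numpy as np

    c = 2.998e8                             # speed of light (m/s)
    a, b, d = 0.05, 0.05, 0.003             # illustrative box dimensions (m)
    modes = []
    for m in range(3):
        for n in range(3):
            for l in range(2):
                if (m > 0) + (n > 0) + (l > 0) >= 2:   # valid cavity modes
                    f = 0.5*c*np.sqrt((m/a)**2 + (n/b)**2 + (l/d)**2)
                    modes.append((f/1e9, (m, n, l)))
    for f_ghz, idx in sorted(modes)[:5]:
        print(f"{f_ghz:6.2f} GHz  mode {idx}")   # lowest mode ~4.2 GHz here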
Harmonic analysis of the DTU10 global gravity anomalies
NASA Astrophysics Data System (ADS)
Abrykosov, O.; Förste, Ch.; Gruber, Ch.; Shako, R.; Barthelmes, F.
2012-04-01
We have computed Earth gravity models to degree/order 5400 and 10800 (in terms of ellipsoidal and spherical harmonics) from a rigorous integration of the 2'x2' and 1'x1' global grids of gravity anomalies provided by the Danish Technical University (DTU). The gravity signal recovered from the DTU10 data shows 1) a strong dependency on the truncation of the EGM2008 gravity model, which was used to fill in land areas in the DTU10 grids, and 2) an irregular behaviour at frequencies beyond the resolution of the EGM2008. We discuss the gravity signal and its accuracy estimation computed from the complete DTU10 grids as well as separately from the data over land and ocean areas.
Patil, Ravindra B; Krishnamoorthy, P; Sethuraman, Shriram
2015-01-01
This work proposes a novel Gaussian Mixture Model (GMM) based approach for accurate tracking of the arterial wall and subsequent computation of the distension waveform using the Radio Frequency (RF) ultrasound signal. The approach was evaluated on ultrasound RF data acquired from an artery-mimicking flow phantom using a prototype ultrasound system. The effectiveness of the proposed algorithm is demonstrated by comparison with existing wall tracking algorithms. The experimental results show that the proposed method provides a 20% reduction in the error margin compared to existing approaches in tracking the arterial wall movement. This approach, coupled with an ultrasound system, can be used to estimate the arterial compliance parameters required for screening of cardiovascular disorders.
Wang, Yu; Jiang, Jingfeng
2018-01-01
Shear wave elastography (SWE) has been used to measure viscoelastic properties for characterization of fibrotic livers. In this technique, external mechanical vibrations or acoustic radiation forces are first transmitted to the tissue being imaged to induce shear waves. Ultrasonically measured displacement/velocity is then utilized to obtain elastographic measurements related to shear wave propagation. Using an open-source wave simulator, k-Wave, we conducted a case study of the relationship between plane shear wave measurements and the microstructure of fibrotic liver tissues. In particular, three different virtual tissue models (i.e., a histology-based model, a statistics-based model, and a simple inclusion model) were used to represent underlying microstructures of fibrotic liver tissues. We found that the underlying microstructures affected the estimated mean group shear wave speed (SWS) under the plane shear wave assumption by as much as 56%. Also, the elastic shear wave scattering resulted in frequency-dependent attenuation coefficients and introduced changes in the estimated group SWS. Similarly, the slope of group SWS changes with respect to the excitation frequency differed by as much as 78% among the three models investigated. This new finding may motivate further studies examining how elastic scattering may contribute to frequency-dependent shear wave dispersion and attenuation in biological tissues.
A new method of hybrid frequency hopping signals selection and blind parameter estimation
NASA Astrophysics Data System (ADS)
Zeng, Xiaoyu; Jiao, Wencheng; Sun, Huixian
2018-04-01
Frequency hopping communication is widely used in military communications worldwide. In the case of single-channel reception, few methods can process multiple frequency hopping signals both effectively and simultaneously. A method of hybrid FH signal selection and blind parameter estimation is proposed. The method makes use of spectral transformation, spectral entropy calculation, and basic PRI transformation theory to realize the sorting and parameter estimation of the components in the hybrid frequency hopping signal. The simulation results show that this method can correctly classify the frequency hopping component signals; the estimation error of the frequency hopping period is about 5% and the estimation error of the frequency hopping frequency is less than 1% when the SNR is 10 dB. However, the performance of this method deteriorates seriously at low SNR.
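The spectral-entropy step can be sketched as follows: a dwell containing a hop tone has a concentrated (low-entropy) spectrum, while noise-only segments are flat (high-entropy), so thresholding this statistic separates signal segments before parameter estimation. Values are illustrative (Python):

    import numpy as np

    def spectral_entropy(x):
        # Shannon entropy of the normalized power spectrum, scaled to [0, 1].
        p = np.abs(np.fft.rfft(x))**2
        p = p / p.sum()
        p = p[p > 0]
        return -np.sum(p * np.log2(p)) / np.log2(p.size)

    rng = np.random.default_rng(7)
    fs, n = 10000, 1024
    t = np.arange(n) / fs
    tone = np.sin(2*np.pi*1200.0*t) + 0.1*rng.normal(size=n)   # within a hop dwell
    noise = rng.normal(size=n)                                  # no signal present
    print(spectral_entropy(tone), spectral_entropy(noise))      # low vs. high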
Discrimination of Mixed Taste Solutions using Ultrasonic Wave and Soft Computing
NASA Astrophysics Data System (ADS)
Kojima, Yohichiro; Kimura, Futoshi; Mikami, Tsuyoshi; Kitama, Masataka
In this study, ultrasonic wave acoustic properties of mixed taste solutions were investigated, and the possibility of taste sensing based on the acoustical properties obtained was examined. In previous studies, properties of solutions were discriminated based on sound velocity, amplitude, and frequency characteristics of ultrasonic waves propagating through the five basic taste solutions and marketed beverages. However, to make this method applicable to beverages that contain many taste substances, further studies are required. In this paper, the waveform of an ultrasonic wave with a frequency of approximately 5 MHz propagating through mixed solutions composed of sweet and salty substances was measured. As a result, differences among solutions were clearly observed as differences in their acoustic properties. Furthermore, these mixed solutions were discriminated by a self-organizing neural network, and the volume ratio of the mixed solutions was estimated by a distance-type fuzzy reasoning method. The possibility of taste sensing was thus shown using ultrasonic wave acoustic properties and soft computing methods such as the self-organizing neural network and the distance-type fuzzy reasoning method.
NASA Astrophysics Data System (ADS)
Gardezi, A.; Umer, T.; Butt, F.; Young, R. C. D.; Chatwin, C. R.
2016-04-01
A spatial domain optimal trade-off Maximum Average Correlation Height (SPOT-MACH) filter has been previously developed and shown to have advantages over frequency domain implementations in that it can be made locally adaptive to spatial variations in the input image background clutter and normalised for local intensity changes. The main concern in using the SPOT-MACH is its computationally intensive nature; however, enhancement techniques have previously been proposed to make its execution time comparable to that of its frequency domain counterpart. In this paper a novel approach is discussed which uses VANET parameters coupled with the SPOT-MACH in order to minimise the extensive processing of the large video dataset acquired from the Pakistan motorway surveillance system. The use of VANET parameters provides an estimation criterion for the flow of traffic on the Pakistan motorway network and acts as a precursor to the training algorithm, contributing substantially to minimizing the computational complexity of the proposed monitoring system.
Kubo, N
1995-04-01
To improve the quality of single-photon emission computed tomographic (SPECT) images, a restoration filter has been developed. This filter was designed according to practical "least squares filter" theory, which requires knowledge of the object power spectrum and the noise power spectrum. The object power spectrum is estimated from the power spectrum of a projection, provided the high-frequency power spectrum of the projection is adequately approximated by a polynomial exponential expression. A study of restoration with the filter based on a projection power spectrum was conducted and compared with the "Butterworth" filtering method (cut-off frequency of 0.15 cycles/pixel) and "Wiener" filtering (with a constant signal-to-noise power spectrum ratio). Normalized mean-squared errors (NMSE) were computed for a phantom consisting of two line sources located in a 99mTc-filled cylinder. The NMSE of the "Butterworth" filter, the "Wiener" filter, and the filter based on a projection power spectrum were 0.77, 0.83, and 0.76, respectively. Clinically, brain SPECT images processed with this new restoration filter showed improved contrast. Thus, this filter may be useful in the diagnosis of SPECT images.
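A generic frequency-domain least-squares (Wiener-type) restoration of the kind compared above can be sketched in 1D: two "line sources" blurred by a Gaussian response are restored with H*/(|H|^2 + NSR), using a constant noise-to-signal ratio. All values are illustrative (Python):

    import numpy as np

    rng = np.random.default_rng(8)
    n = 256
    x = np.zeros(n); x[100] = 1.0; x[150] = 1.0       # two line sources
    psf = np.exp(-0.5*((np.arange(n) - n//2)/3.0)**2)
    psf /= psf.sum()
    H = np.fft.fft(np.fft.ifftshift(psf))             # center PSF at index 0
    y = np.fft.ifft(np.fft.fft(x)*H).real + 0.01*rng.normal(size=n)

    nsr = 1e-3                                        # assumed noise-to-signal ratio
    W = np.conj(H) / (np.abs(H)**2 + nsr)
    x_hat = np.fft.ifft(np.fft.fft(y)*W).real
    print(np.sort(np.argsort(x_hat)[-2:]))            # indices of the two restored peaks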
Computational and Experimental Unsteady Pressures for Alternate SLS Booster Nose Shapes
NASA Technical Reports Server (NTRS)
Braukmann, Gregory J.; Streett, Craig L.; Kleb, William L.; Alter, Stephen J.; Murphy, Kelly J.; Glass, Christopher E.
2015-01-01
Delayed Detached Eddy Simulation (DDES) predictions of the unsteady transonic flow about a Space Launch System (SLS) configuration were made with the Fully UNstructured Three-Dimensional (FUN3D) flow solver. The computational predictions were validated against results from a 2.5% model tested in the NASA Ames 11-Foot Transonic Unitary Plan Facility. The peak C(sub p,rms) value was under-predicted for the baseline, Mach 0.9 case, but the general trends of high C(sub p,rms) levels behind the forward attach hardware, reducing as one moves away both streamwise and circumferentially, were captured. The frequency of the peak power in power spectral density estimates was consistently under-predicted. Five alternate booster nose shapes were assessed, and several were shown to reduce the surface pressure fluctuations, both as predicted by the computations and verified by the wind tunnel results.
Using Internet search engines to estimate word frequency.
Blair, Irene V; Urland, Geoffrey R; Ma, Jennifer E
2002-05-01
The present research investigated Internet search engines as a rapid, cost-effective alternative for estimating word frequencies. Frequency estimates for 382 words were obtained and compared across four methods: (1) Internet search engines, (2) the Kucera and Francis (1967) analysis of a traditional linguistic corpus, (3) the CELEX English linguistic database (Baayen, Piepenbrock, & Gulikers, 1995), and (4) participant ratings of familiarity. The results showed that Internet search engines produced frequency estimates that were highly consistent with those reported by Kucera and Francis and those calculated from CELEX, highly consistent across search engines, and very reliable over a 6-month period. Additional results suggested that Internet search engines are an excellent option when traditional word frequency analyses do not contain the necessary data (e.g., estimates for forenames and slang). In contrast, participants' familiarity judgments did not correspond well with the more objective estimates of word frequency. Researchers are advised to use search engines with large databases (e.g., AltaVista) to ensure the greatest representativeness of the frequency estimates.
NASA Astrophysics Data System (ADS)
Guo, Jiang; Geng, Jianghui
2017-12-01
Significant time-varying inter-frequency clock biases (IFCBs) within GPS observations prevent the application of the legacy L1/L2 ionosphere-free clock products on L5 signals. Conventional approaches overcoming this problem are to estimate L1/L5 ionosphere-free clocks in addition to their L1/L2 counterparts or to compute IFCBs between the L1/L2 and L1/L5 clocks which are later modeled through a harmonic analysis. In contrast, we start from the undifferenced uncombined GNSS model and propose an alternative approach where a second satellite clock parameter dedicated to the L5 signals is estimated along with the legacy L1/L2 clock. In this manner, we do not need to rely on the correlated L1/L2 and L1/L5 ionosphere-free observables which complicates triple-frequency GPS stochastic models, or account for the unfavorable time-varying hardware biases in undifferenced GPS functional models since they can be absorbed by the L5 clocks. An extra advantage over the ionosphere-free model is that external ionosphere constraints can potentially be introduced to improve PPP. With 27 days of triple-frequency GPS data from globally distributed stations, we find that the RMS of the positioning differences between our GPS model and all conventional models is below 1 mm for all east, north and up components, demonstrating the effectiveness of our model in addressing triple-frequency observations and time-varying IFCBs. Moreover, we can combine the L1/L2 and L5 clocks derived from our model to calculate precisely the L1/L5 clocks which in practice only depart from their legacy counterparts by less than 0.006 ns in RMS. Our triple-frequency GPS model proves convenient and efficient in combating time-varying IFCBs and can be generalized to more than three frequency signals for satellite clock determination.
NASA Astrophysics Data System (ADS)
Qiu, Zhaoyang; Wang, Pei; Zhu, Jun; Tang, Bin
2016-12-01
The Nyquist folding receiver (NYFR) is a novel ultra-wideband receiver architecture which can realize wideband reception with a small amount of equipment. The linear frequency modulated/binary phase shift keying (LFM/BPSK) hybrid modulated signal is a novel kind of wideband low-probability-of-intercept signal. The NYFR is an effective architecture for intercepting the LFM/BPSK signal, and the LFM/BPSK signal intercepted by the NYFR acquires the local oscillator modulation. A parameter estimation algorithm for the NYFR output signal is proposed. Based on the NYFR prior information, the chirp singular value ratio spectrum is proposed to estimate the chirp rate. Then, based on the output self-characteristic, a matching component function is designed to estimate the Nyquist zone (NZ) index. Finally, a matching code and a subspace method are employed to estimate the phase change points and code length. Compared with existing methods, the proposed algorithm has better performance. It also does not require a multi-channel structure, which means the computational complexity of the NZ index estimation is small. The simulation results demonstrate the efficacy of the proposed algorithm.
Octree-based Global Earthquake Simulations
NASA Astrophysics Data System (ADS)
Ramirez-Guzman, L.; Juarez, A.; Bielak, J.; Salazar Monroy, E. F.
2017-12-01
Seismological research has motivated recent efforts to construct more accurate three-dimensional (3D) velocity models of the Earth, perform global simulations of wave propagation to validate models, and study the interaction of seismic fields with 3D structures. However, traditional methods for seismogram computation at global scales are limited by computational resources, relying primarily on methods such as normal mode summation or two-dimensional numerical methods. We present an octree-based finite element implementation to perform global earthquake simulations with 3D models, using topography and bathymetry with a staircase approximation, as modeled by the Carnegie Mellon Finite Element Toolchain Hercules (Tu et al., 2006). To verify the implementation, we compared the synthetic seismograms computed in a spherical earth against waveforms calculated using normal mode summation for the Preliminary Reference Earth Model (PREM) for a point source representation of the 2014 Mw 7.3 Papanoa, Mexico earthquake. We considered a 3 km-thick ocean layer for stations with predominantly oceanic paths. Eigenfrequencies and eigenfunctions were computed for toroidal, radial, and spheroidal oscillations in the first 20 branches. Simulations are valid at frequencies up to 0.05 Hz. The match between the waveforms computed by both approaches, especially for long period surface waves, is excellent. Additionally, we modeled the Mw 9.0 Tohoku-Oki earthquake using the USGS finite fault inversion. Topography and bathymetry from ETOPO1 are included in a mesh with more than 3 billion elements, constrained by the computational resources available. We compared estimated velocity and GPS synthetics against observations at regional and teleseismic stations of the Global Seismographic Network and discuss the differences among observations and synthetics, revealing that heterogeneity, particularly in the crust, needs to be considered.
Efficient sensor network vehicle classification using peak harmonics of acoustic emissions
NASA Astrophysics Data System (ADS)
William, Peter E.; Hoffman, Michael W.
2008-04-01
An application is proposed for detection and classification of battlefield ground vehicles using the emitted acoustic signal captured at individual sensor nodes of an ad hoc Wireless Sensor Network (WSN). We make use of the harmonic characteristics of the acoustic emissions of battlefield vehicles to reduce both the computations carried out on the sensor node and the data transmitted to the fusion center for reliable and efficient classification of targets. Previous approaches focus on the lower frequency band of the acoustic emissions, up to 500 Hz; however, we show in the proposed application how efficient discrimination between battlefield vehicles is performed using features extracted from higher frequency bands (50-1500 Hz). The application shows that selective time domain acoustic features surpass equivalent spectral features. Collaborative signal processing is utilized, such that estimation of certain signal model parameters is carried out by the sensor node, in order to reduce the communication between the sensor node and the fusion center, while the remaining model parameters are estimated at the fusion center. The data transmitted from the sensor node to the fusion center amount to 1-5% of the sampled acoustic signal at the node. A variety of classification schemes were examined, such as maximum likelihood, vector quantization, and artificial neural networks. Evaluation of the proposed application, through processing of an acoustic data set and comparison with previous results, shows improvement not only in the number of computations but also in the detection and false alarm rates.
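The harmonic feature extraction described above can be sketched by estimating a fundamental from the band of interest and reading amplitudes at its first few harmonics as the compact feature vector to transmit; the signal and all parameters are synthetic stand-ins (Python):

    import numpy as np

    rng = np.random.default_rng(9)
    fs, n = 4096, 8192
    t = np.arange(n) / fs
    f0 = 73.0                                        # engine fundamental (assumed)
    x = sum(a*np.sin(2*np.pi*f0*k*t) for k, a in enumerate([1.0, 0.6, 0.4, 0.25], 1))
    x += 0.3*rng.normal(size=n)

    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(n, 1/fs)
    band = (freqs > 50) & (freqs < 1500)             # band used in the application
    f0_hat = freqs[band][np.argmax(spec[band])]      # crude fundamental estimate
    h = np.array([spec[np.argmin(np.abs(freqs - k*f0_hat))] for k in range(1, 5)])
    print(f0_hat, np.round(h / h.max(), 2))          # feature vector to transmit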
A multimodal approach to estimating vigilance using EEG and forehead EOG
NASA Astrophysics Data System (ADS)
Zheng, Wei-Long; Lu, Bao-Liang
2017-04-01
Objective. Covert aspects of ongoing user mental states provide key context information for user-aware human computer interactions. In this paper, we focus on the problem of estimating the vigilance of users using EEG and EOG signals. Approach. The PERCLOS index as vigilance annotation is obtained from eye tracking glasses. To improve the feasibility and wearability of vigilance estimation devices for real-world applications, we adopt a novel electrode placement for forehead EOG and extract various eye movement features, which contain the principal information of traditional EOG. We explore the effects of EEG from different brain areas and combine EEG and forehead EOG to leverage their complementary characteristics for vigilance estimation. Considering that the vigilance of users is a dynamic changing process because the intrinsic mental states of users involve temporal evolution, we introduce continuous conditional neural field and continuous conditional random field models to capture dynamic temporal dependency. Main results. We propose a multimodal approach to estimating vigilance by combining EEG and forehead EOG and incorporating the temporal dependency of vigilance into model training. The experimental results demonstrate that modality fusion can improve the performance compared with a single modality, EOG and EEG contain complementary information for vigilance estimation, and the temporal dependency-based models can enhance the performance of vigilance estimation. From the experimental results, we observe that theta and alpha frequency activities are increased, while gamma frequency activities are decreased in drowsy states in contrast to awake states. Significance. The forehead setup allows for the simultaneous collection of EEG and EOG and achieves comparative performance using only four shared electrodes in comparison with the temporal and posterior sites.
Identification of site frequencies from building records
Celebi, M.
2003-01-01
A simple procedure to identify site frequencies using earthquake response records from the roofs and basements of buildings is presented. For this purpose, data from five different buildings are analyzed using only spectral analysis techniques. Additional data, such as free-field records in close proximity to the buildings and site characterization data, are also used to estimate site frequencies and thereby provide convincing evidence and confirmation of the site frequencies inferred from the building records. Furthermore, a simple code formula is used to calculate site frequencies and compare them with the site frequencies identified from records. Results show that the simple procedure is effective in identifying site frequencies and provides relatively reliable estimates when compared with other methods. Therefore, the simple procedure for estimating site frequencies using earthquake records can be useful in adding to the database of site frequencies. Such databases can be used to better estimate site frequencies at sites with similar geological structures.
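The core spectral step can be sketched as a roof-to-basement (or surface-to-reference) spectral ratio whose peak marks the site frequency; the records below are synthetic, with a 2.5 Hz resonance imposed (Python):

    import numpy as np

    rng = np.random.default_rng(10)
    fs, n = 100, 4096
    basement = rng.normal(size=n)                 # reference motion
    spec = np.fft.rfft(basement)
    freqs = np.fft.rfftfreq(n, 1/fs)
    gain = 1 + 4*np.exp(-0.5*((freqs - 2.5)/0.3)**2)   # crude 2.5 Hz site resonance
    surface = np.fft.irfft(spec*gain, n)          # synthetic amplified record

    ratio = np.abs(np.fft.rfft(surface)) / (np.abs(np.fft.rfft(basement)) + 1e-12)
    print(freqs[np.argmax(ratio)])                # ~2.5 Hz site frequency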
Dalle Carbonare, S; Folli, F; Patrini, E; Giudici, P; Bellazzi, R
2013-01-01
The increasing demand for health care services and the complexity of health care delivery require Health Care Organizations (HCOs) to approach clinical risk management through proper methods and tools. An important aspect of risk management is to exploit the analysis of medical injury compensation claims in order to reduce adverse events and, at the same time, to optimize the costs of health insurance policies. This work provides a probabilistic method to estimate the risk level of an HCO by computing quantitative risk indexes from medical injury compensation claims. Our method is based on the estimation of a loss probability distribution from compensation claims data through parametric and non-parametric modeling and Monte Carlo simulations. The loss distribution can be estimated both on the whole dataset and, thanks to the application of a Bayesian hierarchical model, on stratified data. The approach allows quantitative assessment of the risk structure of the HCO by analyzing the loss distribution and deriving its expected value and percentiles. We applied the proposed method to 206 cases of injuries with compensation requests collected from 1999 to the first half of 2007 by the HCO of Lodi, in the northern part of Italy. We computed the risk indexes taking into account the different clinical departments and the different hospitals involved. The approach proved to be useful for understanding the HCO risk structure in terms of frequency, severity, and expected and unexpected loss related to adverse events.
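A Monte Carlo sketch of the loss-distribution step: Poisson claim frequency with lognormal claim severity yields an annual aggregate-loss distribution whose mean and upper percentiles serve as risk indexes. The parameters are illustrative, not fitted to the Lodi claims data (Python):

    import numpy as np

    rng = np.random.default_rng(11)
    n_years = 20_000                             # simulated years
    lam = 25.0                                   # expected claims per year
    mu, sigma = 9.5, 1.2                         # lognormal severity parameters

    n_claims = rng.poisson(lam, size=n_years)
    losses = np.array([rng.lognormal(mu, sigma, size=k).sum() for k in n_claims])
    print(losses.mean())                         # expected annual loss
    print(np.percentile(losses, [50, 95, 99.5])) # risk percentiles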
Submillimeter, millimeter, and microwave spectral line catalogue, revision 3
NASA Technical Reports Server (NTRS)
Pickett, H. M.; Poynter, R. L.; Cohen, E. A.
1992-01-01
A computer-accessible catalog of submillimeter, millimeter, and microwave spectral lines in the frequency range between 0 and 10,000 GHz (i.e., wavelengths longer than 30 micrometers) is described. The catalog can be used as a planning guide or as an aid in the identification and analysis of observed spectral lines. The information listed for each spectral line includes the frequency and its estimated error, the intensity, the lower-state energy, and the quantum number assignment. This edition of the catalog has information on 206 atomic and molecular species and includes a total of 630,924 lines. The catalog was constructed using theoretical least-squares fits of published spectral lines to accepted molecular models. The associated predictions and their estimated errors are based upon the resultant fitted parameters and their covariances. Future versions of this catalog will add more atoms and molecules and update the present listings as new data appear. The catalog is available as a magnetic data tape recorded in card images, with one card image per spectral line, from the National Space Science Data Center, located at Goddard Space Flight Center.
NASA Astrophysics Data System (ADS)
Petković, Dalibor; Shamshirband, Shahaboddin; Saboohi, Hadi; Ang, Tan Fong; Anuar, Nor Badrul; Rahman, Zulkanain Abdul; Pavlović, Nenad T.
2014-07-01
The quantitative assessment of image quality is an important consideration in any type of imaging system. The modulation transfer function (MTF) is a graphical description of the sharpness and contrast of an imaging system or of its individual components; it is also known as the spatial frequency response. The MTF curve has different meanings according to the corresponding frequency. The MTF of an optical system specifies the contrast transmitted by the system as a function of image size and is determined by the inherent optical properties of the system. In this study, polynomial and radial basis function (RBF) kernels are applied as the kernel function of Support Vector Regression (SVR) to estimate and predict the MTF value of the actual optical system according to experimental tests. Instead of minimizing the observed training error, SVR_poly and SVR_rbf attempt to minimize the generalization error bound so as to achieve generalized performance. The experimental results show that an improvement in predictive accuracy and capability of generalization can be achieved by the SVR_rbf approach in comparison to the SVR_poly soft-computing methodology.
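As a sketch of the regression setup described above, the following compares polynomial- and RBF-kernel SVR on synthetic MTF-like data; the data, hyperparameters, and train/test split are illustrative assumptions, not the study's actual experiment.

```python
# Minimal sketch: comparing polynomial- and RBF-kernel SVR for MTF
# regression. Data are synthetic; C, gamma, and degree are illustrative.
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
spatial_freq = rng.uniform(0, 100, size=(200, 1))   # cycles/mm (synthetic)
mtf = np.exp(-(spatial_freq[:, 0] / 40.0) ** 2) + rng.normal(0, 0.02, 200)

train, test = slice(0, 150), slice(150, 200)
svr_poly = SVR(kernel="poly", degree=3, C=10.0)
svr_rbf = SVR(kernel="rbf", C=10.0, gamma=0.001)

for name, model in [("SVR_poly", svr_poly), ("SVR_rbf", svr_rbf)]:
    model.fit(spatial_freq[train], mtf[train])
    pred = model.predict(spatial_freq[test])
    print(name, "test MSE:", mean_squared_error(mtf[test], pred))
```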
Local Positioning System Using Flickering Infrared LEDs
Raharijaona, Thibaut; Mawonou, Rodolphe; Nguyen, Thanh Vu; Colonnier, Fabien; Boyron, Marc; Diperi, Julien; Viollet, Stéphane
2017-01-01
A minimalistic optical sensing device for indoor localization is proposed to estimate the relative position between the sensor and active markers using amplitude-modulated infrared light. The innovative insect-based sensor can measure azimuth and elevation angles with respect to two small and cheap active infrared light-emitting diodes (LEDs) flickering at two different frequencies. In comparison to a previous lensless visual sensor that we proposed for proximal localization (less than 30 cm), we implemented: (i) a minimalistic sensor in terms of small size (10 cm3), light weight (6 g), and low power consumption (0.4 W); (ii) an Arduino-compatible demodulator for fast analog signal processing requiring low computational resources; and (iii) an indoor positioning system for a mobile robotic application. Our results confirmed that the proposed sensor was able to estimate the position at a distance of 2 m with an accuracy as small as 2 cm at a sampling frequency of 100 Hz. Our sensor is also suitable for implementation in a position feedback loop for indoor robotic applications in GPS-denied environments. PMID:29099743
NASA Technical Reports Server (NTRS)
Sreenivas, Kidambi; Whitfield, David L.
1995-01-01
Two linearized solvers (time and frequency domain) based on a high-resolution numerical scheme are presented. The basic approach is to linearize the flux vector by expressing it as the sum of a mean and a perturbation. This allows the governing equations to be maintained in conservation-law form. A key difference between the time- and frequency-domain computations is that the frequency-domain computations require only one grid block irrespective of the interblade phase angle for which the flow is being computed. As a result of this, and because the governing equations for this case are steady, frequency-domain computations are substantially faster than the corresponding time-domain computations. The linearized equations are used to compute flows in turbomachinery blade rows (cascades) arising due to blade vibrations. Numerical solutions are compared to linear theory (where available) and to numerical solutions of the nonlinear Euler equations.
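A sketch of the linearization in our own notation (solver details are not given in the abstract): the flux is expanded about the mean state, and a time-harmonic perturbation ansatz, standard for such solvers, renders the frequency-domain equations steady.

```latex
% State split into mean and small perturbation (illustrative notation)
Q = \bar{Q} + q', \qquad
F(Q) \approx F(\bar{Q}) + A(\bar{Q})\, q', \qquad
A(\bar{Q}) = \left.\frac{\partial F}{\partial Q}\right|_{\bar{Q}}
% A time-harmonic ansatz q'(x,t) = \hat{q}(x)\, e^{i\omega t} at the blade
% vibration frequency \omega removes the time derivative, leaving a steady
% problem for \hat{q} at each interblade phase angle.
```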
Radio-frequency measurement in semiconductor quantum computation
NASA Astrophysics Data System (ADS)
Han, TianYi; Chen, MingBo; Cao, Gang; Li, HaiOu; Xiao, Ming; Guo, GuoPing
2017-05-01
Semiconductor quantum dots have attracted wide interest for the potential realization of quantum computation. To realize efficient quantum computation, fast manipulation and the corresponding readout are necessary. In the past few decades, considerable experimental progress in quantum manipulation has been achieved. To meet the requirements of high-speed readout, radio-frequency (RF) measurement has been developed in recent years, such as the RF quantum point contact (RF-QPC) and the RF dispersive gate sensor (RF-DGS). Here we demonstrate the principle of radio-frequency reflectometry, then review the development and applications of RF measurement, which provides a feasible way to achieve high-bandwidth readout in quantum coherent control and also enriches the methods available to study these artificial mesoscopic quantum systems. Finally, we consider the future use of radio-frequency reflectometry in the scaling-up of quantum computing models.
An Investigation of Primary School Science Teachers' Use of Computer Applications
ERIC Educational Resources Information Center
Ocak, Mehmet Akif; Akdemir, Omur
2008-01-01
This study investigated the level and frequency of science teachers' use of computer applications as an instructional tool in the classroom. The manner and frequency of science teachers' use of computer, their perceptions about integration of computer applications, and other factors contributed to changes in their computer literacy are…
NASA Astrophysics Data System (ADS)
Saccorotti, G.; Nisii, V.; Del Pezzo, E.
2008-07-01
Long-Period (LP) and Very-Long-Period (VLP) signals are the most characteristic seismic signature of volcano dynamics, and they provide important information about the physical processes occurring in magmatic and hydrothermal systems. These events are usually characterized by sharp spectral peaks, which may span several frequency decades, by emergent onsets, and by a lack of clear S-wave arrivals. The two latter features make both signal detection and location a challenging task. In this paper, we propose a processing procedure based on the Continuous Wavelet Transform of multichannel, broad-band data to simultaneously solve the signal detection and location problems. Our method consists of two steps. First, we apply a frequency-dependent threshold to the estimates of the array-averaged wavelet coherence (WCO) in order to locate the time-frequency regions spanned by coherent arrivals. For these data, we then use the time series of the complex wavelet coefficients to derive the elements of the spatial Cross-Spectral Matrix. From the eigenstructure of this matrix, we eventually estimate the kinematic signal parameters using the MUltiple SIgnal Characterization (MUSIC) algorithm. The whole procedure greatly facilitates the detection and location of weak, broad-band signals, in turn avoiding the time-frequency resolution trade-off and frequency-leakage effects which affect conventional covariance estimates based upon the Windowed Fourier Transform. The method is applied to explosion signals recorded at Stromboli volcano by either a short-period, small-aperture antenna or a large-aperture, broad-band network. The LP (0.2 < T < 2 s) components of the explosive signals are analysed using data from the small-aperture array under the plane-wave assumption. In this manner, we obtain a precise time- and frequency-localization of the directional properties of waves impinging at the array. We then extend the wavefield decomposition method using a spherical wave-front model, and analyse the VLP components (T > 2 s) of the explosion recordings from the broad-band network. Source locations obtained this way are fully compatible with those retrieved from application of more traditional (and computationally expensive) time-domain techniques, such as the Radial Semblance method.
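For orientation, a minimal MUSIC sketch on a synthetic sample covariance matrix; the paper builds its cross-spectral matrix from wavelet coefficients and uses a spherical wave-front model for the VLP analysis, whereas this sketch assumes a uniform linear array, a plane wave, and a single source.

```python
# Minimal MUSIC sketch for a uniform linear array (numpy only). Geometry,
# wavelength, and noise level are illustrative assumptions.
import numpy as np

M, N, d_over_lambda = 8, 200, 0.5      # sensors, snapshots, spacing/wavelength
theta_true = np.deg2rad(20.0)
rng = np.random.default_rng(1)

def steering(theta):
    return np.exp(-2j * np.pi * d_over_lambda * np.arange(M) * np.sin(theta))

s = np.exp(1j * 2 * np.pi * rng.random(N))            # unit-power source
x = np.outer(steering(theta_true), s)
x += 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))

R = x @ x.conj().T / N                                # sample covariance
w, v = np.linalg.eigh(R)                              # ascending eigenvalues
En = v[:, :-1]                                        # noise subspace (1 source)

grid = np.deg2rad(np.linspace(-90, 90, 721))
pseudo = [1.0 / np.linalg.norm(En.conj().T @ steering(t)) ** 2 for t in grid]
print("MUSIC estimate:", np.rad2deg(grid[int(np.argmax(pseudo))]), "deg")
```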
Shulkind, Gal; Nazarathy, Moshe
2012-12-17
We present an efficient method for system identification (nonlinear channel estimation) of the third-order nonlinear Volterra Series Transfer Function (VSTF) characterizing the four-wave-mixing nonlinear process over a coherent OFDM fiber link. Despite the seemingly large number of degrees of freedom in the VSTF (cubic in the number of frequency points), we identified a compressed VSTF representation which does not entail loss of information. Additional, slightly lossy compression may be obtained by discarding very low power VSTF coefficients associated with regions of destructive interference in the FWM phased-array effect. Based on this two-stage compressed VSTF representation, we develop a robust and efficient algorithm for nonlinear system identification (optical performance monitoring), estimating the VSTF by transmission of an extended training sequence over the OFDM link and performing just a matrix-vector multiplication at the receiver by a pseudo-inverse matrix which is pre-evaluated offline. For 512 (1024) frequency samples per channel, the VSTF measurement takes less than 1 (10) msec to complete with a computational complexity of one real-valued multiply-add operation per time sample. Relative to a naïve exhaustive three-tone test, our algorithm is far more tolerant of ASE additive noise and its acquisition time is orders of magnitude faster.
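The receiver-side step described above, sketched under assumptions: a known training matrix A maps the (compressed) VSTF coefficient vector to received samples, the pseudo-inverse is computed offline, and online estimation is one matrix-vector product. Sizes, the random training matrix, and the noise level are illustrative.

```python
# Sketch of pseudo-inverse system identification: y = A h + n, with
# A_pinv precomputed offline. All dimensions and data are synthetic.
import numpy as np

rng = np.random.default_rng(2)
n_obs, n_coef = 400, 64                 # training samples, compressed VSTF size
A = rng.standard_normal((n_obs, n_coef)) + 1j * rng.standard_normal((n_obs, n_coef))
h_true = rng.standard_normal(n_coef) + 1j * rng.standard_normal(n_coef)

A_pinv = np.linalg.pinv(A)              # precomputed offline

y = A @ h_true + 0.05 * (rng.standard_normal(n_obs) + 1j * rng.standard_normal(n_obs))
h_est = A_pinv @ y                      # online: one matrix-vector multiply
print("relative error:", np.linalg.norm(h_est - h_true) / np.linalg.norm(h_true))
```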
Jing, Fulong; Jiao, Shuhong; Hou, Changbo; Si, Weijian; Wang, Yu
2017-06-21
For targets with complex motion, such as ships fluctuating with oceanic waves and highly maneuvering airplanes, azimuth echo signals can be modeled as multicomponent quadratic frequency modulation (QFM) signals after migration compensation and phase adjustment. For the QFM signal model, the chirp rate (CR) and the quadratic chirp rate (QCR) are two important physical quantities that need to be estimated. For multicomponent QFM signals, the cross terms create a challenge for detection, which needs to be addressed. In this paper, by employing a novel multi-scale parametric symmetric self-correlation function (PSSF) and a modified scaled Fourier transform (mSFT), an effective parameter estimation algorithm, referred to as the two-dimensional product modified Lv's distribution (2D-PMLVD), is proposed for QFM signals. The 2D-PMLVD is simple and can be easily implemented by using the fast Fourier transform (FFT) and complex multiplication. The method is analyzed in the paper, including its principle, cross terms, anti-noise performance, and computational complexity. Compared to three other representative methods, the 2D-PMLVD achieves better anti-noise performance. The 2D-PMLVD, which is free of searching and has no identifiability problems, is more suitable for multicomponent situations. Through several simulations and analyses, the effectiveness of the proposed estimation algorithm is verified.
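For concreteness, one common QFM parameterization (notation illustrative; the paper's exact convention may differ) writes the instantaneous frequency as quadratic in time, with the CR and QCR as the linear and quadratic coefficients:

```latex
% QFM signal with chirp rate \mu (CR) and quadratic chirp rate \kappa (QCR)
s(t) = A \exp\!\left[ j 2\pi \left( f_0 t + \tfrac{1}{2}\mu t^{2} + \tfrac{1}{3}\kappa t^{3} \right) \right],
\qquad
f(t) = \frac{1}{2\pi}\frac{d\phi}{dt} = f_0 + \mu t + \kappa t^{2}.
```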
Dynamic Granger-Geweke causality modeling with application to interictal spike propagation
Lin, Fa-Hsuan; Hara, Keiko; Solo, Victor; Vangel, Mark; Belliveau, John W.; Stufflebeam, Steven M.; Hamalainen, Matti S.
2010-01-01
A persistent problem in developing plausible neurophysiological models of perception, cognition, and action is the difficulty of characterizing the interactions between different neural systems. Previous studies have approached this problem by estimating causal influences across brain areas activated during cognitive processing using Structural Equation Modeling (SEM) and, more recently, Granger-Geweke causality. While SEM is complicated by the need for a priori directional connectivity information, the temporal resolution of dynamic Granger-Geweke estimates is limited because the underlying autoregressive (AR) models assume stationarity over the period of analysis. We have developed a novel optimal method for obtaining data-driven directional causality estimates with high temporal resolution in both the time and frequency domains. This is achieved by simultaneously optimizing the length of the analysis window and the AR model order using the SURE criterion. Dynamic Granger-Geweke causality in the time and frequency domains is subsequently calculated within a moving analysis window. We tested our algorithm by calculating the Granger-Geweke causality of epileptic spike propagation from the right frontal lobe to the left frontal lobe. The results quantitatively suggested that the epileptic activity at the left frontal lobe propagated from the right frontal lobe, in agreement with the clinical diagnosis. Our novel computational tool can be used to help elucidate complex directional interactions in the human brain. PMID:19378280
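For contrast with the paper's dynamic method, a minimal stationary Granger-causality test using statsmodels; the SURE-based joint selection of window length and AR order described above is not part of this sketch, and the data are synthetic AR series in which x drives y with a one-sample lag.

```python
# Minimal stationary Granger-causality sketch (statsmodels); all data
# and lag choices are illustrative, not the paper's algorithm.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(3)
n = 500
x = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.6 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.standard_normal()

# Column order matters: the test asks whether the SECOND column
# Granger-causes the first.
res = grangercausalitytests(np.column_stack([y, x]), maxlag=3)
```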
NASA Astrophysics Data System (ADS)
Marksteiner, Quinn R.; Treiman, Michael B.; Chen, Ching-Fong; Haynes, William B.; Reiten, M. T.; Dalmas, Dale; Pulliam, Elias
2017-06-01
A resonant cavity method is presented which can measure loss tangents and dielectric constants for materials with dielectric constants from 150 to 10,000 and above. This practical and accurate technique is demonstrated by measuring barium strontium zirconium titanate bulk ferroelectric ceramic blocks. Above the Curie temperature, in the paraelectric state, barium strontium zirconium titanate has a sufficiently low loss that a series of resonant modes are supported in the cavity. At each mode frequency, the dielectric constant and loss tangent are obtained. The results are consistent with low-frequency measurements and computer simulations. A quick method of analyzing the raw data using the 2D static electromagnetic modeling code SuperFish and an estimate of uncertainties are presented.
Kline, Jeffrey A; Courtney, D Mark; Than, Martin P; Hogg, Kerstin; Miller, Chadwick D; Johnson, Charles L; Smithline, Howard A
2010-02-01
Attribute matching matches an explicit clinical profile of a patient to a reference database to estimate the numeric value of the pretest probability of an acute disease. The authors tested the accuracy of this method for forecasting a very low probability of venous thromboembolism (VTE) in symptomatic emergency department (ED) patients. The authors performed a secondary analysis of five data sets from 15 hospitals in three countries. All patients had data collected at the time of clinical evaluation for suspected pulmonary embolism (PE). The criterion standard to exclude VTE required no evidence of PE or deep venous thrombosis (DVT) within 45 days of enrollment. To estimate pretest probabilities, a computer program selected, from a large reference database of patients previously evaluated for PE, patients who matched 10 predictor variables recorded for each current test patient. The authors compared the frequency of the VTE outcome [VTE(+)] in patients with an attribute-matching pretest probability estimate of <2.5% against that in patients with the lowest possible Wells score (0). The five data sets included 10,734 patients, and 747 (7.0%, 95% confidence interval [CI] = 6.5% to 7.5%) were VTE(+) within 45 days. The pretest probability estimate for PE was <2.5% in 2,975 of 10,734 (27.7%) patients, and within this subset, the observed frequency of VTE(+) was 48 of 2,975 (1.6%, 95% CI = 1.2% to 2.1%). The lowest possible Wells score (0) was observed in 3,412 (31.7%) patients, and within this subset, the observed frequency of VTE(+) was 79 of 3,412 (2.3%, 95% CI = 1.8% to 2.9%) patients. Attribute matching categorizes over one-quarter of patients tested for PE as having a pretest probability of <2.5%, and the observed rate of VTE within 45 days in this subset was <2.5%. (c) 2010 by the Society for Academic Emergency Medicine.
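A toy sketch of the attribute-matching idea: the pretest probability for a new patient is the outcome frequency among reference patients who match on every predictor. Column names and the tiny reference table are hypothetical; the study matched 10 predictors against a large clinical database.

```python
# Toy attribute matching: pretest probability = outcome frequency among
# exact matches in a reference database. All names and data are hypothetical.
import pandas as pd

reference = pd.DataFrame({
    "age_band":       ["<50", "<50", "50+", "50+", "<50"],
    "tachycardia":    [0, 0, 1, 1, 0],
    "vte_within_45d": [0, 0, 1, 0, 0],
})

def pretest_probability(patient: dict, ref: pd.DataFrame, outcome: str) -> float:
    mask = pd.Series(True, index=ref.index)
    for attr, value in patient.items():
        mask &= ref[attr] == value          # require a match on every attribute
    matches = ref[mask]
    return matches[outcome].mean() if len(matches) else float("nan")

print(pretest_probability({"age_band": "<50", "tachycardia": 0},
                          reference, "vte_within_45d"))
```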
NASA Astrophysics Data System (ADS)
Majstorovic, J.; Rosat, S.; Lambotte, S.; Rogister, Y. J. G.
2017-12-01
Although there are numerous studies of 3D Earth density models, building an accurate one remains a challenge. One procedure to refine global 3D Earth density models is based on unambiguous measurements of the Earth's normal-mode eigenfrequencies. To obtain unbiased eigenfrequency measurements, one needs to deal with time records of varying quality and, especially, with different noise sources, while standard approaches usually rely on signal processing methods such as the Fourier transform. Here we present estimates of complex eigenfrequencies and structure coefficients for several modes below 1 mHz (0S2, 2S1, etc.). Our analysis is performed in three steps. The first step uses stacking methods to enhance specific modes of interest above the observed noise level; of the three stacking methods tried, optimal sequence estimation outperformed both the spherical harmonic stacking method and the receiver strip method. In the second step we apply an autoregressive method in the frequency domain to estimate the complex eigenfrequencies of the target modes. In the third step we apply the phasor walkout method to test and confirm our eigenfrequencies. Before conducting the analysis of the time records, we evaluate how the station distribution and noise levels impact the estimates of eigenfrequencies and structure coefficients by using synthetic seismograms calculated for a realistic 3D Earth model, which includes the Earth's ellipticity and lateral heterogeneity. The synthetic seismograms are computed by means of normal-mode summation using self-coupling and cross-coupling of modes up to 1 mHz. Eventually, the methods tested on synthetic data are applied to long-period seismometer and superconducting gravimeter data recorded after six mega-earthquakes of magnitude greater than 8.3. Hence, we propose new estimates of structure coefficients that depend on the density variations.
NASA Astrophysics Data System (ADS)
Hwang, Sunghwan
1997-08-01
One of the most prominent features of helicopter rotor dynamics in forward flight is the periodic coefficients in the equations of motion introduced by the rotor rotation. The frequency response of such a linear time-periodic system exhibits sideband behavior, which is not the case for linear time-invariant systems. Therefore, a frequency-domain identification methodology for linear systems with time-periodic coefficients was developed, because linear time-invariant theory cannot account for sideband behavior. The modulated complex Fourier series was introduced to eliminate the smearing effect of Fourier series expansions of exponentially modulated periodic signals. A system identification theory was then developed using the modulated complex Fourier series expansion. Correlation and spectral density functions were derived using the modulated complex Fourier series expansion for linear time-periodic systems. Expressions for the identified harmonic transfer function were then formulated using the spectral density functions, both with and without additive noise processes at the input and/or output. A procedure was developed to identify parameters of a model that match the frequency response characteristics of the measured and estimated harmonic transfer functions by minimizing an objective function defined in terms of the trace of the squared frequency-response error matrix. Feasibility was demonstrated by the identification of the harmonic transfer function and parameters for helicopter rigid-blade flapping dynamics in forward flight. This technique is envisioned to satisfy the needs of system identification in the rotating frame, especially in the context of individual blade control. The technique was applied to the coupled flap-lag-inflow dynamics of a rigid blade excited by an active pitch link, and results from the linear time-periodic technique were compared with those from the linear time-invariant technique. The effects of noise processes and of the initial parameter guess on the identification procedure were also investigated. To study the effect of elastic modes, a rigid blade with a trailing-edge flap excited by a smart actuator was selected, and the system parameters were successfully identified, though at some expense in computational storage and time. In conclusion, the linear time-periodic technique substantially improved the accuracy of the identified parameters compared to the linear time-invariant technique, and it was robust to noise and to the initial parameter guess. However, an elastic mode of higher frequency relative to the system pumping frequency tends to increase the computer storage requirement and computing time.
Low-flow characteristics for selected streams in Indiana
Fowler, Kathleen K.; Wilson, John T.
2015-01-01
The management and availability of Indiana’s water resources increase in importance every year. Specifically, information on low-flow characteristics of streams is essential to State water-management agencies. These agencies need low-flow information when working with issues related to irrigation, municipal and industrial water supplies, fish and wildlife protection, and the dilution of waste. Industrial, municipal, and other facilities must obtain National Pollutant Discharge Elimination System (NPDES) permits if their discharges go directly to surface waters. The Indiana Department of Environmental Management (IDEM) requires low-flow statistics in order to administer the NPDES permit program. Low-flow-frequency characteristics were computed for 272 continuous-record stations. The information includes low-flow-frequency analysis, flow-duration analysis, and harmonic mean for the continuous-record stations. For those stations affected by some form of regulation, low-flow frequency curves are based on the longest period of homogeneous record under current conditions. Low-flow-frequency values and harmonic mean flow (if sufficient data were available) were estimated for the 166 partial-record stations. Partial-record stations are ungaged sites where streamflow measurements were made at base flow.
Underwater sound radiation patterns of contemporary merchant ships
NASA Astrophysics Data System (ADS)
Gassmann, M.; Wiggins, S. M.; Hildebrand, J. A.
2016-12-01
Merchant ships radiate underwater sound as an unintended by-product of their operation and as a consequence contribute significantly to low-frequency, man-made noise in the ocean. Current measurement standards for the description of underwater sound from ships (ISO 17208-1:2016 and ANSI S12.64-2009) require hydrophones at nominal angles of 15°, 30°, and 45° on the starboard and port sides of the test vessel. To opportunistically study the underwater sound of contemporary merchant ships that were tracked by the Automatic Identification System (AIS), an array of seven high-frequency acoustic recording packages (HARPs) with a sampling frequency of 200 kHz was deployed in the Santa Barbara Channel in the primary outgoing shipping lane for the ports of Los Angeles and Long Beach. The vertical and horizontal aperture of the array allowed for starboard and portside measurements at all standard-required nominal hydrophone angles in addition to measurements taken at the keel aspect. Based on these measurements, frequency-dependent radiation patterns of contemporary merchant ships were estimated and used to evaluate current standards for computing ship source levels.
Qualitative analysis of MTEM response using instantaneous attributes
NASA Astrophysics Data System (ADS)
Fayemi, Olalekan; Di, Qingyun
2017-11-01
This paper introduces a new technique for qualitative analysis of the multi-transient electromagnetic (MTEM) earth impulse response over complex geological structures. Instantaneous phase and frequency attributes were used in place of the conventional common-offset section for improved qualitative interpretation of MTEM data, yielding more detailed information from the earth impulse response. The instantaneous attributes were used to describe the lateral variation in subsurface resistivity and the visible geological structure with respect to given offsets. The instantaneous phase attribute was obtained by converting the impulse response into complex form using the Hilbert transform. For the instantaneous frequency attribute, the polynomial phase difference (PPD) estimator was favored over the center finite difference (CFD) approximation method because it is computationally efficient and gives a smooth variation of the instantaneous frequency over a common-offset section. The results obtained from the instantaneous attributes were in good agreement with both the subsurface model used and the apparent resistivity section obtained from the MTEM earth impulse response. Hence, this study confirms the capability of both instantaneous phase and frequency attributes as highly effective tools for MTEM qualitative analysis.
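A baseline sketch of the instantaneous-attribute computation via the Hilbert transform; note that the paper prefers the PPD estimator to plain differencing for instantaneous frequency, so the finite-difference step below is only the CFD-like baseline it improves on. The signal and sampling rate are synthetic.

```python
# Instantaneous phase and frequency from the analytic signal (scipy).
# The frequency step uses plain differencing, not the paper's PPD estimator.
import numpy as np
from scipy.signal import hilbert

fs = 1000.0                                   # Hz, illustrative
t = np.arange(0, 1, 1 / fs)
x = np.exp(-3 * t) * np.cos(2 * np.pi * (20 * t + 10 * t ** 2))

analytic = hilbert(x)                                 # complex-valued signal
inst_phase = np.unwrap(np.angle(analytic))            # radians
inst_freq = np.diff(inst_phase) * fs / (2 * np.pi)    # Hz, CFD-like baseline
print(inst_freq[:5])
```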
Joint Estimation of Time-Frequency Signature and DOA Based on STFD for Multicomponent Chirp Signals
Zhao, Ziyue; Liu, Congfeng
2014-01-01
In the study of the joint estimation of the time-frequency signature and direction of arrival (DOA) for multicomponent chirp signals, an estimation method based on spatial time-frequency distributions (STFDs) is proposed in this paper. First, an array signal model for multicomponent chirp signals is presented, and array processing is then applied in time-frequency analysis to mitigate cross-terms. From the results of the array processing, a Hough transform is performed and the estimate of the time-frequency signature is obtained. Subsequently, a subspace method for DOA estimation based on the STFD matrix is developed. Simulation results demonstrate the validity of the proposed method. PMID:27382610
Interaction of vortices with flexible piezoelectric beams
NASA Astrophysics Data System (ADS)
Goushcha, Oleg; Akaydin, Huseyin Dogus; Elvin, Niell; Andreopoulos, Yiannis
2012-11-01
A cantilever piezoelectric beam immersed in a flow is used to harvest fluidic energy. The pressure distribution induced by vortices naturally present in a turbulent flow can force the beam to oscillate, producing electrical output. Maximizing the power output of such an electromechanical fluidic system is a challenge. In order to understand the behavior of the beam in a fluid flow where vortices of different scales are present, an experimental facility was set up to study the interaction of individual vortices with the beam. In our setup, vortex rings produced by an audio speaker travel at specific distances from the beam or impinge on it, with a frequency varied up to the natural frequency of the beam. Depending on this frequency, both constructive and destructive interactions between the vortices and the beam are observed. Vortices traveling over the beam at a frequency that is a multiple of the beam's natural frequency cause the beam to resonate, and larger deflection amplitudes are observed compared with excitation from a single vortex. PIV is used to compute the flow field and circulation of each vortex and to estimate the effect of the pressure distribution on the beam deflection. Sponsored by NSF Grant: CBET #1033117.
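For reference, the circulation that PIV fields allow one to compute for each vortex, in standard notation:

```latex
% Circulation: velocity line integral around a closed contour C, equal
% (by Stokes' theorem) to the integrated vorticity over the enclosed area A
\Gamma = \oint_{C} \mathbf{u} \cdot d\boldsymbol{\ell}
       = \iint_{A} \omega_z \, dA .
```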
Haplotype diversity in 11 candidate genes across four populations.
Beaty, T H; Fallin, M D; Hetmanski, J B; McIntosh, I; Chong, S S; Ingersoll, R; Sheng, X; Chakraborty, R; Scott, A F
2005-09-01
Analysis of haplotypes based on multiple single-nucleotide polymorphisms (SNPs) is becoming common for both candidate gene and fine-mapping studies. Before embarking on studies of haplotypes from genetically distinct populations, however, it is important to consider variation both in linkage disequilibrium (LD) and in haplotype frequencies within and across populations. Such diversity will influence the choice of "tagging" SNPs for candidate gene or whole-genome association studies because some markers will not be polymorphic in all samples and some haplotypes will be poorly represented or completely absent. Here we analyze 11 genes, originally chosen as candidate genes for oral clefts, for which multiple markers were genotyped on individuals from four populations. Estimated haplotype frequencies, measures of pairwise LD, and genetic diversity were computed for 135 European-Americans, 57 Chinese-Singaporeans, 45 Malay-Singaporeans, and 46 Indian-Singaporeans. Patterns of pairwise LD were compared across these four populations, and haplotype frequencies were used to assess genetic variation. Although these populations are fairly similar in allele frequencies and overall patterns of LD, both haplotype frequencies and genetic diversity varied significantly across populations. Such haplotype diversity has implications for designing studies of association involving samples from genetically distinct populations.
NASA Astrophysics Data System (ADS)
Cai, Jianhua
2017-05-01
The time-frequency analysis method represents a signal as a function of time and frequency and is considered a powerful tool for handling arbitrary non-stationary time series using instantaneous frequency and instantaneous amplitude. It also provides a possible alternative for the analysis of non-stationary magnetotelluric (MT) signals. Based on the Hilbert-Huang transform (HHT), a time-frequency analysis method is proposed to obtain stable estimates of the magnetotelluric response function. In contrast to conventional methods, the response function estimation is performed in the time-frequency domain using instantaneous spectra rather than in the frequency domain, which allows imaging of the response parameter content as a function of time and frequency. The theory of the method is presented, and the mathematical model and calculation procedure used to estimate the response function from the HHT time-frequency spectrum are discussed. To evaluate the results, response function estimates are compared with estimates from a standard MT data-processing method based on the Fourier transform. All results show that apparent resistivities and phases calculated with the HHT time-frequency method are generally more stable and reliable than those determined from simple Fourier analysis. The proposed method overcomes the drawbacks of the traditional Fourier methods, and the resulting parameter estimates minimise the estimation bias caused by the non-stationary characteristics of the MT data.
Asymptotic inference in system identification for the atom maser.
Catana, Catalin; van Horssen, Merlijn; Guta, Madalin
2012-11-28
System identification is closely related to control theory and plays an increasing role in quantum engineering. In the quantum set-up, system identification is usually equated to process tomography, i.e. estimating a channel by probing it repeatedly with different input states. However, for quantum dynamical systems such as quantum Markov processes, it is more natural to consider the estimation based on continuous measurements of the output, with a given input that may be stationary. We address this problem using asymptotic statistics tools, for the specific example of estimating the Rabi frequency of an atom maser. We compute the Fisher information of different measurement processes as well as the quantum Fisher information of the atom maser, and establish the local asymptotic normality of these statistical models. The statistical notions can be expressed in terms of spectral properties of certain deformed Markov generators, and the connection to large deviations is briefly discussed.
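For orientation, the classical Fisher information computed for each measurement process, with the quantum Fisher information as the measurement-independent ceiling (standard definitions; notation ours):

```latex
% Classical Fisher information of a measurement record Y with likelihood
% p_\theta, bounded above by the quantum Fisher information I_Q
I(\theta) \;=\; \mathbb{E}_\theta\!\left[ \left( \frac{\partial}{\partial\theta} \log p_\theta(Y) \right)^{\!2} \right],
\qquad
I(\theta) \;\le\; I_Q(\theta).
```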
MNE software for processing MEG and EEG data
Gramfort, A.; Luessi, M.; Larson, E.; Engemann, D.; Strohmeier, D.; Brodbeck, C.; Parkkonen, L.; Hämäläinen, M.
2013-01-01
Magnetoencephalography and electroencephalography (M/EEG) measure the weak electromagnetic signals originating from neural currents in the brain. Using these signals to characterize and locate brain activity is a challenging task, as evidenced by several decades of methodological contributions. MNE, whose name stems from its capability to compute cortically-constrained minimum-norm current estimates from M/EEG data, is a software package that provides comprehensive analysis tools and workflows including preprocessing, source estimation, time–frequency analysis, statistical analysis, and several methods to estimate functional connectivity between distributed brain regions. The present paper gives detailed information about the MNE package and describes typical use cases while also warning about potential caveats in analysis. The MNE package is a collaborative effort of multiple institutes striving to implement and share best methods and to facilitate distribution of analysis pipelines to advance reproducibility of research. Full documentation is available at http://martinos.org/mne. PMID:24161808
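A minimal MNE-Python sketch of the kind of workflow described above; the file path, event coding, and parameters are placeholders, not a recommended pipeline.

```python
# Minimal MNE-Python workflow sketch: load, filter, epoch, average.
# "sample_raw.fif" and all parameters are placeholders.
import mne

raw = mne.io.read_raw_fif("sample_raw.fif", preload=True)  # placeholder path
raw.filter(l_freq=1.0, h_freq=40.0)                        # band-pass filter

events = mne.find_events(raw)                              # from stim channel
epochs = mne.Epochs(raw, events, event_id=1, tmin=-0.2, tmax=0.5,
                    baseline=(None, 0), preload=True)
evoked = epochs.average()                                   # evoked response
```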
Depth-Duration Frequency of Precipitation for Oklahoma
Tortorelli, Robert L.; Rea, Alan; Asquith, William H.
1999-01-01
A regional frequency analysis was conducted to estimate the depth-duration frequency of precipitation for 12 durations in Oklahoma (15, 30, and 60 minutes; 1, 2, 3, 6, 12, and 24 hours; and 1, 3, and 7 days). Seven selected frequencies, expressed as recurrence intervals, were investigated (2, 5, 10, 25, 50, 100, and 500 years). L-moment statistics were used to summarize depth-duration data and to determine the appropriate statistical distributions. Three different rain-gage networks provided the data (15-minute, 1-hour, and 1-day). The 60-minute and 1-hour durations, and the 24-hour and 1-day durations, were analyzed separately. Data were used from rain-gage stations with at least 10 years of record and within Oklahoma or about 50 kilometers into bordering states. Precipitation annual maxima (depths) were determined from the data for 110 15-minute, 141 hourly, and 413 daily stations. The L-moment statistics for depths for all durations were calculated for each station using unbiased L-moment estimators for the mean, L-scale, L-coefficient of variation, L-skew, and L-kurtosis. The relation between L-skew and L-kurtosis (the L-moment ratio diagram) and goodness-of-fit measures were used to select the frequency distributions. The three-parameter generalized logistic distribution was selected to model the frequencies of 15-, 30-, and 60-minute annual maxima, and the three-parameter generalized extreme-value distribution was selected to model the frequencies of 1-hour to 7-day annual maxima. The mean for each station and duration was corrected for the bias associated with fixed-interval recording of precipitation amounts. The L-scale and spatially averaged L-skew statistics were used to compute the location, scale, and shape parameters of the selected distribution for each station and duration. The three parameters were used to calculate the depth-duration-frequency relations for each station. The precipitation depths for selected frequencies were contoured from weighted depth surfaces to produce maps from which the precipitation depth-duration-frequency curve for selected storm durations can be determined for any site in Oklahoma.
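A sketch of the unbiased sample L-moment statistics used in such analyses, computed from probability-weighted moments; the annual-maximum series below is synthetic.

```python
# Unbiased sample L-moments from an annual-maximum series (numpy only),
# via the standard probability-weighted-moment estimators b0..b3.
import numpy as np

def sample_l_moments(data):
    x = np.sort(np.asarray(data, dtype=float))
    n = x.size
    i = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((i - 1) / (n - 1) * x) / n
    b2 = np.sum((i - 1) * (i - 2) / ((n - 1) * (n - 2)) * x) / n
    b3 = np.sum((i - 1) * (i - 2) * (i - 3) /
                ((n - 1) * (n - 2) * (n - 3)) * x) / n
    l1 = b0                      # mean
    l2 = 2 * b1 - b0             # L-scale
    l3 = 6 * b2 - 6 * b1 + b0
    l4 = 20 * b3 - 30 * b2 + 12 * b1 - b0
    return l1, l2, l3 / l2, l4 / l2   # mean, L-scale, L-skew, L-kurtosis

rng = np.random.default_rng(4)
annual_maxima = rng.gumbel(loc=50.0, scale=15.0, size=40)  # synthetic depths
print(sample_l_moments(annual_maxima))
```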
NASA Astrophysics Data System (ADS)
Yang, Shuang-Long; Liang, Li-Ping; Liu, Hou-De; Xu, Ke-Jun
2018-03-01
Aiming at reducing the estimation error of the sensor frequency response function (FRF) estimated by the commonly used window-based spectral estimation method, the error models of the interpolation and transient errors are derived in the form of non-parametric models. Accordingly, window effects on the errors are analyzed, revealing that the commonly used Hanning window leads to smaller interpolation error, which can be further reduced by cubic spline interpolation when estimating the FRF from step-response data, and that a window with a smaller front-end value suppresses more of the transient error. Thus, a new dual-cosine window with its non-zero discrete Fourier transform bins at -3, -1, 0, 1, and 3 is constructed for FRF estimation. Compared with the Hanning window, the new dual-cosine window has equivalent interpolation-error suppression capability and better transient-error suppression capability when estimating the FRF from the step response; specifically, it reduces the asymptotic order of the transient error from O(N⁻²) for the Hanning window method to O(N⁻⁴), while increasing the uncertainty only slightly (about 0.4 dB). Then, one axis of a wind-tunnel strain-gauge balance, a high-order, lightly damped, non-minimum-phase system, is employed as an example to verify the new dual-cosine window-based spectral estimation method. The model simulation results show that the new dual-cosine window method is better than the Hanning window method for FRF estimation; compared with the Gans and LPM methods, it has the advantages of simple computation, low time consumption, and short data requirements. The balance FRF calculated from actual data is consistent with the simulation results. Thus, the new dual-cosine window is effective and practical for FRF estimation.
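A generic sketch of window-based FRF estimation from step-response data (differentiate the step to an impulse response, window, FFT). The abstract does not give the dual-cosine window's coefficients, so a Hanning window stands in here as the known baseline, and the system is synthetic.

```python
# Window-based FRF estimation from a step response. The Hanning window is a
# stand-in: the paper's dual-cosine window coefficients are not given in the
# abstract. System, rates, and durations are illustrative.
import numpy as np

fs = 1000.0
t = np.arange(0, 2, 1 / fs)
step_resp = 1 - np.exp(-5 * t) * np.cos(2 * np.pi * 30 * t)  # synthetic system

impulse = np.diff(step_resp) * fs          # differentiate step -> impulse
w = np.hanning(impulse.size)               # window to suppress leakage
frf = np.fft.rfft(impulse * w)             # frequency response estimate
freqs = np.fft.rfftfreq(impulse.size, 1 / fs)
print("FRF magnitude peak near", freqs[np.argmax(np.abs(frf))], "Hz")
```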
A Streamflow Statistics (StreamStats) Web Application for Ohio
Koltun, G.F.; Kula, Stephanie P.; Puskas, Barry M.
2006-01-01
A StreamStats Web application was developed for Ohio that implements equations for estimating a variety of streamflow statistics, including the 2-, 5-, 10-, 25-, 50-, 100-, and 500-year peak streamflows, mean annual streamflow, mean monthly streamflows, harmonic mean streamflow, and 25th-, 50th-, and 75th-percentile streamflows. StreamStats is a Web-based geographic information system application designed to facilitate the estimation of streamflow statistics at ungaged locations on streams. StreamStats can also serve precomputed streamflow statistics determined from streamflow-gaging station data. The basic structure, use, and limitations of StreamStats are described in this report. To facilitate the level of automation required for Ohio's StreamStats application, the technique used by Koltun (2003) for computing main-channel slope was replaced with a new computationally robust technique. The new channel-slope characteristic, referred to as SL10-85, differed from the National Hydrography Dataset-based channel slope values (SL) reported by Koltun (2003) by an average of -28.3 percent, with the median change being -13.2 percent. In spite of the differences, the two slope measures are strongly correlated. The change in channel-slope values resulting from the change in computational method necessitated revision of the full-model equations for flood-peak discharges originally presented by Koltun (2003). Average standard errors of prediction for the revised full-model equations presented in this report increased by a small amount over those reported by Koltun (2003), with increases ranging from 0.7 to 0.9 percent. Mean percentage changes in the revised regression and weighted flood-frequency estimates relative to the regression and weighted estimates reported by Koltun (2003) were small, ranging from -0.72 to -0.25 percent and -0.22 to 0.07 percent, respectively.
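The report's exact SL10-85 computation is not described in this abstract, but a common 10-85 channel-slope convention, sketched below on a synthetic profile, uses the elevations at 10% and 85% of the channel distance upstream of the outlet; treating this as the report's SL10-85 is an assumption.

```python
# Sketch of a 10-85 main-channel slope: elevation difference between points
# 10% and 85% of the channel distance from the outlet, divided by the
# intervening distance (0.75 L). Profile data are synthetic.
import numpy as np

distance = np.linspace(0.0, 12_000.0, 50)          # m along channel from outlet
elevation = 200.0 + 0.004 * distance ** 1.1        # synthetic profile, m

L = distance[-1]
e10 = np.interp(0.10 * L, distance, elevation)
e85 = np.interp(0.85 * L, distance, elevation)
sl_10_85 = (e85 - e10) / (0.75 * L)                # dimensionless slope (m/m)
print("SL10-85:", sl_10_85)
```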
Hellwig, B
2000-02-01
This study provides a detailed quantitative estimate of local synaptic connectivity between neocortical pyramidal neurons, together with a new way of obtaining such an estimate. In acute slices of the rat visual cortex, four layer 2 and four layer 3 pyramidal neurons were intracellularly injected with biocytin. Axonal and dendritic arborizations were three-dimensionally reconstructed with the aid of a computer-based camera lucida system. In a computer experiment, pairs of pre- and postsynaptic neurons were formed and potential synaptic contacts were calculated. For each pair, the calculations were carried out for a whole range of distances (0 to 500 microm) between the presynaptic and the postsynaptic neuron, in order to estimate cortical connectivity as a function of the spatial separation of neurons. The calculations also differentiated whether neurons were situated in the same or in different cortical layers. The data thus obtained were used to compute connection probabilities, the average number of contacts between neurons, the frequency of specific numbers of contacts, and the total number of contacts a dendritic tree receives from the surrounding cortical volume. Connection probabilities ranged from 50% to 80% for directly adjacent neurons and from 0% to 15% for neurons 500 microm apart. In many cases, connections were mediated by one contact only; however, close neighbors made on average up to 3 contacts with each other. The question as to whether the method employed in this study yields a realistic estimate of synaptic connectivity is discussed. It is argued that the results can be used as a detailed blueprint for building artificial neural networks with a cortex-like architecture.
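One common way to turn a calculated mean number of potential contacts into connection probabilities is a Poisson assumption; the abstract does not state that this study used exactly this rule, so treat it as an illustrative model only.

```latex
% Illustrative Poisson model: if \lambda(d) is the computed mean number of
% potential contacts for a neuron pair at separation d, then
P_{\text{connect}}(d) = 1 - e^{-\lambda(d)},
\qquad
P\{k \text{ contacts}\} = \frac{\lambda(d)^{k} \, e^{-\lambda(d)}}{k!}.
```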
NASA Technical Reports Server (NTRS)
Stocks, Dana R.
1986-01-01
The Dynamic Gas Temperature Measurement System compensation software accepts digitized data from two different diameter thermocouples and computes a compensated frequency response spectrum for one of the thermocouples. Detailed discussions of the physical system, analytical model, and computer software are presented in this volume and in Volume 1 of this report under Task 3. Computer program software restrictions and test cases are also presented. Compensated and uncompensated data may be presented in either the time or frequency domain. Time domain data are presented as instantaneous temperature vs time. Frequency domain data may be presented in several forms such as power spectral density vs frequency.
Frequency Estimator Performance for a Software-Based Beacon Receiver
NASA Technical Reports Server (NTRS)
Zemba, Michael J.; Morse, Jacquelynne Rose; Nessel, James A.; Miranda, Felix
2014-01-01
As propagation terminals have evolved, their design has trended more toward a software-based approach that facilitates convenient adjustment and customization of the receiver algorithms. One potential improvement is the implementation of a frequency estimation algorithm, through which the primary frequency component of the received signal can be estimated with much greater resolution than with a simple peak search of the FFT spectrum. To select an estimator for use in a QV-band beacon receiver, an analysis of six frequency estimators was conducted to characterize their effectiveness as it relates to beacon receiver design.
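One estimator of the general kind such comparisons include is parabolic interpolation of the log-magnitude FFT peak; the six estimators actually analyzed are not named in the abstract, so the following is an illustrative example only, on a synthetic tone.

```python
# Parabolic (log-magnitude) interpolation around the FFT peak bin: resolves
# the tone frequency to a fraction of a bin. Tone, noise, and rates are
# illustrative; this is not necessarily one of the six estimators studied.
import numpy as np

fs, n = 10_000.0, 4096
f_true = 1234.56
t = np.arange(n) / fs
x = np.cos(2 * np.pi * f_true * t) + 0.1 * np.random.default_rng(5).standard_normal(n)

spec = np.abs(np.fft.rfft(x * np.hanning(n)))
k = int(np.argmax(spec))                            # coarse peak bin
a, b, c = np.log(spec[k - 1]), np.log(spec[k]), np.log(spec[k + 1])
delta = 0.5 * (a - c) / (a - 2 * b + c)             # fractional-bin offset
print("estimate:", (k + delta) * fs / n, "Hz; true:", f_true)
```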
A novel Bayesian framework for discriminative feature extraction in Brain-Computer Interfaces.
Suk, Heung-Il; Lee, Seong-Whan
2013-02-01
As there has been a paradigm shift in the learning load from a human subject to a computer, machine learning has been considered a useful tool for Brain-Computer Interfaces (BCIs). In this paper, we propose a novel Bayesian framework for discriminative feature extraction for motor imagery classification in an EEG-based BCI, in which the class-discriminative frequency bands and the corresponding spatial filters are optimized by means of probabilistic and information-theoretic approaches. In our framework, the problem of simultaneous spatiospectral filter optimization is formulated as the estimation of an unknown posterior probability density function (pdf) that represents the probability that a single-trial EEG of predefined mental tasks can be discriminated in a state. In order to estimate the posterior pdf, we propose a particle-based approximation method that extends a factored-sampling technique with a diffusion process. An information-theoretic observation model is also devised to measure the discriminative power of features between classes. From the viewpoint of classifier design, the proposed method naturally allows us to construct a spectrally weighted label decision rule by linearly combining the outputs from multiple classifiers. We demonstrate the feasibility and effectiveness of the proposed method by analyzing its results and success on three public databases.
Twagirayezu, Sylvestre; Cich, Matthew J.; Sears, Trevor J.; ...
2015-07-14
Doppler-free transition frequencies for v₄- and v₅-excited hot bands have been measured in the v₁ + v₃ band region of the spectrum of acetylene using saturation dip spectroscopy with an extended-cavity diode laser referenced to a frequency comb. The frequency accuracy of the measured transitions, as judged from line shape model fits and comparison to known frequencies in the v₁ + v₃ band itself, is between 3 and 22 kHz. This is some three orders of magnitude improvement on the accuracy and precision of previous line position estimates that were derived from the analysis of high-resolution Fourier transform infrared absorption spectra. Comparison to transition frequencies computed from constants derived from published Fourier transform infrared spectra shows that some upper rotational energy levels suffer specific perturbations causing energy level shifts of up to several hundred MHz. These perturbations are due to energy levels of the same rotational quantum number derived from nearby vibrational levels that become degenerate at specific energies. Future identification of the perturbing levels will provide accurate relative energies of excited vibrational levels of acetylene in the 7100–7600 cm⁻¹ energy region.
The effect of the inner-hair-cell mediated transduction on the shape of neural tuning curves
NASA Astrophysics Data System (ADS)
Altoè, Alessandro; Pulkki, Ville; Verhulst, Sarah
2018-05-01
The inner hair cells of the mammalian cochlea transform the vibrations of their stereocilia into releases of neurotransmitter at the ribbon synapses, thereby controlling the activity of the afferent auditory fibers. The mechanical-to-neural transduction is a highly nonlinear process and it introduces differences between the frequency-tuning of the stereocilia and that of the afferent fibers. Using a computational model of the inner hair cell that is based on in vitro data, we estimated that smaller vibrations of the stereocilia are necessary to drive the afferent fibers above threshold at low (≤0.5 kHz) than at high (≥4 kHz) driving frequencies. In the base of the cochlea, the transduction process affects the low-frequency tails of neural tuning curves. In particular, it introduces differences between the frequency-tuning of the stereocilia and that of the auditory fibers resembling those between basilar membrane velocity and auditory fibers tuning curves in the chinchilla base. For units with a characteristic frequency between 1 and 4 kHz, the transduction process yields shallower neural than stereocilia tuning curves as the characteristic frequency decreases. This study proposes that transduction contributes to the progressive broadening of neural tuning curves from the base to the apex.
PROPERTIES OF PHANTOM TISSUE-LIKE POLYMETHYLPENTENE IN THE FREQUENCY RANGE 20–70 MHZ
Madsen, Ernest L; Deaner, Meagan E; Mehi, James
2011-01-01
Quantitative ultrasound (QUS) has been employed to characterize soft tissues at ordinary abdominal ultrasound frequencies (2–15 MHz) and is beginning to be applied at high frequencies (20–70 MHz). For example, backscatter and attenuation coefficients can be estimated in vivo using a reference phantom. At high frequencies it is crucial that reverberations do not compromise the measurements. Such reverberations can occur between the phantom's scanning window and transducer components as well as within the scanning window between its surfaces. Transducers are designed to minimize reverberations between the transducer and soft tissue. Thus, the acoustic impedance of a phantom scanning window should be tissue-like; polymethylpentene (TPX) is commonly used because of its tissue-like acoustic impedance. For QUS it is also crucial to correct for the transmission coefficient of the scanning window. Computation of the latter requires knowledge of the ultrasonic properties, viz., density, speed, and attenuation coefficient. This work reports values for the ultrasonic properties of two versions of TPX over the high-frequency range. One form (TPX film) is used as a scanning window on high-frequency phantoms and, at 40 MHz and 22°C, was found to have an attenuation coefficient of 120 dB/cm and a propagation speed of 2093 m/s. PMID:21723451
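For context, the lossless normal-incidence intensity transmission coefficient of a plane layer between identical media is the standard starting point for such window corrections; absorption in the window, which the measured attenuation coefficient quantifies, must be added on top of this.

```latex
% Lossless normal-incidence intensity transmission through a layer of
% impedance Z_2 and thickness d between identical media of impedance Z_1;
% k_2 is the wavenumber in the layer (standard three-medium result)
T(f) = \left[ 1 + \frac{1}{4} \left( \frac{Z_2}{Z_1} - \frac{Z_1}{Z_2} \right)^{2} \sin^{2}(k_2 d) \right]^{-1},
\qquad
k_2 = \frac{2\pi f}{c_2}.
```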
On the estimation of wall pressure coherence using time-resolved tomographic PIV
NASA Astrophysics Data System (ADS)
Pröbsting, Stefan; Scarano, Fulvio; Bernardini, Matteo; Pirozzoli, Sergio
2013-07-01
Three-dimensional time-resolved velocity field measurements are obtained using a high-speed tomographic Particle Image Velocimetry (PIV) system on a fully developed flat-plate turbulent boundary layer for the estimation of wall pressure fluctuations. The work focuses on the applicability of tomographic PIV to compute the coherence of pressure fluctuations, with attention to the estimation of the streamwise and spanwise coherence lengths. The latter are required for estimates of aeroacoustic noise radiation by boundary layers and trailing-edge flows, but are also of interest for vibro-structural problems. The pressure field is obtained by solving the Poisson equation for incompressible flows, where the source terms are provided by the time-resolved velocity field measurements. The measured 3D velocity data are compared with results obtained from planar PIV and a Direct Numerical Simulation (DNS) at similar Reynolds number. An improved method for the estimation of the material derivative, based on a least-squares estimator of the velocity derivative along a particle trajectory, is proposed and applied. Computed surface pressure fluctuations are further verified by means of simultaneous measurements with a pinhole microphone and compared with the DNS results and a semi-empirical model available from the literature. The correlation coefficient of the reconstructed pressure time series with respect to the pinhole microphone measurements attains approximately 0.5 for the band-pass-filtered signal over the range of frequencies resolved by the velocity field measurements. Scaled power spectra of the pressure at a single point compare favorably with the DNS results and those available from the literature. Finally, the coherence of surface pressure fluctuations and the resulting span- and streamwise coherence lengths are estimated and compared with semi-empirical models and DNS results.
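For reference, the incompressible pressure Poisson formulation that such PIV-pressure methods solve, in standard notation; the paper's exact boundary treatment may differ.

```latex
% Pressure Poisson equation for incompressible flow, with the material
% acceleration supplying Neumann boundary data (standard PIV-pressure form)
\nabla^{2} p = -\rho \, \frac{\partial u_i}{\partial x_j} \frac{\partial u_j}{\partial x_i},
\qquad
\left. \nabla p \right|_{\partial\Omega} = -\rho \, \frac{D\mathbf{u}}{Dt}.
```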
1991-05-31
High Precision Nonlinear Computer Modelling Technique for Quartz Crystal Oscillators (R. Brendel, F. Djian, E. Robert, CNRS). The model was applied to compute the resonance frequencies of the fundamental mode and of its anharmonics for resonators having circular electrodes.
Statistical analysis of multivariate atmospheric variables. [cloud cover
NASA Technical Reports Server (NTRS)
Tubbs, J. D.
1979-01-01
Topics covered include: (1) estimation in discrete multivariate distributions; (2) a procedure to predict cloud-cover frequencies in the bivariate case; (3) a program to compute conditional bivariate normal parameters; (4) the transformation of nonnormal multivariate data to near-normality; (5) a test of fit for the extreme-value distribution based upon the generalized minimum chi-square; (6) a test of fit for continuous distributions based upon the generalized minimum chi-square; (7) the effect of correlated observations on confidence sets based upon chi-square statistics; and (8) the generation of random variates from specified distributions.
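Topic (3) rests on the standard conditional-distribution formulas for a bivariate normal pair, which a short statement makes explicit:

```latex
% Conditional distribution of Y given X = x for a bivariate normal pair
% with means \mu_X, \mu_Y, standard deviations \sigma_X, \sigma_Y, and
% correlation \rho
Y \mid X = x \;\sim\; \mathcal{N}\!\left( \mu_Y + \rho \frac{\sigma_Y}{\sigma_X} (x - \mu_X), \;\; \sigma_Y^{2} \left( 1 - \rho^{2} \right) \right).
```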
A Model for the Development of an Organization’s Information System (IS) Security System
1986-12-01
This thesis develops a model for an organization's information system (IS) security system, comprising a risk assessment (background, threat identification, impact analysis), a logical design, and a practical design. Appendices include a matrix of estimated impact and frequency, a combined matrix of I, F, and ALE, and a catalog of security resources (controls).
Streamflow model of Wisconsin River for estimating flood frequency and volume
Krug, William R.; House, Leo B.
1980-01-01
The 100-year flood peak at Wisconsin Dells, computed from the simulated, regulated streamflow data for the period 1915-76, is 82,000 cubic feet per second, including the effects of all the reservoirs in the river system, as they are currently operated. It also includes the effects of Lakes Du Bay, Petenwell, and Castle Rock which are significant for spring floods but are insignificant for summer or fall floods because they are normally maintained nearly full in the summer and fall and have very little storage for floodwaters. (USGS)
Distributed processing of a GPS receiver network for a regional ionosphere map
NASA Astrophysics Data System (ADS)
Choi, Kwang Ho; Hoo Lim, Joon; Yoo, Won Jae; Lee, Hyung Keun
2018-01-01
This paper proposes a distributed processing method applicable to GPS receivers in a network to generate a regional ionosphere map accurately and reliably. For accuracy, the proposed method employs multiple local Kalman filters and Kriging estimators. Each local Kalman filter is applied to a dual-frequency receiver to estimate the receiver's differential code bias and the vertical ionospheric delays (VIDs) at different ionospheric pierce points. The Kriging estimator selects and combines several VID estimates provided by the local Kalman filters to generate the VID estimate at each ionospheric grid point. For reliability, the proposed method uses receiver fault detectors and satellite fault detectors. Each receiver fault detector compares the VID estimates of the same local area provided by different local Kalman filters. Each satellite fault detector compares the VID estimate of each local area with that projected from the other local areas. Compared with the traditional centralized processing method, the proposed method is advantageous in that it considerably reduces the computational burden of each single Kalman filter and enables flexible fault detection, isolation, and reconfiguration capability. To evaluate the performance of the proposed method, several experiments with field-collected measurements were performed.
ERIC Educational Resources Information Center
Tsao, Yea-Ling; Pan, Ting-Rung
2011-01-01
The main purpose of this study is to investigate the level of computational estimation performance possessed by fifth graders and to explore fifth graders' attitudes toward computational estimation. Two hundred and thirty-five Grade-5 students from four elementary schools in Taipei City were selected for the "Computational Estimation Test" and…
NOAA Atlas 14: Updated Precipitation Frequency Estimates for the United States
NASA Astrophysics Data System (ADS)
Pavlovic, S.; Perica, S.; Martin, D.; Roy, I.; StLaurent, M.; Trypaluk, C.; Unruh, D.; Yekta, M.; Bonnin, G. M.
2013-12-01
NOAA Atlas 14 precipitation frequency estimates, developed by the National Weather Service's Hydrometeorological Design Studies Center, serve as the de-facto standards for a wide variety of design and planning activities under federal, state, and local regulations. Precipitation frequency estimates are used in the design of drainage for highways, culverts, bridges, and parking lots, as well as in sizing sewer and stormwater infrastructure. Water resources engineers use them to estimate the amount of runoff, to estimate the volume of detention basins and size detention-basin outlet structures, and to estimate the volume of sediment or the amount of erosion. They are also used by floodplain managers to delineate floodplains and regulate development in floodplains, which is crucial for all communities in the National Flood Insurance Program. The Hydrometeorological Design Studies Center now provides more than 35,000 downloads per month from its Precipitation Frequency Data Server. Precipitation frequency estimates are often used in engineering design without any understanding of how these estimates have been developed or of the uncertainties associated with them. This presentation will describe novel tools and techniques that have been developed in recent years to determine precipitation frequency estimates in NOAA Atlas 14. Particular attention will be given to the regional frequency analysis approach based on L-moment statistics calculated from annual maximum series, to selected statistics obtained in determining and parameterizing the probability distribution functions, and to the potential implications of recently published estimates for engineering design.
NASA Technical Reports Server (NTRS)
1975-01-01
The trajectory simulation mode (SIMSEP) requires the namelist SIMSEP to follow TRAJ. The SIMSEP namelist contains parameters that describe the scope of the simulation, the expected dynamic errors, and cumulative statistics from previous SIMSEP runs. Following SIMSEP is a set of GUID namelists, one for each guidance correction maneuver. Each GUID describes the strategy, the knowledge or estimation uncertainties, and the cumulative statistics for that particular maneuver. The trajectory display mode (REFSEP) requires only the namelist TRAJ followed by scheduling cards, similar to those used in GODSEP. The fixed-field schedule cards define the types of data displayed, the span of interest, and the frequency of printout. For users who can vary the amount of blank common storage in their runs, a guideline is given for estimating the total MAPSEP core requirement. Blank common length is related directly to the dimension of the dynamic state (NDIM) used in state transition matrix (STM) computation and to the total augmented (knowledge) state (NAUG). The program length and blank common length must be added to compute the total decimal core for a CDC 6500; other operating systems must scale these requirements appropriately.
Tornado climatology of the contiguous United States
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramsdell, J.V.; Andrews, G.L.
1986-05-01
The characteristics of tornadoes that were reported in the contiguous United States for the period from January 1, 1954, through December 31, 1983, have been computed from data in the National Severe Storms Forecast Center tornado data base. The characteristics summarized in this report include the frequency and locations of tornadoes, and their lengths, widths, and areas. Tornado strike and intensity probabilities have been estimated on a regional basis, and these estimates have been used to compute wind speeds with 10^-5, 10^-6, and 10^-7 yr^-1 probabilities of occurrence. The 10^-7 yr^-1 wind speeds range from below 200 mph in the western United States to about 330 mph in the vicinity of Kansas and Nebraska. The appendices contain extensive tabulations of tornado statistics. Variations of the characteristics within the contiguous United States are presented in the summaries. Separate tabulations are provided for the contiguous United States, for each state, for each 5° and 1° latitude and longitude box, and for the eastern and western United States.
Earth-Space Link Attenuation Estimation via Ground Radar Kdp
NASA Technical Reports Server (NTRS)
Bolen, Steven M.; Benjamin, Andrew L.; Chandrasekar, V.
2003-01-01
A method of predicting attenuation on microwave Earth/spacecraft communication links, over wide areas and under various atmospheric conditions, has been developed. In the area around the ground station locations, a nearly horizontally aimed polarimetric S-band ground radar measures the specific differential phase (Kdp) along the Earth-space path. The specific attenuation along a path of interest is then computed by use of a theoretical model of the relationship between the measured S-band specific differential phase and the specific attenuation at the frequency to be used on the communication link. The model includes effects of rain, wet ice, and other forms of precipitation. The attenuation on the path of interest is then computed by integrating the specific attenuation over the length of the path. This method can be used to determine statistics of signal degradation on Earth/spacecraft communication links. It can also be used to obtain real-time estimates of attenuation along multiple Earth/spacecraft links that are parts of a communication network operating within the radar coverage area, thereby enabling better management of the network through appropriate dynamic routing along the best combination of links.
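A hedged sketch of the path-integration step: given range-resolved Kdp, the specific attenuation at the link frequency is modeled (here by a generic power law A = a * Kdp**b with placeholder coefficients, standing in for the paper's theoretical model) and summed along the path:

```python
import numpy as np

def path_attenuation_db(kdp_deg_per_km, gate_length_km, a=0.25, b=1.0):
    """Total path attenuation (dB) from per-gate Kdp measurements.

    a, b are placeholder coefficients; in practice they depend on the
    link frequency, drop shape, and precipitation type.
    """
    specific_att = a * np.asarray(kdp_deg_per_km) ** b  # dB/km in each gate
    return np.sum(specific_att * gate_length_km)        # integrate over path

kdp = [0.2, 0.8, 1.5, 0.9, 0.1]   # hypothetical deg/km along the slant path
print(path_attenuation_db(kdp, gate_length_km=0.5), "dB")
```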
Cost-effectiveness of breast cancer screening policies using simulation.
Gocgun, Y; Banjevic, D; Taghipour, S; Montgomery, N; Harvey, B J; Jardine, A K S; Miller, A B
2015-08-01
In this paper, we study breast cancer screening policies using computer simulation. We developed a multi-state Markov model for breast cancer progression, considering both the screening and treatment stages of breast cancer. The parameters of our model were estimated from data from the Canadian National Breast Cancer Screening Study as well as data in the relevant literature. Using computer simulation, we evaluated various screening policies to study the impact of mammography screening for age-based subpopulations in Canada. We also performed sensitivity analysis to examine the impact of certain parameters on the number of deaths and total costs. The analysis comparing screening policies reveals that a policy in which women belonging to the 40-49 age group are not screened, whereas those belonging to the 50-59 and 60-69 age groups are screened once every 5 years, outperforms the others with respect to cost per life saved. Our analysis also indicates that increasing the screening frequencies for the 50-59 and 60-69 age groups decreases mortality, and that the average number of deaths generally decreases with an increase in screening frequency. We found that screening annually for all age groups is associated with the highest cost per life saved; cost per life saved thus increases with screening frequency.
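A minimal sketch of a multi-state Markov progression model under periodic screening, assuming hypothetical states and annual transition probabilities rather than the paper's fitted parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
STATES = ["healthy", "preclinical", "clinical", "dead"]
P = np.array([                # annual transition probabilities (hypothetical)
    [0.97, 0.02, 0.00, 0.01],
    [0.00, 0.90, 0.08, 0.02],
    [0.00, 0.00, 0.95, 0.05],
    [0.00, 0.00, 0.00, 1.00],
])

def simulate(years=30, screen_every=5, sensitivity=0.85):
    """Simulate one woman; flag screen-detection in the preclinical state."""
    state, detected = 0, False
    for year in range(years):
        if screen_every and year % screen_every == 0 and state == 1:
            if rng.random() < sensitivity:
                detected = True
        state = rng.choice(4, p=P[state])
    return STATES[state], detected

print([simulate() for _ in range(5)])
```

Averaging such runs over many simulated women, with costs attached to screens and treatments, yields the deaths and cost-per-life-saved comparisons described above.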
Norris, Laura C; Fornadel, Christen M; Hung, Wei-Chien; Pineda, Fernando J; Norris, Douglas E
2010-07-01
Anopheles arabiensis is a major vector of Plasmodium falciparum in southern Zambia. This study aimed to determine the rate of multiple human blood meals taken by An. arabiensis in order to more accurately estimate entomologic inoculation rates (EIRs). Mosquitoes were collected in four village areas over two seasons. DNA from human blood meals was extracted and amplified at four microsatellite loci. Using the three-allele method, which counts three or more alleles at any microsatellite locus as a multiple blood meal, we determined that the overall frequency of multiple blood meals was 18.9%, which was higher than rates reported for An. gambiae in Kenya and An. funestus in Tanzania. Computer simulations showed that the three-allele method underestimates the true multiple blood meal proportion by 3-5%. Although P. falciparum infection status was not shown to influence the frequency of multiple blood feeding, the high multiple-feeding rate found in this study increased predicted malaria risk by increasing the EIR.
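A minimal sketch of the three-allele rule, using hypothetical locus names and allele calls:

```python
def is_multiple_blood_meal(genotypes):
    """genotypes: dict mapping locus name -> list of observed alleles.

    A blood meal is scored as "multiple" if any microsatellite locus
    shows three or more distinct alleles (one human contributes at most two).
    """
    return any(len(set(alleles)) >= 3 for alleles in genotypes.values())

# Hypothetical four-locus genotype from one mosquito blood meal.
meal = {"locusA": [15, 16, 18], "locusB": [7, 9],
        "locusC": [21, 22], "locusD": [12, 12]}
print(is_multiple_blood_meal(meal))   # True: locusA shows three alleles
```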
Ringing Artefact Reduction By An Efficient Likelihood Improvement Method
NASA Astrophysics Data System (ADS)
Fuderer, Miha
1989-10-01
In MR imaging, the extent of the acquired spatial frequencies of the object is necessarily finite. The resulting image shows artefacts caused by "truncation" of its Fourier components, known as Gibbs artefacts or ringing artefacts. These artefacts are particularly visible when the time-saving reduced-acquisition method is used, e.g., when scanning only the lowest 70% of the 256 data lines. Filtering the data results in loss of resolution. A method is described that estimates the high-frequency data from the low-frequency data lines, with the likelihood of the image as the criterion. It is a computationally very efficient method, since it requires practically only two extra Fourier transforms in addition to the normal reconstruction. The results of this method on MR images of human subjects are promising. Evaluations on a 70%-acquisition image show about a 20% decrease in error energy after processing, where "error energy" is defined as the total power of the difference from a 256-data-line reference image. The elimination of ringing artefacts then appears almost complete.
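To illustrate the artefact being corrected (not the likelihood-based estimator itself), a short sketch showing that truncating the Fourier lines of a sharp 1-D profile produces ringing; a centered 70% truncation is used here for simplicity:

```python
import numpy as np

n = 256
profile = np.zeros(n)
profile[96:160] = 1.0                        # sharp-edged 1-D "object"

k = np.fft.fftshift(np.fft.fft(profile))     # spectrum, DC at the center
kept = int(0.70 * n)                         # keep only 70% of the lines
k_trunc = np.zeros_like(k)
lo, hi = (n - kept) // 2, (n + kept) // 2
k_trunc[lo:hi] = k[lo:hi]                    # zero out high frequencies

recon = np.fft.ifft(np.fft.ifftshift(k_trunc)).real
print("peak ringing amplitude:", np.abs(recon - profile).max())
```

The reconstruction oscillates near the edges (the Gibbs overshoot); estimating the missing high-frequency lines, rather than zeroing them, is what suppresses this.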
High sensitivity pressure transducer based on the phase characteristics of GMI magnetic sensors
NASA Astrophysics Data System (ADS)
Benavides, L. S.; Costa Silva, E.; Costa Monteiro, E.; Hall Barbosa, C. R.
2018-03-01
This paper presents a new configuration for a GMI pressure transducer based on reading the phase characteristics of GMI sensors, intended for biomedical applications. The development process of this new class of magnetic field transducers is discussed, beginning with the definition of the ideal conditioning of the GMI sensor elements (dc level and frequency of the excitation current, and sample length) and continuing with computational simulations of the full electronic circuit, performed using experimental data obtained from measured GMI curves. These simulations show that the improvement in the sensitivity of GMI magnetometers is larger when phase-based transducers are used instead of magnitude-based transducers. Parameters of interest of the developed prototype are thoroughly analyzed, such as sensitivity, linearity, and frequency response. The spectral noise density of the developed pressure transducer is also evaluated, and its resolution in the passband is estimated. A low-cost GMI pressure transducer was developed, presenting high resolution, high sensitivity, and a frequency bandwidth compatible with the desired biomedical applications.
A case study of alternative site response explanatory variables in Parkfield, California
Thompson, E.M.; Baise, L.G.; Kayen, R.E.; Morgan, E.C.; Kaklamanos, J.
2011-01-01
The combination of densely-spaced strong-motion stations in Parkfield, California, and spectral analysis of surface waves (SASW) profiles provides an ideal dataset for assessing the accuracy of different site response explanatory variables. We judge accuracy in terms of spatial coverage and correlation with observations. The performance of the alternative models is period-dependent, but generally we observe that: (1) where a profile is available, the square-root-of-impedance method outperforms VS30 (average S-wave velocity to 30 m depth), and (2) where a profile is unavailable, the topographic-slope method outperforms surficial geology. The fundamental site frequency is a valuable site response explanatory variable, though less valuable than VS30. However, given the expense and difficulty of obtaining reliable estimates of VS30 and the relative ease with which the fundamental site frequency can be computed, the fundamental site frequency may prove to be a valuable site response explanatory variable for many applications. © 2011 ASCE.
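Since VS30 recurs as the benchmark explanatory variable, a minimal sketch of its standard definition: 30 m divided by the vertical S-wave travel time through the top 30 m of a layered profile. The profile below is hypothetical, not an SASW result from Parkfield:

```python
def vs30(thicknesses_m, velocities_mps):
    """VS30 = 30 m / (travel time through the top 30 m of the profile)."""
    depth, time = 0.0, 0.0
    for h, v in zip(thicknesses_m, velocities_mps):
        use = min(h, 30.0 - depth)   # clip the layer at 30 m depth
        time += use / v
        depth += use
        if depth >= 30.0:
            break
    return 30.0 / time

# Hypothetical 3-layer profile: thicknesses (m) and S-wave velocities (m/s).
print(vs30([5.0, 10.0, 25.0], [180.0, 300.0, 550.0]))   # ~339 m/s
```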
A Novel Residual Frequency Estimation Method for GNSS Receivers.
Nguyen, Tu Thi-Thanh; La, Vinh The; Ta, Tung Hai
2018-01-04
In Global Navigation Satellite System (GNSS) receivers, residual frequency estimation methods are traditionally applied in the synchronization block to reduce the transient time from acquisition to tracking, or they are used within the frequency estimator to improve its accuracy in open-loop architectures. Current estimation methods have several disadvantages, including sensitivity to noise and a wide search space. This paper proposes a new residual frequency estimation method based on differential processing. Although the complexity of the proposed method is higher than that of traditional methods, it can produce more accurate estimates without increasing the size of the search space.
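A minimal sketch of the basic differential (cross-product) frequency estimator on which such methods build; the integration time, signal, and noise level below are hypothetical, and this is not the paper's refined algorithm:

```python
import numpy as np

def residual_freq(z, t_int):
    """z: complex prompt correlator outputs; t_int: integration time (s).

    The mean phase increment between consecutive outputs, taken from the
    angle of the summed cross-products, gives the residual frequency.
    """
    dphi = np.angle(np.sum(z[1:] * np.conj(z[:-1])))
    return dphi / (2.0 * np.pi * t_int)   # Hz

t_int = 1e-3                   # 1 ms coherent integration
f_true = 35.0                  # hypothetical residual Doppler, Hz
k = np.arange(200)
rng = np.random.default_rng(1)
noise = 0.1 * (rng.standard_normal(200) + 1j * rng.standard_normal(200))
z = np.exp(2j * np.pi * f_true * k * t_int) + noise
print(residual_freq(z, t_int))  # close to 35 Hz
```

The unambiguous range of such an estimator is ±1/(2·t_int); differential refinements trade complexity for accuracy within that range, as the abstract notes.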
Cavalié, Olivier; Vernotte, François
2016-04-01
The Allan variance was introduced 50 years ago for analyzing the stability of frequency standards. In addition to its metrological interest, it may also be considered an estimator of the large-scale trends of the power spectral density (PSD) of frequency deviation. For instance, the Allan variance is able to discriminate different types of noise characterized by different power laws in the PSD. The Allan variance has also been used in fields other than time and frequency metrology: for more than 20 years, it has been used in accelerometry, geophysics, geodesy, astrophysics, and even finance. However, it seems that up to now it has been applied exclusively to time series analysis. We propose here to use the Allan variance on spatial data. Interferometric synthetic aperture radar (InSAR) is used in geophysics to image ground displacements in space [over the synthetic aperture radar (SAR) image spatial coverage] and in time, thanks to the regular SAR image acquisitions by dedicated satellites. The main limitation of the technique is the atmospheric disturbances that affect the radar signal while traveling from the sensor to the ground and back. In this paper, we propose to use the Allan variance for analyzing spatial data from InSAR measurements. The Allan variance was computed in XY mode as well as in radial mode for detecting different types of behavior at different spatial scales, analogous to the different types of noise versus integration time in the classical time and frequency application. We found that the radial Allan variance is the more appropriate form, yielding an estimator insensitive to the choice of spatial axis, and we applied it to SAR data acquired over eastern Turkey for the period 2003-2011. The spatial Allan variance allowed us to characterize noise features classically found in InSAR, such as phase decorrelation, which produces white noise, and atmospheric delays, which behave like a random-walk signal. We finally applied the spatial Allan variance to an InSAR time series to detect when the geophysical signal, here the ground motion, emerges from the noise.
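A minimal sketch of the Allan variance computation applied to a 1-D series, which could equally be a spatial transect; the cluster size m plays the role the integration time (or spatial scale) plays above:

```python
import numpy as np

def allan_variance(y, m):
    """Non-overlapping Allan variance of y with cluster size m samples."""
    n = len(y) // m
    means = np.mean(np.reshape(y[:n * m], (n, m)), axis=1)
    return 0.5 * np.mean(np.diff(means) ** 2)

rng = np.random.default_rng(0)
white = rng.standard_normal(4096)             # white-noise-like series
rwalk = np.cumsum(rng.standard_normal(4096))  # random-walk-like series
for m in (1, 4, 16, 64):
    print(m, allan_variance(white, m), allan_variance(rwalk, m))
```

The white-noise variance falls off roughly as 1/m while the random-walk variance grows with m, which is how the two InSAR noise behaviors named above are distinguished.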
Fernández-Soto, Alicia; Martínez-Rodrigo, Arturo; Moncho-Bogani, José; Latorre, José Miguel; Fernández-Caballero, Antonio
2018-06-01
To establish the neural correlates of phrase quadrature perception in harmonic rhythm, a musical experiment was designed to induce music-evoked stimuli related to one important aspect of harmonic rhythm, namely phrase quadrature. Brain activity is captured through electroencephalography (EEG) by using a brain-computer interface. The power spectral value of each EEG channel is estimated to determine how power variance is distributed as a function of frequency. The results of processing the acquired signals are in line with previous studies that used different musical parameters to induce emotions. Indeed, our experiment shows statistical differences in the theta and alpha bands between the fulfillment and the break of phrase quadrature, an important cue of harmonic rhythm, in two classical sonatas.
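A minimal sketch of a per-channel spectral summary of this kind, using Welch's method and theta/alpha band powers on a synthetic signal; the sampling rate and band edges are common conventions, not necessarily those of the study:

```python
import numpy as np
from scipy.signal import welch

fs = 256.0
t = np.arange(0, 10, 1 / fs)
# Synthetic "EEG": noise plus a 10 Hz (alpha-band) component.
eeg = np.random.default_rng(0).standard_normal(t.size) \
    + 2.0 * np.sin(2 * np.pi * 10 * t)

f, psd = welch(eeg, fs=fs, nperseg=512)   # Welch PSD estimate

def band_power(f, psd, lo, hi):
    mask = (f >= lo) & (f < hi)
    return np.trapz(psd[mask], f[mask])   # integrate PSD over the band

print("theta (4-8 Hz):", band_power(f, psd, 4, 8))
print("alpha (8-13 Hz):", band_power(f, psd, 8, 13))
```

Comparing such band powers across conditions (fulfillment vs. break of phrase quadrature) is the kind of statistical contrast the abstract reports.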
Phonon-based scalable platform for chip-scale quantum computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reinke, Charles M.; El-Kady, Ihab
2016-12-19
Here, we present a scalable phonon-based quantum computer on a phononic crystal platform. Practical schemes involve selective placement of a single acceptor atom at the peak of the strain field in a high-Q phononic crystal cavity, which enables coupling of the phonon modes to the energy levels of the atom. We show theoretical optimization of the cavity design and coupling waveguide, along with estimated performance figures of the coupled system. A qubit can be created by entangling a phonon at the resonance frequency of the cavity with the atom states. Qubits based on this half-sound, half-matter quasi-particle, called a phoniton, may outcompete other quantum architectures in terms of combined emission rate, coherence lifetime, and fabrication demands.
Tortorelli, Robert L.
1997-01-01
Statewide regression equations for Oklahoma were determined for estimating peak discharge and flood frequency for selected recurrence intervals from 2 to 500 years at ungaged sites on natural, unregulated streams. The most significant independent variables required to estimate peak-streamflow frequency for natural, unregulated streams in Oklahoma are contributing drainage area, main-channel slope, and mean-annual precipitation. The regression equations are applicable to watersheds with drainage areas less than 2,510 square miles that are not affected by regulation from manmade works. Limitations on the use of the regression relations and the reliability of the regression estimates for natural, unregulated streams are discussed. Log-Pearson Type III analysis information, basin and climatic characteristics, and peak-streamflow frequency estimates for 251 gaging stations in Oklahoma and adjacent states are listed. Techniques are presented for making a peak-streamflow frequency estimate at a gaged site on a natural, unregulated stream and for transferring that estimate to a nearby ungaged site on the same stream. For ungaged sites on urban streams, and for ungaged sites on streams regulated by small floodwater-retarding structures, adjustments of the statewide regression equations for natural, unregulated streams can be used to estimate peak-streamflow frequency. The statewide regression equations are adjusted by substituting, for the contributing-drainage-area parameter, the drainage area below the floodwater-retarding structures or the drainage area representing the unregulated percentage of the basin.
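A hedged sketch of the generic form such regional regressions take: a power law in the three basin characteristics, fit by least squares in log space. The coefficients and data below are synthetic, not the published Oklahoma equations:

```python
import numpy as np

def fit_log_regression(A, S, P, Q):
    """Fit Q_T = a * A**b * S**c * P**d by linear least squares in log space."""
    X = np.column_stack([np.ones_like(A), np.log(A), np.log(S), np.log(P)])
    coef, *_ = np.linalg.lstsq(X, np.log(Q), rcond=None)
    return coef   # [ln a, b, c, d]

rng = np.random.default_rng(0)
A = rng.uniform(10, 2000, 50)   # contributing drainage area, mi^2
S = rng.uniform(5, 100, 50)     # main-channel slope, ft/mi
P = rng.uniform(20, 55, 50)     # mean-annual precipitation, in
# Synthetic peak discharges with lognormal scatter around a known power law.
Q = 8.0 * A**0.6 * S**0.3 * P**0.5 * rng.lognormal(0, 0.1, 50)
print(fit_log_regression(A, S, P, Q))
```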
Multi-Fidelity Uncertainty Propagation for Cardiovascular Modeling
NASA Astrophysics Data System (ADS)
Fleeter, Casey; Geraci, Gianluca; Schiavazzi, Daniele; Kahn, Andrew; Marsden, Alison
2017-11-01
Hemodynamic models are successfully employed in the diagnosis and treatment of cardiovascular disease with increasing frequency. However, their widespread adoption is hindered by our inability to account for uncertainty stemming from multiple sources, including boundary conditions, vessel material properties, and model geometry. In this study, we propose a stochastic framework which leverages three cardiovascular model fidelities: 3D, 1D and 0D models. 3D models are generated from patient-specific medical imaging (CT and MRI) of aortic and coronary anatomies using the SimVascular open-source platform, with fluid structure interaction simulations and Windkessel boundary conditions. 1D models consist of a simplified geometry automatically extracted from the 3D model, while 0D models are obtained from equivalent circuit representations of blood flow in deformable vessels. Multi-level and multi-fidelity estimators from Sandia's open-source DAKOTA toolkit are leveraged to reduce the variance in our estimated output quantities of interest while maintaining a reasonable computational cost. The performance of these estimators in terms of computational cost reductions is investigated for a variety of output quantities of interest, including global and local hemodynamic indicators. Sandia National Labs is a multimission laboratory managed and operated by NTESS, LLC, for the U.S. DOE under contract DE-NA0003525. Funding for this project provided by NIH-NIBIB R01 EB018302.
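A minimal sketch of a two-fidelity control-variate Monte Carlo estimator of the kind such frameworks implement, with toy functions standing in for the expensive 3D model and its cheap 0D surrogate:

```python
import numpy as np

rng = np.random.default_rng(0)
f_hi = lambda x: np.sin(x) + 0.05 * x**2   # stand-in "expensive" model
f_lo = lambda x: np.sin(x)                 # stand-in correlated "cheap" model

n_hi, n_lo = 50, 5000                      # few expensive runs, many cheap ones
x_hi = rng.normal(size=n_hi)
x_lo = rng.normal(size=n_lo)

y_hi, y_lo_paired = f_hi(x_hi), f_lo(x_hi)       # paired evaluations
cov = np.cov(y_hi, y_lo_paired)
alpha = cov[0, 1] / cov[1, 1]                    # control-variate coefficient

# Correct the expensive-sample mean using the cheap model's better mean.
est = y_hi.mean() + alpha * (f_lo(x_lo).mean() - y_lo_paired.mean())
print("multifidelity estimate:", est, " plain MC (50 runs):", y_hi.mean())
```

The variance reduction grows with the correlation between fidelities, which is why simplified 1D/0D hemodynamic surrogates are attractive companions to the 3D simulations.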
Controlled impact demonstration airframe bending bridges
NASA Technical Reports Server (NTRS)
Soltis, S. J.
1986-01-01
The calibration of the KRASH and DYCAST models for transport aircraft is discussed. The FAA uses computer analysis techniques to predict the response of the controlled impact demonstration (CID) aircraft during impact. The moment bridges can provide a direct correlation between the loads or moments that the models predict and what was experienced during the actual impact. Another goal is to examine structural failure mechanisms and correlate them with analytical predictions. The bending bridges did achieve their goals and objectives. The data traces provide insight into airframe loads and structural response, and they demonstrate quite clearly what is happening to the airframe. A direct quantification of metal airframe loads was measured by the moment bridges, and the measured moments can be correlated with the KRASH and DYCAST computer models. The bending-bridge data support airframe failure-mechanism analysis and provide an estimate of residual airframe strength. None of the bending bridges on the airframe appeared to exceed limit loads. (The observed airframe fracture was due to the fuselage encounter with the tomahawk, which tore out the keel beam.) The airframe bridges can be used to estimate the impact conditions, and those estimates correlate with some of the other data measurements. Structural response, frequency, and structural damping are readily measured by the moment bridges.
Rostami, Ali A; Pithawalla, Yezdi B; Liu, Jianmin; Oldham, Michael J; Wagner, Karl A; Frost-Pineda, Kimberly; Sarkar, Mohamadi A
2016-08-16
Concerns have been raised in the literature about the potential for secondhand exposure from e-vapor product (EVP) use. It would be difficult to experimentally determine the impact of various factors on secondhand exposure, including but not limited to room characteristics (indoor space size, ventilation rate), device specifications (aerosol mass delivery, e-liquid composition), and use behavior (number of users and usage frequency). Therefore, a well-mixed computational model was developed to estimate the indoor levels of constituents from EVPs under a variety of conditions. The model is based on physical and thermodynamic interactions between aerosol, vapor, and air, similar to the indoor air models referred to by the Environmental Protection Agency. The model results agree well with measured indoor air levels of nicotine from two sources: smoking-machine-generated aerosol and aerosol exhaled during EVP use. Sensitivity analysis indicated that increasing the air exchange rate reduces the room air level of constituents, as more material is carried away. The effect of the amount of aerosol released into the space due to variability in exhalation was also evaluated. The model can estimate the room air level of constituents as a function of time, which may be used to assess the level of non-user exposure over time.
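A minimal sketch of a well-mixed single-zone mass balance, dC/dt = S(t)/V - ACH * C, with hypothetical room, emission, and usage parameters (the paper's model additionally tracks aerosol/vapor thermodynamics):

```python
import numpy as np

V = 50.0             # room volume, m^3 (hypothetical)
ach = 1.0            # air changes per hour (hypothetical)
puff_mass = 0.05     # mg of constituent released per puff (hypothetical)
puffs_per_hour = 20  # usage frequency (hypothetical)

dt = 1.0 / 3600.0                  # 1 s steps, expressed in hours
t = np.arange(0, 8, dt)            # 8 h of continuous use
C = np.zeros_like(t)               # room concentration, mg/m^3
source = puff_mass * puffs_per_hour / V   # emission rate per unit volume
for i in range(1, t.size):
    # Forward-Euler step: emission in, ventilation out.
    C[i] = C[i - 1] + dt * (source - ach * C[i - 1])

print("steady-state estimate:", puff_mass * puffs_per_hour / (ach * V))
print("simulated final level:", C[-1])
```

The steady-state level S/(V * ACH) makes the ventilation sensitivity reported above explicit: doubling the air exchange rate halves the room concentration.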