Sample records for statistical signal processing

  1. Adaptive filtering in biological signal processing.

    PubMed

    Iyer, V K; Ploysongsang, Y; Ramamoorthy, P A

    1990-01-01

    The high dependence of conventional optimal filtering methods on a priori knowledge of the signal and noise statistics renders them ineffective in dealing with signals whose statistics cannot be predetermined accurately. Adaptive filtering methods offer a better alternative, since a priori knowledge of statistics is less critical, real-time processing is possible, and the computations are less expensive for this approach. Adaptive filtering methods compute the filter coefficients "on-line", converging to the optimal values in the least-mean-square (LMS) error sense. Adaptive filtering is therefore apt for dealing with the "unknown" statistics situation and has been applied extensively in areas like communication, speech, radar, sonar, seismology, and biological signal processing and analysis for channel equalization, interference and echo canceling, line enhancement, signal detection, system identification, spectral analysis, beamforming, modeling, control, etc. This article reviews adaptive filtering in the context of biological signals. An intuitive approach to the underlying theory of adaptive filters and its applicability are presented. Applications of the principles in biological signal processing are discussed in a manner that brings out the key ideas involved. Current and potential future directions in adaptive biological signal processing are also discussed.
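    The on-line LMS coefficient update mentioned above is compact enough to sketch directly. Below is a minimal noise-cancelling illustration in plain Python; the filter length, step size, and the sinusoid-plus-noise setup are our own illustrative choices, not drawn from the article:

```python
import math
import random

def lms_filter(reference, primary, n_taps=4, mu=0.05):
    """Adaptive noise canceller: estimate the noise component of `primary`
    from the correlated `reference` signal, updating the weights on-line
    with the LMS rule w <- w + mu * e * x."""
    w = [0.0] * n_taps
    output = []
    for n in range(len(primary)):
        # tap-delay line of the reference input
        x = [reference[n - k] if n - k >= 0 else 0.0 for k in range(n_taps)]
        y = sum(wk * xk for wk, xk in zip(w, x))   # noise estimate
        e = primary[n] - y                          # error = cleaned signal
        w = [wk + mu * e * xk for wk, xk in zip(w, x)]
        output.append(e)
    return output

random.seed(0)
n = 2000
noise = [random.gauss(0.0, 1.0) for _ in range(n)]
signal = [math.sin(2 * math.pi * 0.01 * k) for k in range(n)]
primary = [s + v for s, v in zip(signal, noise)]    # signal buried in noise
cleaned = lms_filter(noise, primary)
# after convergence the error output should track the underlying signal
tail_err = sum((c - s) ** 2 for c, s in zip(cleaned[-500:], signal[-500:])) / 500
```

    Here the reference input is perfectly correlated with the noise, so the error output converges toward the underlying signal; in practice the reference is a separately measured, noise-dominated channel.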

  2. A Statistical Analysis of the Output Signals of an Acousto-Optic Spectrum Analyzer for CW (Continuous-Wave) Signals

    DTIC Science & Technology

    1988-10-01

    A statistical analysis of the output signals of an acousto-optic spectrum analyzer (AOSA) is performed for the case when the input signal is a...processing, Electronic warfare, Radar countermeasures, Acousto-optic, Spectrum analyzer, Statistical analysis, Detection, Estimation, Canada, Modelling.

  3. An overview of data acquisition, signal coding and data analysis techniques for MST radars

    NASA Technical Reports Server (NTRS)

    Rastogi, P. K.

    1986-01-01

    An overview is given of the data acquisition, signal processing, and data analysis techniques that are currently in use with high power MST/ST (mesosphere stratosphere troposphere/stratosphere troposphere) radars. This review supplements the works of Rastogi (1983) and Farley (1984) presented at previous MAP workshops. A general description is given of data acquisition and signal processing operations, which are characterized on the basis of their disparate time scales. Signal coding is then discussed, with a brief description of frequently used codes and their limitations. Finally, several aspects of statistical data processing are considered, such as signal statistics, power spectrum and autocovariance analysis, and outlier removal techniques.

  4. Incorporating signal-dependent noise for hyperspectral target detection

    NASA Astrophysics Data System (ADS)

    Morman, Christopher J.; Meola, Joseph

    2015-05-01

    The majority of hyperspectral target detection algorithms are developed from statistical data models employing stationary background statistics or white Gaussian noise models. Stationary background models are inaccurate as a result of two separate physical processes. First, varying background classes often exist in the imagery that possess different clutter statistics. Many algorithms can account for this variability through the use of subspaces or clustering techniques. The second physical process, which is often ignored, is a signal-dependent sensor noise term. For photon counting sensors that are often used in hyperspectral imaging systems, sensor noise increases as the measured signal level increases as a result of Poisson random processes. This work investigates the impact of this sensor noise on target detection performance. A linear noise model is developed describing sensor noise variance as a linear function of signal level. The linear noise model is then incorporated for detection of targets using data collected at Wright Patterson Air Force Base.
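    The linear noise model described here is simple to state: the sensor noise variance grows linearly with signal level, var(s) = a·s + b, with the slope driven by Poisson photon statistics and the intercept by signal-independent read noise. A small simulation sketch follows; the gain and read-noise values are illustrative, not taken from the paper:

```python
import random

def simulate_measurement(signal_level, gain=0.5, read_var=0.04, rng=random):
    """Signal-dependent noise: var = gain * signal + read_var, mimicking
    Poisson photon noise plus a fixed read-noise floor (illustrative values)."""
    var = gain * signal_level + read_var
    return signal_level + rng.gauss(0.0, var ** 0.5)

def estimate_noise_line(levels, n_rep=4000, rng=random):
    """Recover the linear variance law by least squares over repeated draws."""
    xs, ys = [], []
    for lv in levels:
        draws = [simulate_measurement(lv, rng=rng) for _ in range(n_rep)]
        m = sum(draws) / n_rep
        v = sum((d - m) ** 2 for d in draws) / (n_rep - 1)
        xs.append(lv)
        ys.append(v)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

random.seed(1)
a, b = estimate_noise_line([1.0, 2.0, 4.0, 8.0])
```

    Recovering (a, b) from repeated measurements at a few signal levels is exactly the kind of calibration a detector can then feed into a detection statistic.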

  5. Signal Processing in Functional Near-Infrared Spectroscopy (fNIRS): Methodological Differences Lead to Different Statistical Results.

    PubMed

    Pfeifer, Mischa D; Scholkmann, Felix; Labruyère, Rob

    2017-01-01

    Even though research in the field of functional near-infrared spectroscopy (fNIRS) has been performed for more than 20 years, consensus on signal processing methods is still lacking. A significant knowledge gap exists between established researchers and those entering the field. One major issue regularly observed in publications from researchers new to the field is the failure to consider possible signal contamination by hemodynamic changes unrelated to neurovascular coupling (i.e., scalp blood flow and systemic blood flow). This might be due to the fact that these researchers use the signal processing methods provided by the manufacturers of their measurement device without an advanced understanding of the performed steps. The aim of the present study was to investigate how different signal processing approaches (including and excluding approaches that partially correct for the possible signal contamination) affect the results of a typical functional neuroimaging study performed with fNIRS. In particular, we evaluated one standard signal processing method provided by a commercial company and compared it to three customized approaches. We thereby investigated the influence of the chosen method on the statistical outcome of a clinical data set (task-evoked motor cortex activity). No short-channels were used in the present study and therefore two types of multi-channel corrections based on multiple long-channels were applied. The choice of the signal processing method had a considerable influence on the outcome of the study. While methods that ignored the contamination of the fNIRS signals by task-evoked physiological noise yielded several significant hemodynamic responses over the whole head, the statistical significance of these findings disappeared when accounting for part of the contamination using a multi-channel regression. 
We conclude that adopting signal processing methods that correct for physiological confounding effects might yield more realistic results in cases where multi-distance measurements are not possible. Furthermore, we recommend using manufacturers' standard signal processing methods only in case the user has an advanced understanding of every signal processing step performed.
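    The multi-channel regression correction discussed above amounts to regressing each long-channel signal on a reference dominated by systemic physiology and keeping the residual. A toy single-reference sketch follows; the channel construction and amplitudes are invented for illustration, and real fNIRS pipelines use multiple channels and more careful regressors:

```python
import math
import random

def regress_out(target, reference):
    """Remove the component of `target` explained by `reference`
    (least-squares scaling) and return the residual -- a crude stand-in
    for the multi-channel physiological-noise regression."""
    mr = sum(reference) / len(reference)
    mt = sum(target) / len(target)
    num = sum((r - mr) * (t - mt) for r, t in zip(reference, target))
    den = sum((r - mr) ** 2 for r in reference)
    beta = num / den
    return [t - mt - beta * (r - mr) for r, t in zip(reference, target)]

random.seed(2)
n = 1000
systemic = [math.sin(2 * math.pi * 0.1 * k) for k in range(n)]       # global physiology
neural = [0.3 * math.sin(2 * math.pi * 0.02 * k) for k in range(n)]  # response of interest
channel = [s + u + random.gauss(0, 0.05) for s, u in zip(systemic, neural)]
reference = [s + random.gauss(0, 0.05) for s in systemic]            # physiology-dominated channel
cleaned = regress_out(channel, reference)
```

    Without the regression, the large systemic oscillation would dominate any statistical test on `channel`; after it, the residual mostly reflects the task-related component.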

  6. Demonstration of fundamental statistics by studying timing of electronics signals in a physics-based laboratory

    NASA Astrophysics Data System (ADS)

    Beach, Shaun E.; Semkow, Thomas M.; Remling, David J.; Bradt, Clayton J.

    2017-07-01

    We have developed accessible methods to demonstrate fundamental statistics in several phenomena, in the context of teaching electronic signal processing in a physics-based college-level curriculum. A relationship between the exponential time-interval distribution and Poisson counting distribution for a Markov process with constant rate is derived in a novel way and demonstrated using nuclear counting. Negative binomial statistics is demonstrated as a model for overdispersion and justified by the effect of electronic noise in nuclear counting. The statistics of digital packets on a computer network are shown to be compatible with the fractal-point stochastic process leading to a power-law as well as generalized inverse Gaussian density distributions of time intervals between packets.
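    The derived relationship between exponential inter-event times and Poisson counts for a constant-rate Markov process can be demonstrated numerically, much as the authors do with nuclear counting. A stdlib-only sketch (the rate and observation length are arbitrary):

```python
import random

random.seed(3)
rate = 5.0         # events per unit time for the constant-rate Markov process
t_total = 2000.0

# draw exponential inter-event times and accumulate arrival times
arrivals, t = [], 0.0
while True:
    t += random.expovariate(rate)
    if t > t_total:
        break
    arrivals.append(t)

# counts in unit-length windows should then follow a Poisson distribution
counts = [0] * int(t_total)
for a in arrivals:
    counts[int(a)] += 1

mean_c = sum(counts) / len(counts)
var_c = sum((c - mean_c) ** 2 for c in counts) / (len(counts) - 1)
fano = var_c / mean_c   # Poisson: variance equals mean, so Fano factor ~ 1
```

    A Fano factor well above 1 in real data is the overdispersion signature that motivates the negative binomial model mentioned in the abstract.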

  7. Deviations from Rayleigh statistics in ultrasonic speckle.

    PubMed

    Tuthill, T A; Sperry, R H; Parker, K J

    1988-04-01

    The statistics of speckle patterns in ultrasound images have potential for tissue characterization. In "fully developed speckle" from many random scatterers, the amplitude is widely recognized as possessing a Rayleigh distribution. This study examines how scattering populations and signal processing can produce non-Rayleigh distributions. The first order speckle statistics are shown to depend on random scatterer density and the amplitude and spacing of added periodic scatterers. Envelope detection, amplifier compression, and signal bandwidth are also shown to cause distinct changes in the signal distribution.
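    The Rayleigh baseline invoked above arises from summing many unit-amplitude scatterers with independent random phases. A quick numerical check of one Rayleigh signature, the ratio E[A]^2 / E[A^2] = pi/4 (the scatterer count and sample size are arbitrary choices):

```python
import math
import random

def speckle_amplitude(n_scatterers, rng=random):
    """Coherent sum of unit scatterers with random phases; the envelope of
    the resultant is Rayleigh-distributed for large n ('fully developed
    speckle')."""
    re = im = 0.0
    for _ in range(n_scatterers):
        phi = rng.uniform(0.0, 2.0 * math.pi)
        re += math.cos(phi)
        im += math.sin(phi)
    return math.hypot(re, im)

random.seed(4)
amps = [speckle_amplitude(50) for _ in range(8000)]
mean_a = sum(amps) / len(amps)
mean_sq = sum(a * a for a in amps) / len(amps)
# Rayleigh check: E[A]^2 / E[A^2] = pi/4 ~ 0.785
ratio = mean_a ** 2 / mean_sq
```

    Deviations of this ratio from pi/4 are one simple first-order test for the non-Rayleigh conditions (periodic scatterers, low scatterer density, compression) studied in the paper.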

  8. Robust estimators for speech enhancement in real environments

    NASA Astrophysics Data System (ADS)

    Sandoval-Ibarra, Yuma; Diaz-Ramirez, Victor H.; Kober, Vitaly

    2015-09-01

    Common statistical estimators for speech enhancement rely on several assumptions about the stationarity of speech signals and noise. These assumptions may not always be valid in real life owing to the nonstationary characteristics of speech and noise processes. We propose new estimators that extend existing ones by incorporating the computation of rank-order statistics. The proposed estimators are better adapted to the nonstationary characteristics of speech signals and noise processes. Through computer simulations we show that the proposed estimators yield better performance in terms of objective metrics than known estimators when speech signals are contaminated with airport, babble, restaurant, and train-station noise.
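    The use of rank-order statistics can be illustrated with the simplest example: a running median used to track a slowly varying noise floor through sparse high-energy bursts. The window length and signal construction below are invented for illustration and are not the authors' estimators:

```python
import math
import random
from statistics import median

def rank_order_noise_track(power, win=15):
    """Track the noise floor of a nonstationary power sequence with a
    running median (a rank-order statistic), which is far less disturbed
    by short high-power 'speech' bursts than a running mean."""
    half = win // 2
    out = []
    for i in range(len(power)):
        lo, hi = max(0, i - half), min(len(power), i + half + 1)
        out.append(median(power[lo:hi]))
    return out

random.seed(5)
n = 400
noise_floor = [1.0 + 0.5 * math.sin(2 * math.pi * k / n) for k in range(n)]
power = list(noise_floor)
for k in range(0, n, 40):          # sparse high-power bursts
    power[k] += 20.0

est = rank_order_noise_track(power)
err = sum(abs(e, ) if False else abs(e - f) for e, f in zip(est, noise_floor)) / n
# compare against a single global mean, which the bursts badly bias
gmean = sum(power) / n
mean_err = sum(abs(gmean - f) for f in noise_floor) / n
```

    The median tracks the time-varying floor almost exactly, while the mean is pulled upward by the bursts; this robustness is the essential property rank-order statistics contribute.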

  9. Vehicular headways on signalized intersections: theory, models, and reality

    NASA Astrophysics Data System (ADS)

    Krbálek, Milan; Šleis, Jiří

    2015-01-01

    We discuss statistical properties of vehicular headways measured on signalized crossroads. On the basis of mathematical approaches, we formulate theoretical and empirically inspired criteria for the acceptability of theoretical headway distributions. Subsequently, the numerous families of statistical distributions (commonly used to fit real-road headway statistics) are confronted with these criteria and with original empirical time clearances measured among neighboring vehicles leaving signal-controlled crossroads after a green signal appears. Using three different numerical schemes, we demonstrate that the arrangement of vehicles on an intersection is a consequence of the general stochastic nature of queueing systems, rather than of traffic rules, driver estimation processes, or decision-making procedures.

  10. SIG-VISA: Signal-based Vertically Integrated Seismic Monitoring

    NASA Astrophysics Data System (ADS)

    Moore, D.; Mayeda, K. M.; Myers, S. C.; Russell, S.

    2013-12-01

    Traditional seismic monitoring systems rely on discrete detections produced by station processing software; however, while such detections may constitute a useful summary of station activity, they discard large amounts of information present in the original recorded signal. We present SIG-VISA (Signal-based Vertically Integrated Seismic Analysis), a system for seismic monitoring through Bayesian inference on seismic signals. By directly modeling the recorded signal, our approach incorporates additional information unavailable to detection-based methods, enabling higher sensitivity and more accurate localization using techniques such as waveform matching. SIG-VISA's Bayesian forward model of seismic signal envelopes includes physically-derived models of travel times and source characteristics as well as Gaussian process (kriging) statistical models of signal properties that combine interpolation of historical data with extrapolation of learned physical trends. Applying Bayesian inference, we evaluate the model on earthquakes as well as the 2009 DPRK test event, demonstrating a waveform matching effect as part of the probabilistic inference, along with results on event localization and sensitivity. In particular, we demonstrate increased sensitivity from signal-based modeling, in which the SIG-VISA signal model finds statistical evidence for arrivals even at stations for which the IMS station processing failed to register any detection.

  11. Statistical analysis and digital processing of the Mössbauer spectra

    NASA Astrophysics Data System (ADS)

    Prochazka, Roman; Tucek, Pavel; Tucek, Jiri; Marek, Jaroslav; Mashlan, Miroslav; Pechousek, Jiri

    2010-02-01

    This work focuses on the use of statistical methods and the development of filtering procedures for signal processing in Mössbauer spectroscopy. Statistical tools for noise filtering in measured spectra are used in many scientific areas. The use of a purely statistical approach to the filtration of accumulated Mössbauer spectra is described. In Mössbauer spectroscopy, the noise can be considered a Poisson statistical process, with a Gaussian distribution for high numbers of observations. This noise is a superposition of non-resonant photon counting with electronic noise (from γ-ray detection and discrimination units) and of the velocity system quality, which can be characterized by its velocity nonlinearities. A noise-reducing process using a new design of statistical filter procedure is described. This mathematical procedure improves the signal-to-noise ratio and thus makes it easier to determine the hyperfine parameters of the given Mössbauer spectra. The filter procedure is based on a periodogram method that makes it possible to identify the statistically important components in the spectral domain. The significance level for these components is then feedback-controlled using the correlation coefficient test results. The theoretical correlation coefficient level corresponding to the spectrum resolution is estimated. The correlation coefficient test is based on a comparison of the theoretical and experimental correlation coefficients given by the Spearman method. The correctness of this solution was analyzed by a series of statistical tests and confirmed on many spectra measured with increasing statistical quality for a given sample (absorber). The effect of this filter procedure depends on the signal-to-noise ratio, and the applicability of the method is subject to limiting conditions.
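    A stripped-down version of the periodogram idea, keeping only spectral components whose power clears a significance threshold and transforming back, can be sketched as follows. The threshold rule here is a simple median-multiple stand-in for the correlation-coefficient feedback test the authors describe, and all parameters are illustrative:

```python
import cmath
import math
import random

def dft(x):
    """Naive O(n^2) discrete Fourier transform (adequate for a sketch)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    """Inverse DFT, returning the real part."""
    n = len(X)
    return [(sum(X[k] * cmath.exp(2j * math.pi * k * t / n) for k in range(n)) / n).real
            for t in range(n)]

def periodogram_filter(x, keep_factor=5.0):
    """Keep only spectral components whose periodogram power exceeds
    keep_factor times the median power; zero the rest and invert."""
    X = dft(x)
    powers = sorted(abs(c) ** 2 for c in X)
    thresh = keep_factor * powers[len(powers) // 2]
    return idft([c if abs(c) ** 2 > thresh else 0.0 for c in X])

random.seed(6)
n = 128
clean = [math.sin(2 * math.pi * 5 * t / n) for t in range(n)]
noisy = [c + random.gauss(0, 0.4) for c in clean]
filtered = periodogram_filter(noisy)
mse_noisy = sum((a - b) ** 2 for a, b in zip(noisy, clean)) / n
mse_filt = sum((a - b) ** 2 for a, b in zip(filtered, clean)) / n
```

    The strong spectral line survives the threshold while most noise bins are zeroed, improving the signal-to-noise ratio in the reconstructed record.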

  12. SU-F-I-10: Spatially Local Statistics for Adaptive Image Filtering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iliopoulos, AS; Sun, X; Floros, D

    Purpose: To facilitate adaptive image filtering operations, addressing spatial variations in both noise and signal. Such issues are prevalent in cone-beam projections, where physical effects such as X-ray scattering result in spatially variant noise, violating common assumptions of homogeneous noise and challenging conventional filtering approaches to signal extraction and noise suppression. Methods: We present a computational mechanism for probing into and quantifying the spatial variance of noise throughout an image. The mechanism builds a pyramid of local statistics at multiple spatial scales; local statistical information at each scale includes (weighted) mean, median, standard deviation, median absolute deviation, as well as histogram or dynamic range after local mean/median shifting. Based on inter-scale differences of local statistics, the spatial scope of distinguishable noise variation is detected in a semi- or un-supervised manner. Additionally, we propose and demonstrate the incorporation of such information in globally parametrized (i.e., non-adaptive) filters, effectively transforming the latter into spatially adaptive filters. The multi-scale mechanism is materialized by efficient algorithms and implemented in parallel CPU/GPU architectures. Results: We demonstrate the impact of local statistics for adaptive image processing and analysis using cone-beam projections of a Catphan phantom, fitted within an annulus to increase X-ray scattering. The effective spatial scope of local statistics calculations is shown to vary throughout the image domain, necessitating multi-scale noise and signal structure analysis. Filtering results with and without spatial filter adaptation are compared visually, illustrating improvements in imaging signal extraction and noise suppression, and in preserving information in low-contrast regions.
    Conclusion: Local image statistics can be incorporated in filtering operations to equip them with spatial adaptivity to spatial signal/noise variations. An efficient multi-scale computational mechanism is developed to curtail processing latency. Spatially adaptive filtering may impact subsequent processing tasks such as reconstruction and numerical gradient computations for deformable registration. NIH Grant No. R01-184173.
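    In one dimension, the core of such a local-statistics pyramid reduces to computing windowed moments at several scales and comparing them across the signal. A sketch (the scales and the two-regime noise field are our own illustrative construction):

```python
import random
from statistics import mean, stdev

def local_stats(x, scale):
    """Local mean and standard deviation in non-overlapping windows of
    width `scale` -- a 1-D stand-in for one level of the multi-scale
    local-statistics pyramid."""
    stats = []
    for i in range(0, len(x) - scale + 1, scale):
        w = x[i:i + scale]
        stats.append((mean(w), stdev(w)))
    return stats

random.seed(7)
# spatially variant noise: quiet first half, noisy second half
x = ([random.gauss(0, 0.1) for _ in range(512)] +
     [random.gauss(0, 1.0) for _ in range(512)])
pyramid = {s: local_stats(x, s) for s in (32, 64, 128)}   # three spatial scales
# at the coarse scale, the local std cleanly separates the two noise regimes
sd128 = [s for _, s in pyramid[128]]
```

    A filter parametrized by these local standard deviations, rather than by one global noise level, becomes spatially adaptive in exactly the sense the abstract describes.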

  13. Does experimental pain affect auditory processing of speech-relevant signals? A study in healthy young adults.

    PubMed

    Sapir, Shimon; Pud, Dorit

    2008-01-01

    To assess the effect of tonic pain stimulation on auditory processing of speech-relevant acoustic signals in healthy pain-free volunteers. Sixty university students, randomly assigned to either a thermal pain stimulation (46 degrees C/6 min) group (PS) or no pain stimulation group (NPS), performed a rate change detection task (RCDT) involving sinusoidally frequency-modulated vowel-like signals. Task difficulty was manipulated by changing the rate of the modulated signals (henceforth rate). Perceived pain intensity was evaluated using a visual analog scale (VAS) (0-100). Mean pain rating was approximately 33 in the PS group and approximately 3 in the NPS group. Pain stimulation was associated with poorer performance on the RCDT, but this trend was not statistically significant. Performance worsened with increasing rate of signal modulation in both groups (p < 0.0001), with no pain by rate interaction. The present findings indicate a trend whereby mild or moderate pain appears to affect auditory processing of speech-relevant acoustic signals. This trend, however, was not statistically significant. It is possible that more intense pain would yield more pronounced (deleterious) effects on auditory processing, but this needs to be verified empirically.

  14. Machinery Bearing Fault Diagnosis Using Variational Mode Decomposition and Support Vector Machine as a Classifier

    NASA Astrophysics Data System (ADS)

    Rama Krishna, K.; Ramachandran, K. I.

    2018-02-01

    Crack propagation is a major cause of failure in rotating machines. It adversely affects productivity, safety, and machining quality. Hence, accurately detecting a crack's severity is imperative for the predictive maintenance of such machines. Fault diagnosis is an established approach to identifying faults by observing the non-linear behaviour of vibration signals at various operating conditions. In this work, we find the classification efficiencies for both the original and the reconstructed vibration signals. The reconstructed signals are obtained using Variational Mode Decomposition (VMD), by splitting the original signal into three intrinsic mode functional components and framing them accordingly. Feature extraction, feature selection and feature classification are the three phases in obtaining the classification efficiencies. All the statistical features of the original and reconstructed signals are computed individually in the feature extraction phase. A few statistical parameters are then selected in the feature selection phase and are classified using the SVM classifier. The obtained results show the best parameters and the appropriate kernel for detecting bearing faults with the SVM classifier. We conclude that the combined VMD and SVM process gave better results than SVM applied to the raw signals, owing to the denoising and filtering of the raw vibration signals.
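    The feature-extraction stage described above can be sketched with a few standard time-domain statistical features; for brevity a nearest-centroid rule stands in for the SVM classifier, and the impact-train vibration model is invented for illustration:

```python
import math
import random

def stat_features(x):
    """Typical statistical features used in bearing-fault work:
    RMS, kurtosis and crest factor."""
    n = len(x)
    m = sum(x) / n
    var = sum((v - m) ** 2 for v in x) / n
    rms = math.sqrt(sum(v * v for v in x) / n)
    kurt = (sum((v - m) ** 4 for v in x) / n) / (var ** 2)
    crest = max(abs(v) for v in x) / rms
    return [rms, kurt, crest]

def nearest_mean_classify(feats, centroids):
    """Toy nearest-centroid classifier standing in for the SVM stage."""
    dists = {label: sum((f - c) ** 2 for f, c in zip(feats, cen))
             for label, cen in centroids.items()}
    return min(dists, key=dists.get)

random.seed(8)

def healthy():
    return [random.gauss(0, 1) for _ in range(2000)]

def faulty():
    x = [random.gauss(0, 1) for _ in range(2000)]
    for k in range(0, 2000, 100):   # periodic impacts raise kurtosis sharply
        x[k] += 8.0
    return x

centroids = {"healthy": stat_features(healthy()),
             "faulty": stat_features(faulty())}
pred = nearest_mean_classify(stat_features(faulty()), centroids)
pred_h = nearest_mean_classify(stat_features(healthy()), centroids)
```

    Kurtosis is the discriminating feature here: a Gaussian signal has kurtosis near 3, while a periodic impact train pushes it far higher, which is why impulsive bearing faults are so visible in this feature space.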

  15. Goldstone radio spectrum signal identification, March 1980 - March 1982

    NASA Technical Reports Server (NTRS)

    Gaudian, B. A.

    1982-01-01

    The signal identification process is described. The Goldstone radio spectrum environment contains signals that are a potential source of electromagnetic interference to the Goldstone tracking receivers. The identification of these signals is accomplished by the use of signal parameters and environment parameters. Statistical data on the Goldstone radio spectrum environment from 2285 to 2305 MHz are provided.

  16. Modeling laser velocimeter signals as triply stochastic Poisson processes

    NASA Technical Reports Server (NTRS)

    Mayo, W. T., Jr.

    1976-01-01

    Previous models of laser Doppler velocimeter (LDV) systems have not adequately described dual-scatter signals in a manner useful for analysis and simulation of low-level photon-limited signals. At low photon rates, an LDV signal at the output of a photomultiplier tube is a compound nonhomogeneous filtered Poisson process, whose intensity function is another (slower) Poisson process with the nonstationary rate and frequency parameters controlled by a random flow (slowest) process. In the present paper, generalized Poisson shot noise models are developed for low-level LDV signals. Theoretical results useful in detection error analysis and simulation are presented, along with measurements of burst amplitude statistics. Computer generated simulations illustrate the difference between Gaussian and Poisson models of low-level signals.
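    A nonhomogeneous Poisson photon stream of the kind modeled here can be simulated by thinning, a convenient basis for the kind of computer-generated simulations the paper describes. The Doppler-burst parameters below are our own illustrative choices, not taken from the paper:

```python
import math
import random

def photon_arrivals(intensity, lam_max, t_end, rng=random):
    """Draw photon event times from a nonhomogeneous Poisson process by
    thinning: propose candidates at constant rate lam_max, accept each
    with probability intensity(t) / lam_max."""
    times, t = [], 0.0
    while True:
        t += rng.expovariate(lam_max)
        if t > t_end:
            return times
        if rng.random() < intensity(t) / lam_max:
            times.append(t)

def burst(t, t0=0.5, width=0.1, f=40.0, rate0=2000.0):
    """Doppler-burst intensity: Gaussian envelope times a fringe
    modulation, as in a dual-scatter LV burst (illustrative numbers)."""
    env = math.exp(-((t - t0) / width) ** 2)
    return rate0 * env * (1.0 + math.cos(2 * math.pi * f * t)) / 2.0

random.seed(9)
times = photon_arrivals(burst, 2000.0, 1.0)
```

    At low rates the discreteness of these photon events dominates, which is exactly why the Gaussian approximation breaks down and the Poisson shot-noise model is needed.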

  17. Evaluation of hardware costs of implementing PSK signal detection circuit based on "system on chip"

    NASA Astrophysics Data System (ADS)

    Sokolovskiy, A. V.; Dmitriev, D. D.; Veisov, E. A.; Gladyshev, A. B.

    2018-05-01

    The article deals with the choice of architecture for the digital signal processing units implementing a PSK signal detection scheme. The required number of shift registers and computational processes in a "system on chip" implementation is used as the measure of architectural efficiency. A statistical estimate of the normalized code sequence offset in the signal synchronization scheme is used to compare the various hardware block architectures.

  18. Statistical analysis of CCSN/SS7 traffic data from working CCS subnetworks

    NASA Astrophysics Data System (ADS)

    Duffy, Diane E.; McIntosh, Allen A.; Rosenstein, Mark; Willinger, Walter

    1994-04-01

    In this paper, we report on an ongoing statistical analysis of actual CCSN traffic data. The data consist of approximately 170 million signaling messages collected from a variety of different working CCS subnetworks. The key findings from our analysis concern: (1) the characteristics of both the telephone call arrival process and the signaling message arrival process; (2) the tail behavior of the call holding time distribution; and (3) the observed performance of the CCSN with respect to a variety of performance and reliability measurements.

  19. Study of photon correlation techniques for processing of laser velocimeter signals

    NASA Technical Reports Server (NTRS)

    Mayo, W. T., Jr.

    1977-01-01

    The objective was to provide the theory and a system design for a new type of photon counting processor for low-level dual-scatter laser velocimeter (LV) signals, capable of both first-order measurements of mean flow and turbulence intensity and second-order time statistics: cross correlation, autocorrelation, and related spectra. A general Poisson process model for low-level LV signals and noise, valid from the photon-resolved regime all the way to the limiting case of nonstationary Gaussian noise, was used. Computer simulation algorithms and higher-order statistical moment analysis of Poisson processes were derived and applied to the analysis of photon correlation techniques. A system design using a unique dual correlate-and-subtract frequency discriminator technique is postulated and analyzed. Expectation analysis indicates that the objective measurements are feasible.

  20. Automated speech understanding: the next generation

    NASA Astrophysics Data System (ADS)

    Picone, J.; Ebel, W. J.; Deshmukh, N.

    1995-04-01

    Modern speech understanding systems merge interdisciplinary technologies from Signal Processing, Pattern Recognition, Natural Language, and Linguistics into a unified statistical framework. These systems, which have applications in a wide range of signal processing problems, represent a revolution in Digital Signal Processing (DSP). Once a field dominated by vector-oriented processors and linear algebra-based mathematics, the current generation of DSP-based systems rely on sophisticated statistical models implemented using a complex software paradigm. Such systems are now capable of understanding continuous speech input for vocabularies of several thousand words in operational environments. The current generation of deployed systems, based on small vocabularies of isolated words, will soon be replaced by a new technology offering natural language access to vast information resources such as the Internet, and provide completely automated voice interfaces for mundane tasks such as travel planning and directory assistance.

  1. Single photon laser altimeter simulator and statistical signal processing

    NASA Astrophysics Data System (ADS)

    Vacek, Michael; Prochazka, Ivan

    2013-05-01

    Spaceborne altimeters are common instruments onboard deep space rendezvous spacecraft. They provide range and topographic measurements critical to spacecraft navigation. Simultaneously, the receiver part may be utilized for an Earth-to-satellite link, one-way time transfer, and precise optical radiometry. The main advantage of the single photon counting approach is the ability to process signals with a very low signal-to-noise ratio, eliminating the need for large telescopes and a high power laser source. Extremely small, rugged and compact microchip lasers can be employed. The major limiting factor, on the other hand, is the acquisition time needed to gather a sufficient volume of data in repetitive measurements in order to process and evaluate the data appropriately. Statistical signal processing is adopted to detect signals with an average strength much lower than one photon per measurement. A comprehensive simulator design and range signal processing algorithm are presented to identify a mission specific altimeter configuration. Typical mission scenarios (celestial body surface landing and topographical mapping) are simulated and evaluated. The most promising single photon altimeter applications are low-orbit (˜10 km), low-radial-velocity (several m/s) topographical mapping (asteroids, Phobos and Deimos) and landing altimetry (˜10 km), where range evaluation repetition rates of ˜100 Hz and 0.1 m precision may be achieved. Moon landing and asteroid Itokawa topographical mapping scenario simulations are discussed in more detail.
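    Statistical detection of a sub-single-photon return is essentially histogram accumulation over repeated shots: background counts spread uniformly over the range gate, while signal counts pile up in one range bin. A toy sketch follows; the bin counts, rates, and binomial background approximation are our own illustrative choices:

```python
import random

random.seed(12)
n_bins, n_shots = 200, 500
target_bin = 120          # range bin corresponding to the true surface
p_signal = 0.3            # well below one detected signal photon per shot

hist = [0] * n_bins
for _ in range(n_shots):
    # ~Binomial(10, 0.2) background photons (about 2 per shot on average),
    # spread uniformly over the range gate
    for _ in range(10):
        if random.random() < 0.2:
            hist[random.randrange(n_bins)] += 1
    # at most one signal photon, always falling in the target bin
    if random.random() < p_signal:
        hist[target_bin] += 1

detected = max(range(n_bins), key=hist.__getitem__)
```

    Although each individual shot is noise-dominated, the accumulated histogram peak stands far above the background floor, which is why long acquisition times can substitute for telescope aperture and laser power.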

  2. Statistical and Adaptive Signal Processing for UXO Discrimination for Next-Generation Sensor Data

    DTIC Science & Technology

    2009-09-01

    using the energies of all polarizations as features in a KNN classifier variant resulted in 100% probability of detection at a probability of false...International Conference on Acoustics, Speech, and Signal Processing, vol. V, 2005, pp. 885-888. [12] C. Kreucher, K. Kastella, and A. O. Hero

  3. Moment-Based Physical Models of Broadband Clutter due to Aggregations of Fish

    DTIC Science & Technology

    2013-09-30

    statistical models for signal-processing algorithm development. These in turn will help to develop a capability to statistically forecast the impact of...aggregations of fish based on higher-order statistical measures describable in terms of physical and system parameters. Environmentally, these models...processing. In this experiment, we had good ground truth on (1) and (2), and had control over (3) and (4) except for environmentally-imposed restrictions

  4. Damage localization by statistical evaluation of signal-processed mode shapes

    NASA Astrophysics Data System (ADS)

    Ulriksen, M. D.; Damkilde, L.

    2015-07-01

    Due to their inherent ability to provide structural information on a local level, mode shapes and their derivatives are utilized extensively for structural damage identification. Typically, more or less advanced mathematical methods are implemented to identify damage-induced discontinuities in the spatial mode shape signals, thereby potentially facilitating damage detection and/or localization. However, by being based on distinguishing damage-induced discontinuities from other signal irregularities, an intrinsic deficiency of these methods is their high sensitivity towards measurement noise. The present article introduces a damage localization method which, compared to the conventional mode shape-based methods, has greatly enhanced robustness towards measurement noise. The method is based on signal processing of spatial mode shapes by means of continuous wavelet transformation (CWT) and subsequent application of a generalized discrete Teager-Kaiser energy operator (GDTKEO) to identify damage-induced mode shape discontinuities. In order to evaluate whether the identified discontinuities are in fact damage-induced, outlier analysis of principal components of the signal-processed mode shapes is conducted on the basis of T2-statistics. The proposed method is demonstrated in the context of analytical work with a free-vibrating Euler-Bernoulli beam under noisy conditions.

  5. Envelope statistics of self-motion signals experienced by human subjects during everyday activities: Implications for vestibular processing.

    PubMed

    Carriot, Jérome; Jamali, Mohsen; Cullen, Kathleen E; Chacron, Maurice J

    2017-01-01

    There is accumulating evidence that the brain's neural coding strategies are constrained by natural stimulus statistics. Here we investigated the statistics of the time varying envelope (i.e. a second-order stimulus attribute that is related to variance) of rotational and translational self-motion signals experienced by human subjects during everyday activities. We found that envelopes can reach large values across all six motion dimensions (~450 deg/s for rotations and ~4 G for translations). Unlike results obtained in other sensory modalities, the spectral power of envelope signals decreased slowly for low (< 2 Hz) and more sharply for high (>2 Hz) temporal frequencies and thus was not well-fit by a power law. We next compared the spectral properties of envelope signals resulting from active and passive self-motion, as well as those resulting from signals obtained when the subject is absent (i.e. external stimuli). Our data suggest that different mechanisms underlie deviation from scale invariance in rotational and translational self-motion envelopes. Specifically, active self-motion and filtering by the human body cause deviation from scale invariance primarily for translational and rotational envelope signals, respectively. Finally, we used well-established models in order to predict the responses of peripheral vestibular afferents to natural envelope stimuli. We found that irregular afferents responded more strongly to envelopes than their regular counterparts. Our findings have important consequences for understanding the coding strategies used by the vestibular system to process natural second-order self-motion signals.

  6. Two-dimensional signal processing with application to image restoration

    NASA Technical Reports Server (NTRS)

    Assefi, T.

    1974-01-01

    A recursive technique for modeling and estimating a two-dimensional signal contaminated by noise is presented. A two-dimensional signal is assumed to be an undistorted picture, where the noise introduces the distortion. Both the signal and the noise are assumed to be wide-sense stationary processes with known statistics. Thus, to estimate the two-dimensional signal is to enhance the picture. The picture representing the two-dimensional signal is converted to one dimension by scanning the image horizontally one line at a time. The scanner output becomes a nonstationary random process due to the periodic nature of the scanner operation. Procedures to obtain a dynamical model corresponding to the autocorrelation function of the scanner output are derived. Utilizing the model, a discrete Kalman estimator is designed to enhance the image.

  7. Investigation of optical current transformer signal processing method based on an improved Kalman algorithm

    NASA Astrophysics Data System (ADS)

    Shen, Yan; Ge, Jin-ming; Zhang, Guo-qing; Yu, Wen-bin; Liu, Rui-tong; Fan, Wei; Yang, Ying-xuan

    2018-01-01

    This paper explores the problem of signal processing in optical current transformers (OCTs). Based on the noise characteristics of OCTs, such as overlapping signals, noise frequency bands, low signal-to-noise ratios, and difficulties in acquiring statistical features of noise power, an improved standard Kalman filtering algorithm was proposed for direct current (DC) signal processing. The state-space model of the OCT DC measurement system is first established, and then mixed noise can be processed by adding mixed noise into measurement and state parameters. According to the minimum mean squared error criterion, state predictions and update equations of the improved Kalman algorithm could be deduced based on the established model. An improved central difference Kalman filter was proposed for alternating current (AC) signal processing, which improved the sampling strategy and noise processing of colored noise. Real-time estimation and correction of noise were achieved by designing AC and DC noise recursive filters. Experimental results show that the improved signal processing algorithms had a good filtering effect on the AC and DC signals with mixed noise of OCT. Furthermore, the proposed algorithm was able to achieve real-time correction of noise during the OCT filtering process.
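    The standard Kalman recursion that the improved algorithm builds on can be sketched in a few lines. The following is a generic textbook scalar filter for a noisy DC level, not the authors' improved algorithm for OCTs, and the noise variances `q` and `r` are illustrative values:

```python
import random

def kalman_dc(measurements, q=1e-5, r=0.09):
    """Standard scalar Kalman filter tracking a constant (DC) level.

    q: process-noise variance, r: measurement-noise variance.
    Returns the sequence of filtered estimates."""
    x, p = 0.0, 1.0              # initial state estimate and its covariance
    estimates = []
    for z in measurements:
        p += q                   # predict: state is constant, uncertainty grows by q
        k = p / (p + r)          # Kalman gain
        x += k * (z - x)         # update with the measurement innovation
        p *= (1.0 - k)           # posterior covariance
        estimates.append(x)
    return estimates

random.seed(0)
true_dc = 1.5
noisy = [true_dc + random.gauss(0.0, 0.3) for _ in range(500)]
est = kalman_dc(noisy, r=0.3 ** 2)
```

The improved algorithm in the paper additionally folds mixed (colored) noise into the state and measurement models; the recursion above assumes white noise only.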

  8. System for monitoring an industrial process and determining sensor status

    DOEpatents

    Gross, K.C.; Hoyer, K.K.; Humenik, K.E.

    1995-10-17

    A method and system for monitoring an industrial process and a sensor are disclosed. The method and system include generating a first and second signal characteristic of an industrial process variable. One of the signals can be an artificial signal generated by an auto regressive moving average technique. After obtaining two signals associated with one physical variable, a difference function is obtained by determining the arithmetic difference between the two pairs of signals over time. A frequency domain transformation is made of the difference function to obtain Fourier modes describing a composite function. A residual function is obtained by subtracting the composite function from the difference function and the residual function (free of nonwhite noise) is analyzed by a statistical probability ratio test. 17 figs.
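    The statistical probability ratio test applied to the whitened residual in this family of patents is Wald's sequential probability ratio test (SPRT). A minimal sketch for Gaussian residuals follows; the hypothesized mean shift and error rates are illustrative choices, not values from the patent:

```python
import math

def sprt(residuals, mu0=0.0, mu1=1.0, sigma=1.0, alpha=0.01, beta=0.01):
    """Wald sequential probability ratio test on Gaussian residuals:
    H0 (mean mu0, sensor healthy) vs H1 (mean mu1, degraded).
    Returns the decision and the number of samples consumed."""
    upper = math.log((1.0 - beta) / alpha)    # cross above -> accept H1
    lower = math.log(beta / (1.0 - alpha))    # cross below -> accept H0
    llr = 0.0
    for n, x in enumerate(residuals, start=1):
        # Gaussian log-likelihood-ratio increment for one observation
        llr += (mu1 - mu0) * (x - (mu0 + mu1) / 2.0) / sigma ** 2
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "undecided", len(residuals)

healthy = sprt([0.0] * 20)   # persistent zero-mean residual -> accept H0
drifted = sprt([1.0] * 20)   # persistent one-sigma offset   -> accept H1
```

Because the test is sequential, a decision is typically reached after far fewer samples than a fixed-sample test of the same error rates would require.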

  9. System for monitoring an industrial process and determining sensor status

    DOEpatents

    Gross, K.C.; Hoyer, K.K.; Humenik, K.E.

    1997-05-13

    A method and system are disclosed for monitoring an industrial process and a sensor. The method and system include generating a first and second signal characteristic of an industrial process variable. One of the signals can be an artificial signal generated by an auto regressive moving average technique. After obtaining two signals associated with one physical variable, a difference function is obtained by determining the arithmetic difference between the two pairs of signals over time. A frequency domain transformation is made of the difference function to obtain Fourier modes describing a composite function. A residual function is obtained by subtracting the composite function from the difference function and the residual function (free of nonwhite noise) is analyzed by a statistical probability ratio test. 17 figs.

  10. System for monitoring an industrial process and determining sensor status

    DOEpatents

    Gross, Kenneth C.; Hoyer, Kristin K.; Humenik, Keith E.

    1995-01-01

    A method and system for monitoring an industrial process and a sensor. The method and system include generating a first and second signal characteristic of an industrial process variable. One of the signals can be an artificial signal generated by an auto regressive moving average technique. After obtaining two signals associated with one physical variable, a difference function is obtained by determining the arithmetic difference between the two pairs of signals over time. A frequency domain transformation is made of the difference function to obtain Fourier modes describing a composite function. A residual function is obtained by subtracting the composite function from the difference function and the residual function (free of nonwhite noise) is analyzed by a statistical probability ratio test.

  11. System for monitoring an industrial process and determining sensor status

    DOEpatents

    Gross, Kenneth C.; Hoyer, Kristin K.; Humenik, Keith E.

    1997-01-01

    A method and system for monitoring an industrial process and a sensor. The method and system include generating a first and second signal characteristic of an industrial process variable. One of the signals can be an artificial signal generated by an auto regressive moving average technique. After obtaining two signals associated with one physical variable, a difference function is obtained by determining the arithmetic difference between the two pairs of signals over time. A frequency domain transformation is made of the difference function to obtain Fourier modes describing a composite function. A residual function is obtained by subtracting the composite function from the difference function and the residual function (free of nonwhite noise) is analyzed by a statistical probability ratio test.

  12. A Novel Signal Modeling Approach for Classification of Seizure and Seizure-Free EEG Signals.

    PubMed

    Gupta, Anubha; Singh, Pushpendra; Karlekar, Mandar

    2018-05-01

    This paper presents a signal modeling-based methodology for automatic seizure detection in EEG signals. The proposed method consists of three stages. First, a multirate filterbank structure is proposed that is constructed using the basis vectors of the discrete cosine transform. The proposed filterbank decomposes EEG signals into their respective brain rhythms: delta, theta, alpha, beta, and gamma. Second, these brain rhythms are statistically modeled with the class of self-similar Gaussian random processes, namely, fractional Brownian motion and fractional Gaussian noise. The statistics of these processes are modeled using a single parameter called the Hurst exponent. In the last stage, the value of the Hurst exponent and autoregressive moving average parameters are used as features to design a binary support vector machine classifier to classify pre-ictal, inter-ictal (epileptic with seizure-free intervals), and ictal (seizure) EEG segments. The performance of the classifier is assessed via extensive analysis on two widely used data sets and is observed to provide good accuracy on both. Thus, this paper proposes a novel signal model for EEG data that captures the key attributes of these signals and hence helps boost the classification accuracy of seizure and seizure-free epochs.
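    A Hurst exponent feature of the kind used here can be estimated in several ways. One simple estimator (the aggregated-variance method, a sketch rather than the authors' procedure) exploits the fact that for a self-similar increment series the variance of block means scales as m^(2H-2):

```python
import math, random

def hurst_aggvar(x, block_sizes=(4, 8, 16, 32, 64)):
    """Aggregated-variance Hurst estimate for a stationary increment
    series x: Var(block mean of size m) ~ m**(2H - 2)."""
    logs_m, logs_v = [], []
    for m in block_sizes:
        means = [sum(x[i:i + m]) / m for i in range(0, len(x) - m + 1, m)]
        mu = sum(means) / len(means)
        var = sum((v - mu) ** 2 for v in means) / (len(means) - 1)
        logs_m.append(math.log(m))
        logs_v.append(math.log(var))
    # ordinary least-squares slope of log var vs log m
    k = len(logs_m)
    mx, my = sum(logs_m) / k, sum(logs_v) / k
    slope = sum((a - mx) * (b - my) for a, b in zip(logs_m, logs_v)) / \
            sum((a - mx) ** 2 for a in logs_m)
    return 1.0 + slope / 2.0

random.seed(2)
white = [random.gauss(0, 1) for _ in range(4096)]
h = hurst_aggvar(white)   # uncorrelated noise should give H near 0.5
```

The scalar H (one per brain rhythm) would then be one entry of the feature vector fed to the SVM.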

  13. Data processing techniques used with MST radars: A review

    NASA Technical Reports Server (NTRS)

    Rastogi, P. K.

    1983-01-01

    The data processing methods used in high-power radar probing of the middle atmosphere are examined. The radar acts as a spatial filter on the small-scale refractivity fluctuations in the medium. The characteristics of the received signals are related to the statistical properties of these fluctuations. A functional outline of the components of a radar system is given. Most computation-intensive tasks are carried out by the processor. The processor computes a statistical function of the received signals, simultaneously for a large number of ranges. The slow fading of atmospheric signals is used to reduce the data input rate to the processor by coherent integration. The inherent range resolution of the radar experiments can be improved significantly with the use of pseudonoise phase codes to modulate the transmitted pulses and a corresponding decoding operation on the received signals. Commutability of the decoding and coherent integration operations is used to obtain a significant reduction in computations. The limitations of the processors are outlined. At the next level of data reduction, the measured function is parameterized by a few spectral moments that can be related to physical processes in the medium. The problems encountered in estimating the spectral moments in the presence of strong ground clutter, external interference, and noise are discussed. The graphical and statistical analysis of the inferred parameters is outlined. The requirements for special-purpose processors for MST radars are discussed.
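    The coherent-integration step mentioned above is simply a block average of successive complex voltage samples; because the atmospheric signal fades slowly, it adds in phase while the noise does not. A minimal sketch (illustrative values, not taken from the review):

```python
import cmath, random

def coherent_integrate(samples, n):
    """Coherently average n successive complex voltage samples, cutting the
    data rate into the spectral processor by a factor of n while the slowly
    fading signal component adds in phase."""
    return [sum(samples[i:i + n]) / n
            for i in range(0, len(samples) - n + 1, n)]

random.seed(7)
phasor = cmath.rect(1.0, 0.3)    # slowly fading echo, ~constant over the dwell
raw = [phasor + complex(random.gauss(0, 1), random.gauss(0, 1))
       for _ in range(4096)]
integrated = coherent_integrate(raw, 64)   # 4096 samples -> 64 outputs
```

The noise variance per output sample drops by the integration factor, which is why the commutability of decoding with this operation matters: performing the cheap averaging first shrinks the data volume the decoder must touch.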

  14. Analysis of the sleep quality of elderly people using biomedical signals.

    PubMed

    Moreno-Alsasua, L; Garcia-Zapirain, B; Mendez-Zorrilla, A

    2015-01-01

    This paper presents a technical solution that analyses sleep signals captured by biomedical sensors to find possible disorders during rest. Specifically, the method evaluates electrooculogram (EOG) signals, skin conductance (GSR), air flow (AS), and body temperature. Next, a quantitative sleep quality analysis determines significant changes in the biological signals, and any similarities between them in a given time period. Filtering techniques such as the Fourier transform method and IIR filters process the signal and identify significant variations. Once these changes have been identified, all significant data are compared and a quantitative and statistical analysis is carried out to determine the level of a person's rest. To evaluate the correlation and significant differences, a statistical analysis has been calculated showing correlation between the EOG and AS signals (p=0.005), the EOG and GSR signals (p=0.037) and, finally, the EOG and body temperature signals (p=0.04). Doctors could use this information to monitor changes within a patient.

  15. Doubly stochastic Poisson processes in artificial neural learning.

    PubMed

    Card, H C

    1998-01-01

    This paper investigates neuron activation statistics in artificial neural networks employing stochastic arithmetic. It is shown that a doubly stochastic Poisson process is an appropriate model for the signals in these circuits.

  16. Statistical process control: separating signal from noise in emergency department operations.

    PubMed

    Pimentel, Laura; Barrueto, Fermin

    2015-05-01

    Statistical process control (SPC) is a visually appealing and statistically rigorous methodology very suitable to the analysis of emergency department (ED) operations. We demonstrate that the control chart is the primary tool of SPC; it is constructed by plotting data measuring the key quality indicators of operational processes in rationally ordered subgroups such as units of time. Control limits are calculated using formulas reflecting the variation in the data points from one another and from the mean. SPC allows managers to determine whether operational processes are controlled and predictable. We review why the moving range chart is most appropriate for use in the complex ED milieu, how to apply SPC to ED operations, and how to determine when performance improvement is needed. SPC is an excellent tool for operational analysis and quality improvement for these reasons: 1) control charts make large data sets intuitively coherent by integrating statistical and visual descriptions; 2) SPC provides analysis of process stability and capability rather than simple comparison with a benchmark; 3) SPC allows distinction between special cause variation (signal), indicating an unstable process requiring action, and common cause variation (noise), reflecting a stable process; and 4) SPC keeps the focus of quality improvement on process rather than individual performance. Because data have no meaning apart from their context, and every process generates information that can be used to improve it, we contend that SPC should be seriously considered for driving quality improvement in emergency medicine. Copyright © 2015 Elsevier Inc. All rights reserved.
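    The individuals/moving-range (XmR) chart recommended above has simple closed-form limits: the center line is the mean, and the control limits sit 2.66 average moving ranges on either side of it. A sketch with hypothetical ED throughput data (the times below are invented for illustration):

```python
def xmr_limits(data):
    """Individuals / moving-range (XmR) chart limits.

    The constant 2.66 is 3/d2 with d2 = 1.128 for moving ranges of size 2."""
    xbar = sum(data) / len(data)
    mrbar = sum(abs(a - b) for a, b in zip(data[1:], data)) / (len(data) - 1)
    return xbar - 2.66 * mrbar, xbar, xbar + 2.66 * mrbar

def special_causes(data, lcl, ucl):
    """Indices of points outside the control limits (special-cause signals)."""
    return [i for i, x in enumerate(data) if not lcl <= x <= ucl]

# hypothetical daily door-to-provider times (minutes); day 11 is an outlier
times = [32, 35, 31, 30, 36, 33, 34, 29, 31, 35, 58, 33, 30, 34]
lcl, center, ucl = xmr_limits(times)
flagged = special_causes(times, lcl, ucl)   # special cause on day 11 only
```

Points inside the limits are common-cause variation (noise) and call for no action; the flagged point is special-cause variation (signal) that warrants investigation. In practice the limits would be computed from a stable baseline period rather than from data containing the outlier itself.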

  17. A method for determining the weak statistical stationarity of a random process

    NASA Technical Reports Server (NTRS)

    Sadeh, W. Z.; Koper, C. A., Jr.

    1978-01-01

    A method for determining the weak statistical stationarity of a random process is presented. The core of this testing procedure consists of generating an equivalent ensemble which approximates a true ensemble. Formation of an equivalent ensemble is accomplished through segmenting a sufficiently long time history of a random process into equal, finite, and statistically independent sample records. The weak statistical stationarity is ascertained based on the time invariance of the equivalent-ensemble averages. Comparison of these averages with their corresponding time averages over a single sample record leads to a heuristic estimate of the ergodicity of a random process. Specific variance tests are introduced for evaluating the statistical independence of the sample records, the time invariance of the equivalent-ensemble autocorrelations, and the ergodicity. Examination and substantiation of these procedures were conducted utilizing turbulent velocity signals.
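    The equivalent-ensemble idea can be sketched directly: segment one long record into equal sample records, then compare the spread of segment means against the spread expected for a weakly stationary process. This is an illustrative simplification, not the authors' full set of variance tests:

```python
import random, statistics

def equivalent_ensemble(signal, n_records):
    """Segment one long time history into n_records equal 'sample records',
    forming an equivalent ensemble."""
    m = len(signal) // n_records
    return [signal[i * m:(i + 1) * m] for i in range(n_records)]

def ensemble_mean_spread(records):
    """Ratio of the observed variance of segment means to the value expected
    for a weakly stationary, uncorrelated process (Var(mean) = sigma^2 / m).
    A ratio near 1 is consistent with a time-invariant mean."""
    means = [statistics.fmean(r) for r in records]
    within = statistics.fmean(statistics.variance(r) for r in records)
    expected = within / len(records[0])
    return statistics.variance(means) / expected

random.seed(3)
stationary = [random.gauss(0, 1) for _ in range(8000)]
trended = [x + 0.001 * i for i, x in enumerate(stationary)]
r_stat = ensemble_mean_spread(equivalent_ensemble(stationary, 20))
r_trend = ensemble_mean_spread(equivalent_ensemble(trended, 20))
```

A slow trend inflates the variance of the equivalent-ensemble means far beyond sampling variability, which is exactly the time dependence the test is designed to expose.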

  18. Optimal causal filtering for 1/fα-type noise in single-electrode EEG signals.

    PubMed

    Paris, Alan; Atia, George; Vosoughi, Azadeh; Berman, Stephen A

    2016-08-01

    Understanding the mode of generation and the statistical structure of neurological noise is one of the central problems of biomedical signal processing. We have developed a broad class of abstract biological noise sources we call hidden simplicial tissues. In the simplest cases, such tissue emits what we have named generalized van der Ziel-McWhorter (GVZM) noise which has a roughly 1/fα spectral roll-off. Our previous work focused on the statistical structure of GVZM frequency spectra. However, causality of processing operations (i.e., dependence only on the past) is an essential requirement for real-time applications to seizure detection and brain-computer interfacing. In this paper we outline the theoretical background for optimal causal time-domain filtering of deterministic signals embedded in GVZM noise. We present some of our early findings concerning the optimal filtering of EEG signals for the detection of steady-state visual evoked potential (SSVEP) responses and indicate the next steps in our ongoing research.

  19. Some new results on the statistics of radio wave scintillation. I - Empirical evidence for Gaussian statistics

    NASA Technical Reports Server (NTRS)

    Rino, C. L.; Livingston, R. C.; Whitney, H. E.

    1976-01-01

    This paper presents an analysis of ionospheric scintillation data which shows that the underlying statistical structure of the signal can be accurately modeled by the additive complex Gaussian perturbation predicted by the Born approximation in conjunction with an application of the central limit theorem. By making use of this fact, it is possible to estimate the in-phase, phase quadrature, and cophased scattered power by curve fitting to measured intensity histograms. By using this procedure, it is found that typically more than 80% of the scattered power is in phase quadrature with the undeviated signal component. Thus, the signal is modeled by a Gaussian, but highly non-Rician process. From simultaneous UHF and VHF data, only a weak dependence of this statistical structure on changes in the Fresnel radius is deduced. The signal variance is found to have a nonquadratic wavelength dependence. It is hypothesized that this latter effect is a subtle manifestation of locally homogeneous irregularity structures, a mathematical model proposed by Kolmogorov (1941) in his early studies of incompressible fluid turbulence.

  20. Correlative weighted stacking for seismic data in the wavelet domain

    USGS Publications Warehouse

    Zhang, S.; Xu, Y.; Xia, J.; ,

    2004-01-01

    Horizontal stacking plays a crucial role in modern seismic data processing, for it not only compresses random noise and multiple reflections, but also provides foundational data for subsequent migration and inversion. However, a number of examples have shown that random noise in adjacent traces exhibits correlation and coherence. Average stacking and weighted stacking based on the conventional correlation function both produce false events, which are caused by such noise. Wavelet transforms and high-order statistics are very useful methods in modern signal processing. The multiresolution analysis in wavelet theory can decompose a signal on different scales, and high-order correlation functions can suppress correlative noise against which the conventional correlation function is of no use. Based on the theory of wavelet transforms and high-order statistics, a high-order correlative weighted stacking (HOCWS) technique is presented in this paper. Its essence is to stack common-midpoint gathers after normal moveout correction by weights calculated through high-order correlative statistics in the wavelet domain. Synthetic examples demonstrate its advantages in improving the signal-to-noise (S/N) ratio and suppressing correlative random noise.
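    The weighted-stacking idea can be illustrated in the time domain: weight each trace by its correlation with a pilot (mean) trace before summing. This sketch uses ordinary second-order correlation on synthetic traces; the paper's method instead uses high-order correlative statistics in the wavelet domain:

```python
import math, random

def weighted_stack(traces):
    """Stack traces weighted by their zero-lag correlation with the pilot
    (mean) trace; negatively correlated traces get zero weight."""
    n = len(traces[0])
    pilot = [sum(t[i] for t in traces) / len(traces) for i in range(n)]

    def corr(a, b):
        ma, mb = sum(a) / n, sum(b) / n
        num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        den = (sum((x - ma) ** 2 for x in a) *
               sum((y - mb) ** 2 for y in b)) ** 0.5
        return num / (den + 1e-12)

    weights = [max(corr(t, pilot), 0.0) for t in traces]
    total = sum(weights) + 1e-12
    return [sum(w * t[i] for w, t in zip(weights, traces)) / total
            for i in range(n)]

random.seed(6)
n = 200
signal = [math.sin(2 * math.pi * i / 50.0) for i in range(n)]   # synthetic event
traces = [[s + random.gauss(0.0, 0.5) for s in signal] for _ in range(12)]
stack = weighted_stack(traces)
rms = lambda a, b: math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))
err_stack, err_single = rms(stack, signal), rms(traces[0], signal)
```

With uncorrelated noise the stack error falls roughly as the square root of the fold; the paper's point is that when the noise itself is correlated across traces, second-order weights like these fail and high-order statistics are needed.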

  1. Envelope statistics of self-motion signals experienced by human subjects during everyday activities: Implications for vestibular processing

    PubMed Central

    Carriot, Jérome; Jamali, Mohsen; Cullen, Kathleen E.

    2017-01-01

    There is accumulating evidence that the brain’s neural coding strategies are constrained by natural stimulus statistics. Here we investigated the statistics of the time varying envelope (i.e. a second-order stimulus attribute that is related to variance) of rotational and translational self-motion signals experienced by human subjects during everyday activities. We found that envelopes can reach large values across all six motion dimensions (~450 deg/s for rotations and ~4 G for translations). Unlike results obtained in other sensory modalities, the spectral power of envelope signals decreased slowly for low (< 2 Hz) and more sharply for high (>2 Hz) temporal frequencies and thus was not well-fit by a power law. We next compared the spectral properties of envelope signals resulting from active and passive self-motion, as well as those resulting from signals obtained when the subject is absent (i.e. external stimuli). Our data suggest that different mechanisms underlie deviation from scale invariance in rotational and translational self-motion envelopes. Specifically, active self-motion and filtering by the human body cause deviation from scale invariance primarily for translational and rotational envelope signals, respectively. Finally, we used well-established models in order to predict the responses of peripheral vestibular afferents to natural envelope stimuli. We found that irregular afferents responded more strongly to envelopes than their regular counterparts. Our findings have important consequences for understanding the coding strategies used by the vestibular system to process natural second-order self-motion signals. PMID:28575032

  2. Power-law statistics of neurophysiological processes analyzed using short signals

    NASA Astrophysics Data System (ADS)

    Pavlova, Olga N.; Runnova, Anastasiya E.; Pavlov, Alexey N.

    2018-04-01

    We discuss the problem of quantifying the power-law statistics of complex processes from short signals. Based on the analysis of electroencephalograms (EEG), we compare three interrelated approaches which enable characterization of the power spectral density (PSD) and show that application of detrended fluctuation analysis (DFA) or the wavelet-transform modulus maxima (WTMM) method represents a useful way of indirectly characterizing PSD features from short data sets. We conclude that although DFA- and WTMM-based measures can be obtained from the estimated PSD, these tools outperform standard spectral analysis when the analyzed regime must be characterized from a very limited amount of data.
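    DFA, one of the two scaling tools compared above, can be implemented compactly: integrate the mean-removed series, detrend it piecewise with a linear fit in boxes of size n, and read the scaling exponent off the slope of log F(n) versus log n. A self-contained sketch (box sizes are illustrative):

```python
import math, random

def dfa_alpha(x, box_sizes=(8, 16, 32, 64, 128)):
    """First-order detrended fluctuation analysis: returns the slope alpha
    of log F(n) vs log n (~0.5 for white noise, ~1.5 for Brownian motion)."""
    mu = sum(x) / len(x)
    profile, s = [], 0.0
    for v in x:                        # integrate the mean-removed series
        s += v - mu
        profile.append(s)
    logs_n, logs_f = [], []
    for n in box_sizes:
        sq, count = 0.0, 0
        for start in range(0, len(profile) - n + 1, n):
            box = profile[start:start + n]
            # least-squares line fit inside the box
            t = list(range(n))
            tm, bm = (n - 1) / 2.0, sum(box) / n
            denom = sum((ti - tm) ** 2 for ti in t)
            slope = sum((ti - tm) * (bi - bm)
                        for ti, bi in zip(t, box)) / denom
            for ti, bi in zip(t, box):
                sq += (bi - (bm + slope * (ti - tm))) ** 2
            count += n
        logs_n.append(math.log(n))
        logs_f.append(0.5 * math.log(sq / count))   # log of RMS fluctuation
    k = len(logs_n)
    nm, fm = sum(logs_n) / k, sum(logs_f) / k
    return sum((a - nm) * (b - fm) for a, b in zip(logs_n, logs_f)) / \
           sum((a - nm) ** 2 for a in logs_n)

random.seed(4)
white = [random.gauss(0, 1) for _ in range(4096)]
brown, s = [], 0.0
for v in white:                        # Brownian motion: running sum of noise
    s += v
    brown.append(s)
alpha_w = dfa_alpha(white)
alpha_b = dfa_alpha(brown)
```

For a 1/f^beta process, alpha relates to the spectral exponent as beta = 2*alpha - 1, which is why DFA serves as an indirect characterization of the PSD from short records.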

  3. Hardware design and implementation of fast DOA estimation method based on multicore DSP

    NASA Astrophysics Data System (ADS)

    Guo, Rui; Zhao, Yingxiao; Zhang, Yue; Lin, Qianqiang; Chen, Zengping

    2016-10-01

    In this paper, we present a high-speed real-time signal processing hardware platform based on a multicore digital signal processor (DSP). The real-time signal processing platform shows several excellent characteristics, including high-performance computing, low power consumption, large-capacity data storage and high-speed data transmission, which enable it to meet the constraints of real-time direction of arrival (DOA) estimation. To reduce the high computational complexity of the DOA estimation algorithm, a novel real-valued MUSIC estimator is used. The algorithm is decomposed into several independent steps and the time consumption of each step is counted. Based on the statistics of the time consumption, we present a new parallel processing strategy to distribute the task of DOA estimation to different cores of the real-time signal processing hardware platform. Experimental results demonstrate that the high processing capability of the signal processing platform meets the constraints of real-time DOA estimation.

  4. Acoustic firearm discharge detection and classification in an enclosed environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luzi, Lorenzo; Gonzalez, Eric; Bruillard, Paul

    2016-05-01

    Two different signal processing algorithms are described for detection and classification of acoustic signals generated by firearm discharges in small enclosed spaces. The first is based on the logarithm of the signal energy. The second is a joint entropy. The current study indicates that a system using both signal energy and joint entropy would be able to both detect weapon discharges and classify weapon type, in small spaces, with high statistical certainty.
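    The first of the two detection statistics, the logarithm of the signal energy, lends itself to a simple frame-based detector: flag any frame whose log-energy jumps far above a slowly adapting background estimate. The sketch below is illustrative; the frame length, 20 dB threshold, and adaptation rate are assumed values, not parameters from the study:

```python
import math

def log_energy(frame):
    """Logarithm of the signal energy of one frame."""
    return math.log(sum(s * s for s in frame) + 1e-12)

def detect_impulse(signal, frame_len=64, threshold_db=20.0):
    """Flag frame start indices whose log-energy exceeds the running
    background estimate by threshold_db decibels."""
    hits, background = [], None
    thresh_nats = threshold_db * math.log(10) / 10.0   # dB -> natural log units
    for i in range(0, len(signal) - frame_len + 1, frame_len):
        e = log_energy(signal[i:i + frame_len])
        if background is None:
            background = e                  # initialize on the first frame
        elif e - background > thresh_nats:
            hits.append(i)                  # impulsive event: do not adapt
        else:
            background = 0.95 * background + 0.05 * e   # slow adaptation
    return hits

# quiet background with a loud burst spanning samples 640-703
sig = [0.01] * 1024
for i in range(640, 704):
    sig[i] = 1.0
hits = detect_impulse(sig)
```

Energy alone detects the discharge; the study's second statistic, a joint entropy, is what distinguishes weapon types, and is not sketched here.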

  5. Nuclear Explosion and Infrasound Event Resources of the SMDC Monitoring Research Program

    DTIC Science & Technology

    2008-09-01

    2008 Monitoring Research Review: Ground-Based Nuclear Explosion Monitoring Technologies 928 Figure 7. Dozens of detected infrasound signals from...investigate alternative detection schemes at the two infrasound arrays based on frequency-wavenumber (fk) processing and the F-statistic. The results of... infrasound signal - detection processing schemes. REFERENCES Bahavar, M., B. Barker, J. Bennett, R. Bowman, H. Israelsson, B. Kohl, Y-L. Kung, J. Murphy

  6. Spiked Models of Large Dimensional Random Matrices Applied to Wireless Communications and Array Signal Processing

    DTIC Science & Technology

    2013-12-14

    population covariance matrix with application to array signal processing; and 5) a sample covariance matrix for which a CLT is studied on linear...Applications , (01 2012): 1150004. doi: Walid Hachem, Malika Kharouf, Jamal Najim, Jack W. Silverstein. A CLT FOR INFORMATION- THEORETIC STATISTICS...for Multi-source Power Estimation, (04 2010) Malika Kharouf, Jamal Najim, Jack W. Silverstein, Walid Hachem. A CLT FOR INFORMATION- THEORETIC

  7. Sensors and signal processing for high accuracy passenger counting : final report.

    DOT National Transportation Integrated Search

    2009-03-05

    It is imperative for a transit system to track statistics about its ridership in order to plan bus routes. There exists a wide variety of methods for obtaining these statistics that range from relying on the driver to count people to utilizing came...

  8. Poisson pre-processing of nonstationary photonic signals: Signals with equality between mean and variance.

    PubMed

    Poplová, Michaela; Sovka, Pavel; Cifra, Michal

    2017-01-01

    Photonic signals are broadly exploited in communication and sensing and they typically exhibit Poisson-like statistics. In a common scenario where the intensity of the photonic signals is low and one needs to remove a nonstationary trend of the signals for any further analysis, one faces an obstacle: due to the dependence between the mean and variance typical for a Poisson-like process, information about the trend remains in the variance even after the trend has been subtracted, possibly yielding artifactual results in further analyses. Commonly available detrending or normalizing methods cannot cope with this issue. To alleviate this issue we developed a suitable pre-processing method for the signals that originate from a Poisson-like process. In this paper, a Poisson pre-processing method for nonstationary time series with Poisson distribution is developed and tested on computer-generated model data and experimental data of chemiluminescence from human neutrophils and mung seeds. The presented method transforms a nonstationary Poisson signal into a stationary signal with a Poisson distribution while preserving the type of photocount distribution and phase-space structure of the signal. The importance of the suggested pre-processing method is shown in Fano factor and Hurst exponent analysis of both computer-generated model signals and experimental photonic signals. It is demonstrated that our pre-processing method is superior to standard detrending-based methods whenever further signal analysis is sensitive to variance of the signal.
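    The Fano factor used to validate the pre-processing is the variance-to-mean ratio of window counts, which equals 1 for a stationary Poisson process. The sketch below shows how a nonstationary trend in the intensity inflates it (the rates and window counts are illustrative, not from the paper):

```python
import random

def fano_factor(counts):
    """Fano factor of a sequence of window counts: Var/Mean (=1 for Poisson)."""
    mu = sum(counts) / len(counts)
    var = sum((c - mu) ** 2 for c in counts) / (len(counts) - 1)
    return var / mu

def poisson_counts(lam, n, rng):
    """Draw n Poisson(lam) counts by counting exponential inter-arrivals
    that fall inside a unit window."""
    out = []
    for _ in range(n):
        t, k = 0.0, 0
        while True:
            t += rng.expovariate(lam)
            if t > 1.0:
                break
            k += 1
        out.append(k)
    return out

rng = random.Random(5)
stationary = poisson_counts(10.0, 2000, rng)
f_stat = fano_factor(stationary)            # ~1 for a stationary Poisson process

trended = []
for j in range(2000):                       # slow trend in the photon intensity
    trended.extend(poisson_counts(5.0 + 10.0 * j / 2000, 1, rng))
f_trend = fano_factor(trended)              # >1: the trend leaks into the variance
```

This is exactly the obstacle the paper addresses: because mean and variance are tied in a Poisson-like process, subtracting the trend from the mean alone leaves its footprint in the variance, so naive detrending still yields an inflated Fano factor.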

  9. Poisson pre-processing of nonstationary photonic signals: Signals with equality between mean and variance

    PubMed Central

    Poplová, Michaela; Sovka, Pavel

    2017-01-01

    Photonic signals are broadly exploited in communication and sensing and they typically exhibit Poisson-like statistics. In a common scenario where the intensity of the photonic signals is low and one needs to remove a nonstationary trend of the signals for any further analysis, one faces an obstacle: due to the dependence between the mean and variance typical for a Poisson-like process, information about the trend remains in the variance even after the trend has been subtracted, possibly yielding artifactual results in further analyses. Commonly available detrending or normalizing methods cannot cope with this issue. To alleviate this issue we developed a suitable pre-processing method for the signals that originate from a Poisson-like process. In this paper, a Poisson pre-processing method for nonstationary time series with Poisson distribution is developed and tested on computer-generated model data and experimental data of chemiluminescence from human neutrophils and mung seeds. The presented method transforms a nonstationary Poisson signal into a stationary signal with a Poisson distribution while preserving the type of photocount distribution and phase-space structure of the signal. The importance of the suggested pre-processing method is shown in Fano factor and Hurst exponent analysis of both computer-generated model signals and experimental photonic signals. It is demonstrated that our pre-processing method is superior to standard detrending-based methods whenever further signal analysis is sensitive to variance of the signal. PMID:29216207

  10. Fade durations in satellite-path mobile radio propagation

    NASA Technical Reports Server (NTRS)

    Schmier, Robert G.; Bostian, Charles W.

    1986-01-01

    Fades on satellite to land mobile radio links are caused by several factors, the most important of which are multipath propagation and vegetative shadowing. Designers of vehicular satellite communications systems require information about the statistics of fade durations in order to overcome or compensate for the fades. Except for a few limiting cases, only the mean fade duration can be determined analytically, and all other statistics must be obtained experimentally or via simulation. This report describes and presents results from a computer program developed at Virginia Tech to simulate satellite path propagation of a mobile station in a rural area. It generates rapidly-fading and slowly-fading signals by separate processes that yield correct cumulative signal distributions and then combines these to simulate the overall signal. This is then analyzed to yield the statistics of fade duration.

  11. Is complex signal processing for bone conduction hearing aids useful?

    PubMed

    Kompis, Martin; Kurz, Anja; Pfiffner, Flurin; Senn, Pascal; Arnold, Andreas; Caversaccio, Marco

    2014-05-01

    To establish whether complex signal processing is beneficial for users of bone anchored hearing aids. Review and analysis of two studies from our own group, each comparing a speech processor with basic digital signal processing (either Baha Divino or Baha Intenso) and a processor with complex digital signal processing (either Baha BP100 or Baha BP110 power). The main differences between basic and complex signal processing are the number of audiologist accessible frequency channels and the availability and complexity of the directional multi-microphone noise reduction and loudness compression systems. Both studies show a small, statistically non-significant improvement of speech understanding in quiet with the complex digital signal processing. The average improvement for speech in noise is +0.9 dB, if speech and noise are emitted both from the front of the listener. If noise is emitted from the rear and speech from the front of the listener, the advantage of the devices with complex digital signal processing as opposed to those with basic signal processing increases, on average, to +3.2 dB (range +2.3 … +5.1 dB, p ≤ 0.0032). Complex digital signal processing does indeed improve speech understanding, especially in noise coming from the rear. This finding has been supported by another study, which has been published recently by a different research group. When compared to basic digital signal processing, complex digital signal processing can increase speech understanding of users of bone anchored hearing aids. The benefit is most significant for speech understanding in noise.

  12. Statistical generation of training sets for measuring NO3(-), NH4(+) and major ions in natural waters using an ion selective electrode array.

    PubMed

    Mueller, Amy V; Hemond, Harold F

    2016-05-18

    Knowledge of ionic concentrations in natural waters is essential to understand watershed processes. Inorganic nitrogen, in the form of nitrate and ammonium ions, is a key nutrient as well as a participant in redox, acid-base, and photochemical processes of natural waters, leading to spatiotemporal patterns of ion concentrations at scales as small as meters or hours. Current options for measurement in situ are costly, relying primarily on instruments adapted from laboratory methods (e.g., colorimetric, UV absorption); free-standing and inexpensive ISE sensors for NO3(-) and NH4(+) could be attractive alternatives if interferences from other constituents were overcome. Multi-sensor arrays, coupled with appropriate non-linear signal processing, offer promise in this capacity but have not yet successfully achieved signal separation for NO3(-) and NH4(+)in situ at naturally occurring levels in unprocessed water samples. A novel signal processor, underpinned by an appropriate sensor array, is proposed that overcomes previous limitations by explicitly integrating basic chemical constraints (e.g., charge balance). This work further presents a rationalized process for the development of such in situ instrumentation for NO3(-) and NH4(+), including a statistical-modeling strategy for instrument design, training/calibration, and validation. Statistical analysis reveals that historical concentrations of major ionic constituents in natural waters across New England strongly covary and are multi-modal. This informs the design of a statistically appropriate training set, suggesting that the strong covariance of constituents across environmental samples can be exploited through appropriate signal processing mechanisms to further improve estimates of minor constituents. 
Two artificial neural network architectures, one expanded to incorporate knowledge of basic chemical constraints, were tested to process outputs of a multi-sensor array, trained using datasets of varying degrees of statistical representativeness to natural water samples. The accuracy of ANN results improves monotonically with the statistical representativeness of the training set (error decreases by ∼5×), while the expanded neural network architecture contributes a further factor of 2-3.5 decrease in error when trained with the most representative sample set. Results using the most statistically accurate set of training samples (which retain environmentally relevant ion concentrations but avoid the potential interference of humic acids) demonstrated accurate, unbiased quantification of nitrate and ammonium at natural environmental levels (±20% down to <10 μM), as well as the major ions Na(+), K(+), Ca(2+), Mg(2+), Cl(-), and SO4(2-), in unprocessed samples. These results show promise for the development of new in situ instrumentation for the support of scientific field work.
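The charge-balance constraint named above can be made concrete with a small check. This is a minimal sketch, not the paper's network: the ion set and charges follow the abstract, while the tolerance and the sample concentrations are illustrative assumptions.

```python
# Charge-balance check for an estimated set of ion concentrations (all in uM).
# Ions and charges follow the abstract; tolerance and sample values are assumed.

ION_CHARGE = {"Na+": +1, "K+": +1, "Ca2+": +2, "Mg2+": +2,
              "NH4+": +1, "NO3-": -1, "Cl-": -1, "SO4_2-": -2}

def charge_imbalance(conc_uM):
    """Signed net charge (ueq/L): sum of z_i * c_i over all ions."""
    return sum(ION_CHARGE[ion] * c for ion, c in conc_uM.items())

def is_charge_balanced(conc_uM, rel_tol=0.05):
    """Accept an estimate if net charge is small relative to total ionic charge."""
    total = sum(abs(ION_CHARGE[ion]) * c for ion, c in conc_uM.items())
    return abs(charge_imbalance(conc_uM)) <= rel_tol * total

# A constructed, exactly balanced sample: 535 ueq/L of cations and anions each.
sample = {"Na+": 200.0, "K+": 30.0, "Ca2+": 100.0, "Mg2+": 50.0,
          "NH4+": 5.0, "NO3-": 20.0, "Cl-": 250.0, "SO4_2-": 132.5}
print(charge_imbalance(sample))  # 0.0 for this constructed sample
```

A constraint of this kind can be used either to penalize physically impossible outputs during training or to reject implausible estimates at run time.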

  13. Signal Processing Methods for Liquid Rocket Engine Combustion Stability Assessments

    NASA Technical Reports Server (NTRS)

    Kenny, R. Jeremy; Lee, Erik; Hulka, James R.; Casiano, Matthew

    2011-01-01

The J2X Gas Generator engine design specifications include dynamic, spontaneous, and broadband combustion stability requirements. These requirements are verified empirically based on high-frequency chamber pressure measurements and analyses. Dynamic stability is determined from the dynamic pressure response due to an artificial perturbation of the combustion chamber pressure (bomb testing), and spontaneous and broadband stability are determined from the dynamic pressure responses during steady operation starting at specified power levels. J2X Workhorse Gas Generator testing included bomb tests with multiple hardware configurations and operating conditions, including a configuration used explicitly for the engine verification test series. This work covers signal processing techniques developed at Marshall Space Flight Center (MSFC) to help assess engine design stability requirements. Dynamic stability assessments were performed following both the CPIA 655 guidelines and an MSFC in-house statistics-based approach. The statistical approach was developed to better verify when the dynamic pressure amplitudes corresponding to a particular frequency returned to pre-bomb characteristics. This was accomplished by first determining the statistical characteristics of the pre-bomb dynamic levels. The pre-bomb statistical characterization provided 95% coverage bounds; these bounds were used as a quantitative measure to determine when the post-bomb signal returned to pre-bomb conditions. The time for post-bomb levels to acceptably return to pre-bomb levels was compared to the dominant frequency-dependent time recommended by CPIA 655. Results for multiple test configurations, including stable and unstable configurations, were reviewed.
Spontaneous stability was assessed using two processes: 1) characterization of the ratio of the peak response amplitudes to the excited chamber acoustic mode amplitudes and 2) characterization of the variability of the peak response's frequency over the test duration. This characterization process assists in evaluating the discreteness of a signal as well as the stability of the chamber response. Broadband stability was assessed using a running root-mean-square evaluation. These techniques were also employed, in a comparative analysis, on available Fastrac data, and these results are presented here.
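The 95% coverage-bound recovery check described above can be sketched as follows. This is a hedged illustration, not the MSFC implementation: the synthetic amplitude data, the hold window, and the decay model are all assumptions.

```python
import numpy as np

# Sketch of the statistics-based recovery check: characterize pre-bomb amplitude
# levels, form 95% coverage bounds, and find the first post-bomb time after
# which the amplitude stays inside those bounds for a sustained interval.

rng = np.random.default_rng(0)
pre_bomb = rng.normal(1.0, 0.1, 500)            # steady pre-bomb amplitudes
lo, hi = np.percentile(pre_bomb, [2.5, 97.5])   # 95% coverage bounds

t = np.arange(300)
# post-bomb amplitudes: perturbation decaying back toward the pre-bomb level
post_bomb = 1.0 + 3.0 * np.exp(-t / 40.0) + rng.normal(0.0, 0.1, t.size)

def recovery_index(x, lo, hi, hold=20):
    """First index from which the signal remains within [lo, hi] for `hold` samples."""
    inside = (x >= lo) & (x <= hi)
    for i in range(x.size - hold + 1):
        if inside[i:i + hold].all():
            return i
    return None

i = recovery_index(post_bomb, lo, hi)
print("recovered at sample", i)
```

The recovered index plays the role of the "return to pre-bomb conditions" time that the abstract compares against the CPIA 655 recommendation.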

  14. Fractal Signals & Space-Time Cartoons

    NASA Astrophysics Data System (ADS)

    Oetama, H. C. Jakob; Maksoed, W. H.

    2016-03-01

In ``Theory of Scale Relativity'' (1991), L. Nottale states that ``scale relativity is a geometrical & fractal space-time theory''. This is compared with ``a unified, wavelet based framework for efficiently synthesizing, analyzing & processing several broad classes of fractal signals'' (Gregory W. Wornell, ``Signal Processing with Fractals'', 1995). Further, Fig. 1.1 shows a simple waveform from a statistically scale-invariant random process [ibid., p. 3]. Together with RLE Technical Report 566, ``Synthesis, Analysis & Processing of Fractal Signals'' (Wornell, Oct 1991), this work is intended to deduce =a Δt + (1 - β Δt) ... in Petersen et al., ``Scale invariant properties of public debt growth'', 2010, p. 38006p2, to [1/{1 - (2α(λ)/3π) ln(λ/r)}] as depicted in Laurent Nottale, 1991, p. 24. Acknowledgment is devoted to the late HE Mr. Brigadier General (TNI, rtd.) Prof. Ir. HANDOJO.

  15. Statistical Signal Process in R Language in the Pharmacovigilance Programme of India.

    PubMed

    Kumar, Aman; Ahuja, Jitin; Shrivastava, Tarani Prakash; Kumar, Vipin; Kalaiselvan, Vivekanandan

    2018-05-01

The Ministry of Health & Family Welfare, Government of India, initiated the Pharmacovigilance Programme of India (PvPI) in July 2010. The purpose of the PvPI is to collect data on adverse reactions due to medications, analyze it, and use it to recommend informed regulatory interventions, besides communicating the risk to health care professionals and the public. The goal of the present study was to apply statistical tools to find the relationship between drugs and ADRs for signal detection by R programming. Four statistical parameters were proposed for quantitative signal detection: IC025, PRR and PRRlb, chi-square, and N11; we calculated these 4 values using R. We analyzed 78,983 drug-ADR combinations, with a total combination count of 420,060. The calculation of the statistical parameters uses 3 variables: (1) N11 (pair count), (2) N1. (drug margin), and (3) N.1 (ADR margin). The structure and calculation of these 4 statistical parameters in R are easily understandable. On the basis of the IC value (IC > 0), out of the 78,983 drug-ADR combinations we found 8,667 to be significantly associated. The calculation of statistical parameters in R is time-saving and makes it easy to identify new signals in the Indian ICSR (Individual Case Safety Reports) database.
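The disproportionality statistics named above can be computed from the 2x2 contingency counts of one drug-ADR pair. The paper works in R; the following is a hedged Python sketch of the point estimates only (not the IC025 credibility bound or PRRlb), with illustrative counts that are not taken from the PvPI database.

```python
import math

# Inputs for one drug-ADR pair: n11 (pair count), n1_ (drug margin),
# n_1 (ADR margin), n (total count of drug-ADR combinations).

def prr(n11, n1_, n_1, n):
    """Proportional reporting ratio: ADR rate with the drug vs. all other drugs."""
    other = (n_1 - n11) / (n - n1_)
    return (n11 / n1_) / other

def ic(n11, n1_, n_1, n):
    """Information component (point estimate): log2 of observed/expected count."""
    expected = n1_ * n_1 / n
    return math.log2(n11 / expected)

def chi_square(n11, n1_, n_1, n):
    """Pearson chi-square for the 2x2 table, without continuity correction."""
    chi = 0.0
    for obs, row, col in [(n11, n1_, n_1),
                          (n1_ - n11, n1_, n - n_1),
                          (n_1 - n11, n - n1_, n_1),
                          ((n - n1_) - (n_1 - n11), n - n1_, n - n_1)]:
        exp = row * col / n
        chi += (obs - exp) ** 2 / exp
    return chi

n11, n1_, n_1, n = 40, 400, 800, 420060      # illustrative counts
print(prr(n11, n1_, n_1, n))   # well above 1 -> disproportionate reporting
print(ic(n11, n1_, n_1, n))    # > 0, consistent with a signal
```

In practice these values would be computed for every pair in the ICSR table and screened against the thresholds described in the study.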

  16. Object Classification Based on Analysis of Spectral Characteristics of Seismic Signal Envelopes

    NASA Astrophysics Data System (ADS)

    Morozov, Yu. V.; Spektor, A. A.

    2017-11-01

A method for classifying moving objects having a seismic effect on the ground surface is proposed which is based on statistical analysis of the envelopes of received signals. The values of the components of the amplitude spectrum of the envelopes, obtained by applying Hilbert and Fourier transforms, are used as classification criteria. Examples illustrating the statistical properties of the spectra and the operation of the seismic classifier are given for an ensemble of objects of four classes (person, group of people, large animal, vehicle). It is shown that the computational procedures for processing seismic signals are quite simple and can therefore be used in real-time systems with modest requirements for computational resources.
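The envelope-spectrum features described above can be sketched in a few lines: take the envelope via the analytic signal (a Hilbert transform implemented with the FFT), then take the amplitude spectrum of that envelope. The synthetic "seismic" signal below, a carrier with footstep-like amplitude modulation, is an illustrative assumption.

```python
import numpy as np

def envelope(x):
    """Envelope |analytic signal|; the Hilbert transform is done via the FFT."""
    n = x.size
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.abs(np.fft.ifft(X * h))

fs = 500.0
t = np.arange(0, 4.0, 1 / fs)
# 60 Hz carrier whose amplitude pulses at 2 Hz (e.g. a periodic footfall rate)
x = (1.0 + 0.8 * np.cos(2 * np.pi * 2.0 * t)) * np.cos(2 * np.pi * 60.0 * t)

env = envelope(x)
spec = np.abs(np.fft.rfft(env - env.mean()))
freqs = np.fft.rfftfreq(env.size, 1 / fs)
peak = freqs[np.argmax(spec)]
print(peak)  # peak at the 2 Hz modulation rate
```

The components of `spec` are the kind of envelope-spectrum values the classifier would use as features; here the dominant component recovers the modulation rate rather than the carrier frequency.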

  17. Multiplicative point process as a model of trading activity

    NASA Astrophysics Data System (ADS)

    Gontis, V.; Kaulakys, B.

    2004-11-01

For signals consisting of a sequence of pulses, an inherent origin of 1/f noise is Brownian fluctuation of the average interevent time between subsequent pulses. In this paper, we generalize the interevent-time model to reproduce a variety of self-affine time series exhibiting power spectral density S(f) scaling as a power of the frequency f. Furthermore, we analyze the relation between the power-law correlations and the origin of the power-law probability distribution of the signal intensity. We introduce a stochastic multiplicative model for the time intervals between point events and analyze the statistical properties of the signal analytically and numerically. Such a model system exhibits power-law spectral density S(f) ∼ 1/f^β for various values of β, including β = 1/2, 1 and 3/2. Explicit expressions for the power spectra in the low-frequency limit and for the distribution density of the interevent time are obtained. The counting statistics of the events are analyzed analytically and numerically as well. The specific interest of our analysis is in financial markets, where long-range correlations of price fluctuations largely depend on the number of transactions. We analyze the spectral density and counting statistics of the number of transactions. The model reproduces spectral properties of real markets and explains the mechanism behind the power-law distribution of trading activity. The study provides evidence that the statistical properties of financial markets are contained in the statistics of the time intervals between trades. A multiplicative point process serves as a consistent model generating these statistics.
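A multiplicative model for interevent times can be simulated directly. The sketch below is in the spirit of the abstract but is not the paper's calibration: the update rule tau_{k+1} = tau_k + gamma*tau_k^(2*mu-1) + sigma*tau_k^mu*eps_k, the reflecting boundaries, and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
gamma, sigma, mu = 0.0004, 0.02, 0.5     # assumed drift, noise, multiplicativity
tau_min, tau_max = 1e-3, 1.0             # assumed bounds on the interevent time

n_events = 20000
tau = np.empty(n_events)
tau[0] = 0.1
for k in range(n_events - 1):
    step = gamma * tau[k] ** (2 * mu - 1) + sigma * tau[k] ** mu * rng.normal()
    nxt = tau[k] + step
    # reflect at the boundaries to keep tau inside [tau_min, tau_max]
    if nxt < tau_min:
        nxt = 2 * tau_min - nxt
    elif nxt > tau_max:
        nxt = 2 * tau_max - nxt
    tau[k + 1] = nxt

events = np.cumsum(tau)                  # event (trade) times
# counting statistics: number of events in consecutive fixed windows
edges = np.arange(0.0, events[-1], 10.0)
counts, _ = np.histogram(events, bins=edges)
print(counts.mean(), counts.std())       # summary of the counting statistics
```

From such a simulation one can estimate the spectral density and the distribution of the event rate numerically and compare them with the paper's analytic low-frequency expressions.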

  18. Experiments with recursive estimation in astronomical image processing

    NASA Technical Reports Server (NTRS)

    Busko, I.

    1992-01-01

Recursive estimation concepts have been applied to image enhancement problems since the 1970s. However, very few applications in the particular area of astronomical image processing are known. These concepts were derived, for 2-dimensional images, from the well-known theory of Kalman filtering in one dimension. The historical reasons for applying these techniques to digital images are related to the images' scanned nature, in which the temporal output of a scanner device can be processed on-line by techniques borrowed directly from 1-dimensional recursive signal analysis. However, recursive estimation has particular properties that make it attractive even in modern days, when big computer memories make the full scanned image available to the processor at any given time. One particularly important aspect is the ability of recursive techniques to deal with non-stationary phenomena, that is, phenomena whose statistical properties vary in time (or position in a 2-D image). Many image processing methods make underlying stationarity assumptions either for the stochastic field being imaged, for the imaging system properties, or both. They will underperform, or even fail, when applied to images that deviate significantly from stationarity. Recursive methods, on the contrary, make it feasible to perform adaptive processing, that is, to process the image by a processor with properties tuned to the image's local statistical properties. Recursive estimation can be used to build estimates of images degraded by such phenomena as noise and blur. We show examples of recursive adaptive processing of astronomical images, using several local statistical properties to drive the adaptive processor, such as average signal intensity, signal-to-noise ratio, and the autocorrelation function. Software was developed under IRAF, and as such will be made available to interested users.

  19. Advances in optical information processing IV; Proceedings of the Meeting, Orlando, FL, Apr. 18-20, 1990

    NASA Astrophysics Data System (ADS)

    Pape, Dennis R.

    1990-09-01

    The present conference discusses topics in optical image processing, optical signal processing, acoustooptic spectrum analyzer systems and components, and optical computing. Attention is given to tradeoffs in nonlinearly recorded matched filters, miniature spatial light modulators, detection and classification using higher-order statistics of optical matched filters, rapid traversal of an image data base using binary synthetic discriminant filters, wideband signal processing for emitter location, an acoustooptic processor for autonomous SAR guidance, and sampling of Fresnel transforms. Also discussed are an acoustooptic RF signal-acquisition system, scanning acoustooptic spectrum analyzers, the effects of aberrations on acoustooptic systems, fast optical digital arithmetic processors, information utilization in analog and digital processing, optical processors for smart structures, and a self-organizing neural network for unsupervised learning.

  20. A novelty detection diagnostic methodology for gearboxes operating under fluctuating operating conditions using probabilistic techniques

    NASA Astrophysics Data System (ADS)

    Schmidt, S.; Heyns, P. S.; de Villiers, J. P.

    2018-02-01

In this paper, a fault diagnostic methodology is developed which is able to detect, locate and trend gear faults under fluctuating operating conditions when only vibration data from a single transducer, measured on a healthy gearbox, are available. A two-phase feature extraction and modelling process is proposed to infer the operating condition and, based on it, to detect changes in the machine condition. Information from optimised machine and operating condition hidden Markov models is statistically combined to generate a discrepancy signal, which is post-processed to infer the condition of the gearbox. The discrepancy signal is processed and combined with statistical methods for automatic fault detection and localisation and to perform fault trending over time. The proposed methodology is validated on experimental data, and a tacholess order tracking methodology is used to enhance the cost-effectiveness of the diagnostic methodology.

  1. Evaluation of higher order statistics parameters for multi channel sEMG using different force levels.

    PubMed

    Naik, Ganesh R; Kumar, Dinesh K

    2011-01-01

The electromyography (EMG) signal provides information about the performance of muscles and nerves. The shape of the muscle signal and motor unit action potential (MUAP) varies with the position of the electrode and with changes in contraction level. This research evaluates the non-Gaussianity of the surface electromyogram (sEMG) signal using higher order statistics (HOS) parameters. To achieve this, experiments were conducted for four different finger and wrist actions at different levels of maximum voluntary contraction (MVC). Our experimental analysis shows that at constant force and for non-fatiguing contractions, the probability density functions (PDF) of sEMG signals were non-Gaussian. For lower contraction levels (below 30% of MVC), the PDF tends toward Gaussian. These measures were verified by computing kurtosis values for different MVCs.
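The kurtosis measure used above to quantify non-Gaussianity can be sketched directly. Excess kurtosis is approximately zero for a Gaussian and positive for peaky, heavy-tailed distributions; the two synthetic "signals" below are illustrative stand-ins, not recorded sEMG.

```python
import numpy as np

def excess_kurtosis(x):
    """Fourth standardized moment minus 3 (zero for a Gaussian)."""
    x = np.asarray(x, dtype=float)
    z = (x - x.mean()) / x.std()
    return np.mean(z ** 4) - 3.0

rng = np.random.default_rng(42)
gaussian_like = rng.normal(0.0, 1.0, 50000)   # low-MVC analogue
peaky = rng.laplace(0.0, 1.0, 50000)          # heavy-tailed, non-Gaussian analogue

print(excess_kurtosis(gaussian_like))  # close to 0
print(excess_kurtosis(peaky))          # clearly positive (~3 for a Laplace law)
```

Comparing this statistic across contraction levels is the simplest HOS test of the Gaussianity trend reported in the abstract.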

  2. Signal quality and Bayesian signal processing in neurofeedback based on real-time fMRI.

    PubMed

    Koush, Yury; Zvyagintsev, Mikhail; Dyck, Miriam; Mathiak, Krystyna A; Mathiak, Klaus

    2012-01-02

Real-time fMRI allows analysis and visualization of brain activity online, i.e. within one repetition time. It can be used in neurofeedback applications where subjects attempt to control an activation level in a specified region of interest (ROI) of their brain. The signal derived from the ROI is contaminated with noise and artifacts, namely physiological noise from breathing and heartbeat, scanner drift, motion-related artifacts and measurement noise. We developed a Bayesian approach to reduce noise and to remove artifacts in real time using a modified Kalman filter. The system performs several signal processing operations: subtraction of constant and low-frequency signal components, spike removal and signal smoothing. Quantitative feedback signal quality analysis was used to estimate the quality of the neurofeedback time series and the performance of the applied signal processing on different ROIs. The signal-to-noise ratio (SNR) across the entire time series and the group event-related SNR (eSNR) were significantly higher for the processed time series in comparison to the raw data. The applied signal processing improved the t-statistic, increasing the significance of blood oxygen level-dependent (BOLD) signal changes. Accordingly, the contrast-to-noise ratio (CNR) of the feedback time series was improved as well. In addition, the data revealed an increase in localized self-control across feedback sessions. The new signal processing approach provided reliable neurofeedback, performed precise artifact removal, reduced noise, and required minimal manual adjustments of parameters. Advanced and fast online signal processing algorithms considerably increased the quality as well as the information content of the control signal, which in turn resulted in higher contingency in the neurofeedback loop. Copyright © 2011 Elsevier Inc. All rights reserved.
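The combination of Kalman smoothing and spike removal can be illustrated with a scalar sketch. This is in the spirit of the approach described above, not the authors' filter: a random-walk state model, with implausibly large innovations treated as spikes by temporarily inflating the measurement variance. All noise levels and the synthetic ROI series are assumptions.

```python
import numpy as np

def kalman_smooth(y, q=0.01, r=1.0, spike_sigma=4.0):
    """Scalar random-walk Kalman filter with a simple spike-rejection heuristic."""
    x, p = y[0], r                             # state estimate and its variance
    out = np.empty(y.size)
    out[0] = x
    for k in range(1, y.size):
        p = p + q                              # predict (random-walk model)
        innov = y[k] - x
        # inflate measurement variance when the innovation is implausibly large
        r_k = r * 100.0 if innov ** 2 > spike_sigma ** 2 * (p + r) else r
        gain = p / (p + r_k)                   # update
        x = x + gain * innov
        p = (1.0 - gain) * p
        out[k] = x
    return out

rng = np.random.default_rng(7)
t = np.arange(400)
truth = 1.0 + 0.5 * np.sin(2 * np.pi * t / 100.0)   # slow "BOLD-like" drift
y = truth + rng.normal(0.0, 0.3, t.size)
y[150] += 5.0                                       # a motion-like spike
xs = kalman_smooth(y, q=0.005, r=0.09)
print(np.abs(xs - truth).mean(), np.abs(y - truth).mean())
```

The smoothed series tracks the slow component while largely ignoring the spike, which is the behavior needed for a stable feedback signal.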

  3. Adaptive interference cancel filter for evoked potential using high-order cumulants.

    PubMed

    Lin, Bor-Shyh; Lin, Bor-Shing; Chong, Fok-Ching; Lai, Feipei

    2004-01-01

This paper presents evoked potential (EP) processing using an adaptive interference cancel (AIC) filter with second- and higher-order cumulants. In the conventional ensemble averaging method, experiments must be repeated to record the required data. The AIC structure with second-order statistics has proved more efficient than traditional averaging for EP processing, but it is sensitive both to the reference signal statistics and to the choice of step size. We therefore propose a higher-order-statistics-based AIC method to overcome these disadvantages. The study used somatosensory EPs corrupted with EEG; a gradient-type algorithm drives the AIC method. Comparisons of AIC filters based on second-, third-, and fourth-order statistics are also presented. We observed that the AIC filter with third-order statistics has better convergence for EP processing and is not sensitive to the choice of step size or reference input.
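The AIC structure itself can be sketched in its classic second-order-statistics (LMS) form, which is the baseline the paper improves on; the cumulant-based update is not reproduced here. An LMS filter predicts the interference in the primary channel from a correlated reference channel, and the cancellation error recovers the evoked-potential analogue. All signals below are synthetic assumptions.

```python
import numpy as np

def lms_cancel(primary, reference, n_taps=8, mu=0.01):
    """Adaptive interference canceller: LMS prediction of the interference."""
    w = np.zeros(n_taps)
    out = np.zeros(primary.size)
    for k in range(n_taps - 1, primary.size):
        x = reference[k - n_taps + 1:k + 1][::-1]   # current + past reference taps
        y = w @ x                                   # interference estimate
        e = primary[k] - y                          # canceller output
        w += 2 * mu * e * x                         # LMS weight update
        out[k] = e
    return out

rng = np.random.default_rng(3)
n = 5000
t = np.arange(n)
ep = 0.5 * np.sin(2 * np.pi * t / 50.0)       # "evoked potential" component
ref = rng.normal(0.0, 1.0, n)                 # reference picks up the source
interf = 0.8 * ref + 0.3 * np.roll(ref, 1)    # "EEG" interference in the primary
primary = ep + interf

out = lms_cancel(primary, ref)
err_before = np.mean((primary - ep) ** 2)
err_after = np.mean((out[1000:] - ep[1000:]) ** 2)   # after convergence
print(err_before, err_after)
```

The residual error after convergence is dominated by the desired EP component, which is exactly what the canceller is meant to leave untouched.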

  4. UWB pulse detection and TOA estimation using GLRT

    NASA Astrophysics Data System (ADS)

    Xie, Yan; Janssen, Gerard J. M.; Shakeri, Siavash; Tiberius, Christiaan C. J. M.

    2017-12-01

    In this paper, a novel statistical approach is presented for time-of-arrival (TOA) estimation based on first path (FP) pulse detection using a sub-Nyquist sampling ultra-wide band (UWB) receiver. The TOA measurement accuracy, which cannot be improved by averaging of the received signal, can be enhanced by the statistical processing of a number of TOA measurements. The TOA statistics are modeled and analyzed for a UWB receiver using threshold crossing detection of a pulse signal with noise. The detection and estimation scheme based on the Generalized Likelihood Ratio Test (GLRT) detector, which captures the full statistical information of the measurement data, is shown to achieve accurate TOA estimation and allows for a trade-off between the threshold level, the noise level, the amplitude and the arrival time of the first path pulse, and the accuracy of the obtained final TOA.

  5. Novel sonar signal processing tool using Shannon entropy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quazi, A.H.

    1996-06-01

Conventional signal processing extracts information from sonar signals using amplitude, signal energy, or frequency-domain quantities obtained with spectral analysis techniques. The objective is to investigate an alternative approach, entirely different from traditional signal processing: to use Shannon entropy as a tool for processing sonar signals, with emphasis on detection, classification, and localization, leading to superior sonar system performance. Traditionally, sonar signals are processed coherently, semi-coherently, or incoherently, depending on the a priori knowledge of the signals and noise. Here, the detection, classification, and localization technique is based on the entropy of the random process. Under a constant energy constraint, the entropy of a received process with a finite number of sample points is maximum when hypothesis H0 (the received process consists of noise alone) is true, and decreases when a correlated signal is present (H1). The detection strategy is therefore: (I) calculate the entropy of the received data; (II) compare the entropy with the maximum value; and (III) decide: H1 is assumed if the difference is large compared to a pre-assigned threshold, and H0 is assumed otherwise. The test statistic differs between the entropies under H0 and H1. We show simulated results for detecting stationary and non-stationary signals in noise, and results on detection of defects in a Plexiglas bar using an ultrasonic experiment conducted by Hughes. © 1996 American Institute of Physics.
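The detection strategy (I)-(III) above can be sketched with histogram entropy. The normalization step stands in for the constant-energy constraint (a Gaussian maximizes entropy at fixed variance, so a correlated or deterministic component lowers it). The signals, noise, bin layout, and threshold are illustrative assumptions.

```python
import numpy as np

def histogram_entropy(x, bins=64):
    """Shannon entropy (nats) of a histogram of the energy-normalized samples."""
    z = (x - x.mean()) / x.std()                 # constant-energy normalization
    p, _ = np.histogram(z, bins=bins, range=(-4.0, 4.0))
    p = p[p > 0] / z.size
    return -np.sum(p * np.log(p))

rng = np.random.default_rng(11)
n = 20000
t = np.arange(n)
noise_only = rng.normal(0.0, 1.0, n)                            # H0
signal_plus = rng.normal(0.0, 1.0, n) + 2.0 * np.sin(0.05 * t)  # H1

h0 = histogram_entropy(noise_only)               # (I) + reference maximum
h1 = histogram_entropy(signal_plus)
threshold = 0.02                                 # (II)-(III) pre-assigned threshold
print(h0, h1, h0 - h1 > threshold)
```

The entropy drop under H1 is the test statistic; in a deployed detector, H0's reference entropy would be characterized from known noise-only intervals.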

  6. Application of machine learning and expert systems to Statistical Process Control (SPC) chart interpretation

    NASA Technical Reports Server (NTRS)

    Shewhart, Mark

    1991-01-01

    Statistical Process Control (SPC) charts are one of several tools used in quality control. Other tools include flow charts, histograms, cause and effect diagrams, check sheets, Pareto diagrams, graphs, and scatter diagrams. A control chart is simply a graph which indicates process variation over time. The purpose of drawing a control chart is to detect any changes in the process signalled by abnormal points or patterns on the graph. The Artificial Intelligence Support Center (AISC) of the Acquisition Logistics Division has developed a hybrid machine learning expert system prototype which automates the process of constructing and interpreting control charts.
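Basic control-chart logic of the kind such an expert system automates can be sketched as follows. The rules shown (a 3-sigma limit breach, and 8 consecutive points on one side of the center line) are common textbook rules, and the data are illustrative.

```python
def control_limits(baseline):
    """Center line and 3-sigma control limits from in-control baseline data."""
    n = len(baseline)
    mean = sum(baseline) / n
    var = sum((x - mean) ** 2 for x in baseline) / (n - 1)
    sd = var ** 0.5
    return mean, mean - 3 * sd, mean + 3 * sd

def flag_points(data, mean, lcl, ucl, run_len=8):
    """Indices flagged as abnormal points or patterns on the chart."""
    flags = set()
    for i, x in enumerate(data):
        if x < lcl or x > ucl:
            flags.add(i)                              # rule 1: beyond 3-sigma limits
    side = [1 if x > mean else -1 for x in data]
    for i in range(len(data) - run_len + 1):
        if abs(sum(side[i:i + run_len])) == run_len:
            flags.update(range(i, i + run_len))       # rule 2: run on one side
    return sorted(flags)

baseline = [10.1, 9.8, 10.0, 10.2, 9.9, 10.1, 9.7, 10.3, 10.0, 9.9]
mean, lcl, ucl = control_limits(baseline)
data = [10.0, 9.9, 12.5, 10.1] + [10.3] * 8    # one spike, then a shifted run
print(flag_points(data, mean, lcl, ucl))       # [2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
```

An expert system layers pattern interpretation (trends, cycles, stratification) on top of numeric rules like these.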

  7. Inelastic Single Pion Signal Study in T2K νe Appearance using Modified Decay Electron Cut

    NASA Astrophysics Data System (ADS)

    Iwamoto, Konosuke; T2K Collaboration

    2015-04-01

The T2K long-baseline neutrino experiment uses sophisticated selection criteria to identify the neutrino oscillation signals among the events reconstructed in the Super-Kamiokande (SK) detector for νe and νμ appearance and disappearance analyses. In current analyses, charged-current quasi-elastic (CCQE) events are used as the signal reaction in the SK detector because the energy can be precisely reconstructed. This talk presents an approach to increase the statistics of the oscillation analysis by including non-CCQE events with one Michel electron, reconstructing them as inelastic single pion production. The increase in statistics, the backgrounds to this new process, and the implications for energy reconstruction will be presented for this enlarged event sample.

  8. Digital signal processing methods for biosequence comparison.

    PubMed Central

    Benson, D C

    1990-01-01

A method is discussed for DNA or protein sequence comparison using a finite field fast Fourier transform, a digital signal processing technique, and statistical methods are discussed for analyzing the output of this algorithm. This method compares two sequences of length N in computing time proportional to N log N, compared to N² for methods currently used. This makes it feasible to compare very long sequences. An example is given to show that the method correctly identifies sites of known homology. PMID:2349096
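The core idea, counting matches at every relative offset of two sequences in O(N log N) via the convolution theorem, can be sketched with indicator vectors. This uses an ordinary complex FFT rather than the paper's finite-field transform, and the sequences are illustrative.

```python
import numpy as np

def match_counts(a, b):
    """counts[k] = number of positions p where a[p + k] == b[p] (zero-padded)."""
    n = len(a) + len(b) - 1
    size = 1 << (n - 1).bit_length()          # power-of-two FFT length
    spectrum = np.zeros(size, dtype=complex)
    for base in "ACGT":
        x = np.array([c == base for c in a], dtype=float)   # indicator of base in a
        y = np.array([c == base for c in b], dtype=float)   # indicator of base in b
        # cross-correlation via the FFT: FFT(x) * conj(FFT(y))
        spectrum += np.fft.fft(x, size) * np.conj(np.fft.fft(y, size))
    return np.fft.ifft(spectrum).real.round().astype(int)

a = "ACGTACGTAC"
b = "CGTACGTACG"        # b equals a shifted by one position
counts = match_counts(a, b)
print(counts[0], counts[1])   # 0 matches at offset 0, 9 matches at offset 1
```

A sharp peak in `counts` at some offset is the signature of a homologous alignment; the statistical analysis in the paper assesses how large a peak must be to be significant.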

  9. Software algorithms for false alarm reduction in LWIR hyperspectral chemical agent detection

    NASA Astrophysics Data System (ADS)

    Manolakis, D.; Model, J.; Rossacci, M.; Zhang, D.; Ontiveros, E.; Pieper, M.; Seeley, J.; Weitz, D.

    2008-04-01

The long-wave infrared (LWIR) hyperspectral sensing modality is often used for detection and identification of chemical warfare agents (CWAs) in both military and civilian situations. The inherent nature and complexity of background clutter dictate a need for sophisticated and robust statistical models, which in turn drive the design of optimum signal processing algorithms that best exploit hyperspectral data to decide on the presence or absence of potentially harmful CWAs. This paper describes the basic elements of an automated signal processing pipeline developed at MIT Lincoln Laboratory. In addition to describing this signal processing architecture in detail, we briefly describe the key signal models that form the foundation of these algorithms, as well as some spatial processing techniques used for false alarm mitigation. Finally, we apply this processing pipeline to real data measured by the Telops FIRST hyperspectral sensor to demonstrate its practical utility for the user community.

  10. Incorporating Multi-criteria Optimization and Uncertainty Analysis in the Model-Based Systems Engineering of an Autonomous Surface Craft

    DTIC Science & Technology

    2009-09-01

SAS Statistical Analysis Software; SE Systems Engineering; SEP Systems Engineering Process; SHP Shaft Horsepower; SIGINT Signals Intelligence ... management occurs (OSD 2002). The Systems Engineering Process (SEP), displayed in Figure 2, is a comprehensive, iterative and recursive problem

  11. A sequential method for spline approximation with variable knots. [recursive piecewise polynomial signal processing

    NASA Technical Reports Server (NTRS)

    Mier Muth, A. M.; Willsky, A. S.

    1978-01-01

    In this paper we describe a method for approximating a waveform by a spline. The method is quite efficient, as the data are processed sequentially. The basis of the approach is to view the approximation problem as a question of estimation of a polynomial in noise, with the possibility of abrupt changes in the highest derivative. This allows us to bring several powerful statistical signal processing tools into play. We also present some initial results on the application of our technique to the processing of electrocardiograms, where the knot locations themselves may be some of the most important pieces of diagnostic information.

  12. Enhanced echolocation via robust statistics and super-resolution of sonar images

    NASA Astrophysics Data System (ADS)

    Kim, Kio

Echolocation is a process in which an animal uses acoustic signals to exchange information with its environment. In a recent study, Neretti et al. have shown that the use of robust statistics can significantly improve the resiliency of echolocation against noise and enhance its accuracy by suppressing the development of sidelobes in the processing of an echo signal. In this research, the use of robust statistics is extended to problems in underwater exploration. The dissertation consists of two parts. Part I describes how robust statistics can enhance the identification of target objects, which in this case are cylindrical containers filled with four different liquids. In particular, this work employs a variation of an existing robust estimator called an L-estimator, first suggested by Koenker and Bassett. As pointed out by Au et al., a 'highlight interval' is an important feature, and it is closely related to many other features known to be crucial for dolphin echolocation. A varied L-estimator described in this text is used to enhance the detection of highlight intervals, which eventually leads to successful classification of echo signals. Part II extends the problem to two dimensions. Thanks to advances in materials and computer technology, various sonar imaging modalities are available on the market. By registering acoustic images from such video sequences, one can extract more information on the region of interest. Computer vision and image processing allowed application of robust statistics to the acoustic images produced by forward looking sonar systems, such as Dual-frequency Identification Sonar and ProViewer. The first use of robust statistics for sonar image enhancement in this text is in image registration. Random Sampling Consensus (RANSAC) is widely used for image registration. The registration algorithm using RANSAC is optimized for sonar image registration, and the performance is studied.
The second use of robust statistics is in fusing the images. It is shown that the maximum a posteriori fusion method can be formulated in a Kalman filter-like manner, and also that the resulting expression is identical to a W-estimator with a specific weight function.
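The L-estimator family named above is just a weighted combination of order statistics. The trimmed mean below (interior order statistics weighted equally, extremes given zero weight) is the textbook instance, not the dissertation's estimator, and the echo-like data with impulsive outliers are an illustrative assumption.

```python
import numpy as np

def l_estimate(x, weights):
    """L-estimator: apply weights to the sorted sample (order statistics)."""
    xs = np.sort(np.asarray(x, dtype=float))
    w = np.asarray(weights, dtype=float)
    return float(np.sum(w * xs) / np.sum(w))

def trimmed_mean_weights(n, trim=0.2):
    """Zero weight on the lowest/highest trim fraction, uniform in between."""
    k = int(n * trim)
    w = np.ones(n)
    w[:k] = 0.0
    w[n - k:] = 0.0
    return w

rng = np.random.default_rng(5)
echo = rng.normal(10.0, 0.5, 100)      # echo amplitude samples around 10
echo[:5] = 80.0                        # a few impulsive outlier samples

w = trimmed_mean_weights(echo.size, trim=0.1)
est = l_estimate(echo, w)
print(float(np.mean(echo)), est)       # the L-estimate stays near 10
```

Because the weights depend only on rank, a handful of corrupted samples gets zero weight, which is the robustness property exploited for highlight-interval detection.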

  13. Statistical optimisation techniques in fatigue signal editing problem

    NASA Astrophysics Data System (ADS)

    Nopiah, Z. M.; Osman, M. H.; Baharin, N.; Abdullah, S.

    2015-02-01

Success in fatigue signal editing is determined by the level of length reduction without compromising statistical constraints. A great reduction rate can be achieved by removing small amplitude cycles from the recorded signal. The long recorded signal sometimes renders the cycle-to-cycle editing process daunting. This has encouraged researchers to focus on the segment-based approach. This paper discusses joint application of the Running Damage Extraction (RDE) technique and single constrained Genetic Algorithm (GA) in fatigue signal editing optimisation. In the first section, the RDE technique is used to restructure and summarise the fatigue strain. This technique combines the overlapping window and fatigue strain-life models. It is designed to identify and isolate the fatigue events that exist in the variable amplitude strain data into different segments whereby the retention of statistical parameters and the vibration energy are considered. In the second section, the fatigue data editing problem is formulated as a constrained single optimisation problem that can be solved using the GA method. The GA produces the shortest edited fatigue signal by selecting appropriate segments from a pool of labelling segments. Challenges arise due to constraints on the segment selection by deviation level over three signal properties, namely cumulative fatigue damage, root mean square and kurtosis values. Experimental results over several case studies show that the idea of solving fatigue signal editing within a framework of optimisation is effective and automatic, and that the GA is robust for constrained segment selection.

  14. Statistical optimisation techniques in fatigue signal editing problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nopiah, Z. M.; Osman, M. H.; Baharin, N.

Success in fatigue signal editing is determined by the level of length reduction without compromising statistical constraints. A great reduction rate can be achieved by removing small amplitude cycles from the recorded signal. The long recorded signal sometimes renders the cycle-to-cycle editing process daunting. This has encouraged researchers to focus on the segment-based approach. This paper discusses joint application of the Running Damage Extraction (RDE) technique and single constrained Genetic Algorithm (GA) in fatigue signal editing optimisation. In the first section, the RDE technique is used to restructure and summarise the fatigue strain. This technique combines the overlapping window and fatigue strain-life models. It is designed to identify and isolate the fatigue events that exist in the variable amplitude strain data into different segments whereby the retention of statistical parameters and the vibration energy are considered. In the second section, the fatigue data editing problem is formulated as a constrained single optimisation problem that can be solved using the GA method. The GA produces the shortest edited fatigue signal by selecting appropriate segments from a pool of labelling segments. Challenges arise due to constraints on the segment selection by deviation level over three signal properties, namely cumulative fatigue damage, root mean square and kurtosis values. Experimental results over several case studies show that the idea of solving fatigue signal editing within a framework of optimisation is effective and automatic, and that the GA is robust for constrained segment selection.

  15. On Certain New Methodology for Reducing Sensor and Readout Electronics Circuitry Noise in Digital Domain

    NASA Technical Reports Server (NTRS)

    Kizhner, Semion; Miko, Joseph; Bradley, Damon; Heinzen, Katherine

    2008-01-01

    NASA Hubble Space Telescope (HST) and upcoming cosmology science missions carry instruments with multiple focal planes populated with many large sensor detector arrays. These sensors are passively cooled to low temperatures for low-level light (L3) and near-infrared (NIR) signal detection, and the sensor readout electronics circuitry must perform at extremely low noise levels to enable new required science measurements. Because we are at the technological edge of enhanced performance for sensors and readout electronics circuitry, as determined by the thermal noise level at a given temperature in the analog domain, we must find new ways of further compensating for the noise in the digital domain. To facilitate this new approach, state-of-the-art sensors are augmented at their array hardware boundaries by non-illuminated reference pixels, which can be used to reduce noise attributed to sensors. There are a few proposed methodologies for processing the information carried by reference pixels in the digital domain, as employed by the Hubble Space Telescope and the James Webb Space Telescope Projects. These methods involve using spatial and temporal statistical parameters derived from boundary reference pixel information to enhance the active (non-reference) pixel signals. To make a step beyond this heritage methodology, we apply the NASA-developed technology known as the Hilbert-Huang Transform Data Processing System (HHT-DPS) for reference pixel information processing and its utilization in reconfigurable hardware on-board a spaceflight instrument or post-processing on the ground. The methodology examines signal processing for a 2-D domain, in which high-variance components of the thermal noise are carried by both active and reference pixels, similar to that in processing of low-voltage differential signals and subtraction of a single analog reference pixel from all active pixels on the sensor.
Heritage methods using the aforementioned statistical parameters in the digital domain (such as statistical averaging of the reference pixels themselves) zero out the high-variance components, and the counterpart components in the active pixels remain uncorrected. This paper describes how the new methodology was demonstrated through analysis of fast-varying noise components using the Hilbert-Huang Transform Data Processing System tool (HHT-DPS) developed at NASA and the high-level programming language MATLAB (Trademark of MathWorks Inc.), as well as alternative methods for correcting for the high-variance noise component, using HgCdTe sensor data. NASA Hubble Space Telescope data post-processing, as well as on-board instrument data processing from all sensor channels in future deep-space cosmology projects, would benefit from this effort.
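    For contrast, the heritage-style reference-pixel correction that the paper moves beyond can be sketched in a few lines; the frame geometry and noise levels below are illustrative assumptions, not the actual HST/JWST detector layout:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    true_scene = rng.uniform(100.0, 200.0, size=(8, 8))   # active-pixel sky signal
    row_noise = rng.normal(0.0, 5.0, size=(8, 1))         # common-mode drift per row

    active = true_scene + row_noise
    reference = np.broadcast_to(row_noise, (8, 4)).copy() # non-illuminated pixels
    reference += rng.normal(0.0, 1.0, size=(8, 4))        # independent read noise

    # Heritage-style correction: subtract the per-row statistical mean of the
    # boundary reference pixels from the active pixels in the same row.
    corrected = active - reference.mean(axis=1, keepdims=True)

    print(np.abs(active - true_scene).mean(),
          np.abs(corrected - true_scene).mean())
    ```

    The averaging suppresses the common-mode component but, as the abstract notes, any fast-varying component that averaging zeroes out in the reference pixels is left uncorrected in the active pixels.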

  16. Chemometrics.

    ERIC Educational Resources Information Center

    Delaney, Michael F.

    1984-01-01

    This literature review on chemometrics (covering December 1981 to December 1983) is organized under these headings: personal supermicrocomputers; education and books; statistics; modeling and parameter estimation; resolution; calibration; signal processing; image analysis; factor analysis; pattern recognition; optimization; artificial…

  17. Wavelet methodology to improve single unit isolation in primary motor cortex cells

    PubMed Central

    Ortiz-Rosario, Alexis; Adeli, Hojjat; Buford, John A.

    2016-01-01

    The proper isolation of action potentials recorded extracellularly from neural tissue is an active area of research in the fields of neuroscience and biomedical signal processing. This paper presents an isolation methodology for neural recordings using the wavelet transform (WT), a statistical thresholding scheme, and the principal component analysis (PCA) algorithm. The effectiveness of five different mother wavelets was investigated: biorthogonal, Daubechies, discrete Meyer, symmetric, and Coifman; along with three different wavelet coefficient thresholding schemes: fixed form threshold, Stein’s unbiased estimate of risk, and minimax; and two different thresholding rules: soft and hard thresholding. The signal quality was evaluated using three different statistical measures: mean-squared error, root-mean squared, and signal to noise ratio. The clustering quality was evaluated using two different statistical measures: isolation distance, and L-ratio. This research shows that the selection of the mother wavelet has a strong influence on the clustering and isolation of single unit neural activity, with the Daubechies 4 wavelet and minimax thresholding scheme performing the best. PMID:25794461
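    The thresholding machinery compared in the study can be illustrated with a one-level Haar transform, the fixed form (universal) threshold, and both thresholding rules; this minimal sketch assumes a toy spike waveform and omits the PCA and clustering stages of the actual pipeline:

    ```python
    import numpy as np

    def haar_level(x):
        """One level of the Haar wavelet transform: (approximation, detail)."""
        a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
        d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
        return a, d

    def inv_haar_level(a, d):
        x = np.empty(a.size * 2)
        x[0::2] = (a + d) / np.sqrt(2.0)
        x[1::2] = (a - d) / np.sqrt(2.0)
        return x

    def soft(c, t):
        """Soft rule: shrink coefficients toward zero by the threshold."""
        return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

    def hard(c, t):
        """Hard rule: keep coefficients above the threshold unchanged."""
        return np.where(np.abs(c) > t, c, 0.0)

    rng = np.random.default_rng(2)
    n = 1024
    clean = np.exp(-((np.arange(n) - 300.0) / 8.0) ** 2)   # spike-like waveform
    noisy = clean + rng.normal(0.0, 0.1, n)

    a, d = haar_level(noisy)
    sigma = np.median(np.abs(d)) / 0.6745                  # robust noise estimate
    t = sigma * np.sqrt(2.0 * np.log(n))                   # fixed form threshold
    denoised = inv_haar_level(a, soft(d, t))               # or hard(d, t)

    mse_noisy = np.mean((noisy - clean) ** 2)
    mse_denoised = np.mean((denoised - clean) ** 2)
    print(mse_noisy, mse_denoised)
    ```

    A multi-level decomposition with the paper's mother wavelets (e.g. via PyWavelets) follows the same pattern, thresholding the detail coefficients at each level.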

  18. Expert system for testing industrial processes and determining sensor status

    DOEpatents

    Gross, K.C.; Singer, R.M.

    1998-06-02

    A method and system are disclosed for monitoring both an industrial process and a sensor. The method and system include determining a minimum number of sensor pairs needed to test the industrial process as well as the sensor for evaluating the state of operation of both. The technique further includes generating a first and second signal characteristic of an industrial process variable. After obtaining two signals associated with one physical variable, a difference function is obtained by determining the arithmetic difference between the pair of signals over time. A frequency domain transformation is made of the difference function to obtain Fourier modes describing a composite function. A residual function is obtained by subtracting the composite function from the difference function and the residual function (free of nonwhite noise) is analyzed by a statistical probability ratio test. 24 figs.

  19. Expert system for testing industrial processes and determining sensor status

    DOEpatents

    Gross, Kenneth C.; Singer, Ralph M.

    1998-01-01

    A method and system for monitoring both an industrial process and a sensor. The method and system include determining a minimum number of sensor pairs needed to test the industrial process as well as the sensor for evaluating the state of operation of both. The technique further includes generating a first and second signal characteristic of an industrial process variable. After obtaining two signals associated with one physical variable, a difference function is obtained by determining the arithmetic difference between the pair of signals over time. A frequency domain transformation is made of the difference function to obtain Fourier modes describing a composite function. A residual function is obtained by subtracting the composite function from the difference function and the residual function (free of nonwhite noise) is analyzed by a statistical probability ratio test.
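    The patent's processing chain (difference function, Fourier composite, residual test) can be sketched as follows; the signals, the single-channel disturbance, and the use of a lag-1 autocorrelation check in place of the sequential probability ratio test are illustrative simplifications:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    t = np.arange(2048) / 2048.0
    process = np.sin(2 * np.pi * 5 * t)                 # shared plant variable

    sensor_a = process + rng.normal(0, 0.05, t.size)
    sensor_b = process + rng.normal(0, 0.05, t.size)
    sensor_a += 0.2 * np.sin(2 * np.pi * 60 * t)        # disturbance on one channel

    diff = sensor_a - sensor_b                          # plant signal cancels

    # Composite function: the dominant Fourier modes of the difference.
    spectrum = np.fft.rfft(diff)
    keep = np.argsort(np.abs(spectrum))[-8:]            # 8 strongest modes
    composite = np.zeros_like(spectrum)
    composite[keep] = spectrum[keep]
    residual = diff - np.fft.irfft(composite, n=diff.size)

    # The residual, now free of structured (nonwhite) components, would feed a
    # sequential probability ratio test; here we only check it is near-white.
    acf_diff = np.corrcoef(diff[:-1], diff[1:])[0, 1]
    acf_res = np.corrcoef(residual[:-1], residual[1:])[0, 1]
    print(acf_diff, acf_res)
    ```

    Subtracting the composite removes the structured disturbance that survives the sensor-pair differencing, leaving a residual suitable for a whiteness-based statistical test.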

  20. Statistical Techniques for Signal Processing

    DTIC Science & Technology

    1993-01-12

    functions and extended influence functions of the associated underlying estimators. An interesting application of the influence function and its...and related filter structures. While the influence function is best known for its role in characterizing the robustness of estimators, the mathematical...statistics can be designed and analyzed for performance using the influence function as a tool. In particular, we have examined the mean-median

  1. Statistical properties of filtered pseudorandom digital sequences formed from the sum of maximum-length sequences

    NASA Technical Reports Server (NTRS)

    Wallace, G. R.; Weathers, G. D.; Graf, E. R.

    1973-01-01

    The statistics of filtered pseudorandom digital sequences called hybrid-sum sequences, formed from the modulo-two sum of several maximum-length sequences, are analyzed. The results indicate that a relation exists between the statistics of the filtered sequence and the characteristic polynomials of the component maximum length sequences. An analysis procedure is developed for identifying a large group of sequences with good statistical properties for applications requiring the generation of analog pseudorandom noise. By use of the analysis approach, the filtering process is approximated by the convolution of the sequence with a sum of unit step functions. A parameter reflecting the overall statistical properties of filtered pseudorandom sequences is derived. This parameter is called the statistical quality factor. A computer algorithm to calculate the statistical quality factor for the filtered sequences is presented, and the results for two examples of sequence combinations are included. The analysis reveals that the statistics of the signals generated with the hybrid-sum generator are potentially superior to the statistics of signals generated with maximum-length generators. Furthermore, fewer calculations are required to evaluate the statistics of a large group of hybrid-sum generators than are required to evaluate the statistics of the same size group of approximately equivalent maximum-length sequences.
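    The hybrid-sum construction itself is straightforward to sketch. The primitive-polynomial taps below are standard textbook choices rather than necessarily those analyzed in the paper, and the filtering and statistical-quality-factor stages are omitted:

    ```python
    def lfsr(taps, state, n):
        """Fibonacci LFSR over GF(2); taps are 1-indexed feedback positions."""
        out = []
        for _ in range(n):
            out.append(state[-1])
            fb = 0
            for t in taps:
                fb ^= state[t - 1]
            state = [fb] + state[:-1]
        return out

    # Two maximum-length sequences from primitive polynomials
    # x^5 + x^2 + 1 and x^4 + x + 1 (periods 31 and 15).
    m5 = lfsr([5, 2], [1, 0, 0, 0, 0], 930)
    m4 = lfsr([4, 1], [1, 0, 0, 0], 930)

    # Hybrid-sum sequence: the modulo-two sum of the component sequences.
    # With coprime component periods its period is lcm(31, 15) = 465.
    hybrid = [a ^ b for a, b in zip(m5, m4)]
    print("one period of m5:", m5[:31])
    ```

    The long hybrid period from short registers is what makes the construction attractive for analog pseudorandom noise generation after filtering.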

  2. Statistical 21-cm Signal Separation via Gaussian Process Regression Analysis

    NASA Astrophysics Data System (ADS)

    Mertens, F. G.; Ghosh, A.; Koopmans, L. V. E.

    2018-05-01

    Detecting and characterizing the Epoch of Reionization and Cosmic Dawn via the redshifted 21-cm hyperfine line of neutral hydrogen will revolutionize the study of the formation of the first stars, galaxies, black holes and intergalactic gas in the infant Universe. The wealth of information encoded in this signal is, however, buried under foregrounds that are many orders of magnitude brighter. These must be removed accurately and precisely in order to reveal the feeble 21-cm signal. This requires not only the modeling of the Galactic and extra-galactic emission, but also of the often stochastic residuals due to imperfect calibration of the data caused by ionospheric and instrumental distortions. To stochastically model these effects, we introduce a new method based on 'Gaussian Process Regression' (GPR) which is able to statistically separate the 21-cm signal from most of the foregrounds and other contaminants. Using simulated LOFAR-EoR data that include strong instrumental mode-mixing, we show that this method is capable of recovering the 21-cm signal power spectrum across the entire range k = 0.07-0.3 h cMpc^{-1}. The GPR method performs best, having minimal and controllable impact on the 21-cm signal, when the foregrounds are correlated on frequency scales ≳ 3 MHz and the rms of the signal has σ_21cm ≳ 0.1 σ_noise. This signal separation improves the 21-cm power-spectrum sensitivity by a factor ≳ 3 compared to foreground avoidance strategies and enables the sensitivity of current and future 21-cm instruments such as the Square Kilometre Array to be fully exploited.
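    The core GPR separation idea can be sketched with a plain numpy Gaussian process: model the spectrally smooth foreground with a long coherence-scale kernel and keep whatever the fit cannot follow. The band, amplitudes, and single RBF kernel below are illustrative assumptions, far simpler than the covariance models used on LOFAR-EoR data:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    freq = np.linspace(120.0, 160.0, 200)             # MHz, illustrative band

    foreground = 800.0 * (freq / 140.0) ** -2.5       # spectrally smooth, bright
    signal21 = 0.05 * rng.standard_normal(freq.size)  # faint, rapidly fluctuating
    data = foreground + signal21

    def rbf(x, amp, scale):
        """Squared-exponential covariance matrix on the frequency axis."""
        return amp**2 * np.exp(-0.5 * ((x[:, None] - x[None, :]) / scale) ** 2)

    # GP regression: a long (10 MHz) coherence scale captures the smooth
    # foreground; the fast-fluctuating signal is treated as "noise" and
    # therefore survives in the residual.
    K = rbf(freq, amp=300.0, scale=10.0)
    K_y = K + 0.05**2 * np.eye(freq.size)
    smooth_est = K @ np.linalg.solve(K_y, data - data.mean()) + data.mean()
    recovered = data - smooth_est

    corr = np.corrcoef(recovered, signal21)[0, 1]
    print("correlation with injected signal:", corr)
    ```

    The frequency-coherence condition quoted in the abstract (foregrounds correlated on ≳ 3 MHz scales) is exactly what makes the smooth kernel able to absorb the foreground without eating the signal.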

  3. An approach to emotion recognition in single-channel EEG signals: a mother child interaction

    NASA Astrophysics Data System (ADS)

    Gómez, A.; Quintero, L.; López, N.; Castro, J.

    2016-04-01

    In this work, we perform a first approach to emotion recognition from single-channel EEG signals extracted from an experiment with four (4) mother-child dyads in developmental psychology. Single-channel EEG signals are analyzed and processed using several window sizes by performing a statistical analysis over features in the time and frequency domains. Finally, a neural network achieved an average classification accuracy of 99% for two emotional states, happiness and sadness.
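    The windowed feature-extraction stage can be sketched as follows; the sampling rate, window length, and the pure 10 Hz test tone are illustrative assumptions, and the neural-network classifier is omitted:

    ```python
    import numpy as np

    def window_features(x, fs, win_s):
        """Statistical features over non-overlapping windows of one EEG channel."""
        n = int(win_s * fs)
        feats = []
        for start in range(0, x.size - n + 1, n):
            w = x[start:start + n]
            spec = np.abs(np.fft.rfft(w - w.mean()))
            dom = np.fft.rfftfreq(n, 1.0 / fs)[spec.argmax()]
            feats.append((w.mean(), w.std(), dom))    # time + frequency features
        return np.array(feats)

    fs = 128.0                                        # Hz, illustrative rate
    t = np.arange(0.0, 8.0, 1.0 / fs)
    eeg = np.sin(2 * np.pi * 10.0 * t)                # alpha-band test tone
    feats = window_features(eeg, fs, win_s=2.0)
    print(feats[:, 2])                                # dominant frequency per window
    ```

    Each row of the feature matrix would then be a training sample for the classifier, and varying `win_s` reproduces the paper's several-window-size comparison.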

  4. Detection of weak signals in memory thermal baths.

    PubMed

    Jiménez-Aquino, J I; Velasco, R M; Romero-Bastida, M

    2014-11-01

    The nonlinear relaxation time and the statistics of the first passage time distribution in connection with the quasideterministic approach are used to detect weak signals in the decay process of the unstable state of a Brownian particle embedded in memory thermal baths. The study is performed in the overdamped approximation of a generalized Langevin equation characterized by an exponential decay in the friction memory kernel. A detection criterion for each time scale is studied: The first one is referred to as the receiver output, which is given as a function of the nonlinear relaxation time, and the second one is related to the statistics of the first passage time distribution.

  5. Dimension from covariance matrices.

    PubMed

    Carroll, T L; Byers, J M

    2017-02-01

    We describe a method to estimate embedding dimension from a time series. This method includes an estimate of the probability that the dimension estimate is valid. Such validity estimates are not common in algorithms for calculating the properties of dynamical systems. The algorithm described here compares the eigenvalues of covariance matrices created from an embedded signal to the eigenvalues for a covariance matrix of a Gaussian random process with the same dimension and number of points. A statistical test gives the probability that the eigenvalues for the embedded signal did not come from the Gaussian random process.
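    The eigenvalue comparison at the heart of the method can be sketched with a delay embedding; the test signals and the crude "fraction of variance in the top directions" summary (standing in for the paper's statistical test against the Gaussian baseline) are illustrative:

    ```python
    import numpy as np

    def delay_embed(x, dim, tau=1):
        """Time-delay embedding: rows are dim-dimensional delay vectors."""
        n = x.size - (dim - 1) * tau
        return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

    rng = np.random.default_rng(5)
    t = np.arange(4000) * 0.05
    signal = np.sin(t) + 0.5 * np.sin(2.1 * t)   # low-dimensional dynamics
    noise = rng.standard_normal(t.size)          # Gaussian reference process

    dim = 8
    eig_sig = np.sort(np.linalg.eigvalsh(np.cov(delay_embed(signal, dim).T)))[::-1]
    eig_noise = np.sort(np.linalg.eigvalsh(np.cov(delay_embed(noise, dim).T)))[::-1]

    # Two incommensurate sinusoids span ~4 directions of the embedding space,
    # so their eigenvalue spectrum collapses; the Gaussian spectrum stays flat.
    frac_sig = eig_sig[:4].sum() / eig_sig.sum()
    frac_noise = eig_noise[:4].sum() / eig_noise.sum()
    print(frac_sig, frac_noise)
    ```

    The paper's contribution is turning this qualitative contrast into a probability that the observed eigenvalues did not come from the Gaussian random process.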

  6. Reducing Speckle In One-Look SAR Images

    NASA Technical Reports Server (NTRS)

    Nathan, K. S.; Curlander, J. C.

    1990-01-01

    Local-adaptive-filter algorithm incorporated into digital processing of synthetic-aperture-radar (SAR) echo data to reduce speckle in resulting imagery. Involves use of image statistics in vicinity of each picture element, in conjunction with original intensity of element, to estimate brightness more nearly proportional to true radar reflectance of corresponding target. Increases ratio of signal to speckle noise without substantial degradation of resolution common to multilook SAR images. Adapts to local variations of statistics within scene, preserving subtle details. Computationally simple. Lends itself to parallel processing of different segments of image, making possible increased throughput.
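    A local-statistics adaptive filter in this spirit can be sketched as a Lee-style estimator; the scene, the 4-look-style multiplicative speckle model, and the window size are illustrative assumptions rather than the algorithm actually flown:

    ```python
    import numpy as np

    def lee_filter(img, win=3, noise_cv2=0.25):
        """Local-statistics adaptive filter (Lee-style). noise_cv2 is the
        squared coefficient of variation of the multiplicative speckle."""
        pad = win // 2
        padded = np.pad(img, pad, mode="reflect")
        out = np.empty_like(img, dtype=float)
        for i in range(img.shape[0]):
            for j in range(img.shape[1]):
                w = padded[i:i + win, j:j + win]
                mean, var = w.mean(), w.var()
                # Homogeneous area -> gain ~ 0 (use local mean);
                # edge or texture -> gain ~ 1 (keep original pixel).
                gain = max(var - noise_cv2 * mean * mean, 0.0) / var if var else 0.0
                out[i, j] = mean + gain * (img[i, j] - mean)
        return out

    rng = np.random.default_rng(6)
    scene = np.ones((32, 32))
    scene[:, 16:] = 2.0                                   # two-reflectance scene
    speckled = scene * rng.gamma(4.0, 0.25, scene.shape)  # unit-mean speckle
    filtered = lee_filter(speckled)

    mse_before = np.mean((speckled - scene) ** 2)
    mse_after = np.mean((filtered - scene) ** 2)
    print(mse_before, mse_after)
    ```

    The local gain term is what makes the filter adaptive: it smooths homogeneous regions while leaving the step between the two reflectance levels largely untouched.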

  7. Estimating error statistics for Chambon-la-Forêt observatory definitive data

    NASA Astrophysics Data System (ADS)

    Lesur, Vincent; Heumez, Benoît; Telali, Abdelkader; Lalanne, Xavier; Soloviev, Anatoly

    2017-08-01

    We propose a new algorithm for calibrating definitive observatory data with the goal of providing users with estimates of the data error standard deviations (SDs). The algorithm has been implemented and tested using Chambon-la-Forêt observatory (CLF) data. The calibration process uses all available data. It is set as a large, weakly non-linear, inverse problem that ultimately provides estimates of baseline values in three orthogonal directions, together with their expected standard deviations. For this inverse problem, absolute data error statistics are estimated from two series of absolute measurements made within a day. Similarly, variometer data error statistics are derived by comparing variometer data time series between different pairs of instruments over a few years. The comparisons of these time series led us to use an autoregressive process of order 1 (AR1 process) as a prior for the baselines. Therefore the obtained baselines do not vary smoothly in time. They have relatively small SDs, well below 300 pT when absolute data are recorded twice a week - i.e. within the daily to weekly measurement rate recommended by INTERMAGNET. The algorithm was tested against the process traditionally used to derive baselines at CLF observatory, suggesting that statistics are less favourable when this latter process is used. Finally, two sets of definitive data were calibrated using the new algorithm. Their comparison shows that the definitive data SDs are less than 400 pT and may be slightly overestimated by our process: an indication that more work is required to have proper estimates of absolute data error statistics. For magnetic field modelling, the results show that even on isolated sites like CLF observatory, there are very localised signals over a large span of temporal frequencies that can be as large as 1 nT. The SDs reported here encompass signals with spatial scales of a few hundred metres and periods of less than a day.

  8. Statistical Signal Models and Algorithms for Image Analysis

    DTIC Science & Technology

    1984-10-25

    In this report, two-dimensional stochastic linear models are used in developing algorithms for image analysis such as classification, segmentation, and object detection in images characterized by textured backgrounds. These models generate two-dimensional random processes as outputs to which statistical inference procedures can naturally be applied. A common thread throughout our algorithms is the interpretation of the inference procedures in terms of linear prediction

  9. Probability distributions of molecular observables computed from Markov models. II. Uncertainties in observables and their time-evolution

    NASA Astrophysics Data System (ADS)

    Chodera, John D.; Noé, Frank

    2010-09-01

    Discrete-state Markov (or master equation) models provide a useful simplified representation for characterizing the long-time statistical evolution of biomolecules in a manner that allows direct comparison with experiments as well as the elucidation of mechanistic pathways for an inherently stochastic process. A vital part of meaningful comparison with experiment is the characterization of the statistical uncertainty in the predicted experimental measurement, which may take the form of an equilibrium measurement of some spectroscopic signal, the time-evolution of this signal following a perturbation, or the observation of some statistic (such as the correlation function) of the equilibrium dynamics of a single molecule. Without meaningful error bars (which arise from both approximation and statistical error), there is no way to determine whether the deviations between model and experiment are statistically meaningful. Previous work has demonstrated that a Bayesian method that enforces microscopic reversibility can be used to characterize the statistical component of correlated uncertainties in state-to-state transition probabilities (and functions thereof) for a model inferred from molecular simulation data. Here, we extend this approach to include the uncertainty in observables that are functions of molecular conformation (such as surrogate spectroscopic signals) characterizing each state, permitting the full statistical uncertainty in computed spectroscopic experiments to be assessed. We test the approach in a simple model system to demonstrate that the computed uncertainties provide a useful indicator of statistical variation, and then apply it to the computation of the fluorescence autocorrelation function measured for a dye-labeled peptide previously studied by both experiment and simulation.

  10. An overview of the mathematical and statistical analysis component of RICIS

    NASA Technical Reports Server (NTRS)

    Hallum, Cecil R.

    1987-01-01

    Mathematical and statistical analysis components of RICIS (Research Institute for Computing and Information Systems) can be used in the following problem areas: (1) quantification and measurement of software reliability; (2) assessment of changes in software reliability over time (reliability growth); (3) analysis of software-failure data; and (4) decision logic for whether to continue or stop testing software. Other areas of interest to NASA/JSC where mathematical and statistical analysis can be successfully employed include: math modeling of physical systems, simulation, statistical data reduction, evaluation methods, optimization, algorithm development, and mathematical methods in signal processing.

  11. Generation and optimization of superpixels as image processing kernels for Jones matrix optical coherence tomography

    PubMed Central

    Miyazawa, Arata; Hong, Young-Joo; Makita, Shuichi; Kasaragod, Deepa; Yasuno, Yoshiaki

    2017-01-01

    Jones matrix-based polarization sensitive optical coherence tomography (JM-OCT) simultaneously measures optical intensity, birefringence, degree of polarization uniformity, and OCT angiography. The statistics of the optical features in a local region, such as the local mean of the OCT intensity, are frequently used for image processing and the quantitative analysis of JM-OCT. Conventionally, local statistics have been computed with fixed-size rectangular kernels. However, this results in a trade-off between image sharpness and statistical accuracy. We introduce a superpixel method to JM-OCT for generating the flexible kernels of local statistics. A superpixel is a cluster of image pixels that is formed by the pixels’ spatial and signal value proximities. An algorithm for superpixel generation specialized for JM-OCT and its optimization methods are presented in this paper. The spatial proximity is in two-dimensional cross-sectional space and the signal values are the four optical features. Hence, the superpixel method is a six-dimensional clustering technique for JM-OCT pixels. The performance of the JM-OCT superpixels and its optimization methods are evaluated in detail using JM-OCT datasets of posterior eyes. The superpixels were found to well preserve tissue structures, such as layer structures, sclera, vessels, and retinal pigment epithelium. Hence, they are more suitable as local-statistics kernels than conventional uniform rectangular kernels. PMID:29082073

  12. Information integration and diagnosis analysis of equipment status and production quality for machining process

    NASA Astrophysics Data System (ADS)

    Zan, Tao; Wang, Min; Hu, Jianzhong

    2010-12-01

    Machining status monitoring by multiple sensors can acquire and analyze machining process information to implement abnormality diagnosis and fault warning. Statistical quality control is normally used to distinguish abnormal fluctuations from normal fluctuations through statistical methods. In this paper, by comparing the advantages and disadvantages of the two methods, the necessity and feasibility of their integration and fusion are introduced. An approach is then brought forward that integrates multi-sensor status monitoring and statistical process control based on artificial intelligence, internet, and database techniques. Based on virtual instrument techniques, the author developed the machining quality assurance system - MoniSysOnline, which has been used to monitor the grinding machining process. By analyzing the quality data and AE signal information of the wheel dressing process, the reason for the machining quality fluctuation has been obtained. The experiment result indicates that the approach is suitable for the status monitoring and analysis of the machining process.

  13. Calibrated Noise Measurements with Induced Receiver Gain Fluctuations

    NASA Technical Reports Server (NTRS)

    Racette, Paul; Walker, David; Gu, Dazhen; Rajola, Marco; Spevacek, Ashly

    2011-01-01

    The lack of well-developed techniques for modeling changing statistical moments in our observations has stymied the application of stochastic process theory in science and engineering. These limitations were encountered when modeling the performance of radiometer calibration architectures and algorithms in the presence of non-stationary receiver fluctuations. Analyses of measured signals have traditionally been limited to a single measurement series, whereas in a radiometer that samples a set of noise references, the data collection can be treated as an ensemble set of measurements of the receiver state. Noise Assisted Data Analysis (NADA) is a growing field of study with significant potential for aiding the understanding and modeling of non-stationary processes. Typically, NADA entails adding noise to a signal to produce an ensemble set on which statistical analysis is performed. Alternatively, as in radiometric measurements, mixing a signal with calibrated noise provides, through the calibration process, the means to detect deviations from the stationary assumption and thereby a measurement tool to characterize the signal's non-stationary properties. Data sets comprised of calibrated noise measurements have been limited to those collected with naturally occurring fluctuations in the radiometer receiver. To examine the application of NADA using calibrated noise, a Receiver Gain Modulation Circuit (RGMC) was designed and built to modulate the gain of a radiometer receiver using an external signal. In 2010, an RGMC was installed and operated at the National Institute of Standards and Technology (NIST) using their Noise Figure Radiometer (NFRad) and national standard noise references. The data collected is the first known set of calibrated noise measurements from a receiver with an externally modulated gain.
As an initial step, sinusoidal and step-function signals were used to modulate the receiver gain, to evaluate the circuit characteristics and to study the performance of a variety of calibration algorithms. The receiver noise temperature and time-bandwidth product of the NFRad are calculated from the data. Statistical analysis using temporal-dependent calibration algorithms reveals that the naturally occurring fluctuations in the receiver are stationary over long intervals (100s of seconds); however, the receiver exhibits local non-stationarity over the interval during which one set of reference measurements is collected. A variety of calibration algorithms have been applied to the data to assess the algorithms' performance with the gain fluctuation signals. This presentation will describe the RGMC, the experiment design and a comparative analysis of calibration algorithms.

  14. Models and signal processing for an implanted ethanol bio-sensor.

    PubMed

    Han, Jae-Joon; Doerschuk, Peter C; Gelfand, Saul B; O'Connor, Sean J

    2008-02-01

    The understanding of drinking patterns leading to alcoholism has been hindered by an inability to unobtrusively measure ethanol consumption over periods of weeks to months in the community environment. An implantable ethanol sensor is under development using microelectromechanical systems technology. For safety and user acceptability issues, the sensor will be implanted subcutaneously and, therefore, measure peripheral-tissue ethanol concentration. Determining ethanol consumption and kinetics in other compartments from the time course of peripheral-tissue ethanol concentration requires sophisticated signal processing based on detailed descriptions of the relevant physiology. A statistical signal processing system based on detailed models of the physiology and using extended Kalman filtering and dynamic programming tools is described which can estimate the time series of ethanol concentration in blood, liver, and peripheral tissue and the time series of ethanol consumption based on peripheral-tissue ethanol concentration measurements.
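    The estimation machinery can be illustrated with a scalar Kalman filter on a toy one-compartment kinetic model; the rate constant and noise levels below are invented for illustration, and the paper's system is an extended Kalman filter over a detailed multi-compartment physiological model:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    dt, k = 1.0, 0.1            # time step (min) and clearance rate (1/min), invented
    a = 1.0 - k * dt            # discrete decay factor of the compartment
    q, r = 1e-4, 0.05 ** 2      # process and measurement noise variances

    true = np.empty(200)
    true[0] = 1.0               # initial tissue concentration (arbitrary units)
    for i in range(1, 200):
        true[i] = a * true[i - 1] + rng.normal(0.0, np.sqrt(q))
    meas = true + rng.normal(0.0, np.sqrt(r), 200)

    est, p = 0.0, 1.0           # state estimate and its variance
    estimates = []
    for z in meas:
        est, p = a * est, a * a * p + q                    # predict
        gain = p / (p + r)                                 # Kalman gain
        est, p = est + gain * (z - est), (1.0 - gain) * p  # update
        estimates.append(est)
    estimates = np.array(estimates)

    rms_meas = np.sqrt(np.mean((meas - true) ** 2))
    rms_filt = np.sqrt(np.mean((estimates - true) ** 2))
    print(rms_meas, rms_filt)
    ```

    In the extended Kalman filter the linear decay factor is replaced by the Jacobian of the nonlinear physiological dynamics at each step, but the predict/update cycle is the same.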

  15. Ex vivo accuracy of an apex locator using digital signal processing in primary teeth.

    PubMed

    Leonardo, Mário Roberto; da Silva, Lea Assed Bezerra; Nelson-Filho, Paulo; da Silva, Raquel Assed Bezerra; Lucisano, Marília Pacífico

    2009-01-01

    The purpose of this study was to evaluate ex vivo the accuracy of an electronic apex locator during root canal length determination in primary molars. One calibrated examiner determined the root canal length in 15 primary molars (total=34 root canals) with different stages of root resorption. Root canal length was measured both visually, with the placement of a K-file 1 mm short of the apical foramen or the apical resorption bevel, and electronically, using an electronic apex locator (Digital Signal Processing). Data were analyzed statistically using the intraclass correlation (ICC) test. Comparing the actual and electronic root canal length measurements in the primary teeth showed a high correlation (ICC=0.95). The Digital Signal Processing apex locator is useful and accurate for apical foramen location during root canal length measurement in primary molars.

  16. A silicon avalanche photodiode detector circuit for Nd:YAG laser scattering

    NASA Astrophysics Data System (ADS)

    Hsieh, C.-L.; Haskovec, J.; Carlstrom, T. N.; Deboo, J. C.; Greenfield, C. M.; Snider, R. T.; Trost, P.

    1990-06-01

    A silicon avalanche photodiode with an internal gain of about 50 to 100 is used in a temperature controlled environment to measure the Nd:YAG laser Thomson scattered spectrum in the wavelength range from 700 to 1150 nm. A charge sensitive preamplifier was developed for minimizing the noise contribution from the detector electronics. Signal levels as low as 20 photoelectrons (S/N = 1) can be detected. Measurements show that both the signal and the variance of the signal vary linearly with the input light level over the range of interest, indicating Poisson statistics. The signal is processed using a 100 ns delay line and a differential amplifier which subtracts the low frequency background light component. The background signal is amplified with a computer controlled variable gain amplifier and is used for an estimate of the measurement error, calibration, and Z sub eff measurements of the plasma. The signal processing was analyzed using a theoretical model to aid the system design and establish the procedure for data error analysis.

  17. Silicon avalanche photodiode detector circuit for Nd:YAG laser scattering

    NASA Astrophysics Data System (ADS)

    Hsieh, C. L.; Haskovec, J.; Carlstrom, T. N.; DeBoo, J. C.; Greenfield, C. M.; Snider, R. T.; Trost, P.

    1990-10-01

    A silicon avalanche photodiode with an internal gain of about 50 to 100 is used in a temperature-controlled environment to measure the Nd:YAG laser Thomson scattered spectrum in the wavelength range from 700 to 1150 nm. A charge-sensitive preamplifier has been developed for minimizing the noise contribution from the detector electronics. Signal levels as low as 20 photoelectrons (S/N=1) can be detected. Measurements show that both the signal and the variance of the signal vary linearly with the input light level over the range of interest, indicating Poisson statistics. The signal is processed using a 100 ns delay line and a differential amplifier which subtracts the low-frequency background light component. The background signal is amplified with a computer-controlled variable gain amplifier and is used for an estimate of the measurement error, calibration, and Zeff measurements of the plasma. The signal processing has been analyzed using a theoretical model to aid the system design and establish the procedure for data error analysis.

  18. Removal of EMG and ECG artifacts from EEG based on wavelet transform and ICA.

    PubMed

    Zhou, Weidong; Gotman, Jean

    2004-01-01

    In this study, the methods of wavelet threshold de-noising and independent component analysis (ICA) are introduced. ICA is a novel signal processing technique based on higher-order statistics, and is used to separate independent components from measurements. The extended ICA algorithm does not need to calculate the higher-order statistics explicitly, converges fast, and can be used to separate sub-Gaussian and super-Gaussian sources. A pre-whitening procedure is performed to de-correlate the mixed signals before extracting sources. The experimental results indicate that electromyogram (EMG) and electrocardiogram (ECG) artifacts in the electroencephalogram (EEG) can be removed by a combination of wavelet threshold de-noising and ICA.
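    The pre-whitening step mentioned above is easy to sketch; the sources and mixing matrix below are hypothetical, and the ICA rotation itself is omitted:

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    t = np.arange(0.0, 10.0, 0.01)
    sources = np.vstack([np.sign(np.sin(2 * np.pi * 3.0 * t)),   # square wave
                         np.sin(2 * np.pi * 0.7 * t)])           # slow sinusoid
    A = np.array([[1.0, 0.6],
                  [0.4, 1.0]])                                   # hypothetical mixing
    mixed = A @ sources + 0.01 * rng.standard_normal((2, t.size))

    # Pre-whitening: de-correlate the mixtures so that the remaining
    # unmixing step reduces to a pure rotation, as done before running ICA.
    mixed = mixed - mixed.mean(axis=1, keepdims=True)
    vals, vecs = np.linalg.eigh(np.cov(mixed))
    whitener = vecs @ np.diag(vals ** -0.5) @ vecs.T
    white = whitener @ mixed

    print(np.cov(white).round(6))                                # ~ identity
    ```

    After whitening, the (extended) ICA algorithm only has to search over orthogonal rotations to maximize independence, which is what makes its convergence fast.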

  19. NeuroPigPen: A Scalable Toolkit for Processing Electrophysiological Signal Data in Neuroscience Applications Using Apache Pig

    PubMed Central

    Sahoo, Satya S.; Wei, Annan; Valdez, Joshua; Wang, Li; Zonjy, Bilal; Tatsuoka, Curtis; Loparo, Kenneth A.; Lhatoo, Samden D.

    2016-01-01

    The recent advances in neurological imaging and sensing technologies have led to rapid increase in the volume, rate of data generation, and variety of neuroscience data. This “neuroscience Big data” represents a significant opportunity for the biomedical research community to design experiments using data with greater timescale, large number of attributes, and statistically significant data size. The results from these new data-driven research techniques can advance our understanding of complex neurological disorders, help model long-term effects of brain injuries, and provide new insights into dynamics of brain networks. However, many existing neuroinformatics data processing and analysis tools were not built to manage large volume of data, which makes it difficult for researchers to effectively leverage this available data to advance their research. We introduce a new toolkit called NeuroPigPen that was developed using Apache Hadoop and Pig data flow language to address the challenges posed by large-scale electrophysiological signal data. NeuroPigPen is a modular toolkit that can process large volumes of electrophysiological signal data, such as Electroencephalogram (EEG), Electrocardiogram (ECG), and blood oxygen levels (SpO2), using a new distributed storage model called Cloudwave Signal Format (CSF) that supports easy partitioning and storage of signal data on commodity hardware. NeuroPigPen was developed with three design principles: (a) Scalability—the ability to efficiently process increasing volumes of data; (b) Adaptability—the toolkit can be deployed across different computing configurations; and (c) Ease of programming—the toolkit can be easily used to compose multi-step data processing pipelines using high-level programming constructs. The NeuroPigPen toolkit was evaluated using 750 GB of electrophysiological signal data over a variety of Hadoop cluster configurations ranging from 3 to 30 Data nodes. 
The evaluation results demonstrate that the toolkit is highly scalable and adaptable, which makes it suitable for use in neuroscience applications as a scalable data processing toolkit. As part of the ongoing extension of NeuroPigPen, we are developing new modules to support statistical functions to analyze signal data for brain connectivity research. In addition, the toolkit is being extended to allow integration with scientific workflow systems. NeuroPigPen is released under BSD license at: https://sites.google.com/a/case.edu/neuropigpen/. PMID:27375472

  1. Statistical Analysis of Adaptive Beam-Forming Methods

    DTIC Science & Technology

    1988-05-01

minimum amount of computing resources? * What are the tradeoffs being made when a system design selects block averaging over exponential averaging? Will... understood by many signal processing practitioners, however, is how system parameters and the number of sensors affect the distribution of the... system performance improve and if so by how much? It is well known that the noise sampled at adjacent sensors is not statistically independent

  2. Statistical ultrasonics: the influence of Robert F. Wagner

    NASA Astrophysics Data System (ADS)

    Insana, Michael F.

    2009-02-01

    An important ongoing question for higher education is how to successfully mentor the next generation of scientists and engineers. It has been my privilege to have been mentored by one of the best, Dr Robert F. Wagner and his colleagues at the CDRH/FDA during the mid 1980s. Bob introduced many of us in medical ultrasonics to statistical imaging techniques. These ideas continue to broadly influence studies on adaptive aperture management (beamforming, speckle suppression, compounding), tissue characterization (texture features, Rayleigh/Rician statistics, scatterer size and number density estimators), and fundamental questions about how limitations of the human eye-brain system for extracting information from textured images can motivate image processing. He adapted the classical techniques of signal detection theory to coherent imaging systems that, for the first time in ultrasonics, related common engineering metrics for image quality to task-based clinical performance. This talk summarizes my wonderfully-exciting three years with Bob as I watched him explore topics in statistical image analysis that formed a rational basis for many of the signal processing techniques used in commercial systems today. It is a story of an exciting time in medical ultrasonics, and of how a sparkling personality guided and motivated the development of junior scientists who flocked around him in admiration and amazement.

  3. 75 FR 56059 - Patent Examiner Technical Training Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-15

...); statistical methods in validation of microarray data; personalized medicine, manufacture of carbon nanospheres... processing, growing monocrystals, hydrogen production, liquid and gas purification and separation, making... Systems and Components: Mixed signal design and architecture, flexible displays, OLED display technology...

  4. Environmental statistics and optimal regulation

    NASA Astrophysics Data System (ADS)

    Sivak, David; Thomson, Matt

    2015-03-01

    The precision with which an organism can detect its environment, and the timescale for and statistics of environmental change, will affect the suitability of different strategies for regulating protein levels in response to environmental inputs. We propose a general framework--here applied to the enzymatic regulation of metabolism in response to changing nutrient concentrations--to predict the optimal regulatory strategy given the statistics of fluctuations in the environment and measurement apparatus, and the costs associated with enzyme production. We find: (i) relative convexity of enzyme expression cost and benefit influences the fitness of thresholding or graded responses; (ii) intermediate levels of measurement uncertainty call for a sophisticated Bayesian decision rule; and (iii) in dynamic contexts, intermediate levels of uncertainty call for retaining memory of the past. Statistical properties of the environment, such as variability and correlation times, set optimal biochemical parameters, such as thresholds and decay rates in signaling pathways. Our framework provides a theoretical basis for interpreting molecular signal processing algorithms and a classification scheme that organizes known regulatory strategies and may help conceptualize heretofore unknown ones.
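
The decision rules compared in this abstract can be made concrete with a small sketch. All numbers below (state means, measurement noise, equal priors) are illustrative assumptions, not parameters from the paper: a Bayesian posterior over a two-state nutrient environment drives either a thresholded or a graded enzyme-expression response.

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Likelihood of a noisy measurement x given environment mean mu."""
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def posterior_high(m, mu_low=1.0, mu_high=4.0, sigma=1.0, prior_high=0.5):
    """P(high-nutrient state | measurement m) by Bayes' rule."""
    w_high = gaussian_pdf(m, mu_high, sigma) * prior_high
    w_low = gaussian_pdf(m, mu_low, sigma) * (1.0 - prior_high)
    return w_high / (w_high + w_low)

def threshold_response(m):
    """All-or-none expression: commit fully once the posterior favours 'high'."""
    return 1.0 if posterior_high(m) > 0.5 else 0.0

def graded_response(m):
    """Graded expression proportional to the posterior belief."""
    return posterior_high(m)
```

With low measurement noise the two rules nearly coincide; at intermediate noise levels the graded (Bayesian) rule hedges, which is the regime the abstract highlights.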

  5. Wavelet methodology to improve single unit isolation in primary motor cortex cells.

    PubMed

    Ortiz-Rosario, Alexis; Adeli, Hojjat; Buford, John A

    2015-05-15

The proper isolation of action potentials recorded extracellularly from neural tissue is an active area of research in the fields of neuroscience and biomedical signal processing. This paper presents an isolation methodology for neural recordings using the wavelet transform (WT), a statistical thresholding scheme, and the principal component analysis (PCA) algorithm. The effectiveness of five different mother wavelets was investigated: biorthogonal, Daubechies, discrete Meyer, symmetric, and Coifman; along with three different wavelet coefficient thresholding schemes: fixed-form threshold, Stein's unbiased estimate of risk, and minimax; and two different thresholding rules: soft and hard thresholding. The signal quality was evaluated using three different statistical measures: mean-squared error, root-mean-squared, and signal-to-noise ratio. The clustering quality was evaluated using two different statistical measures: isolation distance and L-ratio. This research shows that the selection of the mother wavelet has a strong influence on the clustering and isolation of single-unit neural activity, with the Daubechies 4 wavelet and minimax thresholding scheme performing the best. Copyright © 2015. Published by Elsevier B.V.
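
The thresholding rules compared in this study are easy to sketch. This is a generic illustration (fixed-form/universal threshold with a median-absolute-deviation noise estimate), not the authors' pipeline, and the detail coefficients are made-up numbers:

```python
import math

def mad_sigma(coeffs):
    """Robust noise estimate from detail coefficients: sigma ~ median(|d|)/0.6745."""
    s = sorted(abs(c) for c in coeffs)
    n = len(s)
    median = s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])
    return median / 0.6745

def soft_threshold(c, t):
    """Shrink towards zero: zeroes small coefficients, biases large ones."""
    return math.copysign(max(abs(c) - t, 0.0), c)

def hard_threshold(c, t):
    """Keep-or-kill: coefficients below t are zeroed, the rest left untouched."""
    return c if abs(c) > t else 0.0

# Fixed-form (universal) threshold t = sigma * sqrt(2 ln n)
detail = [0.1, -0.2, 3.0, 0.05, -0.15, 2.5, 0.08, -0.12]
t = mad_sigma(detail) * math.sqrt(2 * math.log(len(detail)))
denoised = [soft_threshold(c, t) for c in detail]
```

The minimax threshold favoured by the paper is computed differently but is applied through the same soft/hard rules shown here.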

  6. Signal-Processing Algorithm Development for the ACLAIM Sensor

    NASA Technical Reports Server (NTRS)

    vonLaven, Scott

    1995-01-01

Methods for further minimizing the risk by making use of previous lidar observations were investigated. EOFs are likely to play an important role in these methods, and a procedure for extracting EOFs from data has been implemented. The new processing methods involving EOFs could range from extrapolation, as discussed, to more complicated statistical procedures for maintaining low unstart risk.
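
Since EOF extraction is central to the summary above, a minimal sketch may help. The synthetic data and the power-iteration route are illustrative assumptions; the ACLAIM sensor's actual lidar processing is not specified here. EOFs are the principal directions of the anomaly covariance, so the leading one can be found with plain power iteration:

```python
import random

def leading_eof(data, iters=200):
    """Leading EOF of a list of equal-length sample vectors, via power
    iteration on the (channels x channels) anomaly covariance matrix."""
    n, m = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(m)]
    anom = [[row[j] - means[j] for j in range(m)] for row in data]
    cov = [[sum(a[i] * a[j] for a in anom) / n for j in range(m)]
           for i in range(m)]
    v = [1.0] * m
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(m)) for i in range(m)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    lam = sum(v[i] * sum(cov[i][j] * v[j] for j in range(m)) for i in range(m))
    total = sum(cov[i][i] for i in range(m))
    return v, lam / total  # spatial pattern and its explained-variance fraction

random.seed(0)
pattern = [1.0, 2.0, 3.0, 2.0, 1.0]   # planted coherent "profile" mode
data = []
for _ in range(300):
    a = random.gauss(0, 1)            # random amplitude of the mode
    data.append([a * p + 0.1 * random.gauss(0, 1) for p in pattern])
eof1, varfrac = leading_eof(data)
```

The recovered pattern matches the planted one (up to sign) and accounts for nearly all the variance, which is what makes EOFs attractive for extrapolating sparse observations.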

  7. Age and experience shape developmental changes in the neural basis of language-related learning.

    PubMed

    McNealy, Kristin; Mazziotta, John C; Dapretto, Mirella

    2011-11-01

    Very little is known about the neural underpinnings of language learning across the lifespan and how these might be modified by maturational and experiential factors. Building on behavioral research highlighting the importance of early word segmentation (i.e. the detection of word boundaries in continuous speech) for subsequent language learning, here we characterize developmental changes in brain activity as this process occurs online, using data collected in a mixed cross-sectional and longitudinal design. One hundred and fifty-six participants, ranging from age 5 to adulthood, underwent functional magnetic resonance imaging (fMRI) while listening to three novel streams of continuous speech, which contained either strong statistical regularities, strong statistical regularities and speech cues, or weak statistical regularities providing minimal cues to word boundaries. All age groups displayed significant signal increases over time in temporal cortices for the streams with high statistical regularities; however, we observed a significant right-to-left shift in the laterality of these learning-related increases with age. Interestingly, only the 5- to 10-year-old children displayed significant signal increases for the stream with low statistical regularities, suggesting an age-related decrease in sensitivity to more subtle statistical cues. Further, in a sample of 78 10-year-olds, we examined the impact of proficiency in a second language and level of pubertal development on learning-related signal increases, showing that the brain regions involved in language learning are influenced by both experiential and maturational factors. 2011 Blackwell Publishing Ltd.

  8. Noise-assisted data processing with empirical mode decomposition in biomedical signals.

    PubMed

    Karagiannis, Alexandros; Constantinou, Philip

    2011-01-01

In this paper, a methodology is described in order to investigate the performance of empirical mode decomposition (EMD) in biomedical signals, and especially in the case of the electrocardiogram (ECG). Synthetic ECG signals corrupted with white Gaussian noise are employed and time series of various lengths are processed with EMD in order to extract the intrinsic mode functions (IMFs). A statistical significance test is implemented for the identification of IMFs with high-level noise components and their exclusion from denoising procedures. Simulation campaign results reveal that a decrease in processing time is accomplished with the introduction of a preprocessing stage prior to the application of EMD to biomedical time series. Furthermore, the variation in the number of IMFs according to the type of the preprocessing stage is studied as a function of SNR and time-series length. The application of the methodology to MIT-BIH ECG records is also presented in order to verify the findings in real ECG signals.
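
The idea of screening IMFs for noise dominance can be illustrated with a deliberately simplified check. The halving-per-mode energy model below is an illustrative assumption standing in for a formal white-noise significance test (tests of this kind compare each IMF's energy against the decay expected when white noise alone drives the decomposition); it is not the test used in the paper:

```python
import math

def imf_energy(imf):
    """Mean energy of one intrinsic mode function."""
    return sum(x * x for x in imf) / len(imf)

def noise_dominated(imf_energies, tolerance_db=3.0):
    """Flag IMFs whose energy tracks a roughly halving-per-mode decay
    (assumed white-noise model). IMFs falling well outside the band
    are kept as signal; the rest are excluded from reconstruction."""
    e1 = imf_energies[0]
    flags = []
    for k, e in enumerate(imf_energies):
        expected = e1 / (2.0 ** k)            # assumed noise-energy model
        ratio_db = 10.0 * math.log10(e / expected)
        flags.append(abs(ratio_db) <= tolerance_db)
    return flags
```

Here an IMF whose energy sits far above the noise trend (e.g. one carrying QRS energy) fails the noise test and is retained.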

  9. Coactivation of response initiation processes with redundant signals.

    PubMed

    Maslovat, Dana; Hajj, Joëlle; Carlsen, Anthony N

    2018-05-14

    During reaction time (RT) tasks, participants respond faster to multiple stimuli from different modalities as compared to a single stimulus, a phenomenon known as the redundant signal effect (RSE). Explanations for this effect typically include coactivation arising from the multiple stimuli, which results in enhanced processing of one or more response production stages. The current study compared empirical RT data with the predictions of a model in which initiation-related activation arising from each stimulus is additive. Participants performed a simple wrist extension RT task following either a visual go-signal, an auditory go-signal, or both stimuli with the auditory stimulus delayed between 0 and 125 ms relative to the visual stimulus. Results showed statistical equivalence between the predictions of an additive initiation model and the observed RT data, providing novel evidence that the RSE can be explained via a coactivation of initiation-related processes. It is speculated that activation summation occurs at the thalamus, leading to the observed facilitation of response initiation. Copyright © 2018 Elsevier B.V. All rights reserved.
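
The additive-initiation account can be sketched as a toy accumulator: activation from each stimulus grows after its onset, the contributions sum, and the response is initiated at a threshold crossing. Rates, threshold, and time step below are invented for illustration; the study's fitted model parameters are not given in this summary:

```python
def rt_additive(threshold=120.0, rate_v=1.0, rate_a=1.5, soa=None, dt=0.1):
    """Predicted RT (ms) when initiation-related activation accumulates
    linearly from each stimulus and the two contributions add.
    soa: auditory onset delay in ms (None = auditory stimulus absent)."""
    t, act = 0.0, 0.0
    while act < threshold:
        t += dt
        act += rate_v * dt                  # visual contribution
        if soa is not None and t > soa:
            act += rate_a * dt              # auditory contribution adds on top
    return t

rt_visual = rt_additive(soa=None)           # single stimulus
rt_redundant = rt_additive(soa=0.0)         # simultaneous redundant stimuli
```

The summed activation crosses threshold sooner for redundant stimuli, and the speed-up shrinks as the onset asynchrony grows, which is the qualitative pattern the RSE describes.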

  10. Amplitude and Phase Characteristics of Signals at the Output of Spatially Separated Antennas for Paths with Scattering

    NASA Astrophysics Data System (ADS)

    Anikin, A. S.

    2018-06-01

    Conditional statistical characteristics of the phase difference are considered depending on the ratio of instantaneous output signal amplitudes of spatially separated weakly directional antennas for the normal field model for paths with radio-wave scattering. The dependences obtained are related to the physical processes on the radio-wave propagation path. The normal model parameters are established at which the statistical characteristics of the phase difference depend on the ratio of the instantaneous amplitudes and hence can be used to measure the phase difference. Using Shannon's formula, the amount of information on the phase difference of signals contained in the ratio of their amplitudes is calculated depending on the parameters of the normal field model. Approaches are suggested to reduce the shift of phase difference measured for paths with radio-wave scattering. A comparison with results of computer simulation by the Monte Carlo method is performed.
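
The information calculation invoked above can be illustrated with a generic histogram estimate of mutual information between two measured quantities. The synthetic phase and amplitude-ratio data and their coupling below are assumptions for demonstration, not the paper's normal field model:

```python
import math, random

def mutual_information(xs, ys, bins=8):
    """Histogram estimate of I(X;Y) in bits."""
    def binned(vals):
        lo, hi = min(vals), max(vals)
        w = (hi - lo) / bins or 1.0
        return [min(int((v - lo) / w), bins - 1) for v in vals]
    bx, by = binned(xs), binned(ys)
    n = len(xs)
    pxy, px, py = {}, {}, {}
    for a, b in zip(bx, by):
        pxy[(a, b)] = pxy.get((a, b), 0) + 1
        px[a] = px.get(a, 0) + 1
        py[b] = py.get(b, 0) + 1
    mi = 0.0
    for (a, b), c in pxy.items():
        mi += (c / n) * math.log2(c * n / (px[a] * py[b]))
    return mi

random.seed(1)
phase = [random.uniform(-math.pi, math.pi) for _ in range(5000)]
coupled = [p + random.gauss(0, 0.3) for p in phase]    # correlated observable
independent = [random.gauss(0, 1) for _ in phase]      # uninformative one
```

When the second observable is coupled to the phase difference it carries measurable information about it; an independent observable carries essentially none, mirroring the paper's dependence on the field-model parameters.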

  11. Automatic detection of slight parameter changes associated to complex biomedical signals using multiresolution q-entropy1.

    PubMed

    Torres, M E; Añino, M M; Schlotthauer, G

    2003-12-01

    It is well known that, from a dynamical point of view, sudden variations in physiological parameters which govern certain diseases can cause qualitative changes in the dynamics of the corresponding physiological process. The purpose of this paper is to introduce a technique that allows the automated temporal localization of slight changes in a parameter of the law that governs the nonlinear dynamics of a given signal. This tool takes, from the multiresolution entropies, the ability to show these changes as statistical variations at each scale. These variations are held in the corresponding principal component. Appropriately combining these techniques with a statistical changes detector, a complexity change detection algorithm is obtained. The relevance of the approach, together with its robustness in the presence of moderate noise, is discussed in numerical simulations and the automatic detector is applied to real and simulated biological signals.
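
A single-scale toy version of this idea can be sketched in a few lines: track the entropy of the signal over consecutive windows and look for a statistical change. The paper works per wavelet scale with parametric q-entropies, principal components, and a formal change detector; this sketch, with an invented regime-change signal, only shows the underlying entropy-shift principle:

```python
import math, random

def window_entropy(window, bins=8):
    """Shannon entropy (bits) of the amplitude histogram of one window."""
    lo, hi = min(window), max(window)
    w = (hi - lo) / bins or 1.0
    counts = [0] * bins
    for v in window:
        counts[min(int((v - lo) / w), bins - 1)] += 1
    n = len(window)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

def entropy_profile(signal, win=128):
    """Entropy in consecutive non-overlapping windows; a sustained shift
    flags a change in the signal's dynamics."""
    return [window_entropy(signal[i:i + win])
            for i in range(0, len(signal) - win + 1, win)]

random.seed(2)
# Regime change at mid-signal: narrow-band oscillation, then broadband noise
sig = [math.sin(0.2 * i) for i in range(512)] + \
      [random.gauss(0, 1) for _ in range(512)]
profile = entropy_profile(sig)
```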

  12. Multivariate analysis techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bendavid, Josh; Fisher, Wade C.; Junk, Thomas R.

    2016-01-01

The end products of experimental data analysis are designed to be simple and easy to understand: hypothesis tests and measurements of parameters. But, the experimental data themselves are voluminous and complex. Furthermore, in modern collider experiments, many petabytes of data must be processed in search of rare new processes which occur together with much more copious background processes that are of less interest to the task at hand. The systematic uncertainties on the background may be larger than the expected signal in many cases. The statistical power of an analysis and its sensitivity to systematic uncertainty can therefore usually both be improved by separating signal events from background events with higher efficiency and purity.

  13. Fault Detection and Diagnosis In Hall-Héroult Cells Based on Individual Anode Current Measurements Using Dynamic Kernel PCA

    NASA Astrophysics Data System (ADS)

    Yao, Yuchen; Bao, Jie; Skyllas-Kazacos, Maria; Welch, Barry J.; Akhmetov, Sergey

    2018-04-01

    Individual anode current signals in aluminum reduction cells provide localized cell conditions in the vicinity of each anode, which contain more information than the conventionally measured cell voltage and line current. One common use of this measurement is to identify process faults that can cause significant changes in the anode current signals. While this method is simple and direct, it ignores the interactions between anode currents and other important process variables. This paper presents an approach that applies multivariate statistical analysis techniques to individual anode currents and other process operating data, for the detection and diagnosis of local process abnormalities in aluminum reduction cells. Specifically, since the Hall-Héroult process is time-varying with its process variables dynamically and nonlinearly correlated, dynamic kernel principal component analysis with moving windows is used. The cell is discretized into a number of subsystems, with each subsystem representing one anode and cell conditions in its vicinity. The fault associated with each subsystem is identified based on multivariate statistical control charts. The results show that the proposed approach is able to not only effectively pinpoint the problematic areas in the cell, but also assess the effect of the fault on different parts of the cell.

  14. A new statistical PCA-ICA algorithm for location of R-peaks in ECG.

    PubMed

    Chawla, M P S; Verma, H K; Kumar, Vinod

    2008-09-16

The success of ICA in separating the independent components from the mixture depends on the properties of the electrocardiogram (ECG) recordings. This paper discusses some of the conditions of independent component analysis (ICA) that could affect the reliability of the separation, and evaluates issues related to the properties of the signals and the number of sources. Principal component analysis (PCA) scatter plots are plotted to indicate the diagnostic features in the presence and absence of baseline wander in interpreting the ECG signals. In this analysis, a newly developed statistical algorithm by the authors, based on the use of combined PCA-ICA for two correlated channels of 12-channel ECG data, is proposed. The ICA technique has been successfully implemented in identifying and removing noise and artifacts from ECG signals. Cleaned ECG signals are obtained using statistical measures such as kurtosis and variance of variance after ICA processing. This paper also deals with the detection of QRS complexes in electrocardiograms using the combined PCA-ICA algorithm. The efficacy of the combined PCA-ICA algorithm lies in the fact that the location of the R-peaks is bounded from above and below by the location of the cross-over points, hence none of the peaks are ignored or missed.
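
The statistical measures mentioned above, kurtosis in particular, are easy to sketch. The synthetic "components" below are invented stand-ins for ICA outputs (a Gaussian-like noise component versus a sparse, spiky QRS-like one), not real ECG data:

```python
import random

def kurtosis(xs):
    """Excess kurtosis: near 0 for Gaussian noise, large and positive for
    sparse, spiky components such as those carrying QRS complexes."""
    n = len(xs)
    m = sum(xs) / n
    var = sum((x - m) ** 2 for x in xs) / n
    m4 = sum((x - m) ** 4 for x in xs) / n
    return m4 / (var * var) - 3.0

random.seed(3)
gaussian_like = [random.gauss(0, 1) for _ in range(4000)]  # noise component
spiky = [0.0] * 4000                                       # QRS-like component
for i in range(0, 4000, 400):
    spiky[i] = 10.0                                        # sparse large peaks
```

Ranking components by kurtosis (and similar moments) is one simple way to decide which ICA outputs to keep as cleaned signal and which to discard as noise.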

  15. A wavelet-based statistical analysis of FMRI data: I. motivation and data distribution modeling.

    PubMed

    Dinov, Ivo D; Boscardin, John W; Mega, Michael S; Sowell, Elizabeth L; Toga, Arthur W

    2005-01-01

    We propose a new method for statistical analysis of functional magnetic resonance imaging (fMRI) data. The discrete wavelet transformation is employed as a tool for efficient and robust signal representation. We use structural magnetic resonance imaging (MRI) and fMRI to empirically estimate the distribution of the wavelet coefficients of the data both across individuals and spatial locations. An anatomical subvolume probabilistic atlas is used to tessellate the structural and functional signals into smaller regions each of which is processed separately. A frequency-adaptive wavelet shrinkage scheme is employed to obtain essentially optimal estimations of the signals in the wavelet space. The empirical distributions of the signals on all the regions are computed in a compressed wavelet space. These are modeled by heavy-tail distributions because their histograms exhibit slower tail decay than the Gaussian. We discovered that the Cauchy, Bessel K Forms, and Pareto distributions provide the most accurate asymptotic models for the distribution of the wavelet coefficients of the data. Finally, we propose a new model for statistical analysis of functional MRI data using this atlas-based wavelet space representation. In the second part of our investigation, we will apply this technique to analyze a large fMRI dataset involving repeated presentation of sensory-motor response stimuli in young, elderly, and demented subjects.

  16. Output statistics of laser anemometers in sparsely seeded flows

    NASA Technical Reports Server (NTRS)

    Edwards, R. V.; Jensen, A. S.

    1982-01-01

It is noted that until very recently, research on this topic concentrated on the particle arrival statistics and the influence of the optical parameters on them. Little attention has been paid to the influence of subsequent processing on the measurement statistics. There is also controversy over whether the effects of the particle statistics can be measured. It is shown here that some of the confusion derives from a lack of understanding of the experimental parameters that are to be controlled or known. A rigorous framework is presented for examining the measurement statistics of such systems. To provide examples, two problems are then addressed. The first has to do with a sample and hold processor, the second with what is called a saturable processor. The sample and hold processor converts the output to a continuous signal by holding the last reading until a new one is obtained. The saturable system is one where the maximum processable rate is set by the dead time of some unit in the system. At high particle rates, the processed rate is determined by the dead time.
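
The sample-and-hold processor described above is simple to state in code. This is a generic sketch with invented arrival times and velocity readings, showing how sparse, irregular particle arrivals become a uniformly sampled signal (and why the output statistics depend on the processing, not just on the arrivals):

```python
def sample_and_hold(arrival_times, values, dt, t_end):
    """Resample irregular particle measurements onto a uniform time grid
    by holding the last reading until a new particle arrives.
    Before the first arrival the output holds an initial value of 0.0."""
    out, j, last = [], 0, 0.0
    t = 0.0
    while t < t_end:
        while j < len(arrival_times) and arrival_times[j] <= t:
            last = values[j]
            j += 1
        out.append(last)
        t += dt
    return out

arrivals = [0.3, 0.35, 1.2, 2.7]     # sparse, random particle arrival times
velocities = [5.0, 5.2, 4.8, 5.1]    # one velocity reading per particle
uniform = sample_and_hold(arrivals, velocities, dt=0.5, t_end=3.0)
```

Note how the reading at 0.3 is overwritten by the one at 0.35 before the grid ever samples it: the hold operation itself biases the output statistics toward the most recent arrivals, which is exactly the kind of processing effect the paper analyzes.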

  17. Statistical learning in songbirds: from self-tutoring to song culture.

    PubMed

    Fehér, Olga; Ljubičić, Iva; Suzuki, Kenta; Okanoya, Kazuo; Tchernichovski, Ofer

    2017-01-05

At the onset of vocal development, both songbirds and humans produce variable vocal babbling with broadly distributed acoustic features. Over development, these vocalizations differentiate into the well-defined, categorical signals that characterize adult vocal behaviour. A broadly distributed signal is ideal for vocal exploration, that is, for matching vocal production to the statistics of the sensory input. The developmental transition to categorical signals is a gradual process during which the vocal output becomes differentiated and stable. But does it require categorical input? We trained juvenile zebra finches with playbacks of their own developing song, produced just a few moments earlier, updated continuously over development. Although the vocalizations of these self-tutored (ST) birds were initially broadly distributed, birds quickly developed categorical signals, as fast as birds that were trained with a categorical, adult song template. By contrast, siblings of those birds that received no training (isolates) developed phonological categories much more slowly and never reached the same level of category differentiation as their ST brothers. Therefore, instead of simply mirroring the statistical properties of their sensory input, songbirds actively transform it into distinct categories. We suggest that the early self-generation of phonological categories facilitates the establishment of vocal culture by making the song easier to transmit at the micro level, while promoting stability of shared vocabulary at the group level over generations. This article is part of the themed issue 'New frontiers for statistical learning in the cognitive sciences'. © 2016 The Authors.

  18. Coherent Change Detection: Theoretical Description and Experimental Results

    DTIC Science & Technology

    2006-08-01

373–376. 47. S. M. Kay, Fundamentals of Statistical Signal Processing. Vol. 2: Detection Theory. Prentice Hall, 1998. 48. H. Anton and C. Rorres ... Proceedings of the 1998 International Geoscience and Remote Sensing Symposium, vol. 5, 1998, pp. 2451–2453. 9. C. V. Jakowatz Jr., D. E. Wahl, P. H. Eichel ... dissertation, School of Electrical and Electronic Engineering, The University of Adelaide, 2004. 20. C. H. Gierull, Statistics of SAR interferograms

  19. Optical Processing of Speckle Images with Bacteriorhodopsin for Pattern Recognition

    NASA Technical Reports Server (NTRS)

    Downie, John D.; Tucker, Deanne (Technical Monitor)

    1994-01-01

    Logarithmic processing of images with multiplicative noise characteristics can be utilized to transform the image into one with an additive noise distribution. This simplifies subsequent image processing steps for applications such as image restoration or correlation for pattern recognition. One particularly common form of multiplicative noise is speckle, for which the logarithmic operation not only produces additive noise, but also makes it of constant variance (signal-independent). We examine the optical transmission properties of some bacteriorhodopsin films here and find them well suited to implement such a pointwise logarithmic transformation optically in a parallel fashion. We present experimental results of the optical conversion of speckle images into transformed images with additive, signal-independent noise statistics using the real-time photochromic properties of bacteriorhodopsin. We provide an example of improved correlation performance in terms of correlation peak signal-to-noise for such a transformed speckle image.
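
The variance-stabilizing effect of the pointwise logarithm on speckle can be checked numerically. The 16-look averaged-exponential speckle model below is a common textbook assumption for fully developed speckle, not the specific bacteriorhodopsin-film experiment:

```python
import math, random

random.seed(4)

def speckle_image(signal, looks=16):
    """Multiplicative speckle: each pixel is scaled by an average of
    unit-mean exponential intensities (multi-look speckle model)."""
    noisy = []
    for s in signal:
        m = sum(random.expovariate(1.0) for _ in range(looks)) / looks
        noisy.append(s * m)
    return noisy

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

bright = speckle_image([100.0] * 2000)   # high-intensity region
dim = speckle_image([1.0] * 2000)        # low-intensity region
# Before the log, the noise variance scales with the signal level ...
ratio_linear = variance(bright) / variance(dim)
# ... after the log it is additive and signal-independent (constant variance)
ratio_log = variance([math.log(x) for x in bright]) / \
            variance([math.log(x) for x in dim])
```

After the log, bright and dim regions carry the same noise variance, which is what makes a single matched filter or correlator effective across the whole transformed image.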

  20. An energy- and depth-dependent model for x-ray imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gallas, Brandon D.; Boswell, Jonathan S.; Badano, Aldo

In this paper, we model an x-ray imaging system, paying special attention to the energy- and depth-dependent characteristics of the inputs and interactions: x rays are polychromatic, interaction depth and conversion to optical photons is energy-dependent, optical scattering and the collection efficiency depend on the depth of interaction. The model we construct is a random function of the point process that begins with the distribution of x rays incident on the phosphor and ends with optical photons being detected by the active area of detector pixels to form an image. We show how the point-process representation can be used to calculate the characteristic statistics of the model. We then simulate a Gd2O2S:Tb phosphor, estimate its characteristic statistics, and proceed with a signal-detection experiment to investigate the impact of the pixel fill factor on detecting spherical calcifications (the signal). The two extremes possible from this experiment are that SNR^2 does not change with fill factor or changes in proportion to fill factor. In our results, the impact of fill factor is between these extremes, and depends on the diameter of the signal.

  1. Derivation from first principles of the statistical distribution of the mass peak intensities of MS data.

    PubMed

    Ipsen, Andreas

    2015-02-03

    Despite the widespread use of mass spectrometry (MS) in a broad range of disciplines, the nature of MS data remains very poorly understood, and this places important constraints on the quality of MS data analysis as well as on the effectiveness of MS instrument design. In the following, a procedure for calculating the statistical distribution of the mass peak intensity for MS instruments that use analog-to-digital converters (ADCs) and electron multipliers is presented. It is demonstrated that the physical processes underlying the data-generation process, from the generation of the ions to the signal induced at the detector, and on to the digitization of the resulting voltage pulse, result in data that can be well-approximated by a Gaussian distribution whose mean and variance are determined by physically meaningful instrumental parameters. This allows for a very precise understanding of the signal-to-noise ratio of mass peak intensities and suggests novel ways of improving it. Moreover, it is a prerequisite for being able to address virtually all data analytical problems in downstream analyses in a statistically rigorous manner. The model is validated with experimental data.
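
The paper's conclusion, an approximately Gaussian peak intensity with physically meaningful mean and variance, can be illustrated with a compound-Poisson toy model: a Poisson number of ions, each inducing a noisy detector gain. All parameter values below are invented for illustration:

```python
import math, random, statistics

random.seed(5)

def poisson(lam):
    """Poisson sample via Knuth's method (adequate for moderate lam)."""
    l = math.exp(-lam)
    k, p = 0, 1.0
    while p > l:
        k += 1
        p *= random.random()
    return k - 1

def peak_intensity(mean_ions=200, gain_mu=50.0, gain_sigma=10.0):
    """One digitized mass peak: a Poisson ion count, each ion contributing
    a noisy gain; the summed intensity is approximately Gaussian."""
    n = poisson(mean_ions)
    return sum(random.gauss(gain_mu, gain_sigma) for _ in range(n))

peaks = [peak_intensity() for _ in range(2000)]
# Compound-Poisson moments: mean = lam*mu, var = lam*(mu^2 + sigma^2)
predicted_mean = 200 * 50.0
predicted_sd = math.sqrt(200 * (50.0 ** 2 + 10.0 ** 2))
```

Both moments are set by instrumental parameters (ion flux and gain statistics), which is what allows the signal-to-noise ratio of peak intensities to be reasoned about, and improved, analytically.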

  2. Enhancement of MS Signal Processing For Improved Cancer Biomarker Discovery

    NASA Astrophysics Data System (ADS)

    Si, Qian

Technological advances in proteomics have shown great potential for detecting cancer at the earliest stages. One way is to use time-of-flight mass spectrometry to identify biomarkers, or early disease indicators, related to the cancer. Pattern analysis of time-of-flight mass spectra from blood and tissue samples gives great hope for the identification of potential biomarkers among the complex mixture of biological and chemical samples for early cancer detection. One of the key issues is the pre-processing of raw mass spectra. Many challenges need to be addressed: the unknown noise character associated with the large volume of data, high variability in the mass spectrometry measurements, a poorly understood signal background, and so on. This dissertation focuses on developing statistical algorithms and creating data mining tools for computationally improved signal processing of mass spectrometry data. I introduce an advanced, accurate estimate of the noise model and a semi-supervised method of mass spectrum data processing that requires little knowledge about the data.

  3. Information theoretical assessment of image gathering and coding for digital restoration

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.; John, Sarah; Reichenbach, Stephen E.

    1990-01-01

    The process of image-gathering, coding, and restoration is presently treated in its entirety rather than as a catenation of isolated tasks, on the basis of the relationship between the spectral information density of a transmitted signal and the restorability of images from the signal. This 'information-theoretic' assessment accounts for the information density and efficiency of the acquired signal as a function of the image-gathering system's design and radiance-field statistics, as well as for the information efficiency and data compression that are obtainable through the combination of image gathering with coding to reduce signal redundancy. It is found that high information efficiency is achievable only through minimization of image-gathering degradation as well as signal redundancy.

  4. The effect of solar radio bursts on the GNSS radio occultation signals

    NASA Astrophysics Data System (ADS)

    Yue, Xinan; Schreiner, William S.; Kuo, Ying-Hwa; Zhao, Biqiang; Wan, Weixing; Ren, Zhipeng; Liu, Libo; Wei, Yong; Lei, Jiuhou; Solomon, Stan; Rocken, Christian

    2013-09-01

    A solar radio burst (SRB) is radio-wave emission following a solar flare, covering a broad frequency range and originating in the Sun's atmosphere. During an SRB, radio waves at specific frequencies can interfere with Global Navigation Satellite System (GNSS) signals and thereby disturb the received signals. In this study, low Earth orbit (LEO)-based high-resolution GNSS radio occultation (RO) signals from multiple satellites (COSMIC, CHAMP, GRACE, SAC-C, Metop-A, and TerraSAR-X), processed at the University Corporation for Atmospheric Research (UCAR), were used for the first time to evaluate the effect of SRBs on the RO technique. Radio flux observed by the radio solar telescope network (RSTN) was used to represent SRB occurrence. An extreme case on 6 December 2006 and statistics for April 2006 to September 2012 were studied. During SRB occurrence the LEO RO signals show frequent loss of lock (LOL), simultaneous decreases in L1 and L2 signal-to-noise ratio (SNR) globally during daytime, small-scale perturbations of SNR, and a decreased successful retrieval percentage (SRP) for both ionospheric and atmospheric occultations. A potential harmonic band interference was identified. Decreases in either data volume or data quality will affect weather prediction, climate study, and space weather monitoring based on RO data during SRB periods. Statistically, the SRP of ionospheric and atmospheric occultation retrievals shows decreases of ~4% and ~13%, respectively, while the SNRs of L1 and L2 show decreases of ~5.7% and ~11.7%, respectively. A threshold of ~1807 SFU at 1415 MHz, above which an observable GNSS SNR decrease results, was derived from our statistical analysis.

  5. Optimum Array Processing for Detecting Binary Signals Corrupted by Directional Interference.

    DTIC Science & Technology

    1972-12-01

    specific cases. Two different series representations of a vector random process are discussed in Van Trees [3]. These two methods both require the... spacing d, etc.) its detection error represents a lower bound for the performance that might be obtained with other types of array processing (such...Middleton, Introduction to Statistical Communication Theory, New York: McGraw-Hill, 1960. 3. H.L. Van Trees, Detection, Estimation, and Modulation Theory

  6. Investigation of the photon statistics of parametric fluorescence in a traveling-wave parametric amplifier by means of self-homodyne tomography.

    PubMed

    Vasilyev, M; Choi, S K; Kumar, P; D'Ariano, G M

    1998-09-01

    Photon-number distributions for parametric fluorescence from a nondegenerate optical parametric amplifier are measured with a novel self-homodyne technique. These distributions exhibit the thermal-state character predicted by theory. However, a difference between the fluorescence gain and the signal gain of the parametric amplifier is observed. We attribute this difference to a change in the signal-beam profile during the traveling-wave pulsed amplification process.

  7. Real-Time Noise Reduction for Mossbauer Spectroscopy through Online Implementation of a Modified Kalman Filter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abrecht, David G.; Schwantes, Jon M.; Kukkadapu, Ravi K.

    2015-02-01

    Spectrum-processing software that incorporates a Gaussian smoothing kernel within the statistics of first-order Kalman filtering has been developed to provide cross-channel spectral noise reduction for increased real-time signal-to-noise ratios in Mossbauer spectroscopy. The filter was optimized for the breadth of the Gaussian using the Mossbauer spectrum of natural iron foil, and comparisons between the peak broadening, signal-to-noise ratios, and shifts in the calculated hyperfine parameters are presented. The results of optimization give a maximum improvement in the signal-to-noise ratio of 51.1% over the unfiltered spectrum at a Gaussian breadth of 27 channels, or 2.5% of the total spectrum width. The full-width half-maximum of the spectrum peaks showed an increase of 19.6% at this optimum point, indicating a relatively weak increase in peak broadening relative to the signal enhancement, and thus an overall increase in the observable signal. Calculations of the hyperfine parameters showed that no statistically significant deviations were introduced by the application of the filter, confirming the utility of this filter for spectroscopy applications.
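
    The general idea (a Gaussian cross-channel smooth feeding a first-order Kalman recursion) can be sketched as follows. This is a minimal illustration with made-up parameters and a synthetic single-dip spectrum, not the authors' optimized implementation:

```python
import numpy as np

def gaussian_kernel(width):
    radius = int(3 * width)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / width) ** 2)
    return k / k.sum()

def smoothed_kalman(counts, width=5.0, q=1.0, r=50.0):
    """Gaussian cross-channel smoothing followed by a first-order
    (random-walk) Kalman filter run across the spectrum channels."""
    k = gaussian_kernel(width)
    radius = len(k) // 2
    # reflect-pad so the convolution has no zero-padding edge bias
    z = np.convolve(np.pad(counts, radius, mode="reflect"), k,
                    mode="same")[radius:-radius]
    x, p = z[0], 1.0
    out = np.empty_like(z)
    for i, zi in enumerate(z):
        p += q                    # predict: random-walk process noise
        g = p / (p + r)           # Kalman gain
        x += g * (zi - x)         # update with the smoothed measurement
        p *= 1.0 - g
        out[i] = x
    return out

# Synthetic Mossbauer-like spectrum: flat baseline with one absorption dip
rng = np.random.default_rng(1)
v = np.linspace(-10, 10, 1024)
true = 1000 - 600 / (1 + (v / 0.5) ** 2)
raw = rng.poisson(true).astype(float)
filt = smoothed_kalman(raw)
```

    On the near-flat region away from the dip, the filtered residual is markedly smaller than the raw counting noise, at the cost of some peak broadening as the abstract discusses.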

  8. Statistical process control using optimized neural networks: a case study.

    PubMed

    Addeh, Jalil; Ebrahimzadeh, Ata; Azarbad, Milad; Ranaee, Vahid

    2014-09-01

    The most common statistical process control (SPC) tools employed for monitoring process changes are control charts. A control chart signals that the process has changed by generating an out-of-control signal. This study investigates the design of an accurate system for control chart pattern (CCP) recognition in two respects. First, an efficient system is introduced that includes two main modules: a feature extraction module and a classifier module. In the feature extraction module, a proper set of shape features and statistical features is proposed as efficient characteristics of the patterns. In the classifier module, several neural networks, such as the multilayer perceptron, probabilistic neural network and radial basis function network, are investigated. Based on an experimental study, the best classifier is chosen to recognize the CCPs. Second, a hybrid heuristic recognition system based on the cuckoo optimization algorithm (COA) is introduced to improve the generalization performance of the classifier. The simulation results show that the proposed algorithm has high recognition accuracy. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
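
    An out-of-control signal of the kind described above can be illustrated with the simplest control chart, a Shewhart individuals chart with three-sigma limits (a sketch on synthetic data; the paper itself classifies richer CCP shapes with neural networks):

```python
import numpy as np

def shewhart_limits(baseline):
    """Three-sigma control limits from an in-control (phase I) sample."""
    mu, sigma = baseline.mean(), baseline.std(ddof=1)
    return mu - 3 * sigma, mu + 3 * sigma

def out_of_control(samples, lo, hi):
    """Indices of samples falling outside the control limits."""
    return np.where((samples < lo) | (samples > hi))[0]

rng = np.random.default_rng(2)
baseline = rng.normal(10.0, 1.0, 200)   # in-control phase-I data
lo, hi = shewhart_limits(baseline)
new = rng.normal(10.0, 1.0, 50)
new[30:] += 5.0                         # sustained mean shift at sample 30
alarms = out_of_control(new, lo, hi)
```

    Most of the shifted samples fall above the upper limit and generate out-of-control signals.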

  9. Statistical Signal Processing for Remote Sensing of Targets: Proposal for Terrestrial Science Program

    DTIC Science & Technology

    2014-11-14

    responses from any analyte under consideration. Figure 1 illustrates this behavior. Figure 1: LIBS spectra from OVA (ricin simulant) on several different substrates: steel, aluminum, and polycarbonate

  10. Improvement of statistical methods for detecting anomalies in climate and environmental monitoring systems

    NASA Astrophysics Data System (ADS)

    Yakunin, A. G.; Hussein, H. M.

    2018-01-01

    The article shows how well-known statistical methods, widely used for financial problems and in a number of other fields of science and technology, can, after minor modification, be effectively applied in climate and environmental monitoring systems to solve problems such as detecting anomalies in the form of abrupt changes in signal levels, the occurrence of positive and negative outliers, and violations of the cycle form in periodic processes.

  11. Classifiers utilized to enhance acoustic based sensors to identify round types of artillery/mortar

    NASA Astrophysics Data System (ADS)

    Grasing, David; Desai, Sachi; Morcos, Amir

    2008-04-01

    Feature extraction methods based on the statistical analysis of the change in event pressure levels over a period, and on the level of ambient pressure excitation, facilitate the development of a robust classification algorithm. The features reliably discriminate mortar and artillery variants via the acoustic signals produced during launch events. Acoustic sensors are utilized to exploit the sound waveform generated by the blast to identify mortar and artillery variants (as type A, etc.) through analysis of the waveform. Distinct characteristics arise among the different mortar/artillery variants because varying HE mortar payloads and related charges produce launch events of varying size. The waveform holds various harmonic properties distinct to a given mortar/artillery variant that, through advanced signal processing and data mining techniques, can be employed to classify a given type. Skewness and other statistical processing techniques are used to extract the predominant components from the acoustic signatures at ranges exceeding 3000 m. Exploiting these techniques will help develop a feature set highly independent of range, providing discrimination based on acoustic elements of the blast wave. Highly reliable discrimination will be achieved with a feedforward neural network classifier trained on a feature space derived from the distribution of statistical coefficients, the frequency spectrum, and higher-frequency details found within different energy bands. The processes described herein extend current technologies that emphasize acoustic sensor systems to provide such situational awareness.
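
    The statistical feature extraction described (skewness and related moments computed over frames of the pressure record) can be sketched as follows. The frame length, toy "blast" waveform, and feature choices are illustrative assumptions, not the authors' feature set:

```python
import numpy as np
from scipy.stats import skew, kurtosis

def transient_features(x, fs, frame_ms=50):
    """Per-frame statistical features of an acoustic record: skewness,
    excess kurtosis, and log energy."""
    n = int(fs * frame_ms / 1000)
    frames = x[: len(x) // n * n].reshape(-1, n)
    return np.column_stack([
        skew(frames, axis=1),
        kurtosis(frames, axis=1),
        np.log10(np.sum(frames ** 2, axis=1) + 1e-12),
    ])

# Toy data: ambient noise vs. noise plus a decaying blast-like transient
rng = np.random.default_rng(3)
fs = 8000
t = np.arange(fs) / fs
noise = rng.normal(0, 0.01, fs)
blast = noise + np.exp(-40 * t) * np.sin(2 * np.pi * 60 * t)
F_noise = transient_features(noise, fs)
F_blast = transient_features(blast, fs)
```

    Each one-second record yields a 20-frame feature matrix; the launch-event frames separate clearly from ambient noise in the energy coordinate.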

  12. Artillery/mortar type classification based on detected acoustic transients

    NASA Astrophysics Data System (ADS)

    Morcos, Amir; Grasing, David; Desai, Sachi

    2008-04-01

    Feature extraction methods based on the statistical analysis of the change in event pressure levels over a period, and on the level of ambient pressure excitation, facilitate the development of a robust classification algorithm. The features reliably discriminate mortar and artillery variants via the acoustic signals produced during launch events. Acoustic sensors are utilized to exploit the sound waveform generated by the blast to identify mortar and artillery variants (as type A, etc.) through analysis of the waveform. Distinct characteristics arise among the different mortar/artillery variants because varying HE mortar payloads and related charges produce launch events of varying size. The waveform holds various harmonic properties distinct to a given mortar/artillery variant that, through advanced signal processing and data mining techniques, can be employed to classify a given type. Skewness and other statistical processing techniques are used to extract the predominant components from the acoustic signatures at ranges exceeding 3000 m. Exploiting these techniques will help develop a feature set highly independent of range, providing discrimination based on acoustic elements of the blast wave. Highly reliable discrimination will be achieved with a feed-forward neural network classifier trained on a feature space derived from the distribution of statistical coefficients, the frequency spectrum, and higher-frequency details found within different energy bands. The processes described herein extend current technologies that emphasize acoustic sensor systems to provide such situational awareness.

  13. Artillery/mortar round type classification to increase system situational awareness

    NASA Astrophysics Data System (ADS)

    Desai, Sachi; Grasing, David; Morcos, Amir; Hohil, Myron

    2008-04-01

    Feature extraction methods based on the statistical analysis of the change in event pressure levels over a period, and on the level of ambient pressure excitation, facilitate the development of a robust classification algorithm. The features reliably discriminate mortar and artillery variants via the acoustic signals produced during launch events. Acoustic sensors are utilized to exploit the sound waveform generated by the blast to identify mortar and artillery variants (as type A, etc.) through analysis of the waveform. Distinct characteristics arise among the different mortar/artillery variants because varying HE mortar payloads and related charges produce launch events of varying size. The waveform holds various harmonic properties distinct to a given mortar/artillery variant that, through advanced signal processing and data mining techniques, can be employed to classify a given type. Skewness and other statistical processing techniques are used to extract the predominant components from the acoustic signatures at ranges exceeding 3000 m. Exploiting these techniques will help develop a feature set highly independent of range, providing discrimination based on acoustic elements of the blast wave. Highly reliable discrimination will be achieved with a feedforward neural network classifier trained on a feature space derived from the distribution of statistical coefficients, the frequency spectrum, and higher-frequency details found within different energy bands. The processes described herein extend current technologies that emphasize acoustic sensor systems to provide such situational awareness.

  14. Digital test signal generation: An accurate SNR calibration approach for the DSN

    NASA Technical Reports Server (NTRS)

    Gutierrez-Luaces, Benito O.

    1993-01-01

    In support of the on-going automation of the Deep Space Network (DSN), a new method of generating analog test signals with an accurate signal-to-noise ratio (SNR) is described. High accuracy is obtained by simultaneous generation of digital noise and signal spectra at the desired bandwidth (base-band or bandpass). The digital synthesis provides a test signal embedded in noise with the statistical properties of a stationary random process. Accuracy depends on the test integration time and is limited only by the system quantization noise (0.02 dB). The monitor-and-control and signal-processing programs reside in a personal computer (PC). Commands are transmitted to properly configure the specially designed high-speed digital hardware. The prototype can generate either two data channels (modulated or not on a subcarrier), one QPSK channel, or a residual carrier with one biphase data channel. The analog spectrum generated is in the DC to 10 MHz frequency range. These spectra may be up-converted to any desired frequency without loss of the SNR characteristics provided. Test results are presented.

  15. UWB communication receiver feedback loop

    DOEpatents

    Spiridon, Alex; Benzel, Dave; Dowla, Farid U.; Nekoogar, Faranak; Rosenbury, Erwin T.

    2007-12-04

    A novel technique and structure that maximizes the extraction of information from reference pulses for UWB-TR receivers is introduced. The scheme efficiently processes an incoming signal to suppress different types of UWB and non-UWB interference prior to signal detection. The method adds a feedback-loop mechanism to enhance the signal-to-noise ratio of reference pulses in a conventional TR receiver. Moreover, sampling a second-order statistical function such as the autocorrelation function (ACF) of the received signal, and matching it to the ACF samples of the original pulses for each transmitted bit, provides a UWB communications method and system that is more robust in the presence of channel distortions.
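
    The ACF-matching step can be illustrated with a toy monocycle and a normalized correlation between ACFs (lag 0 excluded). The pulse shape, noise level, and decision statistic are illustrative assumptions, not the patented receiver design:

```python
import numpy as np

def acf(x, maxlag):
    """Sample autocorrelation at lags 1..maxlag (mean removed)."""
    x = x - x.mean()
    return np.array([np.dot(x[:-k], x[k:]) for k in range(1, maxlag + 1)])

def acf_match(received, template, maxlag=64):
    """Normalized inner product between the ACF of the received chunk
    and the ACF of the template pulse."""
    ra, ta = acf(received, maxlag), acf(template, maxlag)
    return float(np.dot(ra, ta) /
                 (np.linalg.norm(ra) * np.linalg.norm(ta) + 1e-12))

# Toy UWB-like pulse and two received chunks: pulse present vs. noise only
rng = np.random.default_rng(4)
t = np.arange(256)
pulse = np.exp(-((t - 40) / 6.0) ** 2) * np.sin(0.9 * t)
m1 = acf_match(pulse + rng.normal(0, 0.1, t.size), pulse)
m0 = acf_match(rng.normal(0, 0.1, t.size), pulse)
```

    The match score is high when the pulse is present and low for noise alone, which is the behavior the feedback loop exploits.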

  16. Biosensor method and system based on feature vector extraction

    DOEpatents

    Greenbaum, Elias; Rodriguez, Jr., Miguel; Qi, Hairong; Wang, Xiaoling

    2013-07-02

    A system for biosensor-based detection of toxins includes providing at least one time-dependent control signal generated by a biosensor in a gas or liquid medium, and obtaining a time-dependent biosensor signal from the biosensor in the gas or liquid medium to be monitored or analyzed for the presence of one or more toxins selected from chemical, biological or radiological agents. The time-dependent biosensor signal is processed to obtain a plurality of feature vectors using at least one of amplitude statistics and a time-frequency analysis. At least one parameter relating to toxicity of the gas or liquid medium is then determined from the feature vectors based on reference to the control signal.

  17. Chaotic oscillations and noise transformations in a simple dissipative system with delayed feedback

    NASA Astrophysics Data System (ADS)

    Zverev, V. V.; Rubinstein, B. Ya.

    1991-04-01

    We analyze the statistical behavior of signals in nonlinear circuits with delayed feedback in the presence of external Markovian noise. For the special class of circuits with intense phase mixing we develop an approach for the computation of the probability distributions and multitime correlation functions based on the random phase approximation. Both Gaussian and Kubo-Andersen models of external noise statistics are analyzed and the existence of the stationary (asymptotic) random process in the long-time limit is shown. We demonstrate that a nonlinear system with chaotic behavior becomes a noise amplifier with specific statistical transformation properties.

  18. The bag-of-frames approach to audio pattern recognition: a sufficient model for urban soundscapes but not for polyphonic music.

    PubMed

    Aucouturier, Jean-Julien; Defreville, Boris; Pachet, François

    2007-08-01

    The "bag-of-frames" approach (BOF) to audio pattern recognition represents signals as the long-term statistical distribution of their local spectral features. This approach has proved nearly optimal for simulating the auditory perception of natural and human environments (or soundscapes), and is also the predominant paradigm for extracting high-level descriptions from music signals. However, recent studies show that, contrary to its application to soundscape signals, BOF provides only limited performance when applied to polyphonic music signals. This paper explicitly examines the difference between urban soundscapes and polyphonic music with respect to their modeling with the BOF approach. First, applying the same measure of acoustic similarity to both soundscape and music data sets confirms that the BOF approach can model soundscapes to near-perfect precision, and exhibits none of the limitations observed on the music data set. Second, modifying this measure with two custom homogeneity transforms reveals critical differences in the temporal and statistical structure of the typical frame distribution of each type of signal. Such differences may explain the uneven performance of BOF algorithms on soundscape and music signals, and suggest that their human perception relies on cognitive processes of a different nature.
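
    The BOF representation itself is simple to sketch: frame the signal, take per-frame log spectra, and keep only their long-term statistics, discarding frame order. The descriptor (mean and variance per bin) and the variance-weighted distance below are illustrative simplifications of the Gaussian-mixture models typically used:

```python
import numpy as np

def bof_descriptor(x, n_fft=256, hop=128):
    """Bag-of-frames: long-term statistics (mean, variance) of per-frame
    log-magnitude spectra; the temporal order of frames is discarded."""
    frames = np.lib.stride_tricks.sliding_window_view(x, n_fft)[::hop]
    spec = np.log(np.abs(np.fft.rfft(frames * np.hanning(n_fft),
                                     axis=1)) + 1e-9)
    return spec.mean(axis=0), spec.var(axis=0)

def bof_distance(a, b):
    """Variance-weighted squared distance between two BOF descriptors."""
    (ma, va), (mb, vb) = a, b
    return float(np.mean((ma - mb) ** 2 / (va + vb + 1e-9)))

# Two realizations of the same toy "soundscape" vs. a different one
rng = np.random.default_rng(5)
n = 16384
hum_a = np.sin(2 * np.pi * 0.05 * np.arange(n)) + 0.3 * rng.normal(size=n)
hum_b = np.sin(2 * np.pi * 0.05 * np.arange(n)) + 0.3 * rng.normal(size=n)
clicks = 0.3 * rng.normal(size=n)
clicks[::1000] += 8.0
d_same = bof_distance(bof_descriptor(hum_a), bof_descriptor(hum_b))
d_diff = bof_distance(bof_descriptor(hum_a), bof_descriptor(clicks))
```

    Two realizations of the same stationary texture are close under this measure, while texturally different signals are far apart, which is exactly the regime where BOF works well; the paper's point is that polyphonic music violates this frame-exchangeability assumption.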

  19. Mathematical Sciences Division 1992 Programs

    DTIC Science & Technology

    1992-10-01

    statistical theory that underlies modern signal analysis. There is a strong emphasis on stochastic processes and time series, particularly those which...include optimal resource planning and real-time scheduling of stochastic shop-floor processes. Scheduling systems will be developed that can adapt to...make forecasts for the length-of-service time series. Protocol analysis of these sessions will be used to identify relevant contextual features and to

  20. Statistical methods for change-point detection in surface temperature records

    NASA Astrophysics Data System (ADS)

    Pintar, A. L.; Possolo, A.; Zhang, N. F.

    2013-09-01

    We describe several statistical methods to detect possible change-points in a time series of values of surface temperature measured at a meteorological station, and to assess the statistical significance of such changes, taking into account the natural variability of the measured values, and the autocorrelations between them. These methods serve to determine whether the record may suffer from biases unrelated to the climate signal, hence whether there may be a need for adjustments as considered by M. J. Menne and C. N. Williams (2009) "Homogenization of Temperature Series via Pairwise Comparisons", Journal of Climate 22 (7), 1700-1717. We also review methods to characterize patterns of seasonality (seasonal decomposition using monthly medians or robust local regression), and explain the role they play in the imputation of missing values, and in enabling robust decompositions of the measured values into a seasonal component, a possible climate signal, and a station-specific remainder. The methods for change-point detection that we describe include statistical process control, wavelet multi-resolution analysis, adaptive weights smoothing, and a Bayesian procedure, all of which are applicable to single station records.
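
    One classical SPC-style change-point detector of the kind surveyed above is the two-sided CUSUM. The sketch below (synthetic "monthly temperatures" with an abrupt station bias, baseline standardization window, slack and threshold values) is illustrative and is not taken from the paper:

```python
import numpy as np

def cusum(z, k=0.5, h=10.0):
    """Two-sided CUSUM on a standardized series. Returns the index of the
    first alarm, or -1. k is the slack and h the threshold, in sigmas."""
    gp = gm = 0.0
    for i, zi in enumerate(z):
        gp = max(0.0, gp + zi - k)   # accumulates upward shifts
        gm = max(0.0, gm - zi - k)   # accumulates downward shifts
        if gp > h or gm > h:
            return i
    return -1

rng = np.random.default_rng(6)
series = rng.normal(15.0, 0.8, 400)   # synthetic monthly mean temperatures
series[250:] += 2.4                   # abrupt +3-sigma station bias
base = series[:100]                   # assumed homogeneous reference period
z = (series - base.mean()) / base.std(ddof=1)
alarm = cusum(z)
```

    The alarm fires within a handful of samples of the true change point; the k/h trade-off controls detection delay against false-alarm rate, the same trade-off the paper's methods negotiate in the presence of autocorrelation and seasonality.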

  1. How Do You Determine Whether The Earth Is Warming Up?

    NASA Astrophysics Data System (ADS)

    Restrepo, J. M.; Comeau, D.; Flaschka, H.

    2012-12-01

    How does one determine whether the extreme summer temperatures in the northeastern US, or in Moscow during the summer of 2010, were an extreme weather fluctuation or the result of a systematic global climate warming trend? It is only under exceptional circumstances that one can determine whether an observed climate signal belongs to a particular statistical distribution. In fact, observed climate signals are rarely "statistical," and thus there is usually no way to rigorously obtain enough field data to produce a trend or tendency based upon data alone. Furthermore, this type of data is often multi-scale. We propose a trend or tendency methodology that makes no parametric or statistical assumptions. The most important feature of this trend strategy is that it is defined in very precise mathematical terms. The tendency is easily understood and practical, and its algorithmic realization is fairly robust. In addition to proposing a trend, the methodology can be adapted to generate surrogate statistical models, useful in reduced filtering schemes for time-dependent processes.

  2. Novel public health risk assessment process developed to support syndromic surveillance for the 2012 Olympic and Paralympic Games.

    PubMed

    Smith, Gillian E; Elliot, Alex J; Ibbotson, Sue; Morbey, Roger; Edeghere, Obaghe; Hawker, Jeremy; Catchpole, Mike; Endericks, Tina; Fisher, Paul; McCloskey, Brian

    2017-09-01

    Syndromic surveillance aims to provide early warning and real-time estimates of the extent of incidents, and reassurance about the lack of impact of mass gatherings. We describe a novel public health risk assessment process to ensure that those leading the response to the 2012 Olympic Games were alerted to unusual activity of potential public health importance, and not inundated with multiple statistical 'alarms'. Statistical alarms were assessed to identify, as reliably as possible, those which needed to result in 'alerts'. There was no previously developed method for this. We identified factors that increased our concern about an alarm, suggesting that an 'alert' should be made. Between 2 July and 12 September 2012, 350 674 signals were analysed, resulting in 4118 statistical alarms. Using the risk assessment process, 122 'alerts' were communicated to Olympic incident directors. Use of a novel risk assessment process enabled the interpretation of a large number of statistical alarms in a manageable way for the period of a sustained mass gathering. This risk assessment process guided the prioritization and could be readily adapted to other surveillance systems. The process, which is novel to our knowledge, continues as a legacy of the Games. © Crown copyright 2016.

  3. Automated diagnosis of congestive heart failure using dual tree complex wavelet transform and statistical features extracted from 2s of ECG signals.

    PubMed

    Sudarshan, Vidya K; Acharya, U Rajendra; Oh, Shu Lih; Adam, Muhammad; Tan, Jen Hong; Chua, Chua Kuang; Chua, Kok Poo; Tan, Ru San

    2017-04-01

    Identification of alarming features in the electrocardiogram (ECG) signal is extremely significant for the prediction of congestive heart failure (CHF). ECG signal analysis carried out using computer-aided techniques can speed up the diagnosis process and aid in the proper management of CHF patients. Therefore, in this work, a dual-tree complex wavelet transform (DTCWT)-based methodology is proposed for automated identification of CHF versus normal ECG signals. In the experiment, we performed a DTCWT on ECG segments of 2 s duration, up to six levels, to obtain the coefficients. From these DTCWT coefficients, statistical features are extracted and ranked using Bhattacharyya, entropy, minimum redundancy maximum relevance (mRMR), receiver-operating characteristic (ROC), Wilcoxon, t-test and reliefF methods. Ranked features are fed to k-nearest neighbor (KNN) and decision tree (DT) classifiers for automated differentiation of CHF and normal ECG signals. We achieved 99.86% accuracy, 99.78% sensitivity and 99.94% specificity in the identification of CHF-affected ECG signals using 45 features. The proposed method is able to detect CHF patients accurately using only 2 s of ECG signal, thus providing sufficient time for clinicians to further investigate the severity of CHF and treatments. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Normalization, bias correction, and peak calling for ChIP-seq

    PubMed Central

    Diaz, Aaron; Park, Kiyoub; Lim, Daniel A.; Song, Jun S.

    2012-01-01

    Next-generation sequencing is rapidly transforming our ability to profile the transcriptional, genetic, and epigenetic states of a cell. In particular, sequencing DNA from the immunoprecipitation of protein-DNA complexes (ChIP-seq) and methylated DNA (MeDIP-seq) can reveal the locations of protein binding sites and epigenetic modifications. These approaches contain numerous biases which may significantly influence the interpretation of the resulting data. Rigorous computational methods for detecting and removing such biases are still lacking. Also, multi-sample normalization still remains an important open problem. This theoretical paper systematically characterizes the biases and properties of ChIP-seq data by comparing 62 separate publicly available datasets, using rigorous statistical models and signal processing techniques. Statistical methods for separating ChIP-seq signal from background noise, as well as correcting enrichment test statistics for sequence-dependent and sonication biases, are presented. Our method effectively separates reads into signal and background components prior to normalization, improving the signal-to-noise ratio. Moreover, most peak callers currently use a generic null model which suffers from low specificity at the sensitivity level requisite for detecting subtle, but true, ChIP enrichment. The proposed method of determining a cell type-specific null model, which accounts for cell type-specific biases, is shown to be capable of achieving a lower false discovery rate at a given significance threshold than current methods. PMID:22499706

  5. Physiological effects of indomethacin and celecoxib: an S-transform laser Doppler flowmetry signal analysis

    NASA Astrophysics Data System (ADS)

    Assous, S.; Humeau, A.; Tartas, M.; Abraham, P.; L'Huillier, J. P.

    2005-05-01

    Conventional signal processing typically involves frequency-selective techniques, which are highly inadequate for nonstationary signals. In this paper, we present an approach to perform time-frequency-selective processing of laser Doppler flowmetry (LDF) signals using the S-transform. The approach is motivated by the excellent localization, in both time and frequency, afforded by the wavelet basis functions. Suitably chosen Gaussian wavelet functions are used to characterize the subspace of signals that have a given localized time-frequency support, thus enabling a time-frequency partitioning of signals. The goal is to study the influence of various pharmacological substances taken orally (celecoxib (Celebrex®), indomethacin (Indocid®) and placebo) on physiological activity. The results show no statistical differences in the energy computed from the time-frequency representation of LDF signals, for the myogenic, neurogenic and endothelial-related metabolic activities, between celecoxib and placebo, or between indomethacin and placebo. The work therefore indicates that these drugs do not affect these physiological activities. For future physiological studies, there will therefore be no need to exclude patients having taken cyclo-oxygenase 1 inhibitors.

  6. Quantifying two-dimensional nonstationary signal with power-law correlations by detrended fluctuation analysis

    NASA Astrophysics Data System (ADS)

    Fan, Qingju; Wu, Yonghong

    2015-08-01

    In this paper, we develop a new method for the multifractal characterization of two-dimensional nonstationary signals, based on detrended fluctuation analysis (DFA). By applying it to two artificially generated signals, a two-component ARFIMA process and a binomial multifractal model, we show that the new method can reliably determine the multifractal scaling behavior of two-dimensional signals. We also illustrate applications of this method in finance and physiology. The results show that the two-dimensional signals under investigation exhibit power-law correlations; the electricity-market signal, consisting of electricity price and trading volume, is multifractal, while the two-dimensional EEG signal recorded during sleep for a single patient is weakly multifractal. The new method based on detrended fluctuation analysis may add diagnostic power to existing statistical methods.
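
    For readers unfamiliar with DFA, its one-dimensional counterpart (which the paper generalizes to two dimensions) can be sketched directly: integrate the series, detrend it linearly in boxes of scale s, and read the scaling exponent alpha off the log-log slope of the fluctuation function. The scales and test signals below are illustrative choices:

```python
import numpy as np

def dfa(x, scales):
    """One-dimensional DFA with linear detrending in each box; returns
    the scaling exponent alpha from the slope of log F(s) vs log s."""
    y = np.cumsum(x - np.mean(x))          # integrated profile
    F = []
    for s in scales:
        n = len(y) // s
        boxes = y[: n * s].reshape(n, s)
        t = np.arange(s)
        # least-squares linear fit in every box at once
        coef = np.polynomial.polynomial.polyfit(t, boxes.T, 1)
        trend = coef[0][:, None] + coef[1][:, None] * t
        F.append(np.sqrt(np.mean((boxes - trend) ** 2)))
    return np.polyfit(np.log(scales), np.log(F), 1)[0]

rng = np.random.default_rng(7)
white = rng.normal(size=2 ** 14)   # uncorrelated noise: alpha ~ 0.5
walk = np.cumsum(white)            # random walk: alpha ~ 1.5
scales = np.array([16, 32, 64, 128, 256])
a_white = dfa(white, scales)
a_walk = dfa(walk, scales)
```

    Uncorrelated noise yields alpha near 0.5 and a random walk near 1.5; multifractal analysis, as in the paper, extends this by examining how the exponent varies with the order of the fluctuation moments.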

  7. Non-parametric PCM to ADM conversion. [Pulse Code to Adaptive Delta Modulation

    NASA Technical Reports Server (NTRS)

    Locicero, J. L.; Schilling, D. L.

    1977-01-01

    An all-digital technique to convert pulse code modulated (PCM) signals into adaptive delta modulation (ADM) format is presented. The converter developed is shown to be independent of the statistical parameters of the encoded signal and can be constructed with only standard digital hardware. The structure of the converter is simple enough to be fabricated on a large scale integrated circuit where the advantages of reliability and cost can be optimized. A concise evaluation of this PCM to ADM translation technique is presented and several converters are simulated on a digital computer. A family of performance curves is given which displays the signal-to-noise ratio for sinusoidal test signals subjected to the conversion process, as a function of input signal power for several ratios of ADM rate to Nyquist rate.
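
    The conversion can be sketched as a 1-bit encoder whose step size grows by a constant factor on consecutive equal bits and shrinks otherwise (a Jayant-style adaptation rule); the step bounds and adaptation constant below are illustrative, not the paper's converter design:

```python
import numpy as np

def pcm_to_adm(samples, d_min=1.0, d_max=64.0, a=1.5):
    """PCM-to-ADM sketch: emit +1/-1 per sample, adapting the step size
    up on repeated bits (slope overload) and down on alternation."""
    bits, est, step, prev = [], 0.0, d_min, 0
    for s in samples:
        b = 1 if s >= est else -1
        step = min(d_max, step * a) if b == prev else max(d_min, step / a)
        est += b * step
        bits.append(b)
        prev = b
    return np.array(bits)

def adm_decode(bits, d_min=1.0, d_max=64.0, a=1.5):
    """Decoder replicating the encoder's step adaptation from the bits."""
    est, step, prev, out = 0.0, d_min, 0, []
    for b in bits:
        step = min(d_max, step * a) if b == prev else max(d_min, step / a)
        est += b * step
        out.append(est)
        prev = b
    return np.array(out)

t = np.arange(4000)
pcm = 100 * np.sin(2 * np.pi * t / 400)   # oversampled sinusoidal test signal
rec = adm_decode(pcm_to_adm(pcm))
```

    Because the decoder reproduces the encoder's step adaptation exactly from the bit stream alone, the scheme is independent of the signal's statistics, which is the property the abstract highlights; the reconstruction tracks the sinusoid closely apart from granular noise.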

  8. Phase error statistics of a phase-locked loop synchronized direct detection optical PPM communication system

    NASA Technical Reports Server (NTRS)

    Natarajan, Suresh; Gardner, C. S.

    1987-01-01

    Receiver timing synchronization of an optical pulse-position modulation (PPM) communication system can be achieved using a phase-locked loop (PLL), provided the photodetector output is suitably processed. The magnitude of the PLL phase error is a good indicator of the timing error at the receiver decoder. The statistics of the phase error are investigated while varying several key system parameters, such as PPM order, signal and background strengths, and PLL bandwidth. A practical optical communication system utilizing a laser diode transmitter and an avalanche photodiode in the receiver is described, and the sampled phase error data are presented. A linear regression analysis is applied to the data to obtain estimates of the relational constants involving the phase error variance and incident signal power.

  9. Detection of Cutting Tool Wear using Statistical Analysis and Regression Model

    NASA Astrophysics Data System (ADS)

    Ghani, Jaharah A.; Rizal, Muhammad; Nuawi, Mohd Zaki; Haron, Che Hassan Che; Ramli, Rizauddin

    2010-10-01

    This study presents a new method for detecting cutting tool wear based on measured cutting force signals. A statistics-based method, the Integrated Kurtosis-based Algorithm for Z-Filter technique (I-kaz), was used to develop a regression model and a 3D graphical presentation of the I-kaz 3D coefficient during the machining process. The machining tests were carried out on a Colchester Master Tornado T4 CNC turning machine under dry cutting conditions. A Kistler 9255B dynamometer was used to measure the cutting force signals, which were transmitted, analyzed, and displayed in the DasyLab software. Various force signals from the machining operation were analyzed, each with its own I-kaz 3D coefficient. This coefficient was examined and its relationship with flank wear land (VB) determined. A regression model was developed from this relationship, and its results show that the I-kaz 3D coefficient value decreases as tool wear increases. The result can then be used for real-time tool wear monitoring.

  10. Coherent Doppler lidar signal covariance including wind shear and wind turbulence

    NASA Technical Reports Server (NTRS)

    Frehlich, R. G.

    1993-01-01

    The performance of coherent Doppler lidar is determined by the statistics of the coherent Doppler signal. The derivation and calculation of the covariance of the Doppler lidar signal are presented for random atmospheric wind fields with wind shear. The random component is described by a Kolmogorov turbulence spectrum. The signal parameters are clarified for a general coherent Doppler lidar system. There are two distinct physical regimes: one where the transmitted pulse determines the signal statistics and the other where the wind field dominates the signal statistics. The Doppler shift of the signal is identified in terms of the wind field and system parameters.

  11. The development of algorithms for the deployment of new version of GEM-detector-based acquisition system

    NASA Astrophysics Data System (ADS)

    Krawczyk, Rafał D.; Czarski, Tomasz; Kolasiński, Piotr; Linczuk, Paweł; Poźniak, Krzysztof T.; Chernyshova, Maryna; Kasprowicz, Grzegorz; Wojeński, Andrzej; Zabolotny, Wojciech; Zienkiewicz, Paweł

    2016-09-01

    This article is an overview of the post-processing algorithms implemented while developing and testing the GEM-detector-based acquisition system. Information is given on MEX functions for extended statistics collection, the unified hex topology, and the optimized S-DAQ algorithm for splitting overlapped signals. An additional discussion of bottlenecks and the major factors concerning optimization is presented.

  12. Bounding filter - A simple solution to lack of exact a priori statistics.

    NASA Technical Reports Server (NTRS)

    Nahi, N. E.; Weiss, I. M.

    1972-01-01

    Wiener and Kalman-Bucy estimation problems assume that the models describing the signal and noise stochastic processes are exactly known. When this modeling information, i.e., the signal and noise spectral densities for the Wiener filter and the signal and noise dynamic system and disturbing noise representations for Kalman-Bucy filtering, is inexactly known, the filter's performance is suboptimal and may even exhibit apparent divergence. In this paper a system is designed whereby the actual estimation error covariance is bounded by the covariance calculated by the estimator. The estimator therefore provides a bound on the actual error covariance, which is itself unavailable, and also prevents apparent divergence.

  13. Breaking chaotic secure communication using a spectrogram

    NASA Astrophysics Data System (ADS)

    Yang, Tao; Yang, Lin-Bao; Yang, Chun-Mei

    1998-10-01

    We present the results of breaking a kind of chaotic secure communication system called the chaotic switching scheme, also known as chaotic shift keying, in which a binary message signal is scrambled by two chaotic attractors. The spectrogram, which can reveal the evolution of energy in the spectral-temporal space, is used to distinguish the two chaotic attractors, which are qualitatively and statistically similar in phase space. Mathematical morphological filters are then used to decode the binary message signal without knowledge of either the message or the transmitter. Computer experimental results show how our method works when either a chaotic or a hyper-chaotic transmitter is used.
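
    A minimal sketch of the spectrogram idea: windowed DFT magnitudes classify each symbol interval by its dominant spectral content. For illustration only, the two chaotic attractors are stood in for by two carrier frequencies; real chaotic switching would require the full attractor dynamics.

```python
import math, cmath

def spectrogram(x, win, hop):
    # magnitude spectra of consecutive windows (a plain DFT short-time transform)
    frames = []
    for start in range(0, len(x) - win + 1, hop):
        seg = x[start:start + win]
        frames.append([abs(sum(seg[n] * cmath.exp(-2j * math.pi * k * n / win)
                               for n in range(win)))
                       for k in range(win // 2)])
    return frames

# toy stand-in: the two "attractors" are emulated as two carrier frequencies
fs, win = 1000.0, 250
bits = [0, 1, 1, 0]
x = []
for b in bits:
    f = 50.0 if b == 0 else 120.0
    x.extend(math.sin(2 * math.pi * f * n / fs) for n in range(win))

decoded = []
for frame in spectrogram(x, win, hop=win):
    k = max(range(len(frame)), key=frame.__getitem__)   # dominant bin
    decoded.append(0 if k * fs / win < 85.0 else 1)     # threshold between carriers
```

    Each window's dominant frequency recovers the hidden bit, which is the role the spectrogram plays against the two statistically similar attractors.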

  14. Compensation for the signal processing characteristics of ultrasound B-mode scanners in adaptive speckle reduction.

    PubMed

    Crawford, D C; Bell, D S; Bamber, J C

    1993-01-01

    A systematic method to compensate for nonlinear amplification of individual ultrasound B-scanners has been investigated in order to optimise performance of an adaptive speckle reduction (ASR) filter for a wide range of clinical ultrasonic imaging equipment. Three potential methods have been investigated: (1) a method involving an appropriate selection of the speckle recognition feature was successful when the scanner signal processing executes simple logarithmic compressions; (2) an inverse transform (decompression) of the B-mode image was effective in correcting for the measured characteristics of image data compression when the algorithm was implemented in full floating point arithmetic; (3) characterising the behaviour of the statistical speckle recognition feature under conditions of speckle noise was found to be the method of choice for implementation of the adaptive speckle reduction algorithm in limited precision integer arithmetic. In this example, the statistical features of variance and mean were investigated. The third method may be implemented on commercially available fast image processing hardware and is also better suited for transfer into dedicated hardware to facilitate real-time adaptive speckle reduction. A systematic method is described for obtaining ASR calibration data from B-mode images of a speckle producing phantom.

  15. Statistics of the Vestibular Input Experienced during Natural Self-Motion: Implications for Neural Processing

    PubMed Central

    Carriot, Jérome; Jamali, Mohsen; Chacron, Maurice J.

    2014-01-01

    It is widely believed that sensory systems are optimized for processing stimuli occurring in the natural environment. However, it remains unknown whether this principle applies to the vestibular system, which contributes to essential brain functions ranging from the most automatic reflexes to spatial perception and motor coordination. Here we quantified, for the first time, the statistics of natural vestibular inputs experienced by freely moving human subjects during typical everyday activities. Although previous studies have found that the power spectra of natural signals across sensory modalities decay as a power law (i.e., as 1/fα), we found that this did not apply to natural vestibular stimuli. Instead, power decreased slowly at lower and more rapidly at higher frequencies for all motion dimensions. We further establish that this unique stimulus structure is the result of active motion as well as passive biomechanical filtering occurring before any neural processing. Notably, the transition frequency (i.e., frequency at which power starts to decrease rapidly) was lower when subjects passively experienced sensory stimulation than when they actively controlled stimulation through their own movement. In contrast to signals measured at the head, the spectral content of externally generated (i.e., passive) environmental motion did follow a power law. Specifically, transformations caused by both motor control and biomechanics shape the statistics of natural vestibular stimuli before neural processing. We suggest that the unique structure of natural vestibular stimuli will have important consequences on the neural coding strategies used by this essential sensory system to represent self-motion in everyday life. PMID:24920638

  16. The language of gene ontology: a Zipf's law analysis.

    PubMed

    Kalankesh, Leila Ranandeh; Stevens, Robert; Brass, Andy

    2012-06-07

    Most major genome projects and sequence databases provide a GO annotation of their data, either automatically or through human annotators, creating a large corpus of data written in the language of GO. Texts written in natural language show a statistical power law behaviour, Zipf's law, the exponent of which can provide useful information on the nature of the language being used. We have therefore explored the hypothesis that collections of GO annotations will show similar statistical behaviours to natural language. Annotations from the Gene Ontology Annotation project were found to follow Zipf's law. Surprisingly, the measured power law exponents were consistently different between annotation captured using the three GO sub-ontologies in the corpora (function, process and component). On filtering the corpora using GO evidence codes we found that the value of the measured power law exponent responded in a predictable way as a function of the evidence codes used to support the annotation. Techniques from computational linguistics can provide new insights into the annotation process. GO annotations show similar statistical behaviours to those seen in natural language with measured exponents that provide a signal which correlates with the nature of the evidence codes used to support the annotations, suggesting that the measured exponent might provide a signal regarding the information content of the annotation.
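
    A Zipf-style analysis of an annotation corpus reduces to a rank-frequency table and a log-log slope. A sketch on a synthetic corpus (the `term%d` tokens are hypothetical stand-ins for GO terms, built so that frequency is proportional to 1/rank):

```python
import math
from collections import Counter

def zipf_exponent(tokens):
    # rank-frequency table, then the negated slope of log(freq) vs log(rank)
    freqs = sorted(Counter(tokens).values(), reverse=True)
    xs = [math.log(r) for r in range(1, len(freqs) + 1)]
    ys = [math.log(f) for f in freqs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return -slope

# synthetic corpus whose term frequencies follow f(r) ~ 1000/r,
# so the measured exponent should come out near 1
corpus = []
for rank in range(1, 51):
    corpus += ["term%d" % rank] * round(1000 / rank)
alpha = zipf_exponent(corpus)
```

    On real GO corpora the interesting quantity is how this measured exponent shifts across sub-ontologies and evidence-code filters, as the abstract describes.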

  17. Biosensor method and system based on feature vector extraction

    DOEpatents

    Greenbaum, Elias [Knoxville, TN]; Rodriguez, Jr., Miguel; Qi, Hairong [Knoxville, TN]; Wang, Xiaoling [San Jose, CA]

    2012-04-17

    A method of biosensor-based detection of toxins comprises the steps of providing at least one time-dependent control signal generated by a biosensor in a gas or liquid medium, and obtaining a time-dependent biosensor signal from the biosensor in the gas or liquid medium to be monitored or analyzed for the presence of one or more toxins selected from chemical, biological or radiological agents. The time-dependent biosensor signal is processed to obtain a plurality of feature vectors using at least one of amplitude statistics and a time-frequency analysis. At least one parameter relating to toxicity of the gas or liquid medium is then determined from the feature vectors based on reference to the control signal.
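
    The amplitude-statistics part of such a feature vector might look like the following sketch; the Gaussian traces and the 0.8 baseline shift are invented stand-ins for biosensor signals, and the distance-to-control comparison is one simple way to flag a departure from the control signal.

```python
import math, random

def amplitude_features(x):
    # feature vector of amplitude statistics: mean, variance, skewness, excess kurtosis
    n = len(x)
    mu = sum(x) / n
    m2 = sum((v - mu) ** 2 for v in x) / n
    m3 = sum((v - mu) ** 3 for v in x) / n
    m4 = sum((v - mu) ** 4 for v in x) / n
    return (mu, m2, m3 / m2 ** 1.5, m4 / m2 ** 2 - 3.0)

def distance(a, b):
    # Euclidean distance between two feature vectors
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))

random.seed(1)
control = [random.gauss(0.0, 1.0) for _ in range(5000)]        # control signal
exposed = [random.gauss(0.0, 1.0) + 0.8 for _ in range(5000)]  # shifted baseline
f_ctrl = amplitude_features(control)
d = distance(amplitude_features(exposed), f_ctrl)              # large d -> flag
```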

  18. High-Level Prediction Signals in a Low-Level Area of the Macaque Face-Processing Hierarchy.

    PubMed

    Schwiedrzik, Caspar M; Freiwald, Winrich A

    2017-09-27

    Theories like predictive coding propose that lower-order brain areas compare their inputs to predictions derived from higher-order representations and signal their deviation as a prediction error. Here, we investigate whether the macaque face-processing system, a three-level hierarchy in the ventral stream, employs such a coding strategy. We show that after statistical learning of specific face sequences, the lower-level face area ML computes the deviation of actual from predicted stimuli. But these signals do not reflect the tuning characteristic of ML. Rather, they exhibit identity specificity and view invariance, the tuning properties of higher-level face areas AL and AM. Thus, learning appears to endow lower-level areas with the capability to test predictions at a higher level of abstraction than what is afforded by the feedforward sweep. These results provide evidence for computational architectures like predictive coding and suggest a new quality of functional organization of information-processing hierarchies beyond pure feedforward schemes. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. TreSpEx—Detection of Misleading Signal in Phylogenetic Reconstructions Based on Tree Information

    PubMed Central

    Struck, Torsten H

    2014-01-01

    Phylogenies of species or genes are commonplace nowadays in many areas of comparative biological studies. However, in phylogenetic reconstructions one must be aware of artificial signals such as paralogy, long-branch attraction, saturation, or conflict between different datasets. These signals can mislead the reconstruction even in phylogenomic studies employing hundreds of genes. Unfortunately, no program has allowed the detection of such effects in combination with an implementation into automatic process pipelines. TreSpEx (Tree Space Explorer) now combines different approaches (including statistical tests) that utilize tree-based information such as nodal support or patristic distances (PDs) to identify misleading signals. The program enables the parallel analysis of hundreds of trees and/or predefined gene partitions, and being command-line driven, it can be integrated into automatic process pipelines. TreSpEx is implemented in Perl and supported on Linux, Mac OS X, and MS Windows. Source code, binaries, and additional material are freely available at http://www.annelida.de/research/bioinformatics/software.html. PMID:24701118

  20. A simple iterative independent component analysis algorithm for vibration source signal identification of complex structures

    NASA Astrophysics Data System (ADS)

    Lee, Dong-Sup; Cho, Dae-Seung; Kim, Kookhyun; Jeon, Jae-Jin; Jung, Woo-Jin; Kang, Myeng-Hwan; Kim, Jae-Ho

    2015-01-01

    Independent Component Analysis (ICA), one of the blind source separation methods, can extract unknown source signals from received signals alone. This is accomplished by finding statistical independence among signal mixtures, and the approach has been successfully applied in myriad fields such as medical science, image processing, and numerous others. Nevertheless, inherent problems have been reported with this technique: instability and invalid ordering of the separated signals, particularly when a conventional ICA technique is used for vibration source signal identification in complex structures. In this study, a simple iterative variant of conventional ICA is proposed to mitigate these problems. To extract more stable source signals in a valid order, the proposed method iteratively reorders the extracted mixing matrix and reconstructs the finally converged source signals, guided by the magnitudes of the correlation coefficients between the intermediately separated signals and signals measured on or near the sources. To review the problems of the conventional ICA technique and to validate the proposed method, numerical analyses were carried out for a virtual response model and a 30 m class submarine model. Moreover, to investigate the applicability of the proposed method to a real complex structure, an experiment was carried out on a scaled submarine mockup. The results show that the proposed method resolves the inherent problems of a conventional ICA technique.
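
    The reordering step, which matches each separated component to the near-source reference with the largest absolute correlation (sign flips being irrelevant), can be sketched as follows. The sinusoidal "sources" are toy stand-ins, not the paper's submarine data.

```python
import math

def corr(a, b):
    # Pearson correlation coefficient of two equal-length sequences
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a)) * \
          math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / den

def reorder_by_reference(separated, references):
    # give each reference the as-yet-unassigned component with highest |correlation|
    order, used = [], set()
    for ref in references:
        best = max((i for i in range(len(separated)) if i not in used),
                   key=lambda i: abs(corr(separated[i], ref)))
        order.append(best)
        used.add(best)
    return [separated[i] for i in order]

# toy "sources" recovered by ICA in permuted order and with a sign flip
t = [i / 100.0 for i in range(400)]
s1 = [math.sin(2 * math.pi * 1.0 * u) for u in t]
s2 = [math.sin(2 * math.pi * 3.5 * u) for u in t]
separated = [[-v for v in s2], s1]
aligned = reorder_by_reference(separated, [s1, s2])
```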

  1. On a Quantum Model of Brain Activities

    NASA Astrophysics Data System (ADS)

    Fichtner, K.-H.; Fichtner, L.; Freudenberg, W.; Ohya, M.

    2010-01-01

    One of the main activities of the brain is the recognition of signals. A first attempt to explain the process of recognition in terms of quantum statistics was given in [6]. Subsequently, details of the mathematical model were presented in a (still incomplete) series of papers (cf. [7, 2, 5, 10]). In the present note we want to give a general view of the principal ideas of this approach. We will introduce the basic spaces and justify the choice of spaces and operations. Further, we bring the model face to face with basic postulates any statistical model of the recognition process should fulfill. These postulates are in accordance with the opinion widely accepted in psychology and neurology.

  2. Structural health monitoring feature design by genetic programming

    NASA Astrophysics Data System (ADS)

    Harvey, Dustin Y.; Todd, Michael D.

    2014-09-01

    Structural health monitoring (SHM) systems provide real-time damage and performance information for civil, aerospace, and other high-capital or life-safety critical structures. Conventional data processing involves pre-processing and extraction of low-dimensional features from in situ time series measurements. The features are then input to a statistical pattern recognition algorithm to perform the relevant classification or regression task necessary to facilitate decisions by the SHM system. Traditional design of signal processing and feature extraction algorithms can be an expensive and time-consuming process requiring extensive system knowledge and domain expertise. Genetic programming, a heuristic program search method from evolutionary computation, was recently adapted by the authors to perform automated, data-driven design of signal processing and feature extraction algorithms for statistical pattern recognition applications. The proposed method, called Autofead, is particularly suitable to handle the challenges inherent in algorithm design for SHM problems where the manifestation of damage in structural response measurements is often unclear or unknown. Autofead mines a training database of response measurements to discover information-rich features specific to the problem at hand. This study provides experimental validation on three SHM applications including ultrasonic damage detection, bearing damage classification for rotating machinery, and vibration-based structural health monitoring. Performance comparisons with common feature choices for each problem area are provided demonstrating the versatility of Autofead to produce significant algorithm improvements on a wide range of problems.

  3. A New Statistical Model of Electroencephalogram Noise Spectra for Real-Time Brain-Computer Interfaces.

    PubMed

    Paris, Alan; Atia, George K; Vosoughi, Azadeh; Berman, Stephen A

    2017-08-01

    A characteristic of neurological signal processing is high levels of noise, from subcellular ion channels up to whole-brain processes. In this paper, we propose a new model of electroencephalogram (EEG) background periodograms, based on a family of functions which we call generalized van der Ziel-McWhorter (GVZM) power spectral densities (PSDs). To the best of our knowledge, the GVZM PSD function is the only EEG noise model that has relatively few parameters, matches recorded EEG PSDs with high accuracy from 0 to over 30 Hz, and has approximately 1/f^θ behavior in the midfrequencies without infinities. We validate this model using three approaches. First, we show how GVZM PSDs can arise in a population of ion channels at maximum entropy equilibrium. Second, we present a class of mixed autoregressive models, which simulate brain background noise and whose periodograms are asymptotic to the GVZM PSD. Third, we present two real-time estimation algorithms for steady-state visual evoked potential (SSVEP) frequencies, and analyze their performance statistically. In pairwise comparisons, the GVZM-based algorithms showed statistically significant accuracy improvements over two well-known and widely used SSVEP estimators. The GVZM noise model can be a useful and reliable technique for EEG signal processing. Understanding EEG noise is essential for EEG-based neurology and applications such as real-time brain-computer interfaces, which must make accurate control decisions from very short data epochs. The GVZM approach represents a successful new paradigm for understanding and managing this neurological noise.

  4. Sources of Safety Data and Statistical Strategies for Design and Analysis: Transforming Data Into Evidence.

    PubMed

    Ma, Haijun; Russek-Cohen, Estelle; Izem, Rima; Marchenko, Olga V; Jiang, Qi

    2018-03-01

    Safety evaluation is a key aspect of medical product development. It is a continual and iterative process requiring thorough thinking, and dedicated time and resources. In this article, we discuss how safety data are transformed into evidence to establish and refine the safety profile of a medical product, and how the focus of safety evaluation, data sources, and statistical methods change throughout a medical product's life cycle. Some challenges and statistical strategies for medical product safety evaluation are discussed. Examples of safety issues identified in different periods, that is, premarketing and postmarketing, are discussed to illustrate how different sources are used in the safety signal identification and the iterative process of safety assessment. The examples highlighted range from commonly used pediatric vaccine given to healthy children to medical products primarily used to treat a medical condition in adults. These case studies illustrate that different products may require different approaches, and once a signal is discovered, it could impact future safety assessments. Many challenges still remain in this area despite advances in methodologies, infrastructure, public awareness, international harmonization, and regulatory enforcement. Innovations in safety assessment methodologies are pressing in order to make the medical product development process more efficient and effective, and the assessment of medical product marketing approval more streamlined and structured. Health care payers, providers, and patients may have different perspectives when weighing in on clinical, financial and personal needs when therapies are being evaluated.

  5. The effect of normalization of Partial Directed Coherence on the statistical assessment of connectivity patterns: a simulation study.

    PubMed

    Toppi, J; Petti, M; Vecchiato, G; Cincotti, F; Salinari, S; Mattia, D; Babiloni, F; Astolfi, L

    2013-01-01

    Partial Directed Coherence (PDC) is a spectral multivariate estimator of effective connectivity relying on the concept of Granger causality. Even though its original definition derived directly from information theory, two modifications were introduced in order to provide better physiological interpretations of the estimated networks: i) normalization of the estimator by rows, and ii) a squared transformation. In the present paper we investigated the effect of PDC normalization on the performance achieved by applying the statistical validation process to the investigated connectivity patterns under different conditions of signal-to-noise ratio (SNR) and amount of data available for the analysis. Results of the statistical analysis revealed an effect of PDC normalization only on the percentages of type I and type II errors incurred when the shuffling procedure was used for the assessment of connectivity patterns. The PDC formulation had no effect on the performance achieved when validation was instead executed by means of the asymptotic statistics approach. Moreover, the percentages of both false positives and false negatives committed by the asymptotic statistics approach were always lower than those of the shuffling procedure for each type of normalization.

  6. Statistics of a neuron model driven by asymmetric colored noise.

    PubMed

    Müller-Hansen, Finn; Droste, Felix; Lindner, Benjamin

    2015-02-01

    Irregular firing of neurons can be modeled as a stochastic process. Here we study the perfect integrate-and-fire neuron driven by dichotomous noise, a Markovian process that jumps between two states (i.e., possesses non-Gaussian statistics) and exhibits nonvanishing temporal correlations (i.e., represents a colored noise). Specifically, we consider asymmetric dichotomous noise with two different transition rates. Using a first-passage-time formulation, we derive exact expressions for the probability density and the serial correlation coefficient of the interspike interval (the time between two subsequent neural action potentials) and for the power spectrum of the spike train. Furthermore, we extend the model by including additional Gaussian white noise, and we give approximations for the interspike interval (ISI) statistics in this case. Numerical simulations are used to validate the exact analytical results for pure dichotomous noise, and to test the approximations of the ISI statistics when Gaussian white noise is included. The results may help to understand how correlations and asymmetry of noise and signals in nerve cells shape neuronal firing statistics.
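
    A simulation-only sketch of the model (the paper's exact first-passage-time expressions are not reproduced here): a perfect integrate-and-fire neuron driven by dichotomous noise, with the mean ISI checked against the mean-drift prediction threshold/mu.

```python
import random

def simulate_pif(mu, a_plus, a_minus, r_plus, r_minus, dt, n_spikes, seed=0):
    # Perfect integrate-and-fire neuron: v' = mu + eta(t), spike and reset at v = 1.
    # eta(t) is dichotomous noise jumping between +a_plus and -a_minus with
    # switching rates r_plus (out of the + state) and r_minus (out of the - state);
    # unequal rates or amplitudes make the noise asymmetric.
    rng = random.Random(seed)
    v, eta, t, last, isis = 0.0, a_plus, 0.0, 0.0, []
    while len(isis) < n_spikes:
        rate = r_plus if eta > 0 else r_minus
        if rng.random() < rate * dt:            # state switch of the noise
            eta = -a_minus if eta > 0 else a_plus
        v += (mu + eta) * dt
        t += dt
        if v >= 1.0:                            # threshold crossing -> spike, reset
            v = 0.0
            isis.append(t - last)
            last = t
    return isis

# symmetric rates and amplitudes -> the mean input is mu, so the mean ISI
# should approach threshold/mu = 1.0
isis = simulate_pif(mu=1.0, a_plus=0.5, a_minus=0.5, r_plus=2.0, r_minus=2.0,
                    dt=1e-3, n_spikes=500)
mean_isi = sum(isis) / len(isis)
```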

  7. Generalized Hurst exponent estimates differentiate EEG signals of healthy and epileptic patients

    NASA Astrophysics Data System (ADS)

    Lahmiri, Salim

    2018-01-01

    The aim of our current study is to check whether the multifractal patterns of electroencephalographic (EEG) signals of normal and epileptic patients are statistically similar or different. To this end, the generalized Hurst exponent (GHE) method is used for robust estimation of the multifractals in each type of EEG signal, and three powerful statistical tests are performed to check for differences between the GHEs estimated from healthy control subjects and from epileptic patients. The obtained results show that multifractals exist in both types of EEG signals. In particular, the degree of multifractality is more pronounced in short variations of normal EEG signals than in short variations of EEG signals with seizure-free intervals. Conversely, it is more pronounced in long variations of EEG signals with seizure-free intervals than in normal EEG signals. Importantly, both parametric and nonparametric statistical tests show strong evidence that the estimated GHEs of normal EEG signals are statistically significantly different from those with seizure-free intervals. Therefore, GHEs can be efficiently used to distinguish between healthy subjects and patients suffering from epilepsy.
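
    The GHE of order q follows from the scaling E|x(t+τ) − x(t)|^q ∝ τ^(qH(q)), estimated by a log-log regression over a range of lags. A minimal sketch (not the paper's estimator), checked on an ordinary random walk, for which H = 0.5:

```python
import math, random

def ghe(x, q=2, taus=range(1, 11)):
    # generalized Hurst exponent from E|x(t+tau)-x(t)|^q ~ tau^(q*H(q)):
    # slope of log(moment) vs log(tau), divided by q
    lt, lm = [], []
    for tau in taus:
        m = sum(abs(x[i + tau] - x[i]) ** q
                for i in range(len(x) - tau)) / (len(x) - tau)
        lt.append(math.log(tau))
        lm.append(math.log(m))
    n = len(lt)
    mt, mm = sum(lt) / n, sum(lm) / n
    slope = sum((a - mt) * (b - mm) for a, b in zip(lt, lm)) / \
            sum((a - mt) ** 2 for a in lt)
    return slope / q

random.seed(7)
walk, v = [], 0.0
for _ in range(20000):            # ordinary random walk: H should be near 0.5
    v += random.gauss(0.0, 1.0)
    walk.append(v)
H = ghe(walk)
```

    Sweeping q and examining how H(q) varies is what distinguishes monofractal from multifractal signals, the property exploited in the study above.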

  8. Intensity invariance properties of auditory neurons compared to the statistics of relevant natural signals in grasshoppers.

    PubMed

    Clemens, Jan; Weschke, Gerroth; Vogel, Astrid; Ronacher, Bernhard

    2010-04-01

    The temporal pattern of amplitude modulations (AM) is often used to recognize acoustic objects. To identify objects reliably, intensity invariant representations have to be formed. We approached this problem within the auditory pathway of grasshoppers. We presented AM patterns modulated at different time scales and intensities. Metric space analysis of neuronal responses allowed us to determine how well, how invariantly, and at which time scales AM frequency is encoded. We find that in some neurons spike-count cues contribute substantially (20-60%) to the decoding of AM frequency at a single intensity. However, such cues are not robust when intensity varies. The general intensity invariance of the system is poor. However, there exists a range of AM frequencies around 83 Hz where intensity invariance of local interneurons is relatively high. In this range, natural communication signals exhibit much variation between species, suggesting an important behavioral role for this frequency band. We hypothesize, just as has been proposed for human speech, that the communication signals might have evolved to match the processing properties of the receivers. This contrasts with optimal coding theory, which postulates that neuronal systems are adapted to the statistics of the relevant signals.

  9. Automated Monitoring with a BSP Fault-Detection Test

    NASA Technical Reports Server (NTRS)

    Bickford, Randall L.; Herzog, James P.

    2003-01-01

    The figure schematically illustrates a method and procedure for automated monitoring of an asset, as well as a hardware- and-software system that implements the method and procedure. As used here, asset could signify an industrial process, power plant, medical instrument, aircraft, or any of a variety of other systems that generate electronic signals (e.g., sensor outputs). In automated monitoring, the signals are digitized and then processed in order to detect faults and otherwise monitor operational status and integrity of the monitored asset. The major distinguishing feature of the present method is that the fault-detection function is implemented by use of a Bayesian sequential probability (BSP) technique. This technique is superior to other techniques for automated monitoring because it affords sensitivity, not only to disturbances in the mean values, but also to very subtle changes in the statistical characteristics (variance, skewness, and bias) of the monitored signals.
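
    A sequential probability test in the classical Wald SPRT form (a generic sketch of the idea, not the patented BSP implementation, and covering only a mean shift rather than variance, skewness, and bias) accumulates a log-likelihood ratio sample by sample until one of two decision boundaries is crossed:

```python
import math, random

def sprt_mean_shift(samples, mu0=0.0, mu1=1.0, sigma=1.0, alpha=1e-6, beta=1e-6):
    # Wald sequential probability ratio test for a mean shift in Gaussian data:
    # accumulate the log-likelihood ratio and stop at the first boundary crossing.
    lower = math.log(beta / (1.0 - alpha))      # accept H0 (nominal)
    upper = math.log((1.0 - beta) / alpha)      # accept H1 (fault)
    llr = 0.0
    for k, x in enumerate(samples, 1):
        llr += (mu1 - mu0) * (x - (mu0 + mu1) / 2.0) / sigma ** 2
        if llr >= upper:
            return "fault", k
        if llr <= lower:
            return "nominal", k
    return "undecided", len(samples)

random.seed(3)
healthy = [random.gauss(0.0, 1.0) for _ in range(200)]   # nominal sensor signal
faulty = [random.gauss(1.0, 1.0) for _ in range(200)]    # disturbed mean value
verdict_h, _ = sprt_mean_shift(healthy)
verdict_f, _ = sprt_mean_shift(faulty)
```

    The boundaries encode the tolerated false-alarm and missed-detection rates, which is what gives sequential tests their sensitivity at a controlled error level.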

  10. Higher-order cumulants and spectral kurtosis for early detection of subterranean termites

    NASA Astrophysics Data System (ADS)

    de la Rosa, Juan José González; Moreno Muñoz, Antonio

    2008-02-01

    This paper deals with termite detection in unfavorable SNR scenarios via signal processing using higher-order statistics. The results could be extrapolated to all impulse-like insect emissions; the setting is non-destructive termite detection. Fourth-order cumulants in the time and frequency domains enhance the detection and complete the characterization of termite emissions, which are non-Gaussian in essence. Sliding higher-order cumulants pinpoint distinctive time instances, complementing the sliding variance, which only reveals power excesses in the signal, even for low-amplitude impulses. The spectral kurtosis reveals the non-Gaussian characteristics (the peakedness of the probability density function) associated with these non-stationary measurements, especially in the near-ultrasound frequency band. Well-tested estimators have been used to compute the higher-order statistics. The novel findings are shown via graphical examples.
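
    A sliding fourth-order statistic of the kind described might be sketched as follows; the single 8σ impulse in Gaussian noise is an invented stand-in for a termite emission. The excess kurtosis spikes only in the windows covering the impulse, which a sliding variance would flag far less distinctly.

```python
import random

def sliding_excess_kurtosis(x, win):
    # fourth-moment statistic in a sliding window; peaks at impulsive events
    out = []
    for s in range(len(x) - win + 1):
        seg = x[s:s + win]
        mu = sum(seg) / win
        m2 = sum((v - mu) ** 2 for v in seg) / win
        m4 = sum((v - mu) ** 4 for v in seg) / win
        out.append(m4 / m2 ** 2 - 3.0)
    return out

random.seed(5)
noise = [random.gauss(0.0, 1.0) for _ in range(1000)]
noise[600] += 8.0                  # a single impulse buried in Gaussian noise
k = sliding_excess_kurtosis(noise, win=100)
peak = max(range(len(k)), key=k.__getitem__)   # windows 501..600 cover the impulse
```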

  11. Statistical learning and adaptive decision-making underlie human response time variability in inhibitory control.

    PubMed

    Ma, Ning; Yu, Angela J

    2015-01-01

    Response time (RT) is an oft-reported behavioral measure in psychological and neurocognitive experiments, but the high level of observed trial-to-trial variability in this measure has often limited its usefulness. Here, we combine computational modeling and psychophysics to examine the hypothesis that fluctuations in this noisy measure reflect dynamic computations in human statistical learning and corresponding cognitive adjustments. We present data from the stop-signal task (SST), in which subjects respond to a go stimulus on each trial, unless instructed not to by a subsequent, infrequently presented stop signal. We model across-trial learning of stop signal frequency, P(stop), and stop-signal onset time, SSD (stop-signal delay), with a Bayesian hidden Markov model, and within-trial decision-making with an optimal stochastic control model. The combined model predicts that RT should increase with both expected P(stop) and SSD. The human behavioral data (n = 20) bear out this prediction, showing P(stop) and SSD both to be significant, independent predictors of RT, with P(stop) being a more prominent predictor in 75% of the subjects, and SSD being more prominent in the remaining 25%. The results demonstrate that humans indeed readily internalize environmental statistics and adjust their cognitive/behavioral strategy accordingly, and that subtle patterns in RT variability can serve as a valuable tool for validating models of statistical learning and decision-making. More broadly, the modeling tools presented in this work can be generalized to a large body of behavioral paradigms, in order to extract insights about cognitive and neural processing from apparently quite noisy behavioral measures. We also discuss how this behaviorally validated model can then be used to conduct model-based analysis of neural data, in order to help identify specific brain areas for representing and encoding key computational quantities in learning and decision-making.

  12. Identifying Image Manipulation Software from Image Features

    DTIC Science & Technology

    2015-03-26

    [Reference excerpt] ISO/TS 22028-3 (ed. 2). Popescu, Alin and Hany Farid. "Exposing digital forgeries by detecting traces of resampling." IEEE Transactions on Signal Processing, 53(2):758-767, 2005. Popescu, Alin and Hany Farid. "Statistical tools for digital forensics." Information…

  13. Sleepers' Lag: Study on Motion and Attention

    ERIC Educational Resources Information Center

    Raca, Mirko; Tormey, Roland; Dillenbourg, Pierre

    2016-01-01

    Body language is an essential source of information in everyday communication. Low signal-to-noise ratio prevents us from using it in the automatic processing of student behaviour, an obstacle that we are slowly overcoming with advanced statistical methods. Instead of profiling individual behaviour of students in the classroom, the idea is to…

  14. Parameter Estimation for Real Filtered Sinusoids

    DTIC Science & Technology

    1997-09-01

    [Reference excerpt] Statistical Signal Processing: Detection, Estimation and Time Series Analysis. New York: Addison-Wesley, 1991. Serway, Raymond A. Physics for… Appendix A: Vector-Matrix Differentiation.

  15. Automated Identification and Shape Analysis of Chorus Elements in the Van Allen Radiation Belts

    NASA Astrophysics Data System (ADS)

    Sen Gupta, Ananya; Kletzing, Craig; Howk, Robin; Kurth, William; Matheny, Morgan

    2017-12-01

    An important goal of the Van Allen Probes mission is to understand wave-particle interaction by chorus emissions in the terrestrial Van Allen radiation belts. To test models, statistical characterization of chorus properties, such as amplitude variation and sweep rates, is an important scientific goal. The Electric and Magnetic Field Instrument Suite and Integrated Science (EMFISIS) instrumentation suite provides measurements of wave electric and magnetic fields as well as DC magnetic fields for the Van Allen Probes mission. However, manual inspection across terabytes of EMFISIS data is not feasible, and selective inspection introduces human confirmation bias. We present signal processing techniques for automated identification, shape analysis, and sweep rate characterization of high-amplitude whistler-mode chorus elements in the Van Allen radiation belts. Specifically, we develop signal processing techniques based on the Radon transform that disambiguate chorus elements with a dominant sweep rate from hiss-like chorus. We present representative results validating our techniques and also provide statistical characterization of detected chorus elements across a case study of a 6 s epoch.
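    The Radon-style idea of separating swept chorus from hiss can be illustrated in miniature (a toy line-integral detector on a synthetic time-frequency image, not the EMFISIS pipeline; all sizes and slope values are hypothetical):

```python
import numpy as np

# Synthetic time-frequency image (64 time x 64 frequency bins) with a rising
# chorus-like ridge of slope 0.5 frequency bins per time bin, plus weak noise.
rng = np.random.default_rng(0)
T, F = 64, 64
img = 0.05 * rng.random((T, F))
for t in range(T):
    img[t, int(t * 0.5)] += 1.0

def sweep_energy(img, slope):
    """Radon-style line integral of intensity along f = slope * t."""
    total = 0.0
    for t in range(img.shape[0]):
        f = int(t * slope)
        if f < img.shape[1]:
            total += img[t, f]
    return total

# Scan candidate sweep rates; a dominant-sweep chorus element concentrates its
# energy along one slope, while hiss-like emission spreads it over all slopes.
slopes = np.linspace(0.0, 1.0, 21)
best = slopes[int(np.argmax([sweep_energy(img, s) for s in slopes]))]
# the detected sweep rate matches the injected slope of 0.5
```

    A real implementation would integrate over offsets as well as slopes (the full Radon transform), but the energy-concentration principle is the same.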

  16. Performance analysis of gamma ray spectrometric parameters on digital signal and analog signal processing based MCA systems using NaI(Tl) detector.

    PubMed

    Kukreti, B M; Sharma, G K

    2012-05-01

    Accurate and rapid estimation of ppm-range uranium and thorium in geological and rock samples is valuable for ongoing uranium investigations and the identification of favorable radioactive zones in exploration field areas. In this study, given the existing 5 in. × 4 in. NaI(Tl) detector setup and the prevailing background and time constraints, an enhanced geometrical setup has been worked out to improve the minimum detection limits for the primordial radioelements K(40), U(238) and Th(232). This geometrical setup has been integrated with the newly introduced digital signal processing based MCA (dMCA) system for the routine spectrometric analysis of low-concentration rock samples. The stability of the digital signal processing MCA system and of its predecessor, a NIM-bin based MCA system, during long counting hours has been monitored using the concept of statistical process control. Results monitored over a span of a few months have been quantified in terms of spectrometer parameters, such as Compton stripping constants and channel sensitivities, used for evaluating primordial radioelement concentrations (K(40), U(238) and Th(232)) in geological samples. The results indicate stable dMCA performance, with a tendency toward higher relative variance about the mean, particularly for the Compton stripping constants. Copyright © 2012 Elsevier Ltd. All rights reserved.
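    The statistical process control concept used for the stability monitoring can be illustrated with an individuals control chart; the readings below are hypothetical, not the paper's measurements:

```python
import numpy as np

def control_limits(x):
    """Individuals-chart limits from the average moving range (d2 = 1.128 for
    subgroups of size 2): center line plus/minus three estimated sigma."""
    x = np.asarray(x, dtype=float)
    mr = np.abs(np.diff(x)).mean()        # average moving range
    center = x.mean()
    sigma = mr / 1.128
    return center - 3 * sigma, center, center + 3 * sigma

# Hypothetical daily measurements of a stripping constant, normalized to 1
readings = [1.02, 1.00, 1.01, 0.99, 1.03, 1.00, 1.01]
lcl, center, ucl = control_limits(readings)
out_of_control = [r for r in readings if not lcl <= r <= ucl]
# an empty out_of_control list means the parameter is in statistical control
```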

  17. Frequency Domain Analysis of Sensor Data for Event Classification in Real-Time Robot Assisted Deburring

    PubMed Central

    Pappachan, Bobby K; Caesarendra, Wahyu; Tjahjowidodo, Tegoeh; Wijaya, Tomi

    2017-01-01

    Process monitoring using indirect methods relies on the use of sensors. Using sensors to acquire vital process-related information also presents the problem of big-data management and analysis. Because of uncertainty in the frequency of events, a higher sampling rate is often used in real-time monitoring applications to increase the chances of capturing and understanding all possible events related to the process. Advanced signal processing methods are then used to extract meaningful information from the acquired data. In this research work, the power spectral density (PSD) of sensor data acquired at sampling rates between 40 and 51.2 kHz was calculated, and the correlation between PSD and the completed number of cycles/passes is presented. Here, the progress in the number of cycles/passes is the event this work aims to classify, and the algorithm used to compute the PSD is Welch’s estimate method. A comparison between Welch’s estimate method and statistical methods is also discussed. A clear correlation was observed using Welch’s estimate to classify the number of cycles/passes. The paper also succeeds in separating the vibration signal generated by the spindle from the vibration signal acquired during the finishing process. PMID:28556809
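    Welch's estimate, the PSD algorithm named above, is available in standard libraries; a minimal sketch on a synthetic vibration signal (the 5 kHz tone is an assumed illustration; only the sampling rate is taken from the 40-51.2 kHz range reported):

```python
import numpy as np
from scipy.signal import welch

fs = 51200.0                       # Hz, the top of the reported sampling range
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(1)
# Synthetic spindle-like vibration: a 5 kHz tone buried in white noise
x = np.sin(2 * np.pi * 5000 * t) + 0.5 * rng.standard_normal(t.size)

# Welch's method: average periodograms of overlapping windowed segments
f, pxx = welch(x, fs=fs, nperseg=2048)
peak_hz = f[np.argmax(pxx)]
# the dominant PSD bin falls at the injected 5 kHz component
```

    Segment averaging is what makes Welch's estimate far less noisy than a single periodogram, at the cost of frequency resolution (fs/nperseg, here 25 Hz).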

  18. Optical rogue-wave-like extreme value fluctuations in fiber Raman amplifiers.

    PubMed

    Hammani, Kamal; Finot, Christophe; Dudley, John M; Millot, Guy

    2008-10-13

    We report the experimental observation and characterization of rogue-wave-like extreme value statistics arising from pump-signal noise transfer in a fiber Raman amplifier. Specifically, by exploiting Raman amplification with an incoherent pump, the amplified signal is shown to develop a series of temporal intensity spikes whose peak power follows a power-law probability distribution. The results are interpreted using a numerical model of the Raman gain process based on coupled nonlinear Schrödinger equations, whose predictions are in good agreement with experiment.

  19. Statistical evaluation of GLONASS amplitude scintillation over low latitudes in the Brazilian territory

    NASA Astrophysics Data System (ADS)

    de Oliveira Moraes, Alison; Muella, Marcio T. A. H.; de Paula, Eurico R.; de Oliveira, César B. A.; Terra, William P.; Perrella, Waldecir J.; Meibach-Rosa, Pâmela R. P.

    2018-04-01

    Ionospheric scintillation, generated by ionospheric plasma irregularities, affects the radio signals that pass through the ionosphere. These effects are widely studied in the literature with two different approaches: the first uses radio signals to study and understand the morphology of the phenomenon, while the second seeks to understand and model how much the phenomenon interferes with radio signals and, consequently, with the services these systems provide. Interest from several areas, particularly safety-of-life applications, has grown in the concept of satellite multi-constellation, which consists of receiving, processing and using data from different navigation and positioning systems. Although there is a vast literature analyzing the effects of ionospheric scintillation on satellite navigation systems, studies using signals received from the Russian satellite positioning system (GLONASS) are still very rare. This work presents, for the first time in the Brazilian low-latitude sector, a statistical analysis of ionospheric scintillation data for all levels of magnetic activity, obtained by a set of scintillation monitors that receive signals from the GLONASS system. Data collected from four stations were used in the analysis: Fortaleza, Presidente Prudente, São José dos Campos and Porto Alegre. The GLONASS L-band signals were analyzed for the period from December 21, 2012 to June 20, 2016, which includes the peak of solar cycle 24 in 2014. 
The main characteristics of scintillation presented in this study include: (1) a statistical evaluation by season and solar activity, showing the chances that a user under similar geophysical conditions may be susceptible to the effects of ionospheric scintillation; (2) a temporal analysis based on the local-time distribution of scintillation at different seasons and intensity levels; and (3) an evaluation of the number of simultaneously affected channels and its effect on the dilution of precision (DOP) for GNSS users, indicating the times at which navigation will be most susceptible to such effects. Relevant results on these statistical characteristics of scintillation are presented and analyzed, providing information about the availability of a navigation system.

  20. Kepler AutoRegressive Planet Search: Motivation & Methodology

    NASA Astrophysics Data System (ADS)

    Caceres, Gabriel; Feigelson, Eric; Jogesh Babu, G.; Bahamonde, Natalia; Bertin, Karine; Christen, Alejandra; Curé, Michel; Meza, Cristian

    2015-08-01

    The Kepler AutoRegressive Planet Search (KARPS) project uses statistical methodology associated with autoregressive (AR) processes to model Kepler lightcurves in order to improve exoplanet transit detection in systems with high stellar variability. We also introduce a planet-search algorithm to detect transits in time-series residuals after application of the AR models. One of the main obstacles in detecting faint planetary transits is the intrinsic stellar variability of the host star. The variability displayed by many stars may have autoregressive properties, wherein later flux values are correlated with previous ones in some manner. Auto-Regressive Moving-Average (ARMA) models, Generalized Auto-Regressive Conditional Heteroskedasticity (GARCH), and related models are flexible, phenomenological methods used with great success to model stochastic temporal behaviors in many fields of study, particularly econometrics. Powerful statistical methods are implemented in the public statistical software environment R and its many packages. Modeling involves maximum likelihood fitting, model selection, and residual analysis. These techniques provide a useful framework to model stellar variability and are used in KARPS with the objective of reducing stellar noise to enhance opportunities to find as-yet-undiscovered planets. Our analysis procedure consists of three steps: pre-processing of the data to remove discontinuities, gaps and outliers; ARMA-type model selection and fitting; and a transit-signal search of the residuals using a new Transit Comb Filter (TCF) that replaces traditional box-finding algorithms. We apply these procedures to simulated Kepler-like time series with known stellar and planetary signals to evaluate their effectiveness. The ARMA-type modeling is effective at reducing stellar noise, but it also reduces and transforms the transit signal into ingress/egress spikes. 
A periodogram based on the TCF is constructed to concentrate the signal of these periodic spikes. When a periodic transit is found, the model is displayed on a standard period-folded averaged light curve. We also illustrate the efficient coding in R.
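    The AR-modeling step can be sketched in miniature: fit an AR(p) model by least squares to a synthetic AR(1) "lightcurve" and inspect the residuals (an illustration only, not the KARPS ARMA/GARCH pipeline, which is implemented in R with maximum likelihood):

```python
import numpy as np

def fit_ar(x, p):
    """Least-squares fit of an AR(p) model; returns coefficients and residuals."""
    X = np.column_stack([x[p - k - 1 : len(x) - k - 1] for k in range(p)])
    y = x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef, y - X @ coef

rng = np.random.default_rng(2)
n = 2000
x = np.zeros(n)
for i in range(1, n):             # AR(1) "stellar variability" with phi = 0.9
    x[i] = 0.9 * x[i - 1] + rng.standard_normal()

coef, resid = fit_ar(x, p=1)
# the fitted coefficient recovers the generating value (about 0.9), and the
# residuals approximate the white innovations a transit search would then scan
```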

  1. Competitive Processes in Cross-Situational Word Learning

    PubMed Central

    Yurovsky, Daniel; Yu, Chen; Smith, Linda B.

    2013-01-01

    Cross-situational word learning, like any statistical learning problem, involves tracking the regularities in the environment. But the information that learners pick up from these regularities is dependent on their learning mechanism. This paper investigates the role of one type of mechanism in statistical word learning: competition. Competitive mechanisms would allow learners to find the signal in noisy input, and would help to explain the speed with which learners succeed in statistical learning tasks. Because cross-situational word learning provides information at multiple scales – both within and across trials/situations – learners could implement competition at either or both of these scales. A series of four experiments demonstrate that cross-situational learning involves competition at both levels of scale, and that these mechanisms interact to support rapid learning. The impact of both of these mechanisms is then considered from the perspective of a process-level understanding of cross-situational learning. PMID:23607610

  2. Competitive processes in cross-situational word learning.

    PubMed

    Yurovsky, Daniel; Yu, Chen; Smith, Linda B

    2013-07-01

    Cross-situational word learning, like any statistical learning problem, involves tracking the regularities in the environment. However, the information that learners pick up from these regularities is dependent on their learning mechanism. This article investigates the role of one type of mechanism in statistical word learning: competition. Competitive mechanisms would allow learners to find the signal in noisy input and would help to explain the speed with which learners succeed in statistical learning tasks. Because cross-situational word learning provides information at multiple scales – both within and across trials/situations – learners could implement competition at either or both of these scales. A series of four experiments demonstrate that cross-situational learning involves competition at both levels of scale, and that these mechanisms interact to support rapid learning. The impact of both of these mechanisms is considered from the perspective of a process-level understanding of cross-situational learning. Copyright © 2013 Cognitive Science Society, Inc.

  3. Sparsity-Cognizant Algorithms with Applications to Communications, Signal Processing, and the Smart Grid

    NASA Astrophysics Data System (ADS)

    Zhu, Hao

    Sparsity plays an instrumental role in a plethora of scientific fields, including statistical inference for variable selection, parsimonious signal representations, and solving under-determined systems of linear equations, which has led to the ground-breaking result of compressive sampling (CS). This Thesis leverages ideas from sparse signal reconstruction to develop sparsity-cognizant algorithms and analyze their performance. The vision is to devise tools exploiting the 'right' form of sparsity for the 'right' application domains: multiuser communication systems, array signal processing systems, and the emerging challenges in the smart power grid. Two important power system monitoring tasks are addressed first by capitalizing on the hidden sparsity. To robustify power system state estimation, a sparse outlier model is leveraged to capture the possible corruption in every datum, while the problem nonconvexity due to nonlinear measurements is handled using the semidefinite relaxation technique. Unlike existing iterative methods, the proposed algorithm closely approximates the global optimum regardless of initialization. In addition, for enhanced situational awareness, a novel sparse overcomplete representation is introduced to capture (possibly multiple) line outages, and real-time algorithms are developed for solving the combinatorially complex identification problem. The proposed algorithms exhibit near-optimal performance while incurring only linear complexity in the number of lines, which makes it possible to quickly bring contingencies to attention. This Thesis also accounts for two basic issues in CS, namely fully-perturbed models and the finite alphabet property. The sparse total least-squares (S-TLS) approach is proposed to furnish CS algorithms for fully-perturbed linear models, leading to statistically optimal and computationally efficient solvers. 
The S-TLS framework is well motivated for grid-based sensing applications and exhibits higher accuracy than existing sparse algorithms. On the other hand, the finite-alphabet property of unknown signals arises naturally in communication systems, along with sparsity stemming from the low activity of each user. Compared to approaches accounting for only one of the two, joint exploitation of both leads to statistically optimal detectors with improved error performance.
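    The Thesis's own solvers (S-TLS, semidefinite relaxation) are beyond a snippet, but the basic sparse-recovery setting it builds on (an under-determined linear system with a sparse unknown) can be sketched with generic orthogonal matching pursuit on hypothetical data:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily pick the column most correlated
    with the residual, then re-fit by least squares on the chosen support."""
    resid = y.astype(float)
    support = []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ resid))))
        cols = A[:, support]
        coef, *_ = np.linalg.lstsq(cols, y, rcond=None)
        resid = y - cols @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((60, 100))       # under-determined: 60 equations, 100 unknowns
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [2.0, -2.5, 3.0]   # 3-sparse ground truth
x_hat = omp(A, A @ x_true, k=3)
# the sparse vector is recovered exactly from only 60 noiseless measurements
```

    This is the CS phenomenon the abstract refers to: sparsity makes an otherwise ill-posed inversion well-posed.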

  4. The sequentially discounting autoregressive (SDAR) method for on-line automatic seismic event detecting on long term observation

    NASA Astrophysics Data System (ADS)

    Wang, L.; Toshioka, T.; Nakajima, T.; Narita, A.; Xue, Z.

    2017-12-01

    In recent years, more and more Carbon Capture and Storage (CCS) studies focus on seismicity monitoring. For the safety management of geological CO2 storage at Tomakomai, Hokkaido, Japan, an Advanced Traffic Light System (ATLS) combining different seismic attributes (magnitudes, phases, distributions, etc.) is proposed for injection control. The primary task for ATLS is seismic event detection in a long-term, sustained time-series record. Because the time-varying signal-to-noise ratio (SNR) of a long-term record and the uneven energy distribution of seismic event waveforms increase the difficulty of automatic detection, in this work an improved probabilistic autoregressive (AR) method for automatic seismic event detection is applied. This algorithm, called sequentially discounting AR learning (SDAR), can identify effective seismic events in the time series through change point detection (CPD) on the seismic record. In this method, an anomaly signal (seismic event) is modeled as a change point in the time series (seismic record): the statistical model of the signal in the neighborhood of the event point changes when a seismic event occurs. That is, SDAR aims to find the statistical irregularities of the record through CPD. SDAR has three advantages. (1) Noise robustness: SDAR does not use waveform attributes (such as amplitude, energy, or polarization) for detection, so it is an appropriate technique for low-SNR data. (2) Real-time estimation: when new data appear in the record, the probability distribution models can be automatically updated by SDAR for on-line processing. (3) Discounting property: SDAR introduces a discounting parameter to decrease the influence of older observations on the current estimates, which makes SDAR robust for non-stationary signal processing. 
With these three advantages, the SDAR method can handle non-stationary, time-varying long-term series and achieve real-time monitoring. Finally, we apply SDAR to a synthetic model and to Tomakomai Ocean Bottom Cable (OBC) baseline data to demonstrate the feasibility and advantages of our method.
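    The SDAR idea (discounted online estimates plus a prediction-error score for change point detection) can be sketched as follows; the update rules and parameters are a simplified AR(1) illustration, not the authors' implementation:

```python
import numpy as np

def sdar_scores(x, r=0.05):
    """Sequentially discounted AR(1) learning: exponentially forgetting
    estimates of the mean, lag-1 covariance, and error power; the anomaly
    score is the variance-normalized squared one-step prediction error."""
    mu, c1, c0, var = 0.0, 0.0, 1e-6, 1.0
    scores = []
    for i in range(1, len(x)):
        pred = mu + (c1 / c0) * (x[i - 1] - mu)
        err = x[i] - pred
        scores.append(err * err / var)
        mu = (1 - r) * mu + r * x[i]                        # discounted mean
        c1 = (1 - r) * c1 + r * (x[i] - mu) * (x[i - 1] - mu)
        c0 = (1 - r) * c0 + r * (x[i - 1] - mu) ** 2
        var = (1 - r) * var + r * err * err                 # discounted error power
    return np.array(scores)

rng = np.random.default_rng(4)
# Quiet background, then a sustained level shift (a change point) at sample 300
x = np.concatenate([0.1 * rng.standard_normal(300),
                    3.0 + 0.1 * rng.standard_normal(150)])
s = sdar_scores(x)
onset = int(np.argmax(s)) + 1   # +1 because scoring starts at the second sample
```

    The discounting factor r is the trade-off the abstract describes: larger r forgets faster and tracks non-stationarity, smaller r averages more history.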

  5. SCOUT: A Fast Monte-Carlo Modeling Tool of Scintillation Camera Output

    PubMed Central

    Hunter, William C. J.; Barrett, Harrison H.; Lewellen, Thomas K.; Miyaoka, Robert S.; Muzi, John P.; Li, Xiaoli; McDougald, Wendy; MacDonald, Lawrence R.

    2011-01-01

    We have developed a Monte-Carlo photon-tracking and readout simulator called SCOUT to study the stochastic behavior of signals output from a simplified rectangular scintillation-camera design. SCOUT models the salient processes affecting signal generation, transport, and readout. Presently, we compare output signal statistics from SCOUT to experimental results for both a discrete and a monolithic camera. We also benchmark the speed of this simulation tool and compare it to existing simulation tools. We find this modeling tool to be relatively fast and predictive of experimental results. Depending on the modeled camera geometry, we found SCOUT to be 4 to 140 times faster than other modeling tools. PMID:22072297

  6. SCOUT: a fast Monte-Carlo modeling tool of scintillation camera output†

    PubMed Central

    Hunter, William C J; Barrett, Harrison H.; Muzi, John P.; McDougald, Wendy; MacDonald, Lawrence R.; Miyaoka, Robert S.; Lewellen, Thomas K.

    2013-01-01

    We have developed a Monte-Carlo photon-tracking and readout simulator called SCOUT to study the stochastic behavior of signals output from a simplified rectangular scintillation-camera design. SCOUT models the salient processes affecting signal generation, transport, and readout of a scintillation camera. Presently, we compare output signal statistics from SCOUT to experimental results for both a discrete and a monolithic camera. We also benchmark the speed of this simulation tool and compare it to existing simulation tools. We find this modeling tool to be relatively fast and predictive of experimental results. Depending on the modeled camera geometry, we found SCOUT to be 4 to 140 times faster than other modeling tools. PMID:23640136

  7. Vesicle Motion during Sustained Exocytosis in Chromaffin Cells: Numerical Model Based on Amperometric Measurements.

    PubMed

    Jarukanont, Daungruthai; Bonifas Arredondo, Imelda; Femat, Ricardo; Garcia, Martin E

    2015-01-01

    Chromaffin cells release catecholamines by exocytosis, a process that includes vesicle docking, priming and fusion. Although all these steps have been intensively studied, some aspects of their mechanisms, particularly those regarding vesicle transport to the active sites situated at the membrane, are still unclear. In this work, we show that it is possible to extract information on vesicle motion in chromaffin cells from the combination of Langevin simulations and amperometric measurements. We developed a numerical model based on Langevin simulations of vesicle motion towards the cell membrane and on the statistical analysis of vesicle arrival times. We also performed amperometric experiments in bovine adrenal chromaffin cells under Ba2+ stimulation to capture neurotransmitter releases during sustained exocytosis. In the sustained phase, each amperometric peak can be related to a single release from a new vesicle arriving at the active site. The amperometric signal can then be mapped into a spike-series of release events. We normalized the spike-series resulting from the current peaks using a time-rescaling transformation, thus making signals coming from different cells comparable. We discuss why the obtained spike-series may contain information about the motion of all vesicles leading to release of catecholamines. We show that the release statistics in our experiments deviate considerably from Poisson processes. Moreover, the interspike-time probability is reasonably well described by two-parameter gamma distributions. In order to interpret this result we computed the vesicles' arrival statistics from our Langevin simulations. As expected, assuming purely diffusive vesicle motion we obtain Poisson statistics. However, if we assume that all vesicles are guided toward the membrane by an attractive harmonic potential, the simulations also lead to gamma distributions of the interspike-time probability, in remarkably good agreement with experiment. 
We also show that including the fusion-time statistics in our model does not produce any significant changes in the results. These findings indicate that the motion of the whole ensemble of vesicles towards the membrane is directed and is reflected in the amperometric signals. Our results confirm the conclusion of previous imaging studies performed on single vesicles that vesicle motion underneath the plasma membrane is not purely random, but biased towards the membrane.
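    The gamma-versus-Poisson distinction for interspike times can be illustrated by fitting a two-parameter gamma distribution to synthetic intervals (the parameters below are hypothetical, not the paper's estimates):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
# Interspike intervals from a directed (harmonically guided) arrival model are
# gamma distributed; a purely Poisson process gives shape == 1 (exponential).
isi = rng.gamma(shape=3.0, scale=0.5, size=5000)

# Maximum-likelihood fit with the location fixed at zero
shape, loc, scale = stats.gamma.fit(isi, floc=0.0)
# the fitted shape is near 3, clearly inconsistent with the Poisson (shape 1) case
```

    A shape parameter significantly above 1 indicates intervals more regular than a memoryless process, which is the statistical signature of directed motion the study reports.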

  8. Vesicle Motion during Sustained Exocytosis in Chromaffin Cells: Numerical Model Based on Amperometric Measurements

    PubMed Central

    Jarukanont, Daungruthai; Bonifas Arredondo, Imelda; Femat, Ricardo; Garcia, Martin E.

    2015-01-01

    Chromaffin cells release catecholamines by exocytosis, a process that includes vesicle docking, priming and fusion. Although all these steps have been intensively studied, some aspects of their mechanisms, particularly those regarding vesicle transport to the active sites situated at the membrane, are still unclear. In this work, we show that it is possible to extract information on vesicle motion in Chromaffin cells from the combination of Langevin simulations and amperometric measurements. We developed a numerical model based on Langevin simulations of vesicle motion towards the cell membrane and on the statistical analysis of vesicle arrival times. We also performed amperometric experiments in bovine-adrenal Chromaffin cells under Ba2+ stimulation to capture neurotransmitter releases during sustained exocytosis. In the sustained phase, each amperometric peak can be related to a single release from a new vesicle arriving at the active site. The amperometric signal can then be mapped into a spike-series of release events. We normalized the spike-series resulting from the current peaks using a time-rescaling transformation, thus making signals coming from different cells comparable. We discuss why the obtained spike-series may contain information about the motion of all vesicles leading to release of catecholamines. We show that the release statistics in our experiments considerably deviate from Poisson processes. Moreover, the interspike-time probability is reasonably well described by two-parameter gamma distributions. In order to interpret this result we computed the vesicles’ arrival statistics from our Langevin simulations. As expected, assuming purely diffusive vesicle motion we obtain Poisson statistics. However, if we assume that all vesicles are guided toward the membrane by an attractive harmonic potential, simulations also lead to gamma distributions of the interspike-time probability, in remarkably good agreement with experiment. 
We also show that including the fusion-time statistics in our model does not produce any significant changes on the results. These findings indicate that the motion of the whole ensemble of vesicles towards the membrane is directed and reflected in the amperometric signals. Our results confirm the conclusions of previous imaging studies performed on single vesicles that vesicles’ motion underneath plasma membranes is not purely random, but biased towards the membrane. PMID:26675312

  9. Signal Detection Techniques for Diagnostic Monitoring of Space Shuttle Main Engine Turbomachinery

    NASA Technical Reports Server (NTRS)

    Coffin, Thomas; Jong, Jen-Yi

    1986-01-01

    An investigation to develop, implement, and evaluate signal analysis techniques for the detection and classification of incipient mechanical failures in turbomachinery is reviewed. A brief description of the Space Shuttle Main Engine (SSME) test/measurement program is presented. Signal analysis techniques available to describe dynamic measurement characteristics are reviewed. Time domain and spectral methods are described, and statistical classification in terms of moments is discussed. Several of these waveform analysis techniques have been implemented on a computer and applied to dynamic signals. A laboratory evaluation of the methods with respect to signal detection capability is described. A unique coherence function (the hyper-coherence) was developed in the course of this investigation, which appears promising as a diagnostic tool. This technique and several other non-linear methods of signal analysis are presented and illustrated by application. Software for application of these techniques has been installed on the signal processing system at the NASA/MSFC Systems Dynamics Laboratory.
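    The hyper-coherence itself is not specified in this abstract; as a baseline for the family of techniques it extends, ordinary magnitude-squared coherence between two channels (a standard scipy routine, synthetic data) looks like this:

```python
import numpy as np
from scipy.signal import coherence

fs = 1024.0
t = np.arange(0, 8.0, 1 / fs)
rng = np.random.default_rng(7)
shared = np.sin(2 * np.pi * 60 * t)               # common 60 Hz component
x = shared + 0.5 * rng.standard_normal(t.size)    # two noisy measurement channels
y = shared + 0.5 * rng.standard_normal(t.size)

# Magnitude-squared coherence: |Pxy|^2 / (Pxx * Pyy), segment-averaged
f, cxy = coherence(x, y, fs=fs, nperseg=1024)
c60 = cxy[int(np.argmin(np.abs(f - 60.0)))]
# coherence is near 1 at the shared 60 Hz line and small elsewhere
```

    In diagnostics, high coherence between sensor channels at a machine's rotational harmonics flags a common mechanical source rather than independent noise.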

  10. Characterization of Sensory-Motor Behavior Under Cognitive Load Using a New Statistical Platform for Studies of Embodied Cognition

    PubMed Central

    Ryu, Jihye; Torres, Elizabeth B.

    2018-01-01

    The field of enacted/embodied cognition has emerged as a contemporary attempt to connect the mind and body in the study of cognition. However, there has been a paucity of methods that enable a multi-layered approach tapping into different levels of functionality within the nervous system (e.g., continuously capturing multi-modal biophysical signals in tandem in naturalistic settings). The present study introduces a new theoretical and statistical framework to characterize the influences of cognitive demands on biophysical rhythmic signals harnessed from deliberate, spontaneous and autonomic activities. In this study, nine participants performed a basic pointing task to communicate a decision while they were exposed to different levels of cognitive load. Within these decision-making contexts, we examined the moment-by-moment fluctuations in the peak amplitude and timing of the biophysical time series data (e.g., continuous waveforms extracted from hand kinematics and heart signals). These spike-train data offered high statistical power for personalized empirical statistical estimation and were well characterized by a Gamma process. Our approach enabled the identification of different empirically estimated families of probability distributions to facilitate inference regarding the continuous physiological phenomena underlying cognitively driven decision-making. We found that the same pointing task revealed shifts in the probability distribution functions (PDFs) of the hand kinematic signals under study, accompanied by shifts in the signatures of the heart inter-beat-interval timings. Within the time scale of an experimental session, marked changes in the skewness and dispersion of the distributions were tracked on the Gamma parameter plane with 95% confidence. 
The results suggest that traditional theoretical assumptions of stationarity and normality in biophysical data from the nervous system are incongruent with the true statistical nature of empirical data. This work offers a unifying platform for personalized statistical inference that goes far beyond those used in conventional studies, which often assume a “one size fits all” model for data drawn from discrete events such as mouse clicks, and whose observations leave out continuously co-occurring spontaneous activity taking place largely beneath awareness. PMID:29681805
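    Placing data on the Gamma parameter plane can be sketched with method-of-moments estimates on synthetic series (the distributions below are hypothetical, not the study's kinematic data):

```python
import numpy as np

def gamma_moments(x):
    """Method-of-moments Gamma estimates: shape a = mean^2/var, scale b = var/mean.
    The pair (a, b) places a block of data on the Gamma parameter plane."""
    x = np.asarray(x, dtype=float)
    m, v = x.mean(), x.var()
    return m * m / v, v / m

rng = np.random.default_rng(6)
low_load = rng.gamma(5.0, 0.2, size=1000)    # hypothetical peak-amplitude series
high_load = rng.gamma(2.0, 0.5, size=1000)
a1, b1 = gamma_moments(low_load)
a2, b2 = gamma_moments(high_load)
# a shift in cognitive load appears as a move on the (shape, scale) plane;
# lower shape corresponds to a more skewed, noisier signature
```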

  11. Statistics of the vestibular input experienced during natural self-motion: implications for neural processing.

    PubMed

    Carriot, Jérome; Jamali, Mohsen; Chacron, Maurice J; Cullen, Kathleen E

    2014-06-11

    It is widely believed that sensory systems are optimized for processing stimuli occurring in the natural environment. However, it remains unknown whether this principle applies to the vestibular system, which contributes to essential brain functions ranging from the most automatic reflexes to spatial perception and motor coordination. Here we quantified, for the first time, the statistics of natural vestibular inputs experienced by freely moving human subjects during typical everyday activities. Although previous studies have found that the power spectra of natural signals across sensory modalities decay as a power law (i.e., as 1/f^α), we found that this did not apply to natural vestibular stimuli. Instead, power decreased slowly at lower and more rapidly at higher frequencies for all motion dimensions. We further establish that this unique stimulus structure is the result of active motion as well as passive biomechanical filtering occurring before any neural processing. Notably, the transition frequency (i.e., the frequency at which power starts to decrease rapidly) was lower when subjects passively experienced sensory stimulation than when they actively controlled stimulation through their own movement. In contrast to signals measured at the head, the spectral content of externally generated (i.e., passive) environmental motion did follow a power law. Thus, transformations caused by both motor control and biomechanics shape the statistics of natural vestibular stimuli before neural processing. We suggest that the unique structure of natural vestibular stimuli will have important consequences for the neural coding strategies used by this essential sensory system to represent self-motion in everyday life. Copyright © 2014 the authors 0270-6474/14/348347-11$15.00/0.
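    The power-law test at the heart of this comparison can be sketched as a log-log regression (synthetic spectrum; real vestibular spectra, per the abstract, would show a poor straight-line fit):

```python
import numpy as np

def spectral_slope(freqs, power):
    """Least-squares slope of log10(power) versus log10(frequency);
    a 1/f^alpha spectrum yields a slope of -alpha."""
    slope, intercept = np.polyfit(np.log10(freqs), np.log10(power), 1)
    return slope

f = np.logspace(0, 2, 50)      # 1 to 100 Hz
power = 10.0 / f**2            # synthetic 1/f^2 spectrum
alpha = -spectral_slope(f, power)
# alpha recovers the exponent 2 exactly for this ideal power-law spectrum
```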

  12. Risk-based Methodology for Validation of Pharmaceutical Batch Processes.

    PubMed

    Wiles, Frederick

    2013-01-01

    In January 2011, the U.S. Food and Drug Administration published new process validation guidance for pharmaceutical processes. The new guidance debunks the long-held industry notion that three consecutive validation batches or runs are all that are required to demonstrate that a process is operating in a validated state. Instead, the new guidance now emphasizes that the level of monitoring and testing performed during process performance qualification (PPQ) studies must be sufficient to demonstrate statistical confidence both within and between batches. In some cases, three qualification runs may not be enough. Nearly two years after the guidance was first published, little has been written defining a statistical methodology for determining the number of samples and qualification runs required to satisfy Stage 2 requirements of the new guidance. This article proposes using a combination of risk assessment, control charting, and capability statistics to define the monitoring and testing scheme required to show that a pharmaceutical batch process is operating in a validated state. In this methodology, an assessment of process risk is performed through application of a process failure mode, effects, and criticality analysis (PFMECA). The output of PFMECA is used to select appropriate levels of statistical confidence and coverage which, in turn, are used in capability calculations to determine when significant Stage 2 (PPQ) milestones have been met. The achievement of Stage 2 milestones signals the release of batches for commercial distribution and the reduction of monitoring and testing to commercial production levels. Individuals, moving range, and range/sigma charts are used in conjunction with capability statistics to demonstrate that the commercial process is operating in a state of statistical control. The new process validation guidance published by the U.S. 
Food and Drug Administration in January of 2011 indicates that the number of process validation batches or runs required to demonstrate that a pharmaceutical process is operating in a validated state should be based on sound statistical principles. The old rule of "three consecutive batches and you're done" is no longer sufficient. The guidance, however, does not provide any specific methodology for determining the number of runs required, and little has been published to fill this gap. The paper titled "Risk-based Methodology for Validation of Pharmaceutical Batch Processes" describes a statistically sound methodology, based on risk assessment and calculation of process capability, for determining when a statistically valid number of validation runs has been completed.
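
The individuals/moving-range charting and capability calculations referenced above can be sketched as follows. This is an illustrative outline, not the paper's methodology: the d2 (1.128) and D4 (3.267) values are the standard control-chart constants for n=2 subgroups, and the specification limits and assay data are made up.

```python
import numpy as np

def capability_and_imr(x, lsl, usl):
    """Individuals/moving-range control limits and a Cpk-style capability
    index for a batch quality attribute (assumes approximate normality)."""
    x = np.asarray(x, dtype=float)
    mr = np.abs(np.diff(x))                 # moving ranges of successive points
    mr_bar = mr.mean()
    sigma_hat = mr_bar / 1.128              # d2 constant for n=2 subgroups
    center = x.mean()
    ucl, lcl = center + 3 * sigma_hat, center - 3 * sigma_hat
    cpk = min(usl - center, center - lsl) / (3 * sigma_hat)
    return {"center": center, "ucl": ucl, "lcl": lcl,
            "mr_ucl": 3.267 * mr_bar,       # D4 constant for n=2
            "cpk": cpk}

rng = np.random.default_rng(0)
batch = rng.normal(100.0, 1.0, 30)          # simulated assay values
stats = capability_and_imr(batch, lsl=94.0, usl=106.0)
print(stats)
```

In the methodology described above, the capability index would be compared against a risk-derived confidence/coverage requirement before declaring a Stage 2 milestone met.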

  13. Acceleration to failure in geophysical signals prior to laboratory rock failure and volcanic eruptions (Invited)

    NASA Astrophysics Data System (ADS)

    Main, I. G.; Bell, A. F.; Greenhough, J.; Heap, M. J.; Meredith, P. G.

    2010-12-01

    The nucleation processes that ultimately lead to earthquakes, volcanic eruptions, rock bursts in mines, and landslides from cliff slopes are likely to be controlled at some scale by brittle failure of the Earth’s crust. In laboratory brittle deformation experiments geophysical signals commonly exhibit an accelerating trend prior to dynamic failure. Similar signals have been observed prior to volcanic eruptions, including volcano-tectonic earthquake event and moment release rates. Despite extensive searches, no such statistically robust, systematic trend has been found prior to natural earthquakes. Here we describe the results of a suite of laboratory tests on Mount Etna basalt and other rocks to examine the nature of the non-linear scaling from laboratory to field conditions, notably using laboratory ‘creep’ tests to reduce the boundary strain rate to conditions more similar to those in the field. Seismic event rate, seismic moment release rate and rate of porosity change show a classic ‘bathtub’ graph that can be derived from a simple damage model based on separate transient and accelerating sub-critical crack growth mechanisms, resulting from separate processes of negative and positive feedback in the population dynamics. The signals exhibit clear precursors based on formal statistical model tests using maximum likelihood techniques with Poisson errors. After correcting for the finite loading time of the signal, the results show a transient creep rate that decays as a classic Omori law for earthquake aftershocks, and remarkably with an exponent near unity, as commonly observed for natural earthquake sequences. The accelerating trend follows an inverse power law when fitted in retrospect, i.e. with prior knowledge of the failure time. In contrast, the strain measured on the sample boundary shows a less obvious but still accelerating signal that is often absent altogether in natural strain data prior to volcanic eruptions. 
To test the forecasting power of such constitutive rules in prospective mode, we examine the forecast quality of several synthetic trials, by adding representative statistical fluctuations, due to finite real-time sampling effects, to an underlying accelerating trend. Metrics of forecast quality change systematically and dramatically with time. In particular the model accuracy increases, and the forecast bias decreases, as the failure time approaches.
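
The maximum-likelihood fitting with Poisson errors mentioned above can be illustrated on synthetic data. The rate model r(t) = K/(c + t)^p is the standard Omori form; the parameter values, binning, and random seed below are invented for the sketch and are not the paper's data.

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic transient (Omori-type) event-rate decay: r(t) = K / (c + t)**p
rng = np.random.default_rng(1)
t_edges = np.linspace(0.0, 50.0, 51)
t_mid = 0.5 * (t_edges[:-1] + t_edges[1:])
true_rate = 80.0 / (2.0 + t_mid) ** 1.0
counts = rng.poisson(true_rate * np.diff(t_edges))   # Poisson-distributed bin counts

def neg_loglik(theta):
    """Negative Poisson log-likelihood (up to a constant) of binned counts."""
    k, c, p = theta
    lam = k / (c + t_mid) ** p * np.diff(t_edges)    # expected counts per bin
    return np.sum(lam - counts * np.log(lam))

fit = minimize(neg_loglik, x0=[50.0, 1.0, 0.8],
               bounds=[(1e-3, None), (1e-3, None), (0.1, 3.0)])
k_hat, c_hat, p_hat = fit.x
print(f"fitted Omori exponent p = {p_hat:.2f}")
```

With enough events, the fitted exponent recovers the value near unity quoted in the abstract for the transient decay.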

  14. Learning multimodal dictionaries.

    PubMed

    Monaci, Gianluca; Jost, Philippe; Vandergheynst, Pierre; Mailhé, Boris; Lesage, Sylvain; Gribonval, Rémi

    2007-09-01

    Real-world phenomena involve complex interactions between multiple signal modalities. As a consequence, humans routinely integrate, at each instant, perceptions from all their senses in order to enrich their understanding of the surrounding world. This paradigm can also be extremely useful in many signal processing and computer vision problems involving mutually related signals. The simultaneous processing of multimodal data can, in fact, reveal information that is otherwise hidden when considering the signals independently. However, in natural multimodal signals, the statistical dependencies between modalities are in general not obvious. Learning fundamental multimodal patterns could offer deep insight into the structure of such signals. In this paper, we present a novel model of multimodal signals based on their sparse decomposition over a dictionary of multimodal structures. An algorithm for iteratively learning multimodal generating functions that can be shifted to all positions in the signal is also proposed. The learning is defined in such a way that it can be accomplished by iteratively solving a generalized eigenvector problem, which makes the algorithm fast, flexible, and free of user-defined parameters. The proposed algorithm is applied to audiovisual sequences, where it is able to discover underlying structures in the data. The detection of such audio-video patterns in audiovisual clips makes it possible to localize the sound source in the video in the presence of substantial acoustic and visual distractors, outperforming state-of-the-art audiovisual localization algorithms.

  15. Laser Doppler velocimeter system for turbine stator cascade studies and analysis of statistical biasing errors

    NASA Technical Reports Server (NTRS)

    Seasholtz, R. G.

    1977-01-01

    A laser Doppler velocimeter (LDV) built for use in the Lewis Research Center's turbine stator cascade facilities is described. The signal processing and self-contained data processing are based on a computing counter. A procedure is given for mode matching the laser to the probe volume. An analysis is presented of biasing errors that were observed in turbulent flow when the mean flow was not normal to the fringes.

  16. A Novel Approach for Adaptive Signal Processing

    NASA Technical Reports Server (NTRS)

    Chen, Ya-Chin; Juang, Jer-Nan

    1998-01-01

    Adaptive linear predictors have been used extensively in practice in a wide variety of forms. In the main, their theoretical development is based upon the assumption of stationarity of the signals involved, particularly with respect to the second-order statistics. On this basis, the well-known normal equations can be formulated. If high-order statistical stationarity is assumed, then the equivalent normal equations involve high-order signal moments. In either case, the cross moments (second order or higher) are needed. This renders the adaptive prediction procedure non-blind. A novel procedure for blind adaptive prediction has been proposed and substantially implemented in our contributions over the past year. The approach is based upon a suitable interpretation of blind equalization methods that satisfy the constant modulus property, and it departs significantly from standard prediction methods. These blind adaptive algorithms are derived by formulating Lagrange equivalents from mechanisms of constrained optimization. In this report, other new update algorithms are derived from the fundamental concepts of advanced system identification to carry out the proposed blind adaptive prediction. The results of the work can be extended to a number of control-related problems, such as disturbance identification. The basic principles are outlined in this report and differences from other existing methods are discussed. The applications implemented are in speech processing, such as coding and synthesis. Simulations are included to verify the novel modelling method.
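
For contrast with the blind approach above, here is a minimal sketch of the conventional (non-blind) adaptive linear predictor with an LMS weight update, the baseline that the report departs from. The signal, filter order, and step size are made-up illustration values.

```python
import numpy as np

def lms_predictor(x, order=4, mu=0.05):
    """One-step-ahead LMS linear predictor; returns predictions and weights."""
    w = np.zeros(order)
    preds = np.zeros(len(x))
    for n in range(order, len(x)):
        u = x[n - order:n][::-1]          # most recent samples first
        preds[n] = w @ u
        e = x[n] - preds[n]               # prediction error
        w += mu * e * u                   # LMS weight update
    return preds, w

rng = np.random.default_rng(2)
t = np.arange(2000)
x = np.sin(2 * np.pi * 0.05 * t) + 0.05 * rng.normal(size=t.size)
preds, w = lms_predictor(x)
err = x[1500:] - preds[1500:]             # error after convergence
print("post-convergence MSE:", np.mean(err ** 2))
```

After convergence the mean-squared prediction error approaches the additive noise floor, since a short AR model predicts a sinusoid exactly.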

  17. SNDR enhancement in noisy sinusoidal signals by non-linear processing elements

    NASA Astrophysics Data System (ADS)

    Martorell, Ferran; McDonnell, Mark D.; Abbott, Derek; Rubio, Antonio

    2007-06-01

    We investigate the possibility of building linear amplifiers capable of enhancing the Signal-to-Noise and Distortion Ratio (SNDR) of sinusoidal input signals using simple non-linear elements. Other works have proven that it is possible to enhance the Signal-to-Noise Ratio (SNR) by using limiters. In this work we study a soft limiter non-linear element with and without hysteresis. We show that the SNDR of sinusoidal signals can be enhanced by 0.94 dB using a wideband soft limiter and up to 9.68 dB using a wideband soft limiter with hysteresis. These results indicate that linear amplifiers could be constructed using non-linear circuits with hysteresis. This paper presents mathematical descriptions for the non-linear elements using statistical parameters. Using these models, the input-output SNDR enhancement is obtained by optimizing the non-linear transfer function parameters to maximize the output SNDR.
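
The SNDR of a noisy sinusoid before and after a memoryless soft limiter can be estimated numerically, as a rough companion to the analysis above. The clipping level, noise level, and 3-bin fundamental window below are illustrative assumptions, not the paper's operating point or its hysteresis element.

```python
import numpy as np

def sndr_db(x, f0, fs):
    """SNDR of a sinusoid at f0: fundamental-bin power vs all other bins."""
    X = np.fft.rfft(x * np.hanning(len(x)))
    p = np.abs(X) ** 2
    k0 = int(round(f0 * len(x) / fs))
    sig = p[k0 - 1:k0 + 2].sum()          # Hann mainlobe spans 3 bins on-bin
    rest = p.sum() - sig - p[0]           # everything else, excluding DC
    return 10 * np.log10(sig / rest)

fs, f0, n = 8192.0, 512.0, 8192
t = np.arange(n) / fs
rng = np.random.default_rng(3)
x = np.sin(2 * np.pi * f0 * t) + 0.3 * rng.normal(size=n)
y = np.clip(x, -0.8, 0.8)                 # memoryless soft limiter
in_sndr = sndr_db(x, f0, fs)
out_sndr = sndr_db(y, f0, fs)
print(f"input SNDR {in_sndr:.2f} dB, output SNDR {out_sndr:.2f} dB")
```

In this measurement convention all non-fundamental power, whether noise or clipping harmonics, counts against the SNDR, which is how the limiter's distortion trade-off shows up.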

  18. Dual-drive Mach-Zehnder modulator-based reconfigurable and transparent spectral conversion for dense wavelength division multiplexing transmissions

    NASA Astrophysics Data System (ADS)

    Mao, Mingzhi; Qian, Chen; Cao, Bingyao; Zhang, Qianwu; Song, Yingxiong; Wang, Min

    2017-09-01

    A digital-signal-processing-enabled dual-drive Mach-Zehnder modulator (DD-MZM)-based spectral converter is proposed and extensively investigated as a means of realizing dynamically reconfigurable and highly transparent spectral conversion. As a further contribution, the optimum operating conditions of the proposed converter are deduced, statistically simulated, and experimentally verified in order to optimize its performance. Converter performance under these optimum conditions is verified by detailed numerical simulations and experiments in an intensity-modulation and direct-detection-based network, in terms of the frequency-detuning-dependent conversion efficiency, strict transparency to user signal characteristics, the impact of parasitic components on conversion performance, and the nearly distortion-free waveform of the converted component. The converter is also found to be highly robust to variations in input signal power, optical signal-to-noise ratio, extinction ratio, and driving signal frequency.

  19. Discrete wavelet-aided delineation of PCG signal events via analysis of an area curve length-based decision statistic.

    PubMed

    Homaeinezhad, M R; Atyabi, S A; Daneshvar, E; Ghaffari, A; Tahmasebi, M

    2010-12-01

    The aim of this study is to describe a robust unified framework for segmentation of phonocardiogram (PCG) signal sounds based on false-alarm-probability (FAP)-bounded segmentation of a properly calculated detection measure. To this end, the original PCG signal is first appropriately pre-processed, and then a fixed-sample-size sliding window is moved over the pre-processed signal. In each slide, the area under the excerpted segment is multiplied by its curve length to generate the Area Curve Length (ACL) metric, which is used as the segmentation decision statistic (DS). Afterwards, histogram parameters of the nonlinearly enhanced DS metric are used to regulate the α-level Neyman-Pearson classifier for FAP-bounded delineation of the PCG events. The proposed method was applied to all 85 records of the Nursing Student Heart Sounds database (NSHSDB), including stenosis, insufficiency, regurgitation, gallop, septal defect, split sound, rumble, murmur, click, friction rub and snap disorders, with different sampling frequencies. The method was also applied to records obtained from an electronic stethoscope board designed for this study, in the presence of high-level power-line noise and external disturbing sounds; no false positive (FP) or false negative (FN) errors were detected. High noise robustness, acceptable detection-segmentation accuracy of PCG events across various cardiac conditions, and independence from the acquisition sampling frequency are the principal strengths of the proposed ACL-based PCG event detection-segmentation algorithm.
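
The ACL decision statistic described above (per-window area multiplied by curve length) can be sketched directly. This is an illustrative toy, not the paper's implementation: the window length, the synthetic "PCG-like" record, and the mean-plus-two-sigma threshold standing in for the α-level Neyman-Pearson rule are all invented.

```python
import numpy as np

def acl_statistic(x, win=64):
    """Area Curve Length decision statistic: for each sliding window,
    (discrete area under |segment|) * (curve length of the segment)."""
    x = np.asarray(x, dtype=float)
    acl = np.zeros(len(x) - win)
    for i in range(len(acl)):
        seg = x[i:i + win]
        area = np.abs(seg).sum()                          # discrete area
        clen = np.sum(np.sqrt(1.0 + np.diff(seg) ** 2))   # curve length
        acl[i] = area * clen
    return acl

# A toy "PCG-like" record: two bursts embedded in low-level noise
rng = np.random.default_rng(4)
x = 0.02 * rng.normal(size=4000)
for start in (800, 2400):
    x[start:start + 300] += 0.8 * np.sin(2 * np.pi * 0.06 * np.arange(300))
acl = acl_statistic(x)
thresh = acl.mean() + 2 * acl.std()   # crude stand-in for the α-level rule
flagged = int((acl > thresh).sum())
print("samples flagged as events:", flagged)
```

The burst windows raise both factors of the statistic at once, which is what gives the ACL metric its contrast over plain energy detection.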

  20. Scientific and Engineering Studies. Compiled 1983. Signal Processing Studies

    DTIC Science & Technology

    1983-01-01

    figure 12D in the neighborhood of this frequency can serve as an indicator of the number of poles and zeros clustered there. When Pmax was increased... (iv) If coherence Cxy(f0) = 1, why use the completely statistically dependent y(t) data to estimate Gxx(f0)? This philosophy of

  1. Contribution of Implicit Sequence Learning to Spoken Language Processing: Some Preliminary Findings with Hearing Adults

    ERIC Educational Resources Information Center

    Conway, Christopher M.; Karpicke, Jennifer; Pisoni, David B.

    2007-01-01

    Spoken language consists of a complex, sequentially arrayed signal that contains patterns that can be described in terms of statistical relations among language units. Previous research has suggested that a domain-general ability to learn structured sequential patterns may underlie language acquisition. To test this prediction, we examined the…

  2. LOWLY CHLORINATED DIBENZODIOXINS AS TEQ INDICATORS. A COMBINED APPROACH USING SPECTROSCOPIC MEASUREMENTS WITH DLR JET-REMPI AND STATISTICAL CORRELATIONS WITH WASTE COMBUSTOR EMISSIONS

    EPA Science Inventory

    Continuous monitoring of trace gas species in incineration processes can serve two purposes: (i) monitoring precursors of polychlorinated dibenzodioxin and polychlorinated dibenzofuran (PCDD/F) or other indicator species in the raw gas will enable use of their on-line signals for...

  3. The Department of Defense Very High Speed Integrated Circuit (VHSIC) Technology Availability Program Plan for the Committees on Armed Services United States Congress.

    DTIC Science & Technology

    1986-06-30

    features of computer aided design systems and statistical quality control procedures that are generic to chip sets and processes. RADIATION HARDNESS - The... System; PSP Programmable Signal Processor; SSI Small Scale Integration; TOW Tube Launched, Optically Tracked, Wire Guided; TTL Transistor-Transistor Logic

  4. Non-linear Post Processing Image Enhancement

    NASA Technical Reports Server (NTRS)

    Hunt, Shawn; Lopez, Alex; Torres, Angel

    1997-01-01

    A non-linear filter for image post-processing based on the feedforward neural network topology is presented. This study was undertaken to investigate the usefulness of "smart" filters in image post-processing. The filter has been shown to be useful in recovering high frequencies, such as those lost during the JPEG compression-decompression process. The filtered images have a higher signal-to-noise ratio and a higher perceived image quality. Simulation studies comparing the proposed filter with the optimum mean-square non-linear filter, showing examples of the high-frequency recovery, and the statistical properties of the filter are given.

  5. Optimal filtering and Bayesian detection for friction-based diagnostics in machines.

    PubMed

    Ray, L R; Townsend, J R; Ramasubramanian, A

    2001-01-01

    Non-model-based diagnostic methods typically rely on measured signals that must be empirically related to process behavior or incipient faults. The difficulty in interpreting a signal that is indirectly related to the fundamental process behavior is significant. This paper presents an integrated non-model and model-based approach to detecting when process behavior varies from a proposed model. The method, which is based on nonlinear filtering combined with maximum likelihood hypothesis testing, is applicable to dynamic systems whose constitutive model is well known, and whose process inputs are poorly known. Here, the method is applied to friction estimation and diagnosis during motion control in a rotating machine. A nonlinear observer estimates friction torque in a machine from shaft angular position measurements and the known input voltage to the motor. The resulting friction torque estimate can be analyzed directly for statistical abnormalities, or it can be directly compared to friction torque outputs of an applicable friction process model in order to diagnose faults or model variations. Nonlinear estimation of friction torque provides a variable on which to apply diagnostic methods that is directly related to model variations or faults. The method is evaluated experimentally by its ability to detect normal load variations in a closed-loop controlled motor driven inertia with bearing friction and an artificially-induced external line contact. Results show an ability to detect statistically significant changes in friction characteristics induced by normal load variations over a wide range of underlying friction behaviors.

  6. Micro-Pulse Lidar Signals: Uncertainty Analysis

    NASA Technical Reports Server (NTRS)

    Welton, Ellsworth J.; Campbell, James R.; Starr, David OC. (Technical Monitor)

    2002-01-01

    Micro-pulse lidar (MPL) systems are small, autonomous, eye-safe lidars used for continuous observations of the vertical distribution of cloud and aerosol layers. Since the construction of the first MPL in 1993, procedures have been developed to correct for various instrument effects present in MPL signals. The primary instrument effects include afterpulse (laser-detector cross-talk) and overlap (poor near-range, less than 6 km, focusing). The accurate correction of both afterpulse and overlap effects is required to study both clouds and aerosols. Furthermore, the outgoing energy of the laser pulses and the statistical uncertainty of the MPL detector must also be correctly determined in order to assess the accuracy of MPL observations. The uncertainties associated with the afterpulse, overlap, pulse energy, detector noise, and all remaining quantities affecting measured MPL signals are determined in this study. The uncertainties are propagated through the entire MPL correction process to give a net uncertainty on the final corrected MPL signal. The results show that in the near range, the overlap uncertainty dominates. At altitudes above the overlap region, the dominant source of uncertainty is caused by uncertainty in the pulse energy. However, if the laser energy is low, then during mid-day, high solar background levels can significantly reduce the signal-to-noise of the detector. In such a case, the statistical uncertainty of the detector count rate becomes dominant at altitudes above the overlap region.
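
Assuming the individual error sources are independent, relative uncertainties of the kind propagated above combine in quadrature. The numerical values below are placeholders for illustration, not the study's results.

```python
import numpy as np

def combined_relative_uncertainty(rel_terms):
    """Propagate independent relative uncertainties in quadrature."""
    return float(np.sqrt(np.sum(np.square(rel_terms))))

# Illustrative (made-up) relative uncertainties for one range bin
terms = {"afterpulse": 0.01, "overlap": 0.05, "pulse_energy": 0.03,
         "detector_noise": 0.02}
total = combined_relative_uncertainty(list(terms.values()))
dominant = max(terms, key=terms.get)
print(f"net relative uncertainty: {total:.3f} (dominated by {dominant})")
```

Because the terms add in quadrature, the largest single contributor (here the overlap term, mirroring the near-range result in the abstract) dominates the net uncertainty.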

  7. Quantitative Aspects of Single Molecule Microscopy

    PubMed Central

    Ober, Raimund J.; Tahmasbi, Amir; Ram, Sripad; Lin, Zhiping; Ward, E. Sally

    2015-01-01

    Single molecule microscopy is a relatively new optical microscopy technique that allows the detection of individual molecules such as proteins in a cellular context. This technique has generated significant interest among biologists, biophysicists and biochemists, as it holds the promise to provide novel insights into subcellular processes and structures that otherwise cannot be gained through traditional experimental approaches. Single molecule experiments place stringent demands on experimental and algorithmic tools due to the low signal levels and the presence of significant extraneous noise sources. Consequently, this has necessitated the use of advanced statistical signal and image processing techniques for the design and analysis of single molecule experiments. In this tutorial paper, we provide an overview of single molecule microscopy from early works to current applications and challenges. Specific emphasis will be on the quantitative aspects of this imaging modality, in particular single molecule localization and resolvability, which will be discussed from an information theoretic perspective. We review the stochastic framework for image formation, different types of estimation techniques and expressions for the Fisher information matrix. We also discuss several open problems in the field that demand highly non-trivial signal processing algorithms. PMID:26167102
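
One basic quantitative result of the kind discussed above is the shot-noise-limited localization precision of a Gaussian-profile single-molecule image, which scales as the PSF width divided by the square root of the detected photon count. The sketch below uses that simplified limit, neglecting background noise and pixelation, which tighten the Fisher-information analysis in practice.

```python
import math

def localization_precision(psf_sigma_nm, n_photons):
    """Shot-noise-limited localization precision for a Gaussian PSF:
    sigma / sqrt(N); background and pixelation are neglected."""
    return psf_sigma_nm / math.sqrt(n_photons)

for n in (100, 1000, 10000):
    print(n, "photons ->", round(localization_precision(100.0, n), 2), "nm")
```

This is why single-molecule localization can beat the diffraction-limited PSF width by an order of magnitude or more when photon counts are high.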

  8. Simultaneously estimating evolutionary history and repeated traits phylogenetic signal: applications to viral and host phenotypic evolution

    PubMed Central

    Vrancken, Bram; Lemey, Philippe; Rambaut, Andrew; Bedford, Trevor; Longdon, Ben; Günthard, Huldrych F.; Suchard, Marc A.

    2014-01-01

    Phylogenetic signal quantifies the degree to which resemblance in continuously-valued traits reflects phylogenetic relatedness. Measures of phylogenetic signal are widely used in ecological and evolutionary research, and are recently gaining traction in viral evolutionary studies. Standard estimators of phylogenetic signal frequently condition on data summary statistics of the repeated trait observations and fixed phylogenetic trees, resulting in information loss and potential bias. To incorporate the observation process and phylogenetic uncertainty in a model-based approach, we develop a novel Bayesian inference method to simultaneously estimate the evolutionary history and phylogenetic signal from molecular sequence data and repeated multivariate traits. Our approach builds upon a phylogenetic diffusion framework that models continuous trait evolution as a Brownian motion process and incorporates Pagel’s λ transformation parameter to estimate dependence among traits. We provide a computationally efficient inference implementation in the BEAST software package. We evaluate the synthetic performance of the Bayesian estimator of phylogenetic signal against standard estimators, and demonstrate the use of our coherent framework to address several virus-host evolutionary questions, including virulence heritability for HIV, antigenic evolution in influenza and HIV, and Drosophila sensitivity to sigma virus infection. Finally, we discuss model extensions that will make useful contributions to our flexible framework for simultaneously studying sequence and trait evolution. PMID:25780554
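
The Pagel's λ transformation mentioned above has a simple matrix form: the off-diagonal entries of the Brownian-motion covariance matrix (shared branch lengths) are scaled by λ while the diagonal is left unchanged. A minimal sketch with a made-up 3-taxon covariance:

```python
import numpy as np

def pagel_lambda_transform(C, lam):
    """Scale the off-diagonal entries of a phylogenetic covariance matrix
    by lambda; lambda=0 removes phylogenetic dependence, lambda=1 keeps it."""
    C = np.asarray(C, dtype=float)
    out = lam * C
    np.fill_diagonal(out, np.diag(C))      # diagonal (tip depths) unchanged
    return out

# Toy 3-taxon Brownian-motion covariance (off-diagonals = shared branch length)
C = np.array([[1.0, 0.6, 0.2],
              [0.6, 1.0, 0.2],
              [0.2, 0.2, 1.0]])
Ct = pagel_lambda_transform(C, 0.5)
print(Ct)
```

Estimating λ then amounts to finding the scaling under which the transformed covariance best explains the observed trait covariation, here done jointly with the tree in the Bayesian framework.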

  9. A method for discrimination of noise and EMG signal regions recorded during rhythmic behaviors.

    PubMed

    Ying, Rex; Wall, Christine E

    2016-12-08

    Analyses of muscular activity during rhythmic behaviors provide critical data for biomechanical studies. Electrical potentials measured from muscles using electromyography (EMG) require discrimination of noise regions as the first step in analysis. An experienced analyst can accurately identify the onset and offset of EMG but this process takes hours to analyze a short (10-15s) record of rhythmic EMG bursts. Existing computational techniques reduce this time but have limitations. These include a universal threshold for delimiting noise regions (i.e., a single signal value for identifying the EMG signal onset and offset), pre-processing using wide time intervals that dampen sensitivity for EMG signal characteristics, poor performance when a low frequency component (e.g., DC offset) is present, and high computational complexity leading to lack of time efficiency. We present a new statistical method and MATLAB script (EMG-Extractor) that includes an adaptive algorithm to discriminate noise regions from EMG that avoids these limitations and allows for multi-channel datasets to be processed. We evaluate the EMG-Extractor with EMG data on mammalian jaw-adductor muscles during mastication, a rhythmic behavior typified by low amplitude onsets/offsets and complex signal pattern. The EMG-Extractor consistently and accurately distinguishes noise from EMG in a manner similar to that of an experienced analyst. It outputs the raw EMG signal region in a form ready for further analysis. Copyright © 2016 Elsevier Ltd. All rights reserved.
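
The EMG-Extractor's adaptive algorithm is not reproduced here; as a simplified stand-in for the noise/signal discrimination it performs, burst regions can be flagged by thresholding a moving-RMS envelope against statistics of an assumed-quiet baseline segment. All parameters (window length, threshold multiplier, baseline duration) and the synthetic record are invented for illustration.

```python
import numpy as np

def burst_regions(emg, fs, win_s=0.05, k=5.0, baseline_s=0.5):
    """Flag EMG burst samples: moving-RMS envelope exceeding a threshold
    derived from the (assumed quiet) opening segment: mean + k*std."""
    win = max(1, int(win_s * fs))
    sq = np.convolve(emg ** 2, np.ones(win) / win, mode="same")
    env = np.sqrt(sq)                               # moving-RMS envelope
    nb = int(baseline_s * fs)
    thr = env[:nb].mean() + k * env[:nb].std()      # baseline-adaptive threshold
    return env > thr

rng = np.random.default_rng(5)
fs = 1000.0
x = 0.05 * rng.normal(size=3000)                    # baseline noise
x[1200:1800] += rng.normal(size=600)                # a high-amplitude burst
mask = burst_regions(x, fs)
onset, offset = int(np.argmax(mask)), len(mask) - int(np.argmax(mask[::-1]))
print("detected burst span:", onset, "to", offset)
```

A single global threshold like this is exactly the "universal threshold" limitation the paper criticizes; the published method adapts locally and handles low-frequency offsets, which this sketch does not.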

  10. A generalized K statistic for estimating phylogenetic signal from shape and other high-dimensional multivariate data.

    PubMed

    Adams, Dean C

    2014-09-01

    Phylogenetic signal is the tendency for closely related species to display similar trait values due to their common ancestry. Several methods have been developed for quantifying phylogenetic signal in univariate traits and for sets of traits treated simultaneously, and the statistical properties of these approaches have been extensively studied. However, methods for assessing phylogenetic signal in high-dimensional multivariate traits like shape are less well developed, and their statistical performance is not well characterized. In this article, I describe a generalization of the K statistic of Blomberg et al. that is useful for quantifying and evaluating phylogenetic signal in highly dimensional multivariate data. The method (K(mult)) is found from the equivalency between statistical methods based on covariance matrices and those based on distance matrices. Using computer simulations based on Brownian motion, I demonstrate that the expected value of K(mult) remains at 1.0 as trait variation among species is increased or decreased, and as the number of trait dimensions is increased. By contrast, estimates of phylogenetic signal found with a squared-change parsimony procedure for multivariate data change with increasing trait variation among species and with increasing numbers of trait dimensions, confounding biological interpretations. I also evaluate the statistical performance of hypothesis testing procedures based on K(mult) and find that the method displays appropriate Type I error and high statistical power for detecting phylogenetic signal in high-dimensional data. Statistical properties of K(mult) were consistent for simulations using bifurcating and random phylogenies, for simulations using different numbers of species, for simulations that varied the number of trait dimensions, and for different underlying models of trait covariance structure. 
Overall these findings demonstrate that K(mult) provides a useful means of evaluating phylogenetic signal in high-dimensional multivariate traits. Finally, I illustrate the utility of the new approach by evaluating the strength of phylogenetic signal for head shape in a lineage of Plethodon salamanders. © The Author(s) 2014. Published by Oxford University Press, on behalf of the Society of Systematic Biologists. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  11. Identification and classification of failure modes in laminated composites by using a multivariate statistical analysis of wavelet coefficients

    NASA Astrophysics Data System (ADS)

    Baccar, D.; Söffker, D.

    2017-11-01

    Acoustic Emission (AE) is a suitable method to monitor the health of composite structures in real time. However, AE-based failure mode identification and classification remain complex to apply because AE waves are generally released simultaneously from all AE-emitting damage sources. Hence, the use of advanced signal processing techniques in combination with pattern recognition approaches is required. In this paper, AE signals generated from a laminated carbon fiber reinforced polymer (CFRP) subjected to an indentation test are examined and analyzed. A new pattern recognition approach involving a number of processing steps that can be implemented in real time is developed. Unlike common classification approaches, here only CWT coefficients are extracted as relevant features. First, the Continuous Wavelet Transform (CWT) is applied to the AE signals. Then, dimensionality reduction using Principal Component Analysis (PCA) is carried out on the coefficient matrices. The PCA-based feature distribution is analyzed using Kernel Density Estimation (KDE), allowing the determination of a specific pattern for each fault-specific AE signal. Moreover, the waveform and frequency content of the AE signals are examined in depth and compared with fundamental assumptions reported in this field. A correlation between the identified patterns and failure modes is achieved. The proposed method improves damage classification and can be used as a non-destructive evaluation tool.
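
The CWT-then-PCA front end of such a pipeline can be sketched on synthetic burst signals. This is a toy, not the paper's method: the Morlet implementation, scale range, and the two "AE-like" burst classes are invented, and the KDE stage is omitted.

```python
import numpy as np

def morlet_cwt(x, scales, w0=5.0):
    """Minimal Morlet CWT magnitude via direct convolution (one row per scale)."""
    rows = []
    for s in scales:
        t = np.arange(-4 * s, 4 * s + 1)
        psi = np.exp(1j * w0 * t / s) * np.exp(-0.5 * (t / s) ** 2) / np.sqrt(s)
        rows.append(np.abs(np.convolve(x, psi, mode="same")))
    return np.array(rows)

def pca_scores(features, n_pc=2):
    """Project feature vectors onto their first principal components (SVD)."""
    Xc = features - features.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_pc].T

# Two synthetic AE-like burst classes differing in dominant frequency
rng = np.random.default_rng(6)
scales = np.arange(2, 20)
feats = []
for freq in [0.05] * 10 + [0.20] * 10:
    burst = np.sin(2 * np.pi * freq * np.arange(256)) * np.hanning(256)
    burst += 0.1 * rng.normal(size=256)
    feats.append(morlet_cwt(burst, scales).mean(axis=1))  # per-scale profile
scores = pca_scores(np.array(feats))
sep = scores[:10, 0].mean() - scores[10:, 0].mean()
print("class separation along PC1:", sep)
```

The per-scale energy profile acts as the feature vector, so bursts with different dominant frequencies separate along the leading principal component; the published method then characterizes each cluster's density with KDE.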

  12. Minimal Network Topologies for Signal Processing during Collective Cell Chemotaxis.

    PubMed

    Yue, Haicen; Camley, Brian A; Rappel, Wouter-Jan

    2018-06-19

    Cell-cell communication plays an important role in collective cell migration. However, it remains unclear how cells in a group cooperatively process external signals to determine the group's direction of motion. Although the topology of signaling pathways is vitally important in single-cell chemotaxis, the signaling topology for collective chemotaxis has not been systematically studied. Here, we combine mathematical analysis and simulations to find minimal network topologies for multicellular signal processing in collective chemotaxis. We focus on border cell cluster chemotaxis in the Drosophila egg chamber, in which responses to several experimental perturbations of the signaling network are known. Our minimal signaling network includes only four elements: a chemoattractant, the protein Rac (indicating cell activation), cell protrusion, and a hypothesized global factor responsible for cell-cell interaction. Experimental data on cell protrusion statistics allows us to systematically narrow the number of possible topologies from more than 40,000,000 to only six minimal topologies with six interactions between the four elements. This analysis does not require a specific functional form of the interactions, and only qualitative features are needed; it is thus robust to many modeling choices. Simulations of a stochastic biochemical model of border cell chemotaxis show that the qualitative selection procedure accurately determines which topologies are consistent with the experiment. We fit our model for all six proposed topologies; each produces results that are consistent with all experimentally available data. Finally, we suggest experiments to further discriminate possible pathway topologies. Copyright © 2018 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  13. The invariant statistical rule of aerosol scattering pulse signal modulated by random noise

    NASA Astrophysics Data System (ADS)

    Yan, Zhen-gang; Bian, Bao-Min; Yang, Juan; Peng, Gang; Li, Zhen-hua

    2010-11-01

    A model of random background noise acting on particle signals is established to study the impact of the photoelectric sensor's background noise in a laser airborne particle counter on the statistical character of the aerosol scattering pulse signals. The results show that the noise broadens the statistical distribution of the particle measurements. Further numerical study shows that the output signal amplitude retains the same distribution when airborne particles with a lognormal distribution are modulated by random noise that is also lognormally distributed; that is, the distribution family is invariant under the modulation. Based on this model, the background noise of the photoelectric sensor and the counting distributions of the aerosol scattering pulse signals are obtained and analyzed using a high-speed data acquisition card (PCI-9812). The experimental and simulation results are in good agreement.
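
The invariance claim has a simple interpretation if the modulation is taken as multiplicative: the product of two lognormal variables is again lognormal, because their logarithms are Gaussian and add. A quick numerical check under that assumption (the parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200_000
signal = rng.lognormal(mean=0.0, sigma=0.4, size=n)    # particle amplitudes
noise = rng.lognormal(mean=0.0, sigma=0.2, size=n)     # multiplicative noise
out = signal * noise                                   # modulated output

# In the log domain the two Gaussians add, so the output stays lognormal
# with variance 0.4**2 + 0.2**2 — the "invariance" of the distribution family.
log_out = np.log(out)
print("log-mean  :", log_out.mean())                   # near 0.0
print("log-sigma :", log_out.std(), "expected", np.hypot(0.4, 0.2))
```

The broadening reported in the abstract appears here as the increased log-sigma of the output relative to the clean signal.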

  14. Decadal power in land air temperatures: Is it statistically significant?

    NASA Astrophysics Data System (ADS)

    Thejll, Peter A.

    2001-12-01

    The geographical distribution and properties of the well-known 10-11 year signal in terrestrial temperature records are investigated. By analyzing the Global Historical Climate Network data for surface air temperatures we verify that the signal is strongest in North America and is similar in nature to that reported earlier by R. G. Currie. The decadal signal is statistically significant for individual stations, but it is not possible to show that the signal is statistically significant globally, using strict tests. In North America, during the twentieth century, the decadal variability in the solar activity cycle is associated with the decadal part of the North Atlantic Oscillation index series in such a way that both of these signals correspond to the same spatial pattern of cooling and warming. A method for testing statistical results with Monte Carlo trials on data fields with specified temporal structure and specific spatial correlation retained is presented.
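
A single-series version of the Monte Carlo testing idea can be sketched as follows: compare decadal-band periodogram power of an observed record against surrogates with the same assumed temporal structure (here AR(1)) but no decadal cycle. The AR coefficient, signal amplitude, period, and band edges are invented for the illustration and carry no spatial-correlation component.

```python
import numpy as np

def ar1_surrogate(n, phi, sigma, rng):
    """Generate an AR(1) series with lag-1 coefficient phi."""
    x = np.zeros(n)
    e = rng.normal(scale=sigma, size=n)
    for i in range(1, n):
        x[i] = phi * x[i - 1] + e[i]
    return x

def band_power(x, lo, hi):
    """Summed periodogram power in a frequency band (cycles/sample)."""
    f = np.fft.rfftfreq(len(x))
    p = np.abs(np.fft.rfft(x - x.mean())) ** 2
    return p[(f >= lo) & (f <= hi)].sum()

rng = np.random.default_rng(8)
n = 1200                                        # e.g. 100 years of monthly data
t = np.arange(n)
obs = ar1_surrogate(n, 0.6, 1.0, rng) + np.sin(2 * np.pi * t / 126)
decadal = band_power(obs, 1 / 150, 1 / 100)     # roughly 8.3-12.5 year band

# Null distribution: AR(1) noise with no decadal cycle
null = [band_power(ar1_surrogate(n, 0.6, 1.0, rng), 1 / 150, 1 / 100)
        for _ in range(200)]
p_value = float(np.mean(np.array(null) >= decadal))
print("Monte Carlo p-value for decadal band power:", p_value)
```

The paper's field-level test extends this by generating surrogates that also preserve the spatial correlation structure across stations, which a per-station test like this cannot capture.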

  15. Information theoretical assessment of digital imaging systems

    NASA Technical Reports Server (NTRS)

    John, Sarah; Rahman, Zia-Ur; Huck, Friedrich O.; Reichenbach, Stephen E.

    1990-01-01

    The end-to-end performance of image gathering, coding, and restoration as a whole is considered. This approach is based on the pivotal relationship that exists between the spectral information density of the transmitted signal and the restorability of images from this signal. The information-theoretical assessment accounts for (1) the information density and efficiency of the acquired signal as a function of the image-gathering system design and the radiance-field statistics, and (2) the improvement in information efficiency and data compression that can be gained by combining image gathering with coding to reduce the signal redundancy and irrelevancy. It is concluded that images can be restored with better quality and from fewer data as the information efficiency of the data is increased. The restoration correctly explains the image gathering and coding processes and effectively suppresses the image-display degradations.

  16. Information theoretical assessment of digital imaging systems

    NASA Astrophysics Data System (ADS)

    John, Sarah; Rahman, Zia-Ur; Huck, Friedrich O.; Reichenbach, Stephen E.

    1990-10-01

    The end-to-end performance of image gathering, coding, and restoration as a whole is considered. This approach is based on the pivotal relationship that exists between the spectral information density of the transmitted signal and the restorability of images from this signal. The information-theoretical assessment accounts for (1) the information density and efficiency of the acquired signal as a function of the image-gathering system design and the radiance-field statistics, and (2) the improvement in information efficiency and data compression that can be gained by combining image gathering with coding to reduce the signal redundancy and irrelevancy. It is concluded that images can be restored with better quality and from fewer data as the information efficiency of the data is increased. The restoration correctly explains the image gathering and coding processes and effectively suppresses the image-display degradations.

  17. Intrinsic noise analysis and stochastic simulation on transforming growth factor beta signal pathway

    NASA Astrophysics Data System (ADS)

    Wang, Lu; Ouyang, Qi

    2010-10-01

    A typical biological cell lives in a small volume at room temperature; the noise effect on the cell signal transduction pathway may play an important role in its dynamics. Here, using the transforming growth factor-β signal transduction pathway as an example, we report our stochastic simulations of the dynamics of the pathway and introduce a linear noise approximation method to calculate the transient intrinsic noise of pathway components. We compare the numerical solutions of the linear noise approximation with the statistical results of chemical Langevin equations and find that they are in quantitative agreement with each other. When the transforming growth factor-β dose decreases to a low level, the time evolution of the noise fluctuation of the nuclear Smad2-Smad4 complex indicates an anomalous enhancement during the transient signal activation process.
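Checking a linear-noise-approximation prediction against a chemical Langevin simulation can be illustrated on a generic birth-death process (a stand-in chosen for brevity; the TGF-β pathway model is far larger). For dX = (k - γX)dt + sqrt(k + γX)dW, the LNA predicts Poisson-like stationary statistics: variance ≈ mean ≈ k/γ.

```python
import random

def langevin_birth_death(k, gamma, dt, n_steps, seed=3):
    """Euler-Maruyama integration of the chemical Langevin equation for a
    birth-death process: dX = (k - gamma*X) dt + sqrt(k + gamma*X) dW."""
    rng = random.Random(seed)
    x = k / gamma  # start at the deterministic steady state
    path = []
    for _ in range(n_steps):
        drift = k - gamma * x
        noise = (max(k + gamma * x, 0.0) * dt) ** 0.5 * rng.gauss(0.0, 1.0)
        x = max(x + drift * dt + noise, 0.0)
        path.append(x)
    return path

path = langevin_birth_death(k=50.0, gamma=1.0, dt=0.01, n_steps=200_000)
burn = path[20_000:]  # discard transient
mean = sum(burn) / len(burn)
var = sum((v - mean) ** 2 for v in burn) / len(burn)
# LNA prediction for this system: var ~ mean ~ k/gamma = 50
```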

  18. A signal processing framework for simultaneous detection of multiple environmental contaminants

    NASA Astrophysics Data System (ADS)

    Chakraborty, Subhadeep; Manahan, Michael P.; Mench, Matthew M.

    2013-11-01

    The possibility of large-scale attacks using chemical warfare agents (CWAs) has exposed the critical need for fundamental research enabling the reliable, unambiguous and early detection of trace CWAs and toxic industrial chemicals. This paper presents a unique approach for the identification and classification of simultaneously present multiple environmental contaminants by perturbing an electrochemical (EC) sensor with an oscillating potential for the extraction of statistically rich information from the current response. The dynamic response, being a function of the degree and mechanism of contamination, is then processed with a symbolic dynamic filter for the extraction of representative patterns, which are then classified using a trained neural network. The approach presented in this paper promises to extend the sensing power and sensitivity of these EC sensors by augmenting and complementing sensor technology with state-of-the-art embedded real-time signal processing capabilities.
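The symbolic dynamic filtering step can be sketched as follows: the sensor's dynamic current response is quantized into a small symbol alphabet, and the symbol-occurrence probability vector serves as the low-dimensional pattern passed to a trained classifier. This equal-width partitioning is a simplified stand-in for the partitioning actually used in the paper.

```python
def symbolize(signal, n_symbols=4):
    """Quantize each sample into one of n_symbols equal-width amplitude cells."""
    lo, hi = min(signal), max(signal)
    width = (hi - lo) / n_symbols or 1.0  # guard against a constant signal
    return [min(int((v - lo) / width), n_symbols - 1) for v in signal]

def state_probabilities(symbols, n_symbols=4):
    """Symbol-occurrence probability vector: the representative pattern that
    would be fed to the trained neural network classifier."""
    counts = [0] * n_symbols
    for s in symbols:
        counts[s] += 1
    return [c / len(symbols) for c in counts]

symbols = symbolize([0.0, 1.0, 2.0, 3.0])
feature = state_probabilities(symbols)
```

Different contaminants (and mixtures) shift the response statistics and hence the probability vector, which is what makes the feature separable by a classifier.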

  19. Classical-processing and quantum-processing signal separation methods for qubit uncoupling

    NASA Astrophysics Data System (ADS)

    Deville, Yannick; Deville, Alain

    2012-12-01

    The Blind Source Separation problem consists in estimating a set of unknown source signals from their measured combinations. Until now, it had been investigated only in a non-quantum framework. We propose its first quantum extensions. We thus introduce the Quantum Source Separation field, investigating both its blind and non-blind configurations. More precisely, we show how to retrieve individual quantum bits (qubits) from only the global state resulting from their undesired coupling. We consider cylindrical-symmetry Heisenberg coupling, which occurs, e.g., when two electron spins interact through exchange. We first propose several qubit uncoupling methods which typically measure repeatedly the coupled quantum states resulting from individual qubit preparations, and which then statistically process the classical data provided by these measurements. Numerical tests prove the effectiveness of these methods. We then derive a combination of quantum gates for performing qubit uncoupling, thus avoiding repeated qubit preparations and irreversible measurements.

  20. The cardiorespiratory interaction: a nonlinear stochastic model and its synchronization properties

    NASA Astrophysics Data System (ADS)

    Bahraminasab, A.; Kenwright, D.; Stefanovska, A.; McClintock, P. V. E.

    2007-06-01

    We address the problem of interactions between the phases of the cardiac and respiratory oscillatory components. The coupling between these two quantities is investigated experimentally using the theory of stochastic Markovian processes. The so-called Markov analysis allows us to derive nonlinear stochastic equations for the reconstruction of the cardiorespiratory signals. The properties of these equations provide interesting new insights into the strength and direction of coupling, which enable us to separate the coupling into two parts: deterministic and stochastic. It is shown that the synchronization behavior of the reconstructed signals is statistically identical to that of the original data.
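The Markov analysis referred to above estimates drift and diffusion (Kramers-Moyal) coefficients from conditional moments of the signal increments; the drift carries the deterministic part of the dynamics and the diffusion the stochastic part. A minimal sketch on a synthetic Ornstein-Uhlenbeck process (illustrative, not the cardiorespiratory data):

```python
import random

def simulate_ou(theta, sigma, dt, n, seed=2):
    """Euler-Maruyama path of the Ornstein-Uhlenbeck SDE dx = -theta*x dt + sigma dW."""
    rng = random.Random(seed)
    x = [0.0]
    for _ in range(n - 1):
        x.append(x[-1] - theta * x[-1] * dt + sigma * dt ** 0.5 * rng.gauss(0.0, 1.0))
    return x

def kramers_moyal(x, dt, n_bins=20, min_count=100):
    """Binned conditional increment moments: D1(x) = <dx|x>/dt, D2(x) = <dx^2|x>/(2 dt)."""
    lo, hi = min(x), max(x)
    width = (hi - lo) / n_bins
    acc = [[0.0, 0.0, 0] for _ in range(n_bins)]
    for a, b in zip(x[:-1], x[1:]):
        i = min(int((a - lo) / width), n_bins - 1)
        d = b - a
        acc[i][0] += d
        acc[i][1] += d * d
        acc[i][2] += 1
    centers, d1, d2 = [], [], []
    for i, (s1, s2, c) in enumerate(acc):
        if c >= min_count:  # skip sparsely visited bins
            centers.append(lo + (i + 0.5) * width)
            d1.append(s1 / (c * dt))
            d2.append(s2 / (2 * c * dt))
    return centers, d1, d2

x = simulate_ou(theta=1.0, sigma=1.0, dt=0.01, n=100_000)
centers, d1, d2 = kramers_moyal(x, dt=0.01)
# deterministic part: D1 should be linear in x with slope -theta
drift_slope = sum(c * v for c, v in zip(centers, d1)) / sum(c * c for c in centers)
# stochastic part: D2 should be flat at sigma**2 / 2
diffusion = sum(d2) / len(d2)
```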

  1. Automated process control for plasma etching

    NASA Astrophysics Data System (ADS)

    McGeown, Margaret; Arshak, Khalil I.; Murphy, Eamonn

    1992-06-01

    This paper discusses the development and implementation of a rule-based system which assists in providing automated process control for plasma etching. The heart of the system is to establish a correspondence between a particular data pattern -- sensor or data signals -- and one or more modes of failure, i.e., a data-driven monitoring approach. The objective of this rule-based system, PLETCHSY, is to create a program combining statistical process control (SPC) and fault diagnosis to help control a manufacturing process which varies over time. This can be achieved by building a process control system (PCS) that provides a facility to monitor the performance of the process by obtaining and analyzing the data relating to the appropriate process variables. Process sensor/status signals are input into an SPC module. If trends are present, the SPC module outputs the last seven control points, a pattern which is represented by either regression or scoring. The pattern is passed to the rule-based module. When the rule-based system recognizes a pattern, it starts the diagnostic process using the pattern. If the process is considered to be going out of control, advice is provided about actions which should be taken to bring the process back into control.

  2. Acoustic emission characteristics of a single cylinder diesel generator at various loads and with a failing injector

    NASA Astrophysics Data System (ADS)

    Dykas, Brian; Harris, James

    2017-09-01

    Acoustic emission sensing techniques have been applied in recent years to dynamic machinery with varying degrees of success in diagnosing various component faults and distinguishing between operating conditions. This work explores basic properties of acoustic emission signals measured on a small single cylinder diesel engine in a laboratory setting. As reported in other works in the open literature, the measured acoustic emission on the engine is mostly continuous mode and individual burst events are generally not readily identifiable. Therefore, the AE are processed into the local (instantaneous) root mean square (rms) value of the signal which is averaged over many cycles to obtain a mean rms AE in the crank angle domain. Crank-resolved spectral representation of the AE is also given but rigorous investigation of the AE spectral qualities is left to future study. Cycle-to-cycle statistical dispersion of the AE signal is considered to highlight highly variable engine processes. Engine speed was held constant but load conditions are varied to investigate AE signal sensitivity to operating condition. Furthermore, during the course of testing the fuel injector developed a fault and acoustic emission signals were captured and several signal attributes were successful in distinguishing this altered condition. The sampling and use of instantaneous rms acoustic emission signal demonstrated promise for non-intrusive and economical change detection of engine injection, combustion and valve events.

  3. Flies and humans share a motion estimation strategy that exploits natural scene statistics

    PubMed Central

    Clark, Damon A.; Fitzgerald, James E.; Ales, Justin M.; Gohl, Daryl M.; Silies, Marion A.; Norcia, Anthony M.; Clandinin, Thomas R.

    2014-01-01

    Sighted animals extract motion information from visual scenes by processing spatiotemporal patterns of light falling on the retina. The dominant models for motion estimation exploit intensity correlations only between pairs of points in space and time. Moving natural scenes, however, contain more complex correlations. Here we show that fly and human visual systems encode the combined direction and contrast polarity of moving edges using triple correlations that enhance motion estimation in natural environments. Both species extract triple correlations with neural substrates tuned for light or dark edges, and sensitivity to specific triple correlations is retained even as light and dark edge motion signals are combined. Thus, both species separately process light and dark image contrasts to capture motion signatures that can improve estimation accuracy. This striking convergence argues that statistical structures in natural scenes have profoundly affected visual processing, driving a common computational strategy over 500 million years of evolution. PMID:24390225

  4. Probabilistic models in human sensorimotor control

    PubMed Central

    Wolpert, Daniel M.

    2009-01-01

    Sensory and motor uncertainty form a fundamental constraint on human sensorimotor control. Bayesian decision theory (BDT) has emerged as a unifying framework to understand how the central nervous system performs optimal estimation and control in the face of such uncertainty. BDT has two components: Bayesian statistics and decision theory. Here we review Bayesian statistics and show how it applies to estimating the state of the world and our own body. Recent results suggest that when learning novel tasks we are able to learn the statistical properties of both the world and our own sensory apparatus so as to perform estimation using Bayesian statistics. We review studies which suggest that humans can combine multiple sources of information to form maximum likelihood estimates, can incorporate prior beliefs about possible states of the world so as to generate maximum a posteriori estimates and can use Kalman filter-based processes to estimate time-varying states. Finally, we review Bayesian decision theory in motor control and how the central nervous system processes errors to determine loss functions and optimal actions. We review results that suggest we plan movements based on statistics of our actions that result from signal-dependent noise on our motor outputs. Taken together these studies provide a statistical framework for how the motor system performs in the presence of uncertainty. PMID:17628731
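The maximum likelihood cue-combination result reviewed above has a one-line form for independent Gaussian cues: weight each source by its inverse variance, and the combined variance shrinks below that of the best single cue. A minimal sketch with made-up numbers:

```python
def combine_cues(estimates, variances):
    """Maximum likelihood fusion of independent Gaussian cues:
    inverse-variance weighting, with reduced combined variance."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    mean = sum(w * e for w, e in zip(weights, estimates)) / total
    return mean, 1.0 / total

# e.g. a sharp cue (variance 1) and a blurry cue (variance 4) of the same quantity
estimate, variance = combine_cues([10.0, 14.0], [1.0, 4.0])
```

The combined estimate (10.8) sits nearer the more reliable cue, and the combined variance (0.8) is lower than either input variance, which is the behavioral signature these studies test for.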

  5. ECG Identification System Using Neural Network with Global and Local Features

    ERIC Educational Resources Information Center

    Tseng, Kuo-Kun; Lee, Dachao; Chen, Charles

    2016-01-01

    This paper proposes a human identification system via extracted electrocardiogram (ECG) signals. Two hierarchical classification structures, based on a global shape feature and a local statistical feature, are used to classify ECG signals. The global shape feature represents the outline information of ECG signals and the local statistical feature extracts the…

  6. Hammerstein system representation of financial volatility processes

    NASA Astrophysics Data System (ADS)

    Capobianco, E.

    2002-05-01

    We show new modeling aspects of stock return volatility processes by first representing them through Hammerstein systems, and by then approximating the observed and transformed dynamics with wavelet-based atomic dictionaries. We thus propose a hybrid statistical methodology for volatility approximation and non-parametric estimation, and aim to use the information embedded in a bank of volatility sources obtained by decomposing the observed signal with multiresolution techniques. Scale-dependent information refers both to market activity inherent to different temporally aggregated trading horizons and to a variable degree of sparsity in representing the signal. A decomposition of the expansion coefficients into least dependent coordinates is then implemented through Independent Component Analysis. Based on the described steps, the features of volatility can be more effectively detected through global and greedy algorithms.
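A multiresolution decomposition of the kind described can be sketched with an orthonormal Haar filter bank (a crude stand-in for the wavelet-based atomic dictionaries in the paper): each detail level isolates activity at one temporally aggregated horizon, and orthonormality preserves total energy across scales.

```python
def haar_decompose(x, levels):
    """Orthonormal Haar multiresolution analysis: returns one detail signal per
    scale plus the final approximation. len(x) must be divisible by 2**levels."""
    details = []
    approx = list(x)
    for _ in range(levels):
        pairs = range(len(approx) // 2)
        details.append([(approx[2 * i] - approx[2 * i + 1]) / 2 ** 0.5 for i in pairs])
        approx = [(approx[2 * i] + approx[2 * i + 1]) / 2 ** 0.5 for i in pairs]
    return details, approx

# e.g. a short series of squared returns as a volatility proxy (made-up values)
x = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]
details, approx = haar_decompose(x, levels=3)
```

In the paper's pipeline the coefficients at each scale would then be rotated into least dependent coordinates via ICA before feature detection.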

  7. Cassini finds molecular hydrogen in the Enceladus plume: Evidence for hydrothermal processes

    NASA Astrophysics Data System (ADS)

    Waite, J. Hunter; Glein, Christopher R.; Perryman, Rebecca S.; Teolis, Ben D.; Magee, Brian A.; Miller, Greg; Grimes, Jacob; Perry, Mark E.; Miller, Kelly E.; Bouquet, Alexis; Lunine, Jonathan I.; Brockwell, Tim; Bolton, Scott J.

    2017-04-01

    Saturn’s moon Enceladus has an ice-covered ocean; a plume of material erupts from cracks in the ice. The plume contains chemical signatures of water-rock interaction between the ocean and a rocky core. We used the Ion Neutral Mass Spectrometer onboard the Cassini spacecraft to detect molecular hydrogen in the plume. By using the instrument’s open-source mode, background processes of hydrogen production in the instrument were minimized and quantified, enabling the identification of a statistically significant signal of hydrogen native to Enceladus. We find that the most plausible source of this hydrogen is ongoing hydrothermal reactions of rock containing reduced minerals and organic materials. The relatively high hydrogen abundance in the plume signals thermodynamic disequilibrium that favors the formation of methane from CO2 in Enceladus’ ocean.

  8. Concepts for on board satellite image registration. Volume 4: Impact of data set selection on satellite on board signal processing

    NASA Technical Reports Server (NTRS)

    Ruedger, W. H.; Aanstoos, J. V.; Snyder, W. E.

    1982-01-01

    The NASA NEEDS program goals present a requirement for on-board signal processing to achieve user-compatible, information-adaptive data acquisition. This volume addresses the impact of data set selection on the data formatting required for efficient telemetering of the acquired satellite sensor data. More specifically, the FILE algorithm developed by Martin-Marietta provides a means of determining which pixels from the data stream to retain, effecting an improvement in the achievable system throughput. It will be seen that, owing to the lack of statistical stationarity in the spatial distribution of cloud cover, periods exist where data acquisition rates exceed the throughput capability. The study therefore addresses various approaches to data compression and truncation as applicable to this sensor mission.

  9. Fluorescence lifetime measurements in flow cytometry

    NASA Astrophysics Data System (ADS)

    Beisker, Wolfgang; Klocke, Axel

    1997-05-01

    Fluorescence lifetime measurements provide insights into the dynamic and structural properties of dyes and their micro-environment. The implementation of fluorescence lifetime measurements in flow cytometric systems makes it possible to monitor large cell and particle populations with high statistical significance. In our system, a modulated laser beam is used for excitation, and the fluorescence signal, recorded with a fast computer-controlled digital oscilloscope, is processed digitally to determine the phase shift with respect to a reference beam by fast Fourier transform. Total fluorescence intensity as well as other parameters can be determined simultaneously from the same fluorescence signal. We use an epi-illumination design so that high numerical apertures can be used to collect as much light as possible, ensuring detection of even weak fluorescence. Data storage and processing are handled in the same way as for slit-scan flow cytometric data, using our data analysis system. The results are stored, displayed, combined with other parameters, and analyzed as normal listmode data. In our report we carefully discuss the signal-to-noise ratio for analog and digitally processed lifetime signals to evaluate the theoretical minimum fluorescence intensity for lifetime measurements. Applications presented include DNA staining, parameters of cell function, and various applications in non-mammalian cells such as algae.
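The phase-shift lifetime estimate can be sketched as follows: extract the phase of the excitation (reference) and emission signals at the modulation frequency via a discrete Fourier transform, then apply the phase-fluorometry relation tau = tan(Δφ)/(2πf). The modulation frequency and lifetime below are illustrative values, not the authors' instrument settings.

```python
import cmath
import math

def phase_at(signal, freq, dt):
    """Phase of the Fourier component of `signal` at `freq` (Hz), sampled at dt."""
    s = sum(v * cmath.exp(-2j * math.pi * freq * k * dt)
            for k, v in enumerate(signal))
    return cmath.phase(s)

def lifetime_from_phase(excitation, emission, freq, dt):
    """Phase-fluorometry estimate: tau = tan(phase shift) / (2*pi*f)."""
    dphi = (phase_at(excitation, freq, dt) - phase_at(emission, freq, dt)) % (2 * math.pi)
    return math.tan(dphi) / (2 * math.pi * freq)

# synthetic example: 10 MHz modulation, 5 ns lifetime (illustrative values)
f_mod, tau_true, dt = 10e6, 5e-9, 1e-9
phi = math.atan(2 * math.pi * f_mod * tau_true)  # emission lags by this phase
t = [k * dt for k in range(1000)]  # exactly 10 modulation cycles
excitation = [math.cos(2 * math.pi * f_mod * tk) for tk in t]
emission = [math.cos(2 * math.pi * f_mod * tk - phi) for tk in t]
tau_est = lifetime_from_phase(excitation, emission, f_mod, dt)
```

Sampling an integer number of modulation cycles keeps the single-bin DFT phase estimate free of spectral leakage.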

  10. Drug Adverse Event Detection in Health Plan Data Using the Gamma Poisson Shrinker and Comparison to the Tree-based Scan Statistic

    PubMed Central

    Brown, Jeffrey S.; Petronis, Kenneth R.; Bate, Andrew; Zhang, Fang; Dashevsky, Inna; Kulldorff, Martin; Avery, Taliser R.; Davis, Robert L.; Chan, K. Arnold; Andrade, Susan E.; Boudreau, Denise; Gunter, Margaret J.; Herrinton, Lisa; Pawloski, Pamala A.; Raebel, Marsha A.; Roblin, Douglas; Smith, David; Reynolds, Robert

    2013-01-01

    Background: Drug adverse event (AE) signal detection using the Gamma Poisson Shrinker (GPS) is commonly applied in spontaneous reporting. AE signal detection using large observational health plan databases can expand medication safety surveillance. Methods: Using data from nine health plans, we conducted a pilot study to evaluate the implementation and findings of the GPS approach for two antifungal drugs, terbinafine and itraconazole, and two diabetes drugs, pioglitazone and rosiglitazone. We evaluated 1676 diagnosis codes grouped into 183 different clinical concepts and four levels of granularity. Several signaling thresholds were assessed. GPS results were compared to findings from a companion study using the identical analytic dataset but an alternative statistical method—the tree-based scan statistic (TreeScan). Results: We identified 71 statistical signals across two signaling thresholds and two methods, including closely-related signals of overlapping diagnosis definitions. Initial review found that most signals represented known adverse drug reactions or confounding. About 31% of signals met the highest signaling threshold. Conclusions: The GPS method was successfully applied to observational health plan data in a distributed data environment as a drug safety data mining method. There was substantial concordance between the GPS and TreeScan approaches. Key method implementation decisions relate to defining exposures and outcomes and informed choice of signaling thresholds. PMID:24300404

  11. A new similarity index for nonlinear signal analysis based on local extrema patterns

    NASA Astrophysics Data System (ADS)

    Niknazar, Hamid; Motie Nasrabadi, Ali; Shamsollahi, Mohammad Bagher

    2018-02-01

    Common similarity measures of time domain signals such as cross-correlation and Symbolic Aggregate approximation (SAX) are not appropriate for nonlinear signal analysis. This is because of the high sensitivity of nonlinear systems to initial points. Therefore, a similarity measure for nonlinear signal analysis must be invariant to initial points and quantify the similarity by considering the main dynamics of signals. The statistical behavior of local extrema (SBLE) method was previously proposed to address this problem. The SBLE similarity index uses quantized amplitudes of local extrema to quantify the dynamical similarity of signals by considering patterns of sequential local extrema. By adding time information of local extrema as well as fuzzifying quantized values, this work proposes a new similarity index for nonlinear and long-term signal analysis, which extends the SBLE method. These new features provide more information about signals and reduce noise sensitivity through fuzzification. A number of practical tests were performed to demonstrate the ability of the method in nonlinear signal clustering and classification on synthetic data. In addition, epileptic seizure detection based on electroencephalography (EEG) signal processing was performed using the proposed similarity index to demonstrate the method's potential as a real-world application tool.
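The core idea, quantifying similarity through patterns of sequential quantized local extrema rather than raw samples, can be sketched as follows (without the time information and fuzzification that the paper adds):

```python
import math
import random

def local_extrema(signal, n_levels=4):
    """Sequence of quantized local-extremum amplitudes (the pattern alphabet)."""
    lo, hi = min(signal), max(signal)
    width = (hi - lo) / n_levels or 1.0
    ext = []
    for i in range(1, len(signal) - 1):
        # strict local max or min
        if (signal[i] - signal[i - 1]) * (signal[i] - signal[i + 1]) > 0:
            ext.append(min(int((signal[i] - lo) / width), n_levels - 1))
    return ext

def pattern_similarity(x, y, n_levels=4, order=2):
    """Histogram intersection over patterns of `order` consecutive extrema."""
    def hist(sig):
        ext = local_extrema(sig, n_levels)
        h = {}
        for i in range(len(ext) - order + 1):
            key = tuple(ext[i:i + order])
            h[key] = h.get(key, 0) + 1
        total = sum(h.values()) or 1
        return {k: v / total for k, v in h.items()}
    hx, hy = hist(x), hist(y)
    return sum(min(hx.get(k, 0.0), hy.get(k, 0.0)) for k in set(hx) | set(hy))

s1 = [math.sin(0.3 * t) for t in range(500)]
s2 = [math.sin(0.3 * t + 1.0) for t in range(500)]  # same dynamics, shifted start
rng = random.Random(1)
noise = [rng.uniform(-1.0, 1.0) for _ in range(500)]
sim_same = pattern_similarity(s1, s2)
sim_diff = pattern_similarity(s1, noise)
```

The two sinusoids score near 1 despite different initial points because they generate the same extrema patterns, which is exactly the initial-condition invariance the method targets.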

  12. Application of a Statistical Linear Time-Varying System Model of High Grazing Angle Sea Clutter for Computing Interference Power

    DTIC Science & Technology

    2017-12-08

    3. R. Price and P. E. Green, Jr., “Signal processing in radar astronomy – communication via fluctuating multipath media,” Rept. 234, MIT Lincoln Laboratory (October 1960). 4. P. E. Green, Jr., “Radar astronomy measurement techniques,” Rept. 282, MIT Lincoln Laboratory (December 1962).

  13. Design of radar receivers

    NASA Astrophysics Data System (ADS)

    Sokolov, M. A.

    This handbook treats the design and analysis of pulsed radar receivers, with emphasis on elements (especially IC elements) that implement optimal and suboptimal algorithms. The design methodology is developed from the viewpoint of statistical communications theory. Particular consideration is given to the synthesis of single-channel and multichannel detectors, the design of analog and digital signal-processing devices, and the analysis of IF amplifiers.

  14. Statistical strategy for anisotropic adventitia modelling in IVUS.

    PubMed

    Gil, Debora; Hernández, Aura; Rodriguez, Oriol; Mauri, Josepa; Radeva, Petia

    2006-06-01

    Vessel plaque assessment by analysis of intravascular ultrasound sequences is a useful tool for cardiac disease diagnosis and intervention. Manual detection of the luminal (inner) and media-adventitia (external) vessel borders is the main activity of physicians in the process of lumen narrowing (plaque) quantification. The difficulty of defining vessel border descriptors, together with shadows, artifacts, and blurred signal response due to the physical properties of ultrasound, hampers automated adventitia segmentation. In order to approach such a complex problem efficiently, we propose blending advanced anisotropic filtering operators and statistical classification techniques into a vessel border modelling strategy. Our systematic statistical analysis shows that the reported adventitia detection achieves an accuracy in the range of interobserver variability regardless of plaque nature, vessel geometry, and incomplete vessel borders.

  15. Statistical analysis of RHIC beam position monitors performance

    NASA Astrophysics Data System (ADS)

    Calaga, R.; Tomás, R.

    2004-04-01

    A detailed statistical analysis of beam position monitors (BPM) performance at RHIC is a critical factor in improving regular operations and future runs. Robust identification of malfunctioning BPMs plays an important role in any orbit or turn-by-turn analysis. Singular value decomposition and Fourier transform methods, which have evolved as powerful numerical techniques in signal processing, will aid in such identification from BPM data. This is the first attempt at RHIC to use a large set of data to statistically enhance the capability of these two techniques and determine BPM performance. A comparison from run 2003 data shows striking agreement between the two methods and hence can be used to improve BPM functioning at RHIC and possibly other accelerators.
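The SVD-based identification can be sketched as follows: turn-by-turn readings from healthy BPMs are dominated by a common betatron mode, so a BPM whose signal power is mostly unexplained by the leading singular mode is suspect. This toy version extracts the dominant temporal mode by power iteration on synthetic data; the number of BPMs, amplitudes, and tune are made up.

```python
import math
import random

def leading_temporal_mode(X, iters=200):
    """Power iteration on X^T X: dominant right singular vector (temporal mode)."""
    n_turns = len(X[0])
    v = [1.0] * n_turns
    for _ in range(iters):
        w = [sum(row[t] * v[t] for t in range(n_turns)) for row in X]       # X v
        v = [sum(X[b][t] * w[b] for b in range(len(X))) for t in range(n_turns)]  # X^T w
        norm = math.sqrt(sum(c * c for c in v))
        v = [c / norm for c in v]
    return v

def residual_ratios(X):
    """Per-BPM fraction of signal power not explained by the dominant mode."""
    v = leading_temporal_mode(X)
    ratios = []
    for row in X:
        proj = sum(a * b for a, b in zip(row, v))
        power = sum(a * a for a in row)
        ratios.append(1.0 - proj * proj / power)
    return ratios

# synthetic turn-by-turn data: 8 BPMs share a betatron oscillation, BPM 5 is noise-only
rng = random.Random(0)
turns = 256
X = []
for b in range(8):
    if b == 5:
        X.append([rng.gauss(0.0, 1.0) for _ in range(turns)])  # malfunctioning BPM
    else:
        amp = 1.0 + 0.1 * b
        X.append([amp * math.sin(2 * math.pi * 0.22 * t) + rng.gauss(0.0, 0.05)
                  for t in range(turns)])
ratios = residual_ratios(X)
faulty = max(range(8), key=lambda b: ratios[b])
```

A full SVD additionally yields the orthogonal noise modes that a Fourier-based check would cross-validate.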

  16. Computer algorithm for analyzing and processing borehole strainmeter data

    USGS Publications Warehouse

    Langbein, John O.

    2010-01-01

    The newly installed Plate Boundary Observatory (PBO) strainmeters record signals from tectonic activity, Earth tides, and atmospheric pressure. Important information about tectonic processes may occur at amplitudes at and below tidal strains and pressure loading. If incorrect assumptions are made regarding the background noise in the strain data, then the estimates of tectonic signal amplitudes may be incorrect. Furthermore, the use of simplifying assumptions that data are uncorrelated can lead to incorrect results, and pressure loading and tides may not be completely removed from the raw data. Instead, any algorithm used to process strainmeter data must incorporate the strong temporal correlations that are inherent in these data. The technique described here uses least squares but employs data covariance that describes the temporal correlation of strainmeter data. There are several advantages to this method since many parameters are estimated simultaneously. These parameters include: (1) functional terms that describe the underlying error model, (2) the tidal terms, (3) the pressure loading term(s), (4) amplitudes of offsets, either those from earthquakes or from the instrument, (5) rate and changes in rate, and (6) the amplitudes and time constants of either logarithmic or exponential curves that can characterize postseismic deformation or diffusion of fluids near the strainmeter. With the proper error model, realistic estimates of the standard errors of the various parameters are obtained; this is especially critical in determining the statistical significance of a suspected, tectonic strain signal. The program also provides a method of tracking the various adjustments required to process strainmeter data. In addition, the program provides several plots to assist with identifying either tectonic signals or other signals that may need to be removed before any geophysical signal can be identified.
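Least squares with a data covariance describing temporal correlation can be sketched for the simplest case, AR(1) errors, where the covariance is whitened by quasi-differencing before ordinary least squares. This is a minimal illustration (rate plus intercept only), not the program's actual error model, which involves power-law noise and many more terms.

```python
import random

def gls_trend(y, phi):
    """GLS estimate of (intercept, slope) under AR(1) noise with parameter phi.
    Whitening z_t = y_t - phi*y_{t-1} turns AR(1) errors into white noise;
    the first row is scaled by sqrt(1 - phi^2) (Prais-Winsten transform)."""
    n = len(y)
    X = [[1.0, float(t)] for t in range(n)]
    c = (1.0 - phi * phi) ** 0.5
    zy = [c * y[0]] + [y[t] - phi * y[t - 1] for t in range(1, n)]
    zX = [[c * X[0][0], c * X[0][1]]] + [
        [X[t][j] - phi * X[t - 1][j] for j in range(2)] for t in range(1, n)]
    # ordinary least squares on the whitened system (2x2 normal equations)
    s00 = sum(r[0] * r[0] for r in zX)
    s01 = sum(r[0] * r[1] for r in zX)
    s11 = sum(r[1] * r[1] for r in zX)
    b0 = sum(r[0] * v for r, v in zip(zX, zy))
    b1 = sum(r[1] * v for r, v in zip(zX, zy))
    det = s00 * s11 - s01 * s01
    return (s11 * b0 - s01 * b1) / det, (s00 * b1 - s01 * b0) / det

# synthetic record: linear strain rate buried in strongly correlated noise
rng = random.Random(4)
phi, n = 0.8, 500
noise = [rng.gauss(0.0, 0.5)]
for _ in range(n - 1):
    noise.append(phi * noise[-1] + rng.gauss(0.0, 0.5))
y = [2.0 + 0.03 * t + noise[t] for t in range(n)]
intercept, slope = gls_trend(y, phi)
```

Ignoring the correlation would not bias the slope here, but it would badly understate its standard error, which is exactly the significance problem the abstract warns about.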

  17. Turbulent Statistics From Time-Resolved PIV Measurements of a Jet Using Empirical Mode Decomposition

    NASA Technical Reports Server (NTRS)

    Dahl, Milo D.

    2013-01-01

    Empirical mode decomposition is an adaptive signal processing method that when applied to a broadband signal, such as that generated by turbulence, acts as a set of band-pass filters. This process was applied to data from time-resolved, particle image velocimetry measurements of subsonic jets prior to computing the second-order, two-point, space-time correlations from which turbulent phase velocities and length and time scales could be determined. The application of this method to large sets of simultaneous time histories is new. In this initial study, the results are relevant to acoustic analogy source models for jet noise prediction. The high frequency portion of the results could provide the turbulent values for subgrid scale models for noise that is missed in large-eddy simulations. The results are also used to infer that the cross-correlations between different components of the decomposed signals at two points in space, neglected in this initial study, are important.
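The second-order, two-point, space-time correlation at the heart of the analysis can be sketched as follows: the lag of the cross-correlation peak between two probe locations, combined with their separation, yields the convection (phase) velocity. The signals below are synthetic stand-ins for decomposed velocity histories.

```python
import random

def space_time_correlation(u1, u2, max_lag):
    """Normalized two-point cross-correlation R12(lag) between velocity
    histories at two points; the peak lag gives the convection delay."""
    n = len(u1)
    m1, m2 = sum(u1) / n, sum(u2) / n
    a = [v - m1 for v in u1]
    b = [v - m2 for v in u2]
    s1 = sum(v * v for v in a) ** 0.5
    s2 = sum(v * v for v in b) ** 0.5
    corr = {}
    for lag in range(-max_lag, max_lag + 1):
        lo, hi = max(0, -lag), min(n, n - lag)
        corr[lag] = sum(a[t] * b[t + lag] for t in range(lo, hi)) / (s1 * s2)
    return corr

# synthetic 'velocity' histories: point 2 sees point 1's signal 5 samples later
rng = random.Random(0)
u1 = [0.0]
for _ in range(1999):
    u1.append(0.9 * u1[-1] + rng.gauss(0.0, 1.0))
u2 = [0.0] * 5 + u1[:-5]
corr = space_time_correlation(u1, u2, max_lag=20)
peak_lag = max(corr, key=corr.get)
```

In the paper this correlation is computed per EMD band, so that each band-passed component yields its own phase velocity and length/time scales.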

  18. Turbulent Statistics from Time-Resolved PIV Measurements of a Jet Using Empirical Mode Decomposition

    NASA Technical Reports Server (NTRS)

    Dahl, Milo D.

    2012-01-01

    Empirical mode decomposition is an adaptive signal processing method that when applied to a broadband signal, such as that generated by turbulence, acts as a set of band-pass filters. This process was applied to data from time-resolved, particle image velocimetry measurements of subsonic jets prior to computing the second-order, two-point, space-time correlations from which turbulent phase velocities and length and time scales could be determined. The application of this method to large sets of simultaneous time histories is new. In this initial study, the results are relevant to acoustic analogy source models for jet noise prediction. The high frequency portion of the results could provide the turbulent values for subgrid scale models for noise that is missed in large-eddy simulations. The results are also used to infer that the cross-correlations between different components of the decomposed signals at two points in space, neglected in this initial study, are important.

  19. Progress on automated data analysis algorithms for ultrasonic inspection of composites

    NASA Astrophysics Data System (ADS)

    Aldrin, John C.; Forsyth, David S.; Welter, John T.

    2015-03-01

    Progress is presented on the development and demonstration of automated data analysis (ADA) software to address the burden in interpreting ultrasonic inspection data for large composite structures. The automated data analysis algorithm is presented in detail, which follows standard procedures for analyzing signals for time-of-flight indications and backwall amplitude dropout. New algorithms have been implemented to reliably identify indications in time-of-flight images near the front and back walls of composite panels. Adaptive call criteria have also been applied to address sensitivity to variation in backwall signal level, panel thickness variation, and internal signal noise. ADA processing results are presented for a variety of test specimens that include inserted materials and discontinuities produced under poor manufacturing conditions. Software tools have been developed to support both ADA algorithm design and certification, producing a statistical evaluation of indication results and false calls using a matching process with predefined truth tables. Parametric studies were performed to evaluate detection and false call results with respect to varying algorithm settings.

  20. A robust nonlinear filter for image restoration.

    PubMed

    Koivunen, V

    1995-01-01

    A class of nonlinear regression filters based on robust estimation theory is introduced. The goal of the filtering is to recover a high-quality image from degraded observations. Models for desired image structures and contaminating processes are employed, but deviations from strict assumptions are allowed since the assumptions on signal and noise are typically only approximately true. The robustness of filters is usually addressed only in a distributional sense, i.e., the actual error distribution deviates from the nominal one. In this paper, the robustness is considered in a broad sense since the outliers may also be due to inappropriate signal model, or there may be more than one statistical population present in the processing window, causing biased estimates. Two filtering algorithms minimizing a least trimmed squares criterion are provided. The design of the filters is simple since no scale parameters or context-dependent threshold values are required. Experimental results using both real and simulated data are presented. The filters effectively attenuate both impulsive and nonimpulsive noise while recovering the signal structure and preserving interesting details.
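The least trimmed squares criterion can be illustrated with a location (constant-signal) version for a single processing window: choose the estimate minimizing the sum of the h smallest squared residuals, so that impulses or samples from a second statistical population are trimmed away. A minimal sketch (the paper's filters fit regression models, not just a constant):

```python
def lts_location(window, trim_fraction=0.5):
    """Least-trimmed-squares location estimate for a filter window: the value
    minimizing the sum of the h smallest squared residuals (h = trim_fraction * n)."""
    n = len(window)
    h = max(1, int(n * trim_fraction))
    xs = sorted(window)
    best_val, best_cost = xs[0], float('inf')
    # the optimal LTS subset of size h is contiguous in the sorted sample
    for i in range(n - h + 1):
        subset = xs[i:i + h]
        m = sum(subset) / h
        cost = sum((v - m) ** 2 for v in subset)
        if cost < best_cost:
            best_cost, best_val = cost, m
    return best_val

# window corrupted by two large impulses of opposite sign
window = [5.0, 5.1, 4.9, 5.0, 100.0, 5.05, -50.0]
estimate = lts_location(window)
```

Unlike the window mean, which the impulses pull far from 5, the LTS estimate stays near the underlying level, and no scale parameter or context-dependent threshold is required, matching the design simplicity claimed in the abstract.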

  1. Data-driven coarse graining in action: Modeling and prediction of complex systems

    NASA Astrophysics Data System (ADS)

    Krumscheid, S.; Pradas, M.; Pavliotis, G. A.; Kalliadasis, S.

    2015-10-01

    In many physical, technological, social, and economic applications, one is commonly faced with the task of estimating statistical properties, such as mean first passage times of a continuous temporal process, from empirical data (experimental observations). Typically, however, an accurate and reliable estimation of such properties directly from the data alone is not possible, as the time series is often too short or the particular phenomenon of interest is only rarely observed. We propose here a theoretical-computational framework which provides a systematic and rational estimation of statistical quantities of a given temporal process, such as waiting times between subsequent bursts of activity in intermittent signals. Our framework is illustrated with applications from real-world data sets, ranging from marine biology to paleoclimatic data.
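The empirical quantity whose statistics such a framework targets, waiting times between bursts, can be extracted directly from a series as a baseline (a minimal sketch of my own; the threshold and names are assumptions, and the paper's contribution is precisely to do better than this direct estimate on short series):

```python
import numpy as np

def waiting_times(series, threshold):
    """Return waiting times (in samples) between successive upward
    crossings of `threshold` -- the raw empirical estimate that is
    unreliable when the series is short or events are rare."""
    x = np.asarray(series, dtype=float)
    up = np.flatnonzero((x[:-1] < threshold) & (x[1:] >= threshold)) + 1
    return np.diff(up)
```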

  2. Experimental Investigations of Non-Stationary Properties In Radiometer Receivers Using Measurements of Multiple Calibration References

    NASA Technical Reports Server (NTRS)

    Racette, Paul; Lang, Roger; Zhang, Zhao-Nan; Zacharias, David; Krebs, Carolyn A. (Technical Monitor)

    2002-01-01

    Radiometers must be periodically calibrated because the receiver response fluctuates. Many techniques exist to correct for the time varying response of a radiometer receiver. An analytical technique has been developed that uses generalized least squares regression (LSR) to predict the performance of a wide variety of calibration algorithms. The total measurement uncertainty including the uncertainty of the calibration can be computed using LSR. The uncertainties of the calibration samples used in the regression are based upon treating the receiver fluctuations as non-stationary processes. Signals originating from the different sources of emission are treated as simultaneously existing random processes. Thus, the radiometer output is a series of samples obtained from these random processes. The samples are treated as random variables but because the underlying processes are non-stationary the statistics of the samples are treated as non-stationary. The statistics of the calibration samples depend upon the time for which the samples are to be applied. The statistics of the random variables are equated to the mean statistics of the non-stationary processes over the interval defined by the time of calibration sample and when it is applied. This analysis opens the opportunity for experimental investigation into the underlying properties of receiver non stationarity through the use of multiple calibration references. In this presentation we will discuss the application of LSR to the analysis of various calibration algorithms, requirements for experimental verification of the theory, and preliminary results from analyzing experiment measurements.
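In miniature, regression-based calibration fits a receiver response from reference measurements and inverts it for the scene (my own simplified example: a single gain/offset fit by ordinary least squares, ignoring the non-stationary weighting of calibration samples that the abstract describes):

```python
import numpy as np

def calibrate(counts_refs, temps_refs, counts_scene):
    """Fit gain and offset from calibration-reference measurements by
    least squares, then invert scene counts to brightness temperature."""
    A = np.column_stack([temps_refs, np.ones(len(temps_refs))])
    (gain, offset), *_ = np.linalg.lstsq(A, counts_refs, rcond=None)
    return (np.asarray(counts_scene, dtype=float) - offset) / gain
```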

  3. Ionospheric scintillation studies

    NASA Technical Reports Server (NTRS)

    Rino, C. L.; Freemouw, E. J.

    1973-01-01

    The diffracted field of a monochromatic plane wave was characterized by two complex correlation functions. For a Gaussian complex field, these quantities suffice to completely define the statistics of the field. Thus, one can in principle calculate the statistics of any measurable quantity in terms of the model parameters. The best data fits were achieved for intensity statistics derived under the Gaussian statistics hypothesis. The signal structure that achieved the best fit was nearly invariant with scintillation level and irregularity source (ionosphere or solar wind). It was characterized by the fact that more than 80% of the scattered signal power is in phase quadrature with the undeviated or coherent signal component. Thus, the Gaussian-statistics hypothesis is both convenient and accurate for channel modeling work.

  4. Statistical significance estimation of a signal within the GooFit framework on GPUs

    NASA Astrophysics Data System (ADS)

    Cristella, Leonardo; Di Florio, Adriano; Pompili, Alexis

    2017-03-01

    In order to test the computing capabilities of GPUs with respect to traditional CPU cores a high-statistics toy Monte Carlo technique has been implemented both in ROOT/RooFit and GooFit frameworks with the purpose to estimate the statistical significance of the structure observed by CMS close to the kinematical boundary of the J/ψϕ invariant mass in the three-body decay B+ → J/ψϕK+. GooFit is a data analysis open tool under development that interfaces ROOT/RooFit to CUDA platform on nVidia GPU. The optimized GooFit application running on GPUs hosted by servers in the Bari Tier2 provides striking speed-up performances with respect to the RooFit application parallelised on multiple CPUs by means of PROOF-Lite tool. The considerable resulting speed-up, evident when comparing concurrent GooFit processes allowed by CUDA Multi Process Service and a RooFit/PROOF-Lite process with multiple CPU workers, is presented and discussed in detail. By means of GooFit it has also been possible to explore the behaviour of a likelihood ratio test statistic in different situations in which the Wilks Theorem may or may not apply because its regularity conditions are not satisfied.

  5. Performance studies of GooFit on GPUs vs RooFit on CPUs while estimating the statistical significance of a new physical signal

    NASA Astrophysics Data System (ADS)

    Di Florio, Adriano

    2017-10-01

    In order to test the computing capabilities of GPUs with respect to traditional CPU cores a high-statistics toy Monte Carlo technique has been implemented both in ROOT/RooFit and GooFit frameworks with the purpose to estimate the statistical significance of the structure observed by CMS close to the kinematical boundary of the J/ψϕ invariant mass in the three-body decay B+ → J/ψϕK+. GooFit is a data analysis open tool under development that interfaces ROOT/RooFit to CUDA platform on nVidia GPU. The optimized GooFit application running on GPUs hosted by servers in the Bari Tier2 provides striking speed-up performances with respect to the RooFit application parallelised on multiple CPUs by means of PROOF-Lite tool. The considerable resulting speed-up, evident when comparing concurrent GooFit processes allowed by CUDA Multi Process Service and a RooFit/PROOF-Lite process with multiple CPU workers, is presented and discussed in detail. By means of GooFit it has also been possible to explore the behaviour of a likelihood ratio test statistic in different situations in which the Wilks Theorem may or may not apply because its regularity conditions are not satisfied.

  6. EARLINET Single Calculus Chain - technical - Part 1: Pre-processing of raw lidar data

    NASA Astrophysics Data System (ADS)

    D'Amico, Giuseppe; Amodeo, Aldo; Mattis, Ina; Freudenthaler, Volker; Pappalardo, Gelsomina

    2016-02-01

    In this paper we describe an automatic tool for the pre-processing of aerosol lidar data called ELPP (EARLINET Lidar Pre-Processor). It is one of two calculus modules of the EARLINET Single Calculus Chain (SCC), the automatic tool for the analysis of EARLINET data. ELPP is an open source module that executes instrumental corrections and data handling of the raw lidar signals, making the lidar data ready to be processed by the optical retrieval algorithms. According to the specific lidar configuration, ELPP automatically performs dead-time correction, atmospheric and electronic background subtraction, gluing of lidar signals, and trigger-delay correction. Moreover, the signal-to-noise ratio of the pre-processed signals can be improved by means of configurable time integration of the raw signals and/or spatial smoothing. ELPP delivers the statistical uncertainties of the final products by means of error propagation or Monte Carlo simulations. During the development of ELPP, particular attention has been paid to make the tool flexible enough to handle all lidar configurations currently used within the EARLINET community. Moreover, it has been designed in a modular way to allow an easy extension to lidar configurations not yet implemented. The primary goal of ELPP is to enable the application of quality-assured procedures in the lidar data analysis starting from the raw lidar data. This provides the added value of full traceability of each delivered lidar product. Several tests have been performed to check the proper functioning of ELPP. The whole SCC has been tested with the same synthetic data sets, which were used for the EARLINET algorithm inter-comparison exercise. ELPP has been successfully employed for the automatic near-real-time pre-processing of the raw lidar data measured during several EARLINET inter-comparison campaigns as well as during intense field campaigns.
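Of the corrections listed, dead-time correction has a particularly compact standard form. ELPP's actual implementation depends on the detector model; the sketch below uses the common non-paralysable detector assumption (an illustration of the correction type, not ELPP code):

```python
def deadtime_correct(measured_rate, tau):
    """Non-paralysable dead-time correction: recover the true photon
    count rate n from the measured rate m via n = m / (1 - m * tau),
    where tau is the detector dead time (same time units as 1/rate)."""
    return measured_rate / (1.0 - measured_rate * tau)
```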

  7. Single- and multiple-pulse noncoherent detection statistics associated with partially developed speckle.

    PubMed

    Osche, G R

    2000-08-20

    Single- and multiple-pulse detection statistics are presented for aperture-averaged direct detection optical receivers operating against partially developed speckle fields. A partially developed speckle field arises when the probability density function of the received intensity does not follow negative exponential statistics. The case of interest here is the target surface that exhibits diffuse as well as specular components in the scattered radiation. An approximate expression is derived for the integrated intensity at the aperture, which leads to single- and multiple-pulse discrete probability density functions for the case of a Poisson signal in Poisson noise with an additive coherent component. In the absence of noise, the single-pulse discrete density function is shown to reduce to a generalized negative binomial distribution. The radar concept of integration loss is discussed in the context of direct detection optical systems where it is shown that, given an appropriate set of system parameters, multiple-pulse processing can be more efficient than single-pulse processing over a finite range of the integration parameter n.

  8. Tables of square-law signal detection statistics for Hann spectra with 50 percent overlap

    NASA Technical Reports Server (NTRS)

    Deans, Stanley R.; Cullers, D. Kent

    1991-01-01

    The Search for Extraterrestrial Intelligence, currently being planned by NASA, will require that an enormous amount of data be analyzed in real time by special purpose hardware. It is expected that overlapped Hann data windows will play an important role in this analysis. In order to understand the statistical implication of this approach, it has been necessary to compute detection statistics for overlapped Hann spectra. Tables of signal detection statistics are given for false alarm rates from 10^-14 to 10^-1 and signal detection probabilities from 0.50 to 0.99; the number of computed spectra ranges from 4 to 2000.
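The tabulated statistics account for the correlation introduced by 50%-overlapped Hann windows; the much simpler independent-bin case shows what the tables generalize (a sketch under the assumption of exponentially distributed square-law noise power in independent bins, which the overlap in the actual tables violates):

```python
import math

def bin_threshold(pfa_bin):
    """Threshold, in units of mean noise power, that a single
    square-law output bin exceeds with probability pfa_bin, assuming
    exponentially distributed noise power in that bin."""
    return -math.log(pfa_bin)

def per_bin_pfa(pfa_total, n_bins):
    """Per-bin false-alarm rate giving an overall false-alarm rate
    pfa_total across n_bins statistically independent bins."""
    return 1.0 - (1.0 - pfa_total) ** (1.0 / n_bins)
```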

  9. Fractal properties of background noise and target signal enhancement using CSEM data

    NASA Astrophysics Data System (ADS)

    Benavides, Alfonso; Everett, Mark E.; Pierce, Carl; Nguyen, Cam

    2003-09-01

    Controlled-source electromagnetic (CSEM) spatial profiles and 2-D conductivity maps were obtained on the Brazos Valley, TX floodplain to study the fractal statistics of geological signals and effects of man-made conductive targets using Geonics EM34, EM31 and EM63. Using target-free areas, a consistent power-law power spectrum (|A(k)| ~ k^-β) for the profiles was found with β values typical of fractional Brownian motion (fBm). This means that the spatial variation of conductivity does not correspond to Gaussian statistics; rather, there are spatial correlations at different scales. The presence of targets tends to flatten the power-law power spectrum (PS) at small wavenumbers. Detection and localization of targets can be achieved using short-time Fourier transform (STFT). The presence of targets is enhanced because the signal energy is spread to higher wavenumbers (small scale numbers) in the positions occupied by the targets. In the case of poor spatial sampling or a small amount of data, the information available from the power spectrum is not enough to separate spatial correlations from target signatures. Advantages are gained by using the spatial correlations of the fBm in order to reject the background response, and to enhance the signals from highly conductive targets. This approach was tested for the EM31 using a pre-processing step that combines apparent conductivity readings from two perpendicular transmitter-receiver orientations at each station. The response obtained using time-domain CSEM is influenced to a lesser degree by geological noise and the target response can be processed to recover target features. The homotopy method is proposed to solve the inverse problem using a set of possible target models and a dynamic library of responses used to optimize the starting model.
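Estimating the power-law exponent β from a profile reduces to a log-log regression on the amplitude spectrum (a minimal sketch of my own, not the authors' processing chain; it skips the zero wavenumber and assumes a clean single power law):

```python
import numpy as np

def spectral_slope(profile):
    """Estimate beta in |A(k)| ~ k**-beta by a log-log least-squares
    fit to the one-sided amplitude spectrum (zero wavenumber skipped)."""
    x = np.asarray(profile, dtype=float)
    amp = np.abs(np.fft.rfft(x - x.mean()))
    k = np.arange(1, len(amp))
    slope, _ = np.polyfit(np.log(k), np.log(amp[1:] + 1e-30), 1)
    return -slope
```

A flattening of this fit at small wavenumbers, as the abstract notes, is itself a target indicator.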

  10. On comprehensive recovery of an aftershock sequence with cross correlation

    NASA Astrophysics Data System (ADS)

    Kitov, I.; Bobrov, D.; Coyne, J.; Turyomurugyendo, G.

    2012-04-01

    We have introduced cross correlation between seismic waveforms as a technique for signal detection and automatic event building at the International Data Centre (IDC) of the Comprehensive Nuclear-Test-Ban Treaty Organization. The intuition behind signal detection is simple: small and mid-sized seismic events close in space should produce similar signals at the same seismic stations. Equivalently, these signals have to be characterized by a high cross correlation coefficient. For array stations with many individual sensors distributed over a large area, signals from events at distances beyond, say, 50 km, are subject to destructive interference when cross correlated due to changing time delays between various channels. Thus, any cross correlation coefficient above some predefined threshold can be considered as a signature of a valid signal. With a dense grid of master events (spacing between adjacent masters between 20 km and 50 km corresponds to the statistically estimated correlation distance) with high quality (signal-to-noise ratio above 10) template waveforms at primary array stations of the International Monitoring System one can detect signals from and then build natural and manmade seismic events close to the master ones. The use of cross correlation allows detecting smaller signals (sometimes below noise level) than provided by the current IDC detecting techniques. As a result it is possible to automatically build from 50% to 100% more valid seismic events than included in the Reviewed Event Bulletin (REB). We have developed a tentative pipeline for automatic processing at the IDC. It includes three major stages. Firstly, we calculate the cross-correlation coefficient between a given master and continuous waveforms at the same stations and carry out signal detection based on the statistical behavior of the signal-to-noise ratio of the cross-correlation coefficient. 
    Secondly, a thorough screening is performed for all obtained signals using f-k analysis and F-statistics as applied to the cross-correlation traces at individual channels of all included array stations. Thirdly, local (i.e. confined to the correlation distance around the master event) association of origin times of all qualified signals is fulfilled. These origin times are calculated from the arrival times of these signals, which are reduced to the origin times by the travel times from the master event. An aftershock sequence of a mid-size earthquake is an ideal case to test cross correlation techniques for automatic event building. All events should be close to the mainshock and occur within several days. Here we analyse the aftershock sequence of an earthquake in the North Atlantic Ocean with mb(IDC)=4.79. The REB includes 38 events at distances less than 150 km from the mainshock. Our ultimate goal is to exercise the complete iterative procedure to find all possible aftershocks. We start with the mainshock and recover ten aftershocks with the largest number of stations to produce an initial set of master events with the highest quality templates. Then we find all aftershocks in the REB and many additional events, which were not originally found by the IDC. Using all events found after the first iteration as master events we find new events, which are also used in the next iteration. The iterative process stops when no new events can be found. In that sense the final set of aftershocks obtained with cross correlation is a comprehensive one.
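The first stage, sliding a master-event template along continuous data and flagging high correlation, can be sketched as follows (a toy single-channel illustration, not the IDC pipeline; the threshold value and names are assumptions):

```python
import numpy as np

def cc_detect(trace, template, threshold=0.8):
    """Slide a master-event template along a continuous trace and
    return sample offsets where the normalised cross-correlation
    coefficient meets or exceeds the threshold."""
    t = np.asarray(template, dtype=float) - np.mean(template)
    x = np.asarray(trace, dtype=float)
    n = len(t)
    hits = []
    for i in range(len(x) - n + 1):
        w = x[i:i + n] - x[i:i + n].mean()
        denom = np.linalg.norm(w) * np.linalg.norm(t)
        if denom > 0 and np.dot(w, t) / denom >= threshold:
            hits.append(i)
    return hits
```

Because the coefficient is amplitude-normalised, a matching waveform is detectable even when its absolute amplitude sits below the noise floor of conventional detectors, which is the gain the abstract reports.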

  11. A new method to detect transitory signatures and local time/space variability structures in the climate system: the scale-dependent correlation analysis

    NASA Astrophysics Data System (ADS)

    Rodó, Xavier; Rodríguez-Arias, Miquel-Àngel

    2006-10-01

    The study of transitory signals and local variability structures in time and/or space, and of their role as sources of climatic memory, is an often neglected topic in climate research, despite its obvious importance and extensive coverage in the literature. Transitory signals arise from non-linearities in the climate system, transitory atmosphere-ocean couplings, and other processes that evolve after a critical threshold is crossed. These temporary interactions, though intense, may not last long; they can be responsible for a large amount of unexplained variability, yet they are normally considered of limited relevance and are often discarded. With most of the current techniques at hand, this typology of signatures is difficult to isolate because of the low signal-to-noise ratio in midlatitudes and the limited recurrence of the transitory signals within a customary interval of data. Also, statistical techniques that consider the full length of the available data, rather than the size of the specific variability structure under investigation, often smooth away local or transitory processes. Scale-dependent correlation (SDC) analysis is a new statistical method capable of highlighting the presence of transitory processes, understood here as temporary significant lag-dependent autocovariance in a single series, or covariance structures between two series. This approach therefore complements other approaches such as those resulting from the families of wavelet analysis, singular-spectrum analysis and recurrence plots. Main features of SDC are its high performance for short time series and its ability to characterize phase relationships and thresholds in the bivariate domain. Ultimately, SDC helps track short-lagged relationships among processes that locally or temporarily couple and uncouple. 
The use of SDC is illustrated in the present paper by means of some synthetic time-series examples of increasing complexity, and it is compared with wavelet analysis in order to provide a well-known reference of its capabilities. A comparison between SDC and companion techniques is also addressed and results are exemplified for the specific case of some relevant El Niño-Southern Oscillation teleconnections.
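The core of the idea, correlating short fragments rather than whole series, can be caricatured in a few lines (my own hypothetical mini-implementation; the published SDC method additionally assesses significance of each fragment correlation by randomization, which is omitted here):

```python
import numpy as np

def sdc(x, y, s, lag=0):
    """Correlation between all aligned fragments of size s of two
    series, with y shifted by `lag`. Local maxima in the returned
    sequence flag transitory couplings that a single whole-series
    correlation would smooth away."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    out = []
    for i in range(len(x) - s + 1):
        j = i + lag
        if 0 <= j and j + s <= len(y):
            out.append(np.corrcoef(x[i:i + s], y[j:j + s])[0, 1])
    return out
```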

  12. How does signal fade on photo-stimulable storage phosphor imaging plates when scanned with a delay and what is the effect on image quality?

    PubMed

    Ang, Dan B; Angelopoulos, Christos; Katz, Jerald O

    2006-11-01

    The goals of this in vitro study were to determine the effect of signal fading of DenOptix photo-stimulable storage phosphor imaging plates scanned with a delay and to determine the effect on the diagnostic quality of the image. In addition, we sought to correlate signal fading with image spatial resolution and average pixel intensity values. Forty-eight images were obtained of a test specimen apparatus and scanned at six time intervals: immediately and after delays of 1, 8, 24, 72, and 168 hours. Six general dentists using Vixwin2000 software performed a measuring task to determine the location of an endodontic file tip and root apex. One-way ANOVA with repeated measures was used to determine the effect of signal fading (delayed scan time) on diagnostic image quality and average pixel intensity value. There was no statistically significant difference in diagnostic image quality resulting from signal fading. No difference was observed in spatial resolution of the images. There was a statistically significant difference in the pixel intensity analysis of an 8-step aluminum wedge between immediate scanning and 24-hour delayed scan time. There was an effect of delayed scanning on the average pixel intensity value. However, there was no effect on image quality and raters' ability to perform a clinical identification task. Proprietary software of the DenOptix digital imaging system demonstrates an excellent ability to process a delayed scan time signal and create an image of diagnostic quality.

  13. Selection vector filter framework

    NASA Astrophysics Data System (ADS)

    Lukac, Rastislav; Plataniotis, Konstantinos N.; Smolka, Bogdan; Venetsanopoulos, Anastasios N.

    2003-10-01

    We provide a unified framework of nonlinear vector techniques outputting the lowest ranked vector. The proposed framework constitutes a generalized filter class for multichannel signal processing. The new class of nonlinear selection filters is based on robust order-statistic theory and the minimization of a weighted distance function to the other input samples. The proposed method can be designed to perform a variety of filtering operations including previously developed filtering techniques such as vector median, basic vector directional filter, directional distance filter, weighted vector median filters and weighted directional filters. A wide range of filtering operations is guaranteed by the filter structure with two independent weight vectors for angular and distance domains of the vector space. In order to adapt the filter parameters to varying signal and noise statistics, we also provide generalized optimization algorithms that take advantage of weighted median filters and of the relationship between the standard median filter and the vector median filter. Thus, we can deal with both statistical and deterministic aspects of the filter design process. It will be shown that the proposed method has the required properties, such as the capability of modelling the underlying system in the application at hand, robustness with respect to errors in the model of the underlying system, the availability of a training procedure and, finally, the simplicity of filter representation, analysis, design and implementation. Simulation studies also indicate that the new filters are computationally attractive and have excellent performance in environments corrupted by bit errors and impulsive noise.
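The simplest instance of a lowest-ranked-vector selection filter is the vector median, which the framework generalizes (a minimal unweighted sketch; the paper's class adds the angular domain and the two weight vectors):

```python
import numpy as np

def vector_median(window):
    """Vector median: return the input vector that minimises the sum
    of Euclidean distances to all other vectors in the window. The
    output is always one of the inputs (a selection filter)."""
    v = np.asarray(window, dtype=float)
    # Pairwise distance matrix, summed per candidate vector.
    d = np.linalg.norm(v[:, None, :] - v[None, :, :], axis=2).sum(axis=1)
    return v[np.argmin(d)]
```

Because the output is constrained to be one of the input samples, impulsive outliers cannot leak into the result, which is the source of the robustness to bit errors noted in the abstract.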

  14. Multipath induced errors in meteorological Doppler/interferometer location systems

    NASA Technical Reports Server (NTRS)

    Wallace, R. G.

    1984-01-01

    One application of an RF interferometer aboard a low-orbiting spacecraft to determine the location of ground-based transmitters is in tracking high-altitude balloons for meteorological studies. A source of error in this application is reflection of the signal from the sea surface. Through propagation and signal analysis, the magnitude of the reflection-induced error in both Doppler frequency measurements and interferometer phase measurements was estimated. The theory of diffuse scattering from random surfaces was applied to obtain the power spectral density of the reflected signal. The processing of the combined direct and reflected signals was then analyzed to find the statistics of the measurement error. It was found that the error varies greatly during the satellite overpass and attains its maximum value at closest approach. The maximum values of interferometer phase error and Doppler frequency error found for the system configuration considered were comparable to thermal noise-induced error.

  15. Signal and noise modeling in confocal laser scanning fluorescence microscopy.

    PubMed

    Herberich, Gerlind; Windoffer, Reinhard; Leube, Rudolf E; Aach, Til

    2012-01-01

    Fluorescence confocal laser scanning microscopy (CLSM) has revolutionized imaging of subcellular structures in biomedical research by enabling the acquisition of 3D time-series of fluorescently-tagged proteins in living cells, hence forming the basis for an automated quantification of their morphological and dynamic characteristics. Due to the inherently weak fluorescence, CLSM images exhibit a low SNR. We present a novel model for the transfer of signal and noise in CLSM that is both theoretically sound as well as corroborated by a rigorous analysis of the pixel intensity statistics via measurement of the 3D noise power spectra, signal-dependence and distribution. Our model provides a better fit to the data than previously proposed models. Further, it forms the basis for (i) the simulation of the CLSM imaging process indispensable for the quantitative evaluation of CLSM image analysis algorithms, (ii) the application of Poisson denoising algorithms and (iii) the reconstruction of the fluorescence signal.
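Signal-dependent CLSM noise of the kind this model captures is commonly simulated with a Poisson-Gaussian scheme (a generic sketch for illustration; the paper's fitted model and parameters differ, and all names here are assumptions):

```python
import numpy as np

def simulate_clsm(fluorescence, gain=1.0, offset=0.0, read_sigma=1.0, seed=0):
    """Simulate signal-dependent microscope noise: Poisson photon
    noise on the fluorescence signal, scaled by detector gain, plus
    an offset and additive Gaussian readout noise."""
    rng = np.random.default_rng(seed)
    photons = rng.poisson(np.asarray(fluorescence, dtype=float))
    read_noise = rng.normal(0.0, read_sigma, photons.shape)
    return gain * photons + offset + read_noise
```

Such a simulator is exactly the kind of tool the abstract lists as use (i): generating ground-truth data for the quantitative evaluation of CLSM image analysis algorithms.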

  16. Predictive Coding: A Fresh View of Inhibition in the Retina

    NASA Astrophysics Data System (ADS)

    Srinivasan, M. V.; Laughlin, S. B.; Dubs, A.

    1982-11-01

    Interneurons exhibiting centre-surround antagonism within their receptive fields are commonly found in peripheral visual pathways. We propose that this organization enables the visual system to encode spatial detail in a manner that minimizes the deleterious effects of intrinsic noise, by exploiting the spatial correlation that exists within natural scenes. The antagonistic surround takes a weighted mean of the signals in neighbouring receptors to generate a statistical prediction of the signal at the centre. The predicted value is subtracted from the actual centre signal, thus minimizing the range of outputs transmitted by the centre. In this way the entire dynamic range of the interneuron can be devoted to encoding a small range of intensities, thus rendering fine detail detectable against intrinsic noise injected at later stages in processing. This predictive encoding scheme also reduces spatial redundancy, thereby enabling the array of interneurons to transmit a larger number of distinguishable images, taking into account the expected structure of the visual world. The profile of the required inhibitory field is derived from statistical estimation theory. This profile depends strongly upon the signal-to-noise ratio and weakly upon the extent of lateral spatial correlation. The receptive fields that are quantitatively predicted by the theory resemble those of X-type retinal ganglion cells and show that the inhibitory surround should become weaker and more diffuse at low intensities. The latter property is unequivocally demonstrated in the first-order interneurons of the fly's compound eye. The theory is extended to the time domain to account for the phasic responses of fly interneurons. These comparisons suggest that, in the early stages of processing, the visual system is concerned primarily with coding the visual image to protect against subsequent intrinsic noise, rather than with reconstructing the scene or extracting specific features from it. 
The treatment emphasizes that a neuron's dynamic range should be matched to both its receptive field and the statistical properties of the visual pattern expected within this field. Finally, the analysis is synthetic because it is an extension of the background suppression hypothesis (Barlow & Levick 1976), satisfies the redundancy reduction hypothesis (Barlow 1961 a, b) and is equivalent to deblurring under certain conditions (Ratliff 1965).
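The predict-and-subtract operation described above has a very small computational core (a 1-D two-neighbour caricature of my own; the paper derives the surround weights from estimation theory rather than fixing them, and edge samples are skipped here):

```python
import numpy as np

def predictive_code(receptors, weights):
    """Subtract from each centre signal the weighted mean of its two
    neighbours (the surround's statistical prediction), so the output
    carries only the unpredicted component of the scene."""
    x = np.asarray(receptors, dtype=float)
    w = np.asarray(weights, dtype=float) / np.sum(weights)
    out = np.empty(len(x) - 2)
    for i in range(1, len(x) - 1):
        prediction = w[0] * x[i - 1] + w[1] * x[i + 1]
        out[i - 1] = x[i] - prediction
    return out
```

On a spatially correlated (here, constant) input the output collapses to near zero, which is how the scheme frees the interneuron's dynamic range for fine detail.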

  17. Early-warning signals for catastrophic soil degradation

    NASA Astrophysics Data System (ADS)

    Karssenberg, Derek

    2010-05-01

    Many earth systems have critical thresholds at which the system shifts abruptly from one state to another. Such critical transitions have been described, among others, for climate, vegetation, animal populations, and geomorphology. Predicting the timing of critical transitions before they are reached is of importance because of the large impact on nature and society associated with the transition. However, it is notably difficult to predict the timing of a transition. This is because the state variables of the system show little change before the threshold is reached. As a result, the precision of field observations is often too low to provide predictions of the timing of a transition. A possible solution is the use of spatio-temporal patterns in state variables as leading indicators of a transition. It is becoming clear that the critical slowing down of a system causes spatio-temporal autocorrelation and variance to increase before the transition. Thus, spatio-temporal patterns are important candidates for early-warning signals. In this research we will show that these early-warning signals also exist in geomorphological systems. We consider a modelled vegetation-soil system under a gradually increasing grazing pressure causing an abrupt shift towards extensive soil degradation. It is shown that changes in spatio-temporal patterns occur well ahead of this catastrophic transition. A distributed model describing the coupled processes of vegetation growth and geomorphological denudation is adapted. The model uses well-studied simple process representations for vegetation and geomorphology. A logistic growth model calculates vegetation cover as a function of grazing pressure and vegetation growth rate. Evolution of the soil thickness is modelled by soil creep and wash processes, as a function of net rain reaching the surface. 
    The vegetation and soil system are coupled by 1) decreasing vegetation growth with decreasing soil thickness and 2) increasing soil wash with decreasing vegetation cover. The model describes a critical, catastrophic transition of an underexploited system with low grazing pressure towards an overexploited system. The underexploited state has high vegetation cover and well developed soils, while the overexploited state has low vegetation cover and largely degraded soils. We first show why spatio-temporal patterns in vegetation cover, morphology, erosion rate, and sediment load should be expected to change well before the critical transition towards the overexploited state. Subsequently, spatio-temporal patterns are quantified by calculating statistics, in particular first-order statistics and autocorrelation in space and time. It is shown that these statistics gradually change before the transition is reached. This indicates that the statistics may serve as early-warning signals in real-world applications. We also discuss the potential use of remote sensing to predict the critical transition in real-world landscapes.
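The temporal indicators mentioned, variance and autocorrelation rising before the transition, can be computed from a state-variable series as rolling statistics (a minimal sketch; window length and names are my assumptions, and the study additionally uses spatial statistics):

```python
import numpy as np

def lag1_autocorr(x):
    """Lag-1 autocorrelation of a 1-D sample (mean removed)."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    return np.dot(x[:-1], x[1:]) / np.dot(x, x)

def rolling_indicators(series, win):
    """Rolling variance and lag-1 autocorrelation; a sustained rise in
    both is the early-warning signature of critical slowing down."""
    x = np.asarray(series, dtype=float)
    var, ac = [], []
    for i in range(len(x) - win + 1):
        w = x[i:i + win]
        var.append(np.var(w))
        ac.append(lag1_autocorr(w))
    return var, ac
```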

  18. Significance of noisy signals in periodograms

    NASA Astrophysics Data System (ADS)

    Süveges, Maria

    2015-08-01

    The detection of tiny periodic signals in noisy and irregularly sampled time series is a challenging task. Once a small peak is found in the periodogram, the next step is to see how probable it is that pure noise produced a peak so extreme - that is to say, compute its False Alarm Probability (FAP). This useful measure quantifies the statistical plausibility of the found signal among the noise. However, its derivation from statistical principles is very hard due to the specificities of astronomical periodograms, such as oversampling and the ensuing strong correlation among its values at different frequencies. I will present a method to compute the FAP based on extreme-value statistics (Süveges 2014), and compare it with the methods proposed by Baluev (2008), Paltani (2004), and Schwarzenberg-Czerny (2012) on signals with various signal shapes and at different signal-to-noise ratios.
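The quantity all of these methods approximate analytically can be defined by brute force (a Monte Carlo sketch of my own for regularly sampled white noise; the normalisation and names are assumptions, and the cited methods exist precisely because this approach is too slow, and too simple, for real astronomical sampling):

```python
import numpy as np

def mc_fap(peak, n_samples, n_freqs, n_trials=2000, seed=0):
    """Monte Carlo false-alarm probability: fraction of pure
    white-noise realisations whose highest normalised periodogram
    value is at least as extreme as the observed peak."""
    rng = np.random.default_rng(seed)
    count = 0
    for _ in range(n_trials):
        x = rng.standard_normal(n_samples)
        p = np.abs(np.fft.rfft(x - x.mean()))[1:n_freqs + 1] ** 2
        if p.max() / x.var() / n_samples >= peak:
            count += 1
    return count / n_trials
```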

  19. An Application of Reassigned Time-Frequency Representations for Seismic Noise/Signal Decomposition

    NASA Astrophysics Data System (ADS)

    Mousavi, S. M.; Langston, C. A.

    2016-12-01

    Seismic data recorded by surface arrays are often strongly contaminated by unwanted noise. This background noise makes the detection of small-magnitude events difficult. An automatic method for seismic noise/signal decomposition is presented based upon an enhanced time-frequency representation. Synchrosqueezing is a time-frequency reassignment method aimed at sharpening the time-frequency picture. Noise can be distinguished from the signal and suppressed more easily in this reassigned domain. The threshold level is estimated using a general cross-validation approach that does not rely on any prior knowledge about the noise level. The efficiency of thresholding has been improved by adding a pre-processing step based on higher-order statistics and a post-processing step based on adaptive hard thresholding. In doing so, both the accuracy and the speed of the denoising have been improved compared to our previous algorithms (Mousavi and Langston, 2016a, 2016b; Mousavi et al., 2016). The proposed algorithm can either suppress the noise (white or colored) and keep the signal, or suppress the signal and keep the noise. Hence, it can be used both in normal denoising applications and in ambient noise studies. Application of the proposed method to synthetic and real seismic data shows its effectiveness for denoising/designaling of local microseismic and ocean-bottom seismic data. References: Mousavi, S.M., C. A. Langston, and S. P. Horton (2016), Automatic Microseismic Denoising and Onset Detection Using the Synchrosqueezed-Continuous Wavelet Transform, Geophysics, 81, V341-V355, doi: 10.1190/GEO2015-0598.1. Mousavi, S.M., and C. A. Langston (2016a), Hybrid Seismic Denoising Using Higher-Order Statistics and Improved Wavelet Block Thresholding, Bull. Seismol. Soc. Am., 106, doi: 10.1785/0120150345. Mousavi, S.M., and C.A. Langston (2016b), Adaptive noise estimation and suppression for improving microseismic event detection, Journal of Applied Geophysics, doi: http://dx.doi.org/10.1016/j.jappgeo.2016.06.008.
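    The GCV-based synchrosqueezing threshold of the paper is more involved, but the core idea of estimating a noise level in a time-frequency domain and hard-thresholding the coefficients can be sketched with an ordinary STFT (the MAD-based threshold and all signal parameters here are illustrative stand-ins, not the authors' algorithm):

```python
import numpy as np
from scipy.signal import stft, istft

rng = np.random.default_rng(1)
fs = 500.0
t = np.arange(0, 4.0, 1 / fs)
# transient "event": a 25 Hz wave packet, buried in white noise
clean = np.sin(2 * np.pi * 25 * t) * np.exp(-((t - 2.0) ** 2) / 0.1)
noisy = clean + 0.4 * rng.normal(size=t.size)

f, frames, Z = stft(noisy, fs=fs, nperseg=128)

# rough per-frequency noise level via the median absolute coefficient (MAD);
# 0.6745 calibrates the median to a Gaussian standard deviation
sigma = np.median(np.abs(Z), axis=1, keepdims=True) / 0.6745
Z_hard = np.where(np.abs(Z) >= 3.0 * sigma, Z, 0.0)   # hard thresholding

_, denoised = istft(Z_hard, fs=fs, nperseg=128)
denoised = denoised[: t.size]
```

    Swapping the inverse for a "keep only below-threshold coefficients" rule yields the complementary noise-extraction mode mentioned in the abstract.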

  20. Dynamic Encoding of Speech Sequence Probability in Human Temporal Cortex

    PubMed Central

    Leonard, Matthew K.; Bouchard, Kristofer E.; Tang, Claire

    2015-01-01

    Sensory processing involves identification of stimulus features, but also integration with the surrounding sensory and cognitive context. Previous work in animals and humans has shown fine-scale sensitivity to context in the form of learned knowledge about the statistics of the sensory environment, including relative probabilities of discrete units in a stream of sequential auditory input. These statistics are a defining characteristic of one of the most important sequential signals humans encounter: speech. For speech, extensive exposure to a language tunes listeners to the statistics of sound sequences. To address how speech sequence statistics are neurally encoded, we used high-resolution direct cortical recordings from human lateral superior temporal cortex as subjects listened to words and nonwords with varying transition probabilities between sound segments. In addition to their sensitivity to acoustic features (including contextual features, such as coarticulation), we found that neural responses dynamically encoded the language-level probability of both preceding and upcoming speech sounds. Transition probability first negatively modulated neural responses, followed by positive modulation of neural responses, consistent with coordinated predictive and retrospective recognition processes, respectively. Furthermore, transition probability encoding was different for real English words compared with nonwords, providing evidence for online interactions with high-order linguistic knowledge. These results demonstrate that sensory processing of deeply learned stimuli involves integrating physical stimulus features with their contextual sequential structure. Despite not being consciously aware of phoneme sequence statistics, listeners use this information to process spoken input and to link low-level acoustic representations with linguistic information about word identity and meaning. PMID:25948269
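    The language-level statistic at issue, the transition probability between successive sound segments, is simply a conditional bigram frequency; a toy sketch (the phoneme corpus below is invented purely for illustration):

```python
from collections import Counter

# toy corpus of phoneme sequences (hypothetical inventory)
words = [("s", "t", "a"), ("s", "t", "o"), ("s", "p", "a"), ("t", "a", "s")]

bigrams = Counter()
unigrams = Counter()
for w in words:
    for a, b in zip(w, w[1:]):
        bigrams[(a, b)] += 1
        unigrams[a] += 1

def transition_prob(a, b):
    # P(b | a): how expected the next phoneme is, given the preceding one
    return bigrams[(a, b)] / unigrams[a] if unigrams[a] else 0.0

p_st = transition_prob("s", "t")   # frequent transition in this corpus
p_sp = transition_prob("s", "p")   # rarer transition
```

    In the study, probabilities of this kind (estimated from large speech corpora) were regressed against the recorded neural responses.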

  1. SU-E-J-261: Statistical Analysis and Chaotic Dynamics of Respiratory Signal of Patients in BodyFix

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Michalski, D; Huq, M; Bednarz, G

    Purpose: To quantify the respiratory signal of patients in BodyFix undergoing 4DCT scans with and without the immobilization cover. Methods: 20 pairs of respiratory tracks recorded with the RPM system during 4DCT scans were analyzed. Descriptive statistics were applied to selected parameters of the exhale-inhale decomposition. Standardized signals were used with the delay method to build orbits in embedded space. Nonlinear behavior was tested with surrogate data. Sample entropy (SE), Lempel-Ziv complexity (LZC), and the largest Lyapunov exponents (LLE) were compared. Results: Statistical tests show a difference between scans for the inspiration time and its variability, which is larger for scans without the cover. The same holds for the variability of the end of exhalation and inhalation. Other parameters fail to show a difference. For both scans the respiratory signals show determinism and nonlinear stationarity. Statistical tests on surrogate data reveal their nonlinearity. The LLEs show the signals' chaotic nature and its correlation with the breathing period and its embedding delay time. SE, LZC, and LLE measure respiratory signal complexity. The nonlinear characteristics do not differ between scans. Conclusion: Contrary to expectation, the cover applied to patients in BodyFix appears to have a limited effect on the signal parameters. Analysis based on trajectories of delay vectors shows the respiratory system's nonlinear character and its sensitive dependence on initial conditions. Reproducibility of the respiratory signal can be evaluated with measures of signal complexity and its predictability window. A longer respiratory period is conducive to signal reproducibility, as shown by these gauges. Statistical independence of the exhale and inhale times is also supported by the magnitude of the LLE. The nonlinear parameters seem more appropriate for gauging respiratory signal complexity, given its deterministic chaotic nature. This contrasts with measures based on harmonic analysis, which are blind to nonlinear features. Dynamics of breathing, so crucial for 4D-based clinical technologies, can be better controlled if a nonlinear-based methodology, which reflects the respiration characteristics, is applied. Funding provided by Varian Medical Systems via an Investigator Initiated Research Project.
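    Of the complexity measures compared above, sample entropy is the simplest to sketch; a compact NumPy variant (the embedding dimension m, tolerance r, and test signals are conventional illustrative choices, not the study's settings):

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    # SampEn(m, r): -log of the conditional probability that template pairs
    # matching for m points (within tolerance r*std) also match for m+1
    x = np.asarray(x, dtype=float)
    tol = r * x.std()

    def count_matches(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        n = 0
        for i in range(len(templates)):
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            n += int(np.sum(d <= tol))
        return n

    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

rng = np.random.default_rng(2)
se_periodic = sample_entropy(np.sin(np.linspace(0, 20 * np.pi, 500)))
se_noise = sample_entropy(rng.normal(size=500))
```

    A regular, predictable signal (the sine) yields low sample entropy, while white noise yields high entropy, which is the sense in which SE gauges respiratory reproducibility above.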

  2. Analytical estimation of laser phase noise induced BER floor in coherent receiver with digital signal processing.

    PubMed

    Vanin, Evgeny; Jacobsen, Gunnar

    2010-03-01

    The Bit-Error-Ratio (BER) floor caused by laser phase noise in an optical fiber communication system with differential quadrature phase shift keying (DQPSK) and coherent detection followed by digital signal processing (DSP) is analytically evaluated. An in-phase and quadrature (I&Q) receiver with carrier phase recovery using DSP is considered. The carrier phase recovery is based on a phase estimate of a finite sum (block) of the signal samples raised to the fourth power, with phase unwrapping at transitions between blocks. It is demonstrated that errors generated at block transitions make the dominant contribution to the system BER floor when the impact of the additive noise is negligibly small in comparison with the effect of the laser phase noise. The BER floor for the case in which phase unwrapping is omitted is also derived analytically and used to emphasize the crucial importance of this signal processing operation. The analytical results are verified by full Monte Carlo simulations. The BER for another type of DQPSK receiver, based on differential phase detection, is also obtained in analytical form using the principle of conditional probability. That principle is justified for differential phase detection because the signal phase error induced by laser phase noise and the additive noise contributions are statistically independent. Based on the analytical results, the laser linewidth tolerance is calculated for different system cases.
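    The block-wise fourth-power phase estimate with unwrapping at block transitions can be sketched for a noiseless QPSK signal corrupted only by laser phase drift (block length, symbol count, and drift level are arbitrary illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
n, block = 4000, 40
# QPSK symbols plus a slow laser phase drift (Wiener process)
bits = rng.integers(0, 4, n)
symbols = np.exp(1j * (np.pi / 4 + np.pi / 2 * bits))
phase = np.cumsum(rng.normal(0.0, 0.01, n))        # laser phase noise
received = symbols * np.exp(1j * phase)

# block-wise fourth-power phase estimate: raising to the 4th power strips
# the data modulation, leaving (roughly) four times the carrier phase
est = np.zeros(n // block)
for k in range(n // block):
    blk = received[k * block:(k + 1) * block]
    est[k] = np.angle(np.sum(blk ** 4)) / 4.0

# unwrap across block transitions to remove pi/2 ambiguity jumps,
# the step whose omission the abstract shows to be so costly
est = np.unwrap(est * 4.0) / 4.0
```

    Without the `np.unwrap` step, the estimate jumps by multiples of pi/2 between blocks, which is exactly the block-transition error mechanism analyzed in the paper.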

  3. Perception of second- and third-order orientation signals and their interactions

    PubMed Central

    Victor, Jonathan D.; Thengone, Daniel J.; Conte, Mary M.

    2013-01-01

    Orientation signals, which are crucial to many aspects of visual function, are more complex and varied in the natural world than in the stimuli typically used for laboratory investigation. Gratings and lines have a single orientation, but in natural stimuli, local features have multiple orientations, and multiple orientations can occur even at the same location. Moreover, orientation cues can arise not only from pairwise spatial correlations, but from higher-order ones as well. To investigate these orientation cues and how they interact, we examined segmentation performance for visual textures in which the strengths of different kinds of orientation cues were varied independently, while controlling potential confounds such as differences in luminance statistics. Second-order cues (the kind present in gratings) at different orientations are largely processed independently: There is no cancellation of positive and negative signals at orientations that differ by 45°. Third-order orientation cues are readily detected and interact only minimally with second-order cues. However, they combine across orientations in a different way: Positive and negative signals largely cancel if the orientations differ by 90°. Two additional elements are superimposed on this picture. First, corners play a special role. When second-order orientation cues combine to produce corners, they provide a stronger signal for texture segregation than can be accounted for by their individual effects. Second, while the object versus background distinction does not influence processing of second-order orientation cues, this distinction influences the processing of third-order orientation cues. PMID:23532909

  4. An Ultra-Wideband Cross-Correlation Radiometer for Mesoscopic Experiments

    NASA Astrophysics Data System (ADS)

    Toonen, Ryan; Haselby, Cyrus; Qin, Hua; Eriksson, Mark; Blick, Robert

    2007-03-01

    We have designed, built and tested a cross-correlation radiometer for detecting statistical order in the quantum fluctuations of mesoscopic experiments at sub-Kelvin temperatures. Our system utilizes a fully analog front-end--operating over the X- and Ku-bands (8 to 18 GHz)--for computing the cross-correlation function. Digital signal processing techniques are used to provide robustness against instrumentation drifts and offsets. The economized version of our instrument can measure, with sufficient correlation efficiency, noise signals having power levels as low as 10 fW. We show that, if desired, we can improve this performance by including cryogenic preamplifiers which boost the signal-to-noise ratio near the signal source. By adding a few extra components, we can measure both the real and imaginary parts of the cross-correlation function--improving the overall signal-to-noise ratio by a factor of sqrt[2]. We demonstrate the utility of our cross-correlator with noise power measurements from a quantum point contact.
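    The core radiometric trick, averaging the product of two channels so that uncorrelated amplifier noise cancels while the common source noise survives, can be sketched at baseband (the power levels and sample count are arbitrary illustrative choices, not the instrument's figures):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 500_000
p_source = 0.01                       # weak common noise power (arb. units)
source = rng.normal(0.0, np.sqrt(p_source), n)

# each channel adds its own, much stronger, uncorrelated amplifier noise
ch_a = source + rng.normal(0.0, 1.0, n)
ch_b = source + rng.normal(0.0, 1.0, n)

# zero-lag cross-correlation averages away the uncorrelated noise,
# leaving an estimate of the common source power
p_est = float(np.mean(ch_a * ch_b))
```

    The estimate converges on the common power even though it sits 20 dB below each channel's own noise, which is why the correlator can resolve femtowatt-level signals.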

  5. Analysis of cerebral vessels dynamics using experimental data with missed segments

    NASA Astrophysics Data System (ADS)

    Pavlova, O. N.; Abdurashitov, A. S.; Ulanova, M. V.; Shihalov, G. M.; Semyachkina-Glushkovskaya, O. V.; Pavlov, A. N.

    2018-04-01

    Physiological signals often contain bad segments that arise from artifacts, failures of the recording equipment, or varying experimental conditions. The experimental data need to be preprocessed to avoid such parts of the recordings. When there are only a few bad segments, they can simply be removed from the signal before analysis. However, when many segments are extracted, the internal structure of the analyzed physiological process may be destroyed, and it is unclear whether such a signal can still be used in diagnostics-related studies. In this paper we address this problem for cerebral vessel dynamics. We analyze simulated data to reveal how data loss affects the quantification of scaling features in complex signals with distinct correlation properties, and show that its effects differ significantly between data with long-range correlations and data with anti-correlations. We conclude that cerebral vessel dynamics is significantly less sensitive to missing data fragments than signals with anti-correlated statistics.
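    The standard tool behind such scaling analyses is detrended fluctuation analysis (DFA); a compact sketch, including a crude random-segment-removal test on an uncorrelated signal (all lengths and scales are illustrative, and the signals are synthetic, not cerebral-vessel recordings):

```python
import numpy as np

def dfa_exponent(x, scales=(16, 32, 64, 128, 256)):
    # DFA-1: slope of log F(s) versus log s for the integrated profile
    y = np.cumsum(x - np.mean(x))
    flucts = []
    for s in scales:
        resid = []
        for w in range(len(y) // s):
            seg = y[w * s:(w + 1) * s]
            k = np.arange(s)
            trend = np.polyval(np.polyfit(k, seg, 1), k)
            resid.append(np.mean((seg - trend) ** 2))
        flucts.append(np.sqrt(np.mean(resid)))
    return np.polyfit(np.log(scales), np.log(flucts), 1)[0]

rng = np.random.default_rng(5)
white = rng.normal(size=8192)
alpha_white = dfa_exponent(white)             # uncorrelated: alpha ~ 0.5
alpha_brown = dfa_exponent(np.cumsum(white))  # long-range correlated: ~ 1.5

# crude data-loss experiment: cut out roughly 20% of the samples in chunks
keep = np.ones(white.size, dtype=bool)
for start in rng.integers(0, white.size - 64, 25):
    keep[start:start + 64] = False
alpha_gappy = dfa_exponent(white[keep])
```

    For the uncorrelated signal the exponent is essentially unchanged after segment removal; the paper's point is that this robustness depends strongly on the correlation structure of the underlying signal.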

  6. An experimental investigation of the power spectrum of phase modulation induced on a satellite radio signal by the ionosphere

    NASA Technical Reports Server (NTRS)

    Moser, D. T.

    1972-01-01

    The power spectrum of phase modulation imposed upon satellite radio signals by the inhomogeneous F-region of the ionosphere (100 - 500 km) was studied. Tapes of the S-66 Beacon B Satellite recorded during the period 1964 - 1966 were processed to yield a record of the frequency modulation induced on the signals by ionospheric dispersion. This modulation is produced as the two-dimensional spatial phase pattern formed on the ground sweeps across the receiving station during satellite transits. From this, a power spectrum of the structure sizes comprising the diffracting mechanism was determined using digital techniques. Fresnel oscillations were observed and analyzed, along with some comments on the statistical stationarity of the shape of the observed power spectrum.

  7. Surveillance system and method having an adaptive sequential probability fault detection test

    NASA Technical Reports Server (NTRS)

    Herzog, James P. (Inventor); Bickford, Randall L. (Inventor)

    2005-01-01

    System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilizes the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.
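    A hedged sketch of the general idea, fitting a density to training residuals and then running a sequential likelihood-ratio test against a hypothesized fault shift, using a Gaussian fit and Wald's SPRT (the fault magnitude and error rates are invented for illustration; the patented procedure fits a more general density):

```python
import numpy as np

rng = np.random.default_rng(6)

# training phase: residuals from normal asset operation, fitted by a Gaussian
train = rng.normal(0.0, 0.5, 5000)
mu0, sigma = train.mean(), train.std()
mu1 = mu0 + 2.0 * sigma            # hypothesized mean shift under a fault

def sprt(residuals, alpha=0.001, beta=0.001):
    # Wald's sequential probability ratio test on the fitted densities
    upper = np.log((1 - beta) / alpha)
    lower = np.log(beta / (1 - alpha))
    llr = 0.0
    for i, r in enumerate(residuals):
        llr += ((r - mu0) ** 2 - (r - mu1) ** 2) / (2 * sigma ** 2)
        if llr >= upper:
            return "fault", i
        if llr <= lower:
            return "normal", i
    return "undecided", len(residuals)

verdict_ok, _ = sprt(rng.normal(0.0, 0.5, 200))    # healthy residuals
verdict_bad, _ = sprt(rng.normal(1.0, 0.5, 200))   # mean-shifted residuals
```

    The sequential form decides as soon as the accumulated evidence crosses a threshold, typically after only a handful of samples, rather than after a fixed observation window.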

  8. Surveillance system and method having an adaptive sequential probability fault detection test

    NASA Technical Reports Server (NTRS)

    Bickford, Randall L. (Inventor); Herzog, James P. (Inventor)

    2006-01-01

    System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilizes the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.

  9. Surveillance System and Method having an Adaptive Sequential Probability Fault Detection Test

    NASA Technical Reports Server (NTRS)

    Bickford, Randall L. (Inventor); Herzog, James P. (Inventor)

    2008-01-01

    System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilizes the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.

  10. Fusion of Local Statistical Parameters for Buried Underwater Mine Detection in Sonar Imaging

    NASA Astrophysics Data System (ADS)

    Maussang, F.; Rombaut, M.; Chanussot, J.; Hétet, A.; Amate, M.

    2008-12-01

    Detection of buried underwater objects, and especially mines, is currently a crucial strategic task. Images provided by sonar systems able to penetrate the sea floor, such as synthetic aperture sonars (SASs), are of great interest for the detection and classification of such objects. However, the signal-to-noise ratio is fairly low, and advanced information processing is required for correct and reliable detection of the echoes generated by the objects. The detection method proposed in this paper is based on a data-fusion architecture using belief theory. The input data of this architecture are local statistical characteristics extracted from SAS data, corresponding to the first-, second-, third-, and fourth-order statistical properties of the sonar images, respectively. The interest of these parameters is derived from a statistical model of the sonar data. Numerical criteria are also proposed to estimate the detection performance and to validate the method.

  11. Validating Coherence Measurements Using Aligned and Unaligned Coherence Functions

    NASA Technical Reports Server (NTRS)

    Miles, Jeffrey Hilton

    2006-01-01

    This paper describes a novel approach based on coherence functions and statistical theory for sensor validation in a harsh environment. By using aligned and unaligned coherence functions together with statistical theory, one can test for sensor degradation, total sensor failure, or changes in the signal. The advanced diagnostic approach and the novel data processing methodology discussed provide a single number that conveys this information. This number, calculated with standard statistical procedures for comparing the means of two distributions, is compared with results obtained using Yuen's robust statistical method to create confidence intervals. Examination of experimental data from Kulite pressure transducers mounted in a Pratt & Whitney PW4098 combustor, using spectrum analysis methods on aligned and unaligned time histories, has verified the effectiveness of the proposed method. All the procedures produce good results, demonstrating the robustness of the technique.

  12. Asymptotically optimum multialternative sequential procedures for discernment of processes minimizing average length of observations

    NASA Astrophysics Data System (ADS)

    Fishman, M. M.

    1985-01-01

    The problem of multialternative sequential discernment of processes is formulated in terms of conditionally optimum procedures minimizing the average length of observations, without any probabilistic assumptions about any one occurring process, rather than in terms of Bayes procedures minimizing the average risk. The problem is to find the procedure that will transform inequalities into equalities. The problem is formulated for various models of signal observation and data processing: (1) discernment of signals from background interference by a multichannel system; (2) discernment of pulse sequences with unknown time delay; (3) discernment of harmonic signals with unknown frequency. An asymptotically optimum sequential procedure is constructed which compares the statistics of the likelihood ratio with the mean-weighted likelihood ratio and estimates the upper bound for conditional average lengths of observations. This procedure is shown to remain valid as the upper bound for the probability of erroneous partial solutions decreases approaching zero and the number of hypotheses increases approaching infinity. It also remains valid under certain special constraints on the probability such as a threshold. A comparison with a fixed-length procedure reveals that this sequential procedure decreases the length of observations to one quarter, on the average, when the probability of erroneous partial solutions is low.

  13. Minimal Polynomial Method for Estimating Parameters of Signals Received by an Antenna Array

    NASA Astrophysics Data System (ADS)

    Ermolaev, V. T.; Flaksman, A. G.; Elokhin, A. V.; Kuptsov, V. V.

    2018-01-01

    The effectiveness of the projection minimal polynomial method for determining the number of signal sources acting on an antenna array (AA) of arbitrary configuration, together with their angular directions, has been studied. The method estimates the degree of the minimal polynomial of the correlation matrix (CM) of the input process in the AA on the basis of a statistically validated root-mean-square criterion. Special attention is paid to the case of an ultrashort sample of the input process, in which the number of samples is considerably smaller than the number of AA elements; this case is important for multielement AAs. It is shown that the proposed method is more effective in this case than methods based on the AIC (Akaike Information Criterion) or the minimum description length (MDL) criterion.
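    For comparison, the MDL-style alternative mentioned above works directly from the eigenvalues of the sample correlation matrix; a sketch of MDL source-number estimation on a synthetic 8-element array (the scenario and every parameter below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(7)
m, n_snap, n_src = 8, 100, 2       # array elements, snapshots, true sources

# two plane waves plus noise on a hypothetical 8-element uniform linear array
angles = np.deg2rad([10.0, -25.0])
steer = np.exp(1j * np.pi * np.outer(np.arange(m), np.sin(angles)))
sig = rng.normal(size=(n_src, n_snap)) + 1j * rng.normal(size=(n_src, n_snap))
noise = 0.1 * (rng.normal(size=(m, n_snap)) + 1j * rng.normal(size=(m, n_snap)))
x = steer @ sig + noise

# sample covariance matrix and its eigenvalues, in descending order
R = x @ x.conj().T / n_snap
ev = np.sort(np.linalg.eigvalsh(R))[::-1]

def mdl(k):
    # MDL criterion for k sources among m sensors and n_snap snapshots:
    # log of the geometric/arithmetic mean ratio of the noise eigenvalues,
    # plus a description-length penalty on the model order
    tail = ev[k:]
    geo = np.exp(np.mean(np.log(tail)))
    arith = np.mean(tail)
    return (-n_snap * (m - k) * np.log(geo / arith)
            + 0.5 * k * (2 * m - k) * np.log(n_snap))

n_est = min(range(m), key=mdl)
```

    The abstract's point is that when the snapshot count drops well below m, eigenvalue-spread criteria of this kind degrade, whereas the minimal-polynomial degree estimate holds up.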

  14. Tracking signal test to monitor an intelligent time series forecasting model

    NASA Astrophysics Data System (ADS)

    Deng, Yan; Jaraiedi, Majid; Iskander, Wafik H.

    2004-03-01

    Extensive research has been conducted on intelligent time series forecasting, including many variations on the use of neural networks. However, investigation of model adequacy over time, after the training process is completed, remains to be fully explored. In this paper we demonstrate how a smoothed-error tracking signal test can be incorporated into a neuro-fuzzy model to monitor the forecasting process and serve as a statistical measure for keeping the forecasting model up to date. The proposed monitoring procedure is effective in detecting nonrandom changes due to model inadequacy, lack of unbiasedness in the estimation of model parameters, or deviations from existing patterns. This powerful detection device will result in improved forecast accuracy in the long run. An example data set is used to demonstrate the application of the proposed method.
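    One common smoothed-error tracking signal divides an exponentially smoothed error by an exponentially smoothed absolute error; a sketch with an illustrative smoothing constant and control limit (the neuro-fuzzy forecaster itself is not reproduced, only the monitoring statistic):

```python
import numpy as np

def tracking_signal(errors, alpha=0.1):
    # smoothed-error tracking signal: a sustained forecast bias drives the
    # ratio toward +/-1, while purely random errors keep it near 0
    e_s, mad, out = 0.0, 1.0, []
    for e in errors:
        e_s = alpha * e + (1 - alpha) * e_s
        mad = alpha * abs(e) + (1 - alpha) * mad
        out.append(e_s / mad)
    return np.array(out)

rng = np.random.default_rng(8)
errors = rng.normal(0.0, 1.0, 300)   # forecast errors of an adequate model
errors[150:] += 2.0                  # a bias appears mid-stream
ts = tracking_signal(errors)
alarm = np.abs(ts) > 0.5             # illustrative control limit
```

    While the model remains adequate the statistic hovers near zero; once the bias appears it climbs past the limit within a few tens of samples, flagging that the model needs retraining.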

  15. Cassini finds molecular hydrogen in the Enceladus plume: Evidence for hydrothermal processes.

    PubMed

    Waite, J Hunter; Glein, Christopher R; Perryman, Rebecca S; Teolis, Ben D; Magee, Brian A; Miller, Greg; Grimes, Jacob; Perry, Mark E; Miller, Kelly E; Bouquet, Alexis; Lunine, Jonathan I; Brockwell, Tim; Bolton, Scott J

    2017-04-14

    Saturn's moon Enceladus has an ice-covered ocean; a plume of material erupts from cracks in the ice. The plume contains chemical signatures of water-rock interaction between the ocean and a rocky core. We used the Ion Neutral Mass Spectrometer onboard the Cassini spacecraft to detect molecular hydrogen in the plume. By using the instrument's open-source mode, background processes of hydrogen production in the instrument were minimized and quantified, enabling the identification of a statistically significant signal of hydrogen native to Enceladus. We find that the most plausible source of this hydrogen is ongoing hydrothermal reactions of rock containing reduced minerals and organic materials. The relatively high hydrogen abundance in the plume signals thermodynamic disequilibrium that favors the formation of methane from CO2 in Enceladus' ocean. Copyright © 2017, American Association for the Advancement of Science.

  16. Wireless Monitoring of the Height of Condensed Water in Steam Pipes

    NASA Technical Reports Server (NTRS)

    Lee, Hyeong Jae; Bar-Cohen, Yoseph; Lih, Shyh-Shiuh; Badescu, Mircea; Dingizian, Arsham; Takano, Nobuyuki; Blosiu, Julian O.

    2014-01-01

    A wireless health monitoring system has been developed for determining the height of water condensation in steam pipes, with data acquisition done remotely over a wireless network. The system is designed to operate in the harsh environment encountered in manholes and at pipe temperatures of over 200 °C. The test method is ultrasonic pulse-echo, and the hardware includes a pulser, a receiver, and a wireless modem for communication. Data acquisition and signal processing software were developed to determine the water height using adaptive signal processing and data communication that can be controlled while the hardware is installed in a manhole. A statistical decision-making tool is being developed from the field test data to determine the height of the condensed water under high-noise conditions and other environmental factors.

  17. Model-based ultrasound temperature visualization during and following HIFU exposure.

    PubMed

    Ye, Guoliang; Smith, Penny Probert; Noble, J Alison

    2010-02-01

    This paper describes the application of signal processing techniques to improve the robustness of ultrasound feedback for displaying changes in temperature distribution in treatment using high-intensity focused ultrasound (HIFU), especially at the low signal-to-noise ratios that might be expected in in vivo abdominal treatment. Temperature estimation is based on the local displacements in ultrasound images taken during HIFU treatment, and a method to improve robustness to outliers is introduced. The main contribution of the paper is in the application of a Kalman filter, a statistical signal processing technique, which uses a simple analytical temperature model of heat dispersion to improve the temperature estimation from the ultrasound measurements during and after HIFU exposure. To reduce the sensitivity of the method to previous assumptions on the material homogeneity and signal-to-noise ratio, an adaptive form is introduced. The method is illustrated using data from HIFU exposure of ex vivo bovine liver. A particular advantage of the stability it introduces is that the temperature can be visualized not only in the intervals between HIFU exposure but also, for some configurations, during the exposure itself. 2010 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
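    The Kalman-filter idea, a simple analytic cooling model correcting noisy ultrasound-derived temperature estimates, can be sketched in one dimension (the decay model, noise variances, and temperatures are invented for illustration, not the paper's HIFU parameters):

```python
import numpy as np

rng = np.random.default_rng(9)

# simple heat-dispersion model: temperature relaxes toward ambient after HIFU
ambient, decay = 37.0, 0.97
true_T = [60.0]
for _ in range(199):
    true_T.append(ambient + decay * (true_T[-1] - ambient))
true_T = np.array(true_T)

# noisy "ultrasound-derived" temperature measurements (assumed model)
meas = true_T + rng.normal(0.0, 3.0, true_T.size)

# scalar Kalman filter built on the same relaxation model
q, r = 0.05, 9.0                    # process and measurement noise variances
T_hat, p = meas[0], r
filtered = []
for z in meas:
    # predict: propagate the state and its uncertainty through the model
    T_hat = ambient + decay * (T_hat - ambient)
    p = decay ** 2 * p + q
    # update: blend the prediction with the new measurement
    k = p / (p + r)
    T_hat += k * (z - T_hat)
    p *= (1 - k)
    filtered.append(T_hat)
filtered = np.array(filtered)
```

    The adaptive form described in the paper additionally tunes q and r online, reducing sensitivity to assumptions about tissue homogeneity and the signal-to-noise ratio.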

  18. Laboratory for Engineering Man/Machine Systems (LEMS): System identification, model reduction and deconvolution filtering using Fourier based modulating signals and high order statistics

    NASA Technical Reports Server (NTRS)

    Pan, Jianqiang

    1992-01-01

    Several important problems in the fields of signal processing and model identification are addressed, including system structure identification, frequency response determination, high-order model reduction, high-resolution frequency analysis, and deconvolution filtering. Each of these topics involves a wide range of applications and has received considerable attention. Using Fourier-based sinusoidal modulating signals, it is shown that a discrete autoregressive model can be constructed for the least squares identification of continuous systems. Identification algorithms are presented for frequency response determination of both SISO and MIMO systems using only transient data. Several new schemes for model reduction were also developed. Based upon complex sinusoidal modulating signals, a parametric least squares algorithm for high-resolution frequency estimation is proposed. Numerical examples show that the proposed algorithm outperforms conventional methods. The problem of deconvolution and parameter identification of a general noncausal, nonminimum-phase ARMA system driven by non-Gaussian stationary random processes was also studied. Algorithms are introduced for inverse cumulant estimation, both in the frequency domain via FFT algorithms and in the time domain via the least squares algorithm.

  19. Duplicate retention in signalling proteins and constraints from network dynamics.

    PubMed

    Soyer, O S; Creevey, C J

    2010-11-01

    Duplications are a major driving force behind evolution. Most duplicates are believed to fix through genetic drift, but it is not clear whether this process affects all duplications equally or whether there are certain gene families that are expected to show neutral expansions under certain circumstances. Here, we analyse the neutrality of duplications in different functional classes of signalling proteins based on their effects on response dynamics. We find that duplications involving intermediary proteins in a signalling network are neutral more often than those involving receptors. Although the fraction of neutral duplications in all functional classes increase with decreasing population size and selective pressure on dynamics, this effect is most pronounced for receptors, indicating a possible expansion of receptors in species with small population size. In line with such an expectation, we found a statistically significant increase in the number of receptors as a fraction of genome size in eukaryotes compared with prokaryotes. Although not confirmative, these results indicate that neutral processes can be a significant factor in shaping signalling networks and affect proteins from different functional classes differently. © 2010 The Authors. Journal Compilation © 2010 European Society For Evolutionary Biology.

  20. Engineering studies related to geodetic and oceanographic remote sensing using short pulse techniques

    NASA Technical Reports Server (NTRS)

    Miller, L. S.; Brown, G. S.; Hayne, G. S.

    1973-01-01

    For the Skylab S-193 radar altimeter, data processing flow charts and identification of calibration requirements and problem areas for defined S-193 altimeter experiments are presented. An analysis and simulation of the relationship between one particular S-193 measurement and the parameter of interest for determining the sea surface scattering cross-section are considered. For the GEOS-C radar altimeter, results are presented for system analyses pertaining to signal-to-noise ratio, pulse compression threshold behavior, altimeter measurement variance characteristics, desirability of onboard averaging, tracker bandwidth considerations, and statistical character of the altimeter data in relation to harmonic analysis properties of the geodetic signal.

  1. Large signal design - Performance and simulation of a 3 W C-band GaAs power MMIC

    NASA Astrophysics Data System (ADS)

    White, Paul M.; Hendrickson, Mary A.; Chang, Wayne H.; Curtice, Walter R.

    1990-04-01

    This paper describes a C-band GaAs power MMIC amplifier that achieved a gain of 17 dB and 1 dB compressed CW power output of 34 dBm across a 4.5-6.25-GHz frequency range, without design iteration. The first-pass design success was achieved due to the application of a harmonic balance simulator to define the optimum output load, using a large-signal FET model determined statistically on a well controlled foundry-ready process line. The measured performance was close to that predicted by a full harmonic balance circuit analysis.

  2. Transiting Planet Search in the Kepler Pipeline

    NASA Technical Reports Server (NTRS)

    Jenkins, Jon M.; Chandrasekaran, Hema; McCauliff, Sean D.; Caldwell, Douglas A.; Tenebaum, Peter; Li, Jie; Klaus, Todd C.; Cote, Mile T.; Middour, Christopher

    2010-01-01

    The Kepler Mission simultaneously measures the brightness of more than 160,000 stars every 29.4 minutes over a 3.5-year mission to search for transiting planets. Detecting transits is a signal-detection problem in which the signal of interest is a periodic pulse train and the predominant noise source is a non-white, non-stationary (1/f)-type process of stellar variability. Many stars also exhibit coherent or quasi-coherent oscillations. The detection algorithm first identifies and removes strong oscillations and then applies an adaptive, wavelet-based matched filter. We discuss how we obtain super-resolution detection statistics and the effectiveness of the algorithm for Kepler flight data.
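    A stripped-down version of the final stage, a matched-filter detection statistic for a box-shaped transit train in white noise (Kepler's wavelet-based whitening of the 1/f stellar noise is omitted, and every parameter below is illustrative):

```python
import numpy as np

rng = np.random.default_rng(10)
n = 2000
period, width, depth = 250, 10, 0.0008

# periodic box-shaped transit template and a noisy light curve
template = np.zeros(n)
for start in range(100, n, period):
    template[start:start + width] = -1.0
flux = 1.0 + depth * template + rng.normal(0.0, 0.001, n)

# matched-filter detection statistic: correlate the residual flux with the
# normalized template; units of noise standard deviations
resid = flux - flux.mean()
h = template - template.mean()
stat = float(resid @ h / (0.001 * np.linalg.norm(h)))

# the same statistic on pure noise, for reference
noise_stat = float((rng.normal(0.0, 0.001, n) @ h) / (0.001 * np.linalg.norm(h)))
```

    In the pipeline this statistic is computed over a dense grid of trial periods, epochs, and durations, which is where the super-resolution detection statistics come in.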

  3. Comparative intelligibility investigation of single-channel noise-reduction algorithms for Chinese, Japanese, and English.

    PubMed

    Li, Junfeng; Yang, Lin; Zhang, Jianping; Yan, Yonghong; Hu, Yi; Akagi, Masato; Loizou, Philipos C

    2011-05-01

    A large number of single-channel noise-reduction algorithms have been proposed based largely on mathematical principles. Most of these algorithms, however, have been evaluated with English speech. Given the different perceptual cues used by native listeners of different languages, including tonal languages, it is of interest to examine whether there are any language effects when the same noise-reduction algorithm is used to process noisy speech in different languages. This study presents a comparative evaluation of various single-channel noise-reduction algorithms applied to noisy speech from three languages: Chinese, Japanese, and English. Clean speech signals (Chinese words and Japanese words) were first corrupted by three types of noise at two signal-to-noise ratios and then processed by five single-channel noise-reduction algorithms. The processed signals were finally presented to normal-hearing listeners for recognition. Intelligibility evaluation showed that the majority of noise-reduction algorithms did not improve speech intelligibility. Consistent with a previous study on English, the Wiener filtering algorithm produced small, but statistically significant, improvements in intelligibility for car and white noise conditions. Significant differences between the performances of noise-reduction algorithms across the three languages were observed.
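
    The Wiener filtering approach mentioned above can be sketched in its simplest short-time spectral form. This is a minimal illustration assuming a known, flat noise power spectrum and non-overlapping frames; it is not the specific implementation evaluated in the study.

```python
import numpy as np

# Minimal single-channel Wiener-filter sketch in the short-time
# spectral domain. Assumes the noise PSD is known a priori (real
# systems estimate it during speech pauses). Frame-based without
# overlap-add, for brevity; all names are illustrative.

def wiener_gain(noisy_psd, noise_psd, floor=1e-3):
    """Wiener gain H = SNR / (1 + SNR) with a spectral floor."""
    snr = np.maximum(noisy_psd / noise_psd - 1.0, 0.0)  # ML a priori SNR
    return np.maximum(snr / (1.0 + snr), floor)

def denoise(noisy, noise_psd, frame=256):
    out = np.zeros_like(noisy)
    for i in range(0, len(noisy) - frame + 1, frame):
        X = np.fft.rfft(noisy[i:i + frame])
        G = wiener_gain(np.abs(X) ** 2, noise_psd)
        out[i:i + frame] = np.fft.irfft(G * X, n=frame)
    return out

rng = np.random.default_rng(1)
fs, frame = 8000, 256
t = np.arange(fs) / fs
clean = np.sin(2 * np.pi * 440 * t)                # stand-in for speech
noise = 0.5 * rng.normal(size=fs)
noise_psd = np.full(frame // 2 + 1, 0.25 * frame)  # flat white-noise PSD
enhanced = denoise(clean + noise, noise_psd, frame)
```

    High-SNR bins (where the tone lives) pass with gain near one, while noise-dominated bins are attenuated toward the spectral floor.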

  4. Design and implementation of the NaI(Tl)/CsI(Na) detectors output signal generator

    NASA Astrophysics Data System (ADS)

    Zhou, Xu; Liu, Cong-Zhan; Zhao, Jian-Ling; Zhang, Fei; Zhang, Yi-Fei; Li, Zheng-Wei; Zhang, Shuo; Li, Xu-Fang; Lu, Xue-Feng; Xu, Zhen-Ling; Lu, Fang-Jun

    2014-02-01

    We designed and implemented a signal generator that can simulate the output of the NaI(Tl)/CsI(Na) detectors' pre-amplifier onboard the Hard X-ray Modulation Telescope (HXMT). Implementing the design on an FPGA (Field Programmable Gate Array) in VHDL and adding a random component, we produced a double-exponential random pulse signal generator. The statistical distribution of the signal amplitude is programmable, and the time intervals between adjacent signals follow a negative exponential distribution.
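
    The signal model described above, double-exponential pulses arriving with exponentially distributed inter-arrival times (a Poisson process), can be sketched in software. Parameter values and names below are illustrative assumptions, not the HXMT hardware design.

```python
import numpy as np

# Sketch of a double-exponential random pulse train: each pulse has a
# fast rise and slow decay, amplitudes follow a programmable law (here
# uniform), and arrivals form a Poisson process. Illustrative values.

def double_exp_pulse(t, tau_rise=5e-7, tau_decay=5e-6):
    """Difference of two exponentials, zero for t < 0."""
    tt = np.maximum(t, 0.0)            # avoid overflow for t < 0
    p = np.exp(-tt / tau_decay) - np.exp(-tt / tau_rise)
    return np.where(t >= 0, p, 0.0)

def simulate(rate_hz, t_total, fs, rng):
    n = int(t_total * fs)
    t = np.arange(n) / fs
    signal = np.zeros(n)
    arrival = rng.exponential(1.0 / rate_hz)    # first event time
    while arrival < t_total:
        amp = rng.uniform(0.5, 1.5)             # programmable amplitudes
        signal += amp * double_exp_pulse(t - arrival)
        arrival += rng.exponential(1.0 / rate_hz)  # exponential gaps
    return t, signal

rng = np.random.default_rng(2)
t, sig = simulate(rate_hz=1e4, t_total=1e-3, fs=1e8, rng=rng)
```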

  5. CHAM: weak signals detection through a new multivariate algorithm for process control

    NASA Astrophysics Data System (ADS)

    Bergeret, François; Soual, Carole; Le Gratiet, B.

    2016-10-01

    Derivative technologies based on core CMOS processes are aggressive in terms of design rules and process control requirements. The process control plan is derived from Process Assumption (PA) calculations, which result in design rules based on known process variability capabilities, with enough margin to be safe not only for yield but especially for reliability. Even though process assumptions are calculated with a 4-sigma margin on known process capability, efficient and competitive designs challenge the process, especially for derivative technologies at the 40 and 28 nm nodes. For wafer-fab process control, PAs are broken down into monovariate control charts (layer1 CD, layer2 CD, layer2-to-layer1 overlay, layer3 CD, etc.) with appropriate specifications and control limits which, taken together, secure the silicon. This works well so far, but such a system is not very sensitive to weak signals arising from interactions of multiple key parameters (for example, a high layer2 CD combined with a high layer3 CD). CHAM is a software package using an advanced statistical algorithm specifically designed to detect small signals, especially when there are many parameters to control and when the parameters can interact to create yield issues. In this presentation we first present the CHAM algorithm, then a case study on critical dimensions with its results, and we conclude with future work. This partnership between Ippon and STM is part of E450LMDAP, a European project dedicated to metrology and lithography development for future technology nodes, especially the 10 nm node.
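
    The CHAM algorithm itself is not public. As a generic illustration of why a multivariate statistic can flag interactions that monovariate charts miss, the sketch below uses a classical Hotelling T-squared statistic: a wafer whose layer CDs are each within individual limits can still be flagged when their combination violates the learned correlation structure.

```python
import numpy as np

# Hotelling T^2 as a stand-in for a multivariate weak-signal detector.
# Two CD parameters are strongly negatively correlated in control; a
# sample that is +2 sigma on BOTH passes each monovariate chart but is
# far from the in-control correlation ellipse. Illustrative only.

def hotelling_t2(X, mean, cov_inv):
    """Per-sample Hotelling T^2 distance from the in-control model."""
    d = X - mean
    return np.einsum('ij,jk,ik->i', d, cov_inv, d)

rng = np.random.default_rng(3)
cov = np.array([[1.0, -0.9], [-0.9, 1.0]])   # in-control correlation
X_ref = rng.multivariate_normal([0.0, 0.0], cov, size=500)
mean = X_ref.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(X_ref.T))

# Suspect wafer: both CDs high by 2 sigma, within monovariate limits
suspect = np.array([[2.0, 2.0]])
t2_ref = hotelling_t2(X_ref, mean, cov_inv)
t2_sus = hotelling_t2(suspect, mean, cov_inv)
# t2_sus lies far in the tail of the reference T^2 distribution
```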

  6. Photoresist thin-film effects on alignment process capability

    NASA Astrophysics Data System (ADS)

    Flores, Gary E.; Flack, Warren W.

    1993-08-01

    Two photoresists were selected for alignment characterization based on their dissimilar coating properties and observed differences in alignment capability. The materials are Dynachem OFPR-800 and Shipley System 8. Both photoresists were examined on two challenging alignment levels in a submicron CMOS process, a nitride level and a planarized second level metal. An Ultratech Stepper model 1500, which features a darkfield alignment system with a broadband green light for alignment signal detection, was used for this project. Initially, statistically designed linear screening experiments were performed to examine six process factors for each photoresist: viscosity, spin acceleration, spin speed, spin time, softbake time, and softbake temperature. Using the results derived from the screening experiments, a more thorough examination of the statistically significant process factors was performed. A full quadratic experimental design was conducted to examine the effects of viscosity, spin speed, and spin time coating properties on alignment. This included a characterization of both intra- and inter-wafer alignment control and alignment process capability. The differences in alignment behavior are analyzed in terms of photoresist material properties and the physical nature of the alignment detection system.

  7. Information processing in bacteria: memory, computation, and statistical physics: a key issues review

    NASA Astrophysics Data System (ADS)

    Lan, Ganhui; Tu, Yuhai

    2016-05-01

    Living systems have to constantly sense their external environment and adjust their internal state in order to survive and reproduce. Biological systems, from as complex as the brain to a single E. coli cell, have to process these data in order to make appropriate decisions. How do biological systems sense external signals? How do they process the information? How do they respond to signals? Through years of intense study by biologists, many key molecular players and their interactions have been identified in different biological machineries that carry out these signaling functions. However, an integrated, quantitative understanding of the whole system is still lacking for most cellular signaling pathways, not to mention the more complicated neural circuits. To study signaling processes in biology, the key thing to measure is the input-output relationship. The input is the signal itself, such as chemical concentration, external temperature, light (intensity and frequency), and more complex signals such as the face of a cat. The output can be protein conformational changes and covalent modifications (phosphorylation, methylation, etc), gene expression, cell growth and motility, as well as more complex output such as neuron firing patterns and behaviors of higher animals. Due to the inherent noise in biological systems, the measured input-output dependence is often noisy. These noisy data can be analysed by using powerful tools and concepts from information theory such as mutual information, channel capacity, and the maximum entropy hypothesis. This information theory approach has been successfully used to reveal the underlying correlations between key components of biological networks, to set bounds for network performance, and to understand possible network architecture in generating observed correlations.
Although the information theory approach provides a general tool in analysing noisy biological data and may be used to suggest possible network architectures in preserving information, it does not reveal the underlying mechanism that leads to the observed input-output relationship, nor does it tell us much about which information is important for the organism and how biological systems use information to carry out specific functions. To do that, we need to develop models of the biological machineries, e.g. biochemical networks and neural networks, to understand the dynamics of biological information processes. This is a much more difficult task. It requires deep knowledge of the underlying biological network, i.e. the main players (nodes) and their interactions (links), in sufficient detail to build a model with predictive power, as well as quantitative input-output measurements of the system under different perturbations (both genetic variations and different external conditions) to test the model predictions and guide further development of the model. Due to the recent growth of biological knowledge thanks in part to high-throughput methods (sequencing, gene expression microarrays, etc) and the development of quantitative in vivo techniques such as various fluorescence technologies, these requirements are starting to be realized in different biological systems. The possible close interaction between quantitative experimentation and theoretical modeling has made systems biology an attractive field for physicists interested in quantitative biology. In this review, we describe some of the recent work in developing a quantitative predictive model of bacterial chemotaxis, which can be considered as the hydrogen atom of systems biology. Using statistical physics approaches, such as the Ising model and Langevin equation, we study how bacteria, such as E. 
coli, sense and amplify external signals, how they keep a working memory of the stimuli, and how they use these data to compute the chemical gradient. In particular, we will describe how E. coli cells avoid cross-talk in a heterogeneous receptor cluster to keep a ligand-specific memory. We will also study the thermodynamic costs of adaptation for cells to maintain an accurate memory. The statistical physics based approach described here should be useful in understanding design principles for cellular biochemical circuits in general.
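
    The kind of Langevin-equation model referred to above can be sketched minimally: receptor activity responds quickly to ligand, while a slow methylation variable adapts it back toward a set point. Functional forms and constants below are illustrative, not fitted E. coli parameters.

```python
import numpy as np

# Minimal Langevin sketch of chemotactic adaptation: a two-state
# receptor activity a(m, L) and a slow methylation variable m that
# relaxes a back to the adapted value a0. All constants illustrative.

def activity(m, L, K=1.0, alpha=2.0):
    """Receptor activity: free energy falls with methylation m and
    rises with ligand concentration L."""
    f = alpha * (1.0 - m) + np.log(1.0 + L / K)
    return 1.0 / (1.0 + np.exp(f))

def simulate(L_of_t, a0=0.3, tau=10.0, dt=0.01, noise=0.01, seed=4):
    rng = np.random.default_rng(seed)
    m, trace = 1.0, []
    for L in L_of_t:
        a = activity(m, L)
        # slow adaptation toward a0, plus Langevin noise
        m += dt * (a0 - a) / tau + np.sqrt(dt) * noise * rng.normal()
        trace.append(a)
    return np.array(trace)

# Step increase in ligand at t = 50: activity drops, then adapts back
t = np.arange(0.0, 200.0, 0.01)
L = np.where(t < 50.0, 0.1, 1.0)
a = simulate(L)
```

    The transient drop followed by a return to the set point is the signature of perfect-adaptation-style memory discussed in the review.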

  8. Information processing in bacteria: memory, computation, and statistical physics: a key issues review.

    PubMed

    Lan, Ganhui; Tu, Yuhai

    2016-05-01

    Living systems have to constantly sense their external environment and adjust their internal state in order to survive and reproduce. Biological systems, from as complex as the brain to a single E. coli cell, have to process these data in order to make appropriate decisions. How do biological systems sense external signals? How do they process the information? How do they respond to signals? Through years of intense study by biologists, many key molecular players and their interactions have been identified in different biological machineries that carry out these signaling functions. However, an integrated, quantitative understanding of the whole system is still lacking for most cellular signaling pathways, not to mention the more complicated neural circuits. To study signaling processes in biology, the key thing to measure is the input-output relationship. The input is the signal itself, such as chemical concentration, external temperature, light (intensity and frequency), and more complex signals such as the face of a cat. The output can be protein conformational changes and covalent modifications (phosphorylation, methylation, etc), gene expression, cell growth and motility, as well as more complex output such as neuron firing patterns and behaviors of higher animals. Due to the inherent noise in biological systems, the measured input-output dependence is often noisy. These noisy data can be analysed by using powerful tools and concepts from information theory such as mutual information, channel capacity, and the maximum entropy hypothesis. This information theory approach has been successfully used to reveal the underlying correlations between key components of biological networks, to set bounds for network performance, and to understand possible network architecture in generating observed correlations.
Although the information theory approach provides a general tool in analysing noisy biological data and may be used to suggest possible network architectures in preserving information, it does not reveal the underlying mechanism that leads to the observed input-output relationship, nor does it tell us much about which information is important for the organism and how biological systems use information to carry out specific functions. To do that, we need to develop models of the biological machineries, e.g. biochemical networks and neural networks, to understand the dynamics of biological information processes. This is a much more difficult task. It requires deep knowledge of the underlying biological network, i.e. the main players (nodes) and their interactions (links), in sufficient detail to build a model with predictive power, as well as quantitative input-output measurements of the system under different perturbations (both genetic variations and different external conditions) to test the model predictions and guide further development of the model. Due to the recent growth of biological knowledge thanks in part to high-throughput methods (sequencing, gene expression microarrays, etc) and the development of quantitative in vivo techniques such as various fluorescence technologies, these requirements are starting to be realized in different biological systems. The possible close interaction between quantitative experimentation and theoretical modeling has made systems biology an attractive field for physicists interested in quantitative biology. In this review, we describe some of the recent work in developing a quantitative predictive model of bacterial chemotaxis, which can be considered as the hydrogen atom of systems biology. Using statistical physics approaches, such as the Ising model and Langevin equation, we study how bacteria, such as E. 
coli, sense and amplify external signals, how they keep a working memory of the stimuli, and how they use these data to compute the chemical gradient. In particular, we will describe how E. coli cells avoid cross-talk in a heterogeneous receptor cluster to keep a ligand-specific memory. We will also study the thermodynamic costs of adaptation for cells to maintain an accurate memory. The statistical physics based approach described here should be useful in understanding design principles for cellular biochemical circuits in general.

  9. Clustering, randomness and regularity in cloud fields. I - Theoretical considerations. II - Cumulus cloud fields

    NASA Technical Reports Server (NTRS)

    Weger, R. C.; Lee, J.; Zhu, Tianri; Welch, R. M.

    1992-01-01

    The current controversy regarding regularity versus clustering in cloud fields is examined by means of analysis and simulation studies based upon nearest-neighbor cumulative distribution statistics. It is shown that the Poisson representation of random point processes is superior to pseudorandom-number-generated models, which bias the observed nearest-neighbor statistics towards regularity. Interpretation of these nearest-neighbor statistics is discussed for many cases of superpositions of clustering, randomness, and regularity. A detailed analysis is carried out of cumulus cloud field spatial distributions based upon Landsat, AVHRR, and Skylab data, showing that, when both large and small clouds are included in the cloud field distributions, the cloud field always has a strong clustering signal.
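
    The nearest-neighbor cumulative-distribution approach can be sketched as follows: compare the empirical NN-distance CDF of a point field against the analytic Poisson form F(r) = 1 - exp(-lambda * pi * r^2). Clustered fields lie above the Poisson curve at small r, regular fields below. The example below is an illustration, not the paper's simulation code.

```python
import numpy as np

# Nearest-neighbor CDF test for clustering vs. randomness in a point
# field on the unit square. Illustrative parameters throughout.

def nn_distances(pts):
    """Distance from each point to its nearest neighbor."""
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return d.min(axis=1)

rng = np.random.default_rng(5)
n, lam = 500, 500.0                      # n points, intensity lambda
poisson_field = rng.uniform(0.0, 1.0, (n, 2))

# Clustered field: same n points, bunched tightly around 20 parents
parents = rng.uniform(0.0, 1.0, (20, 2))
clustered = parents[rng.integers(0, 20, n)] + 0.01 * rng.normal(size=(n, 2))

r = 0.02
F_analytic = 1.0 - np.exp(-lam * np.pi * r ** 2)   # Poisson NN CDF
F_random = np.mean(nn_distances(poisson_field) < r)
F_clustered = np.mean(nn_distances(clustered) < r)
# F_clustered >> F_random, while F_random tracks the analytic curve
```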

  10. Detecting determinism with improved sensitivity in time series: rank-based nonlinear predictability score.

    PubMed

    Naro, Daniel; Rummel, Christian; Schindler, Kaspar; Andrzejak, Ralph G

    2014-09-01

    The rank-based nonlinear predictability score was recently introduced as a test for determinism in point processes. Here we adapt this measure to time series sampled from time-continuous flows. We use noisy Lorenz signals to compare this approach against a classical amplitude-based nonlinear prediction error. Both measures show an almost identical robustness against Gaussian white noise. In contrast, when the amplitude distribution of the noise has a narrower central peak and heavier tails than the normal distribution, the rank-based nonlinear predictability score outperforms the amplitude-based nonlinear prediction error. For this type of noise, the nonlinear predictability score has a higher sensitivity for deterministic structure in noisy signals. It also yields a higher statistical power in a surrogate test of the null hypothesis of linear stochastic correlated signals. We show the high relevance of this improved performance in an application to electroencephalographic (EEG) recordings from epilepsy patients. Here the nonlinear predictability score again appears to be more sensitive to nonrandomness. Importantly, it yields an improved contrast between signals recorded from brain areas where the first ictal EEG signal changes were detected (focal EEG signals) versus signals recorded from brain areas that were not involved at seizure onset (nonfocal EEG signals).
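
    An amplitude-based nonlinear prediction error of the kind used as a baseline above can be sketched as follows: delay-embed the series, predict each point one step ahead from its nearest embedded neighbor, and normalize by the signal's standard deviation. A logistic map stands in for the Lorenz flow to keep the example short; all parameters are illustrative.

```python
import numpy as np

# Nearest-neighbor nonlinear prediction error in a delay embedding.
# Deterministic signals score well below 1; white noise scores near
# sqrt(2), since predictor and target are then independent.

def nonlinear_prediction_error(x, dim=3, lag=1, h=1):
    n = len(x) - (dim - 1) * lag - h
    # delay embedding: row i = (x[i], x[i+lag], ..., x[i+(dim-1)*lag])
    emb = np.column_stack([x[i * lag: i * lag + n] for i in range(dim)])
    target = x[(dim - 1) * lag + h: (dim - 1) * lag + h + n]
    d = np.linalg.norm(emb[:, None] - emb[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # exclude self-matches
    pred = target[d.argmin(axis=1)]      # nearest neighbor's future
    return np.sqrt(np.mean((pred - target) ** 2)) / x.std()

rng = np.random.default_rng(6)
# Deterministic signal: chaotic logistic map (Lorenz stand-in)
x = np.empty(600)
x[0] = 0.4
for i in range(599):
    x[i + 1] = 3.9 * x[i] * (1.0 - x[i])
noise = rng.normal(size=600)

e_det = nonlinear_prediction_error(x)
e_noise = nonlinear_prediction_error(noise)
# e_det is far below 1; e_noise is near sqrt(2)
```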

  11. Detecting determinism with improved sensitivity in time series: Rank-based nonlinear predictability score

    NASA Astrophysics Data System (ADS)

    Naro, Daniel; Rummel, Christian; Schindler, Kaspar; Andrzejak, Ralph G.

    2014-09-01

    The rank-based nonlinear predictability score was recently introduced as a test for determinism in point processes. Here we adapt this measure to time series sampled from time-continuous flows. We use noisy Lorenz signals to compare this approach against a classical amplitude-based nonlinear prediction error. Both measures show an almost identical robustness against Gaussian white noise. In contrast, when the amplitude distribution of the noise has a narrower central peak and heavier tails than the normal distribution, the rank-based nonlinear predictability score outperforms the amplitude-based nonlinear prediction error. For this type of noise, the nonlinear predictability score has a higher sensitivity for deterministic structure in noisy signals. It also yields a higher statistical power in a surrogate test of the null hypothesis of linear stochastic correlated signals. We show the high relevance of this improved performance in an application to electroencephalographic (EEG) recordings from epilepsy patients. Here the nonlinear predictability score again appears to be more sensitive to nonrandomness. Importantly, it yields an improved contrast between signals recorded from brain areas where the first ictal EEG signal changes were detected (focal EEG signals) versus signals recorded from brain areas that were not involved at seizure onset (nonfocal EEG signals).

  12. Forecasting infectious disease emergence subject to seasonal forcing.

    PubMed

    Miller, Paige B; O'Dea, Eamon B; Rohani, Pejman; Drake, John M

    2017-09-06

    Despite high vaccination coverage, many childhood infections pose a growing threat to human populations. Accurate disease forecasting would be of tremendous value to public health. Forecasting disease emergence using early warning signals (EWS) is possible in non-seasonal models of infectious diseases. Here, we assessed whether EWS also anticipate disease emergence in seasonal models. We simulated the dynamics of an immunizing infectious pathogen approaching the tipping point to disease endemicity. To explore the effect of seasonality on the reliability of early warning statistics, we varied the amplitude of fluctuations around the average transmission. We proposed and analyzed two new early warning signals based on the wavelet spectrum. We measured the reliability of the early warning signals depending on the strength of their trend preceding the tipping point and then calculated the Area Under the Curve (AUC) statistic. Early warning signals were reliable when disease transmission was subject to seasonal forcing. Wavelet-based early warning signals were as reliable as other conventional early warning signals. We found that removing seasonal trends, prior to analysis, did not improve early warning statistics uniformly. Early warning signals anticipate the onset of critical transitions for infectious diseases which are subject to seasonal forcing. Wavelet-based early warning statistics can also be used to forecast infectious disease.
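
    Conventional early-warning statistics of the kind evaluated above can be sketched with rolling variance and lag-1 autocorrelation, with the strength of the pre-transition trend summarized by Kendall's tau. The AR(1) series below, whose coefficient drifts toward 1, is a toy stand-in for a disease system approaching its tipping point; it is not the paper's epidemiological model.

```python
import numpy as np
from scipy.stats import kendalltau

# Rolling early-warning statistics on a series exhibiting critical
# slowing down (AR(1) coefficient drifting toward 1). Illustrative.

def rolling_ews(x, win=100):
    """Rolling variance and lag-1 autocorrelation over sliding windows."""
    var, ac1 = [], []
    for i in range(len(x) - win):
        w = x[i:i + win]
        var.append(w.var())
        ac1.append(np.corrcoef(w[:-1], w[1:])[0, 1])
    return np.array(var), np.array(ac1)

rng = np.random.default_rng(7)
n = 2000
phi = np.linspace(0.2, 0.97, n)   # critical slowing down: phi -> 1
x = np.zeros(n)
for i in range(1, n):
    x[i] = phi[i] * x[i - 1] + rng.normal()

var, ac1 = rolling_ews(x)
tau_var, _ = kendalltau(np.arange(len(var)), var)
tau_ac1, _ = kendalltau(np.arange(len(ac1)), ac1)
# both taus are strongly positive, signaling the approaching transition
```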

  13. Statistics of Natural Communication Signals Observed in the Wild Identify Important Yet Neglected Stimulus Regimes in Weakly Electric Fish.

    PubMed

    Henninger, Jörg; Krahe, Rüdiger; Kirschbaum, Frank; Grewe, Jan; Benda, Jan

    2018-06-13

    Sensory systems evolve in the ecological niches that each species is occupying. Accordingly, encoding of natural stimuli by sensory neurons is expected to be adapted to the statistics of these stimuli. For a direct quantification of sensory scenes, we tracked natural communication behavior of male and female weakly electric fish, Apteronotus rostratus , in their Neotropical rainforest habitat with high spatiotemporal resolution over several days. In the context of courtship, we observed large quantities of electrocommunication signals. Echo responses, acknowledgment signals, and their synchronizing role in spawning demonstrated the behavioral relevance of these signals. In both courtship and aggressive contexts, we observed robust behavioral responses in stimulus regimes that have so far been neglected in electrophysiological studies of this well characterized sensory system and that are well beyond the range of known best frequency and amplitude tuning of the electroreceptor afferents' firing rate modulation. Our results emphasize the importance of quantifying sensory scenes derived from freely behaving animals in their natural habitats for understanding the function and evolution of neural systems. SIGNIFICANCE STATEMENT The processing mechanisms of sensory systems have evolved in the context of the natural lives of organisms. To understand the functioning of sensory systems therefore requires probing them in the stimulus regimes in which they evolved. We took advantage of the continuously generated electric fields of weakly electric fish to explore electrosensory stimulus statistics in their natural Neotropical habitat. Unexpectedly, many of the electrocommunication signals recorded during courtship, spawning, and aggression had much smaller amplitudes or higher frequencies than stimuli used so far in neurophysiological characterizations of the electrosensory system. 
Our results demonstrate that quantifying sensory scenes derived from freely behaving animals in their natural habitats is essential to avoid biases in the choice of stimuli used to probe brain function.

  14. A statistical rain attenuation prediction model with application to the advanced communication technology satellite project. 3: A stochastic rain fade control algorithm for satellite link power via nonlinear Markov filtering theory

    NASA Technical Reports Server (NTRS)

    Manning, Robert M.

    1991-01-01

    The dynamic and composite nature of propagation impairments incurred on Earth-space communications links at frequencies in and above the 30/20 GHz Ka band (i.e., rain attenuation, cloud and/or clear-air scintillation, etc.), combined with the need to counter such degradations after the small link margins have been exceeded, necessitates the use of dynamic statistical identification and prediction processing of the fading signal in order to optimally estimate and predict the levels of each of the deleterious attenuation components. Such requirements are being met in NASA's Advanced Communications Technology Satellite (ACTS) Project by the implementation of optimal processing schemes derived through the use of the Rain Attenuation Prediction Model and nonlinear Markov filtering theory.

  15. Stimulated Electronic X-Ray Raman Scattering

    NASA Astrophysics Data System (ADS)

    Weninger, Clemens; Purvis, Michael; Ryan, Duncan; London, Richard A.; Bozek, John D.; Bostedt, Christoph; Graf, Alexander; Brown, Gregory; Rocca, Jorge J.; Rohringer, Nina

    2013-12-01

    We demonstrate strong stimulated inelastic x-ray scattering by resonantly exciting a dense gas target of neon with femtosecond, high-intensity x-ray pulses from an x-ray free-electron laser (XFEL). A small number of lower energy XFEL seed photons drive an avalanche of stimulated resonant inelastic x-ray scattering processes that amplify the Raman scattering signal by several orders of magnitude until it reaches saturation. Despite the large overall spectral width, the internal spiky structure of the XFEL spectrum determines the energy resolution of the scattering process in a statistical sense. This is demonstrated by observing a stochastic line shift of the inelastically scattered x-ray radiation. In conjunction with statistical methods, XFELs can be used for stimulated resonant inelastic x-ray scattering, with spectral resolution smaller than the natural width of the core-excited, intermediate state.

  16. Statistical study of generalized nonlinear phase step estimation methods in phase-shifting interferometry.

    PubMed

    Langoju, Rajesh; Patil, Abhijit; Rastogi, Pramod

    2007-11-20

    Signal processing methods based on maximum-likelihood theory, the discrete chirp Fourier transform, and spectral estimation methods have enabled accurate measurement of phase in phase-shifting interferometry in the presence of nonlinear response of the piezoelectric transducer to the applied voltage. We present a statistical study of these generalized nonlinear phase-step estimation methods, identifying the best method by deriving the Cramér-Rao bound. We also address important aspects of these methods for implementation in practical applications and compare the performance of the best-identified method with other benchmark algorithms in the presence of harmonics and noise.
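
    The underlying estimation problem can be illustrated with a linear least-squares phase recovery, assuming the (possibly nonlinear) phase steps have already been estimated. This is a sketch of the general setup, not one of the specific algorithms compared in the paper; all names and values are illustrative.

```python
import numpy as np

# Phase-shifting interferometry records frames I_k = A + B*cos(phi +
# delta_k). Writing cos(phi + delta) = cos(phi)cos(delta) -
# sin(phi)sin(delta) makes the model linear in (A, B*cos(phi),
# B*sin(phi)), so phi follows from least squares given the steps
# delta_k, including any calibrated transducer nonlinearity.

def estimate_phase(I, delta):
    X = np.column_stack([np.ones_like(delta),
                         np.cos(delta), -np.sin(delta)])
    A, C, S = np.linalg.lstsq(X, I, rcond=None)[0]
    return np.arctan2(S, C)   # phi = atan2(B sin phi, B cos phi)

rng = np.random.default_rng(8)
phi_true = 1.2
# Nonlinear actuator: actual steps deviate quadratically from nominal
nominal = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
delta = nominal + 0.05 * nominal ** 2
I = 2.0 + 1.0 * np.cos(phi_true + delta) + 0.01 * rng.normal(size=8)

phi_hat = estimate_phase(I, delta)
# phi_hat recovers phi_true because the true (nonlinear) steps are used
```

    If the nominal steps were used instead of the true nonlinear ones, the estimate would be biased, which is exactly why the step-estimation methods studied above matter.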

  17. Modeling the complexity of acoustic emission during intermittent plastic deformation: Power laws and multifractal spectra

    NASA Astrophysics Data System (ADS)

    Kumar, Jagadish; Ananthakrishna, G.

    2018-01-01

    Scale-invariant power-law distributions for acoustic emission signals are ubiquitous in several plastically deforming materials. However, power-law distributions for acoustic emission energies are reported in distinctly different plastically deforming situations such as hcp and fcc single and polycrystalline samples exhibiting smooth stress-strain curves and in dilute metallic alloys exhibiting discontinuous flow. This is surprising since the underlying dislocation mechanisms in these two types of deformations are very different. So far, there have been no models that predict the power-law statistics for discontinuous flow. Furthermore, the statistics of the acoustic emission signals in jerky flow is even more complex, requiring multifractal measures for a proper characterization. There has been no model that explains the complex statistics either. Here we address the problem of statistical characterization of the acoustic emission signals associated with the three types of the Portevin-Le Chatelier bands. Following our recently proposed general framework for calculating acoustic emission, we set up a wave equation for the elastic degrees of freedom with a plastic strain rate as a source term. The energy dissipated during acoustic emission is represented by the Rayleigh-dissipation function. Using the plastic strain rate obtained from the Ananthakrishna model for the Portevin-Le Chatelier effect, we compute the acoustic emission signals associated with the three Portevin-Le Chatelier bands and the Lüders-like band. The so-calculated acoustic emission signals are used for further statistical characterization. Our results show that the model predicts power-law statistics for all the acoustic emission signals associated with the three types of Portevin-Le Chatelier bands with the exponent values increasing with increasing strain rate. 
The calculated multifractal spectra corresponding to the acoustic emission signals associated with the three band types have a maximum spread for the type C bands, with the spread decreasing for types B and A. We further show that the acoustic emission signals associated with the Lüders-like band also exhibit a power-law distribution and multifractality.
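
    The power-law characterization step can be sketched with the standard continuous maximum-likelihood estimator for the exponent, alpha = 1 + n / sum(ln(E_i / E_min)). The synthetic sample below is illustrative; the paper's exponents come from model-generated acoustic emission signals.

```python
import numpy as np

# Maximum-likelihood (Hill-type) fit of a power-law exponent for
# energies distributed as P(E) ~ E^-alpha above a threshold E_min.

def powerlaw_mle(E, E_min):
    """Continuous MLE: alpha = 1 + n / sum(log(E / E_min))."""
    E = E[E >= E_min]
    return 1.0 + len(E) / np.sum(np.log(E / E_min))

rng = np.random.default_rng(9)
alpha_true = 1.8
# Inverse-transform sampling: CDF F(E) = 1 - E^-(alpha-1) for E >= 1
u = rng.uniform(size=20000)
E = (1.0 - u) ** (-1.0 / (alpha_true - 1.0))

alpha_hat = powerlaw_mle(E, E_min=1.0)
# alpha_hat recovers alpha_true to within ~(alpha-1)/sqrt(n)
```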

  18. Modeling the complexity of acoustic emission during intermittent plastic deformation: Power laws and multifractal spectra.

    PubMed

    Kumar, Jagadish; Ananthakrishna, G

    2018-01-01

    Scale-invariant power-law distributions for acoustic emission signals are ubiquitous in several plastically deforming materials. However, power-law distributions for acoustic emission energies are reported in distinctly different plastically deforming situations such as hcp and fcc single and polycrystalline samples exhibiting smooth stress-strain curves and in dilute metallic alloys exhibiting discontinuous flow. This is surprising since the underlying dislocation mechanisms in these two types of deformations are very different. So far, there have been no models that predict the power-law statistics for discontinuous flow. Furthermore, the statistics of the acoustic emission signals in jerky flow is even more complex, requiring multifractal measures for a proper characterization. There has been no model that explains the complex statistics either. Here we address the problem of statistical characterization of the acoustic emission signals associated with the three types of the Portevin-Le Chatelier bands. Following our recently proposed general framework for calculating acoustic emission, we set up a wave equation for the elastic degrees of freedom with a plastic strain rate as a source term. The energy dissipated during acoustic emission is represented by the Rayleigh-dissipation function. Using the plastic strain rate obtained from the Ananthakrishna model for the Portevin-Le Chatelier effect, we compute the acoustic emission signals associated with the three Portevin-Le Chatelier bands and the Lüders-like band. The so-calculated acoustic emission signals are used for further statistical characterization. Our results show that the model predicts power-law statistics for all the acoustic emission signals associated with the three types of Portevin-Le Chatelier bands with the exponent values increasing with increasing strain rate. 
The calculated multifractal spectra corresponding to the acoustic emission signals associated with the three band types have a maximum spread for the type C bands, with the spread decreasing for types B and A. We further show that the acoustic emission signals associated with the Lüders-like band also exhibit a power-law distribution and multifractality.

  19. Application of pattern recognition techniques to acousto-ultrasonic testing of Kevlar composite panels

    NASA Astrophysics Data System (ADS)

    Hinton, Yolanda L.

    An acousto-ultrasonic evaluation of panels fabricated from woven Kevlar and PVB/phenolic resin is being conducted. The panels were fabricated with various simulated defects. They were examined by pulsing with one acoustic emission sensor and detecting the signal with another sensor on the same side of the panel at a fixed distance. The acoustic emission signals were filtered through high (400-600 kHz), low (100-300 kHz), and wide (100-1200 kHz) bandpass filters. Acoustic emission signal parameters, including amplitude, counts, rise time, duration, 'energy', rms, and counts to peak, were recorded. These were statistically analyzed to determine which of the AE parameters best characterize the simulated defects. The wideband filtered acoustic emission signal was also digitized and recorded for further processing. Seventy-one features of the signals in both the time and frequency domains were calculated and compared to determine which subset of these features uniquely characterize the defects in the panels. The objective of the program is to develop a database of AE signal parameters and features to be used in pattern recognition as an inspection tool for components fabricated from these materials.

  20. mvp - an open-source preprocessor for cleaning duplicate records and missing values in mass spectrometry data.

    PubMed

    Lee, Geunho; Lee, Hyun Beom; Jung, Byung Hwa; Nam, Hojung

    2017-07-01

    Mass spectrometry (MS) data are used to analyze biological phenomena based on chemical species. However, these data often contain unexpected duplicate records and missing values due to technical or biological factors. These 'dirty data' problems increase the difficulty of performing MS analyses because they lead to performance degradation when statistical or machine-learning tests are applied to the data. Thus, we have developed missing values preprocessor (mvp), an open-source software for preprocessing data that might include duplicate records and missing values. mvp uses the property of MS data in which identical chemical species present the same or similar values for key identifiers, such as the mass-to-charge ratio and intensity signal, and forms cliques via graph theory to process dirty data. We evaluated the validity of the mvp process via quantitative and qualitative analyses and compared the results from a statistical test that analyzed the original and mvp-applied data. This analysis showed that using mvp reduces problems associated with duplicate records and missing values. We also examined the effects of using unprocessed data in statistical tests and examined the improved statistical test results obtained with data preprocessed using mvp.
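The clique-style grouping of duplicate records the abstract describes can be sketched with a plain union-find over records whose key identifier (here, the mass-to-charge ratio) agrees within a tolerance. The record layout, tolerance, and merge-by-mean rule below are illustrative assumptions, not mvp's actual interface, and connected components are a simplification of true cliques:

```python
def find(parent, i):
    """Union-find root lookup with path halving."""
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

def merge_duplicates(records, mz_tol=0.01):
    """Group records (mz, intensity) whose mass-to-charge ratios agree
    within mz_tol, then collapse each group to a single record carrying
    the mean m/z and mean intensity."""
    n = len(records)
    parent = list(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            if abs(records[i][0] - records[j][0]) <= mz_tol:
                parent[find(parent, i)] = find(parent, j)
    groups = {}
    for i in range(n):
        groups.setdefault(find(parent, i), []).append(records[i])
    return sorted((sum(r[0] for r in g) / len(g), sum(r[1] for r in g) / len(g))
                  for g in groups.values())
```

For instance, `merge_duplicates([(100.000, 10.0), (100.005, 12.0), (200.0, 5.0)])` collapses the two near-identical m/z records into one averaged entry and leaves the third untouched.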

  1. Maximum a posteriori decoder for digital communications

    NASA Technical Reports Server (NTRS)

    Altes, Richard A. (Inventor)

    1997-01-01

    A system and method for decoding by identification of the most likely phase coded signal corresponding to received data. The present invention has particular application to communication with signals that experience spurious random phase perturbations. The generalized estimator-correlator uses a maximum a posteriori (MAP) estimator to generate phase estimates for correlation with incoming data samples and for correlation with mean phases indicative of unique hypothesized signals. The result is a MAP likelihood statistic for each hypothesized transmission, wherein the highest value statistic identifies the transmitted signal.

  2. DOA-informed source extraction in the presence of competing talkers and background noise

    NASA Astrophysics Data System (ADS)

    Taseska, Maja; Habets, Emanuël A. P.

    2017-12-01

A desired speech signal in hands-free communication systems is often degraded by noise and interfering speech. Even though the number and locations of the interferers are often unknown in practice, it is justified to assume in certain applications that the direction-of-arrival (DOA) of the desired source is approximately known. Using the known DOA, fixed spatial filters such as the delay-and-sum beamformer can be steered to extract the desired source. However, it is well-known that fixed data-independent spatial filters do not provide sufficient reduction of directional interferers. Instead, the DOA information can be used to estimate the statistics of the desired and the undesired signals and to compute optimal data-dependent spatial filters. One way the DOA is exploited for optimal spatial filtering in the literature is by designing DOA-based narrowband detectors to determine whether a desired or an undesired signal is dominant at each time-frequency (TF) bin. Subsequently, the statistics of the desired and the undesired signals can be estimated during the TF bins where the respective signal is dominant. In a similar manner, a Gaussian signal model-based detector which does not incorporate DOA information has been used in scenarios where the undesired signal consists of stationary background noise. However, when the undesired signal is non-stationary, resulting for example from interfering speakers, such a Gaussian signal model-based detector is unable to robustly distinguish desired from undesired speech. To this end, we propose a DOA model-based detector to determine the dominant source at each TF bin and estimate the desired and undesired signal statistics. We demonstrate that data-dependent spatial filters that use the statistics estimated by the proposed framework achieve very good undesired signal reduction, even when using only three microphones.
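A minimal sketch of a DOA-based narrowband detector of the kind the abstract describes, for a two-microphone far-field model: a TF bin is marked desired-dominated when the observed inter-microphone phase difference is close to the phase expected for the known DOA. The microphone spacing, tolerance, and masking logic are illustrative assumptions, not the authors' detector:

```python
import numpy as np

def doa_tf_mask(X1, X2, freqs, doa_deg, mic_dist=0.05, c=343.0, tol=0.5):
    """Mark time-frequency bins where the desired source is dominant by
    comparing the observed inter-microphone phase difference against the
    phase expected for the known DOA (two-mic, far-field model).
    X1, X2: STFT matrices of shape (n_freqs, n_frames)."""
    delay = mic_dist * np.cos(np.deg2rad(doa_deg)) / c    # expected TDOA (s)
    expected = 2 * np.pi * freqs[:, None] * delay         # expected phase/bin
    observed = np.angle(X1 * np.conj(X2))
    err = np.angle(np.exp(1j * (observed - expected)))    # wrap to [-pi, pi]
    return np.abs(err) < tol                              # True = desired-dominated
```

Bins where the mask is True would feed the desired-signal statistics; the remaining bins feed the undesired-signal statistics used by the data-dependent spatial filter.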

  3. Robust statistical methods for impulse noise suppressing of spread spectrum induced polarization data, with application to a mine site, Gansu province, China

    NASA Astrophysics Data System (ADS)

    Liu, Weiqiang; Chen, Rujun; Cai, Hongzhu; Luo, Weibin

    2016-12-01

In this paper, we investigated the robust processing of noisy spread spectrum induced polarization (SSIP) data. SSIP is a new frequency-domain induced polarization method that transmits a pseudo-random m-sequence as the source current, where the m-sequence is a broadband signal. The potential information at multiple frequencies can be obtained through measurement. Removing the noise is a crucial problem in SSIP data processing: if the ordinary mean stack and digital filters cannot reduce the impulse noise effectively, its impact will remain in the complex resistivity spectrum and affect the interpretation of profile anomalies. We applied a robust statistical method to SSIP data processing. Robust least-squares regression is used to fit and remove the linear trend from the original data before stacking. A robust M estimate is used to stack the data of all periods. A robust smoothing filter is used to suppress the residual noise after stacking. For the robust statistical scheme, the most appropriate influence function and iterative algorithm are chosen by testing on simulated data to suppress the influence of outliers. We tested the benefits of robust SSIP data processing using examples of SSIP data recorded at a test site beside a mine in Gansu province, China.
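The robust M-estimate stack can be illustrated with iteratively reweighted least squares using Huber weights and a MAD scale estimate; these are standard textbook choices, not necessarily the influence function the authors selected:

```python
import numpy as np

def huber_stack(periods, k=1.345, n_iter=20):
    """Stack repeated measurement periods with an iteratively reweighted
    Huber M-estimate instead of a plain mean, so that impulse-noise
    outliers in any single period are down-weighted rather than averaged
    in.  periods: array of shape (n_periods, n_samples)."""
    x = np.asarray(periods, float)
    est = np.median(x, axis=0)                            # robust starting point
    for _ in range(n_iter):
        r = x - est
        s = 1.4826 * np.median(np.abs(r), axis=0) + 1e-12  # MAD scale per sample
        u = np.abs(r) / s
        w = np.where(u <= k, 1.0, k / u)                  # Huber weights
        est = (w * x).sum(axis=0) / w.sum(axis=0)
    return est
```

Against a single large impulse in one period, this estimate stays near the true waveform where a plain mean stack would be pulled far off at the contaminated sample.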

  4. How input fluctuations reshape the dynamics of a biological switching system

    NASA Astrophysics Data System (ADS)

    Hu, Bo; Kessler, David A.; Rappel, Wouter-Jan; Levine, Herbert

    2012-12-01

An important task in quantitative biology is to understand the role of stochasticity in biochemical regulation. Here, as an extension of our recent work [Phys. Rev. Lett. 107, 148101 (2011)], we study how input fluctuations affect the stochastic dynamics of a simple biological switch. In our model, the on transition rate of the switch is directly regulated by a noisy input signal, which is described as a non-negative mean-reverting diffusion process. This continuous process can be a good approximation of the discrete birth-death process and is much more analytically tractable. Within this setup, we apply the Feynman-Kac theorem to investigate the statistical features of the output switching dynamics. Consistent with our previous findings, the input noise is found to effectively suppress the input-dependent transitions. We show analytically that this effect becomes significant when the input signal fluctuates greatly in amplitude and reverts slowly to its mean.
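A non-negative mean-reverting diffusion of the kind the abstract assumes can be simulated with a simple Euler-Maruyama scheme for a square-root (CIR-type) process. The parameter values and the clipping at zero are illustrative assumptions, not the authors' exact model:

```python
import numpy as np

def simulate_input(kappa=1.0, mean=2.0, sigma=0.5, dt=1e-3, n=20000, seed=1):
    """Euler-Maruyama path of a non-negative mean-reverting (square-root,
    CIR-type) diffusion:  dx = kappa*(mean - x) dt + sigma*sqrt(x) dW.
    The state is clipped at zero, which keeps the path non-negative."""
    rng = np.random.default_rng(seed)
    x = np.empty(n)
    x[0] = mean
    for i in range(1, n):
        dw = rng.normal(0.0, np.sqrt(dt))
        drift = kappa * (mean - x[i - 1]) * dt
        x[i] = max(x[i - 1] + drift + sigma * np.sqrt(max(x[i - 1], 0.0)) * dw, 0.0)
    return x
```

The on-rate of the switch would then be taken proportional to x(t); slow reversion (small kappa) and large sigma correspond to the regime where the paper finds the noise-induced suppression strongest.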

  5. Statistical properties of color-signal spaces.

    PubMed

    Lenz, Reiner; Bui, Thanh Hai

    2005-05-01

In applications of principal component analysis (PCA) it has often been observed that the eigenvector with the largest eigenvalue has only nonnegative entries when the vectors of the underlying stochastic process have only nonnegative values. This has been used to show that the coordinate vectors in PCA are all located in a cone. We prove that the nonnegativity of the first eigenvector follows from Perron-Frobenius (and Krein-Rutman) theory. Experiments also show that for stochastic processes with nonnegative signals the mean vector is often very similar to the first eigenvector. This is not true in general, but we first give a heuristic explanation of why we can expect such a similarity. We then derive a connection between the dominance of the first eigenvalue and the similarity between the mean and the first eigenvector and show how to check the relative size of the first eigenvalue without actually computing it. In the last part of the paper we discuss the implications of the theoretical results for multispectral color processing.
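The Perron-Frobenius argument is easy to check numerically: for non-negative signal vectors, the uncentered second-moment matrix is entrywise non-negative, so its dominant eigenvector can be chosen entrywise non-negative, and for such data it typically lies close to the mean vector, as the abstract observes:

```python
import numpy as np

def first_eigenvector(signals):
    """Dominant eigenvector of the (uncentered) second-moment matrix of a
    set of non-negative signal vectors.  The matrix is entrywise
    non-negative, so by Perron-Frobenius its dominant eigenvector can be
    chosen entrywise non-negative."""
    C = signals.T @ signals / len(signals)
    vals, vecs = np.linalg.eigh(C)       # eigenvalues in ascending order
    v = vecs[:, -1]
    return v if v.sum() >= 0 else -v     # fix the arbitrary sign
```

For uniformly random non-negative spectra, the returned vector is entrywise non-negative and its cosine similarity with the normalized mean vector is close to one.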

  6. Statistical properties of color-signal spaces

    NASA Astrophysics Data System (ADS)

    Lenz, Reiner; Hai Bui, Thanh

    2005-05-01

In applications of principal component analysis (PCA) it has often been observed that the eigenvector with the largest eigenvalue has only nonnegative entries when the vectors of the underlying stochastic process have only nonnegative values. This has been used to show that the coordinate vectors in PCA are all located in a cone. We prove that the nonnegativity of the first eigenvector follows from Perron-Frobenius (and Krein-Rutman) theory. Experiments also show that for stochastic processes with nonnegative signals the mean vector is often very similar to the first eigenvector. This is not true in general, but we first give a heuristic explanation of why we can expect such a similarity. We then derive a connection between the dominance of the first eigenvalue and the similarity between the mean and the first eigenvector and show how to check the relative size of the first eigenvalue without actually computing it. In the last part of the paper we discuss the implications of the theoretical results for multispectral color processing.

  7. A Plastic Temporal Brain Code for Conscious State Generation

    PubMed Central

    Dresp-Langley, Birgitta; Durup, Jean

    2009-01-01

    Consciousness is known to be limited in processing capacity and often described in terms of a unique processing stream across a single dimension: time. In this paper, we discuss a purely temporal pattern code, functionally decoupled from spatial signals, for conscious state generation in the brain. Arguments in favour of such a code include Dehaene et al.'s long-distance reverberation postulate, Ramachandran's remapping hypothesis, evidence for a temporal coherence index and coincidence detectors, and Grossberg's Adaptive Resonance Theory. A time-bin resonance model is developed, where temporal signatures of conscious states are generated on the basis of signal reverberation across large distances in highly plastic neural circuits. The temporal signatures are delivered by neural activity patterns which, beyond a certain statistical threshold, activate, maintain, and terminate a conscious brain state like a bar code would activate, maintain, or inactivate the electronic locks of a safe. Such temporal resonance would reflect a higher level of neural processing, independent from sensorial or perceptual brain mechanisms. PMID:19644552

  8. Introspective Minds: Using ALE Meta-Analyses to Study Commonalities in the Neural Correlates of Emotional Processing, Social & Unconstrained Cognition

    PubMed Central

    Schilbach, Leonhard; Bzdok, Danilo; Timmermans, Bert; Fox, Peter T.; Laird, Angela R.; Vogeley, Kai; Eickhoff, Simon B.

    2012-01-01

    Previous research suggests overlap between brain regions that show task-induced deactivations and those activated during the performance of social-cognitive tasks. Here, we present results of quantitative meta-analyses of neuroimaging studies, which confirm a statistical convergence in the neural correlates of social and resting state cognition. Based on the idea that both social and unconstrained cognition might be characterized by introspective processes, which are also thought to be highly relevant for emotional experiences, a third meta-analysis was performed investigating studies on emotional processing. By using conjunction analyses across all three sets of studies, we can demonstrate significant overlap of task-related signal change in dorso-medial prefrontal and medial parietal cortex, brain regions that have, indeed, recently been linked to introspective abilities. Our findings, therefore, provide evidence for the existence of a core neural network, which shows task-related signal change during socio-emotional tasks and during resting states. PMID:22319593

  9. Statistical physics inspired energy-efficient coded-modulation for optical communications.

    PubMed

    Djordjevic, Ivan B; Xu, Lei; Wang, Ting

    2012-04-15

    Because Shannon's entropy can be obtained by Stirling's approximation of thermodynamics entropy, the statistical physics energy minimization methods are directly applicable to the signal constellation design. We demonstrate that statistical physics inspired energy-efficient (EE) signal constellation designs, in combination with large-girth low-density parity-check (LDPC) codes, significantly outperform conventional LDPC-coded polarization-division multiplexed quadrature amplitude modulation schemes. We also describe an EE signal constellation design algorithm. Finally, we propose the discrete-time implementation of D-dimensional transceiver and corresponding EE polarization-division multiplexed system. © 2012 Optical Society of America

  10. Modeling the Development of Audiovisual Cue Integration in Speech Perception

    PubMed Central

    Getz, Laura M.; Nordeen, Elke R.; Vrabic, Sarah C.; Toscano, Joseph C.

    2017-01-01

    Adult speech perception is generally enhanced when information is provided from multiple modalities. In contrast, infants do not appear to benefit from combining auditory and visual speech information early in development. This is true despite the fact that both modalities are important to speech comprehension even at early stages of language acquisition. How then do listeners learn how to process auditory and visual information as part of a unified signal? In the auditory domain, statistical learning processes provide an excellent mechanism for acquiring phonological categories. Is this also true for the more complex problem of acquiring audiovisual correspondences, which require the learner to integrate information from multiple modalities? In this paper, we present simulations using Gaussian mixture models (GMMs) that learn cue weights and combine cues on the basis of their distributional statistics. First, we simulate the developmental process of acquiring phonological categories from auditory and visual cues, asking whether simple statistical learning approaches are sufficient for learning multi-modal representations. Second, we use this time course information to explain audiovisual speech perception in adult perceivers, including cases where auditory and visual input are mismatched. Overall, we find that domain-general statistical learning techniques allow us to model the developmental trajectory of audiovisual cue integration in speech, and in turn, allow us to better understand the mechanisms that give rise to unified percepts based on multiple cues. PMID:28335558
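The distributional-learning idea can be sketched with a two-component EM fit to two-dimensional (auditory, visual) cue data. This toy diagonal-covariance GMM illustrates the mechanism of learning categories from cue statistics, not the authors' actual model, cue sets, or weighting scheme:

```python
import numpy as np

def em_gmm(X, n_iter=60):
    """Two-component, diagonal-covariance Gaussian mixture fitted by EM to
    cue data X of shape (n_samples, n_cues), e.g. columns = (auditory cue,
    visual cue).  Means are initialised from per-dimension quartiles."""
    n, d = X.shape
    mu = np.quantile(X, [0.25, 0.75], axis=0)         # (2, d) initial means
    var = np.ones((2, d))
    mix = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibilities under each diagonal Gaussian
        logp = np.stack([
            -0.5 * ((X - mu[k])**2 / var[k] + np.log(2 * np.pi * var[k])).sum(1)
            + np.log(mix[k]) for k in range(2)], axis=1)
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: weighted means, variances, mixing proportions
        nk = r.sum(axis=0)
        mu = (r.T @ X) / nk[:, None]
        var = (r.T @ X**2) / nk[:, None] - mu**2 + 1e-6
        mix = nk / n
    return mu, var, mix
```

Fitted to synthetic data with two phonological categories whose auditory and visual cues cluster at distinct values, the EM procedure recovers the category means from the distributional statistics alone.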

  11. Modeling the Development of Audiovisual Cue Integration in Speech Perception.

    PubMed

    Getz, Laura M; Nordeen, Elke R; Vrabic, Sarah C; Toscano, Joseph C

    2017-03-21

    Adult speech perception is generally enhanced when information is provided from multiple modalities. In contrast, infants do not appear to benefit from combining auditory and visual speech information early in development. This is true despite the fact that both modalities are important to speech comprehension even at early stages of language acquisition. How then do listeners learn how to process auditory and visual information as part of a unified signal? In the auditory domain, statistical learning processes provide an excellent mechanism for acquiring phonological categories. Is this also true for the more complex problem of acquiring audiovisual correspondences, which require the learner to integrate information from multiple modalities? In this paper, we present simulations using Gaussian mixture models (GMMs) that learn cue weights and combine cues on the basis of their distributional statistics. First, we simulate the developmental process of acquiring phonological categories from auditory and visual cues, asking whether simple statistical learning approaches are sufficient for learning multi-modal representations. Second, we use this time course information to explain audiovisual speech perception in adult perceivers, including cases where auditory and visual input are mismatched. Overall, we find that domain-general statistical learning techniques allow us to model the developmental trajectory of audiovisual cue integration in speech, and in turn, allow us to better understand the mechanisms that give rise to unified percepts based on multiple cues.

  12. Wavelet multiresolution complex network for decoding brain fatigued behavior from P300 signals

    NASA Astrophysics Data System (ADS)

    Gao, Zhong-Ke; Wang, Zi-Bo; Yang, Yu-Xuan; Li, Shan; Dang, Wei-Dong; Mao, Xiao-Qian

    2018-09-01

Brain-computer interface (BCI) enables users to interact with the environment without relying on neural pathways and muscles. P300 based BCI systems have been extensively used to achieve human-machine interaction. However, the appearance of fatigue symptoms during the operation process leads to a decline in the classification accuracy of P300. Characterizing the brain cognitive process underlying normal and fatigue conditions constitutes a problem of vital importance in the field of brain science. In this paper we propose a novel wavelet-decomposition-based complex network method to efficiently analyze the P300 signals recorded in an image stimulus test based on the classical 'oddball' paradigm. Initially, multichannel EEG signals are decomposed into wavelet coefficient series. Then we construct a complex network by treating electrodes as nodes and determining the connections according to the 2-norm distances between wavelet coefficient series. The analysis of topological structure and statistical indices indicates that the properties of the brain network demonstrate significant distinctions between normal status and fatigue status. More specifically, the brain network reconfiguration in response to the cognitive task in fatigue status is reflected as an enhancement of the small-worldness.
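The construction can be sketched minimally: decompose each channel (here a single-level Haar detail series stands in for the paper's multi-level decomposition), compute pairwise 2-norm distances, and threshold to obtain an adjacency matrix. The mean-distance threshold is an illustrative assumption:

```python
import numpy as np

def haar_detail(x):
    """Single-level Haar wavelet detail coefficients of a 1-D signal."""
    x = np.asarray(x, float)
    x = x[:len(x) // 2 * 2].reshape(-1, 2)
    return (x[:, 0] - x[:, 1]) / np.sqrt(2)

def channel_network(eeg):
    """Build an electrode network: nodes are channels, and two channels are
    connected when the 2-norm distance between their Haar detail series is
    below the mean pairwise distance."""
    coeffs = np.array([haar_detail(ch) for ch in eeg])
    n = len(coeffs)
    D = np.linalg.norm(coeffs[:, None, :] - coeffs[None, :, :], axis=2)
    thr = D[np.triu_indices(n, 1)].mean()
    A = (D < thr).astype(int)
    np.fill_diagonal(A, 0)
    return A
```

Graph-theoretic indices such as clustering coefficient and characteristic path length (and hence small-worldness) would then be computed on the resulting adjacency matrix.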

  13. Qualitative modeling of the decision-making process using electrooculography.

    PubMed

    Zargari Marandi, Ramtin; Sabzpoushan, S H

    2015-12-01

A novel method based on electrooculography (EOG) has been introduced in this work to study the decision-making process. An experiment was designed and implemented wherein subjects were asked to choose between two items from the same category that were presented within a limited time. The EOG and voice signals of the subjects were recorded during the experiment. A calibration task was performed to map the EOG signals to their corresponding gaze positions on the screen by using an artificial neural network. To analyze the data, 16 parameters were extracted from the response time and EOG signals of the subjects. Evaluation and comparison of the parameters, together with subjects' choices, revealed functional information. On the basis of this information, subjects switched their eye gazes between items about three times on average. We also found, according to statistical hypothesis testing (a t test: t(10) = 71.62, SE = 1.25, p < .0001), that the correspondence rate of a subject's gaze at the moment of selection with the selected item was significant. Ultimately, on the basis of these results, we propose a qualitative choice model for the decision-making task.

  14. Scaling Behavior in Mitochondrial Redox Fluctuations

    PubMed Central

    Ramanujan, V. Krishnan; Biener, Gabriel; Herman, Brian A.

    2006-01-01

    Scale-invariant long-range correlations have been reported in fluctuations of time-series signals originating from diverse processes such as heart beat dynamics, earthquakes, and stock market data. The common denominator of these apparently different processes is a highly nonlinear dynamics with competing forces and distinct feedback species. We report for the first time an experimental evidence for scaling behavior in NAD(P)H signal fluctuations in isolated mitochondria and intact cells isolated from the liver of a young (5-month-old) mouse. Time-series data were collected by two-photon imaging of mitochondrial NAD(P)H fluorescence and signal fluctuations were quantitatively analyzed for statistical correlations by detrended fluctuation analysis and spectral power analysis. Redox [NAD(P)H / NAD(P)+] fluctuations in isolated mitochondria and intact liver cells were found to display nonrandom, long-range correlations. These correlations are interpreted as arising due to the regulatory dynamics operative in Krebs' cycle enzyme network and electron transport chain in the mitochondria. This finding may provide a novel basis for understanding similar regulatory networks that govern the nonequilibrium properties of living cells. PMID:16565066
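Detrended fluctuation analysis, the method used to quantify the long-range correlations above, is standard and can be sketched compactly; the scale set below is an arbitrary choice:

```python
import numpy as np

def dfa(signal, scales=(8, 16, 32, 64, 128)):
    """Detrended fluctuation analysis: integrate the mean-removed series,
    linearly detrend it in non-overlapping windows at each scale, and fit
    the slope of log F(s) versus log s.  alpha ~ 0.5 indicates
    uncorrelated noise; alpha > 0.5 indicates long-range correlations."""
    y = np.cumsum(signal - np.mean(signal))       # integrated profile
    F = []
    for s in scales:
        n_win = len(y) // s
        segs = y[:n_win * s].reshape(n_win, s)
        t = np.arange(s)
        ms = [np.mean((seg - np.polyval(np.polyfit(t, seg, 1), t))**2)
              for seg in segs]
        F.append(np.sqrt(np.mean(ms)))
    alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
    return alpha
```

As a sanity check, white noise yields a scaling exponent near 0.5 while an integrated (Brownian) series yields an exponent well above 1, which is the contrast used to call the redox fluctuations "nonrandom".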

  15. Quantum neural network-based EEG filtering for a brain-computer interface.

    PubMed

    Gandhi, Vaibhav; Prasad, Girijesh; Coyle, Damien; Behera, Laxmidhar; McGinnity, Thomas Martin

    2014-02-01

A novel neural information processing architecture inspired by quantum mechanics and incorporating the well-known Schrödinger wave equation is proposed in this paper. The proposed architecture referred to as recurrent quantum neural network (RQNN) can characterize a nonstationary stochastic signal as time-varying wave packets. A robust unsupervised learning algorithm enables the RQNN to effectively capture the statistical behavior of the input signal and facilitates the estimation of signal embedded in noise with unknown characteristics. The results from a number of benchmark tests show that simple signals such as dc, staircase dc, and sinusoidal signals embedded within high noise can be accurately filtered and particle swarm optimization can be employed to select model parameters. The RQNN filtering procedure is applied in a two-class motor imagery-based brain-computer interface where the objective was to filter electroencephalogram (EEG) signals before feature extraction and classification to increase signal separability. A two-step inner-outer fivefold cross-validation approach is utilized to select the algorithm parameters subject-specifically for nine subjects. It is shown that the subject-specific RQNN EEG filtering significantly improves brain-computer interface performance compared to using only the raw EEG or Savitzky-Golay filtered EEG across multiple sessions.

  16. The Juvenile Hafnium Isotope Signal as a Record of Supercontinent Cycles.

    PubMed

    Gardiner, Nicholas J; Kirkland, Christopher L; Van Kranendonk, Martin J

    2016-12-07

    Hf isotope ratios measured in igneous zircon are controlled by magmatic source, which may be linked to tectonic setting. Over the 200-500 Myr periodicity of the supercontinent cycle - the principal geological phenomenon controlling prevailing global tectonic style - juvenile Hf signals, i.e. most radiogenic, are typically measured in zircon from granites formed in arc settings (crustal growth), and evolved zircon Hf signals in granites formed in continent-collision settings (crustal reworking). Interrogations of Hf datasets for excursions related to Earth events commonly use the median value, however this may be equivocal due to magma mixing. The most juvenile part of the Hf signal is less influenced by crustal in-mixing, and arguably a more sensitive archive of Earth's geodynamic state. We analyze the global Hf dataset for this juvenile signal, statistically correlating supercontinent amalgamation intervals with evolved Hf episodes, and breakup leading to re-assembly with juvenile Hf episodes. The juvenile Hf signal is more sensitive to Pangaea and Rodinia assembly, its amplitude increasing with successive cycles to a maximum with Gondwana assembly which may reflect enhanced subduction-erosion. We demonstrate that the juvenile Hf signal carries important information on prevailing global magmatic style, and thus tectonic processes.

  17. The Juvenile Hafnium Isotope Signal as a Record of Supercontinent Cycles

    NASA Astrophysics Data System (ADS)

    Gardiner, Nicholas J.; Kirkland, Christopher L.; van Kranendonk, Martin J.

    2016-12-01

    Hf isotope ratios measured in igneous zircon are controlled by magmatic source, which may be linked to tectonic setting. Over the 200-500 Myr periodicity of the supercontinent cycle - the principal geological phenomenon controlling prevailing global tectonic style - juvenile Hf signals, i.e. most radiogenic, are typically measured in zircon from granites formed in arc settings (crustal growth), and evolved zircon Hf signals in granites formed in continent-collision settings (crustal reworking). Interrogations of Hf datasets for excursions related to Earth events commonly use the median value, however this may be equivocal due to magma mixing. The most juvenile part of the Hf signal is less influenced by crustal in-mixing, and arguably a more sensitive archive of Earth’s geodynamic state. We analyze the global Hf dataset for this juvenile signal, statistically correlating supercontinent amalgamation intervals with evolved Hf episodes, and breakup leading to re-assembly with juvenile Hf episodes. The juvenile Hf signal is more sensitive to Pangaea and Rodinia assembly, its amplitude increasing with successive cycles to a maximum with Gondwana assembly which may reflect enhanced subduction-erosion. We demonstrate that the juvenile Hf signal carries important information on prevailing global magmatic style, and thus tectonic processes.

  18. Cognitive measure on different profiles.

    PubMed

    Spindola, Marilda; Carra, Giovani; Balbinot, Alexandre; Zaro, Milton A

    2010-01-01

Based on neurology and cognitive science, many studies have been developed to understand the human mental model and how human cognition works, especially the learning processes that involve complex contents and spatial-logical reasoning. Event-Related Potential (ERP) is a basic and non-invasive method of electrophysiological investigation. It can be used to assess aspects of human cognitive processing, since changes in the rhythm of the brain's frequency bands indicate some type of processing or neuronal behavior. This paper focuses on the ERP technique to help understand the cognitive pathway in subjects from different areas of knowledge when they are exposed to an external visual stimulus. In the experiment we used 2D and 3D visual stimuli in the same picture. The signals were captured using a 10-channel electroencephalogram (EEG) system developed for this project and interfaced to an analog-to-digital converter (ADC) board with the LabVIEW system (National Instruments). The research was performed using the design of experiments (DOE) technique. Signal processing (mathematical and statistical techniques) was then performed, showing the relationship between cognitive pathways within and between groups.

  19. Adaptive regularization network based neural modeling paradigm for nonlinear adaptive estimation of cerebral evoked potentials.

    PubMed

    Zhang, Jian-Hua; Böhme, Johann F

    2007-11-01

    In this paper we report an adaptive regularization network (ARN) approach to realizing fast blind separation of cerebral evoked potentials (EPs) from background electroencephalogram (EEG) activity with no need to make any explicit assumption on the statistical (or deterministic) signal model. The ARNs are proposed to construct nonlinear EEG and EP signal models. A novel adaptive regularization training (ART) algorithm is proposed to improve the generalization performance of the ARN. Two adaptive neural modeling methods based on the ARN are developed and their implementation and performance analysis are also presented. The computer experiments using simulated and measured visual evoked potential (VEP) data have shown that the proposed ARN modeling paradigm yields computationally efficient and more accurate VEP signal estimation owing to its intrinsic model-free and nonlinear processing characteristics.

  20. Processing RoxAnn sonar data to improve its categorization of lake bed surficial sediments

    USGS Publications Warehouse

    Cholwek, Gary; Bonde, John; Li, Xing; Richards, Carl; Yin, Karen

    2000-01-01

To categorize spawning and nursery habitat for lake trout in Minnesota's near shore waters of Lake Superior, data was collected with a single-beam echo sounder coupled with a RoxAnn bottom classification sensor. Test areas representative of different bottom surficial substrates were sampled. The collected data consisted of acoustic signals which showed both depth and substrate type. The location of the signals was tagged in real-time with a DGPS. All data was imported into a GIS database. To better interpret the output signal from the RoxAnn, several pattern classifiers were developed by multivariate statistical methods. From the data, a detailed and accurate map of lake bed bathymetry and surficial substrate types was produced. This map will be of great value to fishery and other natural resource managers.

  1. Vibration and Noise in Magnetic Resonance Imaging of the Vocal Tract: Differences between Whole-Body and Open-Air Devices.

    PubMed

    Přibil, Jiří; Přibilová, Anna; Frollo, Ivan

    2018-04-05

This article compares open-air and whole-body magnetic resonance imaging (MRI) equipment working with a weak magnetic field as regards the methods of its generation, the spectral properties of the mechanical vibration and acoustic noise produced by gradient coils during the scanning process, and the measured noise intensity. These devices are used for non-invasive MRI reconstruction of the human vocal tract during phonation with simultaneous speech recording. In this case, the vibration and noise have a negative influence on the quality of the speech signal. Two basic measurement experiments were performed within the paper: mapping sound pressure levels in the MRI device vicinity and picking up vibration and noise signals in the MRI scanning area. Spectral characteristics of these signals are then analyzed statistically and compared visually and numerically.

  2. Space Weather Effects on Mid-Latitude Railways: a Statistical Study of Anomalies observed in the Operation of Signaling and Train Control Equipment on the East-Siberian Railway

    NASA Astrophysics Data System (ADS)

    Kasinskii, V. V.; Ptitsyna, N. G.; Lyahov, N. N.; Dorman, L. I.; Villoresi, G.; Iucci, N.

The end result of a long chain of space weather events beginning on the Sun is the induction of currents in ground-based long conductors such as power lines, pipelines, and railways. Intense geomagnetically induced currents (GIC) can hamper rail traffic by disturbing signaling and train control systems. In a few cases, induced voltages were believed to have affected signaling equipment in Sweden (Jansen et al., 2000) and in the north of Russia (Belov et al., 2005). GIC threats have been a concern for technological systems at high-latitude locations due to disturbances driven by electrojet intensifications. However, other geomagnetic storm processes, such as SSC and ring current enhancement, can also cause GIC concerns for technological systems. The objective of this report is to continue our research (Ptitsyna et al., 2005) on the possible influence of geomagnetic storms on mid-latitude railways and to perform a statistical study in addition to case studies. This will help provide a basis for railway companies to evaluate the risk of disruption to signaling and train control equipment and to devise engineering solutions. In the present report we analyzed anomalies in the operation of automatic signaling and train control equipment that occurred in 2004-2005 on the East-Siberian Railway, located at mid-latitudes (latitudes 51N-56N, longitudes 96E-114E). The anomalies consist mainly of unstable functioning and false operations in traffic automatic control systems (rail chain switches, locomotive control devices, etc.), often resulting in false engagement of railway

  3. A unified framework for physical print quality

    NASA Astrophysics Data System (ADS)

    Eid, Ahmed; Cooper, Brian; Rippetoe, Ed

    2007-01-01

    In this paper we present a unified framework for physical print quality. This framework includes a design for a testbed, testing methodologies and quality measures of physical print characteristics. An automatic belt-fed flatbed scanning system is calibrated to acquire L* data for a wide range of flat field imagery. Testing methodologies based on wavelet pre-processing and spectral/statistical analysis are designed. We apply the proposed framework to three common printing artifacts: banding, jitter, and streaking. Since these artifacts are directional, wavelet-based approaches are used to extract one artifact at a time and filter out other artifacts. Banding is characterized as a medium-to-low frequency, vertical periodic variation down the page. The same definition is applied to the jitter artifact, except that the jitter signal is characterized as a high-frequency signal above the banding frequency range. Streaking, however, is characterized as a horizontal aperiodic variation in the high-to-medium frequency range. Wavelets at different levels are applied to the input images in different directions to extract each artifact within specified frequency bands. Following wavelet reconstruction, images are converted into 1-D signals describing the artifact of concern. Accurate spectral analysis using a DFT with a Blackman-Harris windowing technique is used to extract the power (strength) of periodic signals (banding and jitter). Since streaking is an aperiodic signal, a statistical measure is used to quantify the streaking strength. Experiments on 100 print samples scanned at 600 dpi from 10 different printers show high correlation (75% to 88%) between the ranking of these samples by the proposed methodologies and experts' visual ranking.
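    The windowed-DFT step described above can be sketched as follows. This is a minimal stdlib-only illustration, not the authors' implementation; the standard 4-term Blackman-Harris coefficients and the synthetic 8-cycle "banding" profile are assumptions made for the example.

```python
import cmath
import math

def blackman_harris(N):
    # Standard 4-term Blackman-Harris window coefficients
    a = (0.35875, 0.48829, 0.14128, 0.01168)
    return [a[0]
            - a[1] * math.cos(2 * math.pi * n / (N - 1))
            + a[2] * math.cos(4 * math.pi * n / (N - 1))
            - a[3] * math.cos(6 * math.pi * n / (N - 1))
            for n in range(N)]

def windowed_power_spectrum(x):
    # DFT of the windowed 1-D profile; per-bin power up to Nyquist
    N = len(x)
    w = blackman_harris(N)
    xw = [xi * wi for xi, wi in zip(x, w)]
    return [abs(sum(xw[n] * cmath.exp(-2j * math.pi * k * n / N)
                    for n in range(N))) ** 2
            for k in range(N // 2)]

# Synthetic 1-D "banding" profile: a periodic variation at 8 cycles per page
N = 256
profile = [math.sin(2 * math.pi * 8 * n / N) for n in range(N)]
spec = windowed_power_spectrum(profile)
peak_bin = max(range(len(spec)), key=spec.__getitem__)  # strongest periodicity
```

    The very low sidelobes of the Blackman-Harris window are what make nearby weak periodic artifacts distinguishable from leakage of a strong one.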

  4. Advanced signal processing based on support vector regression for lidar applications

    NASA Astrophysics Data System (ADS)

    Gelfusa, M.; Murari, A.; Malizia, A.; Lungaroni, M.; Peluso, E.; Parracino, S.; Talebzadeh, S.; Vega, J.; Gaudio, P.

    2015-10-01

    The LIDAR technique has recently found many applications in atmospheric physics and remote sensing. One of the main issues in the deployment of systems based on LIDAR is the filtering of the backscattered signal to alleviate the problems generated by noise. Improvement in the signal-to-noise ratio is typically achieved by averaging a quite large number (of the order of hundreds) of successive laser pulses. This approach can be effective but presents significant limitations. First of all, it places great stress on the laser source, particularly in the case of systems for automatic monitoring of large areas for long periods. Secondly, this solution can become difficult to implement in applications characterised by rapid variations of the atmosphere, for example in the case of pollutant emissions, or by abrupt changes in the noise. In this contribution, a new method for the software filtering and denoising of LIDAR signals is presented. The technique is based on support vector regression. The proposed new method is insensitive to the statistics of the noise and is therefore fully general and quite robust. The developed numerical tool has been systematically compared with the most powerful techniques available, using both synthetic and experimental data. Its performance has been tested for various statistical distributions of the noise and also for other disturbances of the acquired signal such as outliers. The competitive advantages of the proposed method are fully documented. The potential of the proposed approach to widen the capability of the LIDAR technique, particularly in the detection of widespread smoke, is discussed in detail.
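    The paper uses support vector regression; as a stdlib-only stand-in that illustrates the same idea of kernel-based regression smoothing of a noisy backscatter profile, the sketch below uses kernel ridge regression with an RBF kernel. The decay curve, kernel width, and regularization value are assumptions for the example, not parameters from the paper.

```python
import math
import random

def rbf(a, b, gamma=50.0):
    return math.exp(-gamma * (a - b) ** 2)

def solve(A, b):
    # Gaussian elimination with partial pivoting (small dense systems only)
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def kernel_smooth(t, y, lam=0.1):
    # fit alpha = (K + lam*I)^-1 y, then evaluate the smoothed curve
    n = len(t)
    K = [[rbf(t[i], t[j]) + (lam if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    alpha = solve(K, y)
    return [sum(alpha[j] * rbf(ti, t[j]) for j in range(n)) for ti in t]

rng = random.Random(0)
n = 60
t = [i / n for i in range(n)]
clean = [math.exp(-3.0 * ti) for ti in t]        # idealized decaying return
noisy = [c + rng.gauss(0, 0.05) for c in clean]  # single noisy "pulse"
denoised = kernel_smooth(t, noisy)
sse_noisy = sum((a - b) ** 2 for a, b in zip(noisy, clean))
sse_denoised = sum((a - b) ** 2 for a, b in zip(denoised, clean))
```

    Like the SVR approach described in the abstract, this kind of kernel smoother makes no assumption about the noise distribution, which is what allows single-pulse denoising instead of averaging hundreds of pulses.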

  5. On adaptive robustness approach to Anti-Jam signal processing

    NASA Astrophysics Data System (ADS)

    Poberezhskiy, Y. S.; Poberezhskiy, G. Y.

    An effective approach to exploiting statistical differences between desired and jamming signals named adaptive robustness is proposed and analyzed in this paper. It combines conventional Bayesian, adaptive, and robust approaches that are complementary to each other. This combining strengthens the advantages and mitigates the drawbacks of the conventional approaches. Adaptive robustness is equally applicable to both jammers and their victim systems. The capabilities required for realization of adaptive robustness in jammers and victim systems are determined. The employment of a specific nonlinear robust algorithm for anti-jam (AJ) processing is described and analyzed. Its effectiveness in practical situations has been proven analytically and confirmed by simulation. Since adaptive robustness can be used by both sides in electronic warfare, it is more advantageous for the fastest and most intelligent side. Many results obtained and discussed in this paper are also applicable to commercial applications such as communications in unregulated or poorly regulated frequency ranges and systems with cognitive capabilities.

  6. ATS-6 - Preliminary results from the 13/18-GHz COMSAT Propagation Experiment

    NASA Technical Reports Server (NTRS)

    Hyde, G.

    1975-01-01

    The 13/18-GHz COMSAT Propagation Experiment (CPE) is reviewed, the data acquisition and processing are discussed, and samples of preliminary results are presented. The need for measurements of both hydrometeor-induced attenuation statistics and diversity effectiveness is brought out. The facilitation of the experiment - CPE dual frequency and diversity site location, the CPE ground transmit terminals, the CPE transponder on Applications Technology Satellite-6 (ATS-6), and the CPE receive and data acquisition system - is briefly examined. The on-line preprocessing of the received signal is reviewed, followed by a discussion of the off-line processing of this database to remove signal fluctuations not due to hydrometeors. Finally, samples of the results of first-level analysis of the resultant data for the 18-GHz diversity site near Boston, Mass., and for the dual frequency 13/18-GHz site near Detroit, Mich., are presented and discussed.

  7. Overarching framework for data-based modelling

    NASA Astrophysics Data System (ADS)

    Schelter, Björn; Mader, Malenka; Mader, Wolfgang; Sommerlade, Linda; Platt, Bettina; Lai, Ying-Cheng; Grebogi, Celso; Thiel, Marco

    2014-02-01

    One of the main modelling paradigms for complex physical systems is the network. When estimating the network structure from measured signals, several assumptions, such as stationarity, are typically made in the estimation process. Violating these assumptions renders standard analysis techniques fruitless. We here propose a framework to estimate the network structure from measurements of arbitrary non-linear, non-stationary, stochastic processes. To this end, we propose a rigorous mathematical theory that underlies this framework. Based on this theory, we present a highly efficient algorithm and the corresponding statistics that are immediately applicable to measured signals. We demonstrate its performance in a simulation study. In experiments on transitions between vigilance stages in rodents, we infer small network structures with complex, time-dependent interactions; this suggests biomarkers for such transitions, the key to understanding and diagnosing numerous diseases such as dementia. We argue that the suggested framework combines features that other approaches followed so far have lacked.

  8. Online Denoising Based on the Second-Order Adaptive Statistics Model.

    PubMed

    Yi, Sheng-Lun; Jin, Xue-Bo; Su, Ting-Li; Tang, Zhen-Yun; Wang, Fa-Fa; Xiang, Na; Kong, Jian-Lei

    2017-07-20

    Online denoising is motivated by real-time applications in industrial processes, where the data must be utilizable soon after they are collected. Since the noise in practical processes is usually colored, it poses quite a challenge for denoising techniques. In this paper, a novel online denoising method is proposed to process practical measurement data with colored noise; the characteristics of the colored noise are captured in the dynamic model via an adaptive parameter. The proposed method consists of two parts within a closed loop: the first estimates the system state based on the second-order adaptive statistics model, and the other updates the adaptive parameter in the model using the Yule-Walker algorithm. Specifically, the state estimation is implemented recursively via the Kalman filter, which makes online operation attainable. Experimental data from a reinforced concrete structure test were used to verify the effectiveness of the proposed method. Results show that the proposed method not only handles signals with colored noise, but also achieves a tradeoff between efficiency and accuracy.
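    The Yule-Walker step that updates the adaptive parameter can be sketched in its simplest form. The example below estimates a first-order autoregressive coefficient from sample autocorrelations; the actual paper uses a second-order model inside a Kalman loop, so this is only an illustration of the estimation principle, with the coefficient and sample size chosen arbitrarily.

```python
import random

def yule_walker_ar1(x):
    # Yule-Walker estimate of the AR(1) coefficient: phi = r(1) / r(0)
    m = sum(x) / len(x)
    xc = [v - m for v in x]
    r0 = sum(v * v for v in xc) / len(xc)
    r1 = sum(xc[i] * xc[i + 1] for i in range(len(xc) - 1)) / len(xc)
    return r1 / r0

# Synthetic colored (AR(1)) noise with a known coefficient
rng = random.Random(1)
phi_true = 0.8
x = [0.0]
for _ in range(5000):
    x.append(phi_true * x[-1] + rng.gauss(0, 1))
phi_hat = yule_walker_ar1(x)
```

    In a closed-loop scheme like the one described, `phi_hat` would be re-estimated from recent residuals and fed back into the Kalman filter's process model at each update.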

  9. Comparing diffuse optical tomography and functional magnetic resonance imaging signals during a cognitive task: pilot study

    PubMed Central

    Hernández-Martin, Estefania; Marcano, Francisco; Casanova, Oscar; Modroño, Cristian; Plata-Bello, Julio; González-Mora, Jose Luis

    2017-01-01

    Abstract. Diffuse optical tomography (DOT) measures concentration changes in both oxy- and deoxyhemoglobin, providing three-dimensional images of local brain activations. A pilot study is presented that compares DOT and functional magnetic resonance imaging (fMRI) volumes through t-maps given by canonical statistical parametric mapping (SPM) processing for both data modalities. The DOT series were processed using a method based on applying a Bayesian filter to raw DOT data to remove physiological changes, applying a minimum description length index to select the number of singular values that reduce the data dimensionality during image reconstruction, and adapting the DOT volume series to a normalized standard space. Statistical analysis is therefore performed with canonical SPM software in the same way as fMRI analysis is done, accepting DOT volumes as if they were fMRI volumes. The results show the reproducibility and robustness of the method for processing DOT series in group analyses using cognitive paradigms on the prefrontal cortex. Difficulties are considered, such as the fact that scalp–brain distances vary between subjects and that cerebral activations are difficult to reproduce because of the different strategies subjects use to solve arithmetic problems. T-images given by fMRI and DOT volume series analyzed in SPM show that, at the functional level, both DOT and fMRI detect the same areas, although DOT provides complementary information to fMRI signals about cerebral activity. PMID:28386575

  10. A Survey of Statistical Models for Reverse Engineering Gene Regulatory Networks

    PubMed Central

    Huang, Yufei; Tienda-Luna, Isabel M.; Wang, Yufeng

    2009-01-01

    Statistical models for reverse engineering gene regulatory networks are surveyed in this article. To provide readers with a system-level view of the modeling issues in this research, a graphical modeling framework is proposed. This framework serves as the scaffolding on which the review of different models can be systematically assembled. Based on the framework, we review existing models for many aspects of gene regulation; the pros and cons of each model are discussed. In addition, network inference algorithms are surveyed under the graphical modeling framework by the categories of point solutions and probabilistic solutions, and the connections and differences among the algorithms are provided. This survey has the potential to elucidate the development and future of reverse engineering gene regulatory networks (GRNs) and to bring statistical signal processing closer to the core of this research. PMID:20046885

  11. Singular Spectrum Analysis for Astronomical Time Series: Constructing a Parsimonious Hypothesis Test

    NASA Astrophysics Data System (ADS)

    Greco, G.; Kondrashov, D.; Kobayashi, S.; Ghil, M.; Branchesi, M.; Guidorzi, C.; Stratta, G.; Ciszak, M.; Marino, F.; Ortolan, A.

    We present a data-adaptive spectral method - Monte Carlo Singular Spectrum Analysis (MC-SSA) - and its modification to tackle astrophysical problems. Through numerical simulations we show the ability of MC-SSA to deal with 1/f^β power-law noise affected by photon counting statistics. Such a noise process is simulated by a first-order autoregressive, AR(1), process corrupted by intrinsic Poisson noise. In doing so, we statistically estimate a basic stochastic variation of the source and the corresponding fluctuations due to the quantum nature of light. In addition, the MC-SSA test retains its effectiveness even when a significant percentage of the signal falls below a certain level of detection, e.g., caused by the instrument sensitivity. The parsimonious approach presented here may be broadly applied, from the search for extrasolar planets to the extraction of low-intensity coherent phenomena possibly hidden in high energy transients.
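    The null model described here, an AR(1) "red noise" intensity observed through photon counting, is easy to simulate. The sketch below is a minimal stdlib-only version; the coefficient, intensity scaling, and base count rate are arbitrary choices for the example, not values from the paper.

```python
import math
import random

def poisson(rate, rng):
    # Knuth-style inversion sampler; adequate for the moderate rates used here
    L = math.exp(-rate)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def simulate_ar1_poisson(n, phi=0.9, sigma=1.0, base_rate=100.0, seed=2):
    # AR(1) red-noise source intensity observed through photon counting:
    # the stochastic source variation sets the rate, Poisson draws add the
    # fluctuations due to the quantum nature of light.
    rng = random.Random(seed)
    x, counts = 0.0, []
    for _ in range(n):
        x = phi * x + rng.gauss(0, sigma)
        rate = max(base_rate + 10.0 * x, 1e-9)
        counts.append(poisson(rate, rng))
    return counts

counts = simulate_ar1_poisson(500)
mean_count = sum(counts) / len(counts)
```

    Surrogate series like `counts` are exactly what a Monte Carlo significance test compares against: a candidate oscillation is accepted only if its variance exceeds the spread of this null ensemble.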

  12. A quasi-likelihood approach to non-negative matrix factorization

    PubMed Central

    Devarajan, Karthik; Cheung, Vincent C.K.

    2017-01-01

    A unified approach to non-negative matrix factorization based on the theory of generalized linear models is proposed. This approach embeds a variety of statistical models, including the exponential family, within a single theoretical framework and provides a unified view of such factorizations from the perspective of quasi-likelihood. Using this framework, a family of algorithms for handling signal-dependent noise is developed and its convergence proven using the Expectation-Maximization algorithm. In addition, a measure to evaluate the goodness-of-fit of the resulting factorization is described. The proposed methods allow modeling of non-linear effects via appropriate link functions and are illustrated using an application in biomedical signal processing. PMID:27348511
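    For the Poisson member of the exponential family, the quasi-likelihood objective reduces to the generalized KL divergence, and the resulting algorithm is the familiar multiplicative update. The sketch below shows that special case only, with arbitrary small dimensions; it is not the paper's general framework.

```python
import math
import random

def matmul(W, H):
    return [[sum(W[i][k] * H[k][j] for k in range(len(H)))
             for j in range(len(H[0]))] for i in range(len(W))]

def kl_divergence(V, WH):
    # Generalized KL divergence D(V || WH): the Poisson quasi-likelihood loss
    return sum(V[i][j] * math.log(V[i][j] / WH[i][j]) - V[i][j] + WH[i][j]
               for i in range(len(V)) for j in range(len(V[0])))

def nmf_kl(V, r, iters=300, seed=0):
    # Lee-Seung style multiplicative updates for the KL objective
    rng = random.Random(seed)
    m, n = len(V), len(V[0])
    W = [[rng.random() + 0.5 for _ in range(r)] for _ in range(m)]
    H = [[rng.random() + 0.5 for _ in range(n)] for _ in range(r)]
    for _ in range(iters):
        WH = matmul(W, H)
        for k in range(r):
            colsum = sum(W[i][k] for i in range(m))
            for j in range(n):
                H[k][j] *= sum(W[i][k] * V[i][j] / WH[i][j]
                               for i in range(m)) / colsum
        WH = matmul(W, H)
        for k in range(r):
            rowsum = sum(H[k][j] for j in range(n))
            for i in range(m):
                W[i][k] *= sum(H[k][j] * V[i][j] / WH[i][j]
                               for j in range(n)) / rowsum
    return W, H

# Exact rank-2 non-negative data, so a perfect factorization exists
rng = random.Random(1)
W0 = [[rng.random() + 0.1 for _ in range(2)] for _ in range(6)]
H0 = [[rng.random() + 0.1 for _ in range(5)] for _ in range(2)]
V = matmul(W0, H0)
W, H = nmf_kl(V, 2)
final_div = kl_divergence(V, matmul(W, H))
```

    Multiplicative updates preserve non-negativity automatically, which is why they fit naturally with signal-dependent (e.g. Poisson) noise models where negative estimates would be meaningless.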

  13. An Examination of Application of Artificial Neural Network in Cognitive Radios

    NASA Astrophysics Data System (ADS)

    Bello Salau, H.; Onwuka, E. N.; Aibinu, A. M.

    2013-12-01

    Recent advancement in software radio technology has led to the development of a smart device known as the cognitive radio. This type of radio fuses powerful techniques taken from artificial intelligence, game theory, wideband/multiple antenna techniques, information theory and statistical signal processing to create an outstanding dynamic behavior. The cognitive radio is utilized in achieving a diverse set of applications such as spectrum sensing, radio parameter adaptation and signal classification. This paper contributes by reviewing different cognitive radio implementations that use artificial intelligence techniques such as hidden Markov models, metaheuristic algorithms and artificial neural networks (ANNs). Furthermore, different areas of application of ANNs and their performance metrics are also examined.

  14. NMR measurements on the triumf polarized deuterium target

    NASA Astrophysics Data System (ADS)

    Wait, G. D.; Delheij, P. P. J.; Healey, D. C.

    1989-01-01

    The NMR system and the polarization measurements of the TRIUMF polarized deuterium target are described. The NMR tuned circuit is kept resonant at all frequencies to allow for a high gain in the rf amplifiers and to permit the measurement of the real part of the NMR signal. A Starburst (J-11) and an LSI-11 are used to process the NMR signal at a high sample rate. A synthesizer under the control of an 8085 based microprocessor (TRIMAC) provides the rf sweep. Vector polarizations between -0.33 and -0.40 were routinely achieved during a four week run in the TRIUMF M11 pion beamline with a statistical uncertainty of 3.4% and a systematic uncertainty of 5%.

  15. Design and evaluation of a parametric model for cardiac sounds.

    PubMed

    Ibarra-Hernández, Roilhi F; Alonso-Arévalo, Miguel A; Cruz-Gutiérrez, Alejandro; Licona-Chávez, Ana L; Villarreal-Reyes, Salvador

    2017-10-01

    Heart sound analysis plays an important role in the auscultative diagnosis process to detect the presence of cardiovascular diseases. In this paper we propose a novel parametric heart sound model that accurately represents normal and pathological cardiac audio signals, also known as phonocardiograms (PCG). The proposed model considers that the PCG signal is formed by the sum of two parts: one deterministic and the other stochastic. The first part contains most of the acoustic energy. This part is modeled by the Matching Pursuit (MP) algorithm, which performs an analysis-synthesis procedure to represent the PCG signal as a linear combination of elementary waveforms. The second part, also called the residual, is obtained by subtracting the deterministic signal from the original heart sound recording and can be accurately represented as an autoregressive process using the Linear Predictive Coding (LPC) technique. We evaluate the proposed heart sound model by performing subjective and objective tests using signals corresponding to different pathological cardiac sounds. The results of the objective evaluation show an average percentage root-mean-square difference of approximately 5% between the original heart sound and the reconstructed signal. For the subjective test we followed a formal methodology for perceptual evaluation of audio quality with the assistance of medical experts. Statistical results of the subjective evaluation show that our model provides a highly accurate approximation of real heart sound signals. We are not aware of any previous heart sound model evaluated as rigorously as our proposal. Copyright © 2017 Elsevier Ltd. All rights reserved.
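    The LPC step for the stochastic residual can be sketched with the standard autocorrelation method and the Levinson-Durbin recursion. The AR(2) test signal, its coefficients, and the model order below are assumptions for the example; the paper's residual model and order are not reproduced here.

```python
import random

def autocorr(x, maxlag):
    n = len(x)
    return [sum(x[i] * x[i + k] for i in range(n - k)) / n
            for k in range(maxlag + 1)]

def levinson_durbin(r, order):
    # Solve the Toeplitz normal equations for the LPC polynomial
    # A(z) = 1 + a[1] z^-1 + ... + a[p] z^-p; returns (a, residual power)
    a = [1.0] + [0.0] * order
    e = r[0]
    for i in range(1, order + 1):
        acc = r[i] + sum(a[j] * r[i - j] for j in range(1, i))
        k = -acc / e
        new_a = a[:]
        for j in range(1, i):
            new_a[j] = a[j] + k * a[i - j]
        new_a[i] = k
        a = new_a
        e *= 1.0 - k * k
    return a, e

# Synthetic AR(2) "residual": x[n] = 1.3 x[n-1] - 0.4 x[n-2] + e[n]
rng = random.Random(3)
x = [0.0, 0.0]
for _ in range(8000):
    x.append(1.3 * x[-1] - 0.4 * x[-2] + rng.gauss(0, 1))
r = autocorr(x, 2)
a, res_power = levinson_durbin(r, 2)
# prediction coefficients of the fitted model are -a[1] and -a[2]
```

    In a synthesis setting, the recovered coefficients and residual power are all that is needed to regenerate a perceptually similar stochastic component by filtering white noise.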

  16. Golay Complementary Waveforms in Reed–Müller Sequences for Radar Detection of Nonzero Doppler Targets

    PubMed Central

    Wang, Xuezhi; Huang, Xiaotao; Suvorova, Sofia; Moran, Bill

    2018-01-01

    Golay complementary waveforms can, in theory, yield radar returns of high range resolution with essentially zero sidelobes. In practice, when deployed conventionally, while high signal-to-noise ratios can be achieved for static target detection, significant range sidelobes are generated by target returns of nonzero Doppler causing unreliable detection. We consider signal processing techniques using Golay complementary waveforms to improve radar detection performance in scenarios involving multiple nonzero Doppler targets. A signal processing procedure based on an existing, so called, Binomial Design algorithm that alters the transmission order of Golay complementary waveforms and weights the returns is proposed in an attempt to achieve an enhanced illumination performance. The procedure applies one of three proposed waveform transmission ordering algorithms, followed by a pointwise nonlinear processor combining the outputs of the Binomial Design algorithm and one of the ordering algorithms. The computational complexity of the Binomial Design algorithm and the three ordering algorithms are compared, and a statistical analysis of the performance of the pointwise nonlinear processing is given. Estimation of the areas in the Delay–Doppler map occupied by significant range sidelobes for given targets are also discussed. Numerical simulations for the comparison of the performances of the Binomial Design algorithm and the three ordering algorithms are presented for both fixed and randomized target locations. The simulation results demonstrate that the proposed signal processing procedure has a better detection performance in terms of lower sidelobes and higher Doppler resolution in the presence of multiple nonzero Doppler targets compared to existing methods. PMID:29324708
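    The defining zero-sidelobe property of Golay complementary waveforms is easy to verify numerically: the aperiodic autocorrelations of the two sequences sum to a delta. The sketch below uses the standard recursive pair construction; it illustrates only the static-target property, not the Doppler-resilient ordering algorithms studied in the paper.

```python
def aperiodic_autocorr(x):
    N = len(x)
    return [sum(x[n] * x[n + k] for n in range(N - k)) for k in range(N)]

def golay_pair(m):
    # Standard recursive construction: (a, b) -> (a||b, a||-b),
    # yielding a complementary pair of length 2^m
    a, b = [1], [1]
    for _ in range(m):
        a, b = a + b, a + [-v for v in b]
    return a, b

a, b = golay_pair(3)                      # complementary pair of length 8
ra = aperiodic_autocorr(a)
rb = aperiodic_autocorr(b)
sidelobe_sum = [x + y for x, y in zip(ra, rb)]
# sidelobe_sum is [2N, 0, 0, ..., 0]: range sidelobes cancel exactly
```

    A nonzero Doppler shift rotates the phase between the two pulses' returns, so this exact cancellation breaks, which is precisely the problem the transmission-ordering schemes in the paper address.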

  17. The Role of Margin in Link Design and Optimization

    NASA Technical Reports Server (NTRS)

    Cheung, K.

    2015-01-01

    Link analysis is a system engineering process in the design, development, and operation of communication systems and networks. Link models are mathematical abstractions representing the useful signal power and the undesirable noise and attenuation effects (including weather effects if the signal path traverses the atmosphere); they are integrated into the link budget calculation, which provides the estimates of signal power and noise power at the receiver. A link margin is then applied to counteract the fluctuations of the signal and noise power and ensure reliable data delivery from transmitter to receiver. (The link margin is dictated by the link margin policy or requirements.) A simple link budgeting approach assumes link parameters to be deterministic values and typically adopts a rule-of-thumb policy of a 3 dB link margin. This policy works for most S- and X-band links due to their insensitivity to weather effects. But for higher frequency links like Ka-band, Ku-band, and optical communication links, it is unclear whether a 3 dB link margin would guarantee link closure. Statistical link analysis that adopts a 2-sigma or 3-sigma link margin incorporates link uncertainties in the sigma calculation. (The Deep Space Network (DSN) link margin policies are 2-sigma for downlink and 3-sigma for uplink.) The link reliability can therefore be quantified statistically even for higher frequency links. However, in the current statistical link analysis approach, link reliability is only expressed as the likelihood of exceeding the signal-to-noise ratio (SNR) threshold that corresponds to a given bit-error-rate (BER) or frame-error-rate (FER) requirement. The method does not provide the true BER or FER estimate of the link with margin, or the required signal-to-noise ratio (SNR) that would meet the BER or FER requirement in the statistical sense.
In this paper, we perform in-depth analysis of the relationship between the BER/FER requirement, the operating SNR, and the coding performance curve, in the case when the channel coherence time of link fluctuation is comparable to or larger than the time duration of a codeword. We compute the "true" SNR design point that would meet the BER/FER requirement by taking into account the fluctuation of signal power and noise power at the receiver, and the shape of the coding performance curve. This analysis yields a number of valuable insights on the design choices of coding scheme and link margin for the reliable data delivery of a communication system - space and ground. We illustrate the aforementioned analysis using a number of standard NASA error-correcting codes.
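    A textbook dB-domain link budget with an n-sigma closure test can be sketched as follows. The numeric inputs are made-up illustrative values, not DSN parameters, and the functions are a generic sketch rather than the paper's method.

```python
import math

BOLTZMANN_DBW = -228.6  # 10*log10(Boltzmann constant), dBW/K/Hz

def eb_n0_db(eirp_dbw, path_loss_db, atm_loss_db, g_over_t_dbk, rate_bps):
    # Pt/N0 = EIRP - losses + G/T - k, then Eb/N0 at the given data rate
    pt_n0 = eirp_dbw - path_loss_db - atm_loss_db + g_over_t_dbk - BOLTZMANN_DBW
    return pt_n0 - 10 * math.log10(rate_bps)

def closes_with_sigma_margin(mean_snr_db, required_snr_db, sigma_db, n_sigma=2):
    # Statistical margin policy: the mean SNR must clear the requirement
    # by n_sigma times the combined link uncertainty
    return mean_snr_db - n_sigma * sigma_db >= required_snr_db

# Illustrative (made-up) numbers, not actual mission values
snr = eb_n0_db(eirp_dbw=60.0, path_loss_db=250.0, atm_loss_db=1.0,
               g_over_t_dbk=50.0, rate_bps=1.0e6)
```

    Note how the same mean SNR can close under a 2-sigma policy and fail under 3-sigma; the paper's point is that neither test by itself states the BER/FER actually delivered.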

  18. Statistical analysis of traversal behavior under different types of traffic lights

    NASA Astrophysics Data System (ADS)

    Wang, Boran; Wang, Ziyang; Li, Zhiyin

    2017-12-01

    According to video observations, it is found that the type of traffic signal has a significant effect on the illegal crossing behavior of pedestrians at intersections. Through statistical analysis and analysis of variance, the differences in the violation rate and the waiting position of pedestrians at intersections with different signal lights are compared, and the influence of traffic signal type on pedestrian crossing behavior is evaluated. The results show that the violation rate at intersections with static pedestrian lights is significantly higher than at those with countdown signal lights. There are also significant differences in the waiting position at intersections with different signal lights.

  19. Signal extraction and wave field separation in tunnel seismic prediction by independent component analysis

    NASA Astrophysics Data System (ADS)

    Yue, Y.; Jiang, T.; Zhou, Q.

    2017-12-01

    In order to ensure the rationality and safety of tunnel excavation, advanced geological prediction has become an indispensable step in tunneling. However, the extraction of the signal and the separation of P and S waves directly influence the accuracy of geological prediction. Generally, the raw data collected by the TSP system are of low quality because of the numerous disturbance factors in tunnel projects, such as power interference and machine vibration interference. It is difficult for the traditional method (band-pass filtering) to remove interference effectively while causing little loss to the signal. Treating the power interference, the machine vibration interference and the signal as the source variables, and the x, y, z components as the observed signals, each observed component is a linear combination of the source variables; this satisfies the applicability conditions of independent component analysis (ICA). We perform finite-difference simulations of elastic wave propagation to synthesize a tunnel seismic reflection record. The method of ICA was adopted to process the three-component data, and the results show that the extracted signal estimates are highly correlated with the true signals (correlation coefficient above 0.93). In addition, the interference estimates separated by ICA are also highly correlated with the true interference signals (correlation coefficient above 0.99). Thus, the simulation results show that ICA is an ideal method for extracting high quality data from mixed signals. For the separation of P and S waves, conventional separation techniques are based on the physical characteristics of wave propagation, which require knowledge of the near-surface P and S wave velocities and density. The ICA approach, in contrast, is based entirely on statistical differences between P and S waves, and this statistical technique does not require a priori information.
The concrete results of the wave field separation will be presented at the meeting. In summary, we can safely conclude that ICA can not only extract high quality data from mixed signals, but also separate P and S waves effectively.
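    The independence-based separation principle can be sketched end to end with a toy two-channel example: whiten the observed mixtures, then rotate to the direction of extremal kurtosis (maximum non-Gaussianity). This is a minimal illustration of the ICA idea, not the TSP processing chain; the sources, mixing coefficients, and sample sizes are assumptions for the example.

```python
import math
import random

def mean(v):
    return sum(v) / len(v)

def corr(u, v):
    mu, mv = mean(u), mean(v)
    num = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    du = math.sqrt(sum((a - mu) ** 2 for a in u))
    dv = math.sqrt(sum((b - mv) ** 2 for b in v))
    return num / (du * dv)

def excess_kurtosis(v):
    m2 = mean([u * u for u in v])
    m4 = mean([u ** 4 for u in v])
    return m4 / (m2 * m2) - 3.0

random.seed(4)
n = 2000
s1 = [math.sin(0.07 * i) for i in range(n)]        # coherent "signal" source
s2 = [random.uniform(-1, 1) for _ in range(n)]     # broadband "interference"
x1 = [0.8 * a + 0.6 * b for a, b in zip(s1, s2)]   # two observed mixtures,
x2 = [0.3 * a - 0.9 * b for a, b in zip(s1, s2)]   # e.g. geophone components

# Step 1: whiten the observations (zero mean, identity covariance)
m1, m2 = mean(x1), mean(x2)
x1 = [v - m1 for v in x1]
x2 = [v - m2 for v in x2]
c11 = mean([v * v for v in x1])
c22 = mean([v * v for v in x2])
c12 = mean([a * b for a, b in zip(x1, x2)])
tr, det = c11 + c22, c11 * c22 - c12 * c12
l1 = (tr + math.sqrt(tr * tr - 4 * det)) / 2
l2 = (tr - math.sqrt(tr * tr - 4 * det)) / 2
e1, e2 = (c12, l1 - c11), (c12, l2 - c11)
h1, h2 = math.hypot(*e1), math.hypot(*e2)
z1 = [(e1[0] * a + e1[1] * b) / (h1 * math.sqrt(l1)) for a, b in zip(x1, x2)]
z2 = [(e2[0] * a + e2[1] * b) / (h2 * math.sqrt(l2)) for a, b in zip(x1, x2)]

# Step 2: scan rotations for extremal kurtosis (maximum non-Gaussianity)
best_t = max((math.pi * k / 180 for k in range(180)),
             key=lambda t: abs(excess_kurtosis(
                 [math.cos(t) * a + math.sin(t) * b for a, b in zip(z1, z2)])))
recovered = [math.cos(best_t) * a + math.sin(best_t) * b
             for a, b in zip(z1, z2)]
```

    Production implementations replace the brute-force rotation scan with fixed-point iterations (e.g. FastICA), but the whiten-then-rotate structure is the same.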

  20. The Effect of Photon Statistics and Pulse Shaping on the Performance of the Wiener Filter Crystal Identification Algorithm Applied to LabPET Phoswich Detectors

    NASA Astrophysics Data System (ADS)

    Yousefzadeh, Hoorvash Camilia; Lecomte, Roger; Fontaine, Réjean

    2012-06-01

    A fast Wiener filter-based crystal identification (WFCI) algorithm was recently developed to discriminate crystals with close scintillation decay times in phoswich detectors. Despite the promising performance of WFCI, the influence of various physical factors and electrical noise sources of the data acquisition chain (DAQ) on the crystal identification process was not fully investigated. This paper examines the effect of different noise sources, such as photon statistics, avalanche photodiode (APD) excess multiplication noise, and front-end electronic noise, as well as the influence of different shaping filters on the performance of the WFCI algorithm. To this end, a PET-like signal simulator based on a model of the LabPET DAQ, a small animal APD-based digital PET scanner, was developed. Simulated signals were generated under various noise conditions with CR-RC shapers of order 1, 3, and 5 having different time constants (τ). Applying the WFCI algorithm to these simulated signals showed that the non-stationary Poisson photon statistics is the main contributor to the identification error of WFCI algorithm. A shaping filter of order 1 with τ = 50 ns yielded the best WFCI performance (error 1%), while a longer shaping time of τ = 100 ns slightly degraded the WFCI performance (error 3%). Filters of higher orders with fast shaping time constants (10-33 ns) also produced good WFCI results (error 1.4% to 1.6%). This study shows the advantage of the pulse simulator in evaluating various DAQ conditions and confirms the influence of the detection chain on the WFCI performance.

  1. Identification of Major Histocompatibility Complex-Regulated Body Odorants by Statistical Analysis of a Comparative Gas Chromatography/Mass Spectrometry Experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Willse, Alan R.; Belcher, Ann; Preti, George

    2005-04-15

    Gas chromatography (GC), combined with mass spectrometry (MS) detection, is a powerful analytical technique that can be used to separate, quantify, and identify volatile compounds in complex mixtures. This paper examines the application of GC-MS in a comparative experiment to identify volatiles that differ in concentration between two groups. A complex mixture might comprise several hundred or even thousands of volatile compounds. Because their number and location in a chromatogram generally are unknown, and because components overlap in populous chromatograms, the statistical problems offer significant challenges beyond traditional two-group screening procedures. We describe a statistical procedure to compare two-dimensional GC-MS profiles between groups, which entails (1) signal processing: baseline correction and peak detection in single ion chromatograms; (2) aligning chromatograms in time; (3) normalizing differences in overall signal intensities; and (4) detecting chromatographic regions that differ between groups. Compared to existing approaches, the proposed method is robust to errors made at earlier stages of analysis, such as missed peaks or slightly misaligned chromatograms. To illustrate the method, we identify differences in GC-MS chromatograms of ether-extracted urine collected from two nearly identical inbred groups of mice, to investigate the relationship between odor and genetics of the major histocompatibility complex.
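    Step (1) of the pipeline, baseline correction and peak detection in a single ion chromatogram, can be sketched crudely as follows. The rolling-minimum baseline, the local-maximum peak rule, and the synthetic two-peak chromatogram are all assumptions for the example, not the paper's estimators.

```python
import math

def baseline_correct(y, half_window=25):
    # Crude rolling-minimum baseline estimate for a single ion chromatogram
    return [v - min(y[max(0, i - half_window):i + half_window + 1])
            for i, v in enumerate(y)]

def detect_peaks(y, threshold):
    # Local maxima that rise above the threshold after baseline removal
    return [i for i in range(1, len(y) - 1)
            if y[i] > threshold and y[i] >= y[i - 1] and y[i] > y[i + 1]]

# Synthetic chromatogram: linear baseline drift plus two Gaussian peaks
signal = [0.01 * i
          + 5.0 * math.exp(-((i - 100) / 4.0) ** 2)
          + 3.0 * math.exp(-((i - 220) / 4.0) ** 2)
          for i in range(300)]
corrected = baseline_correct(signal)
peaks = detect_peaks(corrected, threshold=1.0)   # expected near 100 and 220
```

    The downstream alignment and normalization steps (2)-(3) operate on outputs like `peaks`, which is why the paper emphasizes robustness to peaks that this first stage misses.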

  2. Normative evidence accumulation in unpredictable environments

    PubMed Central

    Glaze, Christopher M; Kable, Joseph W; Gold, Joshua I

    2015-01-01

    In our dynamic world, decisions about noisy stimuli can require temporal accumulation of evidence to identify steady signals, differentiation to detect unpredictable changes in those signals, or both. Normative models can account for learning in these environments but have not yet been applied to faster decision processes. We present a novel, normative formulation of adaptive learning models that forms decisions by acting as a leaky accumulator with non-absorbing bounds. These dynamics, derived for both discrete and continuous cases, depend on the expected rate of change of the statistics of the evidence and balance signal identification and change detection. We found that, for two different tasks, human subjects learned these expectations, albeit imperfectly, then used them to make decisions in accordance with the normative model. The results represent a unified, empirically supported account of decision-making in unpredictable environments that provides new insights into the expectation-driven dynamics of the underlying neural signals. DOI: http://dx.doi.org/10.7554/eLife.08825.001 PMID:26322383
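    The core mechanism, a leaky accumulator with non-absorbing bounds, can be sketched in a few lines. The leak rate, bound, and the sign-flipping evidence stream below are arbitrary illustrative choices, not the paper's fitted parameters.

```python
import random

def leaky_accumulate(evidence, leak=0.1, bound=3.0):
    # Leaky accumulator with non-absorbing bounds: the decision variable
    # decays toward zero, integrates incoming evidence, and saturates at
    # +/- bound rather than terminating there, so it can re-cross after
    # an unpredicted change in the underlying signal.
    v, trace = 0.0, []
    for e in evidence:
        v = (1.0 - leak) * v + e
        v = max(-bound, min(bound, v))
        trace.append(v)
    return trace

# Noisy evidence whose mean flips sign halfway: the accumulator must both
# identify the steady signal and then detect the change.
rng = random.Random(5)
evidence = ([1.0 + rng.gauss(0, 1) for _ in range(200)]
            + [-1.0 + rng.gauss(0, 1) for _ in range(200)])
trace = leaky_accumulate(evidence)
```

    The balance the paper describes lives in the two parameters: a larger leak (and tighter bound) favors change detection, while a smaller leak favors identification of steady signals.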

  3. Modeling the Atmospheric Phase Effects of a Digital Antenna Array Communications System

    NASA Technical Reports Server (NTRS)

    Tkacenko, A.

    2006-01-01

    In an antenna array system such as that used in the Deep Space Network (DSN) for satellite communication, it is often necessary to account for the effects of the atmosphere. Typically, the atmosphere induces amplitude and phase fluctuations on the transmitted downlink signal that invalidate the assumed stationarity of the signal model. The degree to which these perturbations affect the stationarity of the model depends both on parameters of the atmosphere, including wind speed and turbulence strength, and on parameters of the communication system, such as the sampling rate used. In this article, we focus on modeling the atmospheric phase fluctuations in a digital antenna array communications system. Based on a continuous-time statistical model for the atmospheric phase effects, we show how to obtain a related discrete-time model based on sampling the continuous-time process. The effects of the nonstationarity of the resulting signal model are investigated using the sample matrix inversion (SMI) algorithm for minimum mean-squared error (MMSE) equalization of the received signal.
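    The SMI idea, estimating the correlation matrix R and cross-correlation vector p from samples and solving w = R⁻¹p for the MMSE weights, can be sketched with a two-tap scalar equalizer. The two-tap channel, noise level, and training length are assumptions for the example, not the article's array model.

```python
import random

rng = random.Random(6)
n = 3000
# Known training symbols sent through an assumed two-tap channel h = [1, 0.4]
sym = [rng.choice((-1.0, 1.0)) for _ in range(n)]
rx = [sym[i] + 0.4 * (sym[i - 1] if i > 0 else 0.0) + rng.gauss(0, 0.1)
      for i in range(n)]

# Sample matrix inversion: estimate R (2x2 Toeplitz, by stationarity) and
# the cross-correlation vector p from the data, then solve w = R^-1 p in
# closed form for a two-tap MMSE equalizer.
r0 = sum(v * v for v in rx) / n
r1 = sum(rx[i] * rx[i - 1] for i in range(1, n)) / n
p0 = sum(rx[i] * sym[i] for i in range(n)) / n
p1 = sum(rx[i - 1] * sym[i] for i in range(1, n)) / n
det = r0 * r0 - r1 * r1
w0 = (r0 * p0 - r1 * p1) / det
w1 = (r0 * p1 - r1 * p0) / det

# Equalize and count hard-decision errors on the training sequence
decisions = [w0 * rx[i] + w1 * rx[i - 1] for i in range(1, n)]
errors = sum(1 for i in range(1, n) if (decisions[i - 1] > 0) != (sym[i] > 0))
```

    The nonstationarity question studied in the article enters through the sample estimates of R and p: when atmospheric phase drifts within the estimation window, those estimates are stale and the computed weights degrade.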

  4. Brain-computer interface using wavelet transformation and naïve bayes classifier.

    PubMed

    Bassani, Thiago; Nievola, Julio Cesar

    2010-01-01

    The main purpose of this work is to establish an exploratory approach using the electroencephalographic (EEG) signal, analyzing the patterns in the time-frequency plane. This work also aims to optimize the EEG signal analysis through the improvement of classifiers and, eventually, of the BCI performance. In this paper a novel exploratory approach for data mining of the EEG signal based on continuous wavelet transformation (CWT) and wavelet coherence (WC) statistical analysis is introduced and applied. The CWT allows the representation of time-frequency patterns of the signal's information content by qualitative WC analysis. Results suggest that the proposed methodology is capable of identifying regions in the time-frequency spectrum during the specified BCI task. Furthermore, an example of a region is identified, and the patterns are classified using a Naïve Bayes Classifier (NBC). This innovative characteristic of the process justifies the feasibility of the proposed approach for other data mining applications. It can open new physiological research in this field and in non-stationary time series analysis.
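
The NBC stage can be illustrated with a minimal from-scratch Gaussian Naïve Bayes over generic feature vectors; the synthetic features here are stand-ins for whatever the time-frequency analysis would supply, not the paper's actual data:

```python
import numpy as np

def gaussian_nb_fit(X, y):
    """Per-class feature means, variances, and priors (Gaussian Naive Bayes)."""
    stats = {}
    for c in np.unique(y):
        Xc = X[y == c]
        stats[c] = (Xc.mean(0), Xc.var(0) + 1e-9, len(Xc) / len(X))
    return stats

def gaussian_nb_predict(stats, X):
    """Pick the class maximizing the log-posterior under feature independence."""
    classes = list(stats)
    scores = np.column_stack([
        np.log(stats[c][2])
        - 0.5 * (np.log(2 * np.pi * stats[c][1])
                 + (X - stats[c][0]) ** 2 / stats[c][1]).sum(1)
        for c in classes])
    return np.array(classes)[scores.argmax(1)]
```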

  5. Noiseless amplification of weak coherent fields exploiting energy fluctuations of the field

    NASA Astrophysics Data System (ADS)

    Partanen, Mikko; Häyrynen, Teppo; Oksanen, Jani; Tulkki, Jukka

    2012-12-01

    Quantum optics dictates that amplification of a pure state by any linear deterministic amplifier always introduces noise in the signal and results in a mixed output state. However, it has recently been shown that noiseless amplification becomes possible if the requirement of a deterministic operation is relaxed. Here we propose and analyze a noiseless amplification scheme where the energy required to amplify the signal originates from the stochastic fluctuations in the field itself. In contrast to previous amplification setups, our setup shows that a signal can be amplified even if no energy is added to the signal from external sources. We investigate the relation between the amplification and its success rate as well as the statistics of the output states after successful and failed amplification processes. Furthermore, we also optimize the setup to find the maximum success rates in terms of the reflectivities of the beam splitters used in the setup and discuss the relation of our setup with the previous setups.

  6. Double-path acquisition of pulse wave transit time and heartbeat using self-mixing interferometry

    NASA Astrophysics Data System (ADS)

    Wei, Yingbin; Huang, Wencai; Wei, Zheng; Zhang, Jie; An, Tong; Wang, Xiulin; Xu, Huizhen

    2017-06-01

    We present a technique based on self-mixing interferometry for acquiring the pulse wave transit time (PWTT) and heartbeat. A signal processing method based on Continuous Wavelet Transform and Hilbert Transform is applied to extract potentially useful information in the self-mixing interference (SMI) signal, including PWTT and heartbeat. Then, some cardiovascular characteristics of the human body are easily acquired without retrieving the SMI signal by complicated algorithms. Experimentally, the PWTT is measured on the finger and the toe of the human body using double-path self-mixing interferometry. Experimental statistical data show the relation between the PWTT and blood pressure, which can be used to estimate the systolic pressure value by fitting. Moreover, the measured heartbeat shows good agreement with that obtained by a photoplethysmography sensor. The method that we demonstrate, which is based on self-mixing interferometry with significant advantages of simplicity, compactness and non-invasiveness, effectively illustrates the viability of the SMI technique for measuring other cardiovascular signals.

  7. A triaxial accelerometer-based physical-activity recognition via augmented-signal features and a hierarchical recognizer.

    PubMed

    Khan, Adil Mehmood; Lee, Young-Koo; Lee, Sungyoung Y; Kim, Tae-Seong

    2010-09-01

    Physical-activity recognition via wearable sensors can provide valuable information regarding an individual's degree of functional ability and lifestyle. In this paper, we present an accelerometer sensor-based approach for human-activity recognition. Our proposed recognition method uses a hierarchical scheme. At the lower level, the state to which an activity belongs, i.e., static, transition, or dynamic, is recognized by means of statistical signal features and artificial-neural nets (ANNs). The upper level recognition uses the autoregressive (AR) modeling of the acceleration signals, thus, incorporating the derived AR-coefficients along with the signal-magnitude area and tilt angle to form an augmented-feature vector. The resulting feature vector is further processed by the linear-discriminant analysis and ANNs to recognize a particular human activity. Our proposed activity-recognition method recognizes three states and 15 activities with an average accuracy of 97.9% using only a single triaxial accelerometer attached to the subject's chest.
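
The augmented features named above can be sketched as follows. The Yule-Walker AR fit and the particular signal-magnitude-area and tilt-angle definitions are common choices, not necessarily the paper's exact ones:

```python
import numpy as np

def ar_coefficients(x, order=3):
    """AR model coefficients via the Yule-Walker equations (biased autocorrelation)."""
    x = np.asarray(x, float) - np.mean(x)
    r = np.correlate(x, x, mode="full")[len(x) - 1:] / len(x)
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:order + 1])

def signal_magnitude_area(ax, ay, az):
    """Normalized sum of absolute accelerations over the window."""
    return (np.abs(ax) + np.abs(ay) + np.abs(az)).mean()

def tilt_angle(ax, ay, az):
    """Angle (degrees) between the mean acceleration vector and the z-axis."""
    g = np.array([np.mean(ax), np.mean(ay), np.mean(az)])
    return np.degrees(np.arccos(g[2] / np.linalg.norm(g)))
```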

  8. EARLINET Single Calculus Chain - technical - Part 1: Pre-processing of raw lidar data

    NASA Astrophysics Data System (ADS)

    D'Amico, G.; Amodeo, A.; Mattis, I.; Freudenthaler, V.; Pappalardo, G.

    2015-10-01

    In this paper we describe an automatic tool for the pre-processing of lidar data called ELPP (EARLINET Lidar Pre-Processor). It is one of two calculus modules of the EARLINET Single Calculus Chain (SCC), the automatic tool for the analysis of EARLINET data. The ELPP is an open source module that executes instrumental corrections and data handling of the raw lidar signals, making the lidar data ready to be processed by the optical retrieval algorithms. According to the specific lidar configuration, the ELPP automatically performs dead-time correction, atmospheric and electronic background subtraction, gluing of lidar signals, and trigger-delay correction. Moreover, the signal-to-noise ratio of the pre-processed signals can be improved by means of configurable time integration of the raw signals and/or spatial smoothing. The ELPP delivers the statistical uncertainties of the final products by means of error propagation or Monte Carlo simulations. During the development of the ELPP module, particular attention has been paid to make the tool flexible enough to handle all lidar configurations currently used within the EARLINET community. Moreover, it has been designed in a modular way to allow an easy extension to lidar configurations not yet implemented. The primary goal of the ELPP module is to enable the application of quality-assured procedures in the lidar data analysis starting from the raw lidar data. This provides the added value of full traceability of each delivered lidar product. Several tests have been performed to check the proper functioning of the ELPP module. The whole SCC has been tested with the same synthetic data sets, which were used for the EARLINET algorithm inter-comparison exercise. The ELPP module has been successfully employed for the automatic near-real-time pre-processing of the raw lidar data measured during several EARLINET inter-comparison campaigns as well as during intense field campaigns.
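
Two of the corrections listed above can be sketched generically, assuming a non-paralyzable dead-time model and a far-range background estimate; ELPP's actual configurable implementations may differ:

```python
import numpy as np

def deadtime_correct(count_rate, tau):
    """Non-paralyzable dead-time correction of a photon count rate.

    A detector that is dead for `tau` seconds after each count under-reports
    at high rates; the true rate is measured / (1 - measured * tau).
    """
    return count_rate / (1.0 - count_rate * tau)

def subtract_background(profile, bg_bins=100):
    """Subtract the background estimated from the far-range tail of the profile."""
    profile = np.asarray(profile, float)
    return profile - profile[-bg_bins:].mean()
```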

  9. Information theory analysis of sensor-array imaging systems for computer vision

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Fales, C. L.; Park, S. K.; Samms, R. W.; Self, M. O.

    1983-01-01

    Information theory is used to assess the performance of sensor-array imaging systems, with emphasis on the performance obtained with image-plane signal processing. By electronically controlling the spatial response of the imaging system, as suggested by the mechanism of human vision, it is possible to trade off edge enhancement for sensitivity, increase dynamic range, and reduce data transmission. Computational results show that: signal information density varies little with large variations in the statistical properties of random radiance fields; most information (generally about 85 to 95 percent) is contained in the signal intensity transitions rather than levels; and performance is optimized when the OTF of the imaging system is nearly limited to the sampling passband to minimize aliasing at the cost of blurring, and the SNR is very high to permit the retrieval of small spatial detail from the extensively blurred signal. Shading the lens aperture transmittance to increase depth of field and using a regular hexagonal sensor-array instead of a square lattice to decrease sensitivity to edge orientation also improves the signal information density by up to about 30 percent at high SNRs.

  10. A Complex Approach to UXO Discrimination: Combining Advanced EMI Forward Models and Statistical Signal Processing

    DTIC Science & Technology

    2012-01-01

    detection and discrimination at live-UXO sites. Namely, under this project first we developed and implemented advanced, physically complete forward EMI models, such as the...Shubitidze of Sky Research and Dartmouth College, conceived, implemented, and tested most of the approaches presented in this report. He developed

  11. Statistical Analysis of Acoustic Signal Propagating Through the South China Sea Basin

    DTIC Science & Technology

    2016-03-01

    internal tidal constituents are observed in both spectra, and the diurnal (D) and semidiurnal (SD) internal waves' energy are strong. The spectrum is...bandwidths were utilized during the frequency smoothing process to ensure the reliability of the spectra in the meso-, tidal and internal wave scale...mooring temperature sensors capture the internal waves' energy, and six high amplitude peaks are observed in the spectra in the internal tidal band

  12. Linear Space-Variant Image Restoration of Photon-Limited Images

    DTIC Science & Technology

    1978-03-01

    levels of performance of the wavefront sensor. The parameter ^ represents the residual rms wavefront error (measurement noise plus fitting error)...known to be optimum only when the signal and noise are uncorrelated stationary random processes and when the noise statistics are gaussian. In the...regime of photon-limited imaging, the noise is non-gaussian and signal-dependent, and it is therefore reasonable to assume that some form of linear

  13. Source Camera Identification and Blind Tamper Detections for Images

    DTIC Science & Technology

    2007-04-24

    measures and image quality measures in the camera identification problem was studied in conjunction with a KNN classifier to identify the feature sets...shots varying from nature scenes to close-ups of people. We experimented with the KNN classifier (K=5) as well as the SVM algorithm of...on Acoustic, Speech and Signal Processing (ICASSP), France, May 2006, vol. 5, pp. 401-404. [9] H. Farid and S. Lyu, "Higher-order wavelet statistics

  14. New method for enhanced efficiency in detection of gravitational waves from supernovae using coherent network of detectors

    NASA Astrophysics Data System (ADS)

    Mukherjee, S.; Salazar, L.; Mittelstaedt, J.; Valdez, O.

    2017-11-01

    Supernovae in our universe are potential sources of gravitational waves (GW) that could be detected in a network of GW detectors like LIGO and Virgo. Core-collapse supernovae are rare, but the associated gravitational radiation is likely to carry profuse information about the underlying processes driving the supernovae. Calculations based on analytic models predict GW energies within the detection range of the Advanced LIGO detectors, out to tens of Mpc for certain types of signals, e.g., coalescing binary neutron stars. For supernovae, however, the corresponding distances are much less. Thus, methods that can improve the sensitivity of searches for GW signals from supernovae are desirable, especially in the advanced detector era. Several methods have been proposed based on various likelihood-based regulators that work on data from a network of detectors to detect burst-like signals (as is the case for signals from supernovae) from potential GW sources. To address this problem, we have developed an analysis pipeline based on a method of noise reduction known as the harmonic regeneration noise reduction (HRNR) algorithm. To demonstrate the method, sixteen supernova waveforms from the Murphy et al. 2009 catalog have been used in the presence of LIGO science data. A comparative analysis is presented to show detection statistics for a standard network analysis as commonly used in GW pipelines and the same by implementing the new method in conjunction with the network. The result shows significant improvement in detection statistics.

  15. Nonlinear unitary quantum collapse model with self-generated noise

    NASA Astrophysics Data System (ADS)

    Geszti, Tamás

    2018-04-01

    Collapse models including some external noise of unknown origin are routinely used to describe phenomena on the quantum-classical border; in particular, quantum measurement. Although containing nonlinear dynamics and thereby exposed to the possibility of superluminal signaling in individual events, such models are widely accepted on the basis of fully reproducing the non-signaling statistical predictions of quantum mechanics. Here we present a deterministic nonlinear model without any external noise, in which randomness—instead of being universally present—emerges in the measurement process, from deterministic irregular dynamics of the detectors. The treatment is based on a minimally nonlinear von Neumann equation for a Stern–Gerlach or Bell-type measuring setup, containing coordinate and momentum operators in a self-adjoint skew-symmetric, split scalar product structure over the configuration space. The microscopic states of the detectors act as a nonlocal set of hidden parameters, controlling individual outcomes. The model is shown to display pumping of weights between setup-defined basis states, with a single winner randomly selected and the rest collapsing to zero. Environmental decoherence has no role in the scenario. Through stochastic modelling, based on Pearle’s ‘gambler’s ruin’ scheme, outcome probabilities are shown to obey Born’s rule under a no-drift or ‘fair-game’ condition. This fully reproduces quantum statistical predictions, implying that the proposed non-linear deterministic model satisfies the non-signaling requirement. Our treatment is still vulnerable to hidden signaling in individual events, which remains to be handled by future research.

  16. Model for neural signaling leap statistics

    NASA Astrophysics Data System (ADS)

    Chevrollier, Martine; Oriá, Marcos

    2011-03-01

    We present a simple model for neural signaling leaps in the brain considering only the thermodynamic (Nernst) potential in neuron cells and brain temperature. We numerically simulated connections between arbitrarily localized neurons and analyzed the frequency distribution of the distances reached. We observed a qualitative change between Normal statistics (with T = 37.5°C, awake regime) and Lévy statistics (T = 35.5°C, sleeping period), characterized by rare events of long range connections.

  17. Control chart pattern recognition using RBF neural network with new training algorithm and practical features.

    PubMed

    Addeh, Abdoljalil; Khormali, Aminollah; Golilarz, Noorbakhsh Amiri

    2018-05-04

    The control chart patterns are the most commonly used statistical process control (SPC) tools to monitor process changes. When a control chart produces an out-of-control signal, this means that the process has been changed. In this study, a new method based on an optimized radial basis function neural network (RBFNN) is proposed for control chart pattern (CCP) recognition. The proposed method consists of four main modules: feature extraction, feature selection, classification and learning algorithm. In the feature extraction module, shape and statistical features are used. Recently, various shape and statistical features have been presented for CCP recognition. In the feature selection module, the association rules (AR) method has been employed to select the best set of shape and statistical features. In the classification module, an RBFNN is used; since the learning algorithm has a high impact on network performance, a new learning algorithm based on the bees algorithm has been used in the learning module. Most studies have considered only six patterns: Normal, Cyclic, Increasing Trend, Decreasing Trend, Upward Shift and Downward Shift. Since the three patterns Normal, Stratification, and Systematic are very similar to each other and distinguishing them is very difficult, most studies have not considered Stratification and Systematic. To support continuous monitoring and control of the production process and exact identification of the type of problem encountered, eight patterns have been investigated in this study. The proposed method is tested on a dataset containing 1600 samples (200 samples from each pattern) and the results showed that the proposed method has a very good performance. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
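
A few shape and statistical features of the kind used for CCP recognition can be sketched as follows; these are illustrative choices over a monitoring window, not the paper's exact feature set:

```python
import numpy as np

def ccp_features(window):
    """A few statistical and shape features for control chart pattern recognition."""
    w = np.asarray(window, float)
    t = np.arange(len(w))
    slope, intercept = np.polyfit(t, w, 1)            # least-squares trend line
    detrended = w - (slope * t + intercept)
    return {
        "mean": w.mean(),                             # statistical
        "std": w.std(),                               # statistical
        "slope": slope,                               # shape: trends vs. shifts
        "mean_abs_resid": np.abs(detrended).mean(),   # shape: cyclic/systematic energy
    }
```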

  18. Signals and circuits in the purkinje neuron.

    PubMed

    Abrams, Zéev R; Zhang, Xiang

    2011-01-01

    Purkinje neurons (PN) in the cerebellum have over 100,000 inputs organized in an orthogonal geometry, and a single output channel. As the sole output of the cerebellar cortex layer, their complex firing pattern has been associated with motor control and learning. As such they have been extensively modeled and measured using tools ranging from electrophysiology and neuroanatomy, to dynamic systems and artificial intelligence methods. However, there is an alternative approach to analyze and describe the neuronal output of these cells using concepts from electrical engineering, particularly signal processing and digital/analog circuits. By viewing the PN as an unknown circuit to be reverse-engineered, we can use the tools that provide the foundations of today's integrated circuits and communication systems to analyze the Purkinje system at the circuit level. We use Fourier transforms to analyze and isolate the inherent frequency modes in the PN and define three unique frequency ranges associated with the cells' output. Comparing the PN to a signal generator that can be externally modulated adds an entire level of complexity to the functional role of these neurons both in terms of data analysis and information processing, relying on Fourier analysis methods in place of statistical ones. We also re-describe some of the recent literature in the field, using the nomenclature of signal processing. Furthermore, by comparing the experimental data of the past decade with basic electronic circuitry, we can resolve the outstanding controversy in the field, by recognizing that the PN can act as a multivibrator circuit.
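
Isolating one frequency mode of an output trace with the Fourier transform, as described above, can be sketched as follows; the sampling rate, band edges, and synthetic trace are assumptions, not the paper's data:

```python
import numpy as np

fs = 1000.0                                   # assumed sampling rate (Hz)
t = np.arange(0, 2.0, 1 / fs)
# Hypothetical firing-rate trace with a slow (1 Hz) and a fast (40 Hz) mode.
x = np.sin(2 * np.pi * 1.0 * t) + 0.5 * np.sin(2 * np.pi * 40.0 * t)

# Isolate one frequency band by zeroing all other FFT bins (an ideal
# band-pass filter; real recordings would need windowing to limit leakage).
X = np.fft.rfft(x)
f = np.fft.rfftfreq(len(x), 1 / fs)
X[(f < 30.0) | (f > 50.0)] = 0.0
fast_mode = np.fft.irfft(X, n=len(x))
```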

  19. Environmental statistics and optimal regulation.

    PubMed

    Sivak, David A; Thomson, Matt

    2014-09-01

    Any organism is embedded in an environment that changes over time. The timescale for and statistics of environmental change, the precision with which the organism can detect its environment, and the costs and benefits of particular protein expression levels all will affect the suitability of different strategies--such as constitutive expression or graded response--for regulating protein levels in response to environmental inputs. We propose a general framework--here specifically applied to the enzymatic regulation of metabolism in response to changing concentrations of a basic nutrient--to predict the optimal regulatory strategy given the statistics of fluctuations in the environment and measurement apparatus, respectively, and the costs associated with enzyme production. We use this framework to address three fundamental questions: (i) when a cell should prefer thresholding to a graded response; (ii) when there is a fitness advantage to implementing a Bayesian decision rule; and (iii) when retaining memory of the past provides a selective advantage. We specifically find that: (i) relative convexity of enzyme expression cost and benefit influences the fitness of thresholding or graded responses; (ii) intermediate levels of measurement uncertainty call for a sophisticated Bayesian decision rule; and (iii) in dynamic contexts, intermediate levels of uncertainty call for retaining memory of the past. Statistical properties of the environment, such as variability and correlation times, set optimal biochemical parameters, such as thresholds and decay rates in signaling pathways. Our framework provides a theoretical basis for interpreting molecular signal processing algorithms and a classification scheme that organizes known regulatory strategies and may help conceptualize heretofore unknown ones.

  20. The 2D analytic signal for envelope detection and feature extraction on ultrasound images.

    PubMed

    Wachinger, Christian; Klein, Tassilo; Navab, Nassir

    2012-08-01

    The fundamental property of the analytic signal is the split of identity, meaning the separation of qualitative and quantitative information in form of the local phase and the local amplitude, respectively. Especially the structural representation, independent of brightness and contrast, of the local phase is interesting for numerous image processing tasks. Recently, the extension of the analytic signal from 1D to 2D, covering also intrinsic 2D structures, was proposed. We show the advantages of this improved concept on ultrasound RF and B-mode images. Precisely, we use the 2D analytic signal for the envelope detection of RF data. This leads to advantages for the extraction of the information-bearing signal from the modulated carrier wave. We illustrate this, first, by visual assessment of the images, and second, by performing goodness-of-fit tests to a Nakagami distribution, indicating a clear improvement of statistical properties. The evaluation is performed for multiple window sizes and parameter estimation techniques. Finally, we show that the 2D analytic signal allows for an improved estimation of local features on B-mode images. Copyright © 2012 Elsevier B.V. All rights reserved.
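
For the 1D case, envelope detection via the analytic signal can be sketched with SciPy's Hilbert transform; the synthetic RF line below is an assumption, and the intrinsically 2D extension the paper uses goes beyond this sketch:

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)
# Synthetic RF-like line: a slow envelope modulating an 80 Hz carrier.
true_envelope = 1.0 + 0.5 * np.sin(2 * np.pi * 2.0 * t)
rf = true_envelope * np.cos(2 * np.pi * 80.0 * t)

# The analytic signal rf + j*H{rf} splits identity: its magnitude is the
# local amplitude (quantitative) and its angle the local phase (qualitative).
analytic = hilbert(rf)
envelope = np.abs(analytic)
local_phase = np.unwrap(np.angle(analytic))
```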

  1. An automated multi-scale network-based scheme for detection and location of seismic sources

    NASA Astrophysics Data System (ADS)

    Poiata, N.; Aden-Antoniow, F.; Satriano, C.; Bernard, P.; Vilotte, J. P.; Obara, K.

    2017-12-01

    We present a recently developed method - BackTrackBB (Poiata et al. 2016) - that allows imaging of energy radiation from different seismic sources (e.g., earthquakes, LFEs, tremors) in different tectonic environments using continuous seismic records. The method exploits multi-scale frequency-selective coherence in the wave field, recorded by regional seismic networks or local arrays. The detection and location scheme is based on space-time reconstruction of the seismic sources through an imaging function built from the sum of station-pair time-delay likelihood functions, projected onto theoretical 3D time-delay grids. This imaging function is interpreted as the location likelihood of the seismic source. A signal pre-processing step constructs a multi-band statistical representation of the non-stationary time series by means of higher-order statistics or energy envelope characteristic functions. Such signal processing is designed to detect signal transients in time - of different scales and a priori unknown predominant frequency - potentially associated with a variety of sources (e.g., earthquakes, LFEs, tremors), and to improve the performance and the robustness of the detection-and-location step. The initial detection-location, based on a single-phase analysis with the P- or S-phase only, can then be improved recursively in a station selection scheme. This scheme - exploiting the 3-component records - makes use of P- and S-phase characteristic functions, extracted after a polarization analysis of the event waveforms, and combines the single-phase imaging functions with the S-P differential imaging functions. The performance of the method is demonstrated here in different tectonic environments: (1) analysis of the one-year-long precursory phase of the 2014 Iquique earthquake in Chile; (2) detection and location of tectonic tremor sources and low-frequency earthquakes during multiple episodes of tectonic tremor activity in southwestern Japan.
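
A higher-order-statistics characteristic function of the kind used in the pre-processing step can be sketched as a sliding-window kurtosis; the window length and implementation details are assumptions, not BackTrackBB's actual code:

```python
import numpy as np

def kurtosis_cf(x, win=200):
    """Sliding-window kurtosis characteristic function.

    Gaussian background noise gives values near 3; impulsive transients
    (e.g., phase arrivals) drive the statistic sharply upward.
    """
    x = np.asarray(x, float)
    cf = np.zeros(len(x))
    for i in range(win, len(x)):
        w = x[i - win:i]
        m = w.mean()
        s2 = ((w - m) ** 2).mean()
        cf[i] = ((w - m) ** 4).mean() / (s2 ** 2 + 1e-12)
    return cf
```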

  2. A system for multichannel recording and automatic reading of information. [for onboard cosmic ray counter

    NASA Technical Reports Server (NTRS)

    Bogomolov, E. A.; Yevstafev, Y. Y.; Karakadko, V. K.; Lubyanaya, N. D.; Romanov, V. A.; Totubalina, M. G.; Yamshchikov, M. A.

    1975-01-01

    A system for the recording and processing of telescope data is considered for measurements of EW asymmetry. The information is recorded by 45 channels on a continuously moving 35-mm film. The dead time of the recorder is about 0.1 sec. A sorting electronic circuit is used to reduce the errors when the statistical time distribution of the pulses is recorded. The recorded information is read out by means of photoresistors. The phototransmitter signals are fed either to the mechanical recorder unit for preliminary processing, or to a logical circuit which controls the operation of the punching device. The punched tape is processed by an electronic computer.

  3. An insight into the adsorption and electrochemical processes occurring during the analysis of copper and lead in wines, using an electrochemical quartz crystal nanobalance.

    PubMed

    Yamasaki, Alzira; Oliveira, João A B P; Duarte, Armando C; Gomes, M Teresa S R

    2012-08-30

    Copper and lead in wine were quantified by anodic stripping voltammetry (ASV), performed on the gold electrode of a piezoelectric quartz crystal. Either current or mass changes could be used as analytical signals, without a statistical difference in the results (α=0.05). However, the plot of mass vs. potential provided an in-depth understanding of the electrochemical processes and allowed the study of adsorption phenomena. Copper interaction with fructose is an example of a process that was not possible to ignore when observing the mass change on the gold electrode of the piezoelectric quartz crystal. Copyright © 2012 Elsevier B.V. All rights reserved.

  4. Adaptive tracking of a time-varying field with a quantum sensor

    NASA Astrophysics Data System (ADS)

    Bonato, Cristian; Berry, Dominic W.

    2017-05-01

    Sensors based on single spins can enable magnetic-field detection with very high sensitivity and spatial resolution. Previous work has concentrated on sensing of a constant magnetic field or a periodic signal. Here, we instead investigate the problem of estimating a field with nonperiodic variation described by a Wiener process. We propose and study, by numerical simulations, an adaptive tracking protocol based on Bayesian estimation. The tracking protocol updates the probability distribution for the magnetic field based on measurement outcomes and adapts the choice of sensing time and phase in real time. By taking the statistical properties of the signal into account, our protocol strongly reduces the required measurement time. This leads to a reduction of the error in the estimation of a time-varying signal by up to a factor of four compared with protocols that do not take this information into account.
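
For a Gaussian belief, Bayesian tracking of a Wiener-process field reduces to a scalar Kalman filter; a hedged sketch with fixed sensing settings rather than the adaptive choice of sensing time and phase studied in the paper:

```python
import numpy as np

def track_wiener(measurements, meas_var=0.25, drift_var=0.01):
    """Scalar Kalman tracker for a field following a Wiener process.

    Predict: the Wiener drift inflates the belief variance by drift_var per step.
    Update: each noisy measurement is fused by precision weighting.
    """
    mu, var = 0.0, 1e6          # diffuse prior over the field value
    estimates = []
    for z in measurements:
        var += drift_var                  # predict
        gain = var / (var + meas_var)     # Kalman gain
        mu += gain * (z - mu)             # update toward measurement
        var *= 1.0 - gain
        estimates.append(mu)
    return np.array(estimates)
```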

  5. Wearable ear EEG for brain interfacing

    NASA Astrophysics Data System (ADS)

    Schroeder, Eric D.; Walker, Nicholas; Danko, Amanda S.

    2017-02-01

    Brain-computer interfaces (BCIs) measuring electrical activity via electroencephalogram (EEG) have evolved beyond clinical applications to become wireless consumer products. Typically marketed for meditation and neurotherapy, these devices are limited in scope and currently too obtrusive to be a ubiquitous wearable. Stemming from recent advancements made in hearing aid technology, wearables have been shrinking to the point that the necessary sensors, circuitry, and batteries can be fit into a small in-ear wearable device. In this work, an ear-EEG device is created with a novel system for artifact removal and signal interpretation. The small, compact, cost-effective, and discreet device is demonstrated against existing consumer electronics in this space for its signal quality, comfort, and usability. A custom mobile application is developed to process raw EEG from each device and display interpreted data to the user. Artifact removal and signal classification is accomplished via a combination of support matrix machines (SMMs) and soft thresholding of relevant statistical properties.

  6. Coincidence probabilities for spacecraft gravitational wave experiments - Massive coalescing binaries

    NASA Technical Reports Server (NTRS)

    Tinto, Massimo; Armstrong, J. W.

    1991-01-01

    Massive coalescing binary systems are candidate sources of gravitational radiation in the millihertz frequency band accessible to spacecraft Doppler tracking experiments. This paper discusses signal processing and detection probability for waves from coalescing binaries in the regime where the signal frequency increases linearly with time, i.e., 'chirp' signals. Using known noise statistics, thresholds with given false alarm probabilities are established for one- and two-spacecraft experiments. Given the threshold, the detection probability is calculated as a function of gravitational wave amplitude for both one- and two-spacecraft experiments, assuming random polarization states and under various assumptions about wave directions. This allows quantitative statements about the detection efficiency of these experiments and the utility of coincidence experiments. In particular, coincidence probabilities for two-spacecraft experiments are insensitive to the angle between the directions to the two spacecraft, indicating that near-optimal experiments can be done without constraints on spacecraft trajectories.
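
For a Gaussian detection statistic, setting a threshold from a false-alarm probability and computing the resulting detection probability can be sketched as follows; this scalar stand-in assumes Gaussian noise and independent detectors, not the paper's full analysis:

```python
from statistics import NormalDist

def threshold_for_pfa(noise_sigma, p_fa):
    """Threshold T with P(noise statistic > T) = p_fa for zero-mean Gaussian noise."""
    return noise_sigma * NormalDist().inv_cdf(1.0 - p_fa)

def detection_probability(amplitude, noise_sigma, threshold):
    """P(signal amplitude + Gaussian noise exceeds the threshold)."""
    return 1.0 - NormalDist(mu=amplitude, sigma=noise_sigma).cdf(threshold)

def coincidence_probability(p1, p2):
    """Two-spacecraft coincidence detection, assuming independent noises."""
    return p1 * p2
```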

  7. Nuclear physics for geo-neutrino studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fiorentini, Gianni; Istituto Nazionale di Fisica Nucleare, Sezione di Ferrara, I-44100 Ferrara; Ianni, Aldo

    2010-03-15

    Geo-neutrino studies are based on theoretical estimates of geo-neutrino spectra. We propose a method for a direct measurement of the energy distribution of antineutrinos from decays of long-lived radioactive isotopes. We present preliminary results for the geo-neutrinos from ²¹⁴Bi decay, a process that accounts for about one-half of the total geo-neutrino signal. The feeding probability of the lowest state of ²¹⁴Bi--the most important for the geo-neutrino signal--is found to be p₀ = 0.177 ± 0.004 (stat) +0.003/-0.001 (sys), under the hypothesis of universal neutrino spectrum shape (UNSS). This value is consistent with the (indirect) estimate of the table of isotopes. We show that achievable larger statistics and reduction of systematics should allow for the testing of possible distortions of the neutrino spectrum from that predicted using the UNSS hypothesis. Implications for the geo-neutrino signal are discussed.

  8. Method and apparatus for enhanced detection of toxic agents

    DOEpatents

    Greenbaum, Elias; Rodriguez, Jr., Miguel; Wu, Jie Jayne; Qi, Hairong

    2013-10-01

    A biosensor based detection of toxins includes enhancing a fluorescence signal by concentrating a plurality of photosynthetic organisms in a fluid into a concentrated region using biased AC electro-osmosis. A measured photosynthetic activity of the photosynthetic organisms is obtained in the concentrated region, where chemical, biological or radiological agents reduce a nominal photosynthetic activity of the photosynthetic organisms. A presence of the chemical, biological and/or radiological agents or precursors thereof, is determined in the fluid based on the measured photosynthetic activity of the concentrated plurality of photosynthetic organisms. A lab-on-a-chip system is used for the concentrating step. The presence of agents is determined from feature vectors, obtained from processing a time dependent signal using amplitude statistics and/or time-frequency analysis, relative to a control signal. A linear discriminant method including support vector machine classification (SVM) is used to identify the agents.

  9. Using independent component analysis for electrical impedance tomography

    NASA Astrophysics Data System (ADS)

    Yan, Peimin; Mo, Yulong

    2004-05-01

    Independent component analysis (ICA) is a way to resolve signals into independent components based on the statistical characteristics of the signals. It is a method for factoring the probability densities of measured signals into a set of densities that are as statistically independent as possible under the assumptions of a linear model. Electrical impedance tomography (EIT) is used to detect variations in the electric conductivity of the human body. Because there are variations in the conductivity distributions inside the body, EIT presents multi-channel data. In order to obtain all the information contained in different locations of tissue, it is necessary to image the individual conductivity distributions. In this paper we consider applying ICA to EIT on the signal subspace (individual conductivity distribution). Using ICA, the signal subspace is decomposed into statistically independent components. The individual conductivity distribution can then be reconstructed by the sensitivity theorem. Computer simulations show that the full information contained in the multi-conductivity distribution can be obtained by this method.
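
    The whiten-then-rotate idea behind ICA can be sketched on synthetic mixtures. This is a minimal illustration of separation by non-Gaussianity maximization, not the EIT reconstruction of the record; the sources, mixing matrix, and kurtosis-squared contrast are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2000)
s1 = np.sin(2 * np.pi * 5 * t)               # smooth source
s2 = np.sign(np.sin(2 * np.pi * 11.3 * t))   # square-wave source
S = np.vstack([s1, s2])
A = np.array([[1.0, 0.6], [0.4, 1.0]])       # unknown mixing matrix
X = A @ S                                    # observed mixtures

# Whiten: decorrelate the mixtures and normalize their variance.
Xc = X - X.mean(axis=1, keepdims=True)
cov = Xc @ Xc.T / Xc.shape[1]
d, E = np.linalg.eigh(cov)
Z = np.diag(d ** -0.5) @ E.T @ Xc            # whitened data

# For two whitened signals, ICA reduces to finding one rotation angle;
# search for the angle maximizing total non-Gaussianity (sum of kurtosis^2).
def kurt(y):
    return np.mean(y ** 4) - 3 * np.mean(y ** 2) ** 2

def rotated(theta):
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return R @ Z

angles = np.linspace(0, np.pi / 2, 500)
best = max(angles, key=lambda th: sum(kurt(c) ** 2 for c in rotated(th)))
Y = rotated(best)                            # recovered components
```

    The recovered components match the sources only up to permutation, sign, and scale, which is the usual ICA indeterminacy.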

  10. Processing Functional Near Infrared Spectroscopy Signal with a Kalman Filter to Assess Working Memory during Simulated Flight.

    PubMed

    Durantin, Gautier; Scannella, Sébastien; Gateau, Thibault; Delorme, Arnaud; Dehais, Frédéric

    2015-01-01

    Working memory (WM) is a key executive function for operating aircraft, especially when pilots have to recall series of air traffic control instructions. There is a need to implement tools to monitor WM, as its limitation may jeopardize flight safety. An innovative way to address this issue is to adopt a Neuroergonomics approach that merges knowledge and methods from Human Factors, System Engineering, and Neuroscience. A challenge of great importance for Neuroergonomics is to implement efficient brain imaging techniques to measure the brain at work and to design Brain Computer Interfaces (BCI). We used functional near infrared spectroscopy, as it has already been successfully tested to measure WM capacity in complex environments with air traffic controllers (ATC), pilots, or unmanned vehicle operators. However, the extraction of relevant features from the raw signal in ecological environments is still a critical issue due to the complexity of implementing real-time signal processing techniques without a priori knowledge. We proposed to implement the Kalman filtering approach, a signal processing technique that is efficient when the dynamics of the signal can be modeled. We based our approach on the Boynton model of the hemodynamic response. We conducted a first experiment with nine participants involving a basic WM task to estimate the noise covariances of the Kalman filter. We then conducted a more ecological experiment in our flight simulator with 18 pilots who interacted with ATC instructions (two levels of difficulty). The data were processed with the same Kalman filter settings implemented in the first experiment. This filter was benchmarked against a classical pass-band IIR filter and a Moving Average Convergence Divergence (MACD) filter. Statistical analysis revealed that the Kalman filter was the most efficient at separating the two levels of load, increasing the observed effect size in prefrontal areas involved in WM. In addition, the use of a Kalman filter increased the performance of the classification of WM levels based on the brain signal. The results suggest that the Kalman filter is a suitable approach for real-time improvement of the near infrared spectroscopy signal in ecological situations and for the development of BCI.
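
    The filtering principle can be sketched with a scalar Kalman filter. This is not the paper's Boynton-model-based filter; it uses a simple random-walk state model on a synthetic slow hemodynamic-like trend, with illustrative noise covariances Q and R.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
truth = np.sin(2 * np.pi * np.arange(n) / 200)   # slow hemodynamic-like trend
meas = truth + rng.normal(0, 0.5, n)             # noisy optical measurement

# Scalar Kalman filter with a random-walk state model:
#   x_k = x_{k-1} + w_k,  w_k ~ N(0, Q)
#   z_k = x_k + v_k,      v_k ~ N(0, R)
Q, R = 2e-3, 0.25
x, P = 0.0, 1.0
est = np.empty(n)
for k in range(n):
    P = P + Q                    # predict: state unchanged, uncertainty grows
    K = P / (P + R)              # Kalman gain
    x = x + K * (meas[k] - x)    # update with the innovation
    P = (1 - K) * P
    est[k] = x
```

    With Q and R roughly matched to the signal dynamics and measurement noise, the filtered estimate has a much lower mean squared error than the raw measurement.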

  12. A New Statistics-Based Online Baseline Restorer for a High Count-Rate Fully Digital System.

    PubMed

    Li, Hongdi; Wang, Chao; Baghaei, Hossain; Zhang, Yuxuan; Ramirez, Rocio; Liu, Shitao; An, Shaohui; Wong, Wai-Hoi

    2010-04-01

    The goal of this work is to develop a novel, accurate, real-time digital baseline restorer using online statistical processing for a high count-rate digital system such as positron emission tomography (PET). In high count-rate nuclear instrumentation applications, analog signals are DC-coupled for better performance. However, the detectors, pre-amplifiers, and other front-end electronics cause a signal baseline drift in a DC-coupled system, which degrades energy resolution and positioning accuracy. Event pileups normally exist in a high count-rate system, and the baseline drift will create errors in the event pileup correction. Hence, a baseline restorer (BLR) is required in a high count-rate system to remove the DC drift ahead of the pileup correction. Many methods have been reported for BLR, from classic analog methods to digital filter solutions. However, a single-channel analog BLR can only work below a 500 kcps count rate, and an analog front-end application-specific integrated circuit (ASIC) is normally required for applications involving hundreds of BLRs, such as a PET camera. We have developed a simple statistics-based online baseline restorer (SOBLR) for a high count-rate fully digital system. In this method, we acquire additional samples, excluding the real gamma pulses, from the existing free-running ADC in the digital system, and perform online statistical processing to generate a baseline value. This baseline value is subtracted from the digitized waveform to retrieve the original pulse with zero baseline drift. The method can self-track the baseline without involving a micro-controller. The circuit consists of two digital counter/timers, one comparator, one register, and one subtraction unit. Simulation shows a single channel works at a 30 Mcps count rate under pileup conditions. 336 baseline restorer circuits have been implemented in 12 field-programmable gate arrays (FPGAs) for our new fully digital PET system.
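
    The statistical idea, averaging only quiescent samples that do not belong to gamma pulses, can be sketched in software. This is a simplified stand-in for the hardware counter/comparator circuit; the pulse shape, threshold, and baseline value are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20000
baseline = 0.25                          # DC drift to be removed (constant here)
samples = baseline + rng.normal(0, 0.02, n)

# Inject sparse "gamma pulses" (fast tall transients) on top of the baseline.
pulse_starts = rng.choice(n - 20, 200, replace=False)
for p in pulse_starts:
    samples[p:p + 10] += 3.0 * np.exp(-np.arange(10) / 3.0)

# Statistics-based restorer: accumulate only samples below a pulse-detection
# threshold, i.e. samples belonging to the quiescent baseline.
threshold = 0.5
quiet = samples[samples < threshold]
baseline_est = quiet.sum() / quiet.size   # sum/count, as hardware counters would
restored = samples - baseline_est
```

    In hardware the sum and count would be the two counter/timers, and the comparator would implement the threshold test.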

  13. A semiempirical linear model of indirect, flat-panel x-ray detectors.

    PubMed

    Huang, Shih-Ying; Yang, Kai; Abbey, Craig K; Boone, John M

    2012-04-01

    It is important to understand signal and noise transfer in the indirect, flat-panel x-ray detector when developing and optimizing imaging systems. For optimization where simulating images is necessary, this study introduces a semiempirical model to simulate projection images with user-defined x-ray fluence interaction. The signal and noise transfer in indirect, flat-panel x-ray detectors is characterized by statistics consistent with energy integration of x-ray photons. For an incident x-ray spectrum, x-ray photons are attenuated and absorbed in the x-ray scintillator to produce light photons, which are coupled to photodiodes for signal readout. The signal mean and variance are linearly related to the energy-integrated x-ray spectrum by empirically determined factors. With the known first- and second-order statistics, images can be simulated by incorporating multipixel signal statistics and the modulation transfer function of the imaging system. To estimate the semiempirical input to this model, 500 projection images (using an indirect, flat-panel x-ray detector in the breast CT system) were acquired with 50-100 kilovolt (kV) x-ray spectra filtered with 0.1-mm tin (Sn), 0.2-mm copper (Cu), 1.5-mm aluminum (Al), or 0.05-mm silver (Ag). The signal mean and variance of each detector element and the noise power spectra (NPS) were calculated and incorporated into this model for accuracy. Additionally, the modulation transfer function of the detector system was physically measured and incorporated in the image simulation steps. For validation purposes, simulated and measured projection images of air scans were compared using 40 kV/0.1-mm Sn, 65 kV/0.2-mm Cu, 85 kV/1.5-mm Al, and 95 kV/0.05-mm Ag. The linear relationship between the measured signal statistics and the energy-integrated x-ray spectrum was confirmed and incorporated into the model. The signal mean and variance factors were linearly related to kV for each filter material (r² of signal mean to kV: 0.91, 0.93, 0.86, and 0.99 for 0.1-mm Sn, 0.2-mm Cu, 1.5-mm Al, and 0.05-mm Ag, respectively; r² of signal variance to kV: 0.99 for all four filters). The comparison of the signal and noise (mean, variance, and NPS) between the simulated and measured air scan images, assessed with absolute percent error, suggested that the model reasonably predicts the signal statistics of air scan images. Overall, the model was found to be accurate in estimating the signal statistics and spatial correlation between detector elements of images acquired with indirect, flat-panel x-ray detectors. The semiempirical linear model of indirect, flat-panel x-ray detectors was described and validated with images of air scans. The model was found to be a useful tool in understanding signal and noise transfer within indirect, flat-panel x-ray detector systems.
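
    The core of such a model, mean and variance both proportional to the energy-integrated spectrum, can be sketched as follows. The spectrum shape and the gain factors a and b are hypothetical placeholders for the empirically fitted values, and spatial correlation (MTF) is omitted.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy incident spectrum: fluence (photons/mm^2) per 1-keV energy bin.
energy = np.arange(10, 51)                   # keV
fluence = np.exp(-0.5 * ((energy - 30) / 8.0) ** 2) * 1e4

# Energy-integrated fluence drives an energy-integrating detector.
Q = np.sum(fluence * energy)                 # keV/mm^2

# Semiempirical linear model: signal mean and variance are proportional
# to Q, with empirically fitted gains (hypothetical values here).
a_gain, b_gain = 1.2e-5, 4.0e-5
mu, var = a_gain * Q, b_gain * Q

# Simulate a flat-field (air-scan) detector image with those statistics.
image = rng.normal(mu, np.sqrt(var), size=(256, 256))
```

    In the full model the white noise field would additionally be filtered to match the measured NPS and MTF of the detector.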

  14. An algorithm for the estimation of the signal-to-noise ratio in surface myoelectric signals generated during cyclic movements.

    PubMed

    Agostini, Valentina; Knaflitz, Marco

    2012-01-01

    In many applications requiring the study of the surface myoelectric signal (SMES) acquired in dynamic conditions, it is essential to have a quantitative evaluation of the quality of the collected signals. When the activation pattern of a muscle has to be obtained by means of single- or double-threshold statistical detectors, the background noise level e_noise of the signal is a necessary input parameter. Moreover, the detection strategy of double-threshold detectors may be properly tuned when the SNR and the duty cycle (DC) of the signal are known. The aim of this paper is to present an algorithm for the estimation of e_noise, SNR, and DC of an SMES collected during cyclic movements. The algorithm is validated on synthetic signals with statistical properties similar to those of SMES, as well as on more than 100 real signals.
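
    A crude envelope-based estimator of these three quantities can be sketched on a synthetic cyclic SMES. This is not the authors' algorithm; the window length, quiet-fraction heuristic, and 3x threshold are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
cycle, n_cycles = 1000, 10
n = cycle * n_cycles
x = rng.normal(0, 0.1, n)                      # background noise, sd 0.1
for c in range(n_cycles):                      # muscle burst in 30% of each cycle
    a, b = c * cycle + 100, c * cycle + 400
    x[a:b] += rng.normal(0, 1.0, b - a)

# Short-window RMS envelope.
win = 50
rms = np.sqrt(np.convolve(x ** 2, np.ones(win) / win, mode="same"))

# Noise level: median RMS over the quietest 20% of windows.
e_noise = np.median(np.sort(rms)[: n // 5])

# Activity detection, duty cycle, and SNR.
active = rms > 3 * e_noise
duty_cycle = active.mean()
snr_db = 10 * np.log10(np.mean(x[active] ** 2) / e_noise ** 2)
```

    The windowed envelope smears burst edges slightly, so the duty-cycle estimate is biased upward by roughly one window length per burst edge.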

  15. Thermodynamics of statistical inference by cells.

    PubMed

    Lang, Alex H; Fisher, Charles K; Mora, Thierry; Mehta, Pankaj

    2014-10-03

    The deep connection between thermodynamics, computation, and information is now well established both theoretically and experimentally. Here, we extend these ideas to show that thermodynamics also places fundamental constraints on statistical estimation and learning. To do so, we investigate the constraints placed by (nonequilibrium) thermodynamics on the ability of biochemical signaling networks to estimate the concentration of an external signal. We show that accuracy is limited by energy consumption, suggesting that there are fundamental thermodynamic constraints on statistical inference.

  16. The Performance of a PN Spread Spectrum Receiver Preceded by an Adaptive Interference Suppression Filter.

    DTIC Science & Technology

    1982-12-01

    ...Sequence; dj, Estimate of the Desired Signal; DEL, Sampling Time Interval; DS, Direct Sequence; c, Sufficient Statistic; E/T, Signal Power; Erfc, Complementary Error...Namely, a white Gaussian noise (WGN) generator was added. Also, a statistical subroutine was added in order to assess performance improvement at the...reference code and then passed through a correlation detector whose output is the sufficient statistic, e. Using a threshold device and the sufficient

  17. The Increased Sensitivity of Irregular Peripheral Canal and Otolith Vestibular Afferents Optimizes their Encoding of Natural Stimuli

    PubMed Central

    Schneider, Adam D.; Jamali, Mohsen; Carriot, Jerome; Chacron, Maurice J.

    2015-01-01

    Efficient processing of incoming sensory input is essential for an organism's survival. A growing body of evidence suggests that sensory systems have developed coding strategies that are constrained by the statistics of the natural environment. Consequently, it is necessary to first characterize neural responses to natural stimuli to uncover the coding strategies used by a given sensory system. Here we report for the first time the statistics of vestibular rotational and translational stimuli experienced by rhesus monkeys during natural (e.g., walking, grooming) behaviors. We find that these stimuli can reach intensities as high as 1500 deg/s and 8 G. Recordings from afferents during naturalistic rotational and linear motion further revealed strongly nonlinear responses in the form of rectification and saturation, which could not be accurately predicted by traditional linear models of vestibular processing. Accordingly, we used linear–nonlinear cascade models and found that these could accurately predict responses to naturalistic stimuli. Finally, we tested whether the statistics of natural vestibular signals constrain the neural coding strategies used by peripheral afferents. We found that both irregular otolith and semicircular canal afferents, because of their higher sensitivities, were more optimized for processing natural vestibular stimuli as compared with their regular counterparts. Our results therefore provide the first evidence supporting the hypothesis that the neural coding strategies used by the vestibular system are matched to the statistics of natural stimuli. PMID:25855169
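
    The linear versus linear-nonlinear (LN) comparison in the record can be sketched with a toy rectifying afferent. This is not the authors' fitted model; the gain, stimulus, and half-wave rectification are illustrative of why a static nonlinearity after the linear stage captures rectified responses that a purely linear fit cannot.

```python
import numpy as np

rng = np.random.default_rng(5)
stim = rng.normal(0, 1, 5000)               # broadband naturalistic-like stimulus
true_rate = np.maximum(0.0, 2.0 * stim)     # rectifying afferent, toy gain 2

# Linear model: least-squares gain fit.
g_lin = np.dot(stim, true_rate) / np.dot(stim, stim)
pred_lin = g_lin * stim

# LN cascade: fit the gain on the non-rectified (positive-drive) part,
# then pass the linear stage through a static rectifier.
pos = stim > 0
g_ln = np.dot(stim[pos], true_rate[pos]) / np.dot(stim[pos], stim[pos])
pred_ln = np.maximum(0.0, g_ln * stim)

mse_lin = np.mean((pred_lin - true_rate) ** 2)
mse_ln = np.mean((pred_ln - true_rate) ** 2)
```

    The linear fit halves its gain to compromise between the rectified and silent halves of the response, while the LN cascade recovers the true gain and predicts the silent half exactly.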

  18. Optimization method of superpixel analysis for multi-contrast Jones matrix tomography (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Miyazawa, Arata; Hong, Young-Joo; Makita, Shuichi; Kasaragod, Deepa K.; Miura, Masahiro; Yasuno, Yoshiaki

    2017-02-01

    Local statistics are widely utilized for quantification and image processing of OCT. For example, the local mean is used to reduce speckle, and the local variation of the polarization state (degree of polarization uniformity, DOPU) is used to visualize melanin. Conventionally, these statistics are calculated in a rectangular kernel whose size is uniform over the image. However, the fixed size and shape of the kernel result in a tradeoff between image sharpness and statistical accuracy. A superpixel is a cluster of pixels generated by grouping image pixels based on spatial proximity and similarity of signal values. Superpixels have varying sizes and flexible shapes which preserve the tissue structure. Here we demonstrate a new superpixel method tailored for multifunctional Jones matrix OCT (JM-OCT). The new method forms superpixels by clustering image pixels in a six-dimensional (6-D) feature space (two spatial dimensions and four dimensions of optical features). All image pixels are clustered based on their spatial proximity and optical feature similarity. The optical features are scattering, OCT-A, birefringence, and DOPU. The method is applied to retinal OCT. The generated superpixels preserve tissue structures such as retinal layers, sclera, vessels, and the retinal pigment epithelium. Hence, a superpixel can be utilized as a local statistics kernel that is more suitable than a uniform rectangular kernel. The superpixelized image can also be used for further image processing and analysis; since it reduces the number of pixels to be analyzed, it reduces the computational cost of such processing.
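
    The clustering step, grouping pixels by joint spatial and signal-value proximity, can be sketched with plain k-means on a toy two-layer image. This is a stand-in for the full 6-D superpixel method; the image, feature weighting, and k=2 are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy 40x40 "OCT" B-scan: two tissue layers with different mean signal.
h, w = 40, 40
img = np.where(np.arange(h)[:, None] < 20, 0.2, 0.8) + rng.normal(0, 0.05, (h, w))

# One feature vector per pixel: (row, col, weighted intensity), i.e.
# spatial proximity plus signal similarity, as in superpixel clustering.
yy, xx = np.mgrid[0:h, 0:w]
feat = np.column_stack([yy.ravel() / h, xx.ravel() / w, 5.0 * img.ravel()])

# Plain Lloyd k-means with k=2; deterministic init, one seed pixel per corner.
k = 2
centers = feat[[0, len(feat) - 1]].copy()
for _ in range(20):
    d = ((feat[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    labels = d.argmin(axis=1)
    centers = np.array([feat[labels == j].mean(axis=0) for j in range(k)])

labels = labels.reshape(h, w)
```

    The intensity weight controls the tradeoff between spatially compact clusters and clusters that follow the tissue signal.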

  19. Multivariate statistical analysis strategy for multiple misfire detection in internal combustion engines

    NASA Astrophysics Data System (ADS)

    Hu, Chongqing; Li, Aihua; Zhao, Xingyang

    2011-02-01

    This paper proposes a multivariate statistical analysis approach to processing the instantaneous engine speed signal for the purpose of locating multiple misfire events in internal combustion engines. The state of each cylinder is described by a characteristic vector extracted from the instantaneous engine speed signal following a three-step procedure. These characteristic vectors are considered as the values of various procedure parameters of an engine cycle. Therefore, determining the occurrence of misfire events and identifying the misfiring cylinders can be accomplished by a principal component analysis (PCA) based pattern recognition methodology. The proposed algorithm can be implemented easily in practice because the threshold can be defined adaptively without information about the operating conditions. In addition, the effect of torsional vibration on the engine speed waveform is interpreted as the presence of a super-powerful cylinder, which is also isolated by the algorithm. The misfiring cylinder and the super-powerful cylinder are often adjacent in the firing sequence; thus missed detections and false alarms can be avoided effectively by checking the relationship between the cylinders.
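
    An adaptive-threshold, PCA-based detection of this kind can be sketched on synthetic per-cylinder feature vectors. The feature model, planted misfires, and robust-statistics threshold (median plus a conservative multiple of the MAD) are illustrative, not the paper's three-step procedure.

```python
import numpy as np

rng = np.random.default_rng(7)
n_cycles, n_cyl = 120, 6

# Characteristic vector per engine cycle: per-cylinder speed-drop features.
X = rng.normal(0, 0.05, (n_cycles, n_cyl))
misfires = [15, 40, 77, 90, 111]             # planted misfire cycles, cylinder 3
X[misfires, 3] += 1.0

# Adaptive threshold from robust statistics of the cycle-deviation norms
# (no operating-condition information needed).
dev = X - np.median(X, axis=0)
score = np.linalg.norm(dev, axis=1)
mad = np.median(np.abs(score - np.median(score)))
thr = np.median(score) + 8 * mad
flagged = np.where(score > thr)[0]

# PCA on the flagged deviations identifies the misfiring cylinder: the
# dominant principal component loads on the affected cylinder.
_, _, Vt = np.linalg.svd(dev[flagged], full_matrices=False)
misfiring_cyl = int(np.argmax(np.abs(Vt[0])))
```

    Because the threshold is derived from the medians of the current data, it adapts to the noise level without any operating-condition calibration.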

  20. Statistical issues in signal extraction from microarrays

    NASA Astrophysics Data System (ADS)

    Bergemann, Tracy; Quiaoit, Filemon; Delrow, Jeffrey J.; Zhao, Lue Ping

    2001-06-01

    Microarray technologies are increasingly used in biomedical research to study genome-wide expression profiles in the post-genomic era. Their popularity is largely due to their high throughput and economical affordability. For example, microarrays have been applied to studies of the cell cycle, regulatory circuitry, cancer cell lines, tumor tissues, and drug discovery. One obstacle facing the continued success of applying microarray technologies, however, is the random variation present on microarrays: within signal spots, between spots, and among chips. In addition, signals extracted by available software packages seem to vary significantly. Despite the variety of software packages, there appear to be two major approaches to signal extraction. One approach is to focus on the identification of signal regions and hence estimation of signal levels above background levels. The other approach is to use the distribution of intensity values as a way of identifying relevant signals. Building upon both approaches, the objective of our work is to develop a method that is statistically rigorous as well as efficient and robust. Statistical issues to be considered here include: (1) how to refine grid alignment so that the overall variation is minimized, (2) how to estimate the signal levels relative to the local background levels as well as the variance of this estimate, and (3) how to integrate red and green channel signals so that the ratio of interest is stable, while simultaneously relaxing distributional assumptions.
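
    Issue (2), estimating spot signal relative to local background, can be sketched on a synthetic two-channel spot. The disc geometry, intensities, and median-based estimator are illustrative assumptions, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(8)

# Toy two-channel spot: a bright disc (signal) on a noisy local background.
size = 21
yy, xx = np.mgrid[0:size, 0:size] - size // 2
spot = yy ** 2 + xx ** 2 <= 36                   # signal region (radius 6)

def channel(fg, bg):
    img = rng.normal(bg, 5.0, (size, size))
    img[spot] += fg
    return img

red, green = channel(400.0, 100.0), channel(100.0, 100.0)

# Signal level relative to the local background: median of the spot region
# minus median of the surrounding background pixels (robust to outliers).
def extract(img):
    return np.median(img[spot]) - np.median(img[~spot])

log_ratio = np.log2(extract(red) / extract(green))
```

    Medians keep the estimate stable under dust specks and bleed-over outliers; the variance of each estimate could be bootstrapped from the same pixel sets.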

  1. Audience Diversion Due to Cable Television: A Statistical Analysis of New Data.

    ERIC Educational Resources Information Center

    Park, Rolla Edward

    A statistical analysis of new data suggests that television broadcasting will continue to prosper, despite increasing competition from cable television carrying distant signals. Data on cable and non-cable audiences in 121 counties with well defined signal choice support generalized least squares estimates of two models: total audience and…

  2. Testing for nonlinearity in time series: The method of surrogate data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Theiler, J.; Galdrikian, B.; Longtin, A.

    1991-01-01

    We describe a statistical approach for identifying nonlinearity in time series; in particular, we want to avoid claims of chaos when simpler models (such as linearly correlated noise) can explain the data. The method requires a careful statement of the null hypothesis which characterizes a candidate linear process, the generation of an ensemble of "surrogate" data sets which are similar to the original time series but consistent with the null hypothesis, and the computation of a discriminating statistic for the original and for each of the surrogate data sets. The idea is to test the original time series against the null hypothesis by checking whether the discriminating statistic computed for the original time series differs significantly from the statistics computed for each of the surrogate sets. We present algorithms for generating surrogate data under various null hypotheses, and we show the results of numerical experiments on artificial data using correlation dimension, Lyapunov exponent, and forecasting error as discriminating statistics. Finally, we consider a number of experimental time series -- including sunspots, electroencephalogram (EEG) signals, and fluid convection -- and evaluate the statistical significance of the evidence for nonlinear structure in each case. 56 refs., 8 figs.
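
    The classic surrogate construction for the linear-Gaussian null, phase randomization, can be sketched as follows. The AR(1) test series and the time-reversal-asymmetry statistic are illustrative choices; the paper covers a wider family of nulls and statistics.

```python
import numpy as np

rng = np.random.default_rng(9)

# Original time series: linearly correlated noise (an AR(1) process).
n = 1024
x = np.empty(n)
x[0] = 0.0
for t in range(1, n):
    x[t] = 0.8 * x[t - 1] + rng.normal()

# Phase-randomized surrogate: keep the amplitude spectrum (hence the linear
# autocorrelation), randomize the Fourier phases.
spec = np.fft.rfft(x)
phases = rng.uniform(0, 2 * np.pi, spec.size)
phases[0] = 0.0           # DC bin must stay real
phases[-1] = 0.0          # Nyquist bin must stay real for even n
surrogate = np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n)

# Discriminating statistic: time-reversal asymmetry (zero in expectation
# for linear Gaussian processes).
def trev(y, lag=1):
    d = y[lag:] - y[:-lag]
    return np.mean(d ** 3) / np.mean(d ** 2) ** 1.5
```

    A nonlinearity test would compare trev(x) against the distribution of trev over an ensemble of such surrogates.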

  3. Nonlinear digital signal processing in mental health: characterization of major depression using instantaneous entropy measures of heartbeat dynamics.

    PubMed

    Valenza, Gaetano; Garcia, Ronald G; Citi, Luca; Scilingo, Enzo P; Tomaz, Carlos A; Barbieri, Riccardo

    2015-01-01

    Nonlinear digital signal processing methods that address system complexity have provided useful computational tools for helping in the diagnosis and treatment of a wide range of pathologies. More specifically, nonlinear measures have been successful in characterizing patients with mental disorders such as Major Depression (MD). In this study, we propose the use of instantaneous measures of entropy, namely the inhomogeneous point-process approximate entropy (ipApEn) and the inhomogeneous point-process sample entropy (ipSampEn), to describe a novel characterization of MD patients undergoing affective elicitation. Because these measures are built within a nonlinear point-process model, they allow for the assessment of complexity in cardiovascular dynamics at each moment in time. Heartbeat dynamics were characterized from 48 healthy controls and 48 patients with MD while emotionally elicited through either neutral or arousing audiovisual stimuli. Experimental results coming from the arousing tasks show that ipApEn measures are able to instantaneously track heartbeat complexity as well as discern between healthy subjects and MD patients. Conversely, standard heart rate variability (HRV) analysis performed in both time and frequency domains did not show any statistical significance. We conclude that measures of entropy based on nonlinear point-process models might contribute to devising useful computational tools for care in mental health.
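
    The instantaneous point-process entropies (ipApEn, ipSampEn) build on the sample-entropy idea, which can be sketched in its plain batch form: count template matches of length m and m+1 within a tolerance r, and take the negative log of their ratio. The signals and parameters below are illustrative; this is not the authors' point-process formulation.

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """SampEn(m, r) = -ln(A/B), where B and A count template matches of
    length m and m+1 within tolerance r (Chebyshev distance)."""
    x = np.asarray(x, float)
    r = r_factor * x.std()
    n = len(x)

    def count_matches(mm):
        templates = np.array([x[i:i + mm] for i in range(n - mm)])
        count = 0
        for i in range(len(templates)):
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(d <= r)
        return count

    B, A = count_matches(m), count_matches(m + 1)
    return -np.log(A / B)

rng = np.random.default_rng(10)
regular = np.sin(2 * np.pi * np.arange(400) / 40)    # highly regular rhythm
irregular = rng.normal(0, 1, 400)                    # white noise
```

    A regular rhythm yields a low entropy (templates keep matching when extended by one sample), while white noise yields a high one, the same regularity contrast the entropy measures exploit in heartbeat series.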

  4. Process and system - A dual definition, revisited with consequences in metrology

    NASA Astrophysics Data System (ADS)

    Ruhm, K. H.

    2010-07-01

    Let's assert that metrology life could be easier, scientifically as well as technologically, if we intentionally made an explicit distinction between two outstanding domains, namely the given, really existent domain of processes and the just virtually existent domain of systems, the latter of which is designed and used by the human mind. The abstract domain of models, by which we map the manifold reality of processes, is itself part of the domain of systems. Models support comprehension and communication, although they are normally extreme simplifications of the properties and behaviour of a concrete reality. So, systems and signals represent processes and quantities, which are described by means of Signal and System Theory as well as by Stochastics and Statistics. The following presentation of this new, demanding and somehow irritating definition of the terms process and system as a dual pair is unusual indeed, but it opens the door widely to a better and more consistent discussion and understanding of manifold scientific tools in many areas. Metrology [4] is one of the important fields of concern for many reasons: one group of the soft and hard links between the domain of processes and the domain of systems is realised by concepts of measurement science on the one hand, and by instrumental tools of measurement technology on the other.

  5. The Juvenile Hafnium Isotope Signal as a Record of Supercontinent Cycles

    PubMed Central

    Gardiner, Nicholas J.; Kirkland, Christopher L.; Van Kranendonk, Martin J.

    2016-01-01

    Hf isotope ratios measured in igneous zircon are controlled by the magmatic source, which may be linked to tectonic setting. Over the 200–500 Myr periodicity of the supercontinent cycle - the principal geological phenomenon controlling the prevailing global tectonic style - juvenile Hf signals, i.e. the most radiogenic, are typically measured in zircon from granites formed in arc settings (crustal growth), and evolved zircon Hf signals in granites formed in continent-collision settings (crustal reworking). Interrogations of Hf datasets for excursions related to Earth events commonly use the median value; however, this may be equivocal due to magma mixing. The most juvenile part of the Hf signal is less influenced by crustal in-mixing, and arguably a more sensitive archive of Earth's geodynamic state. We analyze the global Hf dataset for this juvenile signal, statistically correlating supercontinent amalgamation intervals with evolved Hf episodes, and breakup leading to re-assembly with juvenile Hf episodes. The juvenile Hf signal is more sensitive to Pangaea and Rodinia assembly, its amplitude increasing with successive cycles to a maximum with Gondwana assembly, which may reflect enhanced subduction-erosion. We demonstrate that the juvenile Hf signal carries important information on the prevailing global magmatic style, and thus tectonic processes. PMID:27924946

  6. Relative Wave Energy based Adaptive Neuro-Fuzzy Inference System model for the Estimation of Depth of Anaesthesia.

    PubMed

    Benzy, V K; Jasmin, E A; Koshy, Rachel Cherian; Amal, Frank; Indiradevi, K P

    2018-01-01

    Advancements in medical research and intelligent modeling techniques have led to developments in anaesthesia management. The present study is targeted at estimating the depth of anaesthesia using cognitive signal processing and intelligent modeling techniques. The neurophysiological signal that reflects the cognitive state under anaesthetic drugs is the electroencephalogram signal. The information available in electroencephalogram signals during anaesthesia is drawn out by extracting relative wave energy features from the anaesthetic electroencephalogram signals. A discrete wavelet transform is used to decompose the electroencephalogram signals into four levels, and the relative wave energy is then computed from the approximation and detail coefficients of the sub-band signals. Relative wave energy is extracted to find the degree of importance of the different electroencephalogram frequency bands associated with the different anaesthetic phases: awake, induction, maintenance, and recovery. The Kruskal-Wallis statistical test is applied to the relative wave energy features to check their capability to discriminate the awake, light anaesthesia, moderate anaesthesia, and deep anaesthesia states. A novel depth of anaesthesia index is generated by implementing an adaptive neuro-fuzzy inference system based fuzzy c-means clustering algorithm which uses the relative wave energy features as inputs. Finally, the generated depth of anaesthesia index is compared with a commercially available depth of anaesthesia monitor, the Bispectral index.
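
    Relative wave energy from a multilevel DWT can be sketched with a hand-rolled orthonormal Haar decomposition (a stand-in for whatever mother wavelet the study used; the synthetic EEG-like signal is illustrative).

```python
import numpy as np

def haar_dwt_levels(x, levels=4):
    """Decompose x into detail subbands d1..d_levels and a final
    approximation with the orthonormal Haar wavelet."""
    a = np.asarray(x, float)
    details = []
    for _ in range(levels):
        d = (a[0::2] - a[1::2]) / np.sqrt(2)   # detail (high-pass) coefficients
        a = (a[0::2] + a[1::2]) / np.sqrt(2)   # approximation (low-pass)
        details.append(d)
    return details, a

rng = np.random.default_rng(11)
t = np.arange(1024) / 256.0
eeg_like = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.normal(size=1024)

details, approx = haar_dwt_levels(eeg_like)
energies = np.array([np.sum(d ** 2) for d in details] + [np.sum(approx ** 2)])
relative_wave_energy = energies / energies.sum()
```

    Because the transform is orthonormal, the subband energies partition the total signal energy exactly, so the relative wave energies form a distribution over frequency bands that can feed a classifier or clustering stage.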

  7. Stochastic Resonance in Signal Detection and Human Perception

    DTIC Science & Technology

    2006-07-05

    learning scheme performing a stochastic gradient ascent on the SNR to determine the optimal noise level based on the samples from the process. Rather than...produce some SR effect in threshold neurons and a new statistically robust learning law was proposed to find the optimal noise level. [McDonnell...Ultimately, we know that it is the brain that responds to a visual stimulus causing neurons to fire. Conceivably if we understood the effect of the noise PDF

  8. Adaptive Locally Optimum Processing for Interference Suppression from Communication and Undersea Surveillance Signals

    DTIC Science & Technology

    1994-07-01

    1993. "Analysis of the 1730-1732. Track-Before-Detect Approach to Target Detection using Pixel Statistics", to appear in IEEE Transactions Scholz, J...large surveillance arrays. One approach to combining energy in different spatial cells is track-before-detect. References to examples appear in the next... track-before-detect problem. The results obtained are not expected to depend strongly on model details. In particular, the structure of the tracking

  9. Coupled Research in Ocean Acoustics and Signal Processing for the Next Generation of Underwater Acoustic Communication Systems

    DTIC Science & Technology

    2016-08-05

    technique which used unobserved "intermediate" variables to break a high-dimensional estimation problem such as least-squares (LS) optimization of a large...Least Squares (GEM-LS). The estimator is iterative and the work in this time period focused on characterizing the convergence properties of this...approach by relaxing the statistical assumptions, which is termed the Relaxed Approximate Graph-Structured Recursive Least Squares (RAGS-RLS). This

  10. Introducing Multisensor Satellite Radiance-Based Evaluation for Regional Earth System Modeling

    NASA Technical Reports Server (NTRS)

    Matsui, T.; Santanello, J.; Shi, J. J.; Tao, W.-K.; Wu, D.; Peters-Lidard, C.; Kemp, E.; Chin, M.; Starr, D.; Sekiguchi, M.

    2014-01-01

    Earth System modeling has become more complex, and its evaluation using satellite data has also become more difficult due to model and data diversity. Therefore, the fundamental methodology of using satellite direct measurements with instrumental simulators should be addressed especially for modeling community members lacking a solid background of radiative transfer and scattering theory. This manuscript introduces principles of multisatellite, multisensor radiance-based evaluation methods for a fully coupled regional Earth System model: NASA-Unified Weather Research and Forecasting (NU-WRF) model. We use a NU-WRF case study simulation over West Africa as an example of evaluating aerosol-cloud-precipitation-land processes with various satellite observations. NU-WRF-simulated geophysical parameters are converted to the satellite-observable raw radiance and backscatter under nearly consistent physics assumptions via the multisensor satellite simulator, the Goddard Satellite Data Simulator Unit. We present varied examples of simple yet robust methods that characterize forecast errors and model physics biases through the spatial and statistical interpretation of various satellite raw signals: infrared brightness temperature (Tb) for surface skin temperature and cloud top temperature, microwave Tb for precipitation ice and surface flooding, and radar and lidar backscatter for aerosol-cloud profiling simultaneously. Because raw satellite signals integrate many sources of geophysical information, we demonstrate user-defined thresholds and a simple statistical process to facilitate evaluations, including the infrared-microwave-based cloud types and lidar/radar-based profile classifications.

  11. Computational Dysfunctions in Anxiety: Failure to Differentiate Signal From Noise.

    PubMed

    Huang, He; Thompson, Wesley; Paulus, Martin P

    2017-09-15

    Differentiating whether an action leads to an outcome by chance or by an underlying statistical regularity that signals environmental change profoundly affects adaptive behavior. Previous studies have shown that anxious individuals may not appropriately differentiate between these situations. This investigation aims to precisely quantify the process deficit in anxious individuals and determine the degree to which these process dysfunctions are specific to anxiety. One hundred twenty-two subjects recruited as part of an ongoing large clinical population study completed a change point detection task. Reinforcement learning models were used to explicate observed behavioral differences in low anxiety (Overall Anxiety Severity and Impairment Scale score ≤ 8) and high anxiety (Overall Anxiety Severity and Impairment Scale score ≥ 9) groups. High anxiety individuals used a suboptimal decision strategy characterized by a higher lose-shift rate. Computational models and simulations revealed that this difference was related to a higher base learning rate. These findings are better explained in a context-dependent reinforcement learning model. Anxious subjects' exaggerated response to uncertainty leads to a suboptimal decision strategy that makes it difficult for these individuals to determine whether an action is associated with an outcome by chance or by some statistical regularity. These findings have important implications for developing new behavioral intervention strategies using learning models. Copyright © 2017 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.
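    The learning-rate account above can be illustrated with a minimal delta-rule sketch. This is an assumption for illustration only (the paper fits context-dependent reinforcement-learning models to a change point detection task); `delta_rule_update` and `lose_shift_rate` are hypothetical helper names:

```python
def delta_rule_update(q, reward, alpha):
    """Rescorla-Wagner delta-rule update: a higher base learning rate
    alpha makes the value estimate track single outcomes more strongly."""
    return q + alpha * (reward - q)

def lose_shift_rate(choices, rewards):
    """Fraction of post-loss trials on which the agent switched action,
    the behavioural statistic reported as elevated in the high-anxiety group."""
    shifts = losses = 0
    for t in range(1, len(choices)):
        if rewards[t - 1] == 0:  # previous trial was a loss
            losses += 1
            shifts += choices[t] != choices[t - 1]
    return shifts / losses if losses else 0.0
```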

  12. A comprehensive statistical investigation of schlieren image velocimetry (SIV) using high-velocity helium jet

    NASA Astrophysics Data System (ADS)

    Biswas, Sayan; Qiao, Li

    2017-03-01

    A detailed statistical assessment of seedless velocity measurement using Schlieren Image Velocimetry (SIV) was performed using the open-source Robust Phase Correlation (RPC) algorithm. A well-known flow field, an axisymmetric turbulent helium jet, was analyzed in the near and intermediate regions (0 ≤ x/d ≤ 20) for two different Reynolds numbers, Re_d = 11,000 and Re_d = 22,000, using schlieren with a horizontal knife-edge, schlieren with a vertical knife-edge, and the shadowgraph technique, and the resulting velocity fields from the SIV techniques were compared to traditional Particle Image Velocimetry (PIV) measurements. A novel, inexpensive, easy-to-set-up two-camera SIV technique was demonstrated to measure the high-velocity turbulent jet, with jet exit velocities of 304 m/s (Mach 0.3) and 611 m/s (Mach 0.6), respectively. Several image restoration and enhancement techniques were tested to improve the signal-to-noise ratio (SNR) in the schlieren and shadowgraph images. Processing and post-processing parameters for the SIV techniques were examined in detail. A quantitative comparison between the self-seeded SIV techniques and traditional PIV was made using correlation statistics. While the resulting flow fields from schlieren with a horizontal knife-edge and from shadowgraph showed excellent agreement with PIV measurements, schlieren with a vertical knife-edge performed poorly. The performance of spatial cross-correlations at different jet locations using the SIV techniques and PIV was evaluated. Turbulence quantities such as turbulence intensity, mean velocity fields, and Reynolds shear stress heavily influenced the spatial correlations and correlation-plane SNR. Several performance metrics, such as the primary peak ratio (PPR), peak to correlation energy (PCE), and the probability distributions of signal and noise, were used to compare the capability and potential of the different SIV techniques.
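    A minimal sketch of two correlation-plane metrics of the kind mentioned above. This is a simplified assumption: the PPR here uses the two largest plane values rather than the two tallest local peaks, and both function names are hypothetical:

```python
import numpy as np

def primary_peak_ratio(corr_plane):
    """Simplified PPR: tallest correlation value over the second tallest,
    a stand-in for the primary-vs-secondary-peak detectability metric."""
    flat = np.sort(corr_plane, axis=None)
    return float(flat[-1] / flat[-2])

def peak_displacement(corr_plane):
    """Integer pixel displacement of the primary peak from the plane centre,
    i.e. the raw displacement estimate a correlation-based PIV/SIV pass yields."""
    iy, ix = np.unravel_index(np.argmax(corr_plane), corr_plane.shape)
    return iy - corr_plane.shape[0] // 2, ix - corr_plane.shape[1] // 2
```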

  13. Evidence for speckle effects on pulsed CO2 lidar signal returns from remote targets

    NASA Technical Reports Server (NTRS)

    Menzies, R. T.; Kavaya, M. J.; Flamant, P. H.

    1984-01-01

    A pulsed CO2 lidar was used to study statistical properties of signal returns from various rough surfaces at distances near 2 km. These included natural in situ topographic materials as well as man-made hard targets. Three lidar configurations were used: heterodyne detection with single temporal mode transmitter pulses, and direct detection with single and multiple temporal mode pulses. The significant differences in signal return statistics, due largely to speckle effects, are discussed.

  14. High-Density Signal Interface Electromagnetic Radiation Prediction for Electromagnetic Compatibility Evaluation.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Halligan, Matthew

    Radiated power calculation approaches for practical scenarios of incomplete high-density interface characterization information and incomplete incident power information are presented. The suggested approaches build upon a method that characterizes power losses through the definition of power loss constant matrices. Potential radiated power estimates include using total power loss information, partial radiated power loss information, worst case analysis, and statistical bounding analysis. A method is also proposed to calculate radiated power when incident power information is not fully known for non-periodic signals at the interface. Incident data signals are modeled from a two-state Markov chain where bit state probabilities are derived. The total spectrum for windowed signals is postulated as the superposition of spectra from individual pulses in a data sequence. Statistical bounding methods are proposed as a basis for the radiated power calculation due to the statistical calculation complexity to find a radiated power probability density function.
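    For the two-state Markov bit model, the stationary state probabilities follow directly from the transition probabilities via the balance condition; a small sketch (the function name is hypothetical, and this is the textbook result, not the report's derivation):

```python
def stationary_bit_probs(p01, p10):
    """Stationary probabilities of a two-state Markov bit source.

    p01 = P(next bit = 1 | current bit = 0)
    p10 = P(next bit = 0 | current bit = 1)
    Balance condition pi0 * p01 = pi1 * p10 gives pi1 = p01 / (p01 + p10).
    """
    pi1 = p01 / (p01 + p10)
    return 1.0 - pi1, pi1
```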

  15. Phase slips in oscillatory hair bundles.

    PubMed

    Roongthumskul, Yuttana; Shlomovitz, Roie; Bruinsma, Robijn; Bozovic, Dolores

    2013-04-05

    Hair cells of the inner ear contain an active amplifier that allows them to detect extremely weak signals. As one of the manifestations of an active process, spontaneous oscillations arise in fluid immersed hair bundles of in vitro preparations of selected auditory and vestibular organs. We measure the phase-locking dynamics of oscillatory bundles exposed to low-amplitude sinusoidal signals, a transition that can be described by a saddle-node bifurcation on an invariant circle. The transition is characterized by the occurrence of phase slips, at a rate that is dependent on the amplitude and detuning of the applied drive. The resultant staircase structure in the phase of the oscillation can be described by the stochastic Adler equation, which reproduces the statistics of phase slip production.
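    The staircase dynamics can be sketched by integrating the stochastic Adler equation dφ = (δ − ε sin φ) dt + σ dW with Euler-Maruyama and counting 2π phase slips. The parameter values and function name are illustrative assumptions, not the paper's fits:

```python
import math, random

def adler_phase_slips(delta, eps, sigma, t_end=50.0, dt=1e-3, seed=1):
    """Count 2*pi phase slips of dphi = (delta - eps*sin(phi)) dt + sigma dW.
    The phase is locked when |delta| < eps and sigma = 0; slips appear with
    detuning (delta) or noise (sigma), as in the driven hair-bundle experiments."""
    rng = random.Random(seed)
    phi, slips = 0.0, 0
    for _ in range(int(t_end / dt)):
        phi += (delta - eps * math.sin(phi)) * dt          # deterministic drift
        phi += sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)  # diffusion term
        while phi >= 2.0 * math.pi:                         # crossed a full cycle
            phi -= 2.0 * math.pi
            slips += 1
    return slips
```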

  16. Real Time Data Acquisition and Online Signal Processing for Magnetoencephalography

    NASA Astrophysics Data System (ADS)

    Rongen, H.; Hadamschek, V.; Schiek, M.

    2006-06-01

    To establish improved therapies for patients suffering from severe neurological and psychiatric diseases, a demand-controlled, desynchronizing brain pacemaker has been developed with techniques from statistical physics and nonlinear dynamics. To optimize the novel therapeutic approach, brain activity is investigated with a magnetoencephalography (MEG) system prior to surgery. For this, a real-time data acquisition system for a 148-channel MEG and online signal processing for artifact rejection, filtering, cross-trial phase resetting analysis, and three-dimensional (3-D) reconstruction of the cerebral current sources were developed. The developed PCI bus hardware is based on an FPGA and DSP design, using the benefits of both architectures. The reconstruction and visualization of the 3-D volume data are done by the PC which hosts the real-time DAQ and pre-processing board. The framework of the MEG-online system is introduced and the architecture of the real-time DAQ board and online reconstruction is described. In addition, we show first results with the MEG-online system for the investigation of dynamic brain activities in relation to external visual stimulation, based on test data sets.
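    REPLACED-BY-CLARIFY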

  17. Information Theory Filters for Wavelet Packet Coefficient Selection with Application to Corrosion Type Identification from Acoustic Emission Signals

    PubMed Central

    Van Dijck, Gert; Van Hulle, Marc M.

    2011-01-01

    The damage caused by corrosion in chemical process installations can lead to unexpected plant shutdowns and the leakage of potentially toxic chemicals into the environment. When subjected to corrosion, structural changes in the material occur, leading to energy releases as acoustic waves. This acoustic activity can in turn be used for corrosion monitoring, and even for predicting the type of corrosion. Here we apply wavelet packet decomposition to extract features from acoustic emission signals. We then use the extracted wavelet packet coefficients for distinguishing between the most important types of corrosion processes in the chemical process industry: uniform corrosion, pitting and stress corrosion cracking. The local discriminant basis selection algorithm can be considered as a standard for the selection of the most discriminative wavelet coefficients. However, it does not take the statistical dependencies between wavelet coefficients into account. We show that, when these dependencies are ignored, a lower accuracy is obtained in predicting the corrosion type. We compare several mutual information filters to take these dependencies into account in order to arrive at a more accurate prediction. PMID:22163921
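    A plug-in histogram estimator of the mutual information between a single wavelet-packet coefficient and the corrosion-type label, the quantity such MI filters rank features by. This is a simplified sketch under stated assumptions (the bin count and function name are mine, and the paper compares several, more refined MI filters):

```python
import numpy as np

def mutual_information_bits(feature, labels, bins=8):
    """I(X; Y) in bits between a discretised coefficient and a class label,
    estimated from the joint histogram of (binned value, class)."""
    edges = np.histogram(feature, bins=bins)[1]
    x = np.digitize(feature, edges[1:-1])        # bin index in 0..bins-1
    classes = sorted(set(labels))
    joint = np.zeros((bins, len(classes)))
    for xi, yi in zip(x, labels):
        joint[xi, classes.index(yi)] += 1.0
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0                                # avoid log(0) terms
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())
```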

  18. Gravitational wave searches with pulsar timing arrays: Cancellation of clock and ephemeris noises

    NASA Astrophysics Data System (ADS)

    Tinto, Massimo

    2018-04-01

    We propose a data processing technique to cancel monopole and dipole noise sources (such as clock and ephemeris noises, respectively) in pulsar timing array searches for gravitational radiation. These noises are the dominant sources of correlated timing fluctuations in the lower part (≈10^-9-10^-8 Hz) of the gravitational wave band accessible by pulsar timing experiments. After deriving the expressions that reconstruct these noises from the timing data, we estimate the gravitational wave sensitivity of our proposed processing technique to single-source signals to be at least one order of magnitude higher than that achievable by directly processing the timing data from an equal-size array. Since arrays can generate pairs of clock and ephemeris-free timing combinations that are no longer affected by correlated noises, we implement with them the cross-correlation statistic to search for an isotropic stochastic gravitational wave background. We find the resulting optimal signal-to-noise ratio to be more than one order of magnitude larger than that obtainable by correlating pairs of timing data from arrays of equal size.
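    The cross-correlation statistic for a common signal shared by two data streams with independent noise can be sketched as follows. The normalisation and SNR estimate are textbook simplifications (my assumptions), not the paper's exact expressions:

```python
import numpy as np

def cross_correlation_snr(x, y):
    """Zero-lag cross-correlation statistic per sample and a rough SNR.
    The statistic's expectation is the common-signal power; its standard
    deviation under the no-signal null scales as std(x)*std(y)/sqrt(N)."""
    n = len(x)
    stat = float(np.dot(x, y)) / n
    null_sigma = float(np.std(x) * np.std(y)) / np.sqrt(n)
    return stat, stat / null_sigma
```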

  19. Distribution and Variability of Satellite-Derived Signals of Isolated Convection Initiation Events Over Central Eastern China

    NASA Astrophysics Data System (ADS)

    Huang, Yipeng; Meng, Zhiyong; Li, Jing; Li, Wanbiao; Bai, Lanqiang; Zhang, Murong; Wang, Xi

    2017-11-01

    This study combined measurements from the Chinese operational geostationary satellite Fengyun-2E (FY-2E) and ground-based weather radars to conduct a statistical survey of isolated convection initiation (CI) over central eastern China (CEC). The convective environment in CEC is modulated by the complex topography and monsoon climate. From May to August 2010, a total of 1,630 isolated CI signals were derived from FY-2E using a semiautomated method. The formation of these satellite-derived CI signals peaks in the early afternoon and occurs with high frequency in areas with remarkable terrain inhomogeneity (e.g., mountain, water, and mountain-water areas). The high signal frequency areas shift from northwest CEC (dry, high altitude) in early summer to southeast CEC (humid, low altitude) in midsummer along with an increasing monthly mean frequency. The satellite-derived CI signals tend to have longer lead times (the time difference between satellite-derived signal formation and radar-based CI) in the late morning and afternoon than in the early morning and night. During the early morning and night, the distinction between cloud top signatures and background terrestrial radiation becomes less apparent, resulting in delayed identification of the signals and thus short and even negative lead times. A decline in the lead time is observed from May to August, likely due to the increasing cloud growth rate and warm-rain processes. Results show increasing lead times with increasing landscape elevation, likely due to more warm-rain processes over the coastal sea and plain, along with a decreasing cloud growth rate from hill and mountain to the plateau.

  20. Signal processing for smart cards

    NASA Astrophysics Data System (ADS)

    Quisquater, Jean-Jacques; Samyde, David

    2003-06-01

    In 1998, Paul Kocher showed that when a smart card computes cryptographic algorithms, for signatures or encryption, its power consumption or its radiation leaks information. The keys or the secrets hidden in the card can then be recovered using a differential measurement based on the intercorrelation function. Many silicon manufacturers use desynchronization countermeasures to defeat power analysis. In this article we detail a new resynchronization technique. This method can be used to facilitate the use of a neural network for code recognition, making it possible to reverse engineer a software code automatically. Using data and clock separation methods, we show how to optimize the synchronization using signal processing. We then compare these methods with watermarking methods for 1D and 2D signals. The latest improvements in watermarking detection can be applied to signal processing for smart cards with very few modifications. Bayesian processing is one of the best ways to perform Differential Power Analysis, and it is possible to extract a PIN code from a smart card with very few samples. This article therefore shows the need to continue to develop effective countermeasures for cryptographic processors. Although the idea of using advanced signal processing operators has been commonly known for a long time, no publication has explained the results that can be obtained. The main idea of differential measurement is to use the cross-correlation of two random variables and to repeat consumption measurements on the processor to be analyzed. We use two processors clocked at the same external frequency and computing the same data. The applications of our design are numerous. Two measurements provide the inputs of a central operator. With the most accurate operator we can improve the signal-to-noise ratio, resynchronize the acquisition clock with the internal one, or remove jitter. The analysis based on consumption or electromagnetic measurements can be improved using our structure.
At first sight the same results could be obtained with only one smart card, but this is not entirely true because the statistical properties of the signal are not the same. As the two smart cards are subjected to the same external noise during the measurement, it is easier to reduce the influence of perturbations. This paper shows the importance of accurate countermeasures against differential analysis.

  1. Cross-Approximate Entropy parallel computation on GPUs for biomedical signal analysis. Application to MEG recordings.

    PubMed

    Martínez-Zarzuela, Mario; Gómez, Carlos; Díaz-Pernas, Francisco Javier; Fernández, Alberto; Hornero, Roberto

    2013-10-01

    Cross-Approximate Entropy (Cross-ApEn) is a useful measure to quantify the statistical dissimilarity of two time series. In spite of the advantage of Cross-ApEn over its one-dimensional counterpart (Approximate Entropy), only a few studies have applied it to biomedical signals, mainly due to its high computational cost. In this paper, we propose a fast GPU-based implementation of Cross-ApEn that makes its use over a large amount of multidimensional data feasible. The scheme followed is fully scalable, and thus maximizes the use of the GPU regardless of the number of neural signals being processed. The approach consists of processing many trials or epochs simultaneously, independently of their origin. In the case of MEG data, these trials can come from different input channels or subjects. The proposed implementation achieves an average speedup greater than 250× over a CPU parallel version running on a six-core processor. A dataset of 30 subjects containing 148 MEG channels (49 epochs of 1024 samples per channel) can be analyzed using our development in about 30 min. The same processing takes 5 days on six cores and 15 days when running on a single core. The speedup is much larger when compared to a basic sequential Matlab® implementation, which would need 58 days per subject. To our knowledge, this is the first contribution of Cross-ApEn measure computation using GPUs. This study demonstrates that this hardware is, to date, the best option for the signal processing of biomedical data with Cross-ApEn. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
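    For reference, Cross-ApEn(m, r) itself can be written compactly in NumPy. This is a plain CPU sketch of the standard definition, not the authors' GPU kernel; the vectorised template-distance matrix stands in for their parallel reduction:

```python
import numpy as np

def cross_apen(u, v, m=2, r=0.2):
    """Cross-Approximate Entropy: conditional irregularity of series v
    relative to template vectors drawn from series u."""
    u, v = np.asarray(u, float), np.asarray(v, float)

    def phi(dim):
        n = len(u) - dim + 1
        ut = np.array([u[i:i + dim] for i in range(n)])
        vt = np.array([v[i:i + dim] for i in range(n)])
        # Chebyshev distance between every u-template and every v-template
        d = np.abs(ut[:, None, :] - vt[None, :, :]).max(axis=2)
        return float(np.log((d <= r).mean(axis=1)).mean())

    return phi(m) - phi(m + 1)
```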

  2. Reconstructing surface wave profiles from reflected acoustic pulses using multiple receivers.

    PubMed

    Walstead, Sean P; Deane, Grant B

    2014-08-01

    Surface wave shapes are determined by analyzing underwater reflected acoustic signals collected at multiple receivers. The transmitted signals are of nominal frequency 300 kHz and are reflected off surface gravity waves that are paddle-generated in a wave tank. An inverse processing algorithm reconstructs 50 surface wave shapes over a length span of 2.10 m. The inverse scheme uses a broadband forward scattering model based on Kirchhoff's diffraction formula to determine wave shapes. The surface reconstruction algorithm is self-starting in that source and receiver geometry and initial estimates of wave shape are determined from the same acoustic signals used in the inverse processing. A high speed camera provides ground-truth measurements of the surface wave field for comparison with the acoustically derived surface waves. Within Fresnel zone regions the statistical confidence of the inversely optimized surface profile exceeds that of the camera profile. Reconstructed surfaces are accurate to a resolution of about a quarter-wavelength of the acoustic pulse only within Fresnel zones associated with each source and receiver pair. Multiple isolated Fresnel zones from multiple receivers extend the spatial extent of accurate surface reconstruction while overlapping Fresnel zones increase confidence in the optimized profiles there.
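    The Fresnel-zone resolution claim follows the standard zone-radius formula r_n = sqrt(n λ d1 d2 / (d1 + d2)). A quick check with illustrative geometry: the 300 kHz carrier is from the abstract, while the ~1480 m/s sound speed and the source/receiver distances are assumptions for illustration:

```python
import math

def fresnel_zone_radius(wavelength, d_source, d_receiver, n=1):
    """Radius of the n-th Fresnel zone on the reflecting surface, for
    given source and receiver distances to the specular point."""
    return math.sqrt(n * wavelength * d_source * d_receiver
                     / (d_source + d_receiver))

# 300 kHz in water (~1480 m/s) gives a ~4.9 mm acoustic wavelength
wavelength = 1480.0 / 300e3
r1 = fresnel_zone_radius(wavelength, 1.0, 1.0)
```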

  3. The Novel Nonlinear Adaptive Doppler Shift Estimation Technique and the Coherent Doppler Lidar System Validation Lidar

    NASA Technical Reports Server (NTRS)

    Beyon, Jeffrey Y.; Koch, Grady J.

    2006-01-01

    The signal processing aspect of a 2-μm wavelength coherent Doppler lidar system under development at NASA Langley Research Center in Virginia is investigated in this paper. The lidar system is named VALIDAR (validation lidar), and its signal processing program estimates and displays various wind parameters in real time as data acquisition occurs. The goal is to improve the quality of the current estimates, such as power, Doppler shift, wind speed, and wind direction, especially in the low signal-to-noise-ratio (SNR) regime. A novel Nonlinear Adaptive Doppler Shift Estimation Technique (NADSET) is developed for this purpose, and its performance is analyzed using wind data acquired over a long period of time by VALIDAR. The quality of Doppler shift and power estimates by conventional Fourier-transform-based spectrum estimation methods deteriorates rapidly as SNR decreases. NADSET compensates for such deterioration in the quality of wind parameter estimates by adaptively utilizing the statistics of the Doppler shift estimates in a strong-SNR range and identifying sporadic range bins where good Doppler shift estimates are found. The effectiveness of NADSET is established by comparing the trend of wind parameters with and without NADSET applied to the long-period lidar return data.
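    The conventional Fourier-based estimate that NADSET refines is, at its core, the periodogram-peak frequency. A minimal sketch of that baseline (the function name is hypothetical; this is not the VALIDAR code):

```python
import numpy as np

def periodogram_peak_hz(signal, fs):
    """Coarse Doppler-shift estimate: frequency of the periodogram peak.
    At low SNR this peak wanders off the true line, which is the regime
    an adaptive refinement such as NADSET targets."""
    power = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return float(freqs[np.argmax(power)])
```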

  4. Noise Maps for Quantitative and Clinical Severity Towards Long-Term ECG Monitoring.

    PubMed

    Everss-Villalba, Estrella; Melgarejo-Meseguer, Francisco Manuel; Blanco-Velasco, Manuel; Gimeno-Blanes, Francisco Javier; Sala-Pla, Salvador; Rojo-Álvarez, José Luis; García-Alberola, Arcadi

    2017-10-25

    Noise and artifacts are inherent contaminating components and are particularly present in Holter electrocardiogram (ECG) monitoring. The presence of noise is even more significant in long-term monitoring (LTM) recordings, as these are collected for several days in patients following their daily activities; hence, strong artifact components can temporarily impair the clinical measurements from the LTM recordings. Traditionally, the noise presence has been dealt with as a problem of non-desirable component removal by means of several quantitative signal metrics such as the signal-to-noise ratio (SNR), but current systems do not provide any information about the true impact of noise on the ECG clinical evaluation. As a first step towards an alternative to classical approaches, this work assesses the ECG quality under the assumption that an ECG has good quality when it is clinically interpretable. Therefore, our hypotheses are that it is possible (a) to create a clinical severity score for the effect of the noise on the ECG, (b) to characterize its consistency in terms of its temporal and statistical distribution, and (c) to use it for signal quality evaluation in LTM scenarios. For this purpose, a database of external event recorder (EER) signals is assembled and labeled from a clinical point of view for its use as the gold standard of noise severity categorization. These devices are assumed to capture those signal segments more prone to be corrupted with noise during long-term periods. Then, the ECG noise is characterized through the comparison of these clinical severity criteria with conventional quantitative metrics taken from traditional noise-removal approaches, and noise maps are proposed as a novel representation tool to achieve this comparison. Our results showed that neither of the benchmarked quantitative noise measurement criteria represent an accurate enough estimation of the clinical severity of the noise. 
A case study of long-term ECG is reported, showing the statistical and temporal correspondences and properties with respect to the EER signals used to create the gold standard for clinical noise. The proposed noise maps, together with the statistical consistency of the characterization of the clinical severity of noise, pave the way towards systems that provide noise maps of clinical noise severity, allowing the user to process different ECG segments with different techniques and in terms of different measured clinical parameters.
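    One of the conventional quantitative metrics the clinical severity score is benchmarked against is the classical SNR. A minimal sketch of that baseline (the function name is hypothetical):

```python
import numpy as np

def snr_db(clean, contaminated):
    """Classical signal-to-noise ratio in dB between a reference ECG
    segment and its noise-contaminated version."""
    clean = np.asarray(clean, float)
    noise = np.asarray(contaminated, float) - clean
    return float(10.0 * np.log10(np.sum(clean ** 2) / np.sum(noise ** 2)))
```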

  5. Why the Brain Knows More than We Do: Non-Conscious Representations and Their Role in the Construction of Conscious Experience

    PubMed Central

    Dresp-Langley, Birgitta

    2011-01-01

    Scientific studies have shown that non-conscious stimuli and representations influence information processing during conscious experience. In the light of such evidence, questions arise about potential functional links between non-conscious brain representations and conscious experience. This article discusses a neural model capable of explaining how statistical learning mechanisms in dedicated resonant circuits could generate specific temporal activity traces of non-conscious representations in the brain. It then explains how reentrant signaling, top-down matching, and statistical coincidence of such activity traces may lead to the progressive consolidation of temporal patterns that constitute the neural signatures of conscious experience in networks extending across large distances beyond functionally specialized brain regions. PMID:24962683

  6. Automatic identification of bullet signatures based on consecutive matching striae (CMS) criteria.

    PubMed

    Chu, Wei; Thompson, Robert M; Song, John; Vorburger, Theodore V

    2013-09-10

    The consecutive matching striae (CMS) numeric criteria for firearm and toolmark identifications have been widely accepted by forensic examiners, although there have been questions concerning its observer subjectivity and limited statistical support. In this paper, based on signal processing and extraction, a model for the automatic and objective counting of CMS is proposed. The position and shape information of the striae on the bullet land is represented by a feature profile, which is used for determining the CMS number automatically. Rapid counting of CMS number provides a basis for ballistics correlations with large databases and further statistical and probability analysis. Experimental results in this report using bullets fired from ten consecutively manufactured barrels support this developed model. Published by Elsevier Ireland Ltd.
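    The counting step can be illustrated with a toy matcher: striae reduced to sorted positions extracted from a feature profile, a match declared within a tolerance, and CMS taken as the longest run of consecutive matches. This is an illustrative simplification of the paper's feature-profile method; the function name and tolerance are assumptions:

```python
def consecutive_matching_striae(marks_a, marks_b, tol=1.0):
    """Longest run of consecutively matching striae, where each toolmark
    is given as a sorted list of striae positions and two striae match
    when they lie within `tol` of each other."""
    matched, j = [], 0
    for a in marks_a:
        while j < len(marks_b) and marks_b[j] < a - tol:
            j += 1  # skip b-striae that are already too far behind
        matched.append(j < len(marks_b) and abs(marks_b[j] - a) <= tol)
    best = run = 0
    for m in matched:
        run = run + 1 if m else 0
        best = max(best, run)
    return best
```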

  7. Theoretical and experimental analysis of laser altimeters for barometric measurements over the ocean

    NASA Technical Reports Server (NTRS)

    Tsai, B. M.; Gardner, C. S.

    1984-01-01

    The statistical characteristics and the waveforms of ocean-reflected laser pulses are studied. The received signal is found to be corrupted by shot noise and time-resolved speckle. The statistics of time-resolved speckle and its effects on the timing accuracy of the receiver are studied in the general context of laser altimetry. For estimating the differential propagation time, various receiver timing algorithms are proposed and their performances evaluated. The results indicate that, with the parameters of a realistic altimeter, a pressure measurement accuracy of a few millibars is feasible. The data obtained from the first airborne two-color laser altimeter experiment are processed and analyzed. The results are used to verify the pressure measurement concept.

  8. Single Trial EEG Patterns for the Prediction of Individual Differences in Fluid Intelligence.

    PubMed

    Qazi, Emad-Ul-Haq; Hussain, Muhammad; Aboalsamh, Hatim; Malik, Aamir Saeed; Amin, Hafeez Ullah; Bamatraf, Saeed

    2016-01-01

    Assessing a person's intelligence level is required in many situations, such as career counseling and clinical applications. EEG evoked potentials in an oddball task and fluid intelligence scores are correlated because both reflect cognitive processing and attention. A system for the prediction of an individual's fluid intelligence level using single-trial Electroencephalography (EEG) signals has been proposed. For this purpose, we employed 2D and 3D contents, with 34 subjects each, who were divided into low-ability (LA) and high-ability (HA) groups using Raven's Advanced Progressive Matrices (RAPM) test. Using a visual oddball cognitive task, the neural activity of each group was measured and analyzed over three midline electrodes (Fz, Cz, and Pz). To predict whether an individual belongs to the LA or HA group, features were extracted using wavelet decomposition of the EEG signals recorded in the visual oddball task, and a support vector machine (SVM) was used as a classifier. Two different types of Haar-wavelet-transform-based features were extracted from the 0.3-30 Hz band of the EEG signals. Statistical wavelet features and wavelet coefficient features from the frequency bands 0.0-1.875 Hz (delta low) and 1.875-3.75 Hz (delta high) resulted in 100% and 98% prediction accuracies, respectively, for both 2D and 3D contents. The analysis of these frequency bands showed a clear difference between the LA and HA groups. Further, the discriminative values of the features were validated using statistical significance tests and inter-class and intra-class variation analysis. A statistical test also showed that there was no effect of 2D versus 3D content on the assessment of fluid intelligence level. Comparisons with state-of-the-art techniques showed the superiority of the proposed system.
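    The two feature families can be sketched with a pure-NumPy Haar transform: statistical summaries of the detail coefficients at each decomposition level. This is a simplified stand-in for the study's full feature sets (my assumption: mean, standard deviation, and energy per level; input length must be divisible by 2^levels):

```python
import numpy as np

def haar_dwt(x):
    """Single-level Haar DWT: approximation and detail coefficients."""
    x = np.asarray(x, float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def statistical_wavelet_features(x, levels=3):
    """Mean, std, and energy of the detail coefficients at each level."""
    feats = []
    for _ in range(levels):
        x, d = haar_dwt(x)  # recurse on the approximation branch
        feats += [float(d.mean()), float(d.std()), float(np.sum(d ** 2))]
    return feats
```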

  9. Gene expression profiling of Japanese psoriatic skin reveals an increased activity in molecular stress and immune response signals.

    PubMed

    Kulski, Jerzy K; Kenworthy, William; Bellgard, Matthew; Taplin, Ross; Okamoto, Koichi; Oka, Akira; Mabuchi, Tomotaka; Ozawa, Akira; Tamiya, Gen; Inoko, Hidetoshi

    2005-12-01

    Gene expression profiling was performed on biopsies of affected and unaffected psoriatic skin and normal skin from seven Japanese patients to obtain insights into the pathways that control this disease. HUG95A Affymetrix DNA chips that contained oligonucleotide arrays of approximately 12,000 well-characterized human genes were used in the study. The statistical analysis of the Affymetrix data, based on the ranking of the Student t-test statistic, revealed a complex regulation of molecular stress and immune gene responses. The majority of the 266 induced genes in affected and unaffected psoriatic skin were involved with interferon mediation, immunity, cell adhesion, cytoskeleton restructuring, protein trafficking and degradation, RNA regulation and degradation, signal transduction, apoptosis and atypical epidermal cellular proliferation and differentiation. The disturbances in the normal protein degradation equilibrium of skin were reflected by the significant increase in the gene expression of various protease inhibitors and proteinases, including the induced components of the ATP/ubiquitin-dependent non-lysosomal proteolytic pathway that is involved with peptide processing and presentation to T cells. Some of the up-regulated genes, such as TGM1, IVL, FABP5, CSTA and SPRR, are well-known psoriatic markers involved in atypical epidermal cellular organization and differentiation. In the comparison between the affected and unaffected psoriatic skin, the transcription factor JUNB was found at the top of the statistical rankings for the up-regulated genes in affected skin, suggesting that it has an important but as yet undefined role in psoriasis. Our gene expression data and analysis suggest that psoriasis is a chronic interferon- and T-cell-mediated immune disease of the skin where the imbalance in epidermal cellular structure, growth and differentiation arises from the molecular antiviral stress signals initiating inappropriate immune responses.
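    The ranking step described above amounts to ordering genes by the magnitude of a two-sample t statistic across arrays. A minimal sketch (the Welch form and the function names are assumptions; the study's exact ranking procedure may differ):

```python
import numpy as np

def t_statistic(a, b):
    """Welch two-sample t statistic between expression values of one gene
    in two groups of arrays."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
    return float((a.mean() - b.mean()) / se)

def rank_genes(expr_a, expr_b):
    """Gene indices ordered by |t|, most differentially expressed first."""
    t = [abs(t_statistic(expr_a[g], expr_b[g])) for g in range(len(expr_a))]
    return sorted(range(len(t)), key=lambda g: -t[g])
```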

  10. Relationship between cutoff frequency and accuracy in time-interval photon statistics applied to oscillating signals

    NASA Astrophysics Data System (ADS)

    Rebolledo, M. A.; Martinez-Betorz, J. A.

    1989-04-01

    In this paper the accuracy in the determination of the period of an oscillating signal, when obtained from the photon statistics time-interval probability, is studied as a function of the precision (the inverse of the cutoff frequency of the photon counting system) with which time intervals are measured. The results are obtained by means of an experiment with a square-wave signal, where the Fourier or square-wave transforms of the time-interval probability are measured. It is found that for values of the frequency of the signal near the cutoff frequency the errors in the period are small.

  11. Statistical performance evaluation of ECG transmission using wireless networks.

    PubMed

    Shakhatreh, Walid; Gharaibeh, Khaled; Al-Zaben, Awad

    2013-07-01

    This paper presents a simulation of the transmission of biomedical signals (using the ECG signal as an example) over wireless networks. An investigation of the effects of channel impairments, including SNR, path-loss exponent, and path delay, and of network impairments such as packet loss probability, on the diagnosability of the received ECG signal is presented. The ECG signal is transmitted through a wireless network system composed of two communication protocols: an 802.15.4-ZigBee protocol and an 802.11b protocol. The performance of the transmission is evaluated using higher-order statistics parameters such as kurtosis and negentropy, in addition to common techniques such as the PRD, RMS and cross correlation.
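    The higher-order statistics mentioned have compact definitions; a sketch using excess kurtosis and the moment-based negentropy approximation (an assumption on my part: the abstract does not specify which negentropy estimator is used):

```python
import numpy as np

def excess_kurtosis(x):
    """Fourth standardised moment minus 3 (zero for a Gaussian)."""
    x = np.asarray(x, float)
    z = (x - x.mean()) / x.std()
    return float((z ** 4).mean() - 3.0)

def negentropy_approx(x):
    """Moment-based negentropy approximation,
    J(x) ~ skew^2/12 + excess_kurtosis^2/48 (zero for a Gaussian),
    as a simple non-Gaussianity measure for a received signal."""
    x = np.asarray(x, float)
    z = (x - x.mean()) / x.std()
    skew = float((z ** 3).mean())
    return skew ** 2 / 12.0 + excess_kurtosis(x) ** 2 / 48.0
```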

  12. Optical communication with two-photon coherent states. II - Photoemissive detection and structured receiver performance

    NASA Technical Reports Server (NTRS)

    Shapiro, J. H.; Yuen, H. P.; Machado Mata, J. A.

    1979-01-01

    In a previous paper (1978), the authors developed a method of analyzing the performance of two-photon coherent state (TCS) systems for free-space optical communications. General theorems permitting application of classical point process results to detection and estimation of signals in arbitrary quantum states were derived. The present paper examines the general problem of photoemissive detection statistics. On the basis of the photocounting theory of Kelley and Kleiner (1964) it is shown that for arbitrary pure state illumination, the resulting photocurrent is in general a self-exciting point process. The photocount statistics for first-order coherent fields reduce to those of a special class of Markov birth processes, which the authors term single-mode birth processes. These general results are applied to the structure of TCS radiation, and it is shown that the use of TCS radiation with direct or heterodyne detection results in minimal performance increments over comparable coherent-state systems. However, significant performance advantages are offered by use of TCS radiation with homodyne detection. The abstract quantum descriptions of homodyne and heterodyne detection are derived and a synthesis procedure for obtaining quantum measurements described by arbitrary TCS is given.

  13. Effect of delayed auditory feedback on stuttering with and without central auditory processing disorders.

    PubMed

    Picoloto, Luana Altran; Cardoso, Ana Cláudia Vieira; Cerqueira, Amanda Venuti; Oliveira, Cristiane Moço Canhetti de

    2017-12-07

    To verify the effect of delayed auditory feedback on the speech fluency of individuals who stutter, with and without central auditory processing disorders. The participants were twenty individuals who stutter, aged 7 to 17 years, divided into two groups: the Stuttering Group with Auditory Processing Disorders (SGAPD), 10 individuals with central auditory processing disorders, and the Stuttering Group (SG), 10 individuals without central auditory processing disorders. The procedures were: fluency assessment under non-altered auditory feedback (NAF) and delayed auditory feedback (DAF), and assessment of stuttering severity and central auditory processing (CAP). Phono Tools software was used to introduce a delay of 100 milliseconds in the auditory feedback. The Wilcoxon signed-rank test was used in the intragroup analysis and the Mann-Whitney test in the intergroup analysis. DAF caused a statistically significant reduction in the SG: in the frequency score of stuttering-like disfluencies in the Stuttering Severity Instrument analysis, in the number of blocks and repetitions of monosyllabic words, and in the frequency of stuttering-like disfluencies of the duration type. Delayed auditory feedback did not cause statistically significant effects on the fluency of the SGAPD, the individuals who stutter with auditory processing disorders. The effect of delayed auditory feedback on the speech fluency of individuals who stutter differed between the two groups: fluency improved only in the individuals without auditory processing disorder.

  14. Exploring the complementarity of THz pulse imaging and DCE-MRIs: Toward a unified multi-channel classification and a deep learning framework.

    PubMed

    Yin, X-X; Zhang, Y; Cao, J; Wu, J-L; Hadjiloucas, S

    2016-12-01

    We provide a comprehensive account of recent advances in biomedical image analysis and classification from two complementary imaging modalities: terahertz (THz) pulse imaging and dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). The work aims to highlight underlying commonalities in both data structures so that a common multi-channel data fusion framework can be developed. Signal pre-processing in both datasets is discussed briefly, taking into consideration advances in multi-resolution analysis and model-based fractional-order calculus system identification. Developments in statistical signal processing using principal component and independent component analysis are also considered. These algorithms have been developed independently by the THz-pulse imaging and DCE-MRI communities, and there is scope to place them in a common multi-channel framework to provide better software standardization at the pre-processing de-noising stage. A comprehensive discussion of feature selection strategies is also provided and the importance of preserving textural information is highlighted. Feature extraction and classification methods taking into consideration recent advances in support vector machine (SVM) and extreme learning machine (ELM) classifiers and their complex extensions are presented. An outlook on Clifford algebra classifiers and deep learning techniques suitable to both types of datasets is also provided. The work points toward the direction of developing a new unified multi-channel signal processing framework for biomedical image analysis that will explore synergies from both sensing modalities for inferring disease proliferation. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  15. Extraction of phase information in daily stock prices

    NASA Astrophysics Data System (ADS)

    Fujiwara, Yoshi; Maekawa, Satoshi

    2000-06-01

    It is known that, on an intermediate time scale such as days, stock market fluctuations possess several statistical properties that are common to different markets. Namely, logarithmic returns of an asset price have (i) a truncated Pareto-Lévy distribution, (ii) vanishing linear correlation, and (iii) volatility clustering with power-law autocorrelation. Fact (ii) is a consequence of the nonexistence of arbitragers with simple strategies, but this does not imply statistical independence of market fluctuations. Little attention has been paid to the temporal structure of higher-order statistics, although it contains important information on market dynamics. We applied a signal separation technique, Independent Component Analysis (ICA), to actual daily stock price data from the Tokyo and New York Stock Exchanges (TSE/NYSE). ICA applies a linear transformation to lag vectors of the time series, finding independent components by a nonlinear algorithm. We obtained a similar impulse response for these datasets. If the price were a martingale process, the impulse response should be a delta function; this holds under a few conditions that can be checked numerically, and it was verified against surrogate data. This result would provide information on the market dynamics, including speculative bubbles and arbitrage processes.
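    Properties (ii) and (iii) above can be illustrated on a toy series. The sketch below is our own illustration, not the authors' analysis: returns are generated with a persistent stochastic volatility, so their linear autocorrelation nearly vanishes while that of their absolute values does not.

```python
import numpy as np

def acf(x, lag):
    """Sample autocorrelation of a series at a given positive lag."""
    x = np.asarray(x, float) - np.mean(x)
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

# Toy return series with volatility clustering: log-volatility follows a
# persistent AR(1), so returns are linearly uncorrelated but |returns| are not.
rng = np.random.default_rng(1)
n = 20000
log_vol = np.zeros(n)
for t in range(1, n):
    log_vol[t] = 0.98 * log_vol[t - 1] + 0.1 * rng.normal()
returns = np.exp(log_vol) * rng.normal(size=n)
```

    On such a series, acf(returns, 1) is statistically indistinguishable from zero while acf(np.abs(returns), 1) is clearly positive, which is exactly the signature that rules out linear arbitrage without implying independence.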

  16. Multi-fault clustering and diagnosis of gear system mined by spectrum entropy clustering based on higher order cumulants

    NASA Astrophysics Data System (ADS)

    Shao, Renping; Li, Jing; Hu, Wentao; Dong, Feifei

    2013-02-01

    Higher-order cumulants (HOC) are a modern signal-analysis theory and technique. Spectrum entropy clustering (SEC) is a statistical data-mining method for extracting useful characteristics from large volumes of nonlinear and non-stationary data. Following a discussion of the characteristics of HOC theory and the SEC method, this paper introduces the corresponding signal processing techniques and the particular merits of nonlinear coupling analysis in processing random and non-stationary signals. A new clustering analysis and diagnosis method is then proposed for detecting multiple types of damage in gears, by applying the combination of HOC and SEC to damage detection and diagnosis of the gear system. Noise is suppressed by HOC, coupling features are extracted, and the characteristic signal is separated at different speeds and frequency bands. Under these conditions, the weak signal characteristics in the system are emphasized and the multi-fault characteristics are extracted. The SEC data-mining method is then used to analyse and diagnose six signal classes at several running speeds (300 r/min, 900 r/min, 1200 r/min and 1500 r/min): no fault, short crack in the tooth root, long crack in the tooth root, short crack at the pitch circle, long crack at the pitch circle, and tooth wear. The research shows that this combined detection and diagnosis method can also identify the degree of damage for some faults. On this basis, a virtual instrument for damage detection and fault diagnosis of the gear system was developed by combining the advantages of MATLAB and VC++, employing component object model (COM) technology, adopting mixed-programming methods, and calling the program converted from an *.m file under VC++. This software system provides functions for acquiring and importing gear vibration signals, analysing and processing signals, extracting features, visualizing graphics, detecting and diagnosing faults, and monitoring. Finally, testing and verification show that the developed system can effectively detect and diagnose faults in an actual operating gear transmission system.
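    A minimal numerical illustration of the noise-suppressing property that motivates HOC methods: the fourth-order cumulant of Gaussian noise is zero, so it responds to non-Gaussian impulsive components such as those produced by gear faults. This sketch is our own, not the authors' implementation.

```python
import numpy as np

def fourth_cumulant(x):
    """Sample fourth-order cumulant c4 = m4 - 3*m2^2 of a (mean-removed) signal.

    c4 vanishes for Gaussian noise, so it emphasizes non-Gaussian
    impulsive content while suppressing the Gaussian background.
    """
    x = np.asarray(x, float)
    x = x - np.mean(x)
    m2 = np.mean(x ** 2)
    m4 = np.mean(x ** 4)
    return m4 - 3.0 * m2 ** 2
```

    Adding sparse large impulses (a crude stand-in for fault-induced vibration transients) to Gaussian noise drives c4 far from zero even though the second-order statistics change only modestly.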

  18. EEG source analysis of data from paralysed subjects

    NASA Astrophysics Data System (ADS)

    Carabali, Carmen A.; Willoughby, John O.; Fitzgibbon, Sean P.; Grummett, Tyler; Lewis, Trent; DeLosAngeles, Dylan; Pope, Kenneth J.

    2015-12-01

    One limitation of electroencephalography (EEG) data is its quality, as it is usually contaminated with electrical signals from muscle. This research studies the results of two EEG source analysis methods applied to scalp recordings taken during paralysis and under normal conditions while a cognitive task was performed. The aim is to determine which types of analysis are appropriate for dealing with EEG data containing myogenic components. The data used are the scalp recordings of six subjects under normal conditions and during paralysis while performing different cognitive tasks, including the oddball task that is the object of this research. The data were pre-processed by filtering and artefact correction; then one-second epochs for targets and distractors were extracted. Distributed source analysis was performed in BESA Research 6.0; using its results and information from the literature, nine ideal locations for source dipoles were identified. The nine dipoles were used to perform discrete source analysis, fitting them to the averaged epochs to obtain source waveforms. The results were statistically analysed by comparing the outcomes before and after the subjects were paralysed. Finally, frequency analysis was performed to better explain the results. The findings were that distributed source analysis can produce confounded results for EEG contaminated with myogenic signals; conversely, statistical analysis of the results from discrete source analysis showed that this method can help in dealing with EEG data contaminated with muscle electrical signals.

  19. Nonlinear analysis of pupillary dynamics.

    PubMed

    Onorati, Francesco; Mainardi, Luca Tommaso; Sirca, Fabiola; Russo, Vincenzo; Barbieri, Riccardo

    2016-02-01

    Pupil size reflects autonomic response to different environmental and behavioral stimuli, and its dynamics have been linked to other autonomic correlates such as cardiac and respiratory rhythms. The aim of this study is to assess the nonlinear characteristics of pupil size in 25 normal subjects who participated in a psychophysiological experimental protocol with four experimental conditions, namely “baseline”, “anger”, “joy”, and “sadness”. Nonlinear measures, such as sample entropy, correlation dimension, and largest Lyapunov exponent, were computed on reconstructed signals of spontaneous fluctuations of pupil dilation. Nonparametric statistical tests were performed on surrogate data to verify that the nonlinear measures are an intrinsic characteristic of the signals. We then developed and applied a piecewise linear regression model to detrended fluctuation analysis (DFA). Two joinpoints and three scaling intervals were identified: slope α0, at slow time scales, represents a persistent nonstationary long-range correlation, whereas α1 and α2, at middle and fast time scales, respectively, represent long-range power-law correlations, similarly to DFA applied to heart rate variability signals. Of the computed complexity measures, α0 showed statistically significant differences among experimental conditions (p<0.001). Our results suggest that (a) pupil size at constant light condition is characterized by nonlinear dynamics, (b) three well-defined and distinct long-memory processes exist at different time scales, and (c) autonomic stimulation is partially reflected in nonlinear dynamics.
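    Detrended fluctuation analysis, from which the slopes above are read, can be sketched compactly. This is a generic DFA-1 implementation (our illustration, not the authors' code): integrate the signal, remove a linear trend per window, and take the scaling exponent as the log-log slope of the fluctuation function.

```python
import numpy as np

def dfa(x, scales):
    """DFA-1: RMS fluctuation F(n) of the detrended profile per window size n."""
    y = np.cumsum(np.asarray(x, float) - np.mean(x))   # integrated profile
    F = []
    for n in scales:
        t = np.arange(n)
        f2 = []
        for k in range(len(y) // n):                   # non-overlapping windows
            seg = y[k * n:(k + 1) * n]
            trend = np.polyval(np.polyfit(t, seg, 1), t)
            f2.append(np.mean((seg - trend) ** 2))
        F.append(np.sqrt(np.mean(f2)))
    return np.array(F)

def dfa_alpha(x, scales):
    """Scaling exponent: slope of log F(n) versus log n."""
    return np.polyfit(np.log(scales), np.log(dfa(x, scales)), 1)[0]
```

    White noise yields α near 0.5 and an integrated random walk α near 1.5; the piecewise regression described above simply fits separate slopes over different ranges of log n instead of a single one.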

  20. Signal Statistics and Maximum Likelihood Sequence Estimation in Intensity Modulated Fiber Optic Links Containing a Single Optical Pre-amplifier.

    PubMed

    Alić, Nikola; Papen, George; Saperstein, Robert; Milstein, Laurence; Fainman, Yeshaiahu

    2005-06-13

    Exact signal statistics for fiber-optic links containing a single optical pre-amplifier are calculated and applied to sequence estimation for electronic dispersion compensation. The performance is evaluated and compared with results based on the approximate chi-square statistics. We show that detection in existing systems based on exact statistics can be improved relative to using a chi-square distribution for realistic filter shapes. In contrast, for high-spectral efficiency systems the difference between the two approaches diminishes, and performance tends to be less dependent on the exact shape of the filter used.

  1. Assessing the significance of pedobarographic signals using random field theory.

    PubMed

    Pataky, Todd C

    2008-08-07

    Traditional pedobarographic statistical analyses are conducted over discrete regions. Recent studies have demonstrated that regionalization can corrupt pedobarographic field data through conflation when arbitrary dividing lines inappropriately delineate smooth field processes. An alternative is to register images such that homologous structures optimally overlap and then conduct statistical tests at each pixel to generate statistical parametric maps (SPMs). The significance of SPM processes may be assessed within the framework of random field theory (RFT). RFT is ideally suited to pedobarographic image analysis because its fundamental data unit is a lattice sampling of a smooth and continuous spatial field. To correct for the vast number of multiple comparisons inherent in such data, recent pedobarographic studies have employed a Bonferroni correction to retain a constant family-wise error rate. This approach unfortunately neglects the spatial correlation of neighbouring pixels, so provides an overly conservative (albeit valid) statistical threshold. RFT generally relaxes the threshold depending on field smoothness and on the geometry of the search area, but it also provides a framework for assigning p values to suprathreshold clusters based on their spatial extent. The current paper provides an overview of basic RFT concepts and uses simulated and experimental data to validate both RFT-relevant field smoothness estimations and RFT predictions regarding the topological characteristics of random pedobarographic fields. Finally, previously published experimental data are re-analysed using RFT inference procedures to demonstrate how RFT yields easily understandable statistical results that may be incorporated into routine clinical and laboratory analyses.
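    The conservativeness of the Bonferroni correction on smooth fields, which motivates RFT, can be demonstrated by simulation. The sketch below is our own illustration (a 1-D stand-in for a 2-D pedobarographic field; all parameters are arbitrary): smoothed Gaussian noise is thresholded at the Bonferroni level, and the realized family-wise error rate comes out far below the nominal alpha because neighbouring pixels are correlated.

```python
import numpy as np
from statistics import NormalDist

def bonferroni_threshold(n_tests, alpha=0.05):
    """One-sided Bonferroni z threshold for n_tests pixelwise tests."""
    return NormalDist().inv_cdf(1.0 - alpha / n_tests)

rng = np.random.default_rng(0)
n_pix, n_sim, alpha = 4096, 200, 0.05
u = bonferroni_threshold(n_pix, alpha)

# FIR smoothing kernel; dividing by its l2 norm keeps the field unit-variance
s = np.arange(-15, 16)
kernel = np.exp(-s ** 2 / (2 * 5.0 ** 2))
kernel /= np.sqrt(np.sum(kernel ** 2))

hits = 0
for _ in range(n_sim):
    field = np.convolve(rng.normal(size=n_pix), kernel, mode="same")
    hits += field.max() > u        # any family-wise false positive in this field?
fwer = hits / n_sim                # well below alpha: valid but conservative
```

    RFT instead sets the threshold from the field's estimated smoothness, recovering an error rate close to the nominal level.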

  2. Problems Associated with Statistical Pattern Recognition of Acoustic Emission Signals in a Compact Tension Fatigue Specimen

    NASA Technical Reports Server (NTRS)

    Hinton, Yolanda L.

    1999-01-01

    Acoustic emission (AE) data were acquired during fatigue testing of an aluminum 2024-T4 compact tension specimen using a commercially available AE system. AE signals from crack extension were identified and separated from noise spikes, signals that reflected from the specimen edges, and signals that saturated the instrumentation. A commercially available software package was used to train a statistical pattern recognition system to classify the signals. The software trained a network to recognize signals with a 91-percent accuracy when compared with the researcher's interpretation of the data. Reasons for the discrepancies are examined and it is postulated that additional preprocessing of the AE data to focus on the extensional wave mode and eliminate other effects before training the pattern recognition system will result in increased accuracy.

  3. Data mining spacecraft telemetry: towards generic solutions to automatic health monitoring and status characterisation

    NASA Astrophysics Data System (ADS)

    Royer, P.; De Ridder, J.; Vandenbussche, B.; Regibo, S.; Huygen, R.; De Meester, W.; Evans, D. J.; Martinez, J.; Korte-Stapff, M.

    2016-07-01

    We present the first results of a study aimed at finding new and efficient ways to automatically process spacecraft telemetry for automatic health monitoring. The goal is to reduce the load on the flight control team while extending the "checkability" to the entire telemetry database, and provide efficient, robust and more accurate detection of anomalies in near real time. We present a set of effective methods to (a) detect outliers in the telemetry or in its statistical properties, (b) uncover and visualise special properties of the telemetry and (c) detect new behavior. Our results are structured around two main families of solutions. For parameters visiting a restricted set of signal values, i.e. all status parameters and about one third of all the others, we focus on a transition analysis, exploiting properties of Poincare plots. For parameters with an arbitrarily high number of possible signal values, we describe the statistical properties of the signal via its Kernel Density Estimate. We demonstrate that this allows for a generic and dynamic approach of the soft-limit definition. Thanks to a much more accurate description of the signal and of its time evolution, we are more sensitive and more responsive to outliers than the traditional checks against hard limits. Our methods were validated on two years of Venus Express telemetry. They are generic for assisting in health monitoring of any complex system with large amounts of diagnostic sensor data. Not only spacecraft systems but also present-day astronomical observatories can benefit from them.
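    For the continuous-valued parameters, the statistical description via a Kernel Density Estimate can be sketched as follows. This is our own minimal Gaussian-kernel version with a Silverman rule-of-thumb bandwidth, not the mission software: unusually low log-density flags an outlying telemetry value.

```python
import numpy as np

def kde_log_density(train, query, bandwidth=None):
    """Gaussian KDE of a telemetry parameter's value distribution.

    Returns the log-density at each query point; unusually low values
    flag outliers relative to the training period.
    """
    train = np.asarray(train, float)
    if bandwidth is None:                    # Silverman's rule of thumb
        bandwidth = 1.06 * train.std() * len(train) ** (-1 / 5)
    z = (np.asarray(query, float)[:, None] - train[None, :]) / bandwidth
    dens = np.exp(-0.5 * z ** 2).sum(axis=1) / (len(train) * bandwidth * np.sqrt(2 * np.pi))
    return np.log(dens + 1e-300)
```

    A dynamic soft limit can then be defined as a contour of this estimated density rather than as a fixed hard limit.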

  4. European Neolithic societies showed early warning signals of population collapse.

    PubMed

    Downey, Sean S; Haas, W Randall; Shennan, Stephen J

    2016-08-30

    Ecosystems on the verge of major reorganization-regime shift-may exhibit declining resilience, which can be detected using a collection of generic statistical tests known as early warning signals (EWSs). This study explores whether EWSs anticipated human population collapse during the European Neolithic. It analyzes recent reconstructions of European Neolithic (8-4 kya) population trends that reveal regime shifts from a period of rapid growth following the introduction of agriculture to a period of instability and collapse. We find statistical support for EWSs in advance of population collapse. Seven of nine regional datasets exhibit increasing autocorrelation and variance leading up to collapse, suggesting that these societies began to recover from perturbation more slowly as resilience declined. We derive EWS statistics from a prehistoric population proxy based on summed archaeological radiocarbon date probability densities. We use simulation to validate our methods and show that sampling biases, atmospheric effects, radiocarbon calibration error, and taphonomic processes are unlikely to explain the observed EWS patterns. The implications of these results for understanding the dynamics of Neolithic ecosystems are discussed, and we present a general framework for analyzing societal regime shifts using EWS at large spatial and temporal scales. We suggest that our findings are consistent with an adaptive cycling model that highlights both the vulnerability and resilience of early European populations. We close by discussing the implications of the detection of EWS in human systems for archaeology and sustainability science.
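    The two indicators at the heart of the EWS analysis, rolling lag-1 autocorrelation and rolling variance, can be sketched generically. The code below is an illustration of critical slowing down on a toy AR(1) series whose persistence ramps toward 1, not the paper's radiocarbon pipeline.

```python
import numpy as np

def rolling_ews(x, window):
    """Rolling lag-1 autocorrelation and variance: the classic EWS indicators."""
    ac1, var = [], []
    for i in range(len(x) - window + 1):
        w = x[i:i + window] - np.mean(x[i:i + window])
        var.append(np.mean(w ** 2))
        ac1.append(np.dot(w[:-1], w[1:]) / np.dot(w, w))
    return np.array(ac1), np.array(var)

# Toy critical slowing down: an AR(1) process with persistence ramping
# toward 1 recovers ever more slowly from perturbations, so both
# autocorrelation and variance rise ahead of the transition.
rng = np.random.default_rng(0)
n = 3000
phi = np.linspace(0.1, 0.95, n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi[t] * x[t - 1] + rng.normal()
ac1, var = rolling_ews(x, window=300)
```

    In EWS studies the upward trend in each indicator is then typically tested with a rank correlation (e.g. Kendall's tau) against time.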

  5. A blind hierarchical coherent search for gravitational-wave signals from coalescing compact binaries in a network of interferometric detectors

    NASA Astrophysics Data System (ADS)

    Bose, Sukanta; Dayanga, Thilina; Ghosh, Shaon; Talukder, Dipongkar

    2011-07-01

    We describe a hierarchical data analysis pipeline for coherently searching for gravitational-wave signals from non-spinning compact binary coalescences (CBCs) in the data of multiple earth-based detectors. This search assumes no prior information on the sky position of the source or the time of occurrence of its transient signals and, hence, is termed 'blind'. The pipeline computes the coherent network search statistic that is optimal in stationary, Gaussian noise. More importantly, it allows for the computation of a suite of alternative multi-detector coherent search statistics and signal-based discriminators that can improve the performance of CBC searches in real data, which can be both non-stationary and non-Gaussian. Also, unlike the coincident multi-detector search statistics that have been employed so far, the coherent statistics are different in the sense that they check for the consistency of the signal amplitudes and phases in the different detectors with their different orientations and with the signal arrival times in them. Since the computation of coherent statistics entails searching in the sky, it is more expensive than that of the coincident statistics that do not require it. To reduce computational costs, the first stage of the hierarchical pipeline constructs coincidences of triggers from the multiple interferometers, by requiring their proximity in time and component masses. The second stage follows up on these coincident triggers by computing the coherent statistics. Here, we compare the performances of this hierarchical pipeline with and without the second (or coherent) stage in Gaussian noise. Although introducing hierarchy can be expected to cause some degradation in the detection efficiency compared to that of a single-stage coherent pipeline, nevertheless it improves the computational speed of the search considerably. 
The two main results of this work are as follows: (1) the performance of the hierarchical coherent pipeline on Gaussian data is shown to be better than the pipeline with just the coincident stage; (2) the three-site network of LIGO detectors, in Hanford and Livingston (USA), and Virgo detector in Cascina (Italy) cannot resolve the polarization of waves arriving from certain parts of the sky. This can cause the three-site coherent statistic at those sky positions to become singular. Regularized versions of the statistic can avoid that problem, but can be expected to be sub-optimal. The aforementioned improvement in the pipeline's performance due to the coherent stage is in spite of this handicap.

  6. Improvement in airborne position measurements based on an ultrasonic linear-period-modulated wave by 1-bit signal processing

    NASA Astrophysics Data System (ADS)

    Thong-un, Natee; Hirata, Shinnosuke; Kurosawa, Minoru K.

    2015-07-01

    In this paper, we describe an expansion of an airborne ultrasonic system for object localization in three-dimensional navigation spaces. The system, which revises the microphone arrangement and the algorithm, expands the object-position measurement from ±90° in a previous method up to ±180° for both the elevation and azimuth angles. The proposed system consists of a sound source and four acoustic receivers. The system is designed around low-cost devices, and low-cost computation based on 1-bit signal processing supports real-time operation on a field-programmable gate array (FPGA). An object location is identified using spherical coordinates. A spherical object, which has a curved surface, is considered as the target for this system. The pulse transmitted to the target is a linear-period-modulated ultrasonic wave swept from 50 to 20 kHz. The work is evaluated statistically through repeated experimental measurements.
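    The 1-bit processing idea can be illustrated with a toy time-of-flight estimate: both the reference sweep and the received echo are reduced to their signs before cross-correlation, which makes the correlator's multiplications trivial on an FPGA. All numbers below are illustrative, and a linear-frequency sweep stands in for the paper's linear-period-modulated wave.

```python
import numpy as np

def one_bit(x):
    """1-bit quantization: keep only the sign of each sample."""
    return np.sign(x)

fs = 200_000                      # sample rate, Hz (illustrative)
n_samp = 1000                     # 5 ms pulse
t = np.arange(n_samp) / fs
k = (20_000 - 50_000) / (n_samp / fs)          # sweep rate, Hz/s
tx = np.sin(2 * np.pi * (50_000 + 0.5 * k * t) * t)

delay = 333                       # true echo delay, samples
rng = np.random.default_rng(0)
rx = np.zeros(2 * n_samp)
rx[delay:delay + n_samp] += tx
rx += 0.5 * rng.normal(size=rx.size)           # receiver noise

corr = np.correlate(one_bit(rx), one_bit(tx), mode="valid")
est = int(np.argmax(corr))        # estimated delay in samples
```

    Despite discarding all amplitude information, the sign-only matched filter still localizes the echo, because the correlation gain over the pulse length dominates the quantization loss.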

  7. The Macronova in GRB 050709 and the GRB-macronova connection

    PubMed Central

    Jin, Zhi-Ping; Hotokezaka, Kenta; Li, Xiang; Tanaka, Masaomi; D'Avanzo, Paolo; Fan, Yi-Zhong; Covino, Stefano; Wei, Da-Ming; Piran, Tsvi

    2016-01-01

    GRB 050709 was the first short Gamma-ray Burst (sGRB) with an identified optical counterpart. Here we report a reanalysis of the publicly available data of this event and the discovery of a Li-Paczynski macronova/kilonova that dominates the optical/infrared signal at t>2.5 days. Such a signal would arise from 0.05 r-process material launched by a compact binary merger. The implied mass ejection supports the suggestion that compact binary mergers are significant and possibly main sites of heavy r-process nucleosynthesis. Furthermore, we have reanalysed all afterglow data from nearby short and hybrid GRBs (shGRBs). A statistical study of the shGRB/macronova connection reveals that macronovae may have taken place in all these GRBs, although a fraction as low as 0.18 cannot be ruled out. The identification of two of the three macronova candidates in the I-band implies a more promising detection prospect for ground-based surveys. PMID:27659791

  8. Gr-GDHP: A New Architecture for Globalized Dual Heuristic Dynamic Programming.

    PubMed

    Zhong, Xiangnan; Ni, Zhen; He, Haibo

    2017-10-01

    A goal-representation globalized dual heuristic dynamic programming (Gr-GDHP) method is proposed in this paper. A goal neural network is integrated into the traditional GDHP method, providing an internal reinforcement signal and its derivatives to help the control and learning process. With the proposed architecture, it is shown that the obtained internal reinforcement signal and its derivatives can adjust themselves online over time, rather than being a fixed or predefined function as in the literature. Furthermore, the obtained derivatives contribute directly to the objective function of the critic network, whose learning process is thus simplified. Numerical simulation studies show the performance of the proposed Gr-GDHP method and compare the results with other existing adaptive dynamic programming designs. We also investigate this method on a ball-and-beam balancing system. Statistical simulation results are presented for both the Gr-GDHP and GDHP methods to demonstrate the improved learning and control performance.

  9. Transcriptome profiling analysis reveals biomarkers in colon cancer samples of various differentiation

    PubMed Central

    Yu, Tonghu; Zhang, Huaping; Qi, Hong

    2018-01-01

    The aim of the present study was to identify additional colon cancer-related genes at different stages. Gene expression profile E-GEOD-62932 was used for differentially expressed gene (DEG) screening. Series test of cluster analysis was used to obtain significant trend models. Based on the Gene Ontology and Kyoto Encyclopedia of Genes and Genomes databases, functional and pathway enrichment analyses were performed and a pathway relation network was constructed. A gene co-expression network and a gene signal network were constructed for the common DEGs. The DEGs with the same trend were clustered, and in total 16 clusters with statistical significance were obtained. The screened DEGs were enriched in small-molecule metabolic processes and metabolic pathways. The pathway relation network was constructed with 57 nodes. A total of 328 common DEGs were obtained. The gene signal network was constructed with 71 nodes, and the gene co-expression network with 161 nodes and 211 edges. ABCD3, CPT2, AGL and JAM2 are potential biomarkers for the diagnosis of colon cancer. PMID:29928385

  10. Tracking Drug-induced Changes in Receptor Post-internalization Trafficking by Colocalizational Analysis.

    PubMed

    Ong, Edmund; Cahill, Catherine

    2015-07-03

    The intracellular trafficking of receptors is a collection of complex and highly controlled processes. Receptor trafficking modulates signaling and overall cell responsiveness to ligands and is, itself, influenced by intra- and extracellular conditions, including ligand-induced signaling. Optimized for use with monolayer-plated cultured cells, but extendable to free-floating tissue slices, this protocol uses immunolabelling and colocalizational analysis to track changes in intracellular receptor trafficking following both chronic/prolonged and acute interventions, including exogenous drug treatment. After drug treatment, cells are double-immunolabelled for the receptor and for markers for the intracellular compartments of interest. Sequential confocal microscopy is then used to capture two-channel photomicrographs of individual cells, which are subjected to computerized colocalizational analysis to yield quantitative colocalization scores. These scores are normalized to permit pooling of independent replicates prior to statistical analysis. Representative photomicrographs may also be processed to generate illustrative figures. Here, we describe a powerful and flexible technique for quantitatively assessing induced receptor trafficking.
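    Two common colocalization scores can be computed directly from the two-channel photomicrographs. The sketch below uses the generic formulas, not the protocol's specific analysis software: Pearson's pixelwise correlation and Manders' M1 coefficient.

```python
import numpy as np

def pearson_colocalization(ch1, ch2):
    """Pearson's correlation between two fluorescence channels, pixelwise."""
    a = np.asarray(ch1, float).ravel(); a -= a.mean()
    b = np.asarray(ch2, float).ravel(); b -= b.mean()
    return np.dot(a, b) / np.sqrt(np.dot(a, a) * np.dot(b, b))

def manders_m1(ch1, ch2, thresh2=0.0):
    """Manders' M1: fraction of channel-1 intensity where channel 2 is present."""
    ch1 = np.asarray(ch1, float)
    ch2 = np.asarray(ch2, float)
    return ch1[ch2 > thresh2].sum() / ch1.sum()
```

    Per-cell scores such as these are what get normalized and pooled across independent replicates before statistical analysis, as described above.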

  11. Data and results of a laboratory investigation of microprocessor upset caused by simulated lightning-induced analog transients

    NASA Technical Reports Server (NTRS)

    Belcastro, C. M.

    1984-01-01

    A methodology was developed to assess the upset susceptibility/reliability of a computer system onboard an aircraft flying through a lightning environment. Upset error modes in a general purpose microprocessor were studied. The upset tests involved the random injection of analog transients, which model lightning-induced signals, onto interface lines of an 8080-based microcomputer, from which upset error data were recorded. The program code executed on the microprocessor during the tests was designed to exercise all of the machine cycles and memory addressing techniques implemented in the 8080 central processing unit. A statistical analysis is presented in which possible correlations are established between the probability of upset occurrence and transient signal inputs during specific processing states and operations. A stochastic upset susceptibility model for the 8080 microprocessor is presented. The susceptibility of this microprocessor to upset, once analog transients have entered the system, is determined analytically by calculating the state probabilities of the stochastic model.

  12. Matched signal detection on graphs: Theory and application to brain imaging data classification.

    PubMed

    Hu, Chenhui; Sepulcre, Jorge; Johnson, Keith A; Fakhri, Georges E; Lu, Yue M; Li, Quanzheng

    2016-01-15

    Motivated by recent progress in signal processing on graphs, we have developed a matched signal detection (MSD) theory for signals with intrinsic structures described by weighted graphs. First, we regard graph Laplacian eigenvalues as frequencies of graph-signals and assume that the signal is in a subspace spanned by the first few graph Laplacian eigenvectors associated with lower eigenvalues. The conventional matched subspace detector can be applied to this case. Furthermore, we study signals that may not merely live in a subspace. Concretely, we consider signals with bounded variation on graphs and more general signals that are randomly drawn from a prior distribution. For bounded variation signals, the test is a weighted energy detector. For the random signals, the test statistic is the difference of signal variations on associated graphs, if a degenerate Gaussian distribution specified by the graph Laplacian is adopted. We evaluate the effectiveness of the MSD on graphs both with simulated and real data sets. Specifically, we apply MSD to the brain imaging data classification problem of Alzheimer's disease (AD) based on two independent data sets: 1) positron emission tomography data with Pittsburgh compound-B tracer of 30 AD and 40 normal control (NC) subjects, and 2) resting-state functional magnetic resonance imaging (R-fMRI) data of 30 early mild cognitive impairment and 20 NC subjects. Our results demonstrate that the MSD approach is able to outperform the traditional methods and help detect AD at an early stage, probably due to the success of exploiting the manifold structure of the data. Copyright © 2015. Published by Elsevier Inc.
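
    The subspace idea above can be sketched numerically. The toy below is illustrative only: a path graph and synthetic signals stand in for the brain-connectivity graphs and imaging data, and a simple low-frequency energy fraction stands in for the full matched subspace detector.

```python
import numpy as np

def path_laplacian(n):
    # Combinatorial graph Laplacian L = D - W of an unweighted path graph.
    W = np.zeros((n, n))
    for i in range(n - 1):
        W[i, i + 1] = W[i + 1, i] = 1.0
    return np.diag(W.sum(axis=1)) - W

def low_frequency_energy(x, L, k):
    # Fraction of the signal's energy captured by the first k Laplacian
    # eigenvectors, i.e. the "low graph frequency" subspace.
    _, U = np.linalg.eigh(L)          # eigh returns ascending eigenvalues
    Uk = U[:, :k]
    return float(np.linalg.norm(Uk.T @ x) ** 2 / np.linalg.norm(x) ** 2)

n, k = 64, 5
L = path_laplacian(n)
rng = np.random.default_rng(0)

# A smooth graph-signal lives in the subspace; white noise does not.
smooth = np.linalg.eigh(L)[1][:, :k] @ rng.normal(size=k)
noise = rng.normal(size=n)
print(low_frequency_energy(smooth, L, k))   # ~1.0
print(low_frequency_energy(noise, L, k))    # ~k/n on average
```

    A detector would threshold this statistic; the paper's variants additionally handle bounded-variation and random graph-signals.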

  13. AGARD Flight Test Techniques Series. Volume 14. Introduction to Flight Test Engineering (Introduction a la Technique d’essais en vol)

    DTIC Science & Technology

    1995-09-01

    path and aircraft attitude and other flight or aircraft parameters ... Calculations in the frequency domain (Fast Fourier Transform) ... Data analysis ... Signal filtering ... Image processing of video and radar data ... Parameter identification ... Statistical analysis ... Power spectral density ... Fast Fourier Transform ... airspeeds both fast and slow, altitude, load factor both above and below 1 g, centers of gravity (fore and aft), and with system/subsystem failures.

  14. Prognostics

    NASA Technical Reports Server (NTRS)

    Goebel, Kai; Vachtsevanos, George; Orchard, Marcos E.

    2013-01-01

    Knowledge discovery, statistical learning, and more specifically an understanding of the system evolution in time when it undergoes undesirable fault conditions, are critical for an adequate implementation of successful prognostic systems. Prognosis may be understood as the generation of long-term predictions describing the evolution in time of a particular signal of interest or fault indicator, with the purpose of estimating the remaining useful life (RUL) of a failing component/subsystem. Predictions are made using a thorough understanding of the underlying processes and factor in the anticipated future usage.

  15. Random sequences generation through optical measurements by phase-shifting interferometry

    NASA Astrophysics Data System (ADS)

    François, M.; Grosges, T.; Barchiesi, D.; Erra, R.; Cornet, A.

    2012-04-01

    The development of new techniques for producing random sequences with a high level of security is a challenging topic of research in modern cryptography. The proposed method is based on the measurement, by phase-shifting interferometry, of the speckle signals arising from the interaction between light and structures. We show how the combination of amplitude and phase distributions (maps) under a numerical process can produce random sequences. The produced sequences satisfy all the statistical requirements of randomness and can be used in cryptographic schemes.

  16. Nonclassical-light generation in a photonic-band-gap nonlinear planar waveguide

    NASA Astrophysics Data System (ADS)

    Peřina, Jan, Jr.; Sibilia, Concita; Tricca, Daniela; Bertolotti, Mario

    2004-10-01

    The optical parametric process occurring in a photonic-band-gap planar waveguide is studied from the point of view of nonclassical-light generation. The nonlinearly interacting optical fields are described by the generalized superposition of coherent signals and noise using the method of operator linear corrections to a classical strong solution. Scattered backward-propagating fields are taken into account. Squeezed light as well as light with sub-Poissonian statistics can be obtained in two-mode fields under the specified conditions.

  17. Automatic Seismic Signal Processing Research.

    DTIC Science & Technology

    1981-09-01

    ... (Gnanadesikan, 1977, p. 83; Young and Calvert, 1974) ... As Gnanadesikan (1977, p. 196) notes, "The main function of statistical data analysis is to extricate and explicate the informational content of a body of data." ... R. C. Goff (1980), "Evaluation of the MARS Seismic Event Detector," Systems, Science and Software Report SSS-R-81-4656, August.

  18. Self-domestication in Homo sapiens: Insights from comparative genomics.

    PubMed

    Theofanopoulou, Constantina; Gastaldon, Simone; O'Rourke, Thomas; Samuels, Bridget D; Messner, Angela; Martins, Pedro Tiago; Delogu, Francesco; Alamri, Saleh; Boeckx, Cedric

    2017-01-01

    This study identifies and analyzes statistically significant overlaps between selective sweep screens in anatomically modern humans and several domesticated species. The results obtained suggest that (paleo-)genomic data can be exploited to complement the fossil record and support the idea of self-domestication in Homo sapiens, a process that likely intensified as our species populated its niche. Our analysis lends support to attempts to capture the "domestication syndrome" in terms of alterations to certain signaling pathways and cell lineages, such as the neural crest.

  19. A Novel Estimator for the Rate of Information Transfer by Continuous Signals

    PubMed Central

    Takalo, Jouni; Ignatova, Irina; Weckström, Matti; Vähäsöyrinki, Mikko

    2011-01-01

    The information transfer rate provides an objective and rigorous way to quantify how much information is being transmitted through a communications channel whose input and output consist of time-varying signals. However, current estimators of information content in continuous signals are typically based on assumptions about the system's linearity and signal statistics, or they require prohibitive amounts of data. Here we present a novel information rate estimator without these limitations that is also optimized for computational efficiency. We validate the method with a simulated Gaussian information channel and demonstrate its performance with two example applications. Information transfer between the input and output signals of a nonlinear system is analyzed using a sensory receptor neuron as the model system. Then, a climate data set is analyzed to demonstrate that the method can be applied to a system based on two outputs generated by interrelated random processes. These analyses also demonstrate that the new method offers consistent performance in situations where classical methods fail. In addition to these examples, the method is applicable to a wide range of continuous time series commonly observed in the natural sciences, economics and engineering. PMID:21494562

  20. The Statistical Meaning of Kurtosis and Its New Application to Identification of Persons Based on Seismic Signals

    PubMed Central

    Liang, Zhiqiang; Wei, Jianming; Zhao, Junyu; Liu, Haitao; Li, Baoqing; Shen, Jie; Zheng, Chunlei

    2008-01-01

    This paper presents a new algorithm that makes use of kurtosis, a statistical parameter, to distinguish the seismic signal generated by a person's footsteps from other signals. It is adaptive to any environment and requires no machine learning or training. As persons or other targets moving on the ground generate continuous signals in the form of seismic waves, different targets can be separated on the basis of the seismic waves they generate. Kurtosis is sensitive to impulsive signals, so it responds much more strongly to the signal generated by footsteps than to signals generated by vehicles, wind, noise, etc. Kurtosis is commonly employed in financial analysis but rarely used in other fields. In this paper, we make use of kurtosis to distinguish persons from other targets based on its different sensitivity to different signals. Simulation and application results show that this algorithm is very effective in distinguishing persons from other targets. PMID:27873804
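
    The core computation is just the fourth standardized moment over a window. A minimal sketch on synthetic data (the impulse amplitude, spacing, and window length are invented for illustration, not taken from the paper):

```python
import numpy as np

def kurtosis(x):
    # Raw fourth standardized moment (equals 3 for Gaussian data;
    # the Fisher "excess" convention would subtract 3).
    x = np.asarray(x, dtype=float)
    m = x.mean()
    s2 = ((x - m) ** 2).mean()
    return ((x - m) ** 4).mean() / s2 ** 2

rng = np.random.default_rng(1)
background = rng.normal(size=4000)     # wind/vehicle-like broadband noise
footsteps = background.copy()
footsteps[::400] += 8.0                # sparse impulsive footstep events

print(kurtosis(background))   # ~3 (Gaussian)
print(kurtosis(footsteps))    # >> 3 (impulsive)
```

    A detector would compute this over sliding windows and flag windows whose kurtosis exceeds a threshold well above 3.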

  1. On the degree distribution of horizontal visibility graphs associated with Markov processes and dynamical systems: diagrammatic and variational approaches

    NASA Astrophysics Data System (ADS)

    Lacasa, Lucas

    2014-09-01

    Dynamical processes can be transformed into graphs through a family of mappings called visibility algorithms, enabling the possibility of (i) making empirical time series analysis and signal processing and (ii) characterizing classes of dynamical systems and stochastic processes using the tools of graph theory. Recent works show that the degree distribution of these graphs encapsulates much information on the signals' variability, and therefore constitutes a fundamental feature for statistical learning purposes. However, exact solutions for the degree distributions are only known in a few cases, such as for uncorrelated random processes. Here we analytically explore these distributions in a list of situations. We present a diagrammatic formalism which computes for all degrees their corresponding probability as a series expansion in a coupling constant which is the number of hidden variables. We offer a constructive solution for general Markovian stochastic processes and deterministic maps. As case tests we focus on Ornstein-Uhlenbeck processes, fully chaotic and quasiperiodic maps. Whereas only for certain degree probabilities can all diagrams be summed exactly, in the general case we show that the perturbation theory converges. In a second part, we make use of a variational technique to predict the complete degree distribution for special classes of Markovian dynamics with fast-decaying correlations. In every case we compare the theory with numerical experiments.
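
    The horizontal visibility mapping, and the exact degree law for uncorrelated series, P(k) = (1/3)(2/3)^(k-2), are easy to check numerically. A sketch (naive O(n·visibility) construction, not the paper's diagrammatic or variational machinery):

```python
import numpy as np

def hvg_degrees(x):
    # Horizontal visibility graph: nodes i and j are linked iff every
    # sample strictly between them is lower than min(x[i], x[j]).
    n = len(x)
    deg = np.zeros(n, dtype=int)
    for i in range(n):
        blocker = -np.inf                  # max of samples between i and j
        for j in range(i + 1, n):
            if blocker < min(x[i], x[j]):
                deg[i] += 1
                deg[j] += 1
            blocker = max(blocker, x[j])
            if x[j] >= x[i]:
                break                      # x[j] blocks everything beyond
    return deg

rng = np.random.default_rng(2)
deg = hvg_degrees(rng.uniform(size=5000))
# Exact law for an uncorrelated series: P(k) = (1/3) * (2/3)**(k-2).
for k in (2, 3, 4):
    print(k, (deg == k).mean(), (1 / 3) * (2 / 3) ** (k - 2))
```

    Deviations from this law are what make the degree distribution informative about correlations and chaos in the underlying process.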

  2. The Measurand Framework: Scaling Exploratory Data Analysis

    NASA Astrophysics Data System (ADS)

    Schneider, D.; MacLean, L. S.; Kappler, K. N.; Bleier, T.

    2017-12-01

    Since 2005 QuakeFinder (QF) has acquired a unique dataset with outstanding spatial and temporal sampling of Earth's time-varying magnetic field along several active fault systems. This QF network consists of 124 stations in California and 45 stations along fault zones in Greece, Taiwan, Peru, Chile and Indonesia. Each station is equipped with three feedback induction magnetometers, two ion sensors, a 4 Hz geophone, a temperature sensor, and a humidity sensor. Data are continuously recorded at 50 Hz with GPS timing and transmitted daily to the QF data center in California for analysis. QF is attempting to detect and characterize anomalous EM activity occurring ahead of earthquakes. In order to analyze this sizable dataset, QF has developed an analytical framework to support processing the time series input data and hypothesis testing to evaluate the statistical significance of potential precursory signals. The framework was developed with a need to support legacy, in-house processing but with an eye towards big-data processing with Apache Spark and other modern big data technologies. In this presentation, we describe our framework, which supports rapid experimentation and iteration of candidate signal processing techniques via modular data transformation stages, tracking of provenance, and automatic re-computation of downstream data when upstream data is updated. Furthermore, we discuss how the processing modules can be ported to big data platforms like Apache Spark and demonstrate a migration path from local, in-house processing to cloud-friendly processing.

  3. Environmental Statistics and Optimal Regulation

    PubMed Central

    2014-01-01

    Any organism is embedded in an environment that changes over time. The timescale for and statistics of environmental change, the precision with which the organism can detect its environment, and the costs and benefits of particular protein expression levels all will affect the suitability of different strategies, such as constitutive expression or graded response, for regulating protein levels in response to environmental inputs. We propose a general framework, here specifically applied to the enzymatic regulation of metabolism in response to changing concentrations of a basic nutrient, to predict the optimal regulatory strategy given the statistics of fluctuations in the environment and measurement apparatus, respectively, and the costs associated with enzyme production. We use this framework to address three fundamental questions: (i) when a cell should prefer thresholding to a graded response; (ii) when there is a fitness advantage to implementing a Bayesian decision rule; and (iii) when retaining memory of the past provides a selective advantage. We specifically find that: (i) the relative convexity of enzyme expression cost and benefit influences the fitness of thresholding or graded responses; (ii) intermediate levels of measurement uncertainty call for a sophisticated Bayesian decision rule; and (iii) in dynamic contexts, intermediate levels of uncertainty call for retaining memory of the past. Statistical properties of the environment, such as variability and correlation times, set optimal biochemical parameters, such as thresholds and decay rates in signaling pathways. Our framework provides a theoretical basis for interpreting molecular signal processing algorithms and a classification scheme that organizes known regulatory strategies and may help conceptualize heretofore unknown ones. PMID:25254493

  4. A Volterra series-based method for extracting target echoes in the seafloor mining environment.

    PubMed

    Zhao, Haiming; Ji, Yaqian; Hong, Yujiu; Hao, Qi; Ma, Liyong

    2016-09-01

    The purpose of this research was to evaluate the applicability of the Volterra adaptive method to predict the target echo of an ultrasonic signal in an underwater seafloor mining environment. There is growing interest in the mining of seafloor minerals because they offer an alternative source of rare metals. Mining the minerals causes the seafloor sediments to be stirred up and suspended in sea water. In such an environment, the target signals used for seafloor mapping cannot be detected because of the unavoidable presence of volume reverberation induced by the suspended sediments. The detection of target signals in reverberation is currently performed using a stochastic model (for example, the autoregressive (AR) model) based on the statistical characterisation of reverberation. However, we examined a new method of signal detection in volume reverberation based on the Volterra series, after confirming that the reverberation is a chaotic signal generated by a deterministic process. The advantage of this method over the stochastic model is that attributes of the specific physical process are considered in the signal detection problem. To test the Volterra series based method and its applicability to target signal detection in the volume reverberation environment arising from the seafloor mining process, we simulated the real-life conditions of seafloor mining in a water-filled tank with dimensions of 5×3×1.8 m. The bottom of the tank was covered with 10 cm of an irregular sand layer, under which 5 cm of an irregular cobalt-rich crust layer was placed. The bottom was interrogated by an acoustic wave generated as 16 μs pulses at a frequency of 500 kHz. This frequency is demonstrated to ensure a resolution on the order of one centimetre, which is adequate in exploration practice. Echo signals were collected with a data acquisition card (PCI 1714 UL, 12-bit). 
Detection of the target echo in these signals was performed by both the Volterra series based model and the AR model. The results obtained confirm that the Volterra series based method is more efficient in the detection of the signal in reverberation than the conventional AR model (the accuracy is 80% for the PIM-Volterra prediction model versus 40% for the AR model). Copyright © 2016 Elsevier B.V. All rights reserved.
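
    The AR baseline that the Volterra method is compared against can be sketched as follows: fit an AR model to reverberation alone, then flag samples where the one-step prediction error spikes. This is a toy illustration with synthetic colored noise standing in for the tank reverberation (the Volterra predictor itself is more involved); the model orders and amplitudes are invented.

```python
import numpy as np

def fit_ar(x, p):
    # Least-squares AR(p) fit: x[n] ~ sum_k a[k] * x[n-1-k].
    X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
    a, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
    return a

def prediction_error(x, a):
    # One-step-ahead residuals of the fitted AR model.
    p = len(a)
    X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
    return x[p:] - X @ a

rng = np.random.default_rng(3)
n = 2000
reverb = np.zeros(n)
for i in range(2, n):                  # synthetic AR(2) "reverberation"
    reverb[i] = 1.6 * reverb[i - 1] - 0.8 * reverb[i - 2] + rng.normal(0, 0.1)
echo = reverb.copy()
echo[1200:1208] += 2.0                 # buried target echo

a = fit_ar(reverb, 4)                  # model the reverberation alone
err = prediction_error(echo, a)
onset = int(np.argmax(np.abs(err))) + 4
print(onset)                           # ~1200, the echo onset
```

    A Volterra predictor replaces the linear regressors with polynomial (e.g. quadratic cross-product) terms of past samples, which is what lets it exploit the deterministic structure of chaotic reverberation.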

  5. Statistical assessment of bi-exponential diffusion weighted imaging signal characteristics induced by intravoxel incoherent motion in malignant breast tumors

    PubMed Central

    Wong, Oi Lei; Lo, Gladys G.; Chan, Helen H. L.; Wong, Ting Ting; Cheung, Polly S. Y.

    2016-01-01

    Background The purpose of this study is to statistically assess whether the bi-exponential intravoxel incoherent motion (IVIM) model better characterizes the diffusion weighted imaging (DWI) signal of malignant breast tumors than the mono-exponential Gaussian diffusion model. Methods 3 T DWI data of 29 malignant breast tumors were retrospectively included. Linear least-square mono-exponential fitting and segmented least-square bi-exponential fitting were used for apparent diffusion coefficient (ADC) and IVIM parameter quantification, respectively. The F-test and the Akaike Information Criterion (AIC) were used to statistically assess the preference between the mono-exponential and bi-exponential models using region-of-interest (ROI)-averaged and voxel-wise analysis. Results For ROI-averaged analysis, 15 tumors were significantly better fitted by the bi-exponential function and 14 tumors exhibited mono-exponential behavior. The calculated ADC, D (true diffusion coefficient) and f (pseudo-diffusion fraction) showed no significant differences between mono-exponential and bi-exponential preferable tumors. Voxel-wise analysis revealed that 27 tumors contained more voxels exhibiting mono-exponential DWI decay, while only 2 tumors presented more bi-exponential decay voxels. ADC was consistently and significantly larger than D for both ROI-averaged and voxel-wise analysis. Conclusions Although the presence of the IVIM effect in malignant breast tumors could be suggested, statistical assessment shows that bi-exponential fitting does not necessarily better represent the DWI signal decay in breast cancer under a clinically typical acquisition protocol and signal-to-noise ratio (SNR). Our study indicates the importance of statistically examining the breast cancer DWI signal characteristics in practice. PMID:27709078
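
    The mono- vs. bi-exponential comparison via AIC can be sketched on synthetic data. The segmented fitting below follows the usual recipe (D from the high-b tail, f from the intercept deficit, D* by grid search); the b-values, noise level, and tissue parameters are invented for illustration and are not from the study.

```python
import numpy as np

bvals = np.array([0., 50., 100., 200., 400., 600., 800., 1000.])

def ivim(b, S0, f, D, Ds):
    # Bi-exponential IVIM signal: perfusion (D*) plus true diffusion (D).
    return S0 * (f * np.exp(-b * Ds) + (1 - f) * np.exp(-b * D))

def aic(rss, n, k):
    # Akaike Information Criterion assuming Gaussian residuals.
    return n * np.log(rss / n) + 2 * k

def fit_mono(S):
    # Log-linear least squares: ln S = ln S0 - b * ADC.
    slope, intercept = np.polyfit(bvals, np.log(S), 1)
    rss = np.sum((S - np.exp(intercept + slope * bvals)) ** 2)
    return -slope, rss

def fit_bi_segmented(S):
    # D from the high-b tail (perfusion decayed), f from the intercept
    # deficit, D* by a simple 1-D grid search.
    hi = bvals >= 200
    slope, intercept = np.polyfit(bvals[hi], np.log(S[hi]), 1)
    D = -slope
    f = 1.0 - np.exp(intercept) / S[0]
    rss, Ds = min((np.sum((S - ivim(bvals, S[0], f, D, d)) ** 2), d)
                  for d in np.linspace(0.005, 0.1, 400))
    return D, f, Ds, rss

rng = np.random.default_rng(4)
S = ivim(bvals, 1.0, 0.12, 1.1e-3, 0.02) + rng.normal(0, 0.002, bvals.size)

_, rss_mono = fit_mono(S)
*_, rss_bi = fit_bi_segmented(S)
print(aic(rss_mono, bvals.size, 2))    # mono: 2 free parameters
print(aic(rss_bi, bvals.size, 4))      # bi: 4 free parameters
```

    With a genuine perfusion component and low noise, the bi-exponential AIC comes out lower; at clinical SNR, as the study reports, the penalty for two extra parameters often favors the mono-exponential fit.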

  6. Independent component analysis for the extraction of reliable protein signal profiles from MALDI-TOF mass spectra.

    PubMed

    Mantini, Dante; Petrucci, Francesca; Del Boccio, Piero; Pieragostino, Damiana; Di Nicola, Marta; Lugaresi, Alessandra; Federici, Giorgio; Sacchetta, Paolo; Di Ilio, Carmine; Urbani, Andrea

    2008-01-01

    Independent component analysis (ICA) is a signal processing technique that can be utilized to recover independent signals from a set of their linear mixtures. We propose ICA for the analysis of signals obtained from large proteomics investigations such as clinical multi-subject studies based on MALDI-TOF MS profiling. The method is validated on simulated and experimental data, demonstrating its capability of correctly extracting protein profiles from MALDI-TOF mass spectra. A comparison on peak detection with an open-source method and two commercial methods shows its superior reliability in reducing the false discovery rate of protein peak masses. Moreover, the integration of ICA and statistical tests for detecting the differences in peak intensities between experimental groups allows the identification of protein peaks that could be indicators of a diseased state. This data-driven approach proves to be a promising tool for biomarker-discovery studies based on MALDI-TOF MS technology. The MATLAB implementation of the method described in the article and both simulated and experimental data are freely available at http://www.unich.it/proteomica/bioinf/.

  7. EDA-gram: designing electrodermal activity fingerprints for visualization and feature extraction.

    PubMed

    Chaspari, Theodora; Tsiartas, Andreas; Stein Duker, Leah I; Cermak, Sharon A; Narayanan, Shrikanth S

    2016-08-01

    Wearable technology permeates every aspect of our daily life increasing the need of reliable and interpretable models for processing the large amount of biomedical data. We propose the EDA-Gram, a multidimensional fingerprint of the electrodermal activity (EDA) signal, inspired by the widely-used notion of spectrogram. The EDA-Gram is based on the sparse decomposition of EDA from a knowledge-driven set of dictionary atoms. The time axis reflects the analysis frames, the spectral dimension depicts the width of selected dictionary atoms, while intensity values are computed from the atom coefficients. In this way, EDA-Gram incorporates the amplitude and shape of Skin Conductance Responses (SCR), which comprise an essential part of the signal. EDA-Gram is further used as a foundation for signal-specific feature design. Our results indicate that the proposed representation can accentuate fine-grain signal fluctuations, which might not always be apparent through simple visual inspection. Statistical analysis and classification/regression experiments further suggest that the derived features can differentiate between multiple arousal levels and stress-eliciting environments for two datasets.

  8. Modulation Classification of Satellite Communication Signals Using Cumulants and Neural Networks

    NASA Technical Reports Server (NTRS)

    Smith, Aaron; Evans, Michael; Downey, Joseph

    2017-01-01

    National Aeronautics and Space Administration (NASA)'s future communication architecture is evaluating cognitive technologies and increased system intelligence. These technologies are expected to reduce the operational complexity of the network, increase science data return, and reduce interference to self and others. In order to increase situational awareness, signal classification algorithms could be applied to identify users and distinguish sources of interference. A significant amount of previous work has been done in the area of automatic signal classification for military and commercial applications. As a preliminary step, we seek to develop a system with the ability to discern signals typically encountered in satellite communication. Proposed is an automatic modulation classifier which utilizes higher order statistics (cumulants) and an estimate of the signal-to-noise ratio. These features are extracted from baseband symbols and then processed by a neural network for classification. The modulation types considered are phase-shift keying (PSK), amplitude and phase-shift keying (APSK), and quadrature amplitude modulation (QAM). Physical layer properties specific to the Digital Video Broadcasting - Satellite - Second Generation (DVB-S2) standard, such as pilots and variable ring ratios, are also considered. This paper will provide simulation results of a candidate modulation classifier, and performance will be evaluated over a range of signal-to-noise ratios, frequency offsets, and nonlinear amplifier distortions.
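
    The discriminating power of higher-order cumulants can be illustrated on noise-free synthetic symbols. A sketch of the standard fourth-order, two-conjugate cumulant C42 (the paper's classifier additionally uses other cumulant orders, an SNR estimate, and a neural network; those are omitted here):

```python
import numpy as np

def c42(x):
    # Fourth-order/two-conjugate cumulant of a zero-mean complex signal,
    # power-normalized: C42 = E|x|^4 - |E x^2|^2 - 2 (E|x|^2)^2.
    x = x / np.sqrt(np.mean(np.abs(x) ** 2))
    return (np.mean(np.abs(x) ** 4)
            - np.abs(np.mean(x ** 2)) ** 2
            - 2 * np.mean(np.abs(x) ** 2) ** 2)

rng = np.random.default_rng(5)
n = 20000
qpsk = (rng.choice([1, -1], n) + 1j * rng.choice([1, -1], n)) / np.sqrt(2)
levels = np.array([-3, -1, 1, 3]) / np.sqrt(10)   # unit-power 16-QAM
qam16 = rng.choice(levels, n) + 1j * rng.choice(levels, n)

print(c42(qpsk))    # theory: -1.0 for QPSK
print(c42(qam16))   # theory: -0.68 for 16-QAM
```

    Because the theoretical values are distinct per constellation, a vector of such cumulants makes a compact feature for a neural-network classifier.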

  9. Gain statistics of a fiber optical parametric amplifier with a temporally incoherent pump.

    PubMed

    Xu, Y Q; Murdoch, S G

    2010-03-15

    We present an investigation of the statistics of the gain fluctuations of a fiber optical parametric amplifier pumped with a temporally incoherent pump. We derive a simple expression for the probability distribution of the gain of the amplified optical signal. The gain statistics are shown to be a strong function of the signal detuning and allow the possibility of generating optical gain distributions with controllable long-tails. Very good agreement is found between this theory and the experimentally measured gain distributions of an incoherently pumped amplifier.

  10. Data analysis of gravitational-wave signals from spinning neutron stars. III. Detection statistics and computational requirements

    NASA Astrophysics Data System (ADS)

    Jaranowski, Piotr; Królak, Andrzej

    2000-03-01

    We develop the analytic and numerical tools for data analysis of the continuous gravitational-wave signals from spinning neutron stars for ground-based laser interferometric detectors. The statistical data analysis method that we investigate is maximum likelihood detection, which for the case of Gaussian noise reduces to matched filtering. We study in detail the statistical properties of the optimum functional that needs to be calculated in order to detect the gravitational-wave signal and estimate its parameters. We find it particularly useful to divide the parameter space into elementary cells such that the values of the optimal functional are statistically independent in different cells. We derive formulas for false alarm and detection probabilities both for the optimal and the suboptimal filters. We assess the computational requirements needed to do the signal search. We compare a number of criteria to build sufficiently accurate templates for our data analysis scheme. We verify the validity of our concepts and formulas by means of Monte Carlo simulations. We present algorithms by which one can estimate the parameters of the continuous signals accurately. We find, confirming earlier work of other authors, that given 100 Gflops of computational power, an all-sky search for an observation time of 7 days and a directed search for an observation time of 120 days are possible, whereas an all-sky search for 120 days of observation time is computationally prohibitive.
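
    Two pieces of this scheme are easy to sketch: the per-cell false alarm rate implied by a desired overall rate over N statistically independent cells, and the corresponding threshold when the detection statistic (often written 2F in this framework) is chi-squared with 4 degrees of freedom under Gaussian noise. The cell count and overall rate below are invented for illustration.

```python
import math

def cell_false_alarm(total_pf, n_cells):
    # Per-cell rate alpha such that N independent cells give a desired
    # overall false alarm rate: 1 - (1 - alpha)**N = total_pf.
    return 1.0 - (1.0 - total_pf) ** (1.0 / n_cells)

def threshold_2f(alpha):
    # Chi-squared with 4 d.o.f.: survival function
    # P(2F > x) = exp(-x/2) * (1 + x/2). Invert by bisection.
    lo, hi = 0.0, 200.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if math.exp(-mid / 2) * (1 + mid / 2) > alpha:
            lo = mid
        else:
            hi = mid
    return lo

alpha = cell_false_alarm(0.01, 10 ** 6)   # hypothetical: 1e6 cells,
thr = threshold_2f(alpha)                 # 1% overall false alarm
print(alpha, round(thr, 1))
```

    The steep growth of the cell count with observation time is what drives the computational-cost conclusions quoted in the abstract.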

  11. Maximum Likelihood Time-of-Arrival Estimation of Optical Pulses via Photon-Counting Photodetectors

    NASA Technical Reports Server (NTRS)

    Erkmen, Baris I.; Moision, Bruce E.

    2010-01-01

    Many optical imaging, ranging, and communications systems rely on the estimation of the arrival time of an optical pulse. Recently, such systems have been increasingly employing photon-counting photodetector technology, which changes the statistics of the observed photocurrent. This requires time-of-arrival estimators to be developed and their performances characterized. The statistics of the output of an ideal photodetector, which are well modeled as a Poisson point process, were considered. An analytical model was developed for the mean-square error of the maximum likelihood (ML) estimator, demonstrating two phenomena that cause deviations from the minimum achievable error at low signal power. An approximation was derived to the threshold at which the ML estimator essentially fails to provide better than a random guess of the pulse arrival time. Comparing the analytic model performance predictions to those obtained via simulations, it was verified that the model accurately predicts the ML performance over all regimes considered. There is little prior art that attempts to understand the fundamental limitations to time-of-arrival estimation from Poisson statistics. This work establishes both a simple mathematical description of the error behavior, and the associated physical processes that yield this behavior. Previous work on mean-square error characterization for ML estimators has predominantly focused on additive Gaussian noise. This work demonstrates that the discrete nature of the Poisson noise process leads to a distinctly different error behavior.
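
    A toy version of ML arrival-time estimation from Poisson photon counts, to make the setting concrete: photons are drawn from an inhomogeneous Poisson process whose intensity is a shifted pulse, and the delay is estimated by maximizing the Poisson log-likelihood over a grid. The pulse shape, rates, and grid are invented for illustration; the paper's analysis of the error floor and threshold effect is not reproduced.

```python
import numpy as np

def sample_poisson_arrivals(rate_fn, t_max, rate_max, rng):
    # Thinning (Lewis-Shedler): draw a homogeneous process at rate_max
    # and keep each point with probability rate_fn(t) / rate_max.
    t, kept = 0.0, []
    while True:
        t += rng.exponential(1.0 / rate_max)
        if t > t_max:
            return np.array(kept)
        if rng.uniform() < rate_fn(t) / rate_max:
            kept.append(t)

def log_lik(tau, arrivals, pulse):
    # Poisson log-likelihood: sum_i log lambda(t_i) minus the integral of
    # lambda; the integral is constant in tau while the shifted pulse
    # stays inside the observation window, so it is dropped here.
    return float(np.sum(np.log(pulse(arrivals - tau))))

pulse = lambda t: 0.01 + 50.0 * np.exp(-0.5 * (t / 0.2) ** 2)
true_tau = 4.0
rng = np.random.default_rng(6)
arrivals = sample_poisson_arrivals(lambda t: pulse(t - true_tau),
                                   10.0, 60.0, rng)

grid = np.linspace(2.0, 6.0, 801)
tau_hat = grid[np.argmax([log_lik(tau, arrivals, pulse) for tau in grid])]
print(round(tau_hat, 2))               # close to true_tau = 4.0
```

    At very low signal power the likelihood surface develops spurious maxima, which is the threshold phenomenon the paper characterizes.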

  12. On the Use of Statistics in Design and the Implications for Deterministic Computer Experiments

    NASA Technical Reports Server (NTRS)

    Simpson, Timothy W.; Peplinski, Jesse; Koch, Patrick N.; Allen, Janet K.

    1997-01-01

    Perhaps the most prevalent use of statistics in engineering design is through Taguchi's parameter and robust design -- using orthogonal arrays to compute signal-to-noise ratios in a process of design improvement. In our view, however, there is an equally exciting use of statistics in design that could become just as prevalent: it is the concept of metamodeling whereby statistical models are built to approximate detailed computer analysis codes. Although computers continue to get faster, analysis codes always seem to keep pace so that their computational time remains non-trivial. Through metamodeling, approximations of these codes are built that are orders of magnitude cheaper to run. These metamodels can then be linked to optimization routines for fast analysis, or they can serve as a bridge for integrating analysis codes across different domains. In this paper we first review metamodeling techniques that encompass design of experiments, response surface methodology, Taguchi methods, neural networks, inductive learning, and kriging. We discuss their existing applications in engineering design and then address the dangers of applying traditional statistical techniques to approximate deterministic computer analysis codes. We conclude with recommendations for the appropriate use of metamodeling techniques in given situations and how common pitfalls can be avoided.
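
    The metamodeling idea reduces to fitting a cheap statistical model to a handful of runs of an expensive code. A minimal response-surface sketch (the "analysis code" here is a stand-in quadratic, chosen so the fitted surface reproduces it exactly; real codes are only approximated, and a real study would use a designed experiment rather than random samples):

```python
import numpy as np

def expensive_analysis(x1, x2):
    # Stand-in for a costly simulation code (a quadratic truth here, so
    # the fitted response surface can reproduce it exactly).
    return (1 - x1) ** 2 + 2 * x2 ** 2 + x1 * x2

def quad_features(x1, x2):
    # Full quadratic response-surface basis in two variables.
    return np.column_stack([np.ones_like(x1), x1, x2,
                            x1 * x2, x1 ** 2, x2 ** 2])

rng = np.random.default_rng(7)
x1 = rng.uniform(-1, 1, 40)    # sampled design points
x2 = rng.uniform(-1, 1, 40)
beta, *_ = np.linalg.lstsq(quad_features(x1, x2),
                           expensive_analysis(x1, x2), rcond=None)

def surrogate(a, b):
    # Cheap metamodel evaluation in place of the analysis code.
    return (quad_features(np.atleast_1d(float(a)),
                          np.atleast_1d(float(b))) @ beta)[0]

print(surrogate(0.5, 0.5), expensive_analysis(0.5, 0.5))
```

    The surrogate can then be handed to an optimizer, which is the linkage to fast analysis the abstract describes; kriging and neural networks play the same role with more flexible bases.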

  13. Functional annotation of regulatory pathways.

    PubMed

    Pandey, Jayesh; Koyutürk, Mehmet; Kim, Yohan; Szpankowski, Wojciech; Subramaniam, Shankar; Grama, Ananth

    2007-07-01

    Standardized annotations of biomolecules in interaction networks (e.g. Gene Ontology) provide comprehensive understanding of the function of individual molecules. Extending such annotations to pathways is a critical component of functional characterization of cellular signaling at the systems level. We propose a framework for projecting gene regulatory networks onto the space of functional attributes using multigraph models, with the objective of deriving statistically significant pathway annotations. We first demonstrate that annotations of pairwise interactions do not generalize to indirect relationships between processes. Motivated by this result, we formalize the problem of identifying statistically overrepresented pathways of functional attributes. We establish the hardness of this problem by demonstrating the non-monotonicity of common statistical significance measures. We propose a statistical model that emphasizes the modularity of a pathway, evaluating its significance based on the coupling of its building blocks. We complement the statistical model by an efficient algorithm and software, Narada, for computing significant pathways in large regulatory networks. Comprehensive results from our methods applied to the Escherichia coli transcription network demonstrate that our approach is effective in identifying known, as well as novel biological pathway annotations. Narada is implemented in Java and is available at http://www.cs.purdue.edu/homes/jpandey/narada/.

  14. Statistical learning of novel graphotactic constraints in children and adults.

    PubMed

    Samara, Anna; Caravolas, Markéta

    2014-05-01

    The current study explored statistical learning processes in the acquisition of orthographic knowledge in school-aged children and skilled adults. Learning of novel graphotactic constraints on the position and context of letter distributions was induced by means of a two-phase learning task adapted from Onishi, Chambers, and Fisher (Cognition, 83 (2002) B13-B23). Following incidental exposure to pattern-embedding stimuli in Phase 1, participants' learning generalization was tested in Phase 2 with legality judgments about novel conforming/nonconforming word-like strings. Test phase performance was above chance, suggesting that both types of constraints were reliably learned even after relatively brief exposure. As hypothesized, signal detection theory d' analyses confirmed that learning permissible letter positions (d'=0.97) was easier than permissible neighboring letter contexts (d'=0.19). Adults were more accurate than children in all but a strict analysis of the contextual constraints condition. Consistent with the statistical learning perspective in literacy, our results suggest that statistical learning mechanisms contribute to children's and adults' acquisition of knowledge about graphotactic constraints similar to those existing in their orthography. Copyright © 2013 Elsevier Inc. All rights reserved.
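The d' sensitivity measure reported above is computed from hit and false-alarm rates; a minimal sketch (the rates are hypothetical, chosen only to land near the positional-constraint result):

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Signal detection theory sensitivity: z(hits) - z(false alarms)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# hypothetical endorsement rates for conforming vs. nonconforming strings
sensitivity = d_prime(0.69, 0.31)
```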

  15. Soft Mixer Assignment in a Hierarchical Generative Model of Natural Scene Statistics

    PubMed Central

    Schwartz, Odelia; Sejnowski, Terrence J.; Dayan, Peter

    2010-01-01

    Gaussian scale mixture models offer a top-down description of signal generation that captures key bottom-up statistical characteristics of filter responses to images. However, the pattern of dependence among the filters for this class of models is prespecified. We propose a novel extension to the gaussian scale mixture model that learns the pattern of dependence from observed inputs and thereby induces a hierarchical representation of these inputs. Specifically, we propose that inputs are generated by gaussian variables (modeling local filter structure), multiplied by a mixer variable that is assigned probabilistically to each input from a set of possible mixers. We demonstrate inference of both components of the generative model, for synthesized data and for different classes of natural images, such as a generic ensemble and faces. For natural images, the mixer variable assignments show invariances resembling those of complex cells in visual cortex; the statistics of the gaussian components of the model are in accord with the outputs of divisive normalization models. We also show how our model helps interrelate a wide range of models of image statistics and cortical processing. PMID:16999575
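The generative step of a gaussian scale mixture is simple to simulate; the sketch below (distribution choices and sizes are ours) shows how multiplying a gaussian by a positive mixer yields the heavy-tailed statistics typical of filter responses to natural images:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

g = rng.standard_normal(n)                      # local gaussian variable
v = rng.lognormal(mean=0.0, sigma=0.5, size=n)  # positive mixer variable
x = v * g                                       # gaussian scale mixture

# the mixture is heavy-tailed: excess kurtosis well above a gaussian's 0
excess_kurtosis = np.mean(x**4) / np.mean(x**2)**2 - 3.0
```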

  16. Statistical modeling of natural backgrounds in hyperspectral LWIR data

    NASA Astrophysics Data System (ADS)

    Truslow, Eric; Manolakis, Dimitris; Cooley, Thomas; Meola, Joseph

    2016-09-01

    Hyperspectral sensors operating in the long wave infrared (LWIR) have a wealth of applications including remote material identification and rare target detection. While statistical models for modeling surface reflectance in visible and near-infrared regimes have been well studied, models for the temperature and emissivity in the LWIR have not been rigorously investigated. In this paper, we investigate modeling hyperspectral LWIR data using a statistical mixture model for the emissivity and surface temperature. Statistical models for the surface parameters can be used to simulate surface radiances and at-sensor radiance which drives the variability of measured radiance and ultimately the performance of signal processing algorithms. Thus, having models that adequately capture data variation is extremely important for studying performance trades. The purpose of this paper is twofold. First, we study the validity of this model using real hyperspectral data, and compare the relative variability of hyperspectral data in the LWIR and visible and near-infrared (VNIR) regimes. Second, we illustrate how materials that are easily distinguished in the VNIR, may be difficult to separate when imaged in the LWIR.

  17. Application of artificial neural network to search for gravitational-wave signals associated with short gamma-ray bursts

    NASA Astrophysics Data System (ADS)

    Kim, Kyungmin; Harry, Ian W.; Hodge, Kari A.; Kim, Young-Min; Lee, Chang-Hwan; Lee, Hyun Kyu; Oh, John J.; Oh, Sang Hoon; Son, Edwin J.

    2015-12-01

    We apply a machine learning algorithm, the artificial neural network, to the search for gravitational-wave signals associated with short gamma-ray bursts (GRBs). The multi-dimensional samples, consisting of data corresponding to the statistical and physical quantities from the coherent search pipeline, are fed into the artificial neural network to distinguish simulated gravitational-wave signals from background noise artifacts. Our result shows that the data classification efficiency at a fixed false alarm probability (FAP) is improved by the artificial neural network in comparison to the conventional detection statistic. Specifically, the distance at 50% detection probability at a fixed false positive rate is increased by about 8%-14% for the considered waveform models. We also evaluate a few seconds of the gravitational-wave data segment using the trained networks and obtain the FAP. We suggest that the artificial neural network can be a complementary method to the conventional detection statistic for identifying gravitational-wave signals related to short GRBs.

  18. Untargeted metabolomics analysis reveals dynamic changes in azelaic acid- and salicylic acid derivatives in LPS-treated Nicotiana tabacum cells.

    PubMed

    Mhlongo, M I; Tugizimana, F; Piater, L A; Steenkamp, P A; Madala, N E; Dubery, I A

    2017-01-22

    To counteract biotic stress factors, plants employ multilayered defense mechanisms responsive to pathogen-derived elicitor molecules, and regulated by different phytohormones and signaling molecules. Here, lipopolysaccharide (LPS), a microbe-associated molecular pattern (MAMP) molecule, was used to induce defense responses in Nicotiana tabacum cell suspensions. Intracellular metabolites were extracted with methanol and analyzed using a liquid chromatography-mass spectrometry (UHPLC-qTOF-MS/MS) platform. The generated data were processed and examined with multivariate and univariate statistical tools. The results show time-dependent dynamic changes and accumulation of glycosylated signaling molecules, specifically those of azelaic acid, salicylic acid and methyl-salicylate as contributors to the altered metabolomic state in LPS-treated cells. Copyright © 2016 Elsevier Inc. All rights reserved.

  19. A CLT on the SNR of Diagonally Loaded MVDR Filters

    NASA Astrophysics Data System (ADS)

    Rubio, Francisco; Mestre, Xavier; Hachem, Walid

    2012-08-01

    This paper studies the fluctuations of the signal-to-noise ratio (SNR) of minimum variance distortionless response (MVDR) filters implementing diagonal loading in the estimation of the covariance matrix. Previous results in the signal processing literature are generalized and extended by considering both spatially and temporally correlated samples. Specifically, a central limit theorem (CLT) is established for the fluctuations of the SNR of the diagonally loaded MVDR filter, under both supervised and unsupervised training settings in adaptive filtering applications. Our second-order analysis is based on the Nash-Poincaré inequality and the integration-by-parts formula for Gaussian functionals, as well as classical tools from statistical asymptotic theory. Numerical evaluations validating the accuracy of the CLT confirm the asymptotic Gaussianity of the fluctuations of the SNR of the MVDR filter.
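A diagonally loaded MVDR filter of the kind analyzed here can be sketched in a few lines (array size, snapshot count, and loading factor are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 4, 50                       # sensors, snapshots
s = np.ones(m) / np.sqrt(m)        # steering vector of the desired signal

# sample covariance from noisy snapshots, then diagonal loading
x = rng.standard_normal((m, n))
R_hat = x @ x.T / n
delta = 0.1                        # loading factor
R_dl = R_hat + delta * np.eye(m)

# diagonally loaded MVDR weights: w = R^{-1} s / (s^H R^{-1} s)
Ri_s = np.linalg.solve(R_dl, s)
w = Ri_s / (s @ Ri_s)

# distortionless constraint: unit response in the look direction
response = w @ s
```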

  20. Discrete photon statistics from continuous microwave measurements

    NASA Astrophysics Data System (ADS)

    Virally, Stéphane; Simoneau, Jean Olivier; Lupien, Christian; Reulet, Bertrand

    2016-04-01

    Photocount statistics are an important tool for the characterization of electromagnetic fields, especially for fields with an irrelevant phase. In the microwave domain, continuous rather than discrete measurements are the norm. Using a different approach, we recover discrete photon statistics from the cumulants of a continuous distribution of field quadrature measurements. The use of cumulants allows the separation between the signal of interest and experimental noise. Using a parametric amplifier as the first stage of the amplification chain, we extract useful data from up to the sixth cumulant of the continuous distribution of a coherent field, hence recovering up to the third moment of the discrete statistics associated with a signal with much less than one average photon.
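The key property exploited above -- cumulants of independent contributions add, so gaussian amplifier noise affects nothing beyond the second cumulant -- can be checked numerically (the distributions and sample sizes are our own stand-ins):

```python
import numpy as np

def cumulants(samples):
    """First four cumulants from the raw moments of a 1-D sample."""
    m = [np.mean(samples**k) for k in range(1, 5)]
    k1 = m[0]
    k2 = m[1] - m[0]**2
    k3 = m[2] - 3*m[1]*m[0] + 2*m[0]**3
    k4 = m[3] - 4*m[2]*m[0] - 3*m[1]**2 + 12*m[1]*m[0]**2 - 6*m[0]**4
    return k1, k2, k3, k4

rng = np.random.default_rng(2)
n = 200_000
signal = rng.exponential(scale=1.0, size=n)   # non-Gaussian "signal"
noise = rng.standard_normal(n)                # independent amplifier noise

# cumulants of independent variables add; a gaussian has k3 = k4 = 0,
# so the third cumulant of signal + noise matches that of the signal
k_sig = cumulants(signal)
k_tot = cumulants(signal + noise)
```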

  1. Towards simulating and quantifying the light-cone EoR 21-cm signal

    NASA Astrophysics Data System (ADS)

    Mondal, Rajesh; Bharadwaj, Somnath; Datta, Kanan K.

    2018-02-01

    The light-cone (LC) effect causes the Epoch of Reionization (EoR) 21-cm signal T_b(n̂, ν) to evolve significantly along the line-of-sight (LoS) direction ν. In the first part of this paper, we present a method to properly incorporate the LC effect in simulations of the EoR 21-cm signal that include peculiar velocities. Subsequently, we discuss how to quantify the second-order statistics of the EoR 21-cm signal in the presence of the LC effect. We demonstrate that the 3D power spectrum P(k) fails to quantify the entire information because it assumes the signal to be ergodic and periodic, whereas the LC effect breaks these conditions along the LoS. Considering a LC simulation centred at redshift 8, where the mean neutral fraction drops from 0.65 to 0.35 across the box, we find that P(k) misses ~40 per cent of the information at the two ends of the 17.41 MHz simulation bandwidth. The multifrequency angular power spectrum (MAPS) C_ℓ(ν1, ν2) quantifies the statistical properties of T_b(n̂, ν) without assuming the signal to be ergodic and periodic along the LoS. We expect this to quantify the entire statistical information of the EoR 21-cm signal. We apply MAPS to our LC simulation and present preliminary results for the EoR 21-cm signal.
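A 1-D flat-sky sketch of the MAPS estimator (FFT modes standing in for spherical harmonics; cube size and normalization are illustrative, not the paper's):

```python
import numpy as np

def maps(cube):
    """Multifrequency angular power spectrum C_ell(nu1, nu2) of a
    (frequency, angle) brightness-temperature cube, one matrix per
    angular mode ell. Returns shape (n_ell, n_nu, n_nu)."""
    tilde = np.fft.fft(cube, axis=1)      # transform along the sky direction
    n_ang = cube.shape[1]
    # cross-correlate every frequency pair at each angular mode;
    # no ergodicity along the frequency (LoS) axis is assumed
    return np.einsum('il,jl->lij', tilde, tilde.conj()).real / n_ang**2

rng = np.random.default_rng(3)
cube = rng.standard_normal((8, 64))       # 8 frequencies, 64 sky pixels
c_ell = maps(cube)
```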

  2. Good pharmacovigilance practices: technology enabled.

    PubMed

    Nelson, Robert C; Palsulich, Bruce; Gogolak, Victor

    2002-01-01

    The assessment of spontaneous reports is most effective when it is conducted within a defined and rigorous process. The framework for good pharmacovigilance process (GPVP) is proposed as a subset of good postmarketing surveillance process (GPMSP), a functional structure for both a public health and corporate risk management strategy. GPVP has good practices that implement each step within a defined process. These practices are designed to efficiently and effectively detect and alert the drug safety professional to new and potentially important information on drug-associated adverse reactions. They are enabled by applied technology designed specifically for the review and assessment of spontaneous reports. Specific practices include rules-based triage, active query prompts for severe organ insults, contextual single-case evaluation, statistical proportionality and correlational checks, case-series analyses, and templates for signal work-up and interpretation. These practices and the overall GPVP are supported by state-of-the-art web-based systems with powerful analytical engines, workflow and audit trails to allow validated systems support for valid drug safety signalling efforts. It is also important to understand that a process has a defined set of steps, and no single step can stand independently. Specifically, advanced use of technical alerting methods in isolation can mislead and allow one to misjudge priorities and relative value. In the end, pharmacovigilance is a clinical art and a component process of the science of pharmacoepidemiology and risk management.

  3. Measurement of the absolute branching fraction of Ds0*(2317)± → π0 Ds±

    NASA Astrophysics Data System (ADS)

    Ablikim, M.; Achasov, M. N.; Ahmed, S.; Albrecht, M.; Amoroso, A.; An, F. F.; An, Q.; Bai, J. Z.; Bai, Y.; Bakina, O.; Baldini Ferroli, R.; Ban, Y.; Bennett, D. W.; Bennett, J. V.; Berger, N.; Bertani, M.; Bettoni, D.; Bian, J. M.; Bianchi, F.; Boger, E.; Boyko, I.; Briere, R. A.; Cai, H.; Cai, X.; Cakir, O.; Calcaterra, A.; Cao, G. F.; Cetin, S. A.; Chai, J.; Chang, J. F.; Chelkov, G.; Chen, G.; Chen, H. S.; Chen, J. C.; Chen, M. L.; Chen, P. L.; Chen, S. J.; Chen, X. R.; Chen, Y. B.; Chu, X. K.; Cibinetto, G.; Dai, H. L.; Dai, J. P.; Dbeyssi, A.; Dedovich, D.; Deng, Z. Y.; Denig, A.; Denysenko, I.; Destefanis, M.; de Mori, F.; Ding, Y.; Dong, C.; Dong, J.; Dong, L. Y.; Dong, M. Y.; Dou, Z. L.; Du, S. X.; Duan, P. F.; Fang, J.; Fang, S. S.; Fang, X.; Fang, Y.; Farinelli, R.; Fava, L.; Fegan, S.; Feldbauer, F.; Felici, G.; Feng, C. Q.; Fioravanti, E.; Fritsch, M.; Fu, C. D.; Gao, Q.; Gao, X. L.; Gao, Y.; Gao, Y. G.; Gao, Z.; Garzia, I.; Goetzen, K.; Gong, L.; Gong, W. X.; Gradl, W.; Greco, M.; Gu, M. H.; Gu, S.; Gu, Y. T.; Guo, A. Q.; Guo, L. B.; Guo, R. P.; Guo, Y. P.; Haddadi, Z.; Han, S.; Hao, X. Q.; Harris, F. A.; He, K. L.; He, X. Q.; Heinsius, F. H.; Held, T.; Heng, Y. K.; Holtmann, T.; Hou, Z. L.; Hu, C.; Hu, H. M.; Hu, T.; Hu, Y.; Huang, G. S.; Huang, J. S.; Huang, X. T.; Huang, X. Z.; Huang, Z. L.; Hussain, T.; Ikegami Andersson, W.; Ji, Q.; Ji, Q. P.; Ji, X. B.; Ji, X. L.; Jiang, X. S.; Jiang, X. Y.; Jiao, J. B.; Jiao, Z.; Jin, D. P.; Jin, S.; Jin, Y.; Johansson, T.; Julin, A.; Kalantar-Nayestanaki, N.; Kang, X. L.; Kang, X. S.; Kavatsyuk, M.; Ke, B. C.; Khan, T.; Khoukaz, A.; Kiese, P.; Kliemt, R.; Koch, L.; Kolcu, O. B.; Kopf, B.; Kornicer, M.; Kuemmel, M.; Kuessner, M.; Kuhlmann, M.; Kupsc, A.; Kühn, W.; Lange, J. S.; Lara, M.; Larin, P.; Lavezzi, L.; Leiber, S.; Leithoff, H.; Leng, C.; Li, C.; Li, Cheng; Li, D. M.; Li, F.; Li, F. Y.; Li, G.; Li, H. B.; Li, H. J.; Li, J. C.; Li, J. Q.; Li, K. J.; Li, Kang; Li, Ke; Li, Lei; Li, P. L.; Li, P. 
R.; Li, Q. Y.; Li, T.; Li, W. D.; Li, W. G.; Li, X. L.; Li, X. N.; Li, X. Q.; Li, Z. B.; Liang, H.; Liang, Y. F.; Liang, Y. T.; Liao, G. R.; Lin, D. X.; Liu, B.; Liu, B. J.; Liu, C. X.; Liu, D.; Liu, F. H.; Liu, Fang; Liu, Feng; Liu, H. B.; Liu, H. M.; Liu, Huanhuan; Liu, Huihui; Liu, J. B.; Liu, J. P.; Liu, J. Y.; Liu, K.; Liu, K. Y.; Liu, Ke; Liu, L. D.; Liu, P. L.; Liu, Q.; Liu, S. B.; Liu, X.; Liu, Y. B.; Liu, Z. A.; Liu, Zhiqing; Long, Y. F.; Lou, X. C.; Lu, H. J.; Lu, J. G.; Lu, Y.; Lu, Y. P.; Luo, C. L.; Luo, M. X.; Luo, X. L.; Lyu, X. R.; Ma, F. C.; Ma, H. L.; Ma, L. L.; Ma, M. M.; Ma, Q. M.; Ma, T.; Ma, X. N.; Ma, X. Y.; Ma, Y. M.; Maas, F. E.; Maggiora, M.; Malik, Q. A.; Mao, Y. J.; Mao, Z. P.; Marcello, S.; Meng, Z. X.; Messchendorp, J. G.; Mezzadri, G.; Min, J.; Min, T. J.; Mitchell, R. E.; Mo, X. H.; Mo, Y. J.; Morales Morales, C.; Morello, G.; Muchnoi, N. Yu.; Muramatsu, H.; Musiol, P.; Mustafa, A.; Nefedov, Y.; Nerling, F.; Nikolaev, I. B.; Ning, Z.; Nisar, S.; Niu, S. L.; Niu, X. Y.; Olsen, S. L.; Ouyang, Q.; Pacetti, S.; Pan, Y.; Papenbrock, M.; Patteri, P.; Pelizaeus, M.; Pellegrino, J.; Peng, H. P.; Peters, K.; Pettersson, J.; Ping, J. L.; Ping, R. G.; Pitka, A.; Poling, R.; Prasad, V.; Qi, H. R.; Qi, M.; Qian, S.; Qiao, C. F.; Qin, N.; Qin, X. S.; Qin, Z. H.; Qiu, J. F.; Rashid, K. H.; Redmer, C. F.; Richter, M.; Ripka, M.; Rolo, M.; Rong, G.; Rosner, Ch.; Ruan, X. D.; Sarantsev, A.; Savrié, M.; Schnier, C.; Schoenning, K.; Shan, W.; Shao, M.; Shen, C. P.; Shen, P. X.; Shen, X. Y.; Sheng, H. Y.; Song, J. J.; Song, W. M.; Song, X. Y.; Sosio, S.; Sowa, C.; Spataro, S.; Sun, G. X.; Sun, J. F.; Sun, L.; Sun, S. S.; Sun, X. H.; Sun, Y. J.; Sun, Y. K.; Sun, Y. Z.; Sun, Z. J.; Sun, Z. T.; Tang, C. J.; Tang, G. Y.; Tang, X.; Tapan, I.; Tiemens, M.; Tsednee, B.; Uman, I.; Varner, G. S.; Wang, B.; Wang, B. L.; Wang, D.; Wang, D. Y.; Wang, Dan; Wang, K.; Wang, L. L.; Wang, L. S.; Wang, M.; Wang, Meng; Wang, P.; Wang, P. L.; Wang, W. P.; Wang, X. 
F.; Wang, Y.; Wang, Y. D.; Wang, Y. F.; Wang, Y. Q.; Wang, Z.; Wang, Z. G.; Wang, Z. H.; Wang, Z. Y.; Wang, Zongyuan; Weber, T.; Wei, D. H.; Weidenkaff, P.; Wen, S. P.; Wiedner, U.; Wolke, M.; Wu, L. H.; Wu, L. J.; Wu, Z.; Xia, L.; Xia, X.; Xia, Y.; Xiao, D.; Xiao, H.; Xiao, Y. J.; Xiao, Z. J.; Xie, Y. G.; Xie, Y. H.; Xiong, X. A.; Xiu, Q. L.; Xu, G. F.; Xu, J. J.; Xu, L.; Xu, Q. J.; Xu, Q. N.; Xu, X. P.; Yan, L.; Yan, W. B.; Yan, W. C.; Yan, W. C.; Yan, Y. H.; Yang, H. J.; Yang, H. X.; Yang, L.; Yang, Y. H.; Yang, Y. X.; Yang, Yifan; Ye, M.; Ye, M. H.; Yin, J. H.; You, Z. Y.; Yu, B. X.; Yu, C. X.; Yu, J. S.; Yuan, C. Z.; Yuan, Y.; Yuncu, A.; Zafar, A. A.; Zallo, A.; Zeng, Y.; Zeng, Z.; Zhang, B. X.; Zhang, B. Y.; Zhang, C. C.; Zhang, D. H.; Zhang, H. H.; Zhang, H. Y.; Zhang, J.; Zhang, J. L.; Zhang, J. Q.; Zhang, J. W.; Zhang, J. Y.; Zhang, J. Z.; Zhang, K.; Zhang, L.; Zhang, S. Q.; Zhang, X. Y.; Zhang, Y. H.; Zhang, Y. T.; Zhang, Yang; Zhang, Yao; Zhang, Yu; Zhang, Z. H.; Zhang, Z. P.; Zhang, Z. Y.; Zhao, G.; Zhao, J. W.; Zhao, J. Y.; Zhao, J. Z.; Zhao, Lei; Zhao, Ling; Zhao, M. G.; Zhao, Q.; Zhao, S. J.; Zhao, T. C.; Zhao, Y. B.; Zhao, Z. G.; Zhemchugov, A.; Zheng, B.; Zheng, J. P.; Zheng, W. J.; Zheng, Y. H.; Zhong, B.; Zhou, L.; Zhou, X.; Zhou, X. K.; Zhou, X. R.; Zhou, X. Y.; Zhou, Y. X.; Zhu, J.; Zhu, J.; Zhu, K.; Zhu, K. J.; Zhu, S.; Zhu, S. H.; Zhu, X. L.; Zhu, Y. C.; Zhu, Y. S.; Zhu, Z. A.; Zhuang, J.; Zou, B. S.; Zou, J. H.; Besiii Collaboration

    2018-03-01

    The process e+e- → Ds*+ Ds0*(2317)- + c.c. is observed for the first time with a data sample of 567 pb-1 collected with the BESIII detector operating at the BEPCII collider at a center-of-mass energy √s = 4.6 GeV. The statistical significance of the Ds0*(2317)± signal is 5.8σ and the mass is measured to be (2318.3 ± 1.2 ± 1.2) MeV/c². The absolute branching fraction B(Ds0*(2317)± → π0 Ds±) is measured for the first time as 1.00 +0.00/-0.14 (stat) +0.00/-0.14 (syst), where the uncertainties are statistical and systematic, respectively.

  4. Fronthaul evolution: From CPRI to Ethernet

    NASA Astrophysics Data System (ADS)

    Gomes, Nathan J.; Chanclou, Philippe; Turnbull, Peter; Magee, Anthony; Jungnickel, Volker

    2015-12-01

    It is proposed that using Ethernet in the fronthaul, between base station baseband unit (BBU) pools and remote radio heads (RRHs), can bring a number of advantages, from the use of lower-cost equipment and infrastructure shared with fixed access networks to statistical multiplexing gains and performance optimised through probe-based monitoring and software-defined networking. However, a number of challenges exist: ultra-high bit-rate requirements from the transport of increased-bandwidth radio streams for multiple antennas in future mobile networks, and low latency and jitter to meet delay requirements and the demands of joint processing. A new fronthaul functional division is proposed which can alleviate the most demanding bit-rate requirements by transporting baseband signals instead of sampled radio waveforms, and enable statistical multiplexing gains. Delay and synchronisation issues remain to be solved.

  5. Statistical Properties of a Two-Stage Procedure for Creating Sky Flats

    NASA Astrophysics Data System (ADS)

    Crawford, R. W.; Trueblood, M.

    2004-05-01

    Accurate flat fielding is an essential factor in image calibration and good photometry, yet no single method for creating flat fields is both practical and effective in all cases. At Winer Observatory, robotic telescope operation and the research program of Near Earth Object follow-up astrometry favor the use of sky flats formed from the many images that are acquired during a night. This paper reviews the statistical properties of the median-combine process used to create sky flats and discusses a computationally efficient procedure for two-stage combining of many images to form sky flats with relatively high signal-to-noise ratio (SNR). This procedure is in use at Winer for the flat field calibration of unfiltered images taken for NEO follow-up astrometry.
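The two-stage combine can be sketched as follows (frame count, image size, and group size are illustrative): median-combine the images in groups, then median-combine the group medians.

```python
import numpy as np

def two_stage_median(images, group_size=4):
    """Median-combine images in groups, then median-combine the group
    medians: a computationally friendly stand-in for one big median."""
    groups = [images[i:i + group_size]
              for i in range(0, len(images), group_size)]
    partial = [np.median(np.stack(g), axis=0) for g in groups]
    return np.median(np.stack(partial), axis=0)

rng = np.random.default_rng(4)
# 16 synthetic sky frames: unit flat response plus noise
frames = [np.ones((32, 32)) + 0.05 * rng.standard_normal((32, 32))
          for _ in range(16)]
flat = two_stage_median(frames)
```

The per-group medians reject outliers (stars, cosmic rays) before the final combine, which is what makes sky frames usable as flats.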

  6. A motion-tolerant approach for monitoring SpO2 and heart rate using photoplethysmography signal with dual frame length processing and multi-classifier fusion.

    PubMed

    Fan, Feiyi; Yan, Yuepeng; Tang, Yongzhong; Zhang, Hao

    2017-12-01

    Monitoring pulse oxygen saturation (SpO2) and heart rate (HR) using photoplethysmography (PPG) signal contaminated by a motion artifact (MA) remains a difficult problem, especially when the oximeter is not equipped with a 3-axis accelerometer for adaptive noise cancellation. In this paper, we report a pioneering investigation on the impact of altering the frame length of Molgedey and Schuster independent component analysis (ICAMS) on performance, design a multi-classifier fusion strategy for selecting the PPG-correlated signal component, and propose a novel approach to extract SpO2 and HR readings from PPG signal contaminated by strong MA interference. The algorithm comprises multiple stages, including dual frame length ICAMS, a multi-classifier-based PPG-correlated component selector, line spectral analysis, tree-based HR monitoring, and post-processing. Our approach is evaluated by multi-subject tests. The root mean square error (RMSE) is calculated for each trial. Three statistical metrics are selected as performance evaluation criteria: mean RMSE, median RMSE and the standard deviation (SD) of RMSE. The experimental results demonstrate that a shorter ICAMS analysis window probably results in better performance in SpO2 estimation. Notably, the designed multi-classifier signal component selector achieved satisfactory performance. The subject tests indicate that our algorithm outperforms other baseline methods regarding accuracy under most criteria. The proposed work can contribute to improving the performance of current pulse oximetry and personal wearable monitoring devices. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. Trial-Level Regressor Modulation for Functional Magnetic Resonance Imaging Designs Requiring Strict Periodicity of Stimulus Presentations: Illustrated Using a Go/No-Go Task

    PubMed Central

    Motes, Michael A; Rao, Neena K; Shokri-Kojori, Ehsan; Chiang, Hsueh-Sheng; Kraut, Michael A; Hart, John

    2017-01-01

    Computer-based assessment of many cognitive processes (e.g., anticipatory and response readiness processes) requires the use of invariant stimulus display times (SDT) and intertrial intervals (ITI). Although designs with invariant SDTs and ITIs have been used in functional magnetic resonance imaging (fMRI) research, such designs are problematic for fMRI studies because of collinearity issues. This study examined regressor modulation with trial-level reaction times (RT) as a method for improving signal detection in a go/no-go task with invariant SDTs and ITIs. The effects of modulating the go regressor were evaluated with respect to the detection of BOLD signal-change for the no-go condition. BOLD signal-change to no-go stimuli was examined when the go regressor was based on a (a) canonical hemodynamic response function (HRF), (b) RT-based amplitude-modulated (AM) HRF, and (c) RT-based amplitude and duration modulated (A&DM) HRF. Reaction time-based modulation reduced the collinearity between the go and no-go regressors, with A&DM producing the greatest reductions in correlations between the regressors, and greater reductions in the correlations between regressors were associated with longer mean RTs and greater RT variability. Reaction time-based modulation increased statistical power for detecting group-level no-go BOLD signal-change across a broad set of brain regions. The findings show the efficacy of using regressor modulation to increase power in detecting BOLD signal-change in fMRI studies in which circumstances dictate the use of temporally invariant stimulus presentations. PMID:29276390
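A sketch of RT-based amplitude modulation of a go regressor (the single-gamma HRF shape, trial timings, and RTs below are illustrative stand-ins, not the study's exact canonical HRF):

```python
import numpy as np
from math import gamma

def hrf(t, a=6.0, b=1.0):
    """Single-gamma hemodynamic response function sketch (a, b assumed)."""
    t = np.asarray(t, dtype=float)
    return np.where(t > 0, t**(a - 1) * np.exp(-t / b) / (gamma(a) * b**a), 0.0)

n_scans, tr = 100, 1.0
t = np.arange(n_scans) * tr
onsets = [10, 40, 70]            # trial onsets in seconds (hypothetical)
rts = [0.45, 0.80, 0.60]         # per-trial reaction times (hypothetical)

# amplitude-modulated stick functions: each trial's height scaled by its
# RT relative to the mean, then convolved with the HRF
sticks = np.zeros(n_scans)
for onset, rt in zip(onsets, rts):
    sticks[int(onset / tr)] = rt / np.mean(rts)

regressor = np.convolve(sticks, hrf(t), mode='full')[:n_scans]
```

Because trial heights now vary with RT, the go regressor is no longer a shifted copy of the no-go regressor, which is what reduces their collinearity.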

  9. The use of vision-based image quality metrics to predict low-light performance of camera phones

    NASA Astrophysics Data System (ADS)

    Hultgren, B.; Hertel, D.

    2010-01-01

    Small digital camera modules such as those in mobile phones have become ubiquitous. Their low-light performance is of utmost importance since a high percentage of images are made under low lighting conditions where image quality failure may occur due to blur, noise, and/or underexposure. These modes of image degradation are not mutually exclusive: they share common roots in the physics of the imager, the constraints of image processing, and the general trade-off situations in camera design. A comprehensive analysis of failure modes is needed in order to understand how their interactions affect overall image quality. Low-light performance is reported for DSLR, point-and-shoot, and mobile phone cameras. The measurements target blur, noise, and exposure error. Image sharpness is evaluated from three different physical measurements: static spatial frequency response, handheld motion blur, and statistical information loss due to image processing. Visual metrics for sharpness, graininess, and brightness are calculated from the physical measurements, and displayed as orthogonal image quality metrics to illustrate the relative magnitude of image quality degradation as a function of subject illumination. The impact of each of the three sharpness measurements on overall sharpness quality is displayed for different light levels. The power spectrum of the statistical information target is a good representation of natural scenes, thus providing a defined input signal for the measurement of power-spectrum based signal-to-noise ratio to characterize overall imaging performance.

  10. A Full Dynamic Compound Inverse Method for output-only element-level system identification and input estimation from earthquake response signals

    NASA Astrophysics Data System (ADS)

    Pioldi, Fabio; Rizzi, Egidio

    2016-08-01

    This paper proposes a new output-only element-level system identification and input estimation technique, towards the simultaneous identification of modal parameters, input excitation time history and structural features at the element-level by adopting earthquake-induced structural response signals. The method, named Full Dynamic Compound Inverse Method (FDCIM), releases strong assumptions of earlier element-level techniques, by working with a two-stage iterative algorithm. Jointly, a Statistical Average technique, a modification process and a parameter projection strategy are adopted at each stage to achieve stronger convergence for the identified estimates. The proposed method works in a deterministic way and is completely developed in State-Space form. Further, it does not require continuous- to discrete-time transformations and does not depend on initialization conditions. Synthetic earthquake-induced response signals from different shear-type buildings are generated to validate the implemented procedure, also with noise-corrupted cases. The achieved results provide a necessary condition to demonstrate the effectiveness of the proposed identification method.

  11. Study of composition of espresso coffee prepared from various roast degrees of Coffea arabica L. coffee beans.

    PubMed

    Kučera, Lukáš; Papoušek, Roman; Kurka, Ondřej; Barták, Petr; Bednář, Petr

    2016-05-15

    Espresso coffee samples prepared at various roasting degrees, defined according to the basic conventional classification (light, medium, medium-dark and dark roasted), were analyzed by ultra-performance liquid chromatography/tandem mass spectrometry. The raw data obtained were processed using multivariate statistical analysis (Principal Component Analysis, PCA) to evaluate chemical differences among the roasting degrees (the untargeted part of the study). All four roasting degrees were resolved in the corresponding score plot. Orthogonal Projections to Latent Structures provided signals of significant markers describing the differences among particular roasting degrees. Detailed interpretation of those signals by targeted LC/MS(2) analysis revealed four groups of compounds. The first two groups involve chlorogenic acids and related lactones. The signals of the other two sets of markers were ascribed to some specific atractylosides and particular melanoidins. Ratios of the contents of selected representatives of each group to the sum of all identified markers are proposed as definitive parameters for determining the roasting degree of Brazilian Arabica coffee. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. Influence of Waveform Characteristics on LiDAR Ranging Accuracy and Precision

    PubMed Central

    Yang, Bingwei; Xie, Xinhao; Li, Duan

    2018-01-01

    Time-of-flight (TOF) based light detection and ranging (LiDAR) calculates distance from the flight time between start and stop signals. In our lab-built LiDAR, two ranging systems measure this flight time: a time-to-digital converter (TDC), which counts the time between trigger signals, and an analog-to-digital converter (ADC), which processes the sampled start/stop pulse waveforms for time estimation. We study the influence of waveform characteristics on the ranging accuracy and precision of the two systems. Comparing waveform-based ranging (WR) with analog discrete-return ranging (AR), a peak detection method (WR-PK) shows the best ranging performance because of its short execution time, high ranging accuracy, and stable precision. Based on the maximal information coefficient (MIC), a novel statistical measure of dependence, WR-PK precision has a strong linear relationship with the standard deviation of the received pulse width. Thus, keeping the received pulse width as stable as possible while measuring a constant distance can improve ranging precision. PMID:29642639
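A WR-PK-style peak-detection ranging sketch (sampling rate, pulse shape, and delay are our own toy values, not the lab system's):

```python
import numpy as np

def tof_by_peak(start_pulse, stop_pulse, fs):
    """WR-PK-style ranging: time of flight from the offset between the
    peak samples of the digitized start and stop pulses."""
    return (np.argmax(stop_pulse) - np.argmax(start_pulse)) / fs

fs = 1e9                                    # assumed 1 GS/s ADC
t = np.arange(2000) / fs
start = np.exp(-((t - 200e-9) / 10e-9)**2)  # Gaussian start pulse
delay = 667e-9                              # round-trip delay of ~100 m
stop = np.exp(-((t - (200e-9 + delay)) / 10e-9)**2)

tof = tof_by_peak(start, stop, fs)
distance = 0.5 * tof * 3e8                  # half the round trip times c
```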

  13. Predicting the performance of linear optical detectors in free space laser communication links

    NASA Astrophysics Data System (ADS)

    Farrell, Thomas C.

    2018-05-01

    While the fundamental performance limit for optical communications is set by the quantum nature of light, in practical systems background light, dark current, and thermal noise of the electronics also degrade performance. In this paper, we derive a set of equations predicting the performance of PIN diodes and linear mode avalanche photo diodes (APDs) in the presence of such noise sources. Electrons generated by signal, background, and dark current shot noise are well modeled in PIN diodes as Poissonian statistical processes. In APDs, on the other hand, the amplifying effects of the device result in statistics that are distinctly non-Poissonian. Thermal noise is well modeled as Gaussian. In this paper, we appeal to the central limit theorem and treat both the variability of the signal and the sum of noise sources as Gaussian. Comparison against Monte-Carlo simulation of PIN diode performance (where we do model shot noise with draws from a Poissonian distribution) validates the legitimacy of this approximation. On-off keying, M-ary pulse position, and binary differential phase shift keying modulation are modeled. We conclude with examples showing how the equations may be used in a link budget to estimate the performance of optical links using linear receivers.
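    The Gaussian approximation described above leads to closed-form error rates. Below is a minimal sketch for on-off keying using the textbook Q-factor formula, assuming a decision threshold placed between the two conditional Gaussians; it is not necessarily the exact set of equations derived in the paper.

```python
from math import erfc, sqrt

def qfunc(x):
    """Gaussian tail probability Q(x) = P(N(0,1) > x)."""
    return 0.5 * erfc(x / sqrt(2.0))

def ook_ber(mu1, mu0, sigma1, sigma0):
    """Gaussian-approximation bit error rate for on-off keying.
    With the decision threshold set near where the two conditional
    Gaussians intersect, BER ~ Q(Qfactor) with
    Qfactor = (mu1 - mu0) / (sigma1 + sigma0)."""
    qf = (mu1 - mu0) / (sigma1 + sigma0)
    return qfunc(qf)

# Example: a Q-factor of 6 gives BER ~ 1e-9, the classic link-budget target
ber = ook_ber(mu1=12.0, mu0=0.0, sigma1=1.0, sigma0=1.0)
```

    In a link budget, mu and sigma would be computed from signal, background, and dark-current photocurrents plus thermal noise, as the paper describes.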

  14. Multifractal Properties of Process Control Variables

    NASA Astrophysics Data System (ADS)

    Domański, Paweł D.

    2017-06-01

    A control system is an integral element of any industrial installation, and its quality significantly affects overall process performance. Assessing whether a control system needs improvement requires relevant and constructive measures. Various methods exist, such as time-domain indices, minimum variance benchmarks, Gaussian and non-Gaussian statistical factors, and fractal and entropy indexes. The majority of approaches use time series of control variables and are able to capture many phenomena, but process complexities and human interventions cause effects that are hardly visible to standard measures. It is shown that signals originating from industrial installations have multifractal properties, and that such an analysis may extend the standard approach to further observations. The work is based on industrial and simulation data. The analysis delivers additional insight into the properties of the control system and the process, helping to discover internal dependencies and human factors that are otherwise hardly detectable.

  15. On-line Machine Learning and Event Detection in Petascale Data Streams

    NASA Astrophysics Data System (ADS)

    Thompson, David R.; Wagstaff, K. L.

    2012-01-01

    Traditional statistical data mining involves off-line analysis in which all data are available and equally accessible. However, petascale datasets have challenged this premise since it is often impossible to store, let alone analyze, the relevant observations. This has led the machine learning community to investigate adaptive processing chains where data mining is a continuous process. Here pattern recognition permits triage and followup decisions at multiple stages of a processing pipeline. Such techniques can also benefit new astronomical instruments such as the Large Synoptic Survey Telescope (LSST) and Square Kilometre Array (SKA) that will generate petascale data volumes. We summarize some machine learning perspectives on real time data mining, with representative cases of astronomical applications and event detection in high volume datastreams. The first is a "supervised classification" approach currently used for transient event detection at the Very Long Baseline Array (VLBA). It injects known signals of interest - faint single-pulse anomalies - and tunes system parameters to recover these events. This permits meaningful event detection for diverse instrument configurations and observing conditions whose noise cannot be well-characterized in advance. Second, "semi-supervised novelty detection" finds novel events based on statistical deviations from previous patterns. It detects outlier signals of interest while considering known examples of false alarm interference. Applied to data from the Parkes pulsar survey, the approach identifies anomalous "peryton" phenomena that do not match previous event models. Finally, we consider online light curve classification that can trigger adaptive followup measurements of candidate events. Classifier performance analyses suggest optimal survey strategies, and permit principled followup decisions from incomplete data. 
These examples trace a broad range of algorithm possibilities available for online astronomical data mining. This talk describes research performed at the Jet Propulsion Laboratory, California Institute of Technology. Copyright 2012, All Rights Reserved. U.S. Government support acknowledged.
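    As a toy illustration of detecting statistical deviations from previous patterns in a stream, one can maintain running statistics and flag outliers. This simple z-score detector (class name, threshold, and warm-up period are invented for illustration) is far simpler than the semi-supervised methods described above, but it captures the online, single-pass constraint.

```python
import random

class StreamingNoveltyDetector:
    """Flag samples deviating from running statistics (Welford's algorithm)."""
    def __init__(self, threshold=4.0, warmup=30):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0
        self.threshold, self.warmup = threshold, warmup

    def update(self, x):
        """Return True if x is anomalous, then fold x into the statistics."""
        anomalous = False
        if self.n >= self.warmup:
            std = (self.m2 / (self.n - 1)) ** 0.5
            if std > 0 and abs(x - self.mean) / std > self.threshold:
                anomalous = True
        # Welford running-variance update: O(1) memory per stream
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        return anomalous

random.seed(0)
det = StreamingNoveltyDetector()
flags = [det.update(random.gauss(0, 1)) for _ in range(200)]
spike_flag = det.update(25.0)   # a strong outlier should be flagged
```

    Production pipelines would add known false-alarm templates and adaptive thresholds, as the Parkes example does.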

  16. A scalable moment-closure approximation for large-scale biochemical reaction networks

    PubMed Central

    Kazeroonian, Atefeh; Theis, Fabian J.; Hasenauer, Jan

    2017-01-01

    Motivation: Stochastic molecular processes are a leading cause of cell-to-cell variability. Their dynamics are often described by continuous-time discrete-state Markov chains and simulated using stochastic simulation algorithms. As these stochastic simulations are computationally demanding, ordinary differential equation models for the dynamics of the statistical moments have been developed. The number of state variables of these approximating models, however, grows at least quadratically with the number of biochemical species. This limits their application to small- and medium-sized processes. Results: In this article, we present a scalable moment-closure approximation (sMA) for the simulation of statistical moments of large-scale stochastic processes. The sMA exploits the structure of the biochemical reaction network to reduce the covariance matrix. We prove that sMA yields approximating models whose number of state variables depends predominantly on local properties, i.e. the average node degree of the reaction network, instead of the overall network size. The resulting complexity reduction is assessed by studying a range of medium- and large-scale biochemical reaction networks. To evaluate the approximation accuracy and the improvement in computational efficiency, we study models for JAK2/STAT5 signalling and NFκB signalling. Our method is applicable to generic biochemical reaction networks and we provide an implementation, including an SBML interface, which renders the sMA easily accessible. Availability and implementation: The sMA is implemented in the open-source MATLAB toolbox CERENA and is available from https://github.com/CERENADevelopers/CERENA. Contact: jan.hasenauer@helmholtz-muenchen.de or atefeh.kazeroonian@tum.de Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28881983

  17. General purpose graphic processing unit implementation of adaptive pulse compression algorithms

    NASA Astrophysics Data System (ADS)

    Cai, Jingxiao; Zhang, Yan

    2017-07-01

    This study introduces a practical approach to implement real-time signal processing algorithms for general surveillance radar based on NVIDIA graphical processing units (GPUs). The pulse compression algorithms are implemented using compute unified device architecture (CUDA) libraries such as CUDA basic linear algebra subroutines and CUDA fast Fourier transform library, which are adopted from open source libraries and optimized for the NVIDIA GPUs. For more advanced, adaptive processing algorithms such as adaptive pulse compression, customized kernel optimization is needed and investigated. A statistical optimization approach is developed for this purpose without needing much knowledge of the physical configurations of the kernels. It was found that the kernel optimization approach can significantly improve the performance. Benchmark performance is compared with the CPU performance in terms of processing accelerations. The proposed implementation framework can be used in various radar systems including ground-based phased array radar, airborne sense and avoid radar, and aerospace surveillance radar.
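    Pulse compression itself is a matched-filter correlation, which is why FFT libraries such as cuFFT map onto it naturally. The following is a CPU-side sketch with NumPy standing in for the CUDA FFT calls; the chirp parameters and signal lengths are illustrative.

```python
import numpy as np

def pulse_compress(rx, tx):
    """Frequency-domain matched filtering: correlate the received signal
    with the transmitted pulse via FFT (the core of cuFFT-based kernels)."""
    n = len(rx) + len(tx) - 1
    spec = np.fft.fft(rx, n) * np.conj(np.fft.fft(tx, n))
    return np.fft.ifft(spec)

# Linear FM (chirp) pulse and a delayed, attenuated, noisy echo
fs, T, B = 1e6, 100e-6, 200e3          # sample rate, pulse length, bandwidth
t = np.arange(0, T, 1 / fs)
tx = np.exp(1j * np.pi * (B / T) * t ** 2)
delay = 40                              # echo delay in samples
rx = np.zeros(300, dtype=complex)
rx[delay:delay + len(tx)] = 0.5 * tx
rx += 0.05 * np.random.default_rng(1).standard_normal(300)

peak = int(np.argmax(np.abs(pulse_compress(rx, tx))))
```

    On a GPU the two forward FFTs, the pointwise multiply, and the inverse FFT each become a kernel launch, and the statistical tuning discussed in the paper chooses their launch configurations.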

  18. Processing data base information having nonwhite noise

    DOEpatents

    Gross, Kenneth C.; Morreale, Patricia

    1995-01-01

    A method and system for processing a set of data from an industrial process and/or a sensor. The method and system can include processing data from either real or calculated data related to an industrial process variable. One of the data sets can be an artificial signal data set generated by an autoregressive moving average technique. After obtaining two data sets associated with one physical variable, a difference function data set is obtained by determining the arithmetic difference between the two data sets over time. A frequency domain transformation is made of the difference function data set to obtain Fourier modes describing a composite function data set. A residual function data set is obtained by subtracting the composite function data set from the difference function data set, and the residual function data set (free of nonwhite noise) is analyzed by a statistical probability ratio test to provide a validated data base.
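    The statistical probability ratio test mentioned above is typically Wald's sequential probability ratio test (SPRT). A minimal sketch for a Gaussian residual follows; the hypothesized means, noise level, and error rates are placeholders, not values from the patent.

```python
from math import log

def sprt_gaussian(samples, mu0=0.0, mu1=1.0, sigma=1.0, alpha=0.01, beta=0.01):
    """Wald's SPRT on the mean of a Gaussian residual:
    H0 (mean mu0, nominal sensor) vs H1 (mean mu1, degraded sensor).
    Returns 'H0', 'H1', or 'undecided' if the data run out first."""
    a = log(beta / (1 - alpha))          # lower (accept-H0) boundary
    b = log((1 - beta) / alpha)          # upper (accept-H1) boundary
    llr = 0.0
    for x in samples:
        # Log-likelihood-ratio increment for one Gaussian observation
        llr += (mu1 - mu0) * (x - 0.5 * (mu0 + mu1)) / sigma ** 2
        if llr <= a:
            return "H0"
        if llr >= b:
            return "H1"
    return "undecided"

# A residual sequence hovering near zero should accept H0 quickly
verdict = sprt_gaussian([0.1, -0.2, 0.05, -0.1, 0.0, -0.3, 0.1, -0.15, 0.02,
                         -0.2, 0.05, -0.1, 0.0, -0.25, 0.1])
```

    Removing the nonwhite (serially correlated) structure first, as the patent does, is what justifies treating the residual samples as independent in this test.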

  19. Characterization of the Ionospheric Scintillations at High Latitude using GPS Signal

    NASA Astrophysics Data System (ADS)

    Mezaoui, H.; Hamza, A. M.; Jayachandran, P. T.

    2013-12-01

    Transionospheric radio signals experience both amplitude and phase variations as a result of propagation through a turbulent ionosphere; this phenomenon is known as ionospheric scintillation. As a result of these fluctuations, Global Positioning System (GPS) receivers lose track of signals and consequently incur position and navigational errors. There is therefore a need to study these scintillations and their causes, not only to mitigate the navigational problem but also to develop analytical and numerical radio propagation models. To quantify and characterize these scintillations, we analyze the probability distribution functions (PDFs) of L1 GPS signals at a 50 Hz sampling rate using Canadian High Arctic Ionospheric Network (CHAIN) measurements. The raw GPS signal is detrended using a wavelet-based technique, and the detrended amplitude and phase of the signal are used to construct PDFs of the scintillating signal. The resulting PDFs are non-Gaussian. From the PDF functional fits, the moments are estimated. The results reveal a general non-trivial parabolic relationship between the normalized fourth and third moments for both the phase and amplitude of the signal. The calculated higher-order moments of the amplitude and phase distribution functions will help quantify some of the scintillation characteristics and provide a basis for forecasting, i.e. for developing a scintillation climatology model. This statistical analysis, including power spectra, along with a numerical simulation will constitute the backbone of a high-latitude scintillation model.
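    The normalized third and fourth central moments referred to above are the skewness and (non-excess) kurtosis. A short sketch of estimating them from a detrended series, with a Gaussian reference for comparison; function names are illustrative.

```python
import numpy as np

def normalized_moments(x):
    """Normalized third and fourth central moments (skewness and kurtosis)
    of a detrended amplitude or phase series."""
    x = np.asarray(x, dtype=float)
    c = x - x.mean()
    m2 = np.mean(c ** 2)
    skew = np.mean(c ** 3) / m2 ** 1.5
    kurt = np.mean(c ** 4) / m2 ** 2
    return skew, kurt

rng = np.random.default_rng(42)
skew_g, kurt_g = normalized_moments(rng.standard_normal(200_000))
# A Gaussian series gives skewness near 0 and kurtosis near 3; scintillating
# signals depart from these values, tracing the parabolic moment relationship
# reported in the abstract.
```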

  20. Neurophysiological Basis of Multi-Scale Entropy of Brain Complexity and Its Relationship With Functional Connectivity.

    PubMed

    Wang, Danny J J; Jann, Kay; Fan, Chang; Qiao, Yang; Zang, Yu-Feng; Lu, Hanbing; Yang, Yihong

    2018-01-01

    Recently, non-linear statistical measures such as multi-scale entropy (MSE) have been introduced as indices of the complexity of electrophysiology and fMRI time-series across multiple time scales. In this work, we investigated the neurophysiological underpinnings of the complexity (MSE) of electrophysiology and fMRI signals and their relations to functional connectivity (FC). MSE and FC analyses were performed on simulated data using a neural-mass-model-based brain network model in the Brain Dynamics Toolbox, on animal models with concurrent recording of fMRI and electrophysiology in conjunction with pharmacological manipulations, and on resting-state fMRI data from the Human Connectome Project. Our results show that the complexity of regional electrophysiology and fMRI signals is positively correlated with network FC. The associations between MSE and FC depend on the temporal scales or frequencies, with stronger associations at lower temporal frequencies. Our results from theoretical modeling, animal experiments and human fMRI indicate that (1) regional neural complexity and network FC may be two related aspects of the brain's information processing: the more complex the regional neural activity, the higher the FC this region has with other brain regions; (2) MSE at high and low frequencies may represent local and distributed information processing across brain regions, respectively. Based on the literature and our data, we propose that the complexity of regional neural signals may serve as an index of the brain's capacity for information processing: increased complexity may indicate greater transition or exploration between different states of brain networks, and thereby a greater propensity for information processing.
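    Multi-scale entropy in the Costa et al. sense coarse-grains a series by averaging non-overlapping windows and computes sample entropy at each scale, with the tolerance fixed relative to the original series' standard deviation. A compact sketch follows; it is simplified relative to production implementations, and the parameters m=2, r=0.15 are conventional choices, not necessarily those of the study.

```python
import numpy as np

def sample_entropy(x, m=2, tol=None):
    """Sample entropy: negative log of the conditional probability that
    sequences matching for m points (within tol) also match for m+1 points."""
    x = np.asarray(x, dtype=float)
    if tol is None:
        tol = 0.15 * x.std()
    def match_count(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        total = 0
        for i in range(len(templates) - 1):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            total += int(np.sum(dist <= tol))
        return total
    b, a = match_count(m), match_count(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else float("inf")

def multiscale_entropy(x, scales=(1, 2, 4, 8), r=0.15):
    """Costa-style MSE: coarse-grain by averaging non-overlapping windows of
    length s, then compute sample entropy with the tolerance held fixed
    relative to the ORIGINAL series' standard deviation."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    result = []
    for s in scales:
        n = len(x) // s
        coarse = x[:n * s].reshape(n, s).mean(axis=1)
        result.append(sample_entropy(coarse, tol=tol))
    return result

rng = np.random.default_rng(0)
mse_wn = multiscale_entropy(rng.standard_normal(1000))
# White noise loses entropy as the scale grows; long-range-correlated
# (1/f-like) signals hold their entropy across scales, the MSE signature
# of "complexity".
```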

  1. Ship detection using STFT sea background statistical modeling for large-scale oceansat remote sensing image

    NASA Astrophysics Data System (ADS)

    Wang, Lixia; Pei, Jihong; Xie, Weixin; Liu, Jinyuan

    2018-03-01

    Large-scale oceansat remote sensing images cover a large area of sea surface, whose fluctuation can be considered a non-stationary process. The Short-Time Fourier Transform (STFT) is a suitable analysis tool for time-varying non-stationary signals. In this paper, a novel ship detection method using 2-D STFT sea-background statistical modeling for large-scale oceansat remote sensing images is proposed. First, the large-scale image is divided into small sub-blocks, and the 2-D STFT is applied to each sub-block individually. Second, the 2-D STFT spectra of the sub-blocks are studied, and a clear difference in characteristics between sea background and non-sea background is found. Finally, a statistical model for all valid frequency points in the STFT spectrum of the sea background is given, and a ship detection method based on this 2-D STFT spectrum model is proposed. Experimental results show that the proposed algorithm detects ship targets with a high recall rate and a low miss rate.
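    A 2-D STFT over image sub-blocks amounts to a windowed 2-D FFT per block. A minimal sketch follows; the window choice, block size, and synthetic test image are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def block_spectra(img, block=16):
    """2-D short-time Fourier analysis: windowed 2-D FFT magnitude spectrum
    of each non-overlapping sub-block, keyed by the block's top-left corner."""
    h, w = img.shape
    win = np.outer(np.hanning(block), np.hanning(block))
    out = {}
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            spec = np.abs(np.fft.fft2(img[r:r + block, c:c + block] * win))
            out[(r, c)] = np.fft.fftshift(spec)
    return out

# Smooth sea-like texture; one block contains a bright, compact scatterer
rng = np.random.default_rng(5)
sea = rng.normal(100, 3, (32, 32))
sea[8, 8] = 255.0                     # ship-like point target in block (0, 0)
spectra = block_spectra(sea)
```

    A point-like target spreads energy across all spatial frequencies of its block, so its spectrum departs from the sea-background statistical model fitted to the other blocks.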

  2. Cardiovascular oscillations at the bedside: early diagnosis of neonatal sepsis using heart rate characteristics monitoring

    PubMed Central

    Moorman, J. Randall; Delos, John B.; Flower, Abigail A.; Cao, Hanqing; Kovatchev, Boris P.; Richman, Joshua S.; Lake, Douglas E.

    2014-01-01

    We have applied principles of statistical signal processing and non-linear dynamics to analyze heart rate time series from premature newborn infants in order to assist in the early diagnosis of sepsis, a common and potentially deadly bacterial infection of the bloodstream. We began with the observation of reduced variability and transient decelerations in heart rate interval time series for hours up to days prior to clinical signs of illness. We find that measurements of standard deviation, sample asymmetry and sample entropy are highly related to imminent clinical illness. We developed multivariable statistical predictive models, and an interface to display the real-time results to clinicians. Using this approach, we have observed numerous cases in which incipient neonatal sepsis was diagnosed and treated without any clinical illness at all. This review focuses on the mathematical and statistical time series approaches used to detect these abnormal heart rate characteristics and present predictive monitoring information to the clinician. PMID:22026974

  3. Circularly-symmetric complex normal ratio distribution for scalar transmissibility functions. Part III: Application to statistical modal analysis

    NASA Astrophysics Data System (ADS)

    Yan, Wang-Ji; Ren, Wei-Xin

    2018-01-01

    This study applies the theoretical findings on the circularly-symmetric complex normal ratio distribution of Yan and Ren (2016) [1,2] to transmissibility-based modal analysis from a statistical viewpoint. A probabilistic model of the transmissibility function in the vicinity of the resonant frequency is formulated in the modal domain, and some insightful comments are offered. It theoretically reveals that the statistics of the transmissibility function around the resonant frequency depend solely on the 'noise-to-signal' ratio and the mode shapes. As a sequel to the development of the probabilistic model of the transmissibility function in the modal domain, this study poses the process of modal identification in a Bayesian framework by borrowing a novel paradigm. Implementation issues unique to the proposed approach are resolved by a Lagrange multiplier approach. This study also explores the possibility of applying Bayesian analysis to distinguishing harmonic components from structural ones. The approaches are verified using both simulated and experimental test data. The uncertainty behavior due to the variation of different factors is also discussed in detail.

  4. Advanced Land Imager Assessment System

    NASA Technical Reports Server (NTRS)

    Chander, Gyanesh; Choate, Mike; Christopherson, Jon; Hollaren, Doug; Morfitt, Ron; Nelson, Jim; Nelson, Shar; Storey, James; Helder, Dennis; Ruggles, Tim; hide

    2008-01-01

    The Advanced Land Imager Assessment System (ALIAS) supports radiometric and geometric image processing for the Advanced Land Imager (ALI) instrument onboard NASA's Earth Observing-1 (EO-1) satellite. ALIAS consists of two processing subsystems for radiometric and geometric processing of the ALI's multispectral imagery. The radiometric processing subsystem characterizes and corrects, where possible, radiometric qualities including coherent, impulse, and random noise; signal-to-noise ratios (SNRs); detector operability; gain; bias; saturation levels; striping and banding; and the stability of detector performance. The geometric processing subsystem and analysis capabilities support sensor alignment calibrations, sensor chip assembly (SCA)-to-SCA alignments, and band-to-band alignment; and perform geodetic accuracy assessments, modulation transfer function (MTF) characterizations, and image-to-image characterizations. ALIAS also characterizes and corrects band-to-band registration, and performs systematic precision and terrain correction of ALI images. This system can geometrically correct, and automatically mosaic, the SCA image strips into a seamless, map-projected image. This system provides a large database, which enables bulk trending for all ALI image data and significant instrument telemetry. Bulk trending consists of two functions: Housekeeping Processing and Bulk Radiometric Processing. The Housekeeping function pulls telemetry and temperature information from the instrument housekeeping files and writes this information to a database for trending. The Bulk Radiometric Processing function writes statistical information from the dark data acquired before and after the Earth imagery, and from the lamp data, to the database for trending. This allows for multi-scene statistical analyses.

  5. Blind estimation of reverberation time

    NASA Astrophysics Data System (ADS)

    Ratnam, Rama; Jones, Douglas L.; Wheeler, Bruce C.; O'Brien, William D.; Lansing, Charissa R.; Feng, Albert S.

    2003-11-01

    The reverberation time (RT) is an important parameter for characterizing the quality of an auditory space. Sounds in reverberant environments are subject to coloration. This affects speech intelligibility and sound localization. Many state-of-the-art audio signal processing algorithms, for example in hearing-aids and telephony, are expected to have the ability to characterize the listening environment, and turn on an appropriate processing strategy accordingly. Thus, a method for characterization of room RT based on passively received microphone signals represents an important enabling technology. Current RT estimators, such as Schroeder's method, depend on a controlled sound source, and thus cannot produce an online, blind RT estimate. Here, a method for estimating RT without prior knowledge of sound sources or room geometry is presented. The diffusive tail of reverberation was modeled as an exponentially damped Gaussian white noise process. The time-constant of the decay, which provided a measure of the RT, was estimated using a maximum-likelihood procedure. The estimates were obtained continuously, and an order-statistics filter was used to extract the most likely RT from the accumulated estimates. The procedure was illustrated for connected speech. Results obtained for simulated and real room data are in good agreement with the real RT values.
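    A simplified stand-in for the estimator described above: model the energy of a free-decay segment in dB as a straight line over time and extrapolate to -60 dB. The paper's maximum-likelihood procedure is more sophisticated (it runs blind and continuously, with an order-statistics filter over the estimates), but this sketch exposes the underlying quantity; the window length and sampling rate are illustrative.

```python
import numpy as np

def estimate_rt60(y, fs, win=0.02):
    """Estimate RT60 from a free-decay segment: smooth the squared signal,
    fit a line to its dB level over time, and extrapolate to -60 dB."""
    n = int(win * fs)
    energy = np.convolve(y ** 2, np.ones(n) / n, mode="valid")
    db = 10 * np.log10(energy + 1e-30)
    t = np.arange(len(db)) / fs
    slope, _ = np.polyfit(t, db, 1)        # decay rate in dB per second
    return -60.0 / slope

# Exponentially damped Gaussian white noise, the model used in the paper
fs, tau = 8000, 0.1                         # sample rate (Hz), time constant (s)
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(3)
y = np.exp(-t / tau) * rng.standard_normal(len(t))

rt60 = estimate_rt60(y, fs)
# True RT60 for this model is 3*ln(10)*tau, about 0.69 s here.
```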

  6. Online estimation of room reverberation time

    NASA Astrophysics Data System (ADS)

    Ratnam, Rama; Jones, Douglas L.; Wheeler, Bruce C.; Feng, Albert S.

    2003-04-01

    The reverberation time (RT) is an important parameter for characterizing the quality of an auditory space. Sounds in reverberant environments are subject to coloration. This affects speech intelligibility and sound localization. State-of-the-art signal processing algorithms for hearing aids are expected to have the ability to evaluate the characteristics of the listening environment and turn on an appropriate processing strategy accordingly. Thus, a method for the characterization of room RT based on passively received microphone signals represents an important enabling technology. Current RT estimators, such as Schroeder's method or regression, depend on a controlled sound source, and thus cannot produce an online, blind RT estimate. Here, we describe a method for estimating RT without prior knowledge of sound sources or room geometry. The diffusive tail of reverberation was modeled as an exponentially damped Gaussian white noise process. The time constant of the decay, which provided a measure of the RT, was estimated using a maximum-likelihood procedure. The estimates were obtained continuously, and an order-statistics filter was used to extract the most likely RT from the accumulated estimates. The procedure was illustrated for connected speech. Results obtained for simulated and real room data are in good agreement with the real RT values.

  7. FPGA-Based Front-End Electronics for Positron Emission Tomography

    PubMed Central

    Haselman, Michael; DeWitt, Don; McDougald, Wendy; Lewellen, Thomas K.; Miyaoka, Robert; Hauck, Scott

    2010-01-01

    Modern Field Programmable Gate Arrays (FPGAs) are capable of performing complex discrete signal processing algorithms with clock rates above 100 MHz. This, combined with FPGAs' low cost, ease of use, and dedicated hardware resources, makes them an ideal technology for the data acquisition system of positron emission tomography (PET) scanners. Our laboratory is producing a high-resolution, small-animal PET scanner that utilizes FPGAs as the core of the front-end electronics. For this next-generation scanner, functions that are typically performed in dedicated circuits, or offline, are being migrated to the FPGA. This will not only simplify the electronics, but the features of modern FPGAs can be utilized to add significant signal processing power to produce higher resolution images. In this paper two such processes, sub-clock-rate pulse timing and event localization, are discussed in detail. We show that timing performed in the FPGA can achieve a resolution that is suitable for small-animal scanners, and will outperform the analog version given a low enough sampling period for the ADC. We also show that the position of events in the scanner can be determined in real time using a statistics-based positioning algorithm. PMID:21961085

  8. An image-processing method to detect sub-optical features based on understanding noise in intensity measurements.

    PubMed

    Bhatia, Tripta

    2018-07-01

    Accurate quantitative analysis of image data requires that we distinguish, to the extent possible, between fluorescence intensity (the true signal) and the noise inherent to its measurement. We image multilamellar membrane tubes and beads that grow from defects in the fluid lamellar phase of the lipid 1,2-dioleoyl-sn-glycero-3-phosphocholine dissolved in water and water-glycerol mixtures, using a fluorescence confocal polarizing microscope. We quantify image noise and determine the noise statistics. Understanding the nature of image noise also helps in optimizing image processing to detect sub-optical features, which would otherwise remain hidden. We use an image-processing technique, "optimum smoothening", to improve the signal-to-noise ratio (SNR) of features of interest without smearing their structural details. A high SNR provides the positional accuracy needed to resolve features of interest narrower than the optical resolution. Using optimum smoothening, the smallest and largest core diameters detected in this work are of width [Formula: see text] and [Formula: see text] nm, respectively. The image-processing and analysis techniques and the noise modeling discussed in this paper can be used for detailed morphological analysis of features down to sub-optical length scales, obtained by any kind of fluorescence intensity imaging in raster mode.

  9. Estimation of the Scatterer Distribution of the Cirrhotic Liver using Ultrasonic Image

    NASA Astrophysics Data System (ADS)

    Yamaguchi, Tadashi; Hachiya, Hiroyuki

    1998-05-01

    In the B-mode image of the liver obtained by an ultrasonic imaging system, the speckle pattern changes with the progression of diseases such as liver cirrhosis. In this paper we present the statistical characteristics of the echo envelope of the liver, and a technique to extract information about the scatterer distribution from normal and cirrhotic liver images using constant false alarm rate (CFAR) processing. We analyze the relationship between the extracted scatterer distribution and the stage of liver cirrhosis. The ratio of the area in which the amplitude of the processed signal exceeds the threshold to the entire processed image area is related quantitatively to the stage of liver cirrhosis. It is found that the proposed technique is valid for the quantitative diagnosis of liver cirrhosis.
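    CFAR processing compares each cell against a noise level estimated from its neighbors, so the detection threshold adapts to the local background. Below is a cell-averaging CFAR sketch for exponentially distributed (square-law detected) samples; the window sizes, false-alarm rate, and synthetic data are illustrative rather than taken from the paper.

```python
import numpy as np

def ca_cfar(x, guard=2, train=8, pfa=1e-3):
    """Cell-averaging CFAR: for each cell under test, estimate the local
    noise level from training cells on both sides (skipping guard cells) and
    compare against a threshold scaled for the desired false-alarm rate."""
    n_train = 2 * train
    # Threshold multiplier for exponential (square-law) noise
    alpha = n_train * (pfa ** (-1.0 / n_train) - 1)
    hits = np.zeros(len(x), dtype=bool)
    for i in range(train + guard, len(x) - train - guard):
        lead = x[i - train - guard:i - guard]
        lag = x[i + guard + 1:i + guard + train + 1]
        noise = (lead.sum() + lag.sum()) / n_train
        hits[i] = x[i] > alpha * noise
    return hits

# Exponentially distributed background power with two embedded bright cells
rng = np.random.default_rng(7)
power = rng.exponential(1.0, 300)
power[100] += 40.0
power[200] += 40.0
detections = np.flatnonzero(ca_cfar(power))
```

    In the liver application the same idea runs in two dimensions over the envelope image, and the fraction of above-threshold area becomes the cirrhosis staging statistic.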

  10. Digital analyzer for point processes based on first-in-first-out memories

    NASA Astrophysics Data System (ADS)

    Basano, Lorenzo; Ottonello, Pasquale; Schiavi, Enore

    1992-06-01

    We present an entirely new version of a multipurpose instrument designed for the statistical analysis of point processes, especially those characterized by high bunching. A long sequence of pulses can be recorded in the RAM bank of a personal computer via a suitably designed front end which employs a pair of first-in-first-out (FIFO) memories; these allow one to build an analyzer that, besides being simpler from the electronic point of view, is capable of sustaining much higher intensity fluctuations of the point process. The overflow risk of the device is evaluated by treating the FIFO pair as a queueing system. The apparatus was tested using both a deterministic signal and a sequence of photoelectrons obtained from laser light scattered by random surfaces.

  11. EEG artifact removal-state-of-the-art and guidelines.

    PubMed

    Urigüen, Jose Antonio; Garcia-Zapirain, Begoña

    2015-06-01

    This paper presents an extensive review of the artifact removal algorithms used to remove the main sources of interference encountered in the electroencephalogram (EEG), specifically ocular, muscular and cardiac artifacts. We first introduce background knowledge on the characteristics of EEG activity, of the artifacts and of the EEG measurement model. Then, we present algorithms commonly employed in the literature and describe their key features. Lastly, principally on the basis of the results provided by various researchers, but also supported by our own experience, we compare the state-of-the-art methods in terms of reported performance, and provide guidelines on how to choose a suitable artifact removal algorithm for a given scenario. With this review we have concluded that, without prior knowledge of the recorded EEG signal or the contaminants, the safest approach is to correct the measured EEG using independent component analysis; to be precise, an algorithm based on second-order statistics such as second-order blind identification (SOBI). Other effective alternatives include extended information maximization (InfoMax) and an adaptive mixture of independent component analyzers (AMICA), based on higher-order statistics. All of these algorithms have proved particularly effective in simulations and, more importantly, with data collected in controlled recording conditions. Moreover, whenever prior knowledge is available, a constrained form of the chosen method should be used in order to incorporate such additional information. Finally, since the best-performing algorithm depends strongly on the type of EEG signal, the artifacts and the signal-to-contaminant ratio, we believe that the optimal method for removing artifacts from the EEG consists in combining more than one algorithm to correct the signal over multiple processing stages, even though this option remains largely unexplored by researchers in the area.
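    As a minimal illustration of second-order blind identification, the AMUSE algorithm (a single-lag relative of SOBI) whitens the mixtures and then diagonalizes one symmetrized time-lagged covariance; SOBI proper jointly diagonalizes many lags. The mixing matrix and the two synthetic sources below are invented for the example.

```python
import numpy as np

def amuse(x, lag=1):
    """AMUSE blind source separation: whiten the zero-lag covariance, then
    eigendecompose a single symmetrized lagged covariance of the whitened
    data. Sources are recovered up to order, sign, and scale."""
    x = x - x.mean(axis=1, keepdims=True)
    # Whitening transform from the zero-lag covariance
    d, e = np.linalg.eigh(np.cov(x))
    w = e @ np.diag(d ** -0.5) @ e.T
    z = w @ x
    # Symmetrized time-lagged covariance of the whitened data
    c = z[:, :-lag] @ z[:, lag:].T / (z.shape[1] - lag)
    c = (c + c.T) / 2
    _, v = np.linalg.eigh(c)
    return v.T @ z

# Two sources with different temporal autocorrelations, linearly mixed
t = np.linspace(0, 1, 4000)
s = np.vstack([np.sin(2 * np.pi * 7 * t),
               np.sign(np.sin(2 * np.pi * 23 * t))])
mix = np.array([[0.8, 0.3], [0.4, 0.9]]) @ s
rec = amuse(mix)
```

    For real EEG, artifact components identified this way would be zeroed before remixing the data back to the sensor space.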

  12. Relating resting-state fMRI and EEG whole-brain connectomes across frequency bands.

    PubMed

    Deligianni, Fani; Centeno, Maria; Carmichael, David W; Clayden, Jonathan D

    2014-01-01

    Whole-brain functional connectomes hold promise for understanding human brain activity across a range of cognitive, developmental and pathological states. So-called resting-state (rs) functional MRI studies have contributed to the brain being considered, at a macroscopic scale, as a set of interacting regions. Interactions are defined as correlation-based signal measurements driven by blood oxygenation level dependent (BOLD) contrast. Understanding the neurophysiological basis of these measurements is important in conveying useful information about brain function. Local coupling between BOLD fMRI and neurophysiological measurements is relatively well defined, with evidence that gamma-range frequency EEG signals are the closest correlate of BOLD fMRI changes during cognitive processing. However, it is less clear how whole-brain network interactions relate during rest, where lower-frequency signals have been suggested to play a key role. Simultaneous EEG-fMRI offers the opportunity to observe brain network dynamics with high spatio-temporal resolution. We utilize these measurements to compare the connectomes derived from rs-fMRI and EEG band-limited power (BLP). Merging this multi-modal information requires the development of an appropriate statistical framework. We relate the covariance matrices of the Hilbert envelope of the source-localized EEG signal across bands to the covariance matrices derived from rs-fMRI by means of statistical prediction based on sparse Canonical Correlation Analysis (sCCA). Subsequently, we identify the most prominent connections that contribute to this relationship. We compare whole-brain functional connectomes based on their geodesic distance to reliably estimate the performance of the prediction. The performance of predicting fMRI from EEG connectomes is considerably better than that of predicting EEG from fMRI across all bands, whereas the connectomes derived in low-frequency EEG bands best resemble rs-fMRI connectivity.
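    One common choice of geodesic distance between covariance-based connectomes is the affine-invariant Riemannian metric on symmetric positive definite (SPD) matrices; that this exact metric is the one used in the paper is an assumption of this sketch.

```python
import numpy as np

def spd_geodesic(a, b):
    """Affine-invariant geodesic distance between SPD matrices A and B:
    sqrt(sum(log(eig(A^{-1/2} B A^{-1/2}))^2)). Invariant to any common
    invertible linear transform of the underlying signals."""
    d_a, e_a = np.linalg.eigh(a)
    a_inv_sqrt = e_a @ np.diag(d_a ** -0.5) @ e_a.T
    m = a_inv_sqrt @ b @ a_inv_sqrt
    eigs = np.linalg.eigvalsh((m + m.T) / 2)   # symmetrize for stability
    return np.sqrt(np.sum(np.log(eigs) ** 2))

i3 = np.eye(3)
d_same = spd_geodesic(i3, i3)          # identical connectomes: distance 0
d_scaled = spd_geodesic(i3, 4 * i3)    # uniform rescaling: sqrt(3) * log(4)
```

    Comparing predicted and observed covariance matrices with this metric, rather than entrywise differences, respects the curved geometry of the space of connectomes.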

  13. Relating resting-state fMRI and EEG whole-brain connectomes across frequency bands

    PubMed Central

    Deligianni, Fani; Centeno, Maria; Carmichael, David W.; Clayden, Jonathan D.

    2014-01-01

    Whole-brain functional connectomes hold promise for understanding human brain activity across a range of cognitive, developmental and pathological states. So-called resting-state (rs) functional MRI studies have contributed to the brain being considered at a macroscopic scale as a set of interacting regions. Interactions are defined as correlation-based signal measurements driven by blood oxygenation level dependent (BOLD) contrast. Understanding the neurophysiological basis of these measurements is important in conveying useful information about brain function. Local coupling between BOLD fMRI and neurophysiological measurements is relatively well defined, with evidence that gamma-range frequency EEG signals are the closest correlate of BOLD fMRI changes during cognitive processing. However, it is less clear how whole-brain network interactions relate during rest, where lower frequency signals have been suggested to play a key role. Simultaneous EEG-fMRI offers the opportunity to observe brain network dynamics with high spatio-temporal resolution. We utilize these measurements to compare the connectomes derived from rs-fMRI and EEG band-limited power (BLP). Merging this multi-modal information requires the development of an appropriate statistical framework. We relate the covariance matrices of the Hilbert envelope of the source-localized EEG signal across bands to the covariance matrices derived from rs-fMRI by means of statistical prediction based on sparse Canonical Correlation Analysis (sCCA). Subsequently, we identify the most prominent connections that contribute to this relationship. We compare whole-brain functional connectomes based on their geodesic distance to reliably estimate the performance of the prediction. The performance of predicting fMRI from EEG connectomes is considerably better than that of predicting EEG from fMRI across all bands, and the connectomes derived from low-frequency EEG bands best resemble rs-fMRI connectivity. PMID:25221467
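    The geodesic distance used above to compare connectomes is, in its usual formulation, the affine-invariant Riemannian distance between symmetric positive-definite (SPD) covariance matrices. A minimal sketch under that assumption (the function name and the use of SciPy's generalized eigenvalue solver are illustrative, not the authors' implementation):

```python
import numpy as np
from scipy.linalg import eigvalsh

def spd_geodesic_distance(A, B):
    """Affine-invariant geodesic distance between two SPD matrices.

    d(A, B) = sqrt(sum_i log(lambda_i)^2), where lambda_i are the
    generalized eigenvalues of the pencil (B, A), i.e. of A^{-1} B.
    """
    lam = eigvalsh(B, A)  # all positive when A and B are SPD
    return np.sqrt(np.sum(np.log(lam) ** 2))
```

    For SPD matrices this equals the Frobenius norm of log(A^{-1/2} B A^{-1/2}); computing it through generalized eigenvalues avoids forming matrix square roots explicitly.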

  14. Statistical analysis of lightning electric field measured under Malaysian condition

    NASA Astrophysics Data System (ADS)

    Salimi, Behnam; Mehranzamir, Kamyar; Abdul-Malek, Zulkurnain

    2014-02-01

    Lightning is an electrical discharge during thunderstorms that can occur either within clouds (inter-cloud) or between clouds and ground (cloud-ground). Lightning characteristics and their statistics are the foundation for the design of lightning protection systems as well as for the calculation of lightning radiated fields. Nowadays, there are various techniques to detect lightning signals and to determine the various parameters produced by a lightning flash, each with its own claimed performance. In this paper, the characteristics of captured broadband electric fields generated by cloud-to-ground lightning discharges in the south of Malaysia are analyzed. A total of 130 cloud-to-ground lightning flashes from 3 separate thunderstorm events (each lasting about 4-5 hours) were examined. Statistical analyses of the following signal parameters are presented: preliminary breakdown pulse train duration, time interval between preliminary breakdown and return stroke, stroke multiplicity, and the percentage of single-stroke flashes. The BIL model is also introduced to characterize lightning signature patterns. The statistical analyses show that about 79% of lightning signals fit well with the BIL model. The maximum and minimum preliminary breakdown durations of the observed lightning signals are 84 ms and 560 µs, respectively. The statistical results show that 7.6% of the flashes were single-stroke flashes, and the maximum number of strokes recorded was 14 per flash. A preliminary breakdown signature can be identified in more than 95% of the flashes.
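    The multiplicity statistics reported above amount to simple summaries over per-flash stroke counts; a sketch on hypothetical counts (not the paper's data, which reported 7.6% single-stroke flashes and a maximum multiplicity of 14):

```python
import numpy as np

# Hypothetical per-flash stroke counts, for illustration only.
strokes_per_flash = np.array([1, 3, 2, 1, 14, 4, 2, 1, 5, 2])

# Percentage of flashes consisting of a single stroke.
single_stroke_pct = 100.0 * np.mean(strokes_per_flash == 1)

# Highest multiplicity observed in this (synthetic) sample.
max_multiplicity = int(strokes_per_flash.max())
```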

  15. PSGMiner: A modular software for polysomnographic analysis.

    PubMed

    Umut, İlhan

    2016-06-01

    Sleep disorders affect a large percentage of the population. The diagnosis of these disorders is usually made by polysomnography. This paper details the development of new software to carry out feature extraction in order to perform robust analysis and classification of sleep events using polysomnographic data. The software, called PSGMiner, is a tool that visualizes, processes, and classifies bioelectrical data. The purpose of this program is to provide researchers with a platform on which to test new hypotheses by creating tests to check for correlations that are not available in commercially available software. The software is freely available under the GPL3 license. PSGMiner is composed of a number of diverse modules, such as feature extraction, annotation, and machine learning modules, all of which are accessible from the main module. Using the software, it is possible to extract features of polysomnography using digital signal processing and statistical methods and to perform different analyses. The features can be classified through the use of five classification algorithms. PSGMiner offers an architecture designed for integrating new methods. Automatic scoring, which is available in almost all commercial PSG software, is not inherently available in this program, though it can be implemented by two different methodologies (machine learning and algorithms). While similar software focuses on a certain signal or event and is composed of a small number of modules with no expansion possibility, the software introduced here can handle all polysomnographic signals and events. The software simplifies the processing of polysomnographic signals for researchers and physicians who are not experts in computer programming. It can find correlations between different events, which could help predict an oncoming event such as sleep apnea. The software could also be used for educational purposes.
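    Feature extraction of the kind described above can be sketched for a single signal epoch using standard DSP and statistical methods; the feature set and function name below are illustrative assumptions, not PSGMiner's API:

```python
import numpy as np
from scipy.signal import welch

def epoch_features(x, fs):
    """Basic time- and frequency-domain features for one signal epoch."""
    f, pxx = welch(x, fs=fs, nperseg=min(len(x), 256))
    df = f[1] - f[0]
    # Integrated power in the EEG delta band (0.5-4 Hz), as one example feature.
    delta = pxx[(f >= 0.5) & (f <= 4.0)].sum() * df
    return {"mean": float(np.mean(x)),
            "std": float(np.std(x)),
            "delta_power": float(delta)}
```

    A feature dictionary like this can be computed per 30 s epoch and fed to any of the classifiers mentioned in the abstract.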

  16. Signal processing and neural network toolbox and its application to failure diagnosis and prognosis

    NASA Astrophysics Data System (ADS)

    Tu, Fang; Wen, Fang; Willett, Peter K.; Pattipati, Krishna R.; Jordan, Eric H.

    2001-07-01

    Many systems are composed of components equipped with self-testing capability; however, if the system is complex, involves feedback, and the self-testing itself may occasionally be faulty, tracing faults to a single cause or to multiple causes is difficult. Moreover, many sensors are incapable of reliable decision-making on their own. In such cases, a signal processing front-end that can match inference needs will be very helpful. This work is concerned with providing an object-oriented simulation environment for signal processing and neural network-based fault diagnosis and prognosis. In the toolbox, we implemented a wide range of spectral and statistical manipulation methods, such as filters, harmonic analyzers, transient detectors, and multi-resolution decomposition, to extract features of failure events from data collected by sensors. We then evaluated multiple learning paradigms for general classification, diagnosis, and prognosis. The network models evaluated include the Restricted Coulomb Energy (RCE) neural network, Learning Vector Quantization (LVQ), decision trees (C4.5), Fuzzy Adaptive Resonance Theory (FuzzyArtmap), the Linear Discriminant Rule (LDR), the Quadratic Discriminant Rule (QDR), Radial Basis Functions (RBF), the Multilayer Perceptron (MLP), and the Single Layer Perceptron (SLP). Validation techniques, such as N-fold cross-validation and the bootstrap, are employed to evaluate the robustness of the network models. The trained networks are evaluated using test data on the basis of percent error rates obtained via cross-validation, time efficiency, and generalization ability to unseen faults. Finally, the use of neural networks for predicting the residual life of turbine blades with thermal barrier coatings is described and the results are shown. The neural network toolbox has also been applied to fault diagnosis in mixed-signal circuits.
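    N-fold cross-validation of percent error, as used to assess classifiers like those above, can be sketched with a deliberately simple nearest-class-mean classifier standing in for the neural network models (the function name and classifier choice are illustrative, not the toolbox's code):

```python
import numpy as np

def kfold_percent_error(X, y, k=5, seed=0):
    """Estimate percent error of a nearest-class-mean classifier via k-fold CV."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(X)), k)
    errors = 0
    for i in range(k):
        test_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        classes = np.unique(y[train_idx])
        # Class centroids estimated from the training folds only.
        means = np.array([X[train_idx][y[train_idx] == c].mean(axis=0)
                          for c in classes])
        for t in test_idx:
            pred = classes[np.argmin(np.linalg.norm(means - X[t], axis=1))]
            errors += int(pred != y[t])
    return 100.0 * errors / len(X)
```

    Swapping the centroid rule for any trained network yields the cross-validated percent error rates mentioned in the abstract.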

  17. Physiological correlates of comodulation masking release in the mammalian ventral cochlear nucleus.

    PubMed

    Pressnitzer, D; Meddis, R; Delahaye, R; Winter, I M

    2001-08-15

    Comodulation masking release (CMR) enhances the detection of signals embedded in wideband, amplitude-modulated maskers. At least part of the CMR is attributable to across-frequency processing; however, the relative contribution of different stages in the auditory system to across-frequency processing is unknown. We have measured the responses of single units from one of the earliest stages in the ascending auditory pathway, the ventral cochlear nucleus, where across-frequency processing may take place. A sinusoidally amplitude-modulated tone at the best frequency of each unit was used as a masker. A pure-tone signal was added in the dips of the masker modulation (reference condition). Flanking components (FCs) were then added at frequencies remote from the unit's best frequency. The FCs were pure tones amplitude modulated either in phase (comodulated) or out of phase (codeviant) with the on-frequency component. Psychophysically, this CMR paradigm reduces within-channel cues while producing an advantage of approximately 10 dB for the comodulated condition in comparison with the reference condition. Some of the recorded units showed responses consistent with perceptual CMR. The addition of the comodulated FCs produced a strong reduction in the response to the masker modulation, making the signal more salient in the poststimulus time histograms. A decision statistic based on d' showed that threshold was reached at lower signal levels for the comodulated condition than for the reference or codeviant conditions. The neurons that exhibited such behavior were mainly transient chopper or primary-like units. The results obtained from a subpopulation of transient chopper units are consistent with a possible circuit in the cochlear nucleus consisting of a wideband inhibitor contacting a narrowband cell. A computational model was used to confirm the feasibility of such a circuit.
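    A d'-based decision statistic compares response distributions with and without the added signal; a minimal sketch assuming spike counts and an equal-weight pooled variance (the exact statistic used in the paper may differ in detail):

```python
import numpy as np

def d_prime(signal_counts, noise_counts):
    """d' between spike-count distributions with and without the signal.

    d' = (mean_signal - mean_noise) / sqrt(0.5 * (var_signal + var_noise))
    """
    mu_s, mu_n = np.mean(signal_counts), np.mean(noise_counts)
    pooled_sd = np.sqrt(0.5 * (np.var(signal_counts) + np.var(noise_counts)))
    return (mu_s - mu_n) / pooled_sd
```

    Threshold is then defined as the lowest signal level at which d' exceeds a criterion (commonly d' = 1).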

  18. Multiscale image processing and antiscatter grids in digital radiography.

    PubMed

    Lo, Winnie Y; Hornof, William J; Zwingenberger, Allison L; Robertson, Ian D

    2009-01-01

    Scatter radiation is a source of noise and results in a decreased signal-to-noise ratio and thus decreased image quality in digital radiography. We determined subjectively whether a digitally processed image made without a grid would be of similar quality to an image made with a grid but without image processing. Additionally, the effects of exposure dose and of using a grid with digital radiography on overall image quality were studied. Thoracic and abdominal radiographs of five dogs of various sizes were made. Four acquisition techniques were included: (1) with a grid, standard exposure dose, digital image processing; (2) without a grid, standard exposure dose, digital image processing; (3) without a grid, half the exposure dose, digital image processing; and (4) with a grid, standard exposure dose, no digital image processing (to mimic a film-screen radiograph). Full-size radiographs as well as magnified images of specific anatomic regions were generated. Nine reviewers rated the overall image quality subjectively using a five-point scale. All digitally processed radiographs had higher overall scores than nondigitally processed radiographs regardless of patient size, exposure dose, or use of a grid. The images made at half the exposure dose had slightly lower quality than those made at full dose, but this was only statistically significant in magnified images. Using a grid with digital image processing led to a slight but statistically significant increase in overall quality when compared with digitally processed images made without a grid, but whether this increase in quality is clinically significant is unknown.
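    Multiscale processing of the kind applied to these radiographs can be sketched as a difference-of-Gaussians decomposition recombined with per-scale gains (the scales and gains below are arbitrary illustrative choices, not the study's algorithm):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_enhance(img, sigmas=(1.0, 2.0, 4.0), gains=(1.5, 1.3, 1.1)):
    """Split an image into band-pass levels and recombine with per-scale gains."""
    img = np.asarray(img, dtype=float)
    blurred = [gaussian_filter(img, s) for s in sigmas]
    # Band-pass levels: finest detail first, telescoping down to the coarsest blur.
    levels = [img - blurred[0]]
    levels += [blurred[i - 1] - blurred[i] for i in range(1, len(sigmas))]
    out = blurred[-1]
    for level, gain in zip(levels, gains):
        out = out + gain * level
    return out
```

    With all gains equal to 1 the telescoping sum reconstructs the input exactly; gains above 1 boost detail at the corresponding scale.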

  19. MEG and EEG data analysis with MNE-Python.

    PubMed

    Gramfort, Alexandre; Luessi, Martin; Larson, Eric; Engemann, Denis A; Strohmeier, Daniel; Brodbeck, Christian; Goj, Roman; Jas, Mainak; Brooks, Teon; Parkkonen, Lauri; Hämäläinen, Matti

    2013-12-26

    Magnetoencephalography and electroencephalography (M/EEG) measure the weak electromagnetic signals generated by neuronal activity in the brain. Using these signals to characterize and locate neural activation in the brain is a challenge that requires expertise in physics, signal processing, statistics, and numerical methods. As part of the MNE software suite, MNE-Python is an open-source software package that addresses this challenge by providing state-of-the-art algorithms implemented in Python that cover multiple methods of data preprocessing, source localization, statistical analysis, and estimation of functional connectivity between distributed brain regions. All algorithms and utility functions are implemented in a consistent manner with well-documented interfaces, enabling users to create M/EEG data analysis pipelines by writing Python scripts. Moreover, MNE-Python is tightly integrated with the core Python libraries for scientific computation (NumPy, SciPy) and visualization (matplotlib and Mayavi), as well as the greater neuroimaging ecosystem in Python via the Nibabel package. The code is provided under the new BSD license allowing code reuse, even in commercial products. Although MNE-Python has only been under heavy development for a couple of years, it has rapidly evolved with expanded analysis capabilities and pedagogical tutorials because multiple labs have collaborated during code development to help share best practices. MNE-Python also gives easy access to preprocessed datasets, helping users to get started quickly and facilitating reproducibility of methods by other researchers. Full documentation, including dozens of examples, is available at http://martinos.org/mne.
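    A typical first preprocessing step in such M/EEG pipelines is zero-phase band-pass filtering of the raw recordings. The operation can be sketched with SciPy alone; this is a generic Butterworth illustration, not MNE-Python's actual filter design:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(x, fs, lo=1.0, hi=40.0, order=4):
    """Zero-phase Butterworth band-pass, a common M/EEG preprocessing step."""
    nyq = fs / 2.0
    b, a = butter(order, [lo / nyq, hi / nyq], btype="band")
    # filtfilt applies the filter forward and backward, cancelling phase delay.
    return filtfilt(b, a, x)
```

    The 1-40 Hz band keeps the physiological rhythms of interest while suppressing drift and line-frequency interference.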

  20. Auto- and cross-power spectral analysis of dual trap optical tweezer experiments using Bayesian inference.

    PubMed

    von Hansen, Yann; Mehlich, Alexander; Pelz, Benjamin; Rief, Matthias; Netz, Roland R

    2012-09-01

    The thermal fluctuations of micron-sized beads in dual trap optical tweezer experiments contain complete dynamic information about the viscoelastic properties of the embedding medium and, if present, macromolecular constructs connecting the two beads. To quantitatively interpret the spectral properties of the measured signals, a detailed understanding of the instrumental characteristics is required. To this end, we present a theoretical description of the signal processing in a typical dual trap optical tweezer experiment accounting for polarization crosstalk and instrumental noise and discuss the effect of finite statistics. To infer the unknown parameters from experimental data, a maximum likelihood method based on the statistical properties of the stochastic signals is derived. In a first step, the method can be used for calibration purposes: We propose a scheme involving three consecutive measurements (both traps empty; the first occupied and the second empty; and vice versa), by which all instrumental and physical parameters of the setup are determined. We test our approach for a simple model system, namely a pair of unconnected, but hydrodynamically interacting spheres. The comparison to theoretical predictions based on instantaneous as well as retarded hydrodynamics emphasizes the importance of hydrodynamic retardation effects due to vorticity diffusion in the fluid. For more complex experimental scenarios, where macromolecular constructs are tethered between the two beads, the same maximum likelihood method in conjunction with dynamic deconvolution theory will in a second step allow one to determine the viscoelastic properties of the tethered element connecting the two beads.
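    Maximum likelihood inference from the spectral statistics of such signals can be sketched, for the much simpler single-bead case, with the Whittle likelihood for a Lorentzian power spectrum S(f) = D / (pi^2 (fc^2 + f^2)); the function names and the one-bead reduction are illustrative assumptions, not the paper's full dual-trap formalism:

```python
import numpy as np
from scipy.optimize import minimize

def lorentzian(f, fc, D):
    """One-sided power spectrum of an overdamped bead in a harmonic trap."""
    return D / (np.pi ** 2 * (fc ** 2 + f ** 2))

def fit_whittle(f, P, fc0=100.0, D0=1.0):
    """Maximum likelihood (Whittle) fit of (fc, D) to periodogram values P.

    Periodogram ordinates are ~ S(f) * Exp(1), so the negative log-likelihood
    is sum(log S + P / S) up to a constant.
    """
    def nll(theta):
        fc, D = np.exp(theta)              # log-parameterization enforces positivity
        S = lorentzian(f, fc, D)
        return np.sum(np.log(S) + P / S)
    res = minimize(nll, np.log([fc0, D0]), method="Nelder-Mead")
    return np.exp(res.x)
```

    The same likelihood structure, extended to cross-spectra and instrumental terms, underlies the three-measurement calibration scheme described in the abstract.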

Top