Sample records for signal validation methods

  1. Applied Chaos Level Test for Validation of Signal Conditions Underlying Optimal Performance of Voice Classification Methods.

    PubMed

    Liu, Boquan; Polce, Evan; Sprott, Julien C; Jiang, Jack J

    2018-05-17

    The purpose of this study is to introduce a chaos level test to evaluate linear and nonlinear voice type classification method performances under varying signal chaos conditions without subjective impression. Voice signals were constructed with differing degrees of noise to model signal chaos. Within each noise power, 100 Monte Carlo experiments were applied to analyze the output of jitter, shimmer, correlation dimension, and spectrum convergence ratio. The computational output of the 4 classifiers was then plotted against signal chaos level to investigate the performance of these acoustic analysis methods under varying degrees of signal chaos. A diffusive behavior detection-based chaos level test was used to investigate the performances of different voice classification methods. Voice signals were constructed by varying the signal-to-noise ratio to establish differing signal chaos conditions. Chaos level increased sigmoidally with increasing noise power. Jitter and shimmer performed optimally when the chaos level was less than or equal to 0.01, whereas correlation dimension was capable of analyzing signals with chaos levels of less than or equal to 0.0179. Spectrum convergence ratio demonstrated proficiency in analyzing voice signals with all chaos levels investigated in this study. The results of this study corroborate the performance relationships observed in previous studies and, therefore, demonstrate the validity of the validation test method. The presented chaos level validation test could be broadly utilized to evaluate acoustic analysis methods and establish the most appropriate methodology for objective voice analysis in clinical practice.
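    As a hedged illustration of the Monte Carlo setup described above (not the authors' code: the synthetic voice signal, the peak-picking step, and the simplified relative jitter/shimmer definitions are all assumptions), the sketch below adds noise at several powers and computes jitter and shimmer over repeated runs:

    ```python
    import numpy as np
    from scipy.signal import find_peaks

    def jitter_shimmer(x, fs, f0):
        """Relative jitter and shimmer from detected cycle peaks (simplified definitions)."""
        peaks, props = find_peaks(x, distance=int(0.8 * fs / f0), height=0)
        periods = np.diff(peaks) / fs                  # cycle lengths in seconds
        amps = props["peak_heights"][1:]               # one amplitude per cycle
        jitter = np.mean(np.abs(np.diff(periods))) / np.mean(periods)
        shimmer = np.mean(np.abs(np.diff(amps))) / np.mean(amps)
        return jitter, shimmer

    fs, f0, dur = 44100, 150.0, 1.0                    # sampling rate, pitch, duration (assumed)
    t = np.arange(int(fs * dur)) / fs
    clean = sum(np.sin(2 * np.pi * k * f0 * t) / k for k in (1, 2, 3))  # crude voiced signal

    rng = np.random.default_rng(0)
    for snr_db in (40, 20, 10, 0):                     # decreasing SNR models increasing signal chaos
        runs = []
        for _ in range(100):                           # 100 Monte Carlo runs per noise power
            noise_power = np.var(clean) / 10 ** (snr_db / 10)
            noisy = clean + rng.normal(0.0, np.sqrt(noise_power), clean.size)
            runs.append(jitter_shimmer(noisy, fs, f0))
        j, s = np.mean(runs, axis=0)
        print(f"SNR {snr_db:3d} dB: mean jitter {j:.4f}, mean shimmer {s:.4f}")
    ```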

  2. Concurrent fNIRS-fMRI measurement to validate a method for separating deep and shallow fNIRS signals by using multidistance optodes

    PubMed Central

    Funane, Tsukasa; Sato, Hiroki; Yahata, Noriaki; Takizawa, Ryu; Nishimura, Yukika; Kinoshita, Akihide; Katura, Takusige; Atsumori, Hirokazu; Fukuda, Masato; Kasai, Kiyoto; Koizumi, Hideaki; Kiguchi, Masashi

    2015-01-01

    Abstract. It has been reported that a functional near-infrared spectroscopy (fNIRS) signal can be contaminated by extracerebral contributions. Many algorithms using multidistance separations to address this issue have been proposed, but their spatial separation performance has rarely been validated with simultaneous measurements of fNIRS and functional magnetic resonance imaging (fMRI). We previously proposed a method for discriminating between deep and shallow contributions in fNIRS signals, referred to as the multidistance independent component analysis (MD-ICA) method. In this study, to validate the MD-ICA method from the spatial aspect, multidistance fNIRS, fMRI, and laser-Doppler-flowmetry signals were simultaneously obtained for 12 healthy adult males during three tasks. The fNIRS signal was separated into deep and shallow signals by using the MD-ICA method, and the correlation between the waveforms of the separated fNIRS signals and the gray matter blood oxygenation level–dependent signals was analyzed. A three-way analysis of variance (signal depth×Hb kind×task) indicated that the main effect of fNIRS signal depth on the correlation is significant [F(1,1286)=5.34, p<0.05]. This result indicates that the MD-ICA method successfully separates fNIRS signals into spatially deep and shallow signals, and the accuracy and reliability of the fNIRS signal will be improved with the method. PMID:26157983

  3. Sampling Frequency Optimisation and Nonlinear Distortion Mitigation in Subsampling Receiver

    NASA Astrophysics Data System (ADS)

    Castanheira, Pedro Xavier Melo Fernandes

    Subsampling receivers utilise the subsampling method to down convert signals from radio frequency (RF) to a lower frequency location. Multiple signals can also be down converted using a subsampling receiver, but choosing an incorrect subsampling frequency can cause the signals to alias one another after down conversion. The existing method for subsampling multiband signals focuses on down converting all the signals without any aliasing between them; the case considered initially was a dual-band signal, which was then extended to the more general multiband case. In this thesis, a new method is proposed under the assumption that only one target signal needs to remain free of overlap from the other multiband signals that are down converted at the same time. Using this assumption, the proposed method introduces formulas for calculating the valid subsampling frequencies, ensuring that the target signal is not aliased by the other signals. Simulation results show that the proposed method provides lower valid subsampling frequencies for down conversion than the existing methods.
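    The thesis' multiband formulas are not given in the abstract, so the sketch below only illustrates the classical single-band subsampling condition, 2·fH/n ≤ fs ≤ 2·fL/(n−1), for a band occupying [fL, fH]; the example band edges are placeholders:

    ```python
    def valid_subsampling_ranges(f_low, f_high):
        """Classical single-band condition: 2*fH/n <= fs <= 2*fL/(n-1) for integer n."""
        bandwidth = f_high - f_low
        n_max = int(f_high // bandwidth)       # largest usable integer "fold" count
        ranges = []
        for n in range(2, n_max + 1):          # n = 1 is ordinary Nyquist-rate sampling
            lo, hi = 2 * f_high / n, 2 * f_low / (n - 1)
            if lo <= hi:
                ranges.append((n, lo, hi))
        return ranges

    # Example: an RF band from 70 MHz to 80 MHz (values chosen only for illustration)
    for n, lo, hi in valid_subsampling_ranges(70e6, 80e6):
        print(f"n = {n}: {lo / 1e6:.2f} MHz <= fs <= {hi / 1e6:.2f} MHz")
    ```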

  4. A Deep Machine Learning Method for Classifying Cyclic Time Series of Biological Signals Using Time-Growing Neural Network.

    PubMed

    Gharehbaghi, Arash; Linden, Maria

    2017-10-12

    This paper presents a novel method for learning the cyclic contents of stochastic time series: the deep time-growing neural network (DTGNN). The DTGNN combines supervised and unsupervised methods in different levels of learning for an enhanced performance. It is employed by a multiscale learning structure to classify cyclic time series (CTS), in which the dynamic contents of the time series are preserved in an efficient manner. This paper suggests a systematic procedure for finding the design parameter of the classification method for a one-versus-multiple class application. A novel validation method is also suggested for evaluating the structural risk, both in a quantitative and a qualitative manner. The effect of the DTGNN on the performance of the classifier is statistically validated through the repeated random subsampling using different sets of CTS, from different medical applications. The validation involves four medical databases, comprised of 108 recordings of the electroencephalogram signal, 90 recordings of the electromyogram signal, 130 recordings of the heart sound signal, and 50 recordings of the respiratory sound signal. Results of the statistical validations show that the DTGNN significantly improves the performance of the classification and also exhibits an optimal structural risk.

  5. [A method to estimate the short-term fractal dimension of heart rate variability based on wavelet transform].

    PubMed

    Zhonggang, Liang; Hong, Yan

    2006-10-01

    A new method for calculating the fractal dimension of short-term heart rate variability (HRV) signals is presented. The method is based on the wavelet transform and filter banks. The implementation is as follows: first, the fractal component is extracted from the HRV signal using the wavelet transform; next, the power spectrum of the fractal component is estimated with an auto-regressive model, and the parameter gamma is estimated by the least-squares method; finally, the fractal dimension of the HRV signal is obtained from the formula D = 2 - (gamma - 1)/2. To validate the stability and reliability of the proposed method, 24 fractal signals with a fractal dimension of 1.6 were simulated using fractional Brownian motion; the results show that the method is stable and reliable.
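    A minimal sketch of this pipeline follows, with stated assumptions: a Welch periodogram stands in for the auto-regressive spectral estimate used in the paper, the fitted frequency band is arbitrary, and the test series is synthesized directly with a 1/f^gamma spectrum (gamma = 1.8, i.e. D = 1.6, matching the validation value above):

    ```python
    import numpy as np
    from scipy.signal import welch

    def fractal_dimension(x, fs, fmin, fmax):
        """Estimate D = 2 - (gamma - 1)/2 from the log-log spectral slope gamma."""
        f, pxx = welch(x, fs=fs, nperseg=min(1024, len(x)))
        band = (f >= fmin) & (f <= fmax)              # 1/f-like (fractal) band, assumed
        slope, _ = np.polyfit(np.log10(f[band]), np.log10(pxx[band]), 1)
        gamma = -slope                                # P(f) ~ 1/f**gamma
        return 2.0 - (gamma - 1.0) / 2.0

    # Toy check on a synthetic 1/f**gamma series built in the frequency domain
    rng = np.random.default_rng(1)
    n, fs, gamma_true = 4096, 4.0, 1.8                # gamma = 1.8 corresponds to D = 1.6
    freqs = np.fft.rfftfreq(n, d=1 / fs)
    spectrum = np.zeros(freqs.size, dtype=complex)
    spectrum[1:] = freqs[1:] ** (-gamma_true / 2) * np.exp(2j * np.pi * rng.random(freqs.size - 1))
    x = np.fft.irfft(spectrum, n)
    print("estimated D:", round(fractal_dimension(x, fs, fmin=0.01, fmax=1.0), 2))
    ```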

  6. Development and validation of an ionic chromatography method for the determination of nitrate, nitrite and chloride in meat.

    PubMed

    Lopez-Moreno, Cristina; Perez, Isabel Viera; Urbano, Ana M

    2016-03-01

    The purpose of this study is to develop and validate a method for the analysis of certain preservatives in meat and to obtain a suitable Certified Reference Material (CRM) to achieve this task. The preservatives studied were NO3(-), NO2(-) and Cl(-), as they serve as important antimicrobial agents in meat to inhibit the growth of spoilage bacteria. The meat samples were prepared using a treatment that allowed the production of a known CRM concentration that is highly homogeneous and stable in time. Matrix effects were also studied to evaluate their influence on the analytical signal of the ions of interest, showing that the matrix does not affect the final result. An assessment of the signal variation over time was carried out for the ions. Although the chloride and nitrate signals remained stable for the duration of the study, the nitrite signal decreased appreciably with time. A mathematical treatment of the data gave a stable nitrite signal, yielding a method suitable for the validation of these anions in meat. A statistical study was needed for the validation of the method, in which precision, accuracy, uncertainty and other mathematical parameters were evaluated, obtaining satisfactory results. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. Transient signal isotope analysis: validation of the method for isotope signal synchronization with the determination of amplifier first-order time constants.

    PubMed

    Gourgiotis, Alkiviadis; Manhès, Gérard; Louvat, Pascale; Moureau, Julien; Gaillardet, Jérôme

    2015-09-30

    During transient signal acquisition by Multi-Collection Inductively Coupled Plasma Mass Spectrometry (MC-ICPMS), an isotope ratio increase or decrease (isotopic drift hereafter) is often observed which is related to the different time responses of the amplifiers involved in multi-collection. This isotopic drift affects the quality of the isotopic data and, in a recent study, a method of internal amplifier signal synchronization for isotope drift correction was proposed. In this work the determination of the amplifier time constants was investigated in order to validate the method of internal amplifier signal synchronization for isotope ratio drift correction. Two different MC-ICPMS instruments, the Neptune and the Neptune Plus, were used, and both the lead transient signals and the signal decay curves of the amplifiers were investigated. Our results show that the first part of the amplifier signal decay curve is characterized by a pure exponential decay. This part of the signal decay was used for the effective calculation of the amplifier first-order time constants. The small differences between these time constants were compared with time lag values obtained from the method of isotope signal synchronization and were found to be in good agreement. This work proposes a way of determining amplifier first-order time constants. We show that isotopic drift is directly related to the amplifier first-order time constants and the method of internal amplifier signal synchronization for isotope ratio drift correction is validated. Copyright © 2015 John Wiley & Sons, Ltd.
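    As a hedged illustration of the time-constant step only (not the authors' implementation; the decay curves and fitting window below are synthetic), a first-order time constant tau can be recovered from the purely exponential part of a decay curve by a log-linear least-squares fit, since V(t) = V0·exp(-t/tau):

    ```python
    import numpy as np

    def first_order_time_constant(t, v):
        """Fit v(t) = v0 * exp(-t / tau) over a purely exponential decay segment."""
        slope, _ = np.polyfit(t, np.log(v), 1)    # log v = log v0 - t / tau
        return -1.0 / slope

    # Synthetic decay curves for two amplifiers with slightly different time constants
    t = np.linspace(0.0, 0.5, 500)                # seconds (values assumed for illustration)
    v1 = 10.0 * np.exp(-t / 0.100)
    v2 = 10.0 * np.exp(-t / 0.102)

    tau1 = first_order_time_constant(t, v1)
    tau2 = first_order_time_constant(t, v2)
    print(f"tau1 = {tau1 * 1e3:.1f} ms, tau2 = {tau2 * 1e3:.1f} ms, "
          f"difference = {abs(tau1 - tau2) * 1e3:.2f} ms")
    ```

    It is this small difference between time constants that the abstract compares with the time-lag values obtained from the internal signal-synchronization method.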

  8. Empirical Performance of Cross-Validation With Oracle Methods in a Genomics Context.

    PubMed

    Martinez, Josue G; Carroll, Raymond J; Müller, Samuel; Sampson, Joshua N; Chatterjee, Nilanjan

    2011-11-01

    When employing model selection methods with oracle properties such as the smoothly clipped absolute deviation (SCAD) and the Adaptive Lasso, it is typical to estimate the smoothing parameter by m-fold cross-validation, for example, m = 10. In problems where the true regression function is sparse and the signals large, such cross-validation typically works well. However, in regression modeling of genomic studies involving Single Nucleotide Polymorphisms (SNP), the true regression functions, while thought to be sparse, do not have large signals. We demonstrate empirically that in such problems, the number of selected variables using SCAD and the Adaptive Lasso, with 10-fold cross-validation, is a random variable that has considerable and surprising variation. Similar remarks apply to non-oracle methods such as the Lasso. Our study strongly questions the suitability of performing only a single run of m-fold cross-validation with any oracle method, and not just the SCAD and Adaptive Lasso.
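    The instability described here can be reproduced in spirit with the plain Lasso (the abstract notes that similar remarks apply to it); SCAD and the Adaptive Lasso are not sketched. The snippet assumes scikit-learn and simulated sparse, weak-signal data, and repeats 10-fold cross-validation with different fold splits while counting the selected variables:

    ```python
    import numpy as np
    from sklearn.linear_model import LassoCV
    from sklearn.model_selection import KFold

    # Simulated sparse regression with weak signals (dimensions are illustrative only)
    rng = np.random.default_rng(0)
    n, p, k = 200, 500, 10
    X = rng.normal(size=(n, p))
    beta = np.zeros(p)
    beta[:k] = 0.3                                  # small, SNP-like effect sizes
    y = X @ beta + rng.normal(size=n)

    counts = []
    for seed in range(20):                          # repeat m-fold CV with different random splits
        cv = KFold(n_splits=10, shuffle=True, random_state=seed)
        model = LassoCV(cv=cv, n_alphas=50, max_iter=5000).fit(X, y)
        counts.append(int(np.sum(model.coef_ != 0)))

    print("selected-variable counts over 20 CV runs:", counts)
    print("min / median / max:", min(counts), np.median(counts), max(counts))
    ```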

  9. Empirical Performance of Cross-Validation With Oracle Methods in a Genomics Context

    PubMed Central

    Martinez, Josue G.; Carroll, Raymond J.; Müller, Samuel; Sampson, Joshua N.; Chatterjee, Nilanjan

    2012-01-01

    When employing model selection methods with oracle properties such as the smoothly clipped absolute deviation (SCAD) and the Adaptive Lasso, it is typical to estimate the smoothing parameter by m-fold cross-validation, for example, m = 10. In problems where the true regression function is sparse and the signals large, such cross-validation typically works well. However, in regression modeling of genomic studies involving Single Nucleotide Polymorphisms (SNP), the true regression functions, while thought to be sparse, do not have large signals. We demonstrate empirically that in such problems, the number of selected variables using SCAD and the Adaptive Lasso, with 10-fold cross-validation, is a random variable that has considerable and surprising variation. Similar remarks apply to non-oracle methods such as the Lasso. Our study strongly questions the suitability of performing only a single run of m-fold cross-validation with any oracle method, and not just the SCAD and Adaptive Lasso. PMID:22347720

  10. Applied Chaos Level Test for Validation of Signal Conditions Underlying Optimal Performance of Voice Classification Methods

    ERIC Educational Resources Information Center

    Liu, Boquan; Polce, Evan; Sprott, Julien C.; Jiang, Jack J.

    2018-01-01

    Purpose: The purpose of this study is to introduce a chaos level test to evaluate linear and nonlinear voice type classification method performances under varying signal chaos conditions without subjective impression. Study Design: Voice signals were constructed with differing degrees of noise to model signal chaos. Within each noise power, 100…

  11. A simple iterative independent component analysis algorithm for vibration source signal identification of complex structures

    NASA Astrophysics Data System (ADS)

    Lee, Dong-Sup; Cho, Dae-Seung; Kim, Kookhyun; Jeon, Jae-Jin; Jung, Woo-Jin; Kang, Myeng-Hwan; Kim, Jae-Ho

    2015-01-01

    Independent Component Analysis (ICA), one of the blind source separation methods, can be applied to extract unknown source signals using only the received signals. This is accomplished by finding the statistical independence of signal mixtures and has been successfully applied to myriad fields such as medical science and image processing. Nevertheless, inherent problems have been reported when using this technique: instability and invalid ordering of the separated signals, particularly when a conventional ICA technique is used for vibratory source signal identification of complex structures. In this study, a simple iterative algorithm based on the conventional ICA is proposed to mitigate these problems. To extract more stable source signals with a valid order, the proposed method iteratively reorders the extracted mixing matrix and reconstructs the finally converged source signals, referring to the magnitudes of the correlation coefficients between the intermediately separated signals and signals measured on or near the sources. In order to review the problems of the conventional ICA technique and to validate the proposed method, numerical analyses have been carried out for a virtual response model and a 30 m class submarine model. Moreover, to investigate the applicability of the proposed method to a real problem involving a complex structure, an experiment has been carried out on a scaled submarine mock-up. The results show that the proposed method resolves the inherent problems of the conventional ICA technique.
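    A simplified sketch of the reordering idea (not the authors' full iterative algorithm): run a conventional ICA and then assign each separated component to a reference signal measured near a source by the magnitude of their correlation. FastICA from scikit-learn and the synthetic mixing below are assumptions:

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(0)
    t = np.linspace(0, 1, 2000)
    sources = np.column_stack([np.sin(2 * np.pi * 7 * t),           # tonal source
                               np.sign(np.sin(2 * np.pi * 3 * t)),  # square-wave source
                               rng.normal(size=t.size)])            # broadband source
    mixing = rng.normal(size=(3, 3))
    received = sources @ mixing.T + 0.05 * rng.normal(size=sources.shape)

    ica = FastICA(n_components=3, random_state=0)
    separated = ica.fit_transform(received)        # components come out unordered and rescaled

    # Reorder components by |correlation| with signals measured on or near the sources
    references = sources + 0.2 * rng.normal(size=sources.shape)     # stand-in near-source sensors
    order = []
    for ref in references.T:
        corr = [abs(np.corrcoef(ref, comp)[0, 1]) for comp in separated.T]
        order.append(int(np.argmax(corr)))
    print("component index assigned to each reference:", order)
    ```

    A greedy assignment like this can map two references to the same component; the iterative reordering described in the abstract is aimed at resolving exactly that kind of instability.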

  12. Calibration of Clinical Audio Recording and Analysis Systems for Sound Intensity Measurement.

    PubMed

    Maryn, Youri; Zarowski, Andrzej

    2015-11-01

    Sound intensity is an important acoustic feature of voice/speech signals. Yet recordings are performed with different microphone, amplifier, and computer configurations, and it is therefore crucial to calibrate sound intensity measures of clinical audio recording and analysis systems on the basis of output of a sound-level meter. This study was designed to evaluate feasibility, validity, and accuracy of calibration methods, including audiometric speech noise signals and human voice signals under typical speech conditions. Calibration consisted of 3 comparisons between data from 29 measurement microphone-and-computer systems and data from the sound-level meter: signal-specific comparison with audiometric speech noise at 5 levels, signal-specific comparison with natural voice at 3 levels, and cross-signal comparison with natural voice at 3 levels. Intensity measures from recording systems were then linearly converted into calibrated data on the basis of these comparisons, and validity and accuracy of calibrated sound intensity were investigated. Very strong correlations and quasisimilarity were found between calibrated data and sound-level meter data across calibration methods and recording systems. Calibration of clinical sound intensity measures according to this method is feasible, valid, accurate, and representative for a heterogeneous set of microphones and data acquisition systems in real-life circumstances with distinct noise contexts.

  13. Usage Autocorrelation Function in the Capacity of Indicator Shape of the Signal in Acoustic Emission Testing of Intricate Castings

    NASA Astrophysics Data System (ADS)

    Popkov, Artem

    2016-01-01

    The article describes the analysis of acoustic emission signals using the autocorrelation function. Operating factors such as the signal shape, onset time and carrier frequency were analysed. The purpose of the work is to estimate the validity of correlation methods for analysing such signals. An acoustic emission signal consists of different types of waves that propagate along different trajectories within the object under test; it is an amplitude-, phase- and frequency-modulated signal that can be described by its carrier frequency at a given point in time. For the analysed signal, the period is 12.5 microseconds and the carrier frequency is 80 kHz. Using the autocorrelation function as an indicator of the onset time of an acoustic emission signal improves the validity of emitter localization.

  14. An improved method based on wavelet coefficient correlation to filter noise in Doppler ultrasound blood flow signals

    NASA Astrophysics Data System (ADS)

    Wan, Renzhi; Zu, Yunxiao; Shao, Lin

    2018-04-01

    The blood echo signal obtained with medical Doppler ultrasound devices always contains a vascular wall pulsation signal. The traditional way to remove the wall signal is a high-pass filter, which also removes the low-frequency part of the blood flow signal. Some researchers have proposed a method based on region-selective reduction, which first estimates the wall pulsation signal and then removes it from the mixed signal. Although this method nominally uses the correlation between wavelet coefficients to distinguish the blood signal from the wall signal, it is in effect a wavelet-threshold de-noising method whose performance is not ideal. To obtain a better result, this paper proposes an improved method based on wavelet coefficient correlation to separate the blood and wall signals, and the algorithm is verified by computer simulation.
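    The sketch below only illustrates the general idea of exploiting correlation between wavelet coefficients at adjacent scales to build a keep/discard mask; it assumes PyWavelets and a stationary wavelet transform, uses a toy signal, and is not the authors' region-selective algorithm (their estimation of the wall signal and their normalization details are omitted):

    ```python
    import numpy as np
    import pywt

    rng = np.random.default_rng(0)
    n = 1024                                           # swt needs a length divisible by 2**level
    t = np.arange(n) / 1000.0
    signal = np.sin(2 * np.pi * 40 * t) + 0.8 * rng.normal(size=n)   # toy "flow + noise" mixture

    level = 4
    coeffs = pywt.swt(signal, "db4", level=level)      # [(cA_L, cD_L), ..., (cA_1, cD_1)]
    details = [cD for _, cD in coeffs]

    masked = [list(pair) for pair in coeffs]
    for i in range(len(details) - 1):
        corr = details[i] * details[i + 1]             # product of details at adjacent scales
        # Rescale the product to the energy of the current scale before comparing magnitudes
        corr *= np.sqrt(np.sum(details[i] ** 2) / (np.sum(corr ** 2) + 1e-12))
        keep = np.abs(corr) >= np.abs(details[i])      # keep coefficients with correlated structure
        masked[i][1] = np.where(keep, details[i], 0.0)

    denoised = pywt.iswt([tuple(pair) for pair in masked], "db4")
    print("input RMS %.3f -> output RMS %.3f" % (np.std(signal), np.std(denoised)))
    ```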

  15. Emitter signal separation method based on multi-level digital channelization

    NASA Astrophysics Data System (ADS)

    Han, Xun; Ping, Yifan; Wang, Sujun; Feng, Ying; Kuang, Yin; Yang, Xinquan

    2018-02-01

    To solve the problem of emitter separation in a complex electromagnetic environment, a signal separation method based on multi-level digital channelization is proposed in this paper. A two-level structure that divides the signal into different channels is designed first; after that, the peaks of the different channels are tracked using a tracking filter, and signals that coincide in the time domain are separated in the time-frequency domain. Finally, the time-domain waveforms of the different signals are recovered by inverse transformation. The validity of the proposed method is demonstrated by experiment.

  16. Downhole microseismic signal-to-noise ratio enhancement via strip matching shearlet transform

    NASA Astrophysics Data System (ADS)

    Li, Juan; Ji, Shuo; Li, Yue; Qian, Zhihong; Lu, Weili

    2018-04-01

    Shearlet transform has been proved effective in noise attenuation. However, because of the low magnitude and high frequency of downhole microseismic signals, the coefficient values of valid signals and noise are similar in the shearlet domain. As a result, it is hard to suppress the noise. In this paper, we present a novel signal-to-noise ratio enhancement scheme called strip matching shearlet transform. The method takes into account the directivity of microseismic events and shearlets. Through strip matching, the matching degree in direction between them has been promoted. Then the coefficient values of valid signals are much larger than those of the noise. Consequently, we can separate them well with the help of thresholding. The experimental results on both synthetic records and field data illustrate that our proposed method preserves the useful components and attenuates the noise well.

  17. Elementary signaling modes predict the essentiality of signal transduction network components

    PubMed Central

    2011-01-01

    Background Understanding how signals propagate through signaling pathways and networks is a central goal in systems biology. Quantitative dynamic models help to achieve this understanding, but are difficult to construct and validate because of the scarcity of known mechanistic details and kinetic parameters. Structural and qualitative analysis is emerging as a feasible and useful alternative for interpreting signal transduction. Results In this work, we present an integrative computational method for evaluating the essentiality of components in signaling networks. This approach expands an existing signaling network to a richer representation that incorporates the positive or negative nature of interactions and the synergistic behaviors among multiple components. Our method simulates both knockout and constitutive activation of components as node disruptions, and takes into account the possible cascading effects of a node's disruption. We introduce the concept of elementary signaling mode (ESM), as the minimal set of nodes that can perform signal transduction independently. Our method ranks the importance of signaling components by the effects of their perturbation on the ESMs of the network. Validation on several signaling networks describing the immune response of mammals to bacteria, guard cell abscisic acid signaling in plants, and T cell receptor signaling shows that this method can effectively uncover the essentiality of components mediating a signal transduction process and results in strong agreement with the results of Boolean (logical) dynamic models and experimental observations. Conclusions This integrative method is an efficient procedure for exploratory analysis of large signaling and regulatory networks where dynamic modeling or experimental tests are impractical. Its results serve as testable predictions, provide insights into signal transduction and regulatory mechanisms and can guide targeted computational or experimental follow-up studies. The source codes for the algorithms developed in this study can be found at http://www.phys.psu.edu/~ralbert/ESM. PMID:21426566

  18. Evaluation of the efficiency of continuous wavelet transform as processing and preprocessing algorithm for resolution of overlapped signals in univariate and multivariate regression analyses; an application to ternary and quaternary mixtures

    NASA Astrophysics Data System (ADS)

    Hegazy, Maha A.; Lotfy, Hayam M.; Mowaka, Shereen; Mohamed, Ekram Hany

    2016-07-01

    Wavelets have been adapted for a vast number of signal-processing applications due to the amount of information that can be extracted from a signal. In this work, a comparative study was conducted on the efficiency of the continuous wavelet transform (CWT) as a signal-processing tool in univariate regression and as a pre-processing tool in multivariate analysis using partial least squares (CWT-PLS). These were applied to complex spectral signals of ternary and quaternary mixtures. The CWT-PLS method succeeded in the simultaneous determination of a quaternary mixture of drotaverine (DRO), caffeine (CAF), paracetamol (PAR) and p-aminophenol (PAP, the major impurity of paracetamol). In contrast, the univariate CWT failed to determine the quaternary mixture simultaneously; it was able to determine only PAR and PAP, as well as the ternary mixtures of DRO, CAF, and PAR and of CAF, PAR, and PAP. During the CWT calculations, different wavelet families were tested. The univariate CWT method was validated according to the ICH guidelines. For the development of the CWT-PLS model, a calibration set was prepared by means of an orthogonal experimental design, and its absorption spectra were recorded and processed by CWT. The CWT-PLS model was constructed by regression between the wavelet coefficients and the concentration matrices, and validation was performed with both cross-validation and external validation sets. Both methods were successfully applied to the determination of the studied drugs in pharmaceutical formulations.
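    A schematic of the CWT-PLS idea, under explicit assumptions: PyWavelets for the CWT, scikit-learn for PLS, synthetic Gaussian bands standing in for the recorded absorption spectra, and the Mexican-hat wavelet at a single arbitrary scale in place of whichever wavelet family and scale performed best in the study:

    ```python
    import numpy as np
    import pywt
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(0)
    wavelengths = np.linspace(200, 400, 300)          # nm, synthetic spectral axis

    def band(center, width):                          # Gaussian absorption band
        return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

    # Synthetic four-component calibration set (concentrations and band positions are made up)
    C = rng.uniform(0.1, 1.0, size=(25, 4))
    pure = np.vstack([band(250, 12), band(265, 15), band(300, 10), band(320, 14)])
    spectra = C @ pure + 0.01 * rng.normal(size=(25, wavelengths.size))

    # Pre-process every spectrum with a continuous wavelet transform at one selected scale
    scale = 20
    X = np.asarray([pywt.cwt(s, scales=[scale], wavelet="mexh")[0][0] for s in spectra])

    pls = PLSRegression(n_components=4)
    C_pred = cross_val_predict(pls, X, C, cv=5)       # internal cross-validation of the model
    print("RMSE per component:", np.round(np.sqrt(np.mean((C_pred - C) ** 2, axis=0)), 4))
    ```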

  19. Dimension from covariance matrices.

    PubMed

    Carroll, T L; Byers, J M

    2017-02-01

    We describe a method to estimate embedding dimension from a time series. This method includes an estimate of the probability that the dimension estimate is valid. Such validity estimates are not common in algorithms for calculating the properties of dynamical systems. The algorithm described here compares the eigenvalues of covariance matrices created from an embedded signal to the eigenvalues for a covariance matrix of a Gaussian random process with the same dimension and number of points. A statistical test gives the probability that the eigenvalues for the embedded signal did not come from the Gaussian random process.
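    A minimal sketch of the comparison described, under stated assumptions (toy signal, arbitrary embedding dimension, and no statistical test): embed the time series, take the eigenvalues of its covariance matrix, and compare them with the eigenvalues obtained from a Gaussian random process of the same length and embedding dimension:

    ```python
    import numpy as np

    def embed(x, dim, delay=1):
        """Time-delay embedding: rows are [x(i), x(i+delay), ..., x(i+(dim-1)*delay)]."""
        n = len(x) - (dim - 1) * delay
        return np.column_stack([x[i * delay: i * delay + n] for i in range(dim)])

    rng = np.random.default_rng(0)
    t = np.arange(5000) * 0.05
    signal = np.sin(t) + 0.5 * np.sin(2.3 * t)        # toy low-dimensional signal
    dim = 8

    eig_signal = np.sort(np.linalg.eigvalsh(np.cov(embed(signal, dim), rowvar=False)))[::-1]
    gauss = rng.normal(size=signal.size)
    eig_gauss = np.sort(np.linalg.eigvalsh(np.cov(embed(gauss, dim), rowvar=False)))[::-1]

    # For the deterministic signal only a few eigenvalues stand clear of the noise floor,
    # whereas the Gaussian reference spreads its variance across all directions.
    print("signal  :", np.round(eig_signal / eig_signal.sum(), 3))
    print("gaussian:", np.round(eig_gauss / eig_gauss.sum(), 3))
    ```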

  20. A Novel Hybrid Intelligent Indoor Location Method for Mobile Devices by Zones Using Wi-Fi Signals

    PubMed Central

    Castañón–Puga, Manuel; Salazar, Abby Stephanie; Aguilar, Leocundo; Gaxiola-Pacheco, Carelia; Licea, Guillermo

    2015-01-01

    The increasing use of mobile devices in indoor spaces brings challenges to location methods. This work presents a hybrid intelligent method based on data mining and Type-2 fuzzy logic to locate mobile devices in an indoor space by zones using Wi-Fi signals from selected access points (APs). This approach takes advantage of wireless local area networks (WLANs) over other types of architectures and implements the complete method in a mobile application using the developed tools. Besides, the proposed approach is validated by experimental data obtained from case studies and the cross-validation technique. For the purpose of generating the fuzzy rules that conform to the Takagi–Sugeno fuzzy system structure, a semi-supervised data mining technique called subtractive clustering is used. This algorithm finds centers of clusters from the radius map given by the collected signals from APs. Measurements of Wi-Fi signals can be noisy due to several factors mentioned in this work, so this method proposed the use of Type-2 fuzzy logic for modeling and dealing with such uncertain information. PMID:26633417

  1. A Novel Hybrid Intelligent Indoor Location Method for Mobile Devices by Zones Using Wi-Fi Signals.

    PubMed

    Castañón-Puga, Manuel; Salazar, Abby Stephanie; Aguilar, Leocundo; Gaxiola-Pacheco, Carelia; Licea, Guillermo

    2015-12-02

    The increasing use of mobile devices in indoor spaces brings challenges to location methods. This work presents a hybrid intelligent method based on data mining and Type-2 fuzzy logic to locate mobile devices in an indoor space by zones using Wi-Fi signals from selected access points (APs). This approach takes advantage of wireless local area networks (WLANs) over other types of architectures and implements the complete method in a mobile application using the developed tools. Besides, the proposed approach is validated by experimental data obtained from case studies and the cross-validation technique. For the purpose of generating the fuzzy rules that conform to the Takagi-Sugeno fuzzy system structure, a semi-supervised data mining technique called subtractive clustering is used. This algorithm finds centers of clusters from the radius map given by the collected signals from APs. Measurements of Wi-Fi signals can be noisy due to several factors mentioned in this work, so this method proposed the use of Type-2 fuzzy logic for modeling and dealing with such uncertain information.

  2. Forward ultrasonic model validation using wavefield imaging methods

    NASA Astrophysics Data System (ADS)

    Blackshire, James L.

    2018-04-01

    The validation of forward ultrasonic wave propagation models in a complex titanium polycrystalline material system is accomplished using wavefield imaging methods. An innovative measurement approach is described that permits the visualization and quantitative evaluation of bulk elastic wave propagation and scattering behaviors in the titanium material for a typical focused immersion ultrasound measurement process. Results are provided for the determination and direct comparison of the ultrasonic beam's focal properties, mode-converted shear wave position and angle, and scattering and reflection from millimeter-sized microtexture regions (MTRs) within the titanium material. The approach and results are important with respect to understanding the root-cause backscatter signal responses generated in aerospace engine materials, where model-assisted methods are being used to understand the probabilistic nature of the backscatter signal content. Wavefield imaging methods are shown to be an effective means for corroborating and validating important forward model predictions in a direct manner using time- and spatially-resolved displacement field amplitude measurements.

  3. Evaluation of the efficiency of continuous wavelet transform as processing and preprocessing algorithm for resolution of overlapped signals in univariate and multivariate regression analyses; an application to ternary and quaternary mixtures.

    PubMed

    Hegazy, Maha A; Lotfy, Hayam M; Mowaka, Shereen; Mohamed, Ekram Hany

    2016-07-05

    Wavelets have been adapted for a vast number of signal-processing applications due to the amount of information that can be extracted from a signal. In this work, a comparative study was conducted on the efficiency of the continuous wavelet transform (CWT) as a signal-processing tool in univariate regression and as a pre-processing tool in multivariate analysis using partial least squares (CWT-PLS). These were applied to complex spectral signals of ternary and quaternary mixtures. The CWT-PLS method succeeded in the simultaneous determination of a quaternary mixture of drotaverine (DRO), caffeine (CAF), paracetamol (PAR) and p-aminophenol (PAP, the major impurity of paracetamol). In contrast, the univariate CWT failed to determine the quaternary mixture simultaneously; it was able to determine only PAR and PAP, as well as the ternary mixtures of DRO, CAF, and PAR and of CAF, PAR, and PAP. During the CWT calculations, different wavelet families were tested. The univariate CWT method was validated according to the ICH guidelines. For the development of the CWT-PLS model, a calibration set was prepared by means of an orthogonal experimental design, and its absorption spectra were recorded and processed by CWT. The CWT-PLS model was constructed by regression between the wavelet coefficients and the concentration matrices, and validation was performed with both cross-validation and external validation sets. Both methods were successfully applied to the determination of the studied drugs in pharmaceutical formulations. Copyright © 2016 Elsevier B.V. All rights reserved.

  4. A long-term validation of the modernised DC-ARC-OES solid-sample method.

    PubMed

    Flórián, K; Hassler, J; Förster, O

    2001-12-01

    The validation procedure based on the ISO 17025 standard has been used to study and illustrate both the long-term stability of the calibration process of the DC-ARC solid-sample spectrometric method and the main validation criteria of the method. In the calculation of the validation characteristics that depend on linearity (calibration), the fulfilment of prerequisite criteria such as normality and homoscedasticity was also checked. To decide whether there are any trends in the time variation of the analytical signal, the Neumann trend test was also applied and evaluated. Finally, a comparison with similar validation data of the ETV-ICP-OES method was carried out.

  5. Validating silicon polytrodes with paired juxtacellular recordings: method and dataset

    PubMed Central

    Lopes, Gonçalo; Frazão, João; Nogueira, Joana; Lacerda, Pedro; Baião, Pedro; Aarts, Arno; Andrei, Alexandru; Musa, Silke; Fortunato, Elvira; Barquinha, Pedro; Kampff, Adam R.

    2016-01-01

    Cross-validating new methods for recording neural activity is necessary to accurately interpret and compare the signals they measure. Here we describe a procedure for precisely aligning two probes for in vivo “paired-recordings” such that the spiking activity of a single neuron is monitored with both a dense extracellular silicon polytrode and a juxtacellular micropipette. Our new method allows for efficient, reliable, and automated guidance of both probes to the same neural structure with micrometer resolution. We also describe a new dataset of paired-recordings, which is available online. We propose that our novel targeting system, and ever expanding cross-validation dataset, will be vital to the development of new algorithms for automatically detecting/sorting single-units, characterizing new electrode materials/designs, and resolving nagging questions regarding the origin and nature of extracellular neural signals. PMID:27306671

  6. Accurate prediction of secreted substrates and identification of a conserved putative secretion signal for type III secretion systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Samudrala, Ram; Heffron, Fred; McDermott, Jason E.

    2009-04-24

    The type III secretion system is an essential component for virulence in many Gram-negative bacteria. Though components of the secretion system apparatus are conserved, its substrates, effector proteins, are not. We have used a machine learning approach to identify new secreted effectors. The method integrates evolutionary measures, such as the pattern of homologs in a range of other organisms, and sequence-based features, such as G+C content, amino acid composition and the N-terminal 30 residues of the protein sequence. The method was trained on known effectors from Salmonella typhimurium and validated on a corresponding set of effectors from Pseudomonas syringae, after eliminating effectors with detectable sequence similarity. The method was able to identify all of the known effectors in P. syringae with a specificity of 84% and sensitivity of 82%. The reciprocal validation, training on P. syringae and validating on S. typhimurium, gave similar results with a specificity of 86% when the sensitivity level was 87%. These results show that type III effectors in disparate organisms share common features. We found that maximal performance is attained by including an N-terminal sequence of only 30 residues, which agrees with previous studies indicating that this region contains the secretion signal. We then used the method to define the most important residues in this putative secretion signal. Finally, we present novel predictions of secreted effectors in S. typhimurium, some of which have been experimentally validated, and apply the method to predict secreted effectors in the genetically intractable human pathogen Chlamydia trachomatis. This approach is a novel and effective way to identify secreted effectors in a broad range of pathogenic bacteria for further experimental characterization and provides insight into the nature of the type III secretion signal.

  7. Adaptive Fourier decomposition based R-peak detection for noisy ECG Signals.

    PubMed

    Ze Wang; Chi Man Wong; Feng Wan

    2017-07-01

    An adaptive Fourier decomposition (AFD) based R-peak detection method is proposed for noisy ECG signals. Although lots of QRS detection methods have been proposed in literature, most detection methods require high signal quality. The proposed method extracts the R waves from the energy domain using the AFD and determines the R-peak locations based on the key decomposition parameters, achieving the denoising and the R-peak detection at the same time. Validated by clinical ECG signals in the MIT-BIH Arrhythmia Database, the proposed method shows better performance than the Pan-Tompkin (PT) algorithm in both situations of a native PT and the PT with a denoising process.

  8. Validating silicon polytrodes with paired juxtacellular recordings: method and dataset.

    PubMed

    Neto, Joana P; Lopes, Gonçalo; Frazão, João; Nogueira, Joana; Lacerda, Pedro; Baião, Pedro; Aarts, Arno; Andrei, Alexandru; Musa, Silke; Fortunato, Elvira; Barquinha, Pedro; Kampff, Adam R

    2016-08-01

    Cross-validating new methods for recording neural activity is necessary to accurately interpret and compare the signals they measure. Here we describe a procedure for precisely aligning two probes for in vivo "paired-recordings" such that the spiking activity of a single neuron is monitored with both a dense extracellular silicon polytrode and a juxtacellular micropipette. Our new method allows for efficient, reliable, and automated guidance of both probes to the same neural structure with micrometer resolution. We also describe a new dataset of paired-recordings, which is available online. We propose that our novel targeting system, and ever expanding cross-validation dataset, will be vital to the development of new algorithms for automatically detecting/sorting single-units, characterizing new electrode materials/designs, and resolving nagging questions regarding the origin and nature of extracellular neural signals. Copyright © 2016 the American Physiological Society.

  9. Synchronous acquisition of multi-channel signals by single-channel ADC based on square wave modulation

    NASA Astrophysics Data System (ADS)

    Yi, Xiaoqing; Hao, Liling; Jiang, Fangfang; Xu, Lisheng; Song, Shaoxiu; Li, Gang; Lin, Ling

    2017-08-01

    Synchronous acquisition of multi-channel biopotential signals, such as the electrocardiogram (ECG) and the electroencephalogram, is of vital significance in health care and clinical diagnosis. In this paper, we propose a new method that uses a single-channel ADC to synchronously acquire multi-channel biopotential signals modulated by square waves. A specific modulation and demodulation scheme is investigated that requires no complex signal processing. For each channel, the sampling rate does not decline as the number of signal channels increases. More specifically, the signal-to-noise ratio of each channel is n times that of the time-division method, an improvement of 3.01 × log2(n) dB, where n represents the number of signal channels. A numerical simulation shows the feasibility and validity of this method. In addition, a newly developed 8-lead ECG system based on the new method is introduced. These experiments illustrate that the method is practicable and therefore has potential for low-cost medical monitors.
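    The quoted figure follows directly from expressing an n-fold SNR improvement in decibels; a worked form of the conversion (the only assumption is that "n times" refers to a power ratio):

    ```latex
    \Delta\mathrm{SNR}\,[\mathrm{dB}]
      = 10\log_{10}(n)
      = 10\log_{10}(2)\,\log_{2}(n)
      \approx 3.01\,\log_{2}(n)
    ```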

  10. An evaluation of HEMT potential for millimeter-wave signal sources using interpolation and harmonic balance techniques

    NASA Technical Reports Server (NTRS)

    Kwon, Youngwoo; Pavlidis, Dimitris; Tutt, Marcel N.

    1991-01-01

    A large-signal analysis method based on an harmonic balance technique and a 2-D cubic spline interpolation function has been developed and applied to the prediction of InP-based HEMT oscillator performance for frequencies extending up to the submillimeter-wave range. The large-signal analysis method uses a limited number of DC and small-signal S-parameter data and allows the accurate characterization of HEMT large-signal behavior. The method has been validated experimentally using load-pull measurement. Oscillation frequency, power performance, and load requirements are discussed, with an operation capability of 300 GHz predicted using state-of-the-art devices (fmax is approximately equal to 450 GHz).

  11. Extended internal standard method for quantitative 1H NMR assisted by chromatography (EIC) for analyte overlapping impurity on 1H NMR spectra.

    PubMed

    Saito, Naoki; Kitamaki, Yuko; Otsuka, Satoko; Yamanaka, Noriko; Nishizaki, Yuzo; Sugimoto, Naoki; Imura, Hisanori; Ihara, Toshihide

    2018-07-01

    We devised a novel extended internal standard method of quantitative 1H NMR (qNMR) assisted by chromatography (EIC) that accurately quantifies 1H signal areas of analytes, even when the chemical shifts of the impurity and analyte signals overlap completely. When impurity and analyte signals overlap in the 1H NMR spectrum but can be separated in a chromatogram, the response ratio of the impurity and an internal standard (IS) can be obtained from the chromatogram. If the response ratio can be converted into the 1H signal area ratio of the impurity and the IS, the 1H signal area of the analyte can be evaluated accurately by mathematically correcting the contribution of the impurity signal overlapping the analyte in the 1H NMR spectrum. In this study, gas chromatography and liquid chromatography were used. We used 2-chlorophenol and 4-chlorophenol containing phenol as an impurity as examples in which impurity and analyte signals overlap, to validate and demonstrate the EIC, respectively. Because the 1H signals of 2-chlorophenol and phenol can be separated in specific alkaline solutions, 2-chlorophenol is suitable for validating the EIC by comparing the analytical value obtained by the EIC with that obtained by qNMR alone under the alkaline condition. By the EIC, the purity of 2-chlorophenol was obtained with a relative expanded uncertainty (k = 2) of 0.24%. The purity matched that obtained under the alkaline condition. Furthermore, the EIC was also validated by evaluating the phenol content with the absolute calibration curve method by gas chromatography. Finally, we demonstrated that the EIC made it possible to evaluate the purity of 4-chlorophenol, whose signal could not be separated from the 1H signal of phenol under any condition, with a relative expanded uncertainty (k = 2) of 0.22%. Copyright © 2018 Elsevier B.V. All rights reserved.

  12. Laser velocimetry with fluorescent dye-doped polystyrene microspheres.

    PubMed

    Lowe, K Todd; Maisto, Pietro; Byun, Gwibo; Simpson, Roger L; Verkamp, Max; Danehy, Paul M; Tiemsin, Pacita I; Wohl, Christopher J

    2013-04-15

    Simultaneous Mie scattering and laser-induced fluorescence (LIF) signals are obtained from individual polystyrene latex microspheres dispersed in an air flow. Microspheres with a mean diameter of less than 1 μm were doped with two organic fluorescent dyes, Rhodamine B (RhB) and dichlorofluorescein (DCF), intended either to provide improved particle-based flow velocimetry in the vicinity of surfaces or to provide scalar flow information (e.g., marking one of two fluid streams). Both dyes exhibit measurable fluorescence signals that are on the order of 10(-3) to 10(-4) times weaker than the simultaneously measured Mie signals. It is determined that at the conditions measured, 95.5% of RhB LIF signals and 32.2% of DCF signals provide valid laser-Doppler velocimetry measurements compared with the Mie scattering validation rate with 6.5 W of 532 nm excitation, while RhB excited with 1.0 W incident laser power still exhibits 95.4% valid velocimetry signals from the LIF channel. The results suggest that the method is applicable to wind tunnel measurements near walls where laser flare can be a limiting factor and monodisperse particles are essential.

  13. Joint Estimation of Time-Frequency Signature and DOA Based on STFD for Multicomponent Chirp Signals

    PubMed Central

    Zhao, Ziyue; Liu, Congfeng

    2014-01-01

    In the study of the joint estimation of time-frequency signature and direction of arrival (DOA) for multicomponent chirp signals, an estimation method based on spatial time-frequency distributions (STFDs) is proposed in this paper. Firstly, array signal model for multicomponent chirp signals is presented and then array processing is applied in time-frequency analysis to mitigate cross-terms. According to the results of the array processing, Hough transform is performed and the estimation of time-frequency signature is obtained. Subsequently, subspace method for DOA estimation based on STFD matrix is achieved. Simulation results demonstrate the validity of the proposed method. PMID:27382610

  14. Joint Estimation of Time-Frequency Signature and DOA Based on STFD for Multicomponent Chirp Signals.

    PubMed

    Zhao, Ziyue; Liu, Congfeng

    2014-01-01

    In the study of the joint estimation of time-frequency signature and direction of arrival (DOA) for multicomponent chirp signals, an estimation method based on spatial time-frequency distributions (STFDs) is proposed in this paper. Firstly, array signal model for multicomponent chirp signals is presented and then array processing is applied in time-frequency analysis to mitigate cross-terms. According to the results of the array processing, Hough transform is performed and the estimation of time-frequency signature is obtained. Subsequently, subspace method for DOA estimation based on STFD matrix is achieved. Simulation results demonstrate the validity of the proposed method.

  15. Flight instrument and telemetry response and its inversion

    NASA Technical Reports Server (NTRS)

    Weinberger, M. R.

    1971-01-01

    Mathematical models of rate gyros, servo accelerometers, pressure transducers, and telemetry systems were derived and their parameters were obtained from laboratory tests. Analog computer simulations were used extensively for verification of the validity for fast and large input signals. An optimal inversion method was derived to reconstruct input signals from noisy output signals and a computer program was prepared.

  16. Accurate derivation of heart rate variability signal for detection of sleep disordered breathing in children.

    PubMed

    Chatlapalli, S; Nazeran, H; Melarkod, V; Krishnam, R; Estrada, E; Pamula, Y; Cabrera, S

    2004-01-01

    The electrocardiogram (ECG) signal is used extensively as a low-cost diagnostic tool to provide information concerning the heart's state of health. Accurate determination of the QRS complex, in particular reliable detection of the R wave peak, is essential in computer-based ECG analysis. ECG data from Physionet's Sleep-Apnea database were used to develop, test, and validate a robust heart rate variability (HRV) signal derivation algorithm. The HRV signal was derived from pre-processed ECG signals by developing an enhanced Hilbert transform (EHT) algorithm with built-in missing-beat detection capability for reliable QRS detection. The performance of the EHT algorithm was then compared against that of a popular Hilbert transform-based (HT) QRS detection algorithm. Autoregressive (AR) modeling of the HRV power spectrum for both EHT- and HT-derived HRV signals was performed, and different parameters from their power spectra as well as approximate entropy were derived for comparison. Poincare plots were then used as a visualization tool to highlight the detection of the missing beats in the EHT method. After validation of the EHT algorithm on ECG data from Physionet, the algorithm was further tested and validated on a dataset obtained from children undergoing polysomnography for detection of sleep disordered breathing (SDB). Sensitive measures of accurate HRV signals were then derived to be used in detecting and diagnosing sleep disordered breathing in children. All signal processing algorithms were implemented in MATLAB. We present a description of the EHT algorithm and analyze pilot data for eight children undergoing nocturnal polysomnography. The pilot data demonstrated that the EHT method provides an accurate way of deriving the HRV signal and plays an important role in the extraction of reliable measures to distinguish between periods of normal and sleep disordered breathing (SDB) in children.
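    A compact sketch of a Hilbert-transform-style R-peak detector and RR-interval (HRV) derivation, assuming SciPy; this is a generic HT pipeline with assumed filter band and thresholds, not the enhanced EHT algorithm with missing-beat detection described above:

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert, find_peaks

    def derive_hrv(ecg, fs):
        """R-peak detection via band-pass + derivative + Hilbert envelope, then RR intervals."""
        b, a = butter(2, [5 / (fs / 2), 15 / (fs / 2)], btype="band")  # QRS band (assumed)
        filtered = filtfilt(b, a, ecg)
        envelope = np.abs(hilbert(np.gradient(filtered)))   # analytic-signal envelope
        peaks, _ = find_peaks(envelope,
                              height=0.4 * np.max(envelope),
                              distance=int(0.3 * fs))       # ~300 ms refractory period
        rr = np.diff(peaks) / fs                             # RR intervals = HRV signal
        return peaks, rr

    # Usage with a synthetic "ECG": a 1.2 Hz impulse train, smoothed and corrupted with noise
    fs = 250
    t = np.arange(0, 30, 1 / fs)
    ecg = np.zeros_like(t)
    ecg[(np.arange(36) / 1.2 * fs).astype(int)] = 1.0
    ecg = np.convolve(ecg, np.hanning(21), mode="same")
    ecg += 0.05 * np.random.default_rng(0).normal(size=t.size)

    peaks, rr = derive_hrv(ecg, fs)
    print("beats found:", len(peaks), "- mean RR:", round(float(rr.mean()), 3), "s")
    ```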

  17. An augmented classical least squares method for quantitative Raman spectral analysis against component information loss.

    PubMed

    Zhou, Yan; Cao, Hui

    2013-01-01

    We propose an augmented classical least squares (ACLS) calibration method for quantitative Raman spectral analysis against component information loss. Raman spectral signals with low analyte concentration correlations were selected and used as substitutes for unknown quantitative component information during the CLS calibration procedure. The number of selected signals was determined by using the leave-one-out root-mean-square error of cross-validation (RMSECV) curve. An ACLS model was built based on the augmented concentration matrix and the reference spectral signal matrix. The proposed method was compared with partial least squares (PLS) and principal component regression (PCR) using one example: a data set recorded from an experiment on analyte concentration determination using Raman spectroscopy. A 2-fold cross-validation with a Venetian blinds strategy was exploited to evaluate the predictive power of the proposed method. One-way analysis of variance (ANOVA) was used to assess the difference in predictive power between the proposed method and the existing methods. Results indicate that the proposed method is effective at increasing the robust predictive power of the traditional CLS model against component information loss, and its predictive power is comparable to that of PLS or PCR.

  18. Evaluation of Simultaneous Multisine Excitation of the Joined Wing SensorCraft Aeroelastic Wind Tunnel Model

    NASA Technical Reports Server (NTRS)

    Heeg, Jennifer; Morelli, Eugene A.

    2011-01-01

    Multiple mutually orthogonal signals comprise excitation data sets for aeroservoelastic system identification. A multisine signal is a sum of harmonic sinusoid components. A set of these signals is made orthogonal by distribution of the frequency content such that each signal contains unique frequencies. This research extends the range of application of an excitation method developed for stability and control flight testing to aeroservoelastic modeling from wind tunnel testing. Wind tunnel data for the Joined Wing SensorCraft model validates this method, demonstrating that these signals applied simultaneously reproduce the frequency response estimates achieved from one-at-a-time excitation.
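    A short sketch of how mutually orthogonal multisines can be constructed by giving each input a disjoint, interleaved set of harmonics of a common fundamental; the frequency grid, record length, and random phases below are assumptions, not the wind tunnel test values:

    ```python
    import numpy as np

    def orthogonal_multisines(n_inputs, n_harmonics, T, fs, seed=0):
        """Each input sums a unique subset of harmonics of 1/T, so the inputs are orthogonal."""
        rng = np.random.default_rng(seed)
        t = np.arange(0.0, T, 1.0 / fs)
        f0 = 1.0 / T                                     # fundamental of the common period
        signals = []
        for i in range(n_inputs):
            k = np.arange(i + 1, n_inputs * n_harmonics + 1, n_inputs)  # disjoint harmonic indices
            phases = rng.uniform(0, 2 * np.pi, k.size)   # random phases (often optimized instead)
            u = np.sum(np.sin(2 * np.pi * np.outer(k * f0, t) + phases[:, None]), axis=0)
            signals.append(u / np.max(np.abs(u)))        # normalize peak amplitude
        return t, np.array(signals)

    t, U = orthogonal_multisines(n_inputs=3, n_harmonics=10, T=10.0, fs=200.0)
    # Inner products over the common period are ~0 off the diagonal, confirming orthogonality
    print(np.round(U @ U.T * (t[1] - t[0]), 3))
    ```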

  19. DOA Finding with Support Vector Regression Based Forward-Backward Linear Prediction.

    PubMed

    Pan, Jingjing; Wang, Yide; Le Bastard, Cédric; Wang, Tianzhen

    2017-05-27

    Direction-of-arrival (DOA) estimation has drawn considerable attention in array signal processing, particularly with coherent signals and a limited number of snapshots. Forward-backward linear prediction (FBLP) is able to directly deal with coherent signals. Support vector regression (SVR) is robust with small samples. This paper proposes the combination of the advantages of FBLP and SVR in the estimation of DOAs of coherent incoming signals with low snapshots. The performance of the proposed method is validated with numerical simulations in coherent scenarios, in terms of different angle separations, numbers of snapshots, and signal-to-noise ratios (SNRs). Simulation results show the effectiveness of the proposed method.

  20. Determining Tidal Phase Differences from X-Band Radar Images

    NASA Astrophysics Data System (ADS)

    Newman, Kieran; Bell, Paul; Brown, Jennifer; Plater, Andrew

    2017-04-01

    Introduction: Previous work by Bell et al. (2016) has developed a method using X-band marine radar to measure intertidal bathymetry, using the waterline as a level over a spring-neap tidal cycle. This has been used in the Dee Estuary to give a good representation of the bathymetry in the area. However, there are some sources of inaccuracy in the method, as a uniform spatial tidal signal is assumed over the entire domain.

    Motivation: The method used by Bell et al. (2016) applies a spatially uniform tidal signal to the entire domain. This fails to account for fine-scale variations in water level and tidal phase. While methods are being developed to account for small-scale water level variations using high resolution modelling, a method to determine tidal phase variations directly from the radar intensity images could be advantageous operationally.

    Methods: The tidal phase has been computed using two different methods, with hourly averaged images from 2008. In the first method, the cross-correlation between each raw pixel time series and a tidal signal is calculated at a number of lags, and the lag with the highest correlation to the pixel series is recorded. For the second method, the same correlation approach is applied to signals generated by tracking the movement of buoys, which show up strongly in the radar image as they move on their moorings with the tidal currents. There is broad agreement between the two methods, but validation is needed to determine their relative accuracy. The phase has also been calculated using a Fourier decomposition, and agrees broadly with the above methods. Work also needs to be done to separate areas where the recorded phase is due to tidal currents (mostly subtidal areas) from areas where it is due to elevation (mostly the wetting/drying signal in intertidal areas), by classifying radar intensities by the phases and amplitudes of the tides. Filtering out signal variations due to wind strength and attenuation of the radar signal will also be applied.

    Validation: Validation will be attempted using data from a POLCOMS-WAM model run for Liverpool Bay at 180 m resolution for February 2008 (Brown, 2011), and ongoing work to develop a model at 5 m resolution using DELFT3D-FLOW. There is also a series of ADCP and other direct measurements of tidal current and elevation available, although the periods of measurement do not all overlap; these could still be used for some validation.

    Conclusion: While this work is in very early stages, it could present a method to determine fine-scale variations in tidal phase without a network of current recorders, and an improvement in the accuracy of bathymetric methods using X-band radar.

    References: Bell, P.S., Bird, C.O., Plater, A.J., 2016. A temporal waterline approach to mapping intertidal areas using X-band marine radar. Coastal Engineering, 07: 84-101. Brown, J.M., Bolaños, R., Wolf, J., 2011. Impact assessment of advanced coupling features in a tide-surge-wave model, POLCOMS-WAM, in a shallow water application. Journal of Marine Systems, 87: 13-24. Deltares, 2010. Delft3D FLOW. Delft: Deltares.
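    A sketch of the first method described above (correlating each pixel time series with a reference tidal signal over a range of lags and keeping the best-correlated lag); the M2-only tide, sampling interval, and synthetic "pixels" are placeholders, not the Dee Estuary radar data:

    ```python
    import numpy as np

    def best_lag(pixel_series, reference, max_lag):
        """Lag (in samples) at which the pixel series correlates best with the tidal reference."""
        lags = np.arange(-max_lag, max_lag + 1)
        corrs = [np.corrcoef(np.roll(reference, lag), pixel_series)[0, 1] for lag in lags]
        return lags[int(np.argmax(corrs))]

    # Hourly samples of an M2-like tide (period ~12.42 h) as the reference signal
    dt_hours = 1.0
    t = np.arange(0, 24 * 30, dt_hours)                 # one month of hourly images
    reference = np.cos(2 * np.pi * t / 12.42)

    rng = np.random.default_rng(0)
    for true_lag in (0, 1, 2):                          # phase lags (hours) of three synthetic pixels
        pixel = np.cos(2 * np.pi * (t - true_lag) / 12.42) + 0.3 * rng.normal(size=t.size)
        est = best_lag(pixel, reference, max_lag=6)
        print(f"true lag {true_lag} h -> estimated lag {est * dt_hours:.0f} h")
    ```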

  1. Low-cost extrapolation method for maximal LTE radio base station exposure estimation: test and validation.

    PubMed

    Verloock, Leen; Joseph, Wout; Gati, Azeddine; Varsier, Nadège; Flach, Björn; Wiart, Joe; Martens, Luc

    2013-06-01

    An experimental validation of a low-cost method for extrapolation and estimation of the maximal electromagnetic-field exposure from long-term evolution (LTE) radio base station installations is presented. No knowledge of downlink band occupation or service characteristics is required for the low-cost method. The method is applicable in situ. It only requires a basic spectrum analyser with appropriate field probes, without the need for expensive dedicated LTE decoders. The method is validated both in the laboratory and in situ, for a single-input single-output antenna LTE system and a 2×2 multiple-input multiple-output system, with low deviations in comparison with signals measured using dedicated LTE decoders.

  2. SPECHT - single-stage phosphopeptide enrichment and stable-isotope chemical tagging: quantitative phosphoproteomics of insulin action in muscle.

    PubMed

    Kettenbach, Arminja N; Sano, Hiroyuki; Keller, Susanna R; Lienhard, Gustav E; Gerber, Scott A

    2015-01-30

    The study of cellular signaling remains a significant challenge for translational and clinical research. In particular, robust and accurate methods for quantitative phosphoproteomics in tissues and tumors represent significant hurdles for such efforts. In the present work, we design, implement and validate a method for single-stage phosphopeptide enrichment and stable isotope chemical tagging, or SPECHT, that enables iTRAQ, TMT and/or reductive dimethyl-labeling strategies to be applied to phosphoproteomics experiments performed on primary tissue. We develop and validate our approach using reductive dimethyl-labeling and HeLa cells in culture, and find these results indistinguishable from data generated from more traditional SILAC-labeled HeLa cells mixed at the cell level. We apply the SPECHT approach to the quantitative analysis of insulin signaling in a murine myotube cell line and muscle tissue, identify known as well as new phosphorylation events, and validate these phosphorylation sites using phospho-specific antibodies. Taken together, our work validates chemical tagging post-single-stage phosphoenrichment as a general strategy for studying cellular signaling in primary tissues. Through the use of a quantitatively reproducible, proteome-wide phosphopeptide enrichment strategy, we demonstrated the feasibility of post-phosphopeptide purification chemical labeling and tagging as an enabling approach for quantitative phosphoproteomics of primary tissues. Using reductive dimethyl labeling as a generalized chemical tagging strategy, we compared the performance of post-phosphopeptide purification chemical tagging to the well-established community standard, SILAC, in insulin-stimulated tissue culture cells. We then extended our method to the analysis of low-dose insulin signaling in murine muscle tissue, and report on the analytical and biological significance of our results. Copyright © 2014 Elsevier B.V. All rights reserved.

  3. Semipermeable Hollow Fiber Phantoms for Development and Validation of Perfusion-Sensitive MR Methods and Signal Models

    PubMed Central

    Anderson, J.R.; Ackerman, J.J.H.; Garbow, J.R.

    2015-01-01

    Two semipermeable, hollow fiber phantoms for the validation of perfusion-sensitive magnetic resonance methods and signal models are described. Semipermeable hollow fibers harvested from a standard commercial hemodialysis cartridge serve to mimic tissue capillary function. Flow of aqueous media through the fiber lumen is achieved with a laboratory-grade peristaltic pump. Diffusion of water and solute species (e.g., Gd-based contrast agent) occurs across the fiber wall, allowing exchange between the lumen and the extralumenal space. Phantom design attributes include: i) small physical size, ii) easy and low-cost construction, iii) definable compartment volumes, and iv) experimental control over media content and flow rate. PMID:26167136

  4. Multi-Frequency Signal Detection Based on Frequency Exchange and Re-Scaling Stochastic Resonance and Its Application to Weak Fault Diagnosis.

    PubMed

    Liu, Jinjun; Leng, Yonggang; Lai, Zhihui; Fan, Shengbo

    2018-04-25

    Mechanical fault diagnosis usually requires not only identification of the fault characteristic frequency, but also detection of its second and/or higher harmonics. However, it is difficult to detect a multi-frequency fault signal through the existing Stochastic Resonance (SR) methods, because the characteristic frequency of the fault signal as well as its second and higher harmonics frequencies tend to be large parameters. To solve the problem, this paper proposes a multi-frequency signal detection method based on Frequency Exchange and Re-scaling Stochastic Resonance (FERSR). In the method, frequency exchange is implemented using a filtering technique and Single SideBand (SSB) modulation. This new method can overcome the limitation of "sampling ratio", which is the ratio of the sampling frequency to the frequency of the target signal. It also ensures that the multi-frequency target signals can be processed to meet the small-parameter conditions. Simulation results demonstrate that the method shows good performance for detecting a multi-frequency signal with a low sampling ratio. Two practical cases are employed to further validate the effectiveness and applicability of this method.

  5. Cell Cycle Synchronization of HeLa Cells to Assay EGFR Pathway Activation.

    PubMed

    Wee, Ping; Wang, Zhixiang

    2017-01-01

    Progression through the cell cycle causes changes in the cell's signaling pathways that can alter EGFR signal transduction. Here, we describe drug-derived protocols to synchronize HeLa cells in various phases of the cell cycle, including G1 phase, S phase, G2 phase, and mitosis, specifically in the mitotic stages of prometaphase, metaphase, and anaphase/telophase. The synchronization procedures are designed to allow synchronized cells to be treated with EGF and collected for the purpose of Western blotting for EGFR signal transduction components. S phase synchronization is performed by thymidine block, G2 phase with roscovitine, prometaphase with nocodazole, metaphase with MG132, and anaphase/telophase with blebbistatin. G1 phase synchronization is performed by culturing synchronized mitotic cells obtained by mitotic shake-off. We also provide methods to validate the synchronization methods. For validation by Western blotting, we provide the temporal expression of various cell cycle markers that are used to check the quality of the synchronization. For validation of mitotic synchronization by microscopy, we provide a guide that describes the physical properties of each mitotic stage, using their cellular morphology and DNA appearance. For validation by flow cytometry, we describe the use of imaging flow cytometry to distinguish between the phases of the cell cycle, including between each stage of mitosis.

  6. Experimental and statistical post-validation of positive example EST sequences carrying peroxisome targeting signals type 1 (PTS1)

    PubMed Central

    Lingner, Thomas; Kataya, Amr R. A.; Reumann, Sigrun

    2012-01-01

    We recently developed the first algorithms specifically for plants to predict proteins carrying peroxisome targeting signals type 1 (PTS1) from genome sequences.1 As validated experimentally, the prediction methods are able to correctly predict unknown peroxisomal Arabidopsis proteins and to infer novel PTS1 tripeptides. The high prediction performance is primarily determined by the large number and sequence diversity of the underlying positive example sequences, which were mainly derived from EST databases. However, a few constructs remained cytosolic in experimental validation studies, indicating sequencing errors in some ESTs. To identify erroneous sequences, we validated subcellular targeting of additional positive example sequences in the present study. Moreover, we analyzed the distribution of prediction scores separately for each orthologous group of PTS1 proteins, which generally resembled normal distributions with group-specific mean values. The cytosolic sequences commonly represented outliers of low prediction scores and were located at the very tail of a fitted normal distribution. Three statistical methods for identifying outliers were compared in terms of sensitivity and specificity. Their combined application allows elimination of erroneous ESTs from positive example data sets. This new post-validation method will further improve the prediction accuracy of both PTS1 and PTS2 protein prediction models for plants, fungi, and mammals. PMID:22415050

  7. Experimental and statistical post-validation of positive example EST sequences carrying peroxisome targeting signals type 1 (PTS1).

    PubMed

    Lingner, Thomas; Kataya, Amr R A; Reumann, Sigrun

    2012-02-01

    We recently developed the first algorithms specifically for plants to predict proteins carrying peroxisome targeting signals type 1 (PTS1) from genome sequences. As validated experimentally, the prediction methods are able to correctly predict unknown peroxisomal Arabidopsis proteins and to infer novel PTS1 tripeptides. The high prediction performance is primarily determined by the large number and sequence diversity of the underlying positive example sequences, which were mainly derived from EST databases. However, a few constructs remained cytosolic in experimental validation studies, indicating sequencing errors in some ESTs. To identify erroneous sequences, we validated subcellular targeting of additional positive example sequences in the present study. Moreover, we analyzed the distribution of prediction scores separately for each orthologous group of PTS1 proteins, which generally resembled normal distributions with group-specific mean values. The cytosolic sequences commonly represented outliers of low prediction scores and were located at the very tail of a fitted normal distribution. Three statistical methods for identifying outliers were compared in terms of sensitivity and specificity. Their combined application allows elimination of erroneous ESTs from positive example data sets. This new post-validation method will further improve the prediction accuracy of both PTS1 and PTS2 protein prediction models for plants, fungi, and mammals.
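
    The outlier screening described above can be illustrated with a simple lower-tail test against a normal distribution fitted to one group's prediction scores. This is only a sketch of the general idea, not the three specific statistical tests compared in the record; the score values and the 1% cutoff are invented for illustration.

    ```python
    import numpy as np
    from scipy import stats

    def low_score_outliers(scores, alpha=0.01):
        """Flag scores in the extreme lower tail of a normal distribution fitted
        to one orthologous group's prediction scores (simplified z-score test)."""
        mu, sigma = stats.norm.fit(scores)
        cutoff = stats.norm.ppf(alpha, loc=mu, scale=sigma)   # lower-tail threshold
        return scores < cutoff

    group_scores = np.array([0.82, 0.79, 0.85, 0.81, 0.78, 0.80, 0.83, 0.31])
    print(low_score_outliers(group_scores))   # the 0.31 entry is flagged
    ```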

  8. An analytical method to simulate the H I 21-cm visibility signal for intensity mapping experiments

    NASA Astrophysics Data System (ADS)

    Sarkar, Anjan Kumar; Bharadwaj, Somnath; Marthi, Visweshwar Ram

    2018-01-01

    Simulations play a vital role in testing and validating H I 21-cm power spectrum estimation techniques. Conventional methods use techniques like N-body simulations to simulate the sky signal which is then passed through a model of the instrument. This makes it necessary to simulate the H I distribution in a large cosmological volume, and incorporate both the light-cone effect and the telescope's chromatic response. The computational requirements may be particularly large if one wishes to simulate many realizations of the signal. In this paper, we present an analytical method to simulate the H I visibility signal. This is particularly efficient if one wishes to simulate a large number of realizations of the signal. Our method is based on theoretical predictions of the visibility correlation which incorporate both the light-cone effect and the telescope's chromatic response. We have demonstrated this method by applying it to simulate the H I visibility signal for the upcoming Ooty Wide Field Array Phase I.
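
    A minimal sketch of the core idea: drawing many realizations of a complex Gaussian visibility vector directly from a prescribed correlation matrix, instead of simulating the sky and the instrument. The exponential channel correlation used here is a toy model, not the theoretical visibility correlation of the paper.

    ```python
    import numpy as np

    def simulate_visibilities(cov, n_real, seed=0):
        """Draw n_real realizations of a zero-mean complex Gaussian visibility
        vector whose covariance matrix plays the role of the visibility correlation."""
        rng = np.random.default_rng(seed)
        L = np.linalg.cholesky(cov)
        shape = (cov.shape[0], n_real)
        white = (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)
        return L @ white                      # each column is one realization

    # Toy correlation: channels decorrelate exponentially with channel separation.
    n_chan = 64
    idx = np.arange(n_chan)
    cov = np.exp(-np.abs(idx[:, None] - idx[None, :]) / 8.0)
    vis = simulate_visibilities(cov, n_real=500)
    sample_cov = vis @ vis.conj().T / vis.shape[1]
    print(np.abs(sample_cov - cov).max())     # shrinks as n_real grows
    ```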

  9. Experimental validation of wireless communication with chaos.

    PubMed

    Ren, Hai-Peng; Bai, Chao; Liu, Jian; Baptista, Murilo S; Grebogi, Celso

    2016-08-01

    The constraints of a wireless physical medium, such as multi-path propagation and complex ambient noise, prevent information from being communicated at a low bit error rate. Surprisingly, it has only recently been shown that, from a theoretical perspective, chaotic signals are optimal for communication. They maximise the receiver signal-to-noise performance, consequently minimising the bit error rate. This work demonstrates numerically and experimentally that chaotic systems can in fact be used to create a reliable and efficient wireless communication system. Toward this goal, we propose an impulsive control method to generate chaotic wave signals that encode arbitrary binary information signals and an integration logic together with the matched filter capable of decreasing the noise effect over a wireless channel. The experimental validation is conducted by inputting the signals generated by an electronic transmitting circuit to an electronic circuit that emulates a wireless channel, where the signals travel along three different paths. The output signal is decoded by an electronic receiver, after passing through a matched filter.
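
    The matched-filter step mentioned above can be illustrated generically: correlating the received signal with a time-reversed copy of the known transmitted waveform concentrates its energy and locates it in noise. The pulse below is an ordinary windowed sinusoid standing in for the chaotic basis waveform; the delay, noise level and array sizes are arbitrary assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Known transmitted pulse (stand-in for the chaotic basis waveform).
    t = np.linspace(0, 1, 200)
    pulse = np.sin(2 * np.pi * 5 * t) * np.hanning(t.size)

    # Received signal: the pulse buried in noise at an unknown delay.
    delay = 350
    rx = rng.normal(scale=1.0, size=1000)
    rx[delay:delay + pulse.size] += pulse

    # Matched filter = correlation with the time-reversed pulse; the peak locates it.
    mf_out = np.convolve(rx, pulse[::-1], mode="valid")
    print(int(np.argmax(mf_out)))     # close to the true delay of 350
    ```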

  10. Experimental validation of wireless communication with chaos

    NASA Astrophysics Data System (ADS)

    Ren, Hai-Peng; Bai, Chao; Liu, Jian; Baptista, Murilo S.; Grebogi, Celso

    2016-08-01

    The constraints of a wireless physical medium, such as multi-path propagation and complex ambient noise, prevent information from being communicated at a low bit error rate. Surprisingly, it has only recently been shown that, from a theoretical perspective, chaotic signals are optimal for communication. They maximise the receiver signal-to-noise performance, consequently minimising the bit error rate. This work demonstrates numerically and experimentally that chaotic systems can in fact be used to create a reliable and efficient wireless communication system. Toward this goal, we propose an impulsive control method to generate chaotic wave signals that encode arbitrary binary information signals and an integration logic together with the matched filter capable of decreasing the noise effect over a wireless channel. The experimental validation is conducted by inputting the signals generated by an electronic transmitting circuit to an electronic circuit that emulates a wireless channel, where the signals travel along three different paths. The output signal is decoded by an electronic receiver, after passing through a matched filter.

  11. Experimental validation of wireless communication with chaos

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ren, Hai-Peng; Bai, Chao; Liu, Jian

    The constraints of a wireless physical medium, such as multi-path propagation and complex ambient noise, prevent information from being communicated at a low bit error rate. Surprisingly, it has only recently been shown that, from a theoretical perspective, chaotic signals are optimal for communication. They maximise the receiver signal-to-noise performance, consequently minimising the bit error rate. This work demonstrates numerically and experimentally that chaotic systems can in fact be used to create a reliable and efficient wireless communication system. Toward this goal, we propose an impulsive control method to generate chaotic wave signals that encode arbitrary binary information signals and an integration logic together with the matched filter capable of decreasing the noise effect over a wireless channel. The experimental validation is conducted by inputting the signals generated by an electronic transmitting circuit to an electronic circuit that emulates a wireless channel, where the signals travel along three different paths. The output signal is decoded by an electronic receiver, after passing through a matched filter.

  12. A dual-Kinect approach to determine torso surface motion for respiratory motion correction in PET

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heß, Mirco, E-mail: mirco.hess@uni-muenster.de; Büther, Florian; Dawood, Mohammad

    2015-05-15

    Purpose: Respiratory gating is commonly used to reduce blurring effects and attenuation correction artifacts in positron emission tomography (PET). Established clinically available methods that employ body-attached hardware for acquiring respiration signals rely on the assumption that external surface motion and internal organ motion are well correlated. In this paper, the authors present a markerless method comprising two Microsoft Kinects for determining the motion on the whole torso surface and aim to demonstrate its validity and usefulness—including the potential to study the external/internal correlation and to provide useful information for more advanced correction approaches. Methods: The data of two Kinects are used to calculate 3D representations of a patient’s torso surface with high spatial coverage. Motion signals can be obtained for any position by tracking the mean distance to a virtual camera with a view perpendicular to the surrounding surface. The authors have conducted validation experiments including volunteers and a moving high-precision platform to verify the method’s suitability for providing meaningful data. In addition, the authors employed it during clinical ¹⁸F-FDG-PET scans and exemplarily analyzed the acquired data of ten cancer patients. External signals of abdominal and thoracic regions as well as data-driven signals were used for gating and compared with respect to detected displacement of present lesions. Additionally, the authors quantified signal similarities and time shifts by analyzing cross-correlation sequences. Results: The authors’ results suggest a Kinect depth resolution of approximately 1 mm at 75 cm distance. Accordingly, valid signals could be obtained for surface movements with small amplitudes in the range of only a few millimeters. In this small sample of ten patients, the abdominal signals were better suited for gating the PET data than the thoracic signals and the correlation of data-driven signals was found to be stronger with abdominal signals than with thoracic signals (average Pearson correlation coefficients of 0.74 ± 0.17 and 0.45 ± 0.23, respectively). In all cases, except one, the abdominal respiratory motion preceded the thoracic motion—a maximum delay of approximately 600 ms was detected. Conclusions: The method provides motion information with sufficiently high spatial and temporal resolution. Thus, it enables meaningful analysis in the form of comparisons between amplitudes and phase shifts of signals from different regions. In combination with a large field-of-view, as given by combining the data of two Kinect cameras, it yields surface representations that might be useful in the context of motion correction and motion modeling.

  13. Skin-electrode impedance measurement during ECG acquisition: method’s validation

    NASA Astrophysics Data System (ADS)

    Casal, Leonardo; La Mura, Guillermo

    2016-04-01

    Skin-electrode impedance measurement can provide valuable information prior to, during, and after electrocardiographic (ECG) or electroencephalographic (EEG) acquisitions. In this work we validate a method for skin-electrode impedance measurement using test circuits with known resistance and capacitance values, at different frequencies of the injected excitation current. Finally, the method is successfully used for impedance measurement during ECG acquisition on a subject using a 125 Hz, 6 nA square wave excitation signal at the instrumentation amplifier input. The method can be used for many electrode configurations.
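
    A simplified numerical sketch of the measurement principle: with a known injected current amplitude, the impedance magnitude follows from the voltage amplitude measured at the excitation frequency. The sine excitation and the 50 kOhm load below are assumptions for illustration; the record describes a 125 Hz, 6 nA square-wave excitation.

    ```python
    import numpy as np

    fs, f_exc = 4000.0, 125.0          # sampling rate and excitation frequency (Hz)
    i_amp = 6e-9                       # injected current amplitude (A), as in the record
    t = np.arange(0, 2.0, 1 / fs)

    # Simulated measured voltage: ~50 kOhm impedance plus unrelated low-frequency content.
    z_true = 50e3
    v = z_true * i_amp * np.sin(2 * np.pi * f_exc * t) + 1e-4 * np.sin(2 * np.pi * 1.2 * t)

    # Amplitude of the voltage component at the excitation frequency via the DFT bin.
    spec = np.fft.rfft(v) / len(v) * 2
    freqs = np.fft.rfftfreq(len(v), 1 / fs)
    v_amp = np.abs(spec[np.argmin(np.abs(freqs - f_exc))])

    print(v_amp / i_amp)               # |Z| estimate, close to 50e3 Ohm
    ```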

  14. Detection of delamination defects in CFRP materials using ultrasonic signal processing.

    PubMed

    Benammar, Abdessalem; Drai, Redouane; Guessoum, Abderrezak

    2008-12-01

    In this paper, signal processing techniques are tested for their ability to resolve echoes associated with delaminations in carbon fiber-reinforced polymer multi-layered composite materials (CFRP) detected by ultrasonic methods. These methods include split spectrum processing (SSP) and the expectation-maximization (EM) algorithm. A simulation study on defect detection was performed, and results were validated experimentally on CFRP with and without delamination defects taken from aircraft. A comparison of the methods' ability to resolve echoes is made.
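
    The split spectrum processing step can be sketched as a bank of Gaussian sub-band filters followed by a sample-wise minimization across bands, one common SSP variant (the EM algorithm of the record is not shown). The synthetic 5 MHz echo, the band centers and the bandwidth are illustrative assumptions.

    ```python
    import numpy as np

    def ssp_minimization(x, fs, centers, bandwidth):
        """Split Spectrum Processing (minimization variant): filter the A-scan into
        Gaussian sub-bands and keep, sample by sample, the minimum absolute
        amplitude across bands, which suppresses frequency-dependent grain noise."""
        freqs = np.fft.rfftfreq(x.size, 1 / fs)
        X = np.fft.rfft(x)
        bands = [np.fft.irfft(X * np.exp(-0.5 * ((freqs - fc) / bandwidth) ** 2), n=x.size)
                 for fc in centers]
        return np.min(np.abs(np.array(bands)), axis=0)

    # Synthetic A-scan: a 5 MHz echo at 8 microseconds buried in noise, fs = 100 MHz.
    fs = 100e6
    t = np.arange(0, 20e-6, 1 / fs)
    echo = np.exp(-((t - 8e-6) / 0.3e-6) ** 2) * np.sin(2 * np.pi * 5e6 * t)
    x = echo + 0.3 * np.random.randn(t.size)

    y = ssp_minimization(x, fs, centers=np.linspace(4e6, 6e6, 5), bandwidth=0.5e6)
    print(t[np.argmax(y)])      # should lie close to the 8e-6 s echo arrival
    ```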

  15. Wind Turbine Diagnosis under Variable Speed Conditions Using a Single Sensor Based on the Synchrosqueezing Transform Method.

    PubMed

    Guo, Yanjie; Chen, Xuefeng; Wang, Shibin; Sun, Ruobin; Zhao, Zhibin

    2017-05-18

    The gearbox is one of the key components in wind turbines. Gearbox fault signals are usually nonstationary and highly contaminated with noise. The presence of amplitude-modulated and frequency-modulated (AM-FM) characteristics compounds the difficulty of precise fault diagnosis of wind turbines; therefore, it is crucial to develop an effective fault diagnosis method for such equipment. This paper presents an improved diagnosis method for wind turbines via the combination of synchrosqueezing transform and local mean decomposition. Compared to the conventional time-frequency analysis techniques, the improved method, which is performed in non-real-time, can effectively reduce the noise pollution of the signals and preserve the signal characteristics, and hence is suitable for the analysis of nonstationary signals with high noise. This method is further validated by simulated signals and practical vibration data measured from a 1.5 MW wind turbine. The results confirm that the proposed method can simultaneously control the noise and increase the accuracy of time-frequency representation.

  16. Wind Turbine Diagnosis under Variable Speed Conditions Using a Single Sensor Based on the Synchrosqueezing Transform Method

    PubMed Central

    Guo, Yanjie; Chen, Xuefeng; Wang, Shibin; Sun, Ruobin; Zhao, Zhibin

    2017-01-01

    The gearbox is one of the key components in wind turbines. Gearbox fault signals are usually nonstationary and highly contaminated with noise. The presence of amplitude-modulated and frequency-modulated (AM-FM) characteristics compounds the difficulty of precise fault diagnosis of wind turbines; therefore, it is crucial to develop an effective fault diagnosis method for such equipment. This paper presents an improved diagnosis method for wind turbines via the combination of synchrosqueezing transform and local mean decomposition. Compared to the conventional time-frequency analysis techniques, the improved method, which is performed in non-real-time, can effectively reduce the noise pollution of the signals and preserve the signal characteristics, and hence is suitable for the analysis of nonstationary signals with high noise. This method is further validated by simulated signals and practical vibration data measured from a 1.5 MW wind turbine. The results confirm that the proposed method can simultaneously control the noise and increase the accuracy of time-frequency representation. PMID:28524090

  17. EEG feature selection method based on decision tree.

    PubMed

    Duan, Lijuan; Ge, Hui; Ma, Wei; Miao, Jun

    2015-01-01

    This paper aims to solve the automated feature selection problem in brain-computer interfaces (BCI). In order to automate the feature selection process, we propose a novel EEG feature selection method based on a decision tree (DT). During the electroencephalogram (EEG) signal processing, a feature extraction method based on principal component analysis (PCA) was used, and the selection process based on the decision tree was performed by searching the feature space and automatically selecting optimal features. Considering that EEG signals are a series of non-linear signals, a generalized linear classifier, the support vector machine (SVM), was chosen. In order to test the validity of the proposed method, we applied the EEG feature selection method based on the decision tree to the BCI Competition II dataset Ia, and the experiment showed encouraging results.
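
    A compact sketch of the processing chain described above (PCA features, decision-tree ranking, SVM classification) using scikit-learn on random stand-in data; it is not the BCI Competition II pipeline, and for brevity the feature ranking is done on the full data set rather than inside the cross-validation folds.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)

    # Stand-in for windowed EEG features: 200 trials x 64 features, two classes.
    X = rng.standard_normal((200, 64))
    y = (X[:, 3] + 0.8 * X[:, 17] + 0.3 * rng.standard_normal(200) > 0).astype(int)

    # 1) project the features with PCA, 2) rank the components with a decision tree,
    # 3) keep the top-ranked components and classify them with an SVM.
    pca = PCA(n_components=20).fit(X)
    Z = pca.transform(X)
    tree = DecisionTreeClassifier(random_state=0).fit(Z, y)
    top = np.argsort(tree.feature_importances_)[::-1][:5]       # five best components

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    print(cross_val_score(clf, Z[:, top], y, cv=5).mean())       # cross-validated accuracy
    ```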

  18. Towards estimation of respiratory muscle effort with respiratory inductance plethysmography signals and complementary ensemble empirical mode decomposition.

    PubMed

    Chen, Ya-Chen; Hsiao, Tzu-Chien

    2018-07-01

    The respiratory inductance plethysmography (RIP) sensor is an inexpensive, non-invasive, easy-to-use transducer for collecting respiratory movement data. Studies have reported that the RIP signal's amplitude and frequency can be used to discriminate respiratory diseases. However, with the conventional approach of RIP data analysis, respiratory muscle effort cannot be estimated. In this paper, the estimation of respiratory muscle effort through the RIP signal is proposed. A complementary ensemble empirical mode decomposition method was used to extract hidden signals from the RIP signals based on the frequency bands of the activities of different respiratory muscles. To validate the proposed method, an experiment to collect subjects' RIP signals under thoracic breathing (TB) and abdominal breathing (AB) was conducted. The experimental results for both TB and AB indicate that the proposed method can be used to loosely estimate the activities of the thoracic muscles, abdominal muscles, and diaphragm.

  19. Adaptive Fourier decomposition based ECG denoising.

    PubMed

    Wang, Ze; Wan, Feng; Wong, Chi Man; Zhang, Liming

    2016-10-01

    A novel ECG denoising method is proposed based on the adaptive Fourier decomposition (AFD). The AFD decomposes a signal according to its energy distribution, thereby making this algorithm suitable for separating pure ECG signal and noise with overlapping frequency ranges but different energy distributions. A stop criterion for the iterative decomposition process in the AFD is calculated on the basis of the estimated signal-to-noise ratio (SNR) of the noisy signal. The proposed AFD-based method is validated on a synthetic ECG signal generated using an ECG model and also on real ECG signals from the MIT-BIH Arrhythmia Database, both with additive Gaussian white noise. Simulation results of the proposed method show better performance in denoising and QRS detection in comparison with major ECG denoising schemes based on the wavelet transform, the Stockwell transform, the empirical mode decomposition, and the ensemble empirical mode decomposition. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. Adaptive threshold shearlet transform for surface microseismic data denoising

    NASA Astrophysics Data System (ADS)

    Tang, Na; Zhao, Xian; Li, Yue; Zhu, Dan

    2018-06-01

    Random noise suppression plays an important role in microseismic data processing. Microseismic data are often corrupted by strong random noise, which directly influences the identification and location of microseismic events. The shearlet transform is a new multiscale transform, which can effectively process low-magnitude microseismic data. In the shearlet domain, due to the different distributions of valid signals and random noise, shearlet coefficients can be shrunk by thresholding. Therefore, the threshold is vital in suppressing random noise. The conventional threshold denoising algorithms usually use the same threshold to process all coefficients, which causes inefficient noise suppression or loss of valid signals. In order to solve the above problems, we propose the adaptive threshold shearlet transform (ATST) for surface microseismic data denoising. In the new algorithm, we first calculate the fundamental threshold for each direction subband. In each direction subband, the adjustment factor is obtained according to each subband coefficient and its neighboring coefficients, in order to adaptively regulate the fundamental threshold for different shearlet coefficients. Finally, we apply the adaptive threshold to the different shearlet coefficients. The experimental denoising results on synthetic records and field data illustrate that the proposed method exhibits better performance in suppressing random noise and preserving valid signals than the conventional shearlet denoising method.

  1. An Introduction to Normalization and Calibration Methods in Functional MRI

    ERIC Educational Resources Information Center

    Liu, Thomas T.; Glover, Gary H.; Mueller, Bryon A.; Greve, Douglas N.; Brown, Gregory G.

    2013-01-01

    In functional magnetic resonance imaging (fMRI), the blood oxygenation level dependent (BOLD) signal is often interpreted as a measure of neural activity. However, because the BOLD signal reflects the complex interplay of neural, vascular, and metabolic processes, such an interpretation is not always valid. There is growing evidence that changes…

  2. Multi-Frequency Signal Detection Based on Frequency Exchange and Re-Scaling Stochastic Resonance and Its Application to Weak Fault Diagnosis

    PubMed Central

    Leng, Yonggang; Fan, Shengbo

    2018-01-01

    Mechanical fault diagnosis usually requires not only identification of the fault characteristic frequency, but also detection of its second and/or higher harmonics. However, it is difficult to detect a multi-frequency fault signal through the existing Stochastic Resonance (SR) methods, because the characteristic frequency of the fault signal as well as its second and higher harmonics frequencies tend to be large parameters. To solve the problem, this paper proposes a multi-frequency signal detection method based on Frequency Exchange and Re-scaling Stochastic Resonance (FERSR). In the method, frequency exchange is implemented using a filtering technique and Single SideBand (SSB) modulation. This new method can overcome the limitation of "sampling ratio", which is the ratio of the sampling frequency to the frequency of the target signal. It also ensures that the multi-frequency target signals can be processed to meet the small-parameter conditions. Simulation results demonstrate that the method shows good performance for detecting a multi-frequency signal with a low sampling ratio. Two practical cases are employed to further validate the effectiveness and applicability of this method. PMID:29693577

  3. GNSS Spoofing Detection and Mitigation Based on Maximum Likelihood Estimation

    PubMed Central

    Li, Hong; Lu, Mingquan

    2017-01-01

    Spoofing attacks are threatening the global navigation satellite system (GNSS). The maximum likelihood estimation (MLE)-based positioning technique is a direct positioning method originally developed for multipath rejection and weak signal processing. We find this method also has a potential ability for GNSS anti-spoofing since a spoofing attack that misleads the positioning and timing result will cause distortion to the MLE cost function. Based on the method, an estimation-cancellation approach is presented to detect spoofing attacks and recover the navigation solution. A statistic is derived for spoofing detection with the principle of the generalized likelihood ratio test (GLRT). Then, the MLE cost function is decomposed to further validate whether the navigation solution obtained by MLE-based positioning is formed by consistent signals. Both formulae and simulations are provided to evaluate the anti-spoofing performance. Experiments with recordings in real GNSS spoofing scenarios are also performed to validate the practicability of the approach. Results show that the method works even when the code phase differences between the spoofing and authentic signals are much less than one code chip, which can improve the availability of GNSS service greatly under spoofing attacks. PMID:28665318

  4. GNSS Spoofing Detection and Mitigation Based on Maximum Likelihood Estimation.

    PubMed

    Wang, Fei; Li, Hong; Lu, Mingquan

    2017-06-30

    Spoofing attacks are threatening the global navigation satellite system (GNSS). The maximum likelihood estimation (MLE)-based positioning technique is a direct positioning method originally developed for multipath rejection and weak signal processing. We find this method also has a potential ability for GNSS anti-spoofing since a spoofing attack that misleads the positioning and timing result will cause distortion to the MLE cost function. Based on the method, an estimation-cancellation approach is presented to detect spoofing attacks and recover the navigation solution. A statistic is derived for spoofing detection with the principle of the generalized likelihood ratio test (GLRT). Then, the MLE cost function is decomposed to further validate whether the navigation solution obtained by MLE-based positioning is formed by consistent signals. Both formulae and simulations are provided to evaluate the anti-spoofing performance. Experiments with recordings in real GNSS spoofing scenarios are also performed to validate the practicability of the approach. Results show that the method works even when the code phase differences between the spoofing and authentic signals are much less than one code chip, which can improve the availability of GNSS service greatly under spoofing attacks.

  5. Diagnostics for insufficiencies of posterior calculations in Bayesian signal inference.

    PubMed

    Dorn, Sebastian; Oppermann, Niels; Ensslin, Torsten A

    2013-11-01

    We present an error-diagnostic validation method for posterior distributions in Bayesian signal inference, an advancement of previous work. It transfers deviations from the correct posterior into characteristic deviations from a uniform distribution of a quantity constructed for this purpose. We show that this method is able to reveal and discriminate several kinds of numerical and approximation errors, as well as their impact on the posterior distribution. For this, we present four typical analytical examples of posteriors with incorrect variance, skewness, position of the maximum, or normalization. We show further how this test can be applied to multidimensional signals.
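
    The idea of mapping posterior errors onto deviations from uniformity can be illustrated with a toy conjugate-Gaussian model: if the posterior is computed correctly, the posterior CDF evaluated at the true signal is uniform on [0, 1], and a mis-specified posterior breaks this. The sketch below is in the same spirit as the paper's construction, not its actual diagnostic quantity; the model and parameters are invented.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    def posterior_p_values(n_rep=2000, prior_var=1.0, noise_var=0.5, biased=False):
        """For a toy Gaussian model d = s + n, evaluate the posterior CDF at the
        true signal for many repetitions. With a correct posterior these values
        are uniformly distributed on [0, 1]."""
        s = rng.normal(0.0, np.sqrt(prior_var), n_rep)          # true signals from the prior
        d = s + rng.normal(0.0, np.sqrt(noise_var), n_rep)      # data
        post_var = 1.0 / (1.0 / prior_var + 1.0 / noise_var)
        post_mean = post_var * d / noise_var
        if biased:                                              # deliberately wrong variance
            post_var *= 0.5
        return stats.norm.cdf(s, loc=post_mean, scale=np.sqrt(post_var))

    # Correct posterior: a KS test against U(0,1) should not reject.
    print(stats.kstest(posterior_p_values(), "uniform").pvalue)
    # Posterior with underestimated variance: uniformity is clearly violated.
    print(stats.kstest(posterior_p_values(biased=True), "uniform").pvalue)
    ```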

  6. A novel ECG data compression method based on adaptive Fourier decomposition

    NASA Astrophysics Data System (ADS)

    Tan, Chunyu; Zhang, Liming

    2017-12-01

    This paper presents a novel electrocardiogram (ECG) compression method based on adaptive Fourier decomposition (AFD). AFD is a newly developed signal decomposition approach, which can decompose a signal with fast convergence, and hence reconstruct ECG signals with high fidelity. Unlike most of the high performance algorithms, our method does not make use of any preprocessing operation before compression. Huffman coding is employed for further compression. Validated with 48 ECG recordings of MIT-BIH arrhythmia database, the proposed method achieves the compression ratio (CR) of 35.53 and the percentage root mean square difference (PRD) of 1.47% on average with N = 8 decomposition times and a robust PRD-CR relationship. The results demonstrate that the proposed method has a good performance compared with the state-of-the-art ECG compressors.

  7. A logic-based method to build signaling networks and propose experimental plans.

    PubMed

    Rougny, Adrien; Gloaguen, Pauline; Langonné, Nathalie; Reiter, Eric; Crépieux, Pascale; Poupon, Anne; Froidevaux, Christine

    2018-05-18

    With the dramatic increase in the diversity and sheer quantity of biological data generated, the construction of comprehensive signaling networks that include precise mechanisms cannot be carried out manually anymore. In this context, we propose a logic-based method that allows building large signaling networks automatically. Our method is based on a set of expert rules that make explicit the reasoning made by biologists when interpreting experimental results coming from a wide variety of experiment types. These rules allow formulating all the conclusions that can be inferred from a set of experimental results, and thus building all the possible networks that explain these results. Moreover, given a hypothesis, our system proposes experimental plans to carry out in order to validate or invalidate it. To evaluate the performance of our method, we applied our framework to the reconstruction of the FSHR-induced and the EGFR-induced signaling networks. The FSHR is known to induce the transactivation of the EGFR, but very little is known about the resulting FSH- and EGF-dependent network. We built a single network using data underlying both networks. This leads to a new hypothesis on the activation of MEK by p38MAPK, which we validate experimentally. These preliminary results represent a first step in the demonstration of a cross-talk between these two major MAP kinase pathways.

  8. Application of an improved minimum entropy deconvolution method for railway rolling element bearing fault diagnosis

    NASA Astrophysics Data System (ADS)

    Cheng, Yao; Zhou, Ning; Zhang, Weihua; Wang, Zhiwei

    2018-07-01

    Minimum entropy deconvolution is a widely-used tool in machinery fault diagnosis, because it enhances the impulse component of the signal. The filter coefficients, which greatly influence the performance of the minimum entropy deconvolution, are calculated by an iterative procedure. This paper proposes an improved deconvolution method for the fault detection of rolling element bearings. The proposed method solves the filter coefficients by the standard particle swarm optimization algorithm, assisted by a generalized spherical coordinate transformation. When optimizing the filter's performance for enhancing the impulses in fault diagnosis (namely, for faulty rolling element bearings), the proposed method outperformed the classical minimum entropy deconvolution method. The proposed method was validated on simulated and experimental signals from railway bearings. In both simulation and experimental studies, the proposed method delivered better deconvolution performance than the classical minimum entropy deconvolution method, especially in the case of low signal-to-noise ratio.

  9. Automatic quality assessment and peak identification of auditory brainstem responses with fitted parametric peaks.

    PubMed

    Valderrama, Joaquin T; de la Torre, Angel; Alvarez, Isaac; Segura, Jose Carlos; Thornton, A Roger D; Sainz, Manuel; Vargas, Jose Luis

    2014-05-01

    The recording of the auditory brainstem response (ABR) is used worldwide for hearing screening purposes. In this process, a precise estimation of the most relevant components is essential for an accurate interpretation of these signals. This evaluation is usually carried out subjectively by an audiologist. However, the use of automatic methods for this purpose is being encouraged nowadays in order to reduce human evaluation biases and ensure uniformity among test conditions, patients, and screening personnel. This article describes a new method that performs automatic quality assessment and identification of the peaks, the fitted parametric peaks (FPP). This method is based on the use of synthesized peaks that are adjusted to the ABR response. The FPP is validated, on one hand, by an analysis of amplitudes and latencies measured manually by an audiologist and automatically by the FPP method in ABR signals recorded at different stimulation rates; and, on the other hand, by contrasting the performance of the FPP method with automatic evaluation techniques based on the correlation coefficient, FSP, and cross-correlation with a predefined template waveform, by comparing the automatic quality evaluations of these methods with subjective evaluations provided by five experienced evaluators on a set of ABR signals of different quality. The results of this study suggest (a) that the FPP method can be used to provide an accurate parameterization of the peaks in terms of amplitude, latency, and width, and (b) that the FPP remains the method that best approaches the averaged subjective quality evaluation, as well as provides the best results in terms of sensitivity and specificity in ABR signal validation. The significance of these findings and the clinical value of the FPP method are highlighted in this paper. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  10. Modified complementary ensemble empirical mode decomposition and intrinsic mode functions evaluation index for high-speed train gearbox fault diagnosis

    NASA Astrophysics Data System (ADS)

    Chen, Dongyue; Lin, Jianhui; Li, Yanping

    2018-06-01

    Complementary ensemble empirical mode decomposition (CEEMD) has been developed to address the mode-mixing problem in the Empirical Mode Decomposition (EMD) method. Compared to the ensemble empirical mode decomposition (EEMD), the CEEMD method reduces residue noise in the signal reconstruction. Both CEEMD and EEMD need a sufficiently large ensemble number to reduce the residue noise, and hence incur a high computational cost. Moreover, the selection of intrinsic mode functions (IMFs) for further analysis usually depends on experience. A modified CEEMD method and IMFs evaluation index are proposed with the aim of reducing the computational cost and selecting IMFs automatically. A simulated signal and in-service high-speed train gearbox vibration signals are employed to validate the proposed method in this paper. The results demonstrate that the modified CEEMD can decompose the signal efficiently with less computation cost, and the IMFs evaluation index can select the meaningful IMFs automatically.

  11. Development of gait segmentation methods for wearable foot pressure sensors.

    PubMed

    Crea, S; De Rossi, S M M; Donati, M; Reberšek, P; Novak, D; Vitiello, N; Lenzi, T; Podobnik, J; Munih, M; Carrozza, M C

    2012-01-01

    We present an automated segmentation method based on the analysis of plantar pressure signals recorded from two synchronized wireless foot insoles. Given the strict limits on computational power and power consumption typical of wearable electronic components, our aim is to investigate the capability of a Hidden Markov Model machine-learning method to detect gait phases with different levels of complexity in the processing of the wearable pressure sensor signals. Therefore, three different datasets are developed: raw voltage values, calibrated sensor signals, and a calibrated estimation of the total ground reaction force and the position of the plantar center of pressure. The method is tested on a pool of 5 healthy subjects through leave-one-out cross-validation. The results show high classification performance achieved using the estimated biomechanical variables, with an average of 96%. Calibrated signals and raw voltage values show higher delays and dispersions in phase transition detection, suggesting a lower reliability for online applications.

  12. A novel method for producing low cost dynamometric wheels based on harmonic elimination techniques

    NASA Astrophysics Data System (ADS)

    Gutiérrez-López, María D.; García de Jalón, Javier; Cubillo, Adrián

    2015-02-01

    A method for producing low-cost dynamometric wheels is presented in this paper. To carry out this method, the metallic part of a commercial wheel is instrumented with strain gauges, which must be grouped in at least three circumferences and along equidistant radial lines. The strain signals of the same circumference are linearly combined to obtain at least two new signals that only depend on the tyre/road contact forces and moments. The influence of factors such as the angle rotated by the wheel, the temperature, or centrifugal forces is eliminated from these signals by removing the continuous component and the largest possible number of harmonics of the strain signals, except the first or the second one. The contact forces and moments are obtained from these new signals by solving two systems of linear equations with three unknowns each. This method is validated with some theoretical and experimental examples.
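
    The harmonic-elimination idea can be sketched for a single synthetic strain signal: over an integer number of wheel revolutions, the mean and every rotation harmonic except the first are removed in the frequency domain. This is a simplified single-signal illustration, not the full multi-gauge combination of the paper; the amplitudes and sampling are invented.

    ```python
    import numpy as np

    def keep_first_harmonic(strain, revs):
        """Remove the mean and every harmonic of the wheel rotation except the first
        from a strain signal covering an integer number of revolutions `revs`."""
        spec = np.fft.rfft(strain)
        keep = np.zeros_like(spec)
        keep[revs] = spec[revs]            # bin of the first rotation harmonic
        return np.fft.irfft(keep, n=strain.size)

    # Synthetic strain over 10 revolutions, 360 samples per revolution.
    n_rev, spr = 10, 360
    theta = 2 * np.pi * np.arange(n_rev * spr) / spr
    strain = 5.0 + 2.0 * np.cos(theta) + 0.8 * np.cos(2 * theta) + 0.3 * np.cos(5 * theta)
    clean = keep_first_harmonic(strain, revs=n_rev)
    print(np.round(clean[:4], 3))          # approximately samples of 2*cos(theta)
    ```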

  13. Non-contact physiological signal detection using continuous wave Doppler radar.

    PubMed

    Qiao, Dengyu; He, Tan; Hu, Boping; Li, Ye

    2014-01-01

    The aim of this work is to show a non-contact physiological signal monitoring system based on continuous-wave (CW) Doppler radar, which is becoming highly attractive in the field of health care monitoring of elderly people. Two radar signal processing methods are introduced in this paper: one to extract the respiration and heart rates of a single person and the other to separate mixed respiration signals. To verify the validity of the methods, physiological signals are obtained from stationary human subjects using a CW Doppler radar unit. The sensor, operating at 24 GHz, is located 0.5 m away from the subject. The simulation results show that the respiration and heart rates are clearly extracted, and the mixed respiration signals are successfully separated. Finally, reference respiration and heart rate signals are measured by an ECG monitor and compared with the results tracked by the CW Doppler radar monitoring system.
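
    A sketch of the single-subject case: after demodulation, respiration and heartbeat can be separated from the chest-displacement signal with two zero-phase band-pass filters and read off as spectral peaks. The signal model, band edges, and rates below are illustrative assumptions, not the authors' processing chain.

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt

    def bandpass(x, fs, lo, hi, order=4):
        """Zero-phase Butterworth band-pass, used to split the demodulated radar
        phase signal into respiration and heartbeat components."""
        b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="bandpass")
        return filtfilt(b, a, x)

    fs = 50.0
    t = np.arange(0, 60, 1 / fs)
    # Toy chest-displacement signal: 0.25 Hz respiration + 1.2 Hz heartbeat + noise.
    x = 1.0 * np.sin(2 * np.pi * 0.25 * t) + 0.1 * np.sin(2 * np.pi * 1.2 * t) \
        + 0.05 * np.random.randn(t.size)

    resp = bandpass(x, fs, 0.1, 0.5)      # respiration band
    heart = bandpass(x, fs, 0.8, 2.0)     # heartbeat band

    # Estimate the rates from the dominant spectral peak of each component.
    for sig, name in [(resp, "respiration"), (heart, "heart")]:
        f = np.fft.rfftfreq(sig.size, 1 / fs)
        peak = f[np.argmax(np.abs(np.fft.rfft(sig)))]
        print(name, round(peak * 60, 1), "per minute")   # ~15 and ~72
    ```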

  14. Separation and reconstruction of BCG and EEG signals during continuous EEG and fMRI recordings

    PubMed Central

    Xia, Hongjing; Ruan, Dan; Cohen, Mark S.

    2014-01-01

    Despite considerable effort to remove it, the ballistocardiogram (BCG) remains a major artifact in electroencephalographic data (EEG) acquired inside magnetic resonance imaging (MRI) scanners, particularly in continuous (as opposed to event-related) recordings. In this study, we have developed a new Direct Recording Prior Encoding (DRPE) method to extract and separate the BCG and EEG components from contaminated signals, and have demonstrated its performance by comparing it quantitatively to the popular Optimal Basis Set (OBS) method. Our modified recording configuration allows us to obtain representative bases of the BCG- and EEG-only signals. Further, we have developed an optimization-based reconstruction approach to maximally incorporate prior knowledge of the BCG/EEG subspaces, and of the signal characteristics within them. Both OBS and DRPE methods were tested with experimental data, and compared quantitatively using cross-validation. In the challenging continuous EEG studies, DRPE outperforms the OBS method by nearly sevenfold in separating the continuous BCG and EEG signals. PMID:25002836

  15. A Compound Fault Diagnosis for Rolling Bearings Method Based on Blind Source Separation and Ensemble Empirical Mode Decomposition

    PubMed Central

    Wang, Huaqing; Li, Ruitong; Tang, Gang; Yuan, Hongfang; Zhao, Qingliang; Cao, Xi

    2014-01-01

    A compound fault signal usually contains multiple characteristic signals and strong confusing noise, which makes it difficult to separate weak fault signals from it through conventional approaches used individually, such as FFT-based envelope detection, wavelet transform, or empirical mode decomposition. In order to improve the compound fault diagnosis of rolling bearings via signal separation, the present paper proposes a new method to identify compound faults from measured mixed signals, which is based on the ensemble empirical mode decomposition (EEMD) method and the independent component analysis (ICA) technique. With the approach, a vibration signal is first decomposed into intrinsic mode functions (IMFs) by the EEMD method to obtain multichannel signals. Then, according to a cross-correlation criterion, the corresponding IMFs are selected as the input matrix of ICA. Finally, the compound faults can be separated effectively by executing the ICA method, which makes the fault features more easily extracted and more clearly identified. Experimental results validate the effectiveness of the proposed method in compound fault separation, which works not only for the outer race defect, but also for the roller defect and the unbalance fault of the experimental system. PMID:25289644

  16. Acoustic-Seismic Mixed Feature Extraction Based on Wavelet Transform for Vehicle Classification in Wireless Sensor Networks.

    PubMed

    Zhang, Heng; Pan, Zhongming; Zhang, Wenna

    2018-06-07

    An acoustic-seismic mixed feature extraction method based on the wavelet coefficient energy ratio (WCER) of the target signal is proposed in this study for classifying vehicle targets in wireless sensor networks. The signal was decomposed into a set of wavelet coefficients using the à trous algorithm, which is a concise method used to implement the wavelet transform of a discrete signal sequence. After the wavelet coefficients of the target acoustic and seismic signals were obtained, the energy ratio of each layer coefficient was calculated as the feature vector of the target signals. Subsequently, the acoustic and seismic features were merged into an acoustic-seismic mixed feature to improve the target classification accuracy after the acoustic and seismic WCER features of the target signal were simplified using the hierarchical clustering method. We selected the support vector machine method for classification and utilized the data acquired from a real-world experiment to validate the proposed method. The calculated results show that the WCER feature extraction method can effectively extract the target features from target signals. Feature simplification can reduce the time consumption of feature extraction and classification, with no effect on the target classification accuracy. The use of acoustic-seismic mixed features effectively improved target classification accuracy by approximately 12% compared with either acoustic signal or seismic signal alone.
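
    A minimal sketch of the wavelet coefficient energy ratio (WCER) feature: the energy fraction carried by each decomposition level, computed separately for the acoustic and seismic channels and concatenated. PyWavelets' decimated wavedec is used here as a stand-in for the à trous implementation described in the record, and the two synthetic channels are invented.

    ```python
    import numpy as np
    import pywt

    def wcer_features(signal, wavelet="db4", level=5):
        """Wavelet Coefficient Energy Ratio feature vector: the fraction of total
        energy carried by the coefficients of each decomposition level."""
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        energies = np.array([np.sum(c ** 2) for c in coeffs])
        return energies / energies.sum()

    # Toy acoustic and seismic channels of a passing vehicle (stand-ins).
    fs = 1000
    t = np.arange(0, 2, 1 / fs)
    acoustic = np.sin(2 * np.pi * 120 * t) + 0.2 * np.random.randn(t.size)
    seismic = np.sin(2 * np.pi * 15 * t) + 0.2 * np.random.randn(t.size)

    # Concatenate both channels' energy ratios into one mixed feature vector.
    feature = np.concatenate([wcer_features(acoustic), wcer_features(seismic)])
    print(np.round(feature, 3))
    ```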

  17. Novel Signal Noise Reduction Method through Cluster Analysis, Applied to Photoplethysmography.

    PubMed

    Waugh, William; Allen, John; Wightman, James; Sims, Andrew J; Beale, Thomas A W

    2018-01-01

    Physiological signals can often become contaminated by noise from a variety of origins. In this paper, an algorithm is described for the reduction of sporadic noise from a continuous periodic signal. The design can be used where a sample of a periodic signal is required, for example, when an average pulse is needed for pulse wave analysis and characterization. The algorithm is based on cluster analysis for selecting similar repetitions or pulses from a periodic signal. This method selects individual pulses without noise, returns a clean pulse signal, and terminates when a sufficiently clean and representative signal is received. The algorithm is designed to be sufficiently compact to be implemented on a microcontroller embedded within a medical device. It has been validated through the removal of noise from an exemplar photoplethysmography (PPG) signal, showing increasing benefit as the noise contamination of the signal increases. The algorithm design is generalised to be applicable for a wide range of physiological (physical) signals.
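
    A simplified stand-in for the cluster-based selection: beats are grouped by waveform correlation, only the dominant, mutually consistent group is averaged, and corrupted beats are discarded. The threshold and the synthetic pulse shape are assumptions, not the published algorithm's parameters.

    ```python
    import numpy as np

    def clean_average_pulse(pulses, min_corr=0.9):
        """Group beat-to-beat pulses by waveform similarity and average only the
        dominant, mutually consistent group. `pulses` has shape (n_pulses, n_samples)."""
        norm = pulses - pulses.mean(axis=1, keepdims=True)
        norm /= norm.std(axis=1, keepdims=True)
        corr = norm @ norm.T / norm.shape[1]          # pairwise correlation matrix
        score = corr.mean(axis=1)                     # similarity of each pulse to the rest
        keep = score > min_corr * score.max()
        return pulses[keep].mean(axis=0), keep

    # Toy data: 20 similar PPG pulses, 3 of which are corrupted by motion noise.
    t = np.linspace(0, 1, 100)
    base = np.exp(-((t - 0.3) / 0.1) ** 2) + 0.4 * np.exp(-((t - 0.6) / 0.15) ** 2)
    pulses = base + 0.02 * np.random.randn(20, 100)
    pulses[[2, 7, 11]] += 0.5 * np.random.randn(3, 100)

    avg, keep = clean_average_pulse(pulses)
    print(keep.sum(), "pulses kept out of", len(pulses))   # typically 17
    ```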

  18. Medical application of artificial immune recognition system (AIRS): diagnosis of atherosclerosis from carotid artery Doppler signals.

    PubMed

    Latifoğlu, Fatma; Kodaz, Halife; Kara, Sadik; Güneş, Salih

    2007-08-01

    This study was conducted to distinguish between subjects with atherosclerosis and healthy subjects. Hence, we employed the maximum envelope of the carotid artery Doppler sonograms, derived with the Fast Fourier Transform-Welch method, together with the Artificial Immune Recognition System (AIRS). The fuzzy appearance of the carotid artery Doppler signals makes physicians suspicious about the existence of diseases and sometimes causes false diagnoses. Our technique gets around this problem by using AIRS to decide and assist the physician in making the final judgment with confidence. AIRS reached 99.29% classification accuracy using 10-fold cross-validation. Results show that the proposed method classified Doppler signals successfully.

  19. Layover and shadow detection based on distributed spaceborne single-baseline InSAR

    NASA Astrophysics Data System (ADS)

    Huanxin, Zou; Bin, Cai; Changzhou, Fan; Yun, Ren

    2014-03-01

    Distributed spaceborne single-baseline InSAR is an effective technique for obtaining high-quality Digital Elevation Models. Layover and shadow are ubiquitous phenomena in SAR images because of the geometric relations of SAR imaging. In the signal processing of single-baseline InSAR, the phase singularity of layover and shadow makes the phase difficult to filter and unwrap. This paper analyzes the geometric and signal models of layover and shadow fields. Based on the interferometric signal autocorrelation matrix, the paper proposes a signal number estimation method based on information-theoretic criteria to distinguish layover and shadow from normal InSAR fields. The effectiveness and practicability of the proposed method are validated by simulation experiments and theoretical analysis.
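
    The signal-number estimation step can be illustrated with the classical information-theoretic (MDL) estimator applied to the eigenvalues of a sample covariance matrix. The paper's criterion is tailored to layover/shadow discrimination, so the generic Wax-Kailath form and the toy data below are stand-ins.

    ```python
    import numpy as np

    def mdl_source_number(R, n_snapshots):
        """Estimate the number of signals from the eigenvalues of the sample
        covariance matrix R using the Minimum Description Length criterion."""
        lam = np.sort(np.linalg.eigvalsh(R))[::-1]        # eigenvalues, descending
        M = lam.size
        mdl = []
        for k in range(M):
            tail = lam[k:]
            geo = np.exp(np.mean(np.log(tail)))           # geometric mean of the tail
            arith = np.mean(tail)                         # arithmetic mean of the tail
            mdl.append(-n_snapshots * (M - k) * np.log(geo / arith)
                       + 0.5 * k * (2 * M - k) * np.log(n_snapshots))
        return int(np.argmin(mdl))

    # Toy test: M = 6 channels, 2 uncorrelated signals plus noise, 500 snapshots.
    rng = np.random.default_rng(0)
    M, N = 6, 500
    A = rng.standard_normal((M, 2))                       # mixing of the 2 signals
    S = rng.standard_normal((2, N))
    X = A @ S + 0.5 * rng.standard_normal((M, N))         # noisy observations
    R = X @ X.T / N
    print(mdl_source_number(R, N))                        # expected: 2
    ```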

  20. 40 CFR 1065.275 - N2O measurement devices.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... for interpretation of infrared spectra. For example, EPA Test Method 320 is considered a valid method... uncompensated signal's bias. Examples of laser infrared analyzers are pulsed-mode high-resolution narrow band.... Examples of acceptable columns are a PLOT column consisting of bonded polystyrene-divinylbenzene or a...

  1. How to detect and reduce movement artifacts in near-infrared imaging using moving standard deviation and spline interpolation.

    PubMed

    Scholkmann, F; Spichtig, S; Muehlemann, T; Wolf, M

    2010-05-01

    Near-infrared imaging (NIRI) is a neuroimaging technique which enables us to non-invasively measure hemodynamic changes in the human brain. Since the technique is very sensitive, the movement of a subject can cause movement artifacts (MAs), which affect the signal quality and results to a high degree. No general method is yet available to reduce these MAs effectively. The aim was to develop a new MA reduction method. A method based on moving standard deviation and spline interpolation was developed. It enables the semi-automatic detection and reduction of MAs in the data. It was validated using simulated and real NIRI signals. The results show that a significant reduction of MAs and an increase in signal quality are achieved. The effectiveness and usability of the method is demonstrated by the improved detection of evoked hemodynamic responses. The present method can not only be used in the postprocessing of NIRI signals but also for other kinds of data containing artifacts, for example ECG or EEG signals.
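
    A compact sketch of the two ingredients named in the title: a moving standard deviation flags movement-artifact segments, and a cubic spline through the clean samples replaces them. The window length, threshold, and synthetic NIRI trace are illustrative assumptions, not the published parameter choices.

    ```python
    import numpy as np
    from scipy.interpolate import CubicSpline

    def reduce_movement_artifacts(signal, fs, win=0.5, k_thresh=3.0):
        """Flag samples whose moving standard deviation exceeds k_thresh times the
        median moving standard deviation, then replace them with a cubic spline
        interpolated through the remaining clean samples."""
        n = max(int(win * fs), 2)
        pad = np.pad(signal, n // 2, mode="edge")
        mov_std = np.array([pad[i:i + n].std() for i in range(signal.size)])
        artifact = mov_std > k_thresh * np.median(mov_std)

        t = np.arange(signal.size)
        spline = CubicSpline(t[~artifact], signal[~artifact])
        cleaned = signal.copy()
        cleaned[artifact] = spline(t[artifact])
        return cleaned, artifact

    # Toy NIRI trace: slow hemodynamic oscillation plus a short movement artifact.
    fs = 10.0
    t = np.arange(0, 60, 1 / fs)
    x = np.sin(2 * np.pi * 0.05 * t) + 0.05 * np.random.randn(t.size)
    x[300:320] += 2.0 * np.random.randn(20)         # simulated movement artifact
    cleaned, artifact = reduce_movement_artifacts(x, fs)
    print(artifact.sum(), "samples flagged")        # roughly the artifact plus window edges
    ```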

  2. Gear fault diagnosis based on the structured sparsity time-frequency analysis

    NASA Astrophysics Data System (ADS)

    Sun, Ruobin; Yang, Zhibo; Chen, Xuefeng; Tian, Shaohua; Xie, Yong

    2018-03-01

    Over the last decade, sparse representation has become a powerful paradigm in mechanical fault diagnosis due to its excellent capability and the high flexibility for complex signal description. The structured sparsity time-frequency analysis (SSTFA) is a novel signal processing method, which utilizes mixed-norm priors on time-frequency coefficients to obtain a fine match for the structure of signals. In order to extract the transient feature from gear vibration signals, a gear fault diagnosis method based on SSTFA is proposed in this work. The steady modulation components and impulsive components of the defective gear vibration signals can be extracted simultaneously by choosing different time-frequency neighborhood and generalized thresholding operators. Besides, the time-frequency distribution with high resolution is obtained by piling different components in the same diagram. The diagnostic conclusion can be made according to the envelope spectrum of the impulsive components or by the periodicity of impulses. The effectiveness of the method is verified by numerical simulations, and the vibration signals registered from a gearbox fault simulator and a wind turbine. To validate the efficiency of the presented methodology, comparisons are made among some state-of-the-art vibration separation methods and the traditional time-frequency analysis methods. The comparisons show that the proposed method possesses advantages in separating feature signals under strong noise and accounting for the inner time-frequency structure of the gear vibration signals.

  3. SPEPlip: the detection of signal peptide and lipoprotein cleavage sites.

    PubMed

    Fariselli, Piero; Finocchiaro, Giacomo; Casadio, Rita

    2003-12-12

    SPEPlip is a neural network-based method, trained and tested on a set of experimentally derived signal peptides from eukaryotes and prokaryotes. SPEPlip identifies the presence of sorting signals and predicts their cleavage sites. The accuracy in cross-validation is similar to that of other available programs: the rate of false positives is 4% and 6% for prokaryotes and eukaryotes, respectively, and that of false negatives is 3% in both cases. When a set of 409 prokaryotic lipoproteins is predicted, SPEPlip predicts 97% of the chains in the signal peptide class. However, by integrating SPEPlip with a regular expression search utility based on the PROSITE pattern, we can successfully discriminate signal peptide-containing chains from lipoproteins. We propose the method for detecting and discriminating signal peptide-containing chains and lipoproteins. It can be accessed through the web page at http://gpcr.biocomp.unibo.it/predictors/

  4. Signal Construction-Based Dispersion Compensation of Lamb Waves Considering Signal Waveform and Amplitude Spectrum Preservation

    PubMed Central

    Cai, Jian; Yuan, Shenfang; Wang, Tongguang

    2016-01-01

    The results of Lamb wave identification for the aerospace structures could be easily affected by the nonlinear-dispersion characteristics. In this paper, dispersion compensation of Lamb waves is of particular concern. Compared with the similar research works on the traditional signal domain transform methods, this study is based on signal construction from the viewpoint of nonlinear wavenumber linearization. Two compensation methods of linearly-dispersive signal construction (LDSC) and non-dispersive signal construction (NDSC) are proposed. Furthermore, to improve the compensation effect, the influence of the signal construction process on the other crucial signal properties, including the signal waveform and amplitude spectrum, is considered during the investigation. The linear-dispersion and non-dispersion effects are firstly analyzed. Then, after the basic signal construction principle is explored, the numerical realization of LDSC and NDSC is discussed, in which the signal waveform and amplitude spectrum preservation is especially regarded. Subsequently, associated with the delay-and-sum algorithm, LDSC or NDSC is employed for high spatial resolution damage imaging, so that the adjacent multi-damage or quantitative imaging capacity of Lamb waves can be strengthened. To verify the proposed signal construction and damage imaging methods, the experimental and numerical validation is finally arranged on the aluminum plates. PMID:28772366

  5. Signal Construction-Based Dispersion Compensation of Lamb Waves Considering Signal Waveform and Amplitude Spectrum Preservation.

    PubMed

    Cai, Jian; Yuan, Shenfang; Wang, Tongguang

    2016-12-23

    The results of Lamb wave identification for aerospace structures can easily be affected by nonlinear dispersion characteristics. In this paper, dispersion compensation of Lamb waves is of particular concern. In contrast to previous work based on traditional signal-domain transform methods, this study builds on signal construction from the viewpoint of nonlinear wavenumber linearization. Two compensation methods, linearly-dispersive signal construction (LDSC) and non-dispersive signal construction (NDSC), are proposed. Furthermore, to improve the compensation effect, the influence of the signal construction process on other crucial signal properties, including the signal waveform and amplitude spectrum, is considered during the investigation. The linear-dispersion and non-dispersion effects are first analyzed. Then, after the basic signal construction principle is explored, the numerical realization of LDSC and NDSC is discussed, in which preservation of the signal waveform and amplitude spectrum is given special regard. Subsequently, combined with the delay-and-sum algorithm, LDSC or NDSC is employed for high-spatial-resolution damage imaging, so that the adjacent multi-damage and quantitative imaging capacity of Lamb waves can be strengthened. To verify the proposed signal construction and damage imaging methods, experimental and numerical validation is finally carried out on aluminum plates.

  6. Calculation of the time resolution of the J-PET tomograph using kernel density estimation

    NASA Astrophysics Data System (ADS)

    Raczyński, L.; Wiślicki, W.; Krzemień, W.; Kowalski, P.; Alfs, D.; Bednarski, T.; Białas, P.; Curceanu, C.; Czerwiński, E.; Dulski, K.; Gajos, A.; Głowacz, B.; Gorgol, M.; Hiesmayr, B.; Jasińska, B.; Kamińska, D.; Korcyl, G.; Kozik, T.; Krawczyk, N.; Kubicz, E.; Mohammed, M.; Pawlik-Niedźwiecka, M.; Niedźwiecki, S.; Pałka, M.; Rudy, Z.; Rundel, O.; Sharma, N. G.; Silarski, M.; Smyrski, J.; Strzelecki, A.; Wieczorek, A.; Zgardzińska, B.; Zieliński, M.; Moskal, P.

    2017-06-01

    In this paper we estimate the time resolution of the J-PET scanner built from plastic scintillators. We incorporate the method of signal processing using the Tikhonov regularization framework and the kernel density estimation method. We obtain simple, closed-form analytical formulae for time resolution. The proposed method is validated using signals registered by means of the single detection unit of the J-PET tomograph built from a 30 cm long plastic scintillator strip. It is shown that the experimental and theoretical results obtained for the J-PET scanner equipped with vacuum tube photomultipliers are consistent.

  7. Signal evaluation environment: a new method for the design of peripheral in-vehicle warning signals.

    PubMed

    Werneke, Julia; Vollrath, Mark

    2011-06-01

    An evaluation method called the Signal Evaluation Environment (SEE) was developed for use in the early stages of the design process of peripheral warning signals while driving. Accident analyses have shown that with complex driving situations such as intersections, the visual scan strategies of the driver contribute to overlooking other road users who have the right of way. Salient peripheral warning signals could disrupt these strategies and direct drivers' attention towards these road users. To select effective warning signals, the SEE was developed as a laboratory task requiring visual-cognitive processes similar to those used at intersections. For validation of the SEE, four experiments were conducted using different stimulus characteristics (size, colour contrast, shape, flashing) that influence peripheral vision. The results confirm that the SEE is able to differentiate between the selected stimulus characteristics. The SEE is a useful initial tool for designing peripheral signals, allowing quick and efficient preselection of beneficial signals.

  8. Rolling Bearing Fault Diagnosis Based on an Improved HTT Transform

    PubMed Central

    Tang, Guiji; Tian, Tian; Zhou, Chong

    2018-01-01

    When a rolling bearing failure occurs, vibration signals generally contain different signal components, such as impulsive fault feature signals, background noise and harmonic interference signals. One of the most challenging aspects of rolling bearing fault diagnosis is how to suppress noise and harmonic interference signals while enhancing impulsive fault feature signals. This paper presents a novel bearing fault diagnosis method, namely an improved Hilbert time–time (IHTT) transform, which combines the Hilbert time–time (HTT) transform with principal component analysis (PCA). Firstly, the HTT transform was performed on vibration signals to derive a HTT transform matrix. Then, PCA was employed to de-noise the HTT transform matrix in order to improve the robustness of the HTT transform. Finally, the diagonal time series of the de-noised HTT transform matrix was extracted as the enhanced impulsive fault feature signal, and the fault characteristic information it contains was identified through further analyses of the amplitude and envelope spectra. Both simulated and experimental analyses validated the superiority of the presented method for detecting bearing failures. PMID:29662013
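
    The HTT transform itself is not reproduced here, but the PCA de-noising step can be illustrated on a generic low-rank matrix using a truncated SVD; the synthetic matrix and the number of retained components are purely illustrative assumptions.

```python
import numpy as np

def pca_denoise(M, n_components):
    """Project matrix M onto its n_components leading principal components."""
    mean = M.mean(axis=0, keepdims=True)
    U, s, Vt = np.linalg.svd(M - mean, full_matrices=False)
    s[n_components:] = 0.0                     # suppress low-variance directions, assumed to carry noise
    return U @ np.diag(s) @ Vt + mean

rng = np.random.default_rng(0)
# Illustrative low-rank stand-in for a time-time transform matrix (the HTT construction is not reproduced)
t = np.linspace(0, 1, 400)
clean = np.outer(np.sin(2 * np.pi * 5 * t), np.cos(2 * np.pi * 3 * t))
noisy = clean + 0.5 * rng.standard_normal(clean.shape)

denoised = pca_denoise(noisy, n_components=1)
print("error std before: %.3f  after: %.3f" % (np.std(noisy - clean), np.std(denoised - clean)))
# In the cited method, the diagonal time series of the de-noised matrix would then be extracted
# as the enhanced impulsive fault feature signal.
```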

  9. Target Detection and Classification Using Seismic and PIR Sensors

    DTIC Science & Technology

    2012-06-01

    This paper presents a wavelet-based method for target detection and classification using seismic and PIR sensors. The work makes use of a wavelet-based feature extraction method called Symbolic Dynamic Filtering (SDF) [12]–[14], building on time series analysis via wavelet-based partitioning, and the proposed method has been validated on data sets.

  10. Design and fabrication of a multi-layered solid dynamic phantom: validation platform on methods for reducing scalp-hemodynamic effect from fNIRS signal

    NASA Astrophysics Data System (ADS)

    Kawaguchi, Hiroshi; Tanikawa, Yukari; Yamada, Toru

    2017-02-01

    Scalp hemodynamics contaminates the signals measured by functional near-infrared spectroscopy (fNIRS). Numerous methods have been proposed to reduce this contamination, but no gold standard has yet been established. Here we constructed a multi-layered solid phantom to experimentally validate such methods. The phantom comprises four layers corresponding to the epidermis, dermis/skull (upper dynamic layer), cerebrospinal fluid, and brain (lower dynamic layer); the thicknesses of these layers are 0.3, 10, 1, and 50 mm, respectively. The epidermis and cerebrospinal fluid layers were made of polystyrene and an acrylic board, respectively, and both dynamic layers were made of epoxy resin. An infrared dye and titanium dioxide were mixed in to match the absorption and reduced scattering coefficients (μa and μs', respectively) with those of biological tissues. The bases of both the upper and lower dynamic layers have a slot for laterally sliding a bar that holds an absorber piece; this bar was moved laterally using a programmable stepping motor. The optical properties of the dynamic layers were estimated from the transmittance and reflectance using a Monte Carlo look-up table method. The estimated coefficients for the lower and upper dynamic layers approximately coincided with those for biological tissues. A preliminary fNIRS measurement using the fabricated phantom confirmed that the signals from the brain layer were recovered when those from the dermis layer were completely removed from their mixture, indicating that the phantom is useful for evaluating methods for reducing the contamination of fNIRS signals from the scalp.

  11. Hidden pattern discovery on epileptic EEG with 1-D local binary patterns and epileptic seizures detection by grey relational analysis.

    PubMed

    Kaya, Yılmaz

    2015-09-01

    This paper proposes a novel approach for detecting epileptic seizures from electroencephalography (EEG), one of the most common methods for the diagnosis of epilepsy, based on the 1-Dimensional Local Binary Pattern (1D-LBP) and grey relational analysis (GRA) methods. The main aim of this paper is to evaluate and validate a novel computer-based quantitative EEG analysis approach, grounded in grey systems theory, intended to support decision-makers. In this study, 1D-LBP, which utilizes all data points, was employed for extracting features from raw EEG signals; the Fisher score (FS) was employed to select the representative features, which can also be regarded as hidden patterns. Additionally, GRA was performed to classify the EEG signals using these Fisher-scored features. The experimental results of the proposed approach, which was evaluated on a public dataset for validation, showed that it has high accuracy in identifying epileptic EEG signals. For various combinations of epileptic EEG, such as the A-E, B-E, C-E, D-E, and A-D clusters, accuracies of 100%, 96%, 100%, 99.00% and 100% were achieved, respectively. This work also presents an attempt to develop a new general-purpose hidden pattern determination scheme, which can be utilized for different categories of time-varying signals.
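
    To make the feature extraction step concrete, a minimal Python sketch of a 1D local binary pattern with eight neighbours is given below; the neighbourhood size, the weighting convention and the synthetic test segment are illustrative assumptions, and the Fisher score selection and GRA classification stages are not reproduced.

```python
import numpy as np

def lbp_1d(signal, radius=4):
    """1D local binary pattern: compare each sample with its 2*radius neighbours and
    encode the comparisons as an integer code in [0, 2**(2*radius))."""
    n = signal.size
    weights = 2 ** np.arange(2 * radius)
    codes = np.empty(n - 2 * radius, dtype=np.int64)
    for i in range(radius, n - radius):
        neighbours = np.concatenate((signal[i - radius:i], signal[i + 1:i + radius + 1]))
        codes[i - radius] = np.dot((neighbours >= signal[i]).astype(np.int64), weights)
    return codes

def lbp_histogram(signal, radius=4):
    """Normalized histogram of 1D-LBP codes, usable as a feature vector for a classifier."""
    n_bins = 2 ** (2 * radius)
    hist, _ = np.histogram(lbp_1d(signal, radius), bins=n_bins, range=(0, n_bins))
    return hist / hist.sum()

# Example on a synthetic EEG-like segment (illustrative signal, not real EEG data)
rng = np.random.default_rng(1)
segment = np.sin(np.linspace(0, 40 * np.pi, 4096)) + 0.5 * rng.standard_normal(4096)
print(lbp_histogram(segment).shape)            # (256,) feature vector per segment
```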

  12. Modeling of Receptor Tyrosine Kinase Signaling: Computational and Experimental Protocols.

    PubMed

    Fey, Dirk; Aksamitiene, Edita; Kiyatkin, Anatoly; Kholodenko, Boris N

    2017-01-01

    The advent of systems biology has convincingly demonstrated that the integration of experiments and dynamic modelling is a powerful approach to understanding cellular network biology. Here we present experimental and computational protocols that are necessary for applying this integrative approach to the quantitative study of receptor tyrosine kinase (RTK) signaling networks. Signaling by RTKs controls multiple cellular processes, including the regulation of cell survival, motility, proliferation, differentiation, glucose metabolism, and apoptosis. We describe methods of model building and training on experimentally obtained quantitative datasets, as well as experimental methods for obtaining quantitative dose-response and temporal dependencies of protein phosphorylation and activities. The presented methods make possible (1) both the fine-grained modeling of complex signaling dynamics and the identification of salient, coarse-grained network structures (such as feedback loops) that bring about intricate dynamics, and (2) experimental validation of dynamic models.

  13. Photoacoustic imaging optimization with raw signal deconvolution and empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Guo, Chengwen; Wang, Jing; Qin, Yu; Zhan, Hongchen; Yuan, Jie; Cheng, Qian; Wang, Xueding

    2018-02-01

    The photoacoustic (PA) signal of an ideal optical absorbing particle is a single N-shaped wave, and the PA signals of complicated biological tissue can be considered a combination of individual N-shaped waves. However, the N-shaped wave basis not only complicates subsequent processing but also results in aliasing between adjacent micro-structures, which deteriorates the quality of the final PA images. In this paper, we propose a method to improve PA image quality through signal processing that works directly on the raw signals, comprising deconvolution and empirical mode decomposition (EMD). In the deconvolution step, the raw PA signals are deconvolved with a system-dependent point spread function (PSF) that is measured in advance. Then, EMD is adopted to adaptively re-shape the PA signals under two constraints, positive polarity and spectrum consistency. With the proposed method, the resulting PA images can yield more detailed structural information; micro-structures are clearly separated and revealed. To validate the effectiveness of this method, we present numerical simulations and phantom studies consisting of a densely distributed point-source model and a blood vessel model. In the future, this approach might hold potential for clinical PA imaging, as it can help distinguish micro-structures in the optimized images and even measure the size of objects from the deconvolved signals.
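
    The deconvolution step can be sketched with a standard frequency-domain Wiener deconvolution; the N-shaped PSF, signal length and SNR constant below are illustrative assumptions, and the subsequent EMD re-shaping stage is not reproduced.

```python
import numpy as np

def wiener_deconvolve(raw, psf, snr=1e4):
    """Frequency-domain Wiener deconvolution of a raw A-line with a measured PSF."""
    n = raw.size
    H = np.fft.rfft(psf, n)
    Y = np.fft.rfft(raw, n)
    G = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)      # regularized inverse filter
    return np.fft.irfft(Y * G, n)

# Illustrative example: two point absorbers blurred by a crude N-shaped pulse
rng = np.random.default_rng(2)
n = 2048
k = np.arange(-64, 64)
psf = -k * np.exp(-(k ** 2) / (2 * 10.0 ** 2))         # derivative-of-Gaussian stand-in for the N-shape
psf /= np.max(np.abs(psf))

truth = np.zeros(n)
truth[[600, 660]] = 1.0
raw = np.convolve(truth, psf, mode="same") + 1e-3 * rng.standard_normal(n)

recovered = wiener_deconvolve(raw, psf)
recovered = np.roll(recovered, len(psf) // 2 - 1)      # undo the kernel-centering delay of 'same' convolution
print(int(np.argmax(np.abs(recovered))))               # strongest recovered peak, expected near sample 600 or 660
```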

  14. Study and application of acoustic emission testing in fault diagnosis of low-speed heavy-duty gears.

    PubMed

    Gao, Lixin; Zai, Fenlou; Su, Shanbin; Wang, Huaqing; Chen, Peng; Liu, Limei

    2011-01-01

    Most existing studies on the acoustic emission signals of rotating machinery are experiment-oriented, and few of them involve on-site applications. In this study, a redundant second generation wavelet transform based on the principle of interpolated subdivision was developed. With this method, subdivision is not needed during the decomposition; the lengths of the approximation and detail signals are the same as those of the original signals, so the data volume is twice that of the original signals, and this data redundancy also supports the good analysis performance of the method. The analysis of acoustic emission data from faults of on-site low-speed heavy-duty gears validated the redundant second generation wavelet transform for the processing and denoising of acoustic emission signals. Furthermore, the analysis showed that acoustic emission testing can be used in the fault diagnosis of on-site low-speed heavy-duty gears and can be a significant supplement to vibration-based diagnosis.

  15. Study and Application of Acoustic Emission Testing in Fault Diagnosis of Low-Speed Heavy-Duty Gears

    PubMed Central

    Gao, Lixin; Zai, Fenlou; Su, Shanbin; Wang, Huaqing; Chen, Peng; Liu, Limei

    2011-01-01

    Most existing studies on the acoustic emission signals of rotating machinery are experiment-oriented, and few of them involve on-site applications. In this study, a redundant second generation wavelet transform based on the principle of interpolated subdivision was developed. With this method, subdivision is not needed during the decomposition; the lengths of the approximation and detail signals are the same as those of the original signals, so the data volume is twice that of the original signals, and this data redundancy also supports the good analysis performance of the method. The analysis of acoustic emission data from faults of on-site low-speed heavy-duty gears validated the redundant second generation wavelet transform for the processing and denoising of acoustic emission signals. Furthermore, the analysis showed that acoustic emission testing can be used in the fault diagnosis of on-site low-speed heavy-duty gears and can be a significant supplement to vibration-based diagnosis. PMID:22346592

  16. Residual translation compensations in radar target narrowband imaging based on trajectory information

    NASA Astrophysics Data System (ADS)

    Yue, Wenjue; Peng, Bo; Wei, Xizhang; Li, Xiang; Liao, Dongping

    2018-05-01

    High-velocity translation results in defocused scattering centers in radar imaging. In this paper, we propose a Residual Translation Compensation (RTC) method based on target trajectory information to eliminate translation effects in radar imaging. In reality, translation cannot simply be regarded as uniformly accelerated motion, so prior knowledge of the target trajectory is introduced to improve compensation precision. First, we use the two-body orbit model to determine the radial distance. Then, stepwise compensations are applied to eliminate the residual propagation delay based on a conjugate multiplication method. Finally, tomography is used to confirm the validity of the method. Compared with a translation parameter estimation method based on the spectral peak of the conjugate-multiplied signal, the RTC method presented in this paper gives a better tomography result. Even when the signal-to-noise ratio (SNR) of the radar echo signal is 4 dB, the scattering centers can still be extracted clearly.

  17. Uncertainty estimates of purity measurements based on current information: toward a "live validation" of purity methods.

    PubMed

    Apostol, Izydor; Kelner, Drew; Jiang, Xinzhao Grace; Huang, Gang; Wypych, Jette; Zhang, Xin; Gastwirt, Jessica; Chen, Kenneth; Fodor, Szilan; Hapuarachchi, Suminda; Meriage, Dave; Ye, Frank; Poppe, Leszek; Szpankowski, Wojciech

    2012-12-01

    The objective was to predict the precision and other performance characteristics of chromatographic purity methods, which represent the most widely used form of analysis in the biopharmaceutical industry. We conducted a comprehensive survey of purity methods and show that all performance characteristics fall within narrow measurement ranges. This observation was used to develop a model called Uncertainty Based on Current Information (UBCI), which expresses these performance characteristics as a function of the signal and noise levels, hardware specifications, and software settings. We applied the UBCI model to assess the uncertainty of purity measurements and compared the results to those from conventional qualification. We demonstrated that the UBCI model is suitable for dynamically assessing method performance characteristics based on information extracted from individual chromatograms. The model provides an opportunity for streamlining qualification and validation studies by implementing a "live validation" of test results, using UBCI as a concurrent assessment of measurement uncertainty. Therefore, UBCI can potentially mitigate the challenges associated with laborious conventional method validation and facilitate the introduction of more advanced analytical technologies during the method lifecycle.

  18. Detection of the Vibration Signal from Human Vocal Folds Using a 94-GHz Millimeter-Wave Radar

    PubMed Central

    Chen, Fuming; Li, Sheng; Zhang, Yang; Wang, Jianqi

    2017-01-01

    The detection of the vibration signal from human vocal folds provides essential information for studying human phonation and diagnosing voice disorders. Doppler radar technology has enabled the noncontact measurement of the human-vocal-fold vibration. However, existing systems must be placed in close proximity to the human throat and detailed information may be lost because of the low operating frequency. In this paper, a long-distance detection method, involving the use of a 94-GHz millimeter-wave radar sensor, is proposed for detecting the vibration signals from human vocal folds. An algorithm that combines empirical mode decomposition (EMD) and the auto-correlation function (ACF) method is proposed for detecting the signal. First, the EMD method is employed to suppress the noise of the radar-detected signal. Further, the ratio of the energy and entropy is used to detect voice activity in the radar-detected signal, following which, a short-time ACF is employed to extract the vibration signal of the human vocal folds from the processed signal. For validating the method and assessing the performance of the radar system, a vibration measurement sensor and microphone system are additionally employed for comparison. The experimental results obtained from the spectrograms, the vibration frequency of the vocal folds, and coherence analysis demonstrate that the proposed method can effectively detect the vibration of human vocal folds from a long detection distance. PMID:28282892
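
    As a simplified illustration of the auto-correlation stage, the sketch below estimates the vibration frequency of a synthetic, noise-corrupted 150 Hz signal from its short-time ACF; the frame length, search range and test signal are assumptions, and the EMD de-noising and energy/entropy voice-activity detection steps of the cited method are not reproduced.

```python
import numpy as np

def acf_fundamental(frame, fs, f_min=60.0, f_max=400.0):
    """Estimate the fundamental (vibration) frequency of one frame as the lag of the
    strongest short-time autocorrelation peak within a plausible frequency range."""
    frame = frame - frame.mean()
    acf = np.correlate(frame, frame, mode="full")[frame.size - 1:]
    acf /= acf[0] + 1e-12
    lag_min, lag_max = int(fs / f_max), int(fs / f_min)
    lag = lag_min + int(np.argmax(acf[lag_min:lag_max]))
    return fs / lag

# Synthetic vocal-fold-like vibration at 150 Hz with a harmonic, buried in noise (illustrative)
rng = np.random.default_rng(3)
fs = 4000.0
t = np.arange(0, 1.0, 1 / fs)
sig = np.sin(2 * np.pi * 150 * t) + 0.3 * np.sin(2 * np.pi * 300 * t) + 0.5 * rng.standard_normal(t.size)

frame_len = int(0.04 * fs)                              # 40 ms frames
f0 = [acf_fundamental(sig[i:i + frame_len], fs) for i in range(0, sig.size - frame_len, frame_len)]
print("median estimated vibration frequency: %.1f Hz" % np.median(f0))
```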

  19. Time and frequency pump-probe multiplexing to enhance the signal response of Brillouin optical time-domain analyzers.

    PubMed

    Soto, Marcelo A; Ricchiuti, Amelia Lavinia; Zhang, Liang; Barrera, David; Sales, Salvador; Thévenaz, Luc

    2014-11-17

    A technique to enhance the response and performance of Brillouin distributed fiber sensors is proposed and experimentally validated. The method consists of creating a multi-frequency pump pulse interacting with a matching multi-frequency continuous-wave probe. To avoid nonlinear cross-interaction between spectral lines, the method requires that the distinct pump pulse components and the temporal traces reaching the photo-detector be subject to wavelength-selective delaying. In this way the total pump and probe powers launched into the fiber can be incrementally boosted beyond the thresholds imposed by nonlinear effects. As a consequence of the multiplied pump-probe Brillouin interactions occurring along the fiber, the sensor response can be enhanced in exact proportion to the number of spectral components. The method is experimentally validated in a 50 km-long distributed optical fiber sensor augmented to 3 pump-probe spectral pairs, demonstrating a signal-to-noise ratio enhancement of 4.8 dB.

  20. Antenna reconfiguration verification and validation

    NASA Technical Reports Server (NTRS)

    Becker, Robert C. (Inventor); Meyers, David W. (Inventor); Muldoon, Kelly P. (Inventor); Carlson, Douglas R. (Inventor); Drexler, Jerome P. (Inventor)

    2009-01-01

    A method of testing the electrical functionality of an optically controlled switch in a reconfigurable antenna is provided. The method includes configuring one or more conductive paths between one or more feed points and one or more test points with switches in the reconfigurable antenna, applying one or more test signals to the one or more feed points, monitoring the one or more test points in response to the one or more test signals, and determining the functionality of the switch based upon the monitoring of the one or more test points.

  1. A novel method for the line-of-response and time-of-flight reconstruction in TOF-PET detectors based on a library of synchronized model signals

    NASA Astrophysics Data System (ADS)

    Moskal, P.; Zoń, N.; Bednarski, T.; Białas, P.; Czerwiński, E.; Gajos, A.; Kamińska, D.; Kapłon, Ł.; Kochanowski, A.; Korcyl, G.; Kowal, J.; Kowalski, P.; Kozik, T.; Krzemień, W.; Kubicz, E.; Niedźwiecki, Sz.; Pałka, M.; Raczyński, L.; Rudy, Z.; Rundel, O.; Salabura, P.; Sharma, N. G.; Silarski, M.; Słomski, A.; Smyrski, J.; Strzelecki, A.; Wieczorek, A.; Wiślicki, W.; Zieliński, M.

    2015-03-01

    A novel method of hit time and hit position reconstruction in scintillator detectors is described. The method is based on comparison of detector signals with results stored in a library of synchronized model signals registered for a set of well-defined positions of scintillation points. The hit position is reconstructed as the one corresponding to the library signal that is most similar to the measured signal, and the time of the interaction is determined as the relative time between the measured signal and this most similar library signal. The degree of similarity between measured and model signals is defined as the distance between the points representing the measured and model signals in the multi-dimensional measurement space. The novelty of the method also lies in the proposed way of synchronizing the model signals, which enables direct determination of the difference between the times of flight (TOF) of the annihilation quanta from the annihilation point to the detectors. The introduced method was validated using experimental data obtained by means of the double-strip prototype of the J-PET detector and a 22Na sodium isotope as a source of annihilation gamma quanta. The detector was built from plastic scintillator strips with dimensions of 5 mm×19 mm×300 mm, optically connected at both sides to photomultipliers, from which signals were sampled by means of a Serial Data Analyzer. Using the introduced method, spatial and TOF resolutions of about 1.3 cm (σ) and 125 ps (σ), respectively, were established.
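
    A minimal sketch of the library-matching idea is shown below: the measured waveform is compared against a library of model signals by Euclidean distance in the sample space, and the relative time is refined from the cross-correlation lag. The synthetic pulse library, the Euclidean metric, and the position grid are illustrative assumptions rather than the actual J-PET signal library or similarity measure.

```python
import numpy as np

def reconstruct_hit(measured, library_signals, library_positions, library_times, dt):
    """Nearest-neighbour reconstruction against a library of synchronized model signals."""
    # Distance in the multi-dimensional measurement space (Euclidean used here as an example)
    dists = np.linalg.norm(library_signals - measured, axis=1)
    best = int(np.argmin(dists))

    # Relative time between the measured signal and the best-matching model via the cross-correlation lag
    xcorr = np.correlate(measured - measured.mean(),
                         library_signals[best] - library_signals[best].mean(), mode="full")
    lag = int(np.argmax(xcorr)) - (measured.size - 1)
    return library_positions[best], library_times[best] + lag * dt

# Illustrative usage with synthetic pulses (not J-PET waveforms)
rng = np.random.default_rng(4)
n_entries, n_samples = 200, 256
positions = np.linspace(-150.0, 150.0, n_entries)            # assumed grid of scintillation points (mm)
s = np.arange(n_samples)
library = np.exp(-(s[None, :] - (80 + 0.2 * np.abs(positions))[:, None]) ** 2 / 50.0)

measured = library[120] + 0.05 * rng.standard_normal(n_samples)
pos, hit_time = reconstruct_hit(measured, library, positions, np.zeros(n_entries), dt=0.1)
print("reconstructed position: %.1f mm, relative hit time: %.1f ns" % (pos, hit_time))
```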

  2. Digital timing: sampling frequency, anti-aliasing filter and signal interpolation filter dependence on timing resolution.

    PubMed

    Cho, Sanghee; Grazioso, Ron; Zhang, Nan; Aykac, Mehmet; Schmand, Matthias

    2011-12-07

    The main focus of our study is to investigate how the performance of digital timing methods is affected by the sampling rate, anti-aliasing filter and signal interpolation filter. We used the Nyquist sampling theorem to address some basic questions: what is the minimum sampling frequency? How accurate will the signal interpolation be? How do we validate the timing measurements? The preferred sampling rate would be as low as possible, considering the high cost and power consumption of high-speed analog-to-digital converters. However, when the sampling rate is too low, the aliasing effect produces artifacts in the timing resolution estimates: the shape of the timing profile is distorted and the FWHM values of the profile fluctuate as the source location changes. Anti-aliasing filters are required in this case to avoid the artifacts, but the timing is degraded as a result. When the sampling rate is marginally above the Nyquist rate, proper signal interpolation is important; a sharp roll-off (higher-order) filter is required to separate the baseband signal from its replicates to avoid aliasing, but in return the computational load is higher. We demonstrated the analysis through a digital timing study using fast LSO scintillation crystals as used in time-of-flight PET scanners. From the study, we observed no significant timing resolution degradation down to a 1.3 GHz sampling frequency, and the computational requirement for the signal interpolation is reasonably low. A so-called sliding test is proposed as a validation tool that checks whether a given timing pick-off method maintains constant timing resolution regardless of the source location. Lastly, a performance comparison of several digital timing methods is also shown.

  3. A phase match based frequency estimation method for sinusoidal signals

    NASA Astrophysics Data System (ADS)

    Shen, Yan-Lin; Tu, Ya-Qing; Chen, Lin-Jun; Shen, Ting-Ao

    2015-04-01

    Accurate frequency estimation significantly affects the ranging precision of linear frequency modulated continuous wave (LFMCW) radars. To improve the ranging precision of LFMCW radars, a phase match based frequency estimation method is proposed. To obtain the frequency estimate, the linear prediction property, autocorrelation, and cross correlation of sinusoidal signals are utilized. The analysis of computational complexity shows that the computational load of the proposed method is smaller than those of two-stage autocorrelation (TSA) and maximum likelihood. Simulations and field experiments are performed to validate the proposed method, and the results demonstrate that the proposed method has better frequency estimation precision than the Pisarenko harmonic decomposition, modified covariance, and TSA methods, which contributes to effectively improving the precision of LFMCW radars.

  4. Increased circulating cell signalling phosphoproteins in sera are useful for the detection of pancreatic cancer

    PubMed Central

    Takano, S; Sogawa, K; Yoshitomi, H; Shida, T; Mogushi, K; Kimura, F; Shimizu, H; Yoshidome, H; Ohtsuka, M; Kato, A; Ishihara, T; Tanaka, H; Yokosuka, O; Nomura, F; Miyazaki, M

    2010-01-01

    Background: Intracellular phosphoprotein activation significantly regulates cancer progression. However, the significance of circulating phosphoproteins in the blood remains unknown. We investigated the serum phosphoprotein profile involved in pancreatic cancer (PaCa) by a novel approach that comprehensively measures serum phosphoprotein levels, and clinically applied this method to the detection of PaCa. Methods: We analysed the serum phosphoproteins that comprise cancer cellular signalling pathways by comparing sera from PaCa patients and benign controls, including healthy volunteers (HVs) and pancreatitis patients. Results: Hierarchical clustering analysis between PaCa patients and HVs revealed differential pathway-specific profiles. In particular, the components of the extracellular signal-regulated kinase (ERK) signalling pathway were significantly increased in the sera of PaCa patients compared with HVs. The positive rate of p-ERK1/2 (82%) was found to be superior to that of CA19-9 (53%) for early-stage PaCa. For the combination of these serum levels, the area under the receiver operating characteristic curve showed a significant ability to distinguish between the two populations in an independent validation set, and between cancer and non-cancer populations in another validation set. Conclusion: The comprehensive measurement of serum cell signalling phosphoproteins is useful for the detection of PaCa. Further investigations will lead to the implementation of tailor-made molecular-targeted therapeutics. PMID:20551957

  5. Electronic system for floor surface type detection in robotics applications

    NASA Astrophysics Data System (ADS)

    Tarapata, Grzegorz; Paczesny, Daniel; Tarasiuk, Łukasz

    2016-11-01

    The paper reports a recognition method based on ultrasonic transducers for detecting surface types. An ultrasonic signal is transmitted toward the examined substrate, and the reflected and scattered signal returns to a separate ultrasonic receiver. The measuring signal is generated by a piezoelectric transducer located at a specified distance from the tested substrate; the detector is a second piezoelectric transducer located next to the transmitter. Depending on the type of substrate exposed to the ultrasonic wave, the signal is partially absorbed in the material, diffused, and reflected towards the receiver. To measure the level of the received signal, a dedicated electronic circuit was designed and implemented in the presented system. The system was designed to recognize two types of floor surface: solid (such as concrete, ceramic tiles, and wood) and soft (carpets and floor coverings). The method will be applied in an electronic detection system dedicated to autonomous cleaning robots for selecting the appropriate cleaning method. This work presents the concept of using ultrasonic signals, the design of both the measurement system and the measuring stand, as well as a number of extensive test results that validate the correctness of the applied ultrasonic method.

  6. A design of a valid signal selecting and position decoding ASIC for PET using silicon photomultipliers

    NASA Astrophysics Data System (ADS)

    Cho, M.; Lim, K.-t.; Kim, H.; Yeom, J.-y.; Kim, J.; Lee, C.; Choi, H.; Cho, G.

    2017-01-01

    In most cases, a PET system has numerous electrical components and channel circuits and therefore tends to be bulky. In addition, most existing systems receive analog signals from the detectors, which makes them vulnerable to signal distortion. For these reasons, channel reduction techniques are important. In this work, an ASIC for a PET module is proposed. An ASIC chip for 16 PET detector channels, VSSPDC, has been designed and simulated. The main function of the chip is 16-to-1 channel reduction, i.e., finding the position of only the valid signals, together with the signal timing and magnitudes, in all 16 channels for every recorded event. The ASIC comprises four 4-channel modules and a second-stage 4-to-1 router. A single channel module comprises a transimpedance amplifier for the silicon photomultipliers, dual comparators with high and low level references, and logic circuitry. The high level reference is used to test the validity of the signal, while the low level reference is used for timing. The single-channel module of the ASIC produces an energy pulse by the time-over-threshold method and a time pulse with a fixed delay. Since the ASIC chip outputs only a few digital pulses and does not require an external clock, it has an advantage in terms of noise immunity. The Cadence simulation showed good performance of the chip as designed.

  7. Development and validation of an environmentally friendly attenuated total reflectance in the mid-infrared region method for the determination of ethanol content in used engine lubrication oil.

    PubMed

    Hatanaka, Rafael Rodrigues; Sequinel, Rodrigo; Gualtieri, Carlos Eduardo; Tercini, Antônio Carlos Bergamaschi; Flumignan, Danilo Luiz; de Oliveira, José Eduardo

    2013-05-15

    Lubricating oils are crucial in the operation of automotive engines because they both reduce friction between moving parts and protect against corrosion. However, the performance of lubricant oil may be affected by contaminants, such as gasoline, diesel, ethanol, water and ethylene glycol. Although there are many standard methods and studies related to the quantification of contaminants in lubricant oil, such as gasoline and diesel oil, to the best of our knowledge, no methods have been reported for the quantification of ethanol in used Otto cycle engine lubrication oils. Therefore, this work aimed at the development and validation of a routine method based on partial least-squares multivariate analysis combined with attenuated total reflectance in the mid-infrared region to quantify ethanol content in used lubrication oil. The method was validated based on its figures of merit (using the net analyte signal) as follows: limit of detection (0.049%), limit of quantification (0.16%), accuracy (root mean square error of prediction=0.089% w/w), repeatability (0.05% w/w), fit (R² = 0.9997), mean selectivity (0.047), sensitivity (0.011), inverse analytical sensitivity (0.016% w/w⁻¹) and signal-to-noise ratio (max: 812.4 and min: 200.9). The results show that the proposed method can be routinely implemented for the quality control of lubricant oils. Copyright © 2013 Elsevier B.V. All rights reserved.

  8. Laplace Inversion of Low-Resolution NMR Relaxometry Data Using Sparse Representation Methods

    PubMed Central

    Berman, Paula; Levi, Ofer; Parmet, Yisrael; Saunders, Michael; Wiesman, Zeev

    2013-01-01

    Low-resolution nuclear magnetic resonance (LR-NMR) relaxometry is a powerful tool that can be harnessed for characterizing constituents in complex materials. Conversion of the relaxation signal into a continuous distribution of relaxation components is an ill-posed inverse Laplace transform problem. The most common numerical method implemented today for this kind of problem is based on L2-norm regularization. However, sparse representation methods via L1 regularization and convex optimization are a relatively new approach for the effective analysis and processing of digital images and signals. In this article, we present a numerical optimization method for analyzing LR-NMR data that includes non-negativity constraints and L1 regularization and applies the convex optimization solver PDCO, a primal-dual interior method for convex objectives that allows general linear constraints to be treated as linear operators. The integrated approach includes validation of analyses by simulations, testing the repeatability of experiments, and validation of the model and its statistical assumptions. The proposed method provides better resolved and more accurate solutions than those suggested by existing tools. © 2013 Wiley Periodicals, Inc. Concepts Magn Reson Part A 42A: 72–88, 2013. PMID:23847452

  9. Laplace Inversion of Low-Resolution NMR Relaxometry Data Using Sparse Representation Methods.

    PubMed

    Berman, Paula; Levi, Ofer; Parmet, Yisrael; Saunders, Michael; Wiesman, Zeev

    2013-05-01

    Low-resolution nuclear magnetic resonance (LR-NMR) relaxometry is a powerful tool that can be harnessed for characterizing constituents in complex materials. Conversion of the relaxation signal into a continuous distribution of relaxation components is an ill-posed inverse Laplace transform problem. The most common numerical method implemented today for this kind of problem is based on L2-norm regularization. However, sparse representation methods via L1 regularization and convex optimization are a relatively new approach for the effective analysis and processing of digital images and signals. In this article, we present a numerical optimization method for analyzing LR-NMR data that includes non-negativity constraints and L1 regularization and applies the convex optimization solver PDCO, a primal-dual interior method for convex objectives that allows general linear constraints to be treated as linear operators. The integrated approach includes validation of analyses by simulations, testing the repeatability of experiments, and validation of the model and its statistical assumptions. The proposed method provides better resolved and more accurate solutions than those suggested by existing tools. © 2013 Wiley Periodicals, Inc. Concepts Magn Reson Part A 42A: 72-88, 2013.
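
    PDCO itself is not reproduced here, but the same non-negative, L1-regularized formulation can be illustrated with a simple ISTA iteration on a synthetic relaxation decay; the kernel discretization, regularization weight and the two-component test distribution are illustrative assumptions.

```python
import numpy as np

def ista_nonneg(K, y, lam, n_iter=5000):
    """Non-negative L1-regularized least squares via ISTA:
       minimize 0.5 * ||K f - y||^2 + lam * ||f||_1   subject to  f >= 0."""
    L = np.linalg.norm(K, 2) ** 2                  # Lipschitz constant of the smooth part's gradient
    f = np.zeros(K.shape[1])
    for _ in range(n_iter):
        grad = K.T @ (K @ f - y)
        f = np.maximum(f - (grad + lam) / L, 0.0)  # gradient step + soft threshold + non-negativity
    return f

# Illustrative LR-NMR-style problem: y(t) = sum_j a_j * exp(-t / T2_j) + noise
rng = np.random.default_rng(5)
t = np.linspace(1e-3, 2.0, 400)                    # acquisition times (s), assumed
T2 = np.logspace(-3, 1, 200)                       # candidate relaxation times (s)
K = np.exp(-t[:, None] / T2[None, :])              # discretized Laplace kernel
f_true = np.zeros(T2.size)
f_true[[60, 140]] = [1.0, 0.6]                     # two relaxation components
y = K @ f_true + 0.01 * rng.standard_normal(t.size)

f_hat = ista_nonneg(K, y, lam=0.05)
# Compare the true T2 values with the locations of the largest recovered amplitudes
print(T2[f_true > 0], T2[np.argsort(f_hat)[-2:]])
```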

  10. Energy Efficient GNSS Signal Acquisition Using Singular Value Decomposition (SVD).

    PubMed

    Bermúdez Ordoñez, Juan Carlos; Arnaldo Valdés, Rosa María; Gómez Comendador, Fernando

    2018-05-16

    A significant challenge in global navigation satellite system (GNSS) signal processing is the requirement for a very high sampling rate. The recently emerging compressed sensing (CS) theory makes processing GNSS signals at a low sampling rate possible if the signal has a sparse representation in a certain space. Based on CS and SVD theories, an algorithm for sampling GNSS signals at a rate much lower than the Nyquist rate and reconstructing the compressed signal is proposed in this research; it is validated by verifying that the reconstructed output still supports signal detection using the standard fast Fourier transform (FFT) parallel frequency space search acquisition. A sparse representation of the GNSS signal is the most important precondition for CS; it is achieved by constructing a rectangular Toeplitz matrix (TZ) of the transmitted signal and calculating its left singular vectors using the SVD. Next, the M-dimensional observation vectors are obtained from the left singular vectors of the SVD, which are equivalent to the sampling operator in standard compressive sensing theory; the signal can thus be sampled below the Nyquist rate and still be reconstructed accurately via ℓ1 minimization using convex optimization. As an added value, there is a GNSS signal acquisition enhancement effect, obtained by retaining the useful signal and filtering out noise through projection of the signal onto the most significant proper orthogonal modes (PODs), which are the optimal distributions of signal power. The algorithm is validated with real recorded signals, and the results show that the proposed method is effective for sampling and reconstructing intermediate frequency (IF) GNSS signals in the discrete-time domain.

  11. Energy Efficient GNSS Signal Acquisition Using Singular Value Decomposition (SVD)

    PubMed Central

    Arnaldo Valdés, Rosa María; Gómez Comendador, Fernando

    2018-01-01

    A significant challenge in global navigation satellite system (GNSS) signal processing is the requirement for a very high sampling rate. The recently emerging compressed sensing (CS) theory makes processing GNSS signals at a low sampling rate possible if the signal has a sparse representation in a certain space. Based on CS and SVD theories, an algorithm for sampling GNSS signals at a rate much lower than the Nyquist rate and reconstructing the compressed signal is proposed in this research; it is validated by verifying that the reconstructed output still supports signal detection using the standard fast Fourier transform (FFT) parallel frequency space search acquisition. A sparse representation of the GNSS signal is the most important precondition for CS; it is achieved by constructing a rectangular Toeplitz matrix (TZ) of the transmitted signal and calculating its left singular vectors using the SVD. Next, the M-dimensional observation vectors are obtained from the left singular vectors of the SVD, which are equivalent to the sampling operator in standard compressive sensing theory; the signal can thus be sampled below the Nyquist rate and still be reconstructed accurately via ℓ1 minimization using convex optimization. As an added value, there is a GNSS signal acquisition enhancement effect, obtained by retaining the useful signal and filtering out noise through projection of the signal onto the most significant proper orthogonal modes (PODs), which are the optimal distributions of signal power. The algorithm is validated with real recorded signals, and the results show that the proposed method is effective for sampling and reconstructing intermediate frequency (IF) GNSS signals in the discrete-time domain. PMID:29772731

  12. Tacholess order-tracking approach for wind turbine gearbox fault detection

    NASA Astrophysics Data System (ADS)

    Wang, Yi; Xie, Yong; Xu, Guanghua; Zhang, Sicong; Hou, Chenggang

    2017-09-01

    Monitoring of wind turbines under variable-speed operating conditions has become an important issue in recent years. The gearbox of a wind turbine is the most important transmission unit; it generally exhibits complex vibration signatures due to random variations in operating conditions. Spectral analysis is one of the main approaches in vibration signal processing. However, spectral analysis is based on a stationary assumption and thus inapplicable to the fault diagnosis of wind turbines under variable-speed operating conditions. This constraint limits the application of spectral analysis to wind turbine diagnosis in industrial applications. Although order-tracking methods have been proposed for wind turbine fault detection in recent years, current methods are only applicable to cases in which the instantaneous shaft phase is available. For wind turbines with limited structural spaces, collecting phase signals with tachometers or encoders is difficult. In this study, a tacholess order-tracking method for wind turbines is proposed to overcome the limitations of traditional techniques. The proposed method extracts the instantaneous phase from the vibration signal, resamples the signal at equiangular increments, and calculates the order spectrum for wind turbine fault identification. The effectiveness of the proposed method is experimentally validated with the vibration signals of wind turbines.
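
    A minimal sketch of the tacholess order-tracking chain is given below: a shaft-related harmonic is band-pass filtered, its instantaneous phase (via the Hilbert transform) serves as the shaft-phase estimate, the raw signal is resampled at equal angle increments, and an order spectrum is computed. The filter band, resampling density and the synthetic speed ramp are illustrative assumptions, not the processing used in the cited study.

```python
import numpy as np
from scipy.signal import hilbert, butter, sosfiltfilt

def tacholess_order_spectrum(x, fs, band, samples_per_rev=64, max_order=6.0):
    """Tacholess order tracking: Hilbert phase of a band-passed reference harmonic,
    equal-angle resampling, then an FFT in the angle domain (order spectrum)."""
    sos = butter(2, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band", output="sos")
    ref = sosfiltfilt(sos, x)
    phase = np.unwrap(np.angle(hilbert(ref)))            # instantaneous phase of the reference (rad)

    revs = (phase - phase[0]) / (2 * np.pi)              # cumulative revolutions
    t = np.arange(x.size) / fs
    rev_grid = np.arange(0.0, revs[-1], 1.0 / samples_per_rev)
    x_angle = np.interp(np.interp(rev_grid, revs, t), t, x)   # signal resampled at equal angles

    spectrum = np.abs(np.fft.rfft(x_angle * np.hanning(x_angle.size)))
    orders = np.fft.rfftfreq(x_angle.size, d=1.0 / samples_per_rev)
    keep = orders <= max_order
    return orders[keep], spectrum[keep]

# Illustrative run: shaft speed ramps from 10 to 20 Hz, fault component at order 3.5
fs = 5000.0
t = np.arange(0, 10, 1 / fs)
f_shaft = 10 + 1.0 * t                                   # instantaneous shaft frequency (Hz)
phi = 2 * np.pi * np.cumsum(f_shaft) / fs
x = np.sin(phi) + 0.4 * np.sin(3.5 * phi) + 0.2 * np.random.randn(t.size)

orders, spec = tacholess_order_spectrum(x, fs, band=(8.0, 25.0))
print(orders[np.argsort(spec)[-3:]])                     # dominant orders, expected near 1.0 and 3.5
```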

  13. A Novel Estimator for the Rate of Information Transfer by Continuous Signals

    PubMed Central

    Takalo, Jouni; Ignatova, Irina; Weckström, Matti; Vähäsöyrinki, Mikko

    2011-01-01

    The information transfer rate provides an objective and rigorous way to quantify how much information is being transmitted through a communications channel whose input and output consist of time-varying signals. However, current estimators of information content in continuous signals are typically based on assumptions about the system's linearity and signal statistics, or they require prohibitive amounts of data. Here we present a novel information rate estimator without these limitations that is also optimized for computational efficiency. We validate the method with a simulated Gaussian information channel and demonstrate its performance with two example applications. Information transfer between the input and output signals of a nonlinear system is analyzed using a sensory receptor neuron as the model system. Then, a climate data set is analyzed to demonstrate that the method can be applied to a system based on two outputs generated by interrelated random processes. These analyses also demonstrate that the new method offers consistent performance in situations where classical methods fail. In addition to these examples, the method is applicable to a wide range of continuous time series commonly observed in the natural sciences, economics and engineering. PMID:21494562

  14. A de-noising method using the improved wavelet threshold function based on noise variance estimation

    NASA Astrophysics Data System (ADS)

    Liu, Hui; Wang, Weida; Xiang, Changle; Han, Lijin; Nie, Haizhao

    2018-01-01

    Precise and efficient noise variance estimation is very important when processing all kinds of signals with the wavelet transform to analyze signals and extract signal features. In view of the problem that the accuracy of traditional noise variance estimation is greatly affected by fluctuations in the noise values, this study puts forward the strategy of using a two-state Gaussian mixture model to classify the high-frequency wavelet coefficients at the finest scale, which takes both efficiency and accuracy into account. Based on this noise variance estimate, a novel improved wavelet threshold function is proposed by combining the advantages of the hard and soft threshold functions, and on the basis of the noise variance estimation algorithm and the improved wavelet threshold function, a novel wavelet threshold de-noising method is put forward. The method is tested and validated using random signals and bench test data from an electro-mechanical transmission system. The test results indicate that the wavelet threshold de-noising method based on the noise variance estimation performs well in processing the test signals of the electro-mechanical transmission system: it can effectively eliminate the interference of transient signals, including voltage, current, and oil pressure, while favorably maintaining the dynamic characteristics of the signals.
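
    As a sketch of the thresholding idea (the exact threshold function and the Gaussian-mixture noise estimator of the cited study are not reproduced), the snippet below blends hard and soft thresholding with a mixing parameter and uses the classical median-based noise estimate and universal threshold; all constants and the synthetic coefficients are illustrative assumptions.

```python
import numpy as np

def improved_threshold(w, lam, alpha=0.6):
    """Threshold function blending hard and soft thresholding:
    alpha = 0 gives soft thresholding (smooth but biased towards zero),
    alpha = 1 gives hard thresholding (unbiased above lam but discontinuous)."""
    out = np.zeros_like(w)
    keep = np.abs(w) > lam
    out[keep] = np.sign(w[keep]) * (np.abs(w[keep]) - (1.0 - alpha) * lam)
    return out

def robust_noise_sigma(detail_coeffs):
    """Median-based noise level estimate from finest-scale detail coefficients
    (used here in place of the paper's Gaussian-mixture estimator)."""
    return np.median(np.abs(detail_coeffs)) / 0.6745

# Illustrative use on generic "detail coefficients" (no wavelet library required for the sketch)
rng = np.random.default_rng(6)
clean = np.zeros(1024)
clean[::64] = 5.0
coeffs = clean + rng.standard_normal(1024)

sigma = robust_noise_sigma(coeffs)
lam = sigma * np.sqrt(2 * np.log(coeffs.size))           # universal threshold
denoised = improved_threshold(coeffs, lam, alpha=0.6)
print("coefficients kept:", int(np.count_nonzero(denoised)))
```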

  15. A network model of genomic hormone interactions underlying dementia and its translational validation through serendipitous off-target effect

    PubMed Central

    2013-01-01

    Background While the majority of studies have focused on the association between sex hormones and dementia, emerging evidence supports the role of other hormone signals in increasing dementia risk. However, due to the lack of an integrated view on mechanistic interactions of hormone signaling pathways associated with dementia, the molecular mechanisms through which hormones contribute to the increased risk of dementia have remained unclear, and the potential for translating hormone signals into therapeutic and diagnostic applications for dementia has been undervalued. Methods Using an integrative knowledge- and data-driven approach, a global hormone interaction network in the context of dementia was constructed, which was further filtered down to a model of convergent hormone signaling pathways. This model was evaluated for its biological and clinical relevance through a pathway recovery test, evidence-based analysis, and biomarker-guided analysis. Translational validation of the model was performed using the proposed novel mechanism discovery approach based on 'serendipitous off-target effects'. Results Our results reveal the existence of a well-connected hormone interaction network underlying dementia. Seven hormone signaling pathways converge at the core of the hormone interaction network and are shown to be mechanistically linked to the risk of dementia. Amongst these pathways, the estrogen signaling pathway plays the major role in the model, and the insulin signaling pathway is analyzed for its association with learning and memory functions. Validation of the model through serendipitous off-target effects suggests that hormone signaling pathways substantially contribute to the pathogenesis of dementia. Conclusions The integrated network model of hormone interactions underlying dementia may serve as an initial translational platform for identifying potential therapeutic targets and candidate biomarkers for dementia-spectrum disorders such as Alzheimer's disease. PMID:23885764

  16. Sample entropy analysis for the estimating depth of anaesthesia through human EEG signal at different levels of unconsciousness during surgeries.

    PubMed

    Liu, Quan; Ma, Li; Fan, Shou-Zen; Abbod, Maysam F; Shieh, Jiann-Shing

    2018-01-01

    Estimating the depth of anaesthesia (DoA) during operations has always been a challenging issue due to the underlying complexity of brain mechanisms. Electroencephalogram (EEG) signals are undoubtedly the most widely used signals for measuring DoA. In this paper, a novel EEG-based index is proposed to evaluate DoA for 24 patients receiving general anaesthesia with different levels of unconsciousness. The Sample Entropy (SampEn) algorithm was utilised to capture the chaotic features of the signals. After calculating SampEn from the EEG signals, Random Forest was utilised to develop learning regression models with the Bispectral index (BIS) as the target. The correlation coefficient, mean absolute error, and area under the curve (AUC) were used to verify the perioperative performance of the proposed method. Validation comparisons with typical nonstationary signal analysis methods (i.e., recurrence analysis and permutation entropy) and regression methods (i.e., neural network and support vector machine) were conducted. To further verify the accuracy and validity of the proposed methodology, the data were divided into four unconsciousness-level groups on the basis of BIS levels, and analysis of variance (ANOVA) was applied to the corresponding index (i.e., the regression output). Results indicate that the correlation coefficient improved to 0.72 ± 0.09 after filtering and to 0.90 ± 0.05 after regression from the initial value of 0.51 ± 0.17. Similarly, the final mean absolute error declined dramatically to 5.22 ± 2.12. In addition, the ultimate AUC increased to 0.98 ± 0.02, and the ANOVA analysis indicates that each of the four groups of different anaesthetic levels differed significantly from the nearest levels. Furthermore, the Random Forest output was largely linear in relation to BIS, giving better DoA prediction accuracy. In conclusion, the proposed method provides a concrete basis for monitoring patients' anaesthetic level during surgeries.
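
    For reference, a compact Python implementation of Sample Entropy in its standard SampEn(m, r) form is sketched below; the embedding dimension, tolerance factor and the synthetic test signals are the usual textbook choices and are assumptions here, and the Random Forest regression against BIS is not reproduced.

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Sample entropy SampEn(m, r) with r = r_factor * std(x): counts template pairs of
    length m and m+1 within Chebyshev distance r (self-matches excluded) and returns -ln(A/B)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    r = r_factor * np.std(x)
    n_templates = n - m                        # same number of templates for both lengths

    def count_matches(length):
        templates = np.array([x[i:i + length] for i in range(n_templates)])
        count = 0
        for i in range(n_templates - 1):
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += int(np.sum(d < r))
        return count

    B = count_matches(m)
    A = count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

# Regular signals give low SampEn, irregular signals give high SampEn (illustrative check)
rng = np.random.default_rng(7)
t = np.linspace(0, 4, 1000)
print("sine :", round(sample_entropy(np.sin(2 * np.pi * 5 * t)), 3))
print("noise:", round(sample_entropy(rng.standard_normal(1000)), 3))
```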

  17. Compressive Sensing of Roller Bearing Faults via Harmonic Detection from Under-Sampled Vibration Signals

    PubMed Central

    Tang, Gang; Hou, Wei; Wang, Huaqing; Luo, Ganggang; Ma, Jianwei

    2015-01-01

    The Shannon sampling principle requires substantial amounts of data to ensure the accuracy of on-line monitoring of roller bearing fault signals, and challenges are often encountered as a result of this cumbersome data monitoring. A novel method focused on compressed vibration signals for detecting roller bearing faults is therefore developed in this study. Considering that harmonics often represent the fault characteristic frequencies in vibration signals, a compressive sensing frame of characteristic harmonics is proposed to detect bearing faults. A compressed vibration signal is first acquired from a sensing matrix with information preserved through a well-designed sampling strategy. A reconstruction process for the under-sampled vibration signal is then pursued, as attempts are made to detect the characteristic harmonics from sparse measurements through a compressive matching pursuit strategy. In the proposed method, bearing fault features depend on the existence of characteristic harmonics, which are typically detected directly from the compressed data well before reconstruction is complete. Sampling and detection may then be performed simultaneously without complete recovery of the under-sampled signals. The effectiveness of the proposed method is validated by simulations and experiments. PMID:26473858
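
    The compressive matching pursuit of the cited work is not reproduced, but the core idea of detecting characteristic harmonics directly in the compressed domain can be sketched as below: a random Gaussian sensing matrix compresses the vibration signal, and candidate harmonic atoms are correlated with the compressed measurements without any reconstruction. The fault frequency, compression ratio and candidate grid are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)

# Full-rate vibration signal: a fault characteristic frequency and two harmonics in heavy noise
fs, n = 12000, 4096
t = np.arange(n) / fs
f_fault = 157.0                                          # assumed fault characteristic frequency (Hz)
x = sum(a * np.cos(2 * np.pi * k * f_fault * t) for k, a in [(1, 1.0), (2, 0.6), (3, 0.4)])
x = x + 1.5 * rng.standard_normal(n)

# Compressive measurements: random Gaussian sensing matrix, ~12% of the full-rate samples
m = 512
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = Phi @ x

# Correlate the compressed measurements with compressed harmonic atoms (no reconstruction needed)
candidates = np.arange(50.0, 400.0, 1.0)
scores = np.empty(candidates.size)
for i, f in enumerate(candidates):
    atom_c = Phi @ np.cos(2 * np.pi * f * t)
    atom_s = Phi @ np.sin(2 * np.pi * f * t)
    scores[i] = np.hypot(y @ atom_c, y @ atom_s)         # phase-insensitive correlation score

print("strongest candidate frequency: %.1f Hz" % candidates[np.argmax(scores)])
```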

  18. An Internet of Things based physiological signal monitoring and receiving system for virtual enhanced health care network.

    PubMed

    Rajan, J Pandia; Rajan, S Edward

    2018-01-01

    Designing a wireless physiological signal monitoring system with secure data communication for the health care system is an important and dynamic process. We propose a signal monitoring system using NI myRIO connected to a wireless body sensor network through a multi-channel signal acquisition method. Based on server-side validation of the signal, the data held on the local server are updated in the cloud. The Internet of Things (IoT) architecture is used to give healthcare service providers mobile, fast access to patient data. This research work proposes a novel architecture for a wireless physiological signal monitoring system using ubiquitous healthcare services through a virtual Internet of Things. We show improved access and real-time dynamic monitoring of physiological signals in this remote monitoring system using the virtual Internet of Things approach, and the remote monitoring and access system is evaluated against conventional systems. The proposed system is envisioned for modern smart health care, offering high utility and user-friendliness in clinical applications. We claim that the proposed scheme significantly improves the accuracy of the remote monitoring system compared to other wireless communication methods in clinical systems.

  19. Mover Position Detection for PMTLM Based on Linear Hall Sensors through EKF Processing

    PubMed Central

    Yan, Leyang; Zhang, Hui; Ye, Peiqing

    2017-01-01

    Accurate mover position is vital for a permanent magnet tubular linear motor (PMTLM) control system. In this paper, two linear Hall sensors are utilized to detect the mover position. However, Hall sensor signals contain third-order harmonics, creating errors in mover position detection. To filter out the third-order harmonics, a signal processing method based on the extended Kalman filter (EKF) is presented. The limitation of conventional processing method is first analyzed, and then EKF is adopted to detect the mover position. In the EKF model, the amplitude of the fundamental component and the percentage of the harmonic component are taken as state variables, and they can be estimated based solely on the measured sensor signals. Then, the harmonic component can be calculated and eliminated. The proposed method has the advantages of faster convergence, better stability and higher accuracy. Finally, experimental results validate the effectiveness and superiority of the proposed method. PMID:28383505

  20. FastICA peel-off for ECG interference removal from surface EMG.

    PubMed

    Chen, Maoqi; Zhang, Xu; Chen, Xiang; Zhu, Mingxing; Li, Guanglin; Zhou, Ping

    2016-06-13

    Multi-channel recordings of surface electromyographic (EMG) signals are very likely to be contaminated by electrocardiographic (ECG) interference, specifically when the surface electrodes are placed on muscles close to the heart. A novel fast independent component analysis (FastICA) based peel-off method is presented to remove ECG interference contaminating multi-channel surface EMG signals. Although demonstrating spatial variability in waveform shape, the ECG interference in different channels shares the same firing instants. Utilizing the firing information estimated by FastICA, the ECG interference can be separated from the surface EMG by a "peel-off" process. The performance of the method was quantified with synthetic signals created by combining a series of experimentally recorded "clean" surface EMG and "pure" ECG interference. It was demonstrated that the new method can remove ECG interference efficiently with little distortion to surface EMG amplitude and frequency. The proposed method was also validated using experimental surface EMG signals contaminated by ECG interference. The proposed FastICA peel-off method can be used as a new and practical solution for eliminating ECG interference from multichannel EMG recordings.
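
    A much-simplified sketch of ICA-based ECG removal is given below using scikit-learn's FastICA: the ECG-like component is identified by its high kurtosis and zeroed before back-projection. The synthetic mixture, the kurtosis criterion and the channel count are illustrative assumptions; the cited peel-off method instead uses the estimated firing instants and is not reproduced here.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(9)
fs, n_ch, n = 1000, 8, 10000
t = np.arange(n) / fs

# Synthetic data: independent broadband "EMG" per channel plus one shared ECG-like spike train
ecg = np.zeros(n)
ecg[::int(0.8 * fs)] = 1.0                               # R waves every 0.8 s
ecg = np.convolve(ecg, np.hanning(40), mode="same") * 5.0
emg = rng.standard_normal((n_ch, n))
mix = rng.uniform(0.5, 2.0, size=(n_ch, 1))              # channel-dependent ECG projection
X = emg + mix * ecg                                      # channels x samples

ica = FastICA(n_components=n_ch, random_state=0, max_iter=1000)
S = ica.fit_transform(X.T)                               # samples x components

# Identify the ECG-like component by its high kurtosis (spiky waveform)
kurt = np.mean(S ** 4, axis=0) / np.mean(S ** 2, axis=0) ** 2 - 3.0
ecg_idx = int(np.argmax(kurt))

S_clean = S.copy()
S_clean[:, ecg_idx] = 0.0                                # "peel off" the ECG component
X_clean = ica.inverse_transform(S_clean).T

print("RMS of channel 0 before / after: %.2f / %.2f"
      % (np.sqrt(np.mean(X[0] ** 2)), np.sqrt(np.mean(X_clean[0] ** 2))))
```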

  1. 75 FR 56059 - Patent Examiner Technical Training Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-15

    ...); statistical methods in validation of microarray data; personalized medicine, manufacture of carbon nanospheres... processing, growing monocrystals, hydrogen production, liquid and gas purification and separation, making... Systems and Components: Mixed signal design and architecture, flexible displays, OLED display technology...

  2. Digital Sequences and a Time Reversal-Based Impact Region Imaging and Localization Method

    PubMed Central

    Qiu, Lei; Yuan, Shenfang; Mei, Hanfei; Qian, Weifeng

    2013-01-01

    To reduce the time and cost of damage inspection, on-line impact monitoring of aircraft composite structures is needed. A digital monitor based on an array of piezoelectric transducers (PZTs) was developed to record impact regions on-line; it is small, lightweight and has low power consumption, but the impact alarm region localization method of the digital monitor has two problems at the current stage. The first is that the accuracy rate of the impact alarm region localization is low, especially on complex composite structures. The second is that the impact alarm region is large when a large-scale structure is monitored with a limited number of PZTs, which increases the time and cost of damage inspections. To solve these two problems, an impact alarm region imaging and localization method based on digital sequences and time reversal is proposed. In this method, the frequency band of the impact response signals is first estimated based on the digital sequences. Then, characteristic signals of the impact response signals are constructed using sinusoidal modulation signals. Finally, the phase synthesis time reversal impact imaging method is adopted to obtain the impact region image; from this image, an error ellipse is generated to give the final impact alarm region. A validation experiment was implemented on a complex composite wing box of a real aircraft. The validation results show that the accuracy rate of impact alarm region localization is approximately 100%, the area of the impact alarm region can be reduced, and the number of PZTs needed to cover the same impact monitoring region is reduced by more than half. PMID:24084123

  3. Life-threatening false alarm rejection in ICU: using the rule-based and multi-channel information fusion method.

    PubMed

    Liu, Chengyu; Zhao, Lina; Tang, Hong; Li, Qiao; Wei, Shoushui; Li, Jianqing

    2016-08-01

    False alarm (FA) rates as high as 86% have been reported in intensive care unit monitors. High FA rates decrease quality of care by slowing staff response times while increasing patient burden and stress. In this study, we proposed a rule-based and multi-channel information fusion method for accurately classifying true and false alarms for five life-threatening arrhythmias: asystole (ASY), extreme bradycardia (EBR), extreme tachycardia (ETC), ventricular tachycardia (VTA) and ventricular flutter/fibrillation (VFB). The proposed method consisted of five steps: (1) signal pre-processing, (2) feature detection and validation, (3) true/false alarm determination for each channel, (4) 'real-time' true/false alarm determination and (5) 'retrospective' true/false alarm determination (if needed). Up to four signal channels, that is, two electrocardiogram signals, one arterial blood pressure and/or one photoplethysmogram signal, were included in the analysis. Two events were set for the method validation: event 1 for 'real-time' and event 2 for 'retrospective' alarm classification. The results showed that a 100% true positive rate (i.e. sensitivity) was obtained on the training set for the ASY, EBR, ETC and VFB types, and 94% for the VTA type, with corresponding true negative rates (i.e. specificity) of 93%, 81%, 78%, 85% and 50%, respectively, resulting in score values of 96.50, 90.70, 88.89, 92.31 and 64.90, and final scores of 80.57 for event 1 and 79.12 for event 2. For the test set, the proposed method obtained scores of 88.73 for ASY, 77.78 for EBR, 89.92 for ETC, 67.74 for VFB and 61.04 for VTA, with final scores of 71.68 for event 1 and 75.91 for event 2.
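
    A toy illustration of the per-channel decision and fusion steps (3)-(4) for an asystole alarm is sketched below; the beat lists, quality indices and fusion rule are placeholders for illustration only, not the authors' actual rules.

    ```python
    # Illustrative per-channel voting and conservative fusion for an asystole alarm.
    import numpy as np

    def channel_votes(beat_times_per_channel, quality_per_channel, alarm_window,
                      quality_min=0.8):
        """One vote per usable channel: True = alarm confirmed (no beat in window)."""
        votes = []
        t0, t1 = alarm_window
        for beats, q in zip(beat_times_per_channel, quality_per_channel):
            if q < quality_min:            # skip channels with poor signal quality
                continue
            beats = np.asarray(beats)
            beat_in_window = beats.size > 0 and np.any((beats >= t0) & (beats <= t1))
            votes.append(not beat_in_window)
        return votes

    def fuse(votes):
        """Reject the alarm only if some usable channel gives clear counter-evidence."""
        if not votes:
            return True                    # no usable channel: keep the alarm
        return all(votes)

    # Example: one noisy ECG channel, one clean ECG channel, one PPG channel.
    votes = channel_votes(
        beat_times_per_channel=[[], [291.2, 292.1, 293.0], [291.3, 292.2]],
        quality_per_channel=[0.9, 0.95, 0.85],
        alarm_window=(290.0, 300.0))
    print("true alarm" if fuse(votes) else "false alarm")   # -> false alarm
    ```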

  4. Novel pure component contribution, mean centering of ratio spectra and factor based algorithms for simultaneous resolution and quantification of overlapped spectral signals: An application to recently co-formulated tablets of chlorzoxazone, aceclofenac and paracetamol

    NASA Astrophysics Data System (ADS)

    Toubar, Safaa S.; Hegazy, Maha A.; Elshahed, Mona S.; Helmy, Marwa I.

    2016-06-01

    In this work, resolution and quantitation of spectral signals are achieved by several univariate and multivariate techniques. The novel pure component contribution algorithm (PCCA), along with mean centering of ratio spectra (MCR) and the factor-based partial least squares (PLS) algorithm, was developed for the simultaneous determination of chlorzoxazone (CXZ), aceclofenac (ACF) and paracetamol (PAR) in their pure form and in recently co-formulated tablets. The PCCA method allows the determination of each drug at its λmax, while the mean-centered values at 230, 302 and 253 nm were used for quantification of CXZ, ACF and PAR, respectively, in the MCR method. The partial least squares (PLS) algorithm was applied as a multivariate calibration method. The three methods were successfully applied for the determination of CXZ, ACF and PAR in pure form and in tablets. Good linear relationships were obtained in the ranges of 2-50, 2-40 and 2-30 μg mL-1 for CXZ, ACF and PAR, respectively, by both PCCA and MCR, while the PLS model was built for the three compounds, each in the range of 2-10 μg mL-1. The results obtained from the proposed methods were statistically compared with those of a reported method. The PCCA and MCR methods were validated according to ICH guidelines, while the PLS method was validated by both cross-validation and an independent data set. The methods were found suitable for the determination of the studied drugs in bulk powder and tablets.
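
    The factor-based (PLS) step can be reproduced in outline with scikit-learn; the spectra, wavelength grid and noise level below are synthetic placeholders rather than the paper's data.

    ```python
    # Sketch of a three-component PLS calibration on synthetic spectra.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_samples, n_wavelengths = 25, 200
    concentrations = rng.uniform(2, 10, size=(n_samples, 3))   # CXZ, ACF, PAR (ug/mL)
    pure_spectra = rng.random((3, n_wavelengths))              # stand-in pure spectra
    absorbance = (concentrations @ pure_spectra
                  + 0.01 * rng.standard_normal((n_samples, n_wavelengths)))

    pls = PLSRegression(n_components=3)
    # Cross-validated R^2 as a quick check of the calibration model.
    print(cross_val_score(pls, absorbance, concentrations, cv=5).mean())
    pls.fit(absorbance, concentrations)
    predicted = pls.predict(absorbance)     # predicted concentrations of the 3 drugs
    ```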

  5. A surface acoustic wave response detection method for passive wireless torque sensor

    NASA Astrophysics Data System (ADS)

    Fan, Yanping; Kong, Ping; Qi, Hongli; Liu, Hongye; Ji, Xiaojun

    2018-01-01

    This paper presents an effective surface acoustic wave (SAW) response detection method for a passive wireless SAW torque sensor to improve measurement accuracy. An analysis was conducted on the relationship between the response energy-entropy and the bandwidth of the SAW resonator (SAWR). A self-correlation method was modified to suppress blurred white noise and highlight the attenuation characteristic of the wireless SAW response. The SAW response was detected according to both the variation and the duration of the energy-entropy ascension of an acquired RF signal. Numerical simulation results showed that the SAW response can be detected even when the signal-to-noise ratio (SNR) is 6 dB. The proposed SAW response detection method was evaluated in several experiments under different conditions. The SAW response can be well distinguished from the sinusoidal signal and the noise. The performance of the SAW torque measurement system incorporating the detection method was tested. The obtained repeatability error was 0.23% and the linearity was 0.9934, indicating the validity of the detection method.

  6. An enhanced data visualization method for diesel engine malfunction classification using multi-sensor signals.

    PubMed

    Li, Yiqing; Wang, Yu; Zi, Yanyang; Zhang, Mingquan

    2015-10-21

    The various multi-sensor signal features from a diesel engine constitute a complex high-dimensional dataset. The non-linear dimensionality reduction method, t-distributed stochastic neighbor embedding (t-SNE), provides an effective way to implement data visualization for complex high-dimensional data. However, irrelevant features can deteriorate the performance of data visualization, and thus, should be eliminated a priori. This paper proposes a feature subset score based t-SNE (FSS-t-SNE) data visualization method to deal with the high-dimensional data that are collected from multi-sensor signals. In this method, the optimal feature subset is constructed by a feature subset score criterion. Then the high-dimensional data are visualized in 2-dimension space. According to the UCI dataset test, FSS-t-SNE can effectively improve the classification accuracy. An experiment was performed with a large power marine diesel engine to validate the proposed method for diesel engine malfunction classification. Multi-sensor signals were collected by a cylinder vibration sensor and a cylinder pressure sensor. Compared with other conventional data visualization methods, the proposed method shows good visualization performance and high classification accuracy in multi-malfunction classification of a diesel engine.
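
    The select-then-embed idea behind FSS-t-SNE can be sketched roughly as follows; the ANOVA F-score is used here as a generic stand-in for the paper's feature subset score criterion, and scikit-learn's t-SNE supplies the 2-D embedding.

    ```python
    # Sketch: score features, keep the best subset, then embed with t-SNE.
    import numpy as np
    from sklearn.feature_selection import f_classif
    from sklearn.manifold import TSNE

    def fss_tsne(features, labels, n_keep=20, perplexity=30.0):
        """features: (n_samples, n_features); labels are used only for scoring."""
        scores, _ = f_classif(features, labels)          # stand-in feature score
        keep = np.argsort(scores)[::-1][:n_keep]         # highest-scoring subset
        embedding = TSNE(n_components=2, perplexity=perplexity,
                         init="pca", random_state=0).fit_transform(features[:, keep])
        return embedding, keep
    ```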

  7. An Enhanced Data Visualization Method for Diesel Engine Malfunction Classification Using Multi-Sensor Signals

    PubMed Central

    Li, Yiqing; Wang, Yu; Zi, Yanyang; Zhang, Mingquan

    2015-01-01

    The various multi-sensor signal features from a diesel engine constitute a complex high-dimensional dataset. The non-linear dimensionality reduction method, t-distributed stochastic neighbor embedding (t-SNE), provides an effective way to implement data visualization for complex high-dimensional data. However, irrelevant features can deteriorate the performance of data visualization, and thus, should be eliminated a priori. This paper proposes a feature subset score based t-SNE (FSS-t-SNE) data visualization method to deal with the high-dimensional data that are collected from multi-sensor signals. In this method, the optimal feature subset is constructed by a feature subset score criterion. Then the high-dimensional data are visualized in 2-dimension space. According to the UCI dataset test, FSS-t-SNE can effectively improve the classification accuracy. An experiment was performed with a large power marine diesel engine to validate the proposed method for diesel engine malfunction classification. Multi-sensor signals were collected by a cylinder vibration sensor and a cylinder pressure sensor. Compared with other conventional data visualization methods, the proposed method shows good visualization performance and high classification accuracy in multi-malfunction classification of a diesel engine. PMID:26506347

  8. Reconstruction of sound source signal by analytical passive TR in the environment with airflow

    NASA Astrophysics Data System (ADS)

    Wei, Long; Li, Min; Yang, Debin; Niu, Feng; Zeng, Wu

    2017-03-01

    In the acoustic design of air vehicles, the time-domain signals of noise sources on the surface of air vehicles can serve as data support to reveal the noise source generation mechanism, analyze acoustic fatigue, and take measures for noise insulation and reduction. To rapidly reconstruct the time-domain sound source signals in an environment with flow, a method combining the analytical passive time reversal mirror (AP-TR) with a shear flow correction is proposed. In this method, the negative influence of flow on sound wave propagation is suppressed by the shear flow correction, yielding a corrected acoustic propagation time delay and path. The corrected time delay and path, together with the microphone array signals, are then submitted to the AP-TR, reconstructing more accurate sound source signals in the environment with airflow. As an analytical method, AP-TR offers a supplementary way, instead of numerical TR, to reconstruct the sound source signal in 3D space in an environment with airflow. Experiments on the reconstruction of the sound source signals of a pair of loudspeakers were conducted in an anechoic wind tunnel with subsonic airflow to validate the effectiveness and advantages of the proposed method. Moreover, a theoretical and experimental comparison between the AP-TR and time-domain beamforming for reconstructing the sound source signal is also discussed.

  9. Unconstrained and Noninvasive Measurement of Swimming Behavior of Small Fish Based on Ventilatory Signals

    NASA Astrophysics Data System (ADS)

    Kitayama, Shigehisa; Soh, Zu; Hirano, Akira; Tsuji, Toshio; Takiguchi, Noboru; Ohtake, Hisao

    The ventilatory signal is a bioelectric signal reflecting the ventilatory condition of fish, and has received recent attention as an indicator for the assessment of water quality, since breathing is adjusted by the respiratory center according to changes in the underwater environment surrounding the fish. Such signals are thus beginning to be used in bioassay systems for water examination. Besides ventilatory condition, swimming behavior also contains important information for water examination. Conventional bioassay systems, however, measure only either ventilatory signals or swimming behavior. This paper proposes a new unconstrained and noninvasive measurement method that is capable of conducting ventilatory signal measurement and behavioral analysis of fish at the same time. The proposed method estimates the position and velocity of a fish in free-swimming conditions using the power spectrum distribution of ventilatory signals measured from multiple electrodes. This allows the system to avoid using a camera, which requires light sources. In order to validate the estimation accuracy, the position and velocity estimated by the proposed method were compared to those obtained from video analysis. The results confirmed that the estimation error of the fish position was within the size of the fish, and the correlation coefficient between the velocities was 0.906. The proposed method thus not only measures the ventilatory signals, but also performs behavioral analysis as accurately as a video camera.

  10. Detection of interference phase by digital computation of quadrature signals in homodyne laser interferometry.

    PubMed

    Rerucha, Simon; Buchta, Zdenek; Sarbort, Martin; Lazar, Josef; Cip, Ondrej

    2012-10-19

    We have proposed an approach to interference phase extraction in homodyne laser interferometry. The method employs a series of computational steps to reconstruct the signals for quadrature detection from an interference signal from a non-polarising interferometer sampled by a simple photodetector. The complexity trade-off is the use of a laser beam with frequency modulation capability. The method is analytically derived, and its validity and performance are experimentally verified. It has proven to be a feasible alternative to traditional homodyne detection since it performs with comparable accuracy, especially where the complexity of the optical setup is a principal issue and the modulation of the laser beam is not a heavy burden (e.g., in multi-axis sensors or laser diode based systems).
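
    Once the in-phase and quadrature signals have been reconstructed, the interference phase follows from a four-quadrant arctangent; the sketch below assumes the quadrature pair is already available and only illustrates the phase-to-displacement step, with the double-pass scaling stated as an assumption.

    ```python
    # Phase extraction from an (I, Q) quadrature pair; the lambda/2-per-fringe
    # scaling assumes a double-pass homodyne configuration.
    import numpy as np

    def phase_to_displacement(i_signal, q_signal, wavelength_m=633e-9):
        phase = np.unwrap(np.arctan2(q_signal, i_signal))   # interference phase, rad
        # 2*pi of phase corresponds to lambda/2 of optical path change (assumption).
        return phase * wavelength_m / (4.0 * np.pi)
    ```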

  11. Guided wave imaging of oblique reflecting interfaces in pipes using common-source synthetic focusing

    NASA Astrophysics Data System (ADS)

    Sun, Zeqing; Sun, Anyu; Ju, Bing-Feng

    2018-04-01

    Cross-mode-family mode conversion and secondary reflection of guided waves in pipes complicate the processing of guided wave signals and can cause false detections. In this paper, filters operating in the spectral domain of wavenumber, circumferential order and frequency are designed to suppress the signal components of unwanted mode families and unwanted traveling directions. Common-source synthetic focusing is used to reconstruct defect images from the guided wave signals. Simulations of the reflections from linear oblique defects and a semicircular defect are separately implemented. Defect images reconstructed from the simulation results under different excitation conditions are comparatively studied in terms of axial resolution, reflection amplitude, detectable oblique angle and so on. Further, the proposed method is experimentally validated by detecting linear cracks with various oblique angles (10-40°). The proposed method relies on guided wave signals captured during 2-D scanning of a cylindrical area on the pipe. The redundancy of the signals is analyzed to reduce the time consumption of the scanning process and to enhance the practicability of the proposed method.

  12. Initial Validation for the Estimation of Resting-State fMRI Effective Connectivity by a Generalization of the Correlation Approach.

    PubMed

    Xu, Nan; Spreng, R Nathan; Doerschuk, Peter C

    2017-01-01

    Resting-state functional MRI (rs-fMRI) is widely used to noninvasively study human brain networks. Network functional connectivity is often estimated by calculating the timeseries correlation between blood-oxygen-level dependent (BOLD) signals from different regions of interest (ROIs). However, standard correlation cannot characterize the direction of information flow between regions. In this paper, we introduce and test a new concept, prediction correlation, to estimate effective connectivity in functional brain networks from rs-fMRI. In this approach, the correlation between two BOLD signals is replaced by a correlation between one BOLD signal and a prediction of this signal via a causal system driven by another BOLD signal. Three validations are described: (1) Prediction correlation performed well on simulated data where the ground truth was known, and outperformed four other methods. (2) On simulated data designed to display the "common driver" problem, prediction correlation did not introduce false connections between non-interacting driven ROIs. (3) On experimental data, prediction correlation recovered the previously identified network organization of the human brain. Prediction correlation scales well to work with hundreds of ROIs, enabling it to assess whole-brain interregional connectivity at the single-subject level. These results provide an initial validation that prediction correlation can capture the direction of information flow and estimate the duration of extended temporal delays in information flow between ROIs based on the BOLD signal. This approach not only maintains the high sensitivity to network connectivity provided by correlation analysis, but also performs well in the estimation of causal information flow in the brain.
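
    A minimal illustration of prediction correlation is given below: one time series is predicted from the past of another through a causal FIR model fitted by least squares, and the correlation is taken between prediction and target. The model order and the least-squares fit are assumptions for illustration, not the authors' exact causal system.

    ```python
    # Toy prediction correlation: correlate y with a causal prediction of y driven by x.
    import numpy as np

    def prediction_correlation(x, y, order=5):
        X = np.column_stack([np.roll(x, k) for k in range(1, order + 1)])
        X[:order, :] = 0.0                      # discard wrapped-around samples
        coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
        y_hat = X @ coeffs                      # causal prediction of y from past x
        return np.corrcoef(y_hat[order:], y[order:])[0, 1]

    # Directionality check: compare prediction_correlation(x, y) with
    # prediction_correlation(y, x); an asymmetry suggests a direction of information flow.
    ```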

  13. Two-dimensional interferometric Rayleigh scattering velocimetry using multibeam probe laser

    NASA Astrophysics Data System (ADS)

    Sheng, Wang; Jin-Hai, Si; Jun, Shao; Zhi-yun, Hu; Jing-feng, Ye; Jing-Ru, Liu

    2017-11-01

    In order to achieve two-dimensional (2-D) velocity measurement of a flow field under extreme conditions, a 2-D interferometric Rayleigh scattering (IRS) velocimetry using a multibeam probe laser was developed. The method using a multibeam probe laser can record the reference interference signal and the flow interference signal simultaneously. Moreover, it avoids the signal overlap problem of the laser sheet detection method. The 2-D IRS measurement system was set up with a multibeam probe laser, aspherical lens collection optics, and a solid Fabry-Perot etalon. A multibeam probe laser with 0.5-mm intervals was formed by collimating a laser sheet passing through a cylindrical microlens array. The aspherical lens was used to enhance the intensity of the Rayleigh scattering signal. The 2-D velocity field of a Mach 1.5 air flow was obtained. The velocity in the flow center is about 450 m/s. The reconstructed results fit well with the characteristics of the flow, which indicates the validity of this technique.

  14. A method for the automatic reconstruction of fetal cardiac signals from magnetocardiographic recordings

    NASA Astrophysics Data System (ADS)

    Mantini, D.; Alleva, G.; Comani, S.

    2005-10-01

    Fetal magnetocardiography (fMCG) allows monitoring the fetal heart function through algorithms able to retrieve the fetal cardiac signal, but no standardized automatic model has become available so far. In this paper, we describe an automatic method that restores the fetal cardiac trace from fMCG recordings by means of a weighted summation of fetal components separated with independent component analysis (ICA) and identified through dedicated algorithms that analyse the frequency content and temporal structure of each source signal. Multichannel fMCG datasets of 66 healthy and 4 arrhythmic fetuses were used to validate the automatic method with respect to a classical procedure requiring the manual classification of fetal components by an expert investigator. ICA was run with input clusters of different dimensions to simulate various MCG systems. Detection rates, true negative and false positive component categorization, QRS amplitude, standard deviation and signal-to-noise ratio of reconstructed fetal signals, and real and per cent QRS differences between paired fetal traces retrieved automatically and manually were calculated to quantify the performances of the automatic method. Its robustness and reliability, particularly evident with the use of large input clusters, might increase the diagnostic role of fMCG during the prenatal period.

  15. Ultrasensitive NIR-SERRS Probes with Multiplexed Ratiometric Quantification for In Vivo Antibody Leads Validation.

    PubMed

    Kang, Homan; Jeong, Sinyoung; Jo, Ahla; Chang, Hyejin; Yang, Jin-Kyoung; Jeong, Cheolhwan; Kyeong, San; Lee, Youn Woo; Samanta, Animesh; Maiti, Kaustabh Kumar; Cha, Myeong Geun; Kim, Taek-Keun; Lee, Sukmook; Jun, Bong-Hyun; Chang, Young-Tae; Chung, Junho; Lee, Ho-Young; Jeong, Dae Hong; Lee, Yoon-Sik

    2018-02-01

    The immunotargeting ability of antibodies may differ significantly between in vitro and in vivo settings. To select antibody leads with high affinity and specificity, it is necessary to perform in vivo validation of antibody candidates following in vitro antibody screening. Herein, a robust in vivo validation of anti-tetraspanin-8 antibody candidates against human colon cancer using a ratiometric quantification method is reported. The validation is performed on a single mouse and analyzed by multiplexed surface-enhanced Raman scattering using ultrasensitive and near-infrared (NIR)-active surface-enhanced resonance Raman scattering nanoprobes (NIR-SERRS dots). The NIR-SERRS dots are composed of NIR-active labels and Au/Ag hollow-shell assembled silica nanospheres. 93% of the NIR-SERRS dots are detectable at the single-particle level, and the signal intensity is 100-fold stronger than that from nonresonant molecule-labeled spherical Au NPs (80 nm). The result of the SERRS-based antibody validation is comparable to that of the conventional method using single-photon-emission computed tomography. The NIR-SERRS-based strategy is an alternative validation method which provides cost-effective and accurate multiplexed measurements for antibody-based drug development. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. Derivation of respiration rate from ambulatory ECG and PPG using Ensemble Empirical Mode Decomposition: Comparison and fusion.

    PubMed

    Orphanidou, Christina

    2017-02-01

    A new method for extracting the respiratory rate from ECG and PPG obtained via wearable sensors is presented. The proposed technique employs Ensemble Empirical Mode Decomposition in order to identify the respiration "mode" from the noise-corrupted Heart Rate Variability/Pulse Rate Variability and Amplitude Modulation signals extracted from ECG and PPG signals. The technique was validated with respect to a Respiratory Impedance Pneumography (RIP) signal using the mean absolute and the average relative errors for a group of ambulatory hospital patients. We compared approaches using single respiration-induced modulations of the ECG and PPG signals with approaches fusing the different modulations. Additionally, we investigated whether the presence of both the simultaneously recorded ECG and PPG signals provided a benefit to overall system performance. Our method outperformed state-of-the-art ECG- and PPG-based algorithms and gave the best results over the whole database, with a mean error of 1.8 bpm for 1-min estimates when using the fused ECG modulations, corresponding to a relative error of 10.3%. No statistically significant differences were found when comparing the ECG-, PPG- and ECG/PPG-based approaches, indicating that the PPG can be used as a valid alternative to the ECG for applications using wearable sensors. While the presence of both the ECG and PPG signals did not improve the estimation error, it increased the proportion of windows for which an estimate was obtained by at least 9%, indicating that the use of two simultaneously recorded signals might be desirable in high-acuity cases where an RR estimate is required more frequently. Copyright © 2016 Elsevier Ltd. All rights reserved.
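
    The mode-selection step can be sketched as follows, assuming the IMFs from any EEMD implementation are already available; the respiratory band limits and the dominant-frequency criterion are illustrative assumptions, not the paper's exact rule.

    ```python
    # Pick the IMF whose dominant frequency falls in a plausible respiratory band
    # and read off the respiratory rate from its spectral peak.
    import numpy as np
    from scipy.signal import welch

    def respiratory_rate_from_imfs(imfs, fs, band=(0.1, 0.5)):
        """imfs: array (n_imfs, n_samples); fs: sampling rate in Hz. Returns bpm or None."""
        best_imf, best_power = None, -np.inf
        for imf in imfs:
            f, pxx = welch(imf, fs=fs, nperseg=min(len(imf), 256))
            peak_f = f[np.argmax(pxx)]
            in_band = (f >= band[0]) & (f <= band[1])
            power = pxx[in_band].sum()
            if band[0] <= peak_f <= band[1] and power > best_power:
                best_imf, best_power = imf, power
        if best_imf is None:
            return None
        f, pxx = welch(best_imf, fs=fs, nperseg=min(len(best_imf), 256))
        return 60.0 * f[np.argmax(pxx)]      # breaths per minute
    ```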

  17. A Blade Tip Timing Method Based on a Microwave Sensor

    PubMed Central

    Zhang, Jilong; Duan, Fajie; Niu, Guangyue; Jiang, Jiajia; Li, Jie

    2017-01-01

    Blade tip timing is an effective method for blade vibration measurements in turbomachinery. This method is increasing in popularity because it is non-intrusive and has several advantages over the conventional strain gauge method. Different kinds of sensors have been developed for blade tip timing, including optical, eddy current and capacitance sensors. However, these sensors are unsuitable in environments with contaminants or high temperatures. Microwave sensors offer a promising potential solution to overcome these limitations. In this article, a microwave sensor-based blade tip timing measurement system is proposed. A patch antenna probe is used to transmit and receive the microwave signals. The signal model and process method is analyzed. Zero intermediate frequency structure is employed to maintain timing accuracy and dynamic performance, and the received signal can also be used to measure tip clearance. The timing method uses the rising and falling edges of the signal and an auto-gain control circuit to reduce the effect of tip clearance change. To validate the accuracy of the system, it is compared experimentally with a fiber optic tip timing system. The results show that the microwave tip timing system achieves good accuracy. PMID:28492469

  18. A Blade Tip Timing Method Based on a Microwave Sensor.

    PubMed

    Zhang, Jilong; Duan, Fajie; Niu, Guangyue; Jiang, Jiajia; Li, Jie

    2017-05-11

    Blade tip timing is an effective method for blade vibration measurements in turbomachinery. This method is increasing in popularity because it is non-intrusive and has several advantages over the conventional strain gauge method. Different kinds of sensors have been developed for blade tip timing, including optical, eddy current and capacitance sensors. However, these sensors are unsuitable in environments with contaminants or high temperatures. Microwave sensors offer a promising potential solution to overcome these limitations. In this article, a microwave sensor-based blade tip timing measurement system is proposed. A patch antenna probe is used to transmit and receive the microwave signals. The signal model and process method is analyzed. Zero intermediate frequency structure is employed to maintain timing accuracy and dynamic performance, and the received signal can also be used to measure tip clearance. The timing method uses the rising and falling edges of the signal and an auto-gain control circuit to reduce the effect of tip clearance change. To validate the accuracy of the system, it is compared experimentally with a fiber optic tip timing system. The results show that the microwave tip timing system achieves good accuracy.

  19. Comprehensive tire-road friction coefficient estimation based on signal fusion method under complex maneuvering operations

    NASA Astrophysics Data System (ADS)

    Li, L.; Yang, K.; Jia, G.; Ran, X.; Song, J.; Han, Z.-Q.

    2015-05-01

    The accurate estimation of the tire-road friction coefficient plays a significant role in vehicle dynamics control. The estimation method should be timely and reliable with respect to the control requirements, which means the contact friction characteristics between the tire and the road should be recognized in advance to ensure the safety of the driver and passengers against drifting and loss of control. In addition, the estimation method should be stable and feasible under complex maneuvering operations to guarantee control performance as well. A signal fusion method that combines the available signals to estimate the road friction is suggested in this paper, building on individual friction estimates for braking, driving and steering conditions. From the input characteristics and the states of the vehicle and tires obtained from sensors, the maneuvering condition is recognized; from this, the certainty factors of the friction estimates for the three conditions mentioned above are obtained, and the comprehensive road friction is then calculated. Experimental vehicle tests validate the effectiveness of the proposed method through complex maneuvering operations; the estimated road friction coefficient based on the signal fusion method is sufficiently timely and accurate to satisfy the control demands.

  20. Multiscale approach to the determination of the photoactive yellow protein signaling state ensemble.

    PubMed

    A Rohrdanz, Mary; Zheng, Wenwei; Lambeth, Bradley; Vreede, Jocelyne; Clementi, Cecilia

    2014-10-01

    The nature of the optical cycle of photoactive yellow protein (PYP) makes its elucidation challenging for both experiment and theory. The long transition times render conventional simulation methods ineffective, and yet the short signaling-state lifetime makes experimental data difficult to obtain and interpret. Here, through an innovative combination of computational methods, a prediction and analysis of the biological signaling state of PYP is presented. Coarse-grained modeling and a locally scaled diffusion map are first used to obtain a rough bird's-eye view of the free energy landscape of photo-activated PYP. Then all-atom reconstruction, followed by an enhanced sampling scheme, diffusion-map-directed molecular dynamics, is used to focus on the signaling-state region of configuration space and obtain an ensemble of signaling-state structures. To the best of our knowledge, this is the first time an all-atom reconstruction from a coarse-grained model has been performed in a relatively unexplored region of molecular configuration space. We compare our signaling-state prediction with previous computational and more recent experimental results, and the comparison is favorable, which validates the presented method. This approach provides additional insight into the PYP photocycle and can be applied to other systems for which more direct methods are impractical.

  1. A Full Dynamic Compound Inverse Method for output-only element-level system identification and input estimation from earthquake response signals

    NASA Astrophysics Data System (ADS)

    Pioldi, Fabio; Rizzi, Egidio

    2016-08-01

    This paper proposes a new output-only element-level system identification and input estimation technique for the simultaneous identification of modal parameters, input excitation time history and structural features at the element level from earthquake-induced structural response signals. The method, named Full Dynamic Compound Inverse Method (FDCIM), relaxes the strong assumptions of earlier element-level techniques by working with a two-stage iterative algorithm. Jointly, a Statistical Average technique, a modification process and a parameter projection strategy are adopted at each stage to achieve stronger convergence of the identified estimates. The proposed method works in a deterministic way and is completely developed in state-space form. Further, it does not require continuous- to discrete-time transformations and does not depend on initialization conditions. Synthetic earthquake-induced response signals from different shear-type buildings are generated to validate the implemented procedure, including noise-corrupted cases. The achieved results provide a necessary condition for demonstrating the effectiveness of the proposed identification method.

  2. Aircraft signal definition for flight safety system monitoring system

    NASA Technical Reports Server (NTRS)

    Gibbs, Michael (Inventor); Omen, Debi Van (Inventor)

    2003-01-01

    A system and method compares combinations of vehicle variable values against known combinations of potentially dangerous vehicle input signal values. Alarms and error messages are selectively generated based on such comparisons. An aircraft signal definition is provided to enable definition and monitoring of sets of aircraft input signals to customize such signals for different aircraft. The input signals are compared against known combinations of potentially dangerous values by operational software and hardware of a monitoring function. The aircraft signal definition is created using a text editor or custom application. A compiler receives the aircraft signal definition to generate a binary file that comprises the definition of all the input signals used by the monitoring function. The binary file also contains logic that specifies how the inputs are to be interpreted. The file is then loaded into the monitor function, where it is validated and used to continuously monitor the condition of the aircraft.

  3. A signal processing based analysis and prediction of seizure onset in patients with epilepsy

    PubMed Central

    Namazi, Hamidreza; Kulish, Vladimir V.

    2016-01-01

    One of the main areas of behavioural neuroscience is forecasting human behaviour. Epilepsy is a central nervous system disorder in which nerve cell activity in the brain becomes disrupted, causing seizures or periods of unusual behaviour, sensations and sometimes loss of consciousness. An estimated 5% of the world population experiences epileptic seizures, but there is no method to cure them, and more than 30% of people with epilepsy cannot control their seizures. Epileptic seizure prediction, which refers to forecasting the occurrence of epileptic seizures, is one of the most important but challenging problems in the biomedical sciences worldwide. In this research we propose a new methodology based on studying EEG signals using two measures, the Hurst exponent and the fractal dimension. In order to validate the proposed method, it is applied to epileptic EEG signals of patients by computing the Hurst exponent and fractal dimension, and the results are then validated against reference data. The results of these analyses show that we are able to forecast the onset of a seizure on average 25.76 seconds before the time of occurrence. PMID:26586477
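
    As an illustration of one of the two measures, a basic rescaled-range (R/S) estimate of the Hurst exponent is sketched below; it is a generic textbook estimator, not necessarily the exact algorithm used in the study.

    ```python
    # Rescaled-range (R/S) estimate of the Hurst exponent of a 1-D signal.
    import numpy as np

    def hurst_rs(signal, min_window=16):
        signal = np.asarray(signal, dtype=float)
        n = len(signal)
        window_sizes = np.unique(
            np.logspace(np.log10(min_window), np.log10(n // 2), 10).astype(int))
        rs_values = []
        for w in window_sizes:
            rs = []
            for start in range(0, n - w + 1, w):
                seg = signal[start:start + w]
                dev = np.cumsum(seg - seg.mean())    # cumulative deviation from mean
                r = dev.max() - dev.min()            # range
                s = seg.std()                        # standard deviation
                if s > 0:
                    rs.append(r / s)
            rs_values.append(np.mean(rs))
        # Hurst exponent = slope of log(R/S) versus log(window size).
        slope, _ = np.polyfit(np.log(window_sizes), np.log(rs_values), 1)
        return slope
    ```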

  4. An energy ratio feature extraction method for optical fiber vibration signal

    NASA Astrophysics Data System (ADS)

    Sheng, Zhiyong; Zhang, Xinyan; Wang, Yanping; Hou, Weiming; Yang, Dan

    2018-03-01

    The intrusion events in an optical fiber pre-warning system (OFPS) are divided into two types: harmful intrusion events and harmless interference events. At present, the signal feature extraction methods for these two types of events are usually designed from the time-domain point of view. However, the differences in time-domain characteristics between different harmful intrusion events are not obvious and cannot reflect their diversity in detail. We find that the spectral distributions of different intrusion signals show obvious differences. For this reason, the intrusion signal is transformed into the frequency domain. In this paper, an energy ratio feature extraction method for harmful intrusion events is presented. Firstly, the intrusion signals are pre-processed and the power spectral density (PSD) is calculated. Then, the energy ratios of different frequency bands are calculated, and the corresponding feature vector for each type of intrusion event is formed. A linear discriminant analysis (LDA) classifier is used to identify the harmful intrusion events. Experimental results show that the algorithm improves the recognition rate of the intrusion signals, further verifying the feasibility and validity of the algorithm.
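
    The band-energy-ratio pipeline can be sketched as follows with SciPy and scikit-learn; the band edges are placeholders, since the paper's actual frequency bands are not given in the abstract.

    ```python
    # Welch PSD -> band energy ratios -> LDA classifier (band edges are assumptions).
    import numpy as np
    from scipy.signal import welch
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def band_energy_ratios(signal, fs, bands=((0, 50), (50, 150), (150, 400))):
        f, pxx = welch(signal, fs=fs, nperseg=1024)
        total = np.trapz(pxx, f)
        ratios = []
        for lo, hi in bands:
            sel = (f >= lo) & (f < hi)
            ratios.append(np.trapz(pxx[sel], f[sel]) / total)
        return np.array(ratios)

    def train_classifier(signals, labels, fs):
        features = np.vstack([band_energy_ratios(s, fs) for s in signals])
        return LinearDiscriminantAnalysis().fit(features, labels)
    ```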

  5. Dynamic Time Warping compared to established methods for validation of musculoskeletal models.

    PubMed

    Gaspar, Martin; Welke, Bastian; Seehaus, Frank; Hurschler, Christof; Schwarze, Michael

    2017-04-11

    By means of multi-body musculoskeletal simulation, important variables such as internal joint forces and moments can be estimated which cannot be measured directly. Validation can be performed by qualitative or quantitative methods. Especially when comparing time-dependent signals, many methods do not perform well and validation is often limited to qualitative approaches. The aim of the present study was to investigate the capabilities of the Dynamic Time Warping (DTW) algorithm for comparing time series, which can quantify phase as well as amplitude errors. We contrast the sensitivity of DTW with other established metrics: the Pearson correlation coefficient, cross-correlation, the metric according to Geers, RMSE and normalized RMSE. This study is based on two data sets, where one data set represents direct validation and the other represents indirect validation. Direct validation was performed in the context of clinical gait analysis on trans-femoral amputees fitted with a 6-component force-moment sensor. Measured forces and moments from the amputees' socket prostheses are compared to simulated forces and moments. Indirect validation was performed in the context of surface EMG measurements on a cohort of healthy subjects, with measurements taken of seven muscles of the leg, which were compared to simulated muscle activations. Regarding direct validation, a positive linear relation between the results of RMSE and nRMSE and those of DTW can be seen. For indirect validation, a negative linear relation exists between Pearson correlation and cross-correlation. We propose the DTW algorithm for use in both direct and indirect quantitative validation as it correlates well with methods that are most suitable for one of the tasks. However, in direct validation it should be used together with methods that produce a dimensional error value, in order to make the results easier to interpret. Copyright © 2017 Elsevier Ltd. All rights reserved.
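
    For reference, a plain dynamic-programming DTW distance between two signals looks like the sketch below; normalization, windowing and the amplitude/phase error decomposition used in the study are not reproduced here.

    ```python
    # Classic dynamic-programming DTW distance between two 1-D signals.
    import numpy as np

    def dtw_distance(a, b):
        a, b = np.asarray(a, float), np.asarray(b, float)
        n, m = len(a), len(b)
        cost = np.full((n + 1, m + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = abs(a[i - 1] - b[j - 1])
                cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                     cost[i, j - 1],      # deletion
                                     cost[i - 1, j - 1])  # match
        return cost[n, m]
    ```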

  6. Bivariate empirical mode decomposition for ECG-based biometric identification with emotional data.

    PubMed

    Ferdinando, Hany; Seppanen, Tapio; Alasaarela, Esko

    2017-07-01

    Emotions modulate ECG signals such that they might affect ECG-based biometric identification in real-life applications. This motivates the search for feature extraction methods on which the emotional state of the subject has minimal impact. This paper evaluates feature extraction based on bivariate empirical mode decomposition (BEMD) for biometric identification when emotion is considered. Using ECG signals from the Mahnob-HCI database for affect recognition, the features were statistical distributions of the dominant frequency after applying BEMD analysis to the ECG signals. The achieved accuracy was 99.5% with high consistency, using a kNN classifier in 10-fold cross-validation to identify 26 subjects when the emotional states of the subjects were ignored. When the emotional states of the subjects were considered, the proposed method also delivered high accuracy, around 99.4%. We concluded that the proposed method offers emotion-independent features for ECG-based biometric identification. The proposed method needs further evaluation with other classifiers and with variations in ECG signals, e.g. normal ECG vs. ECG with arrhythmias, ECG from various ages, and ECG from other affective databases.

  7. An estimation method for echo signal energy of pipe inner surface longitudinal crack detection by 2-D energy coefficients integration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Shiyuan, E-mail: redaple@bit.edu.cn; Sun, Haoyu, E-mail: redaple@bit.edu.cn; Xu, Chunguang, E-mail: redaple@bit.edu.cn

    The echo signal energy is directly affected by the eccentricity or angle of the incident sound beam in the detection of inner-surface longitudinal cracks of thick-walled pipes. A method for analyzing the relationship between echo signal energy and incident eccentricity is brought forward, which can be used to estimate the echo signal energy when testing inside-wall longitudinal cracks of a pipe, using shear waves obtained by mode conversion of the compression wave with the water-immersion method, by making a two-dimensional integration of the "energy coefficient" in both the circumferential and axial directions. The calculation model is founded for the cylindrical sound beam case, in which the refraction and reflection energy coefficients of different rays in the whole sound beam are considered to be different. The echo signal energy is calculated for a particular cylindrical sound beam testing different pipes: a beam with a diameter of 0.5 inch (12.7 mm) testing a φ279.4 mm pipe and a φ79.4 mm one. As a comparison, the results of both the two-dimensional integration and the one-dimensional (circumferential) integration are listed, and only the former agrees well with experimental results. The estimation method proves to be valid and shows that the usual practice of simplifying the sound beam as a single ray for estimating echo signal energy and choosing the optimal incident eccentricity is not appropriate.

  8. An estimation method for echo signal energy of pipe inner surface longitudinal crack detection by 2-D energy coefficients integration

    NASA Astrophysics Data System (ADS)

    Zhou, Shiyuan; Sun, Haoyu; Xu, Chunguang; Cao, Xiandong; Cui, Liming; Xiao, Dingguo

    2015-03-01

    The echo signal energy is directly affected by the eccentricity or angle of the incident sound beam in the detection of inner-surface longitudinal cracks of thick-walled pipes. A method for analyzing the relationship between echo signal energy and incident eccentricity is brought forward, which can be used to estimate the echo signal energy when testing inside-wall longitudinal cracks of a pipe, using shear waves obtained by mode conversion of the compression wave with the water-immersion method, by making a two-dimensional integration of the "energy coefficient" in both the circumferential and axial directions. The calculation model is founded for the cylindrical sound beam case, in which the refraction and reflection energy coefficients of different rays in the whole sound beam are considered to be different. The echo signal energy is calculated for a particular cylindrical sound beam testing different pipes: a beam with a diameter of 0.5 inch (12.7 mm) testing a φ279.4 mm pipe and a φ79.4 mm one. As a comparison, the results of both the two-dimensional integration and the one-dimensional (circumferential) integration are listed, and only the former agrees well with experimental results. The estimation method proves to be valid and shows that the usual practice of simplifying the sound beam as a single ray for estimating echo signal energy and choosing the optimal incident eccentricity is not appropriate.

  9. A fast estimation of shock wave pressure based on trend identification

    NASA Astrophysics Data System (ADS)

    Yao, Zhenjian; Wang, Zhongyu; Wang, Chenchen; Lv, Jing

    2018-04-01

    In this paper, a fast method based on trend identification is proposed to accurately estimate the shock wave pressure in a dynamic measurement. Firstly, the collected output signal of the pressure sensor is reconstructed by discrete cosine transform (DCT) to reduce the computational complexity for the subsequent steps. Secondly, the empirical mode decomposition (EMD) is applied to decompose the reconstructed signal into several components with different frequency-bands, and the last few low-frequency components are chosen to recover the trend of the reconstructed signal. In the meantime, the optimal component number is determined based on the correlation coefficient and the normalized Euclidean distance between the trend and the reconstructed signal. Thirdly, with the areas under the gradient curve of the trend signal, the stable interval that produces the minimum can be easily identified. As a result, the stable value of the output signal is achieved in this interval. Finally, the shock wave pressure can be estimated according to the stable value of the output signal and the sensitivity of the sensor in the dynamic measurement. A series of shock wave pressure measurements are carried out with a shock tube system to validate the performance of this method. The experimental results show that the proposed method works well in shock wave pressure estimation. Furthermore, comparative experiments also demonstrate the superiority of the proposed method over the existing approaches in both estimation accuracy and computational efficiency.

  10. Sinusoidal synthesis based adaptive tracking for rotating machinery fault detection

    NASA Astrophysics Data System (ADS)

    Li, Gang; McDonald, Geoff L.; Zhao, Qing

    2017-01-01

    This paper presents a novel Sinusoidal Synthesis Based Adaptive Tracking (SSBAT) technique for vibration-based rotating machinery fault detection. The proposed SSBAT algorithm is an adaptive time series technique that makes use of both frequency and time domain information of vibration signals. Such information is incorporated in a time varying dynamic model. Signal tracking is then realized by applying adaptive sinusoidal synthesis to the vibration signal. A modified Least-Squares (LS) method is adopted to estimate the model parameters. In addition to tracking, the proposed vibration synthesis model is mainly used as a linear time-varying predictor. The health condition of the rotating machine is monitored by checking the residual between the predicted and measured signal. The SSBAT method takes advantage of the sinusoidal nature of vibration signals and transfers the nonlinear problem into a linear adaptive problem in the time domain based on a state-space realization. It has low computation burden and does not need a priori knowledge of the machine under the no-fault condition which makes the algorithm ideal for on-line fault detection. The method is validated using both numerical simulation and practical application data. Meanwhile, the fault detection results are compared with the commonly adopted autoregressive (AR) and autoregressive Minimum Entropy Deconvolution (ARMED) method to verify the feasibility and performance of the SSBAT method.

  11. Using Lunar Observations to Validate In-Flight Calibrations of Clouds and Earth Radiant Energy System Instruments

    NASA Technical Reports Server (NTRS)

    Daniels, Janet L.; Smith, G. Louis; Priestley, Kory J.; Thomas, Susan

    2014-01-01

    The validation of in-orbit instrument performance requires stability in both the instrument and the calibration source. This paper describes a method of validation using lunar observations, made by scanning near full moon, with the Clouds and Earth Radiant Energy System (CERES) instruments. Unlike internal calibrations, the Moon offers an external source whose signal variance is predictable and non-degrading. From 2006 to the present, in-orbit observations have been standardized and compiled for Flight Models-1 and -2 aboard the Terra satellite, for Flight Models-3 and -4 aboard the Aqua satellite, and, beginning in 2012, for Flight Model-5 aboard Suomi-NPP. Instrument performance parameters which can be gleaned include detector gain, pointing accuracy and validation of the static detector point response function. Lunar observations are used to examine the stability of all three detectors on each of these instruments from 2006 to the present. This validation method has yielded results showing trends per CERES data channel of 1.2% per decade or less.

  12. Parametric adaptive filtering and data validation in the bar GW detector AURIGA

    NASA Astrophysics Data System (ADS)

    Ortolan, A.; Baggio, L.; Cerdonio, M.; Prodi, G. A.; Vedovato, G.; Vitale, S.

    2002-04-01

    We report on our experience gained in the signal processing of the resonant GW detector AURIGA. Signal amplitude and arrival time are estimated by means of a matched adaptive Wiener filter. The detector noise, entering into the filter set-up, is modelled as a parametric ARMA process; to account for slow non-stationarity of the noise, the ARMA parameters are estimated on an hourly basis. A requirement for the set-up of an unbiased Wiener filter is the separation of time spans with 'almost Gaussian' noise from non-Gaussian and/or strongly non-stationary time spans. The separation algorithm consists basically of a variance estimate with the Chauvenet convergence method and a threshold on the kurtosis index. The subsequent validation of data is strictly connected with the separation procedure: in fact, by injecting a large number of artificial GW signals into the 'almost Gaussian' part of the AURIGA data stream, we have demonstrated that the effective probability distributions of the signal-to-noise ratio, χ2 and the time of arrival are those that are expected.

  13. The Principle of the Micro-Electronic Neural Bridge and a Prototype System Design.

    PubMed

    Huang, Zong-Hao; Wang, Zhi-Gong; Lu, Xiao-Ying; Li, Wen-Yuan; Zhou, Yu-Xuan; Shen, Xiao-Yan; Zhao, Xin-Tai

    2016-01-01

    The micro-electronic neural bridge (MENB) aims to rebuild lost motor function of paralyzed humans by routing movement-related signals from the brain, around the damage part in the spinal cord, to the external effectors. This study focused on the prototype system design of the MENB, including the principle of the MENB, the neural signal detecting circuit and the functional electrical stimulation (FES) circuit design, and the spike detecting and sorting algorithm. In this study, we developed a novel improved amplitude threshold spike detecting method based on variable forward difference threshold for both training and bridging phase. The discrete wavelet transform (DWT), a new level feature coefficient selection method based on Lilliefors test, and the k-means clustering method based on Mahalanobis distance were used for spike sorting. A real-time online spike detecting and sorting algorithm based on DWT and Euclidean distance was also implemented for the bridging phase. Tested by the data sets available at Caltech, in the training phase, the average sensitivity, specificity, and clustering accuracies are 99.43%, 97.83%, and 95.45%, respectively. Validated by the three-fold cross-validation method, the average sensitivity, specificity, and classification accuracy are 99.43%, 97.70%, and 96.46%, respectively.
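
    A rough sketch of the training-phase pipeline described above is given below, assuming PyWavelets and scikit-learn; the threshold rule, wavelet choice and feature handling are simplified placeholders rather than the authors' improved variable-forward-difference and Lilliefors-based methods.

    ```python
    # Amplitude-threshold spike detection, DWT coefficient features, k-means sorting.
    import numpy as np
    import pywt
    from sklearn.cluster import KMeans

    def detect_spikes(signal, fs, threshold_sd=4.0, window_ms=2.0):
        thr = threshold_sd * np.median(np.abs(signal)) / 0.6745   # robust noise estimate
        half = int(window_ms * 1e-3 * fs / 2)
        idx = np.where(signal > thr)[0]
        if idx.size == 0:
            return np.empty((0, 2 * half))
        # keep one index per threshold crossing, then cut a window around each spike
        idx = idx[np.insert(np.diff(idx) > half, 0, True)]
        return np.array([signal[i - half:i + half] for i in idx
                         if i - half >= 0 and i + half <= len(signal)])

    def sort_spikes(waveforms, n_units=3, wavelet="db4", level=3):
        feats = np.array([np.concatenate(pywt.wavedec(w, wavelet, level=level))
                          for w in waveforms])
        return KMeans(n_clusters=n_units, n_init=10, random_state=0).fit_predict(feats)
    ```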

  14. Use of ultrasonic array method for positioning multiple partial discharge sources in transformer oil.

    PubMed

    Xie, Qing; Tao, Junhan; Wang, Yongqiang; Geng, Jianghai; Cheng, Shuyi; Lü, Fangcheng

    2014-08-01

    Fast and accurate positioning of partial discharge (PD) sources in transformer oil is very important for the safe, stable operation of power systems because it allows timely elimination of insulation faults. There is usually more than one PD source once an insulation fault occurs in the transformer oil. This study, which has both theoretical and practical significance, proposes a method for identifying multiple PD sources in transformer oil. The method combines the two-sided correlation transformation algorithm for broadband signal focusing with the modified Gerschgorin disk estimator. The multiple signal classification method is used to determine the directions of arrival of signals from multiple PD sources. The ultrasonic array positioning method is based on multi-platform direction finding and global optimization searching. Both a 4 × 4 square planar ultrasonic sensor array and an ultrasonic array detection platform were built to test the method of identifying and positioning multiple PD sources. The obtained results verify the validity and the engineering practicability of this method.

  15. Fourier transform of delayed fluorescence as an indicator of herbicide concentration.

    PubMed

    Guo, Ya; Tan, Jinglu

    2014-12-21

    It is well known that delayed fluorescence (DF) from Photosystem II (PSII) of plant leaves can potentially be used to sense herbicide pollution and evaluate the effect of herbicides on plant leaves. Research on using DF as a measure of herbicides has mainly been conducted in the time domain, and often only qualitative correlations were obtained. The Fourier transform is often used to analyze signals; viewing the DF signal in the frequency domain through the Fourier transform may allow separation of signal components and provide a quantitative method for sensing herbicides. However, there has been no attempt to use the Fourier transform of DF as an indicator of herbicide. In this work, the relationship between the Fourier transform of DF and herbicide concentration was theoretically modelled and analyzed, which immediately yielded a quantitative method to measure herbicide concentration in the frequency domain. Experiments were performed to validate the developed method. Copyright © 2014 Elsevier Ltd. All rights reserved.

  16. [Research on Detection Method with Wearable Respiration Device Based on the Theory of Bio-impedance].

    PubMed

    Liu, Guangda; Wang, Xianzhong; Cai, Jing; Wang, Wei; Zha, Yutong

    2016-12-01

    Considering the importance of human respiratory signal detection, we developed a wearable device for detecting the human respiratory signal based on the Cole-Cole bio-impedance model. The device can be used to analyze the impedance characteristics of the human body at different frequencies based on bio-impedance theory. It also uses a proportional measurement method to design a high signal-to-noise ratio (SNR) circuit for acquiring the respiratory signal. In order to obtain the waveform of the respiratory signal and the value of the respiration rate, we used the discrete Fourier transform (DFT) and dynamic difference-threshold peak detection. Experiments showed that the system is valid: it can accurately detect the respiration waveform, and the detection accuracy of respiratory wave peak points was over 98%. It can therefore meet the needs of actual breath testing.

  17. Identification of Mobile Phone and Analysis of Original Version of Videos through a Delay Time Analysis of Sound Signals from Mobile Phone Videos.

    PubMed

    Hwang, Min Gu; Har, Dong Hwan

    2017-11-01

    This study designs a method of identifying the camera model used to take videos that are distributed through mobile phones and determines the original version of the mobile phone video for use as legal evidence. For this analysis, an experiment was conducted to find the unique characteristics of each mobile phone. The videos recorded by mobile phones were analyzed to establish the delay time of sound signals, and the differences between the delay times of sound signals for different mobile phones were traced by classifying their characteristics. Furthermore, the sound input signals for mobile phone videos used as legal evidence were analyzed to ascertain whether they have the unique characteristics of the original version. The objective of this study was to find a method for validating the use of mobile phone videos as legal evidence using mobile phones through differences in the delay times of sound input signals. © 2017 American Academy of Forensic Sciences.

  18. Wavelet packet-based insufficiency murmurs analysis method

    NASA Astrophysics Data System (ADS)

    Choi, Samjin; Jiang, Zhongwei

    2007-12-01

    In this paper, a method for analyzing aortic and mitral insufficiency murmurs using the wavelet packet technique is proposed for classifying valvular heart defects. Considering the different frequency distributions of normal sounds and insufficiency murmurs in the frequency domain, we used two properties, the relative wavelet energy and the Shannon wavelet entropy, which describe the energy information and the entropy information in the selected frequency band, respectively. Then, signal-to-murmur ratio (SMR) measures, defined as the ratio between the frequency bands for normal heart sounds (15.62-187.50 Hz) and for aortic and mitral insufficiency murmurs (187.50-703.12 Hz), were employed as the classification criterion to identify insufficiency murmurs. The proposed measures were validated by case studies. The 194 heart sound signals, comprising 48 normal and 146 abnormal cases acquired from 6 healthy volunteers and 30 patients, were tested. The normal sound signals were recorded by applying a self-produced wireless electric stethoscope system to subjects with no history of other heart complications. Insufficiency murmurs were grouped into two valvular heart defects, aortic insufficiency and mitral insufficiency. These murmur subjects included no other coexistent valvular defects. As a result, the proposed insufficiency murmur detection method showed very high classification efficiency. The proposed heart sound classification method based on the wavelet packet was therefore validated for the classification of valvular heart defects, especially insufficiency murmurs.
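
    The two wavelet packet features named above can be sketched with PyWavelets as follows; the wavelet, decomposition level and band-to-node mapping are assumptions that would in practice be fixed by the sampling rate of the recordings.

    ```python
    # Relative wavelet packet energy, Shannon entropy and a simple signal-to-murmur ratio.
    import numpy as np
    import pywt

    def band_energies(signal, wavelet="db6", level=5):
        wp = pywt.WaveletPacket(data=signal, wavelet=wavelet,
                                mode="symmetric", maxlevel=level)
        nodes = wp.get_level(level, order="freq")        # nodes ordered by frequency
        energies = np.array([np.sum(np.square(node.data)) for node in nodes])
        return energies / energies.sum()                 # relative wavelet energy

    def shannon_entropy(rel_energy):
        p = rel_energy[rel_energy > 0]
        return -np.sum(p * np.log(p))

    def signal_to_murmur_ratio(rel_energy, normal_band, murmur_band):
        """normal_band, murmur_band: (start, stop) node indices at the chosen level."""
        e_normal = rel_energy[normal_band[0]:normal_band[1]].sum()
        e_murmur = rel_energy[murmur_band[0]:murmur_band[1]].sum()
        return e_normal / e_murmur
    ```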

  19. Quantitative analysis of drug distribution by ambient mass spectrometry imaging method with signal extinction normalization strategy and inkjet-printing technology.

    PubMed

    Luo, Zhigang; He, Jingjing; He, Jiuming; Huang, Lan; Song, Xiaowei; Li, Xin; Abliz, Zeper

    2018-03-01

    Quantitative mass spectrometry imaging (MSI) is a robust approach that provides both quantitative and spatial information for drug candidate research. However, because of complicated signal suppression and interference, acquiring accurate quantitative information from MSI data remains a challenge, especially for whole-body tissue samples. Ambient MSI techniques using spray-based ionization appear to be ideal for pharmaceutical quantitative MSI analysis. However, they are more challenging, as they involve almost no sample preparation and are more susceptible to ion suppression/enhancement. Herein, based on our air flow-assisted desorption electrospray ionization (AFADESI)-MSI technology, an ambient quantitative MSI method is introduced that integrates inkjet-printing technology with normalization of the signal extinction coefficient (SEC) using the target compound itself. The method uses a single calibration curve to quantify multiple tissue types. Basic blue 7 and an antitumor drug candidate (S-(+)-deoxytylophorinidine, CAT) were chosen to initially validate the feasibility and reliability of the quantitative MSI method. Rat tissue sections (heart, kidney, and brain) from animals administered CAT were then analyzed. The quantitative MSI results were cross-validated against LC-MS/MS data from the same tissues. The consistency suggests that the approach can rapidly obtain quantitative MSI data without introducing interference into the in-situ environment of the tissue sample, and has the potential to provide a high-throughput, economical and reliable approach for drug discovery and development. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. Suppression of stimulus artifact contaminating electrically evoked electromyography.

    PubMed

    Liu, Jie; Li, Sheng; Li, Xiaoyan; Klein, Cliff; Rymer, William Z; Zhou, Ping

    2014-01-01

    Electrical stimulation of muscle or nerve is a very useful technique for understanding of muscle activity and its pathological changes for both diagnostic and therapeutic purposes. During electrical stimulation of a muscle, the recorded M wave is often contaminated by a stimulus artifact. The stimulus artifact must be removed for appropriate analysis and interpretation of M waves. The objective of this study was to develop a novel software based method to remove stimulus artifacts contaminating or superimposing with electrically evoked surface electromyography (EMG) or M wave signals. The multiple stage method uses a series of signal processing techniques, including highlighting and detection of stimulus artifacts using Savitzky-Golay filtering, estimation of the artifact contaminated region with Otsu thresholding, and reconstruction of such region using signal interpolation and smoothing. The developed method was tested using M wave signals recorded from biceps brachii muscles by a linear surface electrode array. To evaluate the performance, a series of semi-synthetic signals were constructed from clean M wave and stimulus artifact recordings with different degrees of overlap between them. The effectiveness of the developed method was quantified by a significant increase in correlation coefficient and a significant decrease in root mean square error between the clean M wave and the reconstructed M wave, compared with those between the clean M wave and the originally contaminated signal. The validity of the developed method was also demonstrated when tested on each channel's M wave recording using a linear electrode array. The developed method can suppress stimulus artifacts contaminating M wave recordings.

  1. Long-term recording and automatic analysis of cough using filtered acoustic signals and movements on static charge sensitive bed.

    PubMed

    Salmi, T; Sovijärvi, A R; Brander, P; Piirilä, P

    1988-11-01

    Reliable long-term assessment of cough is necessary in many clinical and scientific settings. A new method for long-term recording and automatic analysis of cough is presented. The method is based on simultaneous recording of two independent signals: high-pass filtered cough sounds and cough-induced fast movements of the body. The acoustic signals are recorded with a dynamic microphone in the acoustic focus of a glass fiber paraboloid mirror. Body movements are recorded with a static charge-sensitive bed located under an ordinary plastic foam mattress. The patient can be studied lying or sitting with no transducers or electrodes attached. A microcomputer is used for sampling of signals, detection of cough, statistical analyses, and on-line printing of results. The method was validated in seven adult patients with a total of 809 spontaneous cough events, using clinical observation as a reference. The sensitivity of the method to detect cough was 99.0 percent, and the positive predictivity was 98.1 percent. The system ignored speaking and snoring. The method provides a convenient means of reliable long-term follow-up of cough in clinical work and research.

  2. Kinetics of T-cell receptor-dependent antigen recognition determined in vivo by multi-spectral normalized epifluorescence laser scanning

    NASA Astrophysics Data System (ADS)

    Favicchio, Rosy; Zacharakis, Giannis; Oikonomaki, Katerina; Zacharopoulos, Athanasios; Mamalaki, Clio; Ripoll, Jorge

    2012-07-01

    Detection of multiple fluorophores in conditions of low signal represents a limiting factor for the application of in vivo optical imaging techniques in immunology where fluorescent labels report for different functional characteristics. A noninvasive in vivo Multi-Spectral Normalized Epifluorescence Laser scanning (M-SNELS) method was developed for the simultaneous and quantitative detection of multiple fluorophores in low signal to noise ratios and used to follow T-cell activation and clonal expansion. Colocalized DsRed- and GFP-labeled T cells were followed in tandem during the mounting of an immune response. Spectral unmixing was used to distinguish the overlapping fluorescent emissions representative of the two distinct cell populations and longitudinal data reported the discrete pattern of antigen-driven proliferation. Retrieved values were validated both in vitro and in vivo with flow cytometry and significant correlation between all methodologies was achieved. Noninvasive M-SNELS successfully quantified two colocalized fluorescent populations and provides a valid alternative imaging approach to traditional invasive methods for detecting T cell dynamics.

  3. Detection of Interference Phase by Digital Computation of Quadrature Signals in Homodyne Laser Interferometry

    PubMed Central

    Rerucha, Simon; Buchta, Zdenek; Sarbort, Martin; Lazar, Josef; Cip, Ondrej

    2012-01-01

    We have proposed an approach to the interference phase extraction in the homodyne laser interferometry. The method employs a series of computational steps to reconstruct the signals for quadrature detection from an interference signal from a non-polarising interferometer sampled by a simple photodetector. The complexity trade-off is the use of laser beam with frequency modulation capability. It is analytically derived and its validity and performance is experimentally verified. The method has proven to be a feasible alternative for the traditional homodyne detection since it performs with comparable accuracy, especially where the optical setup complexity is principal issue and the modulation of laser beam is not a heavy burden (e.g., in multi-axis sensor or laser diode based systems). PMID:23202038

  4. A methodology for combustion detection in diesel engines through in-cylinder pressure derivative signal

    NASA Astrophysics Data System (ADS)

    Luján, José M.; Bermúdez, Vicente; Guardiola, Carlos; Abbad, Ali

    2010-10-01

    In-cylinder pressure measurement has historically been used for off-line combustion diagnosis, but online application for real-time combustion control has become of great interest. This work considers low computing-cost methods for analysing the instant variation of the chamber pressure, directly obtained from the electric signal provided by a traditional piezoelectric sensor. Presented methods are based on the detection of sudden changes in the chamber pressure, which are amplified by the pressure derivative, and which are due to thermodynamic phenomena within the cylinder. Signal analysis tools both in time and in time-frequency domains are used for detecting the start of combustion, the end of combustion and the heat release peak. Results are compared with classical thermodynamic analysis and validated in several turbocharged diesel engines.

  5. A mathematical framework to quantify the masking effect associated with the confidence intervals of measures of disproportionality

    PubMed Central

    Maignen, François; Hauben, Manfred; Dogné, Jean-Michel

    2017-01-01

    Background: The lower bound of the 95% confidence interval of measures of disproportionality (Lower95CI) is widely used in signal detection. Masking is a statistical issue by which true signals of disproportionate reporting are hidden by the presence of other medicines. The primary objective of our study is to develop and validate a mathematical framework for assessing the masking effect of Lower95CI. Methods: We have developed our new algorithm based on the masking ratio (MR) developed for the measures of disproportionality. A MR for the Lower95CI (MRCI) is proposed. A simulation study to validate this algorithm was also conducted. Results: We have established the existence of a very close mathematical relation between MR and MRCI. For a given drug–event pair, the same product will be responsible for the highest masking effect with the measure of disproportionality and its Lower95CI. The extent of masking is likely to be very similar across the two methods. An important proportion of identical drug–event associations affected by the presence of an important masking effect is revealed by the unmasking exercise, whether the proportional reporting ratio (PRR) or its confidence interval are used. Conclusion: The detection of the masking effect of Lower95CI can be automated. The real benefits of this unmasking in terms of new true-positive signals (rate of true-positive/false-positive) or time gained by the revealing of signals using this method have not been fully assessed. These benefits should be demonstrated in the context of prospective studies. PMID:28845231

  6. Concrete Condition Assessment Using Impact-Echo Method and Extreme Learning Machines

    PubMed Central

    Zhang, Jing-Kui; Yan, Weizhong; Cui, De-Mi

    2016-01-01

    The impact-echo (IE) method is a popular non-destructive testing (NDT) technique widely used for measuring the thickness of plate-like structures and for detecting certain defects inside concrete elements or structures. However, the IE method is not effective for full condition assessment (i.e., defect detection, defect diagnosis, defect sizing and location), because the simple frequency spectrum analysis involved in the existing IE method is not sufficient to capture the IE signal patterns associated with different conditions. In this paper, we attempt to enhance the IE technique and enable it for full condition assessment of concrete elements by introducing advanced machine learning techniques for performing comprehensive analysis and pattern recognition of IE signals. Specifically, we use wavelet decomposition for extracting signatures or features out of the raw IE signals and apply extreme learning machine, one of the recently developed machine learning techniques, as classification models for full condition assessment. To validate the capabilities of the proposed method, we build a number of specimens with various types, sizes, and locations of defects and perform IE testing on these specimens in a lab environment. Based on analysis of the collected IE signals using the proposed machine learning based IE method, we demonstrate that the proposed method is effective in performing full condition assessment of concrete elements or structures. PMID:27023563

  7. A non-uniformly under-sampled blade tip-timing signal reconstruction method for blade vibration monitoring.

    PubMed

    Hu, Zheng; Lin, Jun; Chen, Zhong-Sheng; Yang, Yong-Min; Li, Xue-Jun

    2015-01-22

    High-speed blades are often prone to fatigue due to severe blade vibrations. In particular, synchronous vibrations can cause irreversible damages to the blade. Blade tip-timing methods (BTT) have become a promising way to monitor blade vibrations. However, synchronous vibrations are unsuitably monitored by uniform BTT sampling. Therefore, non-equally mounted probes have been used, which will result in the non-uniformity of the sampling signal. Since under-sampling is an intrinsic drawback of BTT methods, how to analyze non-uniformly under-sampled BTT signals is a big challenge. In this paper, a novel reconstruction method for non-uniformly under-sampled BTT data is presented. The method is based on the periodically non-uniform sampling theorem. Firstly, a mathematical model of a non-uniform BTT sampling process is built. It can be treated as the sum of certain uniform sample streams. For each stream, an interpolating function is required to prevent aliasing in the reconstructed signal. Secondly, simultaneous equations of all interpolating functions in each sub-band are built and corresponding solutions are ultimately derived to remove unwanted replicas of the original signal caused by the sampling, which may overlay the original signal. In the end, numerical simulations and experiments are carried out to validate the feasibility of the proposed method. The results demonstrate the accuracy of the reconstructed signal depends on the sampling frequency, the blade vibration frequency, the blade vibration bandwidth, the probe static offset and the number of samples. In practice, both types of blade vibration signals can be particularly reconstructed by non-uniform BTT data acquired from only two probes.

  8. A Non-Uniformly Under-Sampled Blade Tip-Timing Signal Reconstruction Method for Blade Vibration Monitoring

    PubMed Central

    Hu, Zheng; Lin, Jun; Chen, Zhong-Sheng; Yang, Yong-Min; Li, Xue-Jun

    2015-01-01

    High-speed blades are often prone to fatigue due to severe blade vibrations. In particular, synchronous vibrations can cause irreversible damages to the blade. Blade tip-timing methods (BTT) have become a promising way to monitor blade vibrations. However, synchronous vibrations are unsuitably monitored by uniform BTT sampling. Therefore, non-equally mounted probes have been used, which will result in the non-uniformity of the sampling signal. Since under-sampling is an intrinsic drawback of BTT methods, how to analyze non-uniformly under-sampled BTT signals is a big challenge. In this paper, a novel reconstruction method for non-uniformly under-sampled BTT data is presented. The method is based on the periodically non-uniform sampling theorem. Firstly, a mathematical model of a non-uniform BTT sampling process is built. It can be treated as the sum of certain uniform sample streams. For each stream, an interpolating function is required to prevent aliasing in the reconstructed signal. Secondly, simultaneous equations of all interpolating functions in each sub-band are built and corresponding solutions are ultimately derived to remove unwanted replicas of the original signal caused by the sampling, which may overlay the original signal. In the end, numerical simulations and experiments are carried out to validate the feasibility of the proposed method. The results demonstrate the accuracy of the reconstructed signal depends on the sampling frequency, the blade vibration frequency, the blade vibration bandwidth, the probe static offset and the number of samples. In practice, both types of blade vibration signals can be particularly reconstructed by non-uniform BTT data acquired from only two probes. PMID:25621612

  9. A Variance Distribution Model of Surface EMG Signals Based on Inverse Gamma Distribution.

    PubMed

    Hayashi, Hideaki; Furui, Akira; Kurita, Yuichi; Tsuji, Toshio

    2017-11-01

    Objective: This paper describes the formulation of a surface electromyogram (EMG) model capable of representing the variance distribution of EMG signals. Methods: In the model, EMG signals are handled based on a Gaussian white noise process with a mean of zero for each variance value. EMG signal variance is taken as a random variable that follows inverse gamma distribution, allowing the representation of noise superimposed onto this variance. Variance distribution estimation based on marginal likelihood maximization is also outlined in this paper. The procedure can be approximated using rectified and smoothed EMG signals, thereby allowing the determination of distribution parameters in real time at low computational cost. Results: A simulation experiment was performed to evaluate the accuracy of distribution estimation using artificially generated EMG signals, with results demonstrating that the proposed model's accuracy is higher than that of maximum-likelihood-based estimation. Analysis of variance distribution using real EMG data also suggested a relationship between variance distribution and signal-dependent noise. Conclusion: The study reported here was conducted to examine the performance of a proposed surface EMG model capable of representing variance distribution and a related distribution parameter estimation method. Experiments using artificial and real EMG data demonstrated the validity of the model. Significance: Variance distribution estimated using the proposed model exhibits potential in the estimation of muscle force. Objective: This paper describes the formulation of a surface electromyogram (EMG) model capable of representing the variance distribution of EMG signals. Methods: In the model, EMG signals are handled based on a Gaussian white noise process with a mean of zero for each variance value. EMG signal variance is taken as a random variable that follows inverse gamma distribution, allowing the representation of noise superimposed onto this variance. Variance distribution estimation based on marginal likelihood maximization is also outlined in this paper. The procedure can be approximated using rectified and smoothed EMG signals, thereby allowing the determination of distribution parameters in real time at low computational cost. Results: A simulation experiment was performed to evaluate the accuracy of distribution estimation using artificially generated EMG signals, with results demonstrating that the proposed model's accuracy is higher than that of maximum-likelihood-based estimation. Analysis of variance distribution using real EMG data also suggested a relationship between variance distribution and signal-dependent noise. Conclusion: The study reported here was conducted to examine the performance of a proposed surface EMG model capable of representing variance distribution and a related distribution parameter estimation method. Experiments using artificial and real EMG data demonstrated the validity of the model. Significance: Variance distribution estimated using the proposed model exhibits potential in the estimation of muscle force.

  10. Development of an Itemwise Efficiency Scoring Method: Concurrent, Convergent, Discriminant, and Neuroimaging-Based Predictive Validity Assessed in a Large Community Sample

    PubMed Central

    Moore, Tyler M.; Reise, Steven P.; Roalf, David R.; Satterthwaite, Theodore D.; Davatzikos, Christos; Bilker, Warren B.; Port, Allison M.; Jackson, Chad T.; Ruparel, Kosha; Savitt, Adam P.; Baron, Robert B.; Gur, Raquel E.; Gur, Ruben C.

    2016-01-01

    Traditional “paper-and-pencil” testing is imprecise in measuring speed and hence limited in assessing performance efficiency, but computerized testing permits precision in measuring itemwise response time. We present a method of scoring performance efficiency (combining information from accuracy and speed) at the item level. Using a community sample of 9,498 youths age 8-21, we calculated item-level efficiency scores on four neurocognitive tests, and compared the concurrent, convergent, discriminant, and predictive validity of these scores to simple averaging of standardized speed and accuracy-summed scores. Concurrent validity was measured by the scores' abilities to distinguish men from women and their correlations with age; convergent and discriminant validity were measured by correlations with other scores inside and outside of their neurocognitive domains; predictive validity was measured by correlations with brain volume in regions associated with the specific neurocognitive abilities. Results provide support for the ability of itemwise efficiency scoring to detect signals as strong as those detected by standard efficiency scoring methods. We find no evidence of superior validity of the itemwise scores over traditional scores, but point out several advantages of the former. The itemwise efficiency scoring method shows promise as an alternative to standard efficiency scoring methods, with overall moderate support from tests of four different types of validity. This method allows the use of existing item analysis methods and provides the convenient ability to adjust the overall emphasis of accuracy versus speed in the efficiency score, thus adjusting the scoring to the real-world demands the test is aiming to fulfill. PMID:26866796

  11. Signal-Noise Identification of Magnetotelluric Signals Using Fractal-Entropy and Clustering Algorithm for Targeted De-Noising

    NASA Astrophysics Data System (ADS)

    Li, Jin; Zhang, Xian; Gong, Jinzhe; Tang, Jingtian; Ren, Zhengyong; Li, Guang; Deng, Yanli; Cai, Jin

    A new technique is proposed for signal-noise identification and targeted de-noising of Magnetotelluric (MT) signals. This method is based on fractal-entropy and clustering algorithm, which automatically identifies signal sections corrupted by common interference (square, triangle and pulse waves), enabling targeted de-noising and preventing the loss of useful information in filtering. To implement the technique, four characteristic parameters — fractal box dimension (FBD), higuchi fractal dimension (HFD), fuzzy entropy (FuEn) and approximate entropy (ApEn) — are extracted from MT time-series. The fuzzy c-means (FCM) clustering technique is used to analyze the characteristic parameters and automatically distinguish signals with strong interference from the rest. The wavelet threshold (WT) de-noising method is used only to suppress the identified strong interference in selected signal sections. The technique is validated through signal samples with known interference, before being applied to a set of field measured MT/Audio Magnetotelluric (AMT) data. Compared with the conventional de-noising strategy that blindly applies the filter to the overall dataset, the proposed method can automatically identify and purposefully suppress the intermittent interference in the MT/AMT signal. The resulted apparent resistivity-phase curve is more continuous and smooth, and the slow-change trend in the low-frequency range is more precisely reserved. Moreover, the characteristic of the target-filtered MT/AMT signal is close to the essential characteristic of the natural field, and the result more accurately reflects the inherent electrical structure information of the measured site.

  12. Analysis of task-evoked systemic interference in fNIRS measurements: insights from fMRI.

    PubMed

    Erdoğan, Sinem B; Yücel, Meryem A; Akın, Ata

    2014-02-15

    Functional near infrared spectroscopy (fNIRS) is a promising method for monitoring cerebral hemodynamics with a wide range of clinical applications. fNIRS signals are contaminated with systemic physiological interferences from both the brain and superficial tissues, resulting in a poor estimation of the task related neuronal activation. In this study, we use the anatomical resolution of functional magnetic resonance imaging (fMRI) to extract scalp and brain vascular signals separately and construct an optically weighted spatial average of the fMRI blood oxygen level-dependent (BOLD) signal for characterizing the scalp signal contribution to fNIRS measurements. We introduce an extended superficial signal regression (ESSR) method for canceling physiology-based systemic interference where the effects of cerebral and superficial systemic interference are treated separately. We apply and validate our method on the optically weighted BOLD signals, which are obtained by projecting the fMRI image onto optical measurement space by use of the optical forward problem. The performance of ESSR method in removing physiological artifacts is compared to i) a global signal regression (GSR) method and ii) a superficial signal regression (SSR) method. The retrieved signals from each method are compared with the neural signals that represent the 'ground truth' brain activation cleaned from cerebral systemic fluctuations. We report significant improvements in the recovery of task induced neural activation with the ESSR method when compared to the other two methods as reflected in the Pearson R(2) coefficient and mean square error (MSE) metrics (two tailed paired t-tests, p<0.05). The signal quality is enhanced most when ESSR method is applied with higher spatial localization, lower inter-trial variability, a clear canonical waveform and higher contrast-to-noise (CNR) improvement (60%). Our findings suggest that, during a cognitive task i) superficial scalp signal contribution to fNIRS signals varies significantly among different regions on the forehead and ii) using an average scalp measurement together with a local measure of superficial hemodynamics better accounts for the systemic interference inherent in the brain as well as superficial scalp tissue. We conclude that maximizing the overlap between the optical pathlength of superficial and deeper penetration measurements is of crucial importance for accurate recovery of the evoked hemodynamic response in fNIRS recordings. © 2013 Elsevier Inc. All rights reserved.

  13. Quantitative validation of a nonlinear histology-MRI coregistration method using Generalized Q-sampling Imaging in complex human cortical white matter

    PubMed Central

    Gangolli, Mihika; Holleran, Laurena; Kim, Joong Hee; Stein, Thor D.; Alvarez, Victor; McKee, Ann C.; Brody, David L.

    2017-01-01

    Advanced diffusion MRI methods have recently been proposed for detection of pathologies such as traumatic axonal injury and chronic traumatic encephalopathy which commonly affect complex cortical brain regions. However, radiological-pathological correlations in human brain tissue that detail the relationship between the multi-component diffusion signal and underlying pathology are lacking. We present a nonlinear voxel based two dimensional coregistration method that is useful for matching diffusion signals to quantitative metrics of high resolution histological images. When validated in ex vivo human cortical tissue at a 250 × 250 × 500 micron spatial resolution, the method proved robust in correlations between generalized q-sampling imaging and histologically based white matter fiber orientations, with r = 0.94 for the primary fiber direction and r = 0.88 for secondary fiber direction in each voxel. Importantly, however, the correlation was substantially worse with reduced spatial resolution or with fiber orientations derived using a diffusion tensor model. Furthermore, we have detailed a quantitative histological metric of white matter fiber integrity termed power coherence capable of distinguishing between architecturally complex but intact white matter from disrupted white matter regions. These methods may allow for more sensitive and specific radiological-pathological correlations of neurodegenerative diseases affecting complex gray and white matter. PMID:28365421

  14. Initial Validation for the Estimation of Resting-State fMRI Effective Connectivity by a Generalization of the Correlation Approach

    PubMed Central

    Xu, Nan; Spreng, R. Nathan; Doerschuk, Peter C.

    2017-01-01

    Resting-state functional MRI (rs-fMRI) is widely used to noninvasively study human brain networks. Network functional connectivity is often estimated by calculating the timeseries correlation between blood-oxygen-level dependent (BOLD) signal from different regions of interest (ROIs). However, standard correlation cannot characterize the direction of information flow between regions. In this paper, we introduce and test a new concept, prediction correlation, to estimate effective connectivity in functional brain networks from rs-fMRI. In this approach, the correlation between two BOLD signals is replaced by a correlation between one BOLD signal and a prediction of this signal via a causal system driven by another BOLD signal. Three validations are described: (1) Prediction correlation performed well on simulated data where the ground truth was known, and outperformed four other methods. (2) On simulated data designed to display the “common driver” problem, prediction correlation did not introduce false connections between non-interacting driven ROIs. (3) On experimental data, prediction correlation recovered the previously identified network organization of human brain. Prediction correlation scales well to work with hundreds of ROIs, enabling it to assess whole brain interregional connectivity at the single subject level. These results provide an initial validation that prediction correlation can capture the direction of information flow and estimate the duration of extended temporal delays in information flow between regions of interest ROIs based on BOLD signal. This approach not only maintains the high sensitivity to network connectivity provided by the correlation analysis, but also performs well in the estimation of causal information flow in the brain. PMID:28559793

  15. Randomized Hough transform filter for echo extraction in DLR

    NASA Astrophysics Data System (ADS)

    Liu, Tong; Chen, Hao; Shen, Ming; Gao, Pengqi; Zhao, You

    2016-11-01

    The signal-to-noise ratio (SNR) of debris laser ranging (DLR) data is extremely low, and the valid returns in the DLR range residuals are distributed on a curve in a long observation time. Therefore, it is hard to extract the signals from noise in the Observed-minus-Calculated (O-C) residuals with low SNR. In order to autonomously extract the valid returns, we propose a new algorithm based on randomized Hough transform (RHT). We firstly pre-process the data using histogram method to find the zonal area that contains all the possible signals to reduce large amount of noise. Then the data is processed with RHT algorithm to find the curve that the signal points are distributed on. A new parameter update strategy is introduced in the RHT to get the best parameters. We also analyze the values of the parameters in the algorithm. We test our algorithm on the 10 Hz repetition rate DLR data from Yunnan Observatory and 100 Hz repetition rate DLR data from Graz SLR station. For 10 Hz DLR data with relative larger and similar range gate, we can process it in real time and extract all the signals autonomously with a few false readings. For 100 Hz DLR data with longer observation time, we autonomously post-process DLR data of 0.9%, 2.7%, 8% and 33% return rate with high reliability. The extracted points contain almost all signals and a low percentage of noise. Additional noise is added to 10 Hz DLR data to get lower return rate data. The valid returns can also be well extracted for DLR data with 0.18% and 0.1% return rate.

  16. Application of wavelet and Fuorier transforms as powerful alternatives for derivative spectrophotometry in analysis of binary mixtures: A comparative study

    NASA Astrophysics Data System (ADS)

    Hassan, Said A.; Abdel-Gawad, Sherif A.

    2018-02-01

    Two signal processing methods, namely, Continuous Wavelet Transform (CWT) and the second was Discrete Fourier Transform (DFT) were introduced as alternatives to the classical Derivative Spectrophotometry (DS) in analysis of binary mixtures. To show the advantages of these methods, a comparative study was performed on a binary mixture of Naltrexone (NTX) and Bupropion (BUP). The methods were compared by analyzing laboratory prepared mixtures of the two drugs. By comparing performance of the three methods, it was proved that CWT and DFT methods are more efficient and advantageous in analysis of mixtures with overlapped spectra than DS. The three signal processing methods were adopted for the quantification of NTX and BUP in pure and tablet forms. The adopted methods were validated according to the ICH guideline where accuracy, precision and specificity were found to be within appropriate limits.

  17. Parametric study of statistical bias in laser Doppler velocimetry

    NASA Technical Reports Server (NTRS)

    Gould, Richard D.; Stevenson, Warren H.; Thompson, H. Doyle

    1989-01-01

    Analytical studies have often assumed that LDV velocity bias depends on turbulence intensity in conjunction with one or more characteristic time scales, such as the time between validated signals, the time between data samples, and the integral turbulence time-scale. These parameters are presently varied independently, in an effort to quantify the biasing effect. Neither of the post facto correction methods employed is entirely accurate. The mean velocity bias error is found to be nearly independent of data validation rate.

  18. Precision Antenna Measurement System (PAMS) Engineering Services

    DTIC Science & Technology

    1978-04-01

    8217) = receiving antenna gain for vertical polarization. The total direct signal power is Following Beck /narn and Spizzachino , the specular component...method may be valid for the problem. Very often, however, the physical optics 92 approach baaed on a solution of the wave equation will have to

  19. Suppression of Stimulus Artifact Contaminating Electrically Evoked Electromyography

    PubMed Central

    Liu, Jie; Li, Sheng; Li, Xiaoyan; Klein, Cliff; Rymer, William Z.; Zhou, Ping

    2013-01-01

    Background Electrical stimulation of muscle or nerve is a very useful technique for understanding of muscle activity and its pathological changes for both diagnostic and therapeutic purposes. During electrical stimulation of a muscle, the recorded M wave is often contaminated by a stimulus artifact. The stimulus artifact must be removed for appropriate analysis and interpretation of M waves. Objectives The objective of this study was to develop a novel software based method to remove stimulus artifacts contaminating or superimposing with electrically evoked surface electromyography (EMG) or M wave signals. Methods The multiple stage method uses a series of signal processing techniques, including highlighting and detection of stimulus artifacts using the Savitzky-Golay filtering, estimation of the artifact contaminated region with the Otsu thresholding, and reconstruction of such region using signal interpolation and smoothing. The developed method was tested using M wave signals recorded from biceps brachii muscles by a linear surface electrode array. To evaluate the performance, a series of semi-synthetic signals were constructed from clean M wave and stimulus artifact recordings with different degrees of overlap between them. Results The effectiveness of the developed method was quantified by a significant increase in correlation coefficient and a significant decrease in root mean square error between the clean M wave and the reconstructed M wave, compared with those between the clean M wave and the originally contaminated signal. The validity of the developed method was also demonstrated when tested on each channel’s M wave recording using the linear electrode array. Conclusions The developed method can suppress stimulus artifacts contaminating M wave recordings. PMID:24419021

  20. Minimal Polynomial Method for Estimating Parameters of Signals Received by an Antenna Array

    NASA Astrophysics Data System (ADS)

    Ermolaev, V. T.; Flaksman, A. G.; Elokhin, A. V.; Kuptsov, V. V.

    2018-01-01

    The effectiveness of the projection minimal polynomial method for solving the problem of determining the number of sources of signals acting on an antenna array (AA) with an arbitrary configuration and their angular directions has been studied. The method proposes estimating the degree of the minimal polynomial of the correlation matrix (CM) of the input process in the AA on the basis of a statistically validated root-mean-square criterion. Special attention is paid to the case of the ultrashort sample of the input process when the number of samples is considerably smaller than the number of AA elements, which is important for multielement AAs. It is shown that the proposed method is more effective in this case than methods based on the AIC (Akaike's Information Criterion) or minimum description length (MDL) criterion.

  1. A hybrid method based on Band Pass Filter and Correlation Algorithm to improve debris sensor capacity

    NASA Astrophysics Data System (ADS)

    Hong, Wei; Wang, Shaoping; Liu, Haokuo; Tomovic, Mileta M.; Chao, Zhang

    2017-01-01

    The inductive debris detection is an effective method for monitoring mechanical wear, and could be used to prevent serious accidents. However, debris detection during early phase of mechanical wear, when small debris (<100 um) is generated, requires that the sensor has high sensitivity with respect to background noise. In order to detect smaller debris by existing sensors, this paper presents a hybrid method which combines Band Pass Filter and Correlation Algorithm to improve sensor signal-to-noise ratio (SNR). The simulation results indicate that the SNR will be improved at least 2.67 times after signal processing. In other words, this method ensures debris identification when the sensor's SNR is bigger than -3 dB. Thus, smaller debris will be detected in the same SNR. Finally, effectiveness of the proposed method is experimentally validated.

  2. Sample entropy analysis for the estimating depth of anaesthesia through human EEG signal at different levels of unconsciousness during surgeries

    PubMed Central

    Fan, Shou-Zen; Abbod, Maysam F.

    2018-01-01

    Estimating the depth of anaesthesia (DoA) in operations has always been a challenging issue due to the underlying complexity of the brain mechanisms. Electroencephalogram (EEG) signals are undoubtedly the most widely used signals for measuring DoA. In this paper, a novel EEG-based index is proposed to evaluate DoA for 24 patients receiving general anaesthesia with different levels of unconsciousness. Sample Entropy (SampEn) algorithm was utilised in order to acquire the chaotic features of the signals. After calculating the SampEn from the EEG signals, Random Forest was utilised for developing learning regression models with Bispectral index (BIS) as the target. Correlation coefficient, mean absolute error, and area under the curve (AUC) were used to verify the perioperative performance of the proposed method. Validation comparisons with typical nonstationary signal analysis methods (i.e., recurrence analysis and permutation entropy) and regression methods (i.e., neural network and support vector machine) were conducted. To further verify the accuracy and validity of the proposed methodology, the data is divided into four unconsciousness-level groups on the basis of BIS levels. Subsequently, analysis of variance (ANOVA) was applied to the corresponding index (i.e., regression output). Results indicate that the correlation coefficient improved to 0.72 ± 0.09 after filtering and to 0.90 ± 0.05 after regression from the initial values of 0.51 ± 0.17. Similarly, the final mean absolute error dramatically declined to 5.22 ± 2.12. In addition, the ultimate AUC increased to 0.98 ± 0.02, and the ANOVA analysis indicates that each of the four groups of different anaesthetic levels demonstrated significant difference from the nearest levels. Furthermore, the Random Forest output was extensively linear in relation to BIS, thus with better DoA prediction accuracy. In conclusion, the proposed method provides a concrete basis for monitoring patients’ anaesthetic level during surgeries. PMID:29844970

  3. Form-Deprivation Myopia in Chick Induces Limited Changes in Retinal Gene Expression

    PubMed Central

    McGlinn, Alice M.; Baldwin, Donald A.; Tobias, John W.; Budak, Murat T.; Khurana, Tejvir S.; Stone, Richard A.

    2007-01-01

    Purpose Evidence has implicated the retina as a principal controller of refractive development. In the present study, the retinal transcriptome was analyzed to identify alterations in gene expression and potential signaling pathways involved in form-deprivation myopia of the chick. Methods One-week-old white Leghorn chicks wore a unilateral image-degrading goggle for 6 hours or 3 days (n = 6 at each time). Total RNA from the retina/(retinal pigment epithelium) was used for expression profiling with chicken gene microarrays (Chicken GeneChips; Affymetrix, Santa Clara, CA). To identify gene expression level differences between goggled and contralateral nongoggled eyes, normalized microarray signal intensities were analyzed by the significance analysis of microarrays (SAM) approach. Differentially expressed genes were validated by real-time quantitative reverse transcription–polymerase chain reaction (qPCR) in independent biological replicates. Results Small changes were detected in differentially expressed genes in form-deprived eyes. In chickens that had 6 hours of goggle wear, downregulation of bone morphogenetic protein 2 and connective tissue growth factor was validated. In those with 3 days of goggle wear, downregulation of bone morphogenetic protein 2, vasoactive intestinal peptide, preopro-urotensin II–related peptide and mitogen-activated protein kinase phosphatase 2 was validated, and upregulation of endothelin receptor type B and interleukin-18 was validated. Conclusions Form-deprivation myopia, in its early stages, is associated with only minimal changes in retinal gene expression at the level of the transcriptome. While the list of validated genes is short, each merits further study for potential involvement in the signaling cascade mediating myopia development. PMID:17652709

  4. Classification of EMG signals using PSO optimized SVM for diagnosis of neuromuscular disorders.

    PubMed

    Subasi, Abdulhamit

    2013-06-01

    Support vector machine (SVM) is an extensively used machine learning method with many biomedical signal classification applications. In this study, a novel PSO-SVM model has been proposed that hybridized the particle swarm optimization (PSO) and SVM to improve the EMG signal classification accuracy. This optimization mechanism involves kernel parameter setting in the SVM training procedure, which significantly influences the classification accuracy. The experiments were conducted on the basis of EMG signal to classify into normal, neurogenic or myopathic. In the proposed method the EMG signals were decomposed into the frequency sub-bands using discrete wavelet transform (DWT) and a set of statistical features were extracted from these sub-bands to represent the distribution of wavelet coefficients. The obtained results obviously validate the superiority of the SVM method compared to conventional machine learning methods, and suggest that further significant enhancements in terms of classification accuracy can be achieved by the proposed PSO-SVM classification system. The PSO-SVM yielded an overall accuracy of 97.41% on 1200 EMG signals selected from 27 subject records against 96.75%, 95.17% and 94.08% for the SVM, the k-NN and the RBF classifiers, respectively. PSO-SVM is developed as an efficient tool so that various SVMs can be used conveniently as the core of PSO-SVM for diagnosis of neuromuscular disorders. Copyright © 2013 Elsevier Ltd. All rights reserved.

  5. Swarm autonomic agents with self-destruct capability

    NASA Technical Reports Server (NTRS)

    Hinchey, Michael G. (Inventor); Sterritt, Roy (Inventor)

    2009-01-01

    Systems, methods and apparatus are provided through which in some embodiments an autonomic entity manages a system by generating one or more stay alive signals based on the functioning status and operating state of the system. In some embodiments, an evolvable synthetic neural system is operably coupled to one or more evolvable synthetic neural systems in a hierarchy. The evolvable neural interface receives and generates heartbeat monitor signals and pulse monitor signals that are used to generate a stay alive signal that is used to manage the operations of the synthetic neural system. In another embodiment an asynchronous Alice signal (Autonomic license) requiring valid credentials of an anonymous autonomous agent is initiated. An unsatisfactory Alice exchange may lead to self-destruction of the anonymous autonomous agent for self-protection.

  6. Swarm autonomic agents with self-destruct capability

    NASA Technical Reports Server (NTRS)

    Hinchey, Michael G. (Inventor); Sterritt, Roy (Inventor)

    2011-01-01

    Systems, methods and apparatus are provided through which in some embodiments an autonomic entity manages a system by generating one or more stay alive signals based on the functioning status and operating state of the system. In some embodiments, an evolvable synthetic neural system is operably coupled to one or more evolvable synthetic neural systems in a hierarchy. The evolvable neural interface receives and generates heartbeat monitor signals and pulse monitor signals that are used to generate a stay alive signal that is used to manage the operations of the synthetic neural system. In another embodiment an asynchronous Alice signal (Autonomic license) requiring valid credentials of an anonymous autonomous agent is initiated. An unsatisfactory Alice exchange may lead to self-destruction of the anonymous autonomous agent for self-protection.

  7. AVIRIS calibration using the cloud-shadow method

    NASA Technical Reports Server (NTRS)

    Carder, K. L.; Reinersman, P.; Chen, R. F.

    1993-01-01

    More than 90 percent of the signal at an ocean-viewing, satellite sensor is due to the atmosphere, so a 5 percent sensor-calibration error viewing a target that contributes but 10 percent of the signal received at the sensor may result in a target-reflectance error of more than 50 percent. Since prelaunch calibration accuracies of 5 percent are typical of space-sensor requirements, recalibration of the sensor using ground-base methods is required for low-signal target. Known target reflectance or water-leaving radiance spectra and atmospheric correction parameters are required. In this article we describe an atmospheric-correction method that uses cloud shadowed pixels in combination with pixels in a neighborhood region of similar optical properties to remove atmospheric effects from ocean scenes. These neighboring pixels can then be used as known reflectance targets for validation of the sensor calibration and atmospheric correction. The method uses the difference between water-leaving radiance values for these two regions. This allows nearly identical optical contributions to the two signals (e.g., path radiance and Fresnel-reflected skylight) to be removed, leaving mostly solar photons backscattered from beneath the sea to dominate the residual signal. Normalization by incident solar irradiance reaching the sea surface provides the remote-sensing reflectance of the ocean at the location of the neighbor region.

  8. A novel non-contact radar sensor for affective and interactive analysis.

    PubMed

    Lin, Hong-Dun; Lee, Yen-Shien; Shih, Hsiang-Lan; Chuang, Bor-Nian

    2013-01-01

    Currently, many physiological signal sensing techniques have been applied for affective analysis in Human-Computer Interaction applications. Most known maturely developed sensing methods (EEG/ECG/EMG/Temperature/BP etc. al.) replied on contact way to obtain desired physiological information for further data analysis. However, those methods might cause some inconvenient and uncomfortable problems, and not easy to be used for affective analysis in interactive performing. To improve this issue, a novel technology based on low power radar technology (Nanosecond Pulse Near-field Sensing, NPNS) with 300 MHz radio-frequency was proposed to detect humans' pulse signal by the non-contact way for heartbeat signal extraction. In this paper, a modified nonlinear HRV calculated algorithm was also developed and applied on analyzing affective status using extracted Peak-to-Peak Interval (PPI) information from detected pulse signal. The proposed new affective analysis method is designed to continuously collect the humans' physiological signal, and validated in a preliminary experiment with sound, light and motion interactive performance. As a result, the mean bias between PPI (from NPNS) and RRI (from ECG) shows less than 1ms, and the correlation is over than 0.88, respectively.

  9. Quantitative evaluation of deep and shallow tissue layers' contribution to fNIRS signal using multi-distance optodes and independent component analysis.

    PubMed

    Funane, Tsukasa; Atsumori, Hirokazu; Katura, Takusige; Obata, Akiko N; Sato, Hiroki; Tanikawa, Yukari; Okada, Eiji; Kiguchi, Masashi

    2014-01-15

    To quantify the effect of absorption changes in the deep tissue (cerebral) and shallow tissue (scalp, skin) layers on functional near-infrared spectroscopy (fNIRS) signals, a method using multi-distance (MD) optodes and independent component analysis (ICA), referred to as the MD-ICA method, is proposed. In previous studies, when the signal from the shallow tissue layer (shallow signal) needs to be eliminated, it was often assumed that the shallow signal had no correlation with the signal from the deep tissue layer (deep signal). In this study, no relationship between the waveforms of deep and shallow signals is assumed, and instead, it is assumed that both signals are linear combinations of multiple signal sources, which allows the inclusion of a "shared component" (such as systemic signals) that is contained in both layers. The method also assumes that the partial optical path length of the shallow layer does not change, whereas that of the deep layer linearly increases along with the increase of the source-detector (S-D) distance. Deep- and shallow-layer contribution ratios of each independent component (IC) are calculated using the dependence of the weight of each IC on the S-D distance. Reconstruction of deep- and shallow-layer signals are performed by the sum of ICs weighted by the deep and shallow contribution ratio. Experimental validation of the principle of this technique was conducted using a dynamic phantom with two absorbing layers. Results showed that our method is effective for evaluating deep-layer contributions even if there are high correlations between deep and shallow signals. Next, we applied the method to fNIRS signals obtained on a human head with 5-, 15-, and 30-mm S-D distances during a verbal fluency task, a verbal working memory task (prefrontal area), a finger tapping task (motor area), and a tetrametric visual checker-board task (occipital area) and then estimated the deep-layer contribution ratio. To evaluate the signal separation performance of our method, we used the correlation coefficients of a laser-Doppler flowmetry (LDF) signal and a nearest 5-mm S-D distance channel signal with the shallow signal. We demonstrated that the shallow signals have a higher temporal correlation with the LDF signals and with the 5-mm S-D distance channel than the deep signals. These results show the MD-ICA method can discriminate between deep and shallow signals. Copyright © 2013 Elsevier Inc. All rights reserved.

  10. A SSVEP Stimuli Encoding Method Using Trinary Frequency-Shift Keying Encoded SSVEP (TFSK-SSVEP).

    PubMed

    Zhao, Xing; Zhao, Dechun; Wang, Xia; Hou, Xiaorong

    2017-01-01

    SSVEP is a kind of BCI technology with advantage of high information transfer rate. However, due to its nature, frequencies could be used as stimuli are scarce. To solve such problem, a stimuli encoding method which encodes SSVEP signal using Frequency Shift-Keying (FSK) method is developed. In this method, each stimulus is controlled by a FSK signal which contains three different frequencies that represent "Bit 0," "Bit 1" and "Bit 2" respectively. Different to common BFSK in digital communication, "Bit 0" and "Bit 1" composited the unique identifier of stimuli in binary bit stream form, while "Bit 2" indicates the ending of a stimuli encoding. EEG signal is acquired on channel Oz, O1, O2, Pz, P3, and P4, using ADS1299 at the sample rate of 250 SPS. Before original EEG signal is quadrature demodulated, it is detrended and then band-pass filtered using FFT-based FIR filtering to remove interference. Valid peak of the processed signal is acquired by calculating its derivative and converted into bit stream using window method. Theoretically, this coding method could implement at least 2 n -1 ( n is the length of bit command) stimulus while keeping the ITR the same. This method is suitable to implement stimuli on a monitor and where the frequency and phase could be used to code stimuli is limited as well as implementing portable BCI devices which is not capable of performing complex calculations.

  11. Computing moment to moment BOLD activation for real-time neurofeedback

    PubMed Central

    Hinds, Oliver; Ghosh, Satrajit; Thompson, Todd W.; Yoo, Julie J.; Whitfield-Gabrieli, Susan; Triantafyllou, Christina; Gabrieli, John D.E.

    2013-01-01

    Estimating moment to moment changes in blood oxygenation level dependent (BOLD) activation levels from functional magnetic resonance imaging (fMRI) data has applications for learned regulation of regional activation, brain state monitoring, and brain-machine interfaces. In each of these contexts, accurate estimation of the BOLD signal in as little time as possible is desired. This is a challenging problem due to the low signal-to-noise ratio of fMRI data. Previous methods for real-time fMRI analysis have either sacrificed the ability to compute moment to moment activation changes by averaging several acquisitions into a single activation estimate or have sacrificed accuracy by failing to account for prominent sources of noise in the fMRI signal. Here we present a new method for computing the amount of activation present in a single fMRI acquisition that separates moment to moment changes in the fMRI signal intensity attributable to neural sources from those due to noise, resulting in a feedback signal more reflective of neural activation. This method computes an incremental general linear model fit to the fMRI timeseries, which is used to calculate the expected signal intensity at each new acquisition. The difference between the measured intensity and the expected intensity is scaled by the variance of the estimator in order to transform this residual difference into a statistic. Both synthetic and real data were used to validate this method and compare it to the only other published real-time fMRI method. PMID:20682350

  12. Non-invasive Fetal ECG Signal Quality Assessment for Multichannel Heart Rate Estimation.

    PubMed

    Andreotti, Fernando; Graser, Felix; Malberg, Hagen; Zaunseder, Sebastian

    2017-12-01

    The noninvasive fetal ECG (NI-FECG) from abdominal recordings offers novel prospects for prenatal monitoring. However, NI-FECG signals are corrupted by various nonstationary noise sources, making the processing of abdominal recordings a challenging task. In this paper, we present an online approach that dynamically assess the quality of NI-FECG to improve fetal heart rate (FHR) estimation. Using a naive Bayes classifier, state-of-the-art and novel signal quality indices (SQIs), and an existing adaptive Kalman filter, FHR estimation was improved. For the purpose of training and validating the proposed methods, a large annotated private clinical dataset was used. The suggested classification scheme demonstrated an accuracy of Krippendorff's alpha in determining the overall quality of NI-FECG signals. The proposed Kalman filter outperformed alternative methods for FHR estimation achieving accuracy. The proposed algorithm was able to reliably reflect changes of signal quality and can be used in improving FHR estimation. NI-ECG signal quality estimation and multichannel information fusion are largely unexplored topics. Based on previous works, multichannel FHR estimation is a field that could strongly benefit from such methods. The developed SQI algorithms as well as resulting classifier were made available under a GNU GPL open-source license and contributed to the FECGSYN toolbox.

  13. Simultaneous quantitative analysis of olmesartan, amlodipine and hydrochlorothiazide in their combined dosage form utilizing classical and alternating least squares based chemometric methods.

    PubMed

    Darwish, Hany W; Bakheit, Ahmed H; Abdelhameed, Ali S

    2016-03-01

    Simultaneous spectrophotometric analysis of a multi-component dosage form of olmesartan, amlodipine and hydrochlorothiazide used for the treatment of hypertension has been carried out using various chemometric methods. Multivariate calibration methods include classical least squares (CLS) executed by net analyte processing (NAP-CLS), orthogonal signal correction (OSC-CLS) and direct orthogonal signal correction (DOSC-CLS) in addition to multivariate curve resolution-alternating least squares (MCR-ALS). Results demonstrated the efficiency of the proposed methods as quantitative tools of analysis as well as their qualitative capability. The three analytes were determined precisely using the aforementioned methods in an external data set and in a dosage form after optimization of experimental conditions. Finally, the efficiency of the models was validated via comparison with the partial least squares (PLS) method in terms of accuracy and precision.

  14. Independent component analysis for the extraction of reliable protein signal profiles from MALDI-TOF mass spectra.

    PubMed

    Mantini, Dante; Petrucci, Francesca; Del Boccio, Piero; Pieragostino, Damiana; Di Nicola, Marta; Lugaresi, Alessandra; Federici, Giorgio; Sacchetta, Paolo; Di Ilio, Carmine; Urbani, Andrea

    2008-01-01

    Independent component analysis (ICA) is a signal processing technique that can be utilized to recover independent signals from a set of their linear mixtures. We propose ICA for the analysis of signals obtained from large proteomics investigations such as clinical multi-subject studies based on MALDI-TOF MS profiling. The method is validated on simulated and experimental data to demonstrate its capability of correctly extracting protein profiles from MALDI-TOF mass spectra. The comparison on peak detection with one open-source and two commercial methods shows its superior reliability in reducing the false discovery rate of protein peak masses. Moreover, the integration of ICA and statistical tests for detecting the differences in peak intensities between experimental groups makes it possible to identify protein peaks that could be indicators of a diseased state. This data-driven approach proves to be a promising tool for biomarker-discovery studies based on MALDI-TOF MS technology. The MATLAB implementation of the method described in the article and both simulated and experimental data are freely available at http://www.unich.it/proteomica/bioinf/.
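
    The ICA step itself can be sketched with scikit-learn's FastICA applied to a matrix of spectra (rows treated as m/z bins, columns as individual spectra); the simulated peaks and mixing weights below are illustrative assumptions, not the authors' MATLAB implementation.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)

# Simulated data: 3 latent "protein profiles" mixed into 20 observed spectra.
# In a real MALDI-TOF study, each spectrum would come from one subject.
n_bins = 500
mz = np.arange(n_bins)
sources = np.vstack([
    np.exp(-0.5 * ((mz - c) / 5.0) ** 2)        # Gaussian peaks as toy protein signals
    for c in (100, 250, 400)
])
mixing = rng.uniform(0.2, 1.0, size=(20, 3))    # subject-specific mixing weights
spectra = mixing @ sources + rng.normal(0, 0.01, size=(20, n_bins))

# Recover independent spectral profiles: treat m/z bins as observations.
ica = FastICA(n_components=3, random_state=0)
profiles = ica.fit_transform(spectra.T)         # (n_bins, n_components) candidate profiles
subject_weights = ica.mixing_                   # (n_spectra, n_components) per-spectrum weights

print(profiles.shape, subject_weights.shape)
```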

  15. Validation of attenuation, beam blockage, and calibration estimation methods using two dual polarization X band weather radars

    NASA Astrophysics Data System (ADS)

    Diederich, M.; Ryzhkov, A.; Simmer, C.; Mühlbauer, K.

    2011-12-01

    The amplitude of a radar wave reflected by meteorological targets can be misjudged due to several factors. At X band wavelengths, attenuation of the radar beam by hydrometeors reduces the signal strength enough to be a significant source of error for quantitative precipitation estimation. Depending on the surrounding orography, the radar beam may be partially blocked when scanning at low elevation angles, and knowledge of the exact amount of signal loss through beam blockage becomes necessary. The phase shift between the radar signals at horizontal and vertical polarizations is affected by the hydrometeors that the beam travels through, but remains unaffected by variations in signal strength. This has allowed for several ways of compensating for the attenuation of the signal, and for consistency checks between these variables. In this study, we make use of several weather radars and a gauge network measuring in the same area to examine the effectiveness of several methods of attenuation and beam blockage correction. The methods include consistency checks between radar reflectivity and specific differential phase, calculation of beam blockage using a topography map, estimation of attenuation using the differential propagation phase, and the ZPHI method proposed by Testud et al. in 2000. Results show the high effectiveness of differential phase in estimating attenuation, and the potential of the ZPHI method to compensate for attenuation, beam blockage, and calibration errors.
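
    One widely used building block for such corrections, not necessarily the exact formulation evaluated in this study, assumes that the path-integrated attenuation is roughly proportional to the accumulated differential phase; the coefficient below is an assumed, typical X-band value and would need tuning for a specific radar and climate.

```python
import numpy as np

def correct_attenuation(z_measured_dbz, phi_dp_deg, alpha=0.28):
    """Sketch of differential-phase-based attenuation correction at X band.

    z_measured_dbz : measured reflectivity along a ray (dBZ)
    phi_dp_deg     : differential propagation phase along the same ray (degrees)
    alpha          : assumed attenuation/phase coefficient in dB per degree
    """
    delta_phi = phi_dp_deg - phi_dp_deg[0]       # phase accumulated from the radar outward
    pia = alpha * delta_phi                      # two-way path-integrated attenuation (dB)
    return z_measured_dbz + pia                  # attenuation-corrected reflectivity


# toy ray: reflectivity biased low where phase has accumulated
phi = np.linspace(0, 40, 100)                    # degrees
z_meas = 35 - 0.28 * phi                         # attenuated profile of a uniform 35 dBZ field
print(correct_attenuation(z_meas, phi)[:3], correct_attenuation(z_meas, phi)[-1])
```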

  16. Objective Measure of Nasal Air Emission Using Nasal Accelerometry

    ERIC Educational Resources Information Center

    Cler, Meredith J.; Lien, Yu-An, S.; Braden, Maia N.; Mittleman, Talia; Downing, Kerri; Stepp, Cara, E.

    2016-01-01

    Purpose: This article describes the development and initial validation of an objective measure of nasal air emission (NAE) using nasal accelerometry. Method: Nasal acceleration and nasal airflow signals were simultaneously recorded while an expert speech language pathologist modeled NAEs at a variety of severity levels. In addition, microphone and…

  17. Multi-parameter Observations and Validation of Pre-earthquake Atmospheric Signals

    NASA Astrophysics Data System (ADS)

    Ouzounov, D.; Pulinets, S. A.; Hattori, K.; Mogi, T.; Kafatos, M.

    2014-12-01

    We present the latest developments in multi-sensor observations of short-term pre-earthquake phenomena preceding major earthquakes. We explore the potential of pre-seismic atmospheric and ionospheric signals to provide alerts for large earthquakes. To achieve this, we have begun validating anomalous ionospheric/atmospheric signals in retrospective and prospective modes. The integrated satellite and terrestrial framework (ISTF) is our method for validation and is based on a joint analysis of several physical and environmental parameters (satellite thermal infrared radiation (OLR), electron concentration in the ionosphere (GPS/TEC), VHF-band radio waves, radon/ion activities, air temperature, and seismicity patterns) that were found to be associated with earthquakes. The science rationale for multidisciplinary analysis is based on the concept of Lithosphere-Atmosphere-Ionosphere Coupling (LAIC) [Pulinets and Ouzounov, 2011], which explains the synergy of different geospace processes and anomalous variations, usually named short-term pre-earthquake anomalies. Our validation process consists of two steps: (1) a continuous retrospective analysis performed over two different regions with high seismicity, Taiwan and Japan, for 2003-2009; the retrospective tests (100+ major earthquakes, M>5.9, Taiwan and Japan) show anomalous OLR behavior before all of these events with no false negatives, and the false alarm ratio for false positives is less than 25%; (2) prospective testing using multiple parameters for potential M5.5+ events. The initial testing shows a systematic appearance of atmospheric anomalies days in advance of the M5.5+ events for Taiwan and Japan (Honshu and Hokkaido areas). Our initial prospective results suggest that our approach shows a systematic appearance of atmospheric anomalies one to several days prior to the largest earthquakes. This feature could be further studied and tested to advance the multi-sensor detection of pre-earthquake atmospheric signals.

  18. Verification and calibration of laser Doppler flowmetry (LDF) prototype for measurement of microcirculation

    NASA Astrophysics Data System (ADS)

    Li, Yung-Hui; Hu, Chia-Ming; Tsai, Ming-Lun

    2017-10-01

    Laser Doppler Flowmetry (LDF) is a non-invasive microcirculation measurement technique designed to measure microcirculation and perfusion in the skin, and it is highly applicable to healthcare. However, the cost of commercial LDF devices limits their prevalence and popularity. In this paper, continuing previous research, an LDF prototype was built from off-the-shelf electronic components. The raw signals acquired from the proposed LDF prototype were validated as relevant to the microcirculation flux. Furthermore, we sought to verify the consistency between the signals measured by both models and to find a transformation rule for the LDF prototype signals. For the purpose of verification and calibration of the LDF prototype signal features, we first collected a parallel database consisting of flux signals measured simultaneously by the commercial and prototype LDF devices. Second, we extracted specific frequency components of the normalized signals as features and used these features to establish a model that maps signals measured by the LDF prototype to the commercial device. The results of the experiment showed that after linear regression models were used to calibrate the physiological features, the correlation coefficient reached nearly 0.9999, which is close to a perfect positive correlation. The overall evaluation results showed that the proposed method can verify and ensure the validity of the LDF prototype. Through the proposed transformation, the flux signals measured by the proposed LDF prototype can be successfully transformed into the form they would take if measured by a commercial LDF device.
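
    The calibration step, mapping prototype readings to the commercial reference with a linear regression and checking the correlation coefficient, can be sketched as follows; the simulated flux values and the single feature per time window are assumptions for illustration only.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)

# Parallel recordings: flux features from the prototype and the commercial LDF.
# In the study these are frequency-band features of normalized signals; here a
# single simulated feature per time window stands in for them.
prototype_flux = rng.uniform(50, 200, size=(300, 1))
commercial_flux = 1.8 * prototype_flux[:, 0] + 12 + rng.normal(0, 2, 300)

# Fit the mapping from prototype readings to the commercial reference.
model = LinearRegression().fit(prototype_flux, commercial_flux)
calibrated = model.predict(prototype_flux)

# Correlation between the calibrated prototype output and the reference signal.
r = np.corrcoef(calibrated, commercial_flux)[0, 1]
print(f"slope={model.coef_[0]:.2f}, intercept={model.intercept_:.2f}, r={r:.4f}")
```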

  19. Real-Time Rotational Activity Detection in Atrial Fibrillation

    PubMed Central

    Ríos-Muñoz, Gonzalo R.; Arenal, Ángel; Artés-Rodríguez, Antonio

    2018-01-01

    Rotational activations, or spiral waves, are one of the proposed mechanisms for atrial fibrillation (AF) maintenance. We present a system for assessing the presence of rotational activity from intracardiac electrograms (EGMs). Our system is able to operate in real-time with multi-electrode catheters of different topologies in contact with the atrial wall, and it is based on new local activation time (LAT) estimation and rotational activity detection methods. The EGM LAT estimation method is based on the identification of the highest sustained negative slope of unipolar signals. The method is implemented as a linear filter whose output is interpolated on a regular grid to match any catheter topology. Its operation is illustrated on selected signals and compared to the classical Hilbert-Transform-based phase analysis. After the estimation of the LAT on the regular grid, the detection of rotational activity in the atrium is done by a novel method based on the optical flow of the wavefront dynamics, and a rotation pattern match. The methods have been validated using in silico and real AF signals. PMID:29593566
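
    The LAT idea, marking activation at the steepest sustained negative deflection of the unipolar EGM, can be sketched as below; the smoothing window and the toy waveform are assumptions, not the authors' filter design, and the optical-flow rotation detection is not covered here.

```python
import numpy as np

def estimate_lat(unipolar_egm, fs):
    """Sketch of local activation time (LAT) estimation from a unipolar EGM.

    Marks activation at the steepest negative deflection; the 'sustained'
    criterion is approximated by a short smoothing of the derivative.
    """
    dv = np.gradient(unipolar_egm) * fs            # first derivative (mV/s)
    kernel = np.ones(5) / 5                        # short smoothing window (assumed length)
    dv_smooth = np.convolve(dv, kernel, mode="same")
    lat_index = int(np.argmin(dv_smooth))          # most negative sustained slope
    return lat_index / fs                          # LAT in seconds from window start


# toy unipolar EGM: a biphasic deflection with its steepest downslope near 0.30 s
fs = 1000.0
t = np.arange(0, 0.6, 1 / fs)
egm = np.exp(-((t - 0.29) ** 2) / 2e-4) - np.exp(-((t - 0.31) ** 2) / 2e-4)
print(f"estimated LAT: {estimate_lat(egm, fs):.3f} s")
```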

  20. BACOM2.0 facilitates absolute normalization and quantification of somatic copy number alterations in heterogeneous tumor

    NASA Astrophysics Data System (ADS)

    Fu, Yi; Yu, Guoqiang; Levine, Douglas A.; Wang, Niya; Shih, Ie-Ming; Zhang, Zhen; Clarke, Robert; Wang, Yue

    2015-09-01

    Most published copy number datasets on solid tumors were obtained from specimens comprised of mixed cell populations, for which the varying tumor-stroma proportions are unknown or unreported. The inability to correct for signal mixing represents a major limitation on the use of these datasets for subsequent analyses, such as discerning deletion types or detecting driver aberrations. We describe the BACOM2.0 method with enhanced accuracy and functionality to normalize copy number signals, detect deletion types, estimate tumor purity, quantify true copy numbers, and calculate average-ploidy value. While BACOM has been validated and used with promising results, subsequent BACOM analysis of the TCGA ovarian cancer dataset found that the estimated average tumor purity was lower than expected. In this report, we first show that this lowered estimate of tumor purity is the combined result of imprecise signal normalization and parameter estimation. Then, we describe effective allele-specific absolute normalization and quantification methods that can enhance BACOM applications in many biological contexts while in the presence of various confounders. Finally, we discuss the advantages of BACOM in relation to alternative approaches. Here we detail this revised computational approach, BACOM2.0, and validate its performance in real and simulated datasets.

  1. A novel method for the simultaneous measurement of temperature and strain using a three-wire connection

    NASA Astrophysics Data System (ADS)

    Cappa, Paolo; Marinozzi, Franco; Sciuto, Salvatore Andrea

    2001-04-01

    A novel methodology to simultaneously measure strain and temperature by means of an electrical resistance strain gauge powered by an ac signal and connected to a strain indicator by means of thermocouple wires is proposed. The experimental validation of the viability of this method is conducted by means of a purely electrical simulation of both strain and temperature signals, respectively from -2000 to 2000 µm m-1 and -250 to 230 °C. The results obtained showed that strain measurement is affected by an error always less than ±2 µm m-1 for the whole range of simulated strains, while the error in temperature evaluation is always less than 0.6 °C. The effect of cross-talk between the two signals was determined to be insignificant.

  2. Application of Petri net based analysis techniques to signal transduction pathways

    PubMed Central

    Sackmann, Andrea; Heiner, Monika; Koch, Ina

    2006-01-01

    Background Signal transduction pathways are usually modelled using classical quantitative methods, which are based on ordinary differential equations (ODEs). However, some difficulties are inherent in this approach. On the one hand, the kinetic parameters involved are often unknown and have to be estimated. With increasing size and complexity of signal transduction pathways, the estimation of missing kinetic data is not possible. On the other hand, ODE-based models do not offer explicit insight into possible (signal) flows within the network. Moreover, a huge amount of qualitative data is available due to high-throughput techniques. In order to get information on the system's behaviour, qualitative analysis techniques have been developed. Applications of the known qualitative analysis methods mainly concern metabolic networks. Petri net theory provides a variety of established analysis techniques, which are also applicable to signal transduction models. In this context special properties have to be considered and new dedicated techniques have to be designed. Methods We apply Petri net theory to model and analyse signal transduction pathways first qualitatively before continuing with quantitative analyses. This paper demonstrates how to systematically build a discrete model that provably reflects the qualitative biological behaviour without any knowledge of kinetic parameters. The mating pheromone response pathway in Saccharomyces cerevisiae serves as a case study. Results We propose an approach for model validation of signal transduction pathways based on the network structure only. For this purpose, we introduce the new notion of feasible t-invariants, which represent minimal self-contained subnets that are active under a given input situation. Each of these subnets stands for a signal flow in the system. We define maximal common transition sets (MCT-sets), which can be used for t-invariant examination and net decomposition into smallest biologically meaningful functional units. Conclusion The paper demonstrates how Petri net analysis techniques can promote a deeper understanding of signal transduction pathways. The new concepts of feasible t-invariants and MCT-sets have been proven to be useful for model validation and the interpretation of the biological system behaviour. Whereas MCT-sets provide a decomposition of the net into disjunctive subnets, feasible t-invariants describe subnets that generally overlap. This work contributes to qualitative modelling and to the analysis of large biological networks by their fully automatic decomposition into biologically meaningful modules. PMID:17081284
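
    The defining property of a t-invariant, a non-negative integer vector x with C·x = 0 for the incidence matrix C, can be illustrated on a toy net; real analyses use dedicated algorithms for minimal (feasible) invariants, so this is only a sketch of the underlying equation under assumed net structure.

```python
from math import lcm

from sympy import Matrix

# Incidence matrix C (places x transitions) of a tiny cyclic net:
# t1 moves a token from p1 to p2, t2 from p2 to p3, t3 from p3 back to p1.
C = Matrix([
    [-1,  0,  1],   # p1
    [ 1, -1,  0],   # p2
    [ 0,  1, -1],   # p3
])

# A t-invariant is a non-negative integer vector x with C * x = 0.
# sympy returns a rational basis of the nullspace; scale each vector to integers.
invariants = []
for v in C.nullspace():
    scale = lcm(*[int(entry.q) for entry in v])   # clear denominators
    invariants.append([int(entry) for entry in (v * scale)])

print(invariants)   # [[1, 1, 1]]: firing t1, t2, t3 once reproduces the marking
```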

  3. Parameter tuning method for dither compensation of a pneumatic proportional valve with friction

    NASA Astrophysics Data System (ADS)

    Wang, Tao; Song, Yang; Huang, Leisheng; Fan, Wei

    2016-05-01

    In the practical application of pneumatic control devices, the nonlinearity of a pneumatic control valve becomes the main factor affecting the control performance, and it arises mainly from the dynamic friction force. The dynamic friction inside the valve may cause hysteresis and a dead zone. In this paper, a dither compensation mechanism is proposed to reduce these negative effects, based on an analysis of the friction mechanism. A specific dither signal (a sinusoidal signal) was superimposed on the control signal of the valve. Based on the relationship between the parameters of the dither signal and the inherent characteristics of the proportional servo valve, a parameter tuning method was proposed, which uses a displacement sensor to measure the maximum static friction inside the valve. According to the experimental results, the proper amplitude ranges are determined for different pressures. In order to obtain the optimal parameters of the dither signal, dither compensation experiments were carried out under different signal amplitude and gas pressure conditions, and optimal parameters were determined under two pressure conditions. Using the tuned parameters, a valve spool displacement experiment was conducted. The experimental results show that the hysteresis of the proportional servo valve is significantly reduced, and simulations and experiments show that the cut-off frequency of the proportional valve is also widened. Therefore, after adding the dither signal, the static and dynamic characteristics of the proportional valve are both improved to a certain degree. This research proposes a parameter tuning method for the dither signal, and the validity of the method is verified experimentally.
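
    Superimposing the dither on the control command is itself a one-line operation; the amplitude and frequency below are placeholders, not the tuned values reported in the paper.

```python
import numpy as np

def add_dither(control_signal, t, amplitude, frequency_hz):
    """Superimpose a sinusoidal dither on a proportional-valve control signal.

    amplitude and frequency_hz are the quantities tuned in the paper; the
    values used in the example below are placeholders, not the tuned values.
    """
    return control_signal + amplitude * np.sin(2 * np.pi * frequency_hz * t)


# toy example: a slow ramp command with a small high-frequency dither added
t = np.linspace(0, 1, 10_000)             # 1 s at 10 kHz
ramp = 2.0 + 3.0 * t                       # command voltage (V), assumed scale
dithered = add_dither(ramp, t, amplitude=0.05, frequency_hz=200.0)
print(dithered[:3])
```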

  4. Regularized non-stationary morphological reconstruction algorithm for weak signal detection in microseismic monitoring: methodology

    NASA Astrophysics Data System (ADS)

    Huang, Weilin; Wang, Runqiu; Chen, Yangkang

    2018-05-01

    The microseismic signal is typically weak compared with the strong background noise. In order to effectively detect the weak signal in microseismic data, we propose a mathematical-morphology-based approach. We decompose the initial data into several morphological multiscale components. For detection of the weak signal, a non-stationary weighting operator is proposed and introduced into the reconstruction of the data from the morphological multiscale components. The non-stationary weighting operator can be obtained by solving an inversion problem. The regularized non-stationary method can be understood as a non-stationary matching filtering method, where the matching filter has the same size as the data to be filtered. In this paper, we provide detailed algorithmic descriptions and analysis. The detailed algorithm framework, parameter selection and computational issues for the regularized non-stationary morphological reconstruction (RNMR) method are presented. We validate the presented method through a comprehensive analysis of different data examples. We first test the proposed technique using a synthetic data set. Then the proposed technique is applied to a field project, where the signals induced by hydraulic fracturing are recorded by 12 three-component geophones in a monitoring well. The result demonstrates that RNMR can improve the detectability of weak microseismic signals. Using the processed data, the short-term-average over long-term-average picking algorithm and Geiger's method are applied to obtain new locations of microseismic events. In addition, we show that the proposed RNMR method can be used not only on microseismic data but also on reflection seismic data to detect weak signals. We also discuss the extension of RNMR from 1-D to 2-D or higher-dimensional versions.
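
    The short-term-average over long-term-average (STA/LTA) picking step mentioned above is a standard algorithm and can be sketched as follows; the window lengths and the toy trace are assumptions, not the configuration used in the study.

```python
import numpy as np

def sta_lta(signal, fs, sta_win=0.05, lta_win=0.5):
    """Classic STA/LTA ratio for event picking (window lengths in seconds)."""
    energy = signal ** 2
    n_sta = max(int(sta_win * fs), 1)
    n_lta = max(int(lta_win * fs), 1)
    sta = np.convolve(energy, np.ones(n_sta) / n_sta, mode="same")
    lta = np.convolve(energy, np.ones(n_lta) / n_lta, mode="same")
    return sta / (lta + 1e-12)              # small constant avoids division by zero


# toy trace: noise with a weak arrival at 2.0 s
fs = 1000.0
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(3)
trace = rng.normal(0, 1, t.size)
trace[int(2.0 * fs):int(2.2 * fs)] += 3 * np.sin(2 * np.pi * 30 * t[: int(0.2 * fs)])
ratio = sta_lta(trace, fs)
print(f"pick near {t[np.argmax(ratio)]:.2f} s")
```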

  5. Improved Reconstruction of Radio Holographic Signal for Forward Scatter Radar Imaging

    PubMed Central

    Hu, Cheng; Liu, Changjiang; Wang, Rui; Zeng, Tao

    2016-01-01

    Forward scatter radar (FSR), as a specially configured bistatic radar, offers target recognition and classification capabilities through Shadow Inverse Synthetic Aperture Radar (SISAR) imaging technology. This paper mainly discusses the reconstruction of the radio holographic signal (RHS), which is an important procedure in the signal processing of FSR SISAR imaging. Based on an analysis of the signal characteristics, the method for RHS reconstruction is improved in two parts: the segmental Hilbert transformation and the reconstruction of the mainlobe RHS. In addition, a quantitative analysis of the method’s applicability is presented by distinguishing between the near field and far field in forward scattering. Simulation results validated the method’s advantages in improving the accuracy of RHS reconstruction and imaging. PMID:27164114
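
    The generic building block behind a segmental Hilbert transformation, applying the analytic-signal transform segment by segment and taking the envelope, can be sketched as below; the fixed-length segments are an assumption, whereas the paper chooses segment boundaries from the FSR signal structure.

```python
import numpy as np
from scipy.signal import hilbert

def segmental_envelope(signal, segment_len):
    """Apply the Hilbert transform segment by segment and return the envelope."""
    env = np.empty_like(signal, dtype=float)
    for start in range(0, len(signal), segment_len):
        seg = signal[start:start + segment_len]
        env[start:start + len(seg)] = np.abs(hilbert(seg))  # analytic-signal magnitude
    return env


# toy amplitude-modulated chirp-like signal
fs = 1000.0
t = np.arange(0, 1, 1 / fs)
sig = np.cos(2 * np.pi * (5 + 20 * t) * t) * (1 + 0.5 * np.sin(2 * np.pi * 1 * t))
print(segmental_envelope(sig, segment_len=250)[:5])
```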

  6. Prospective Validation of Pre-earthquake Atmospheric Signals and Their Potential for Short–term Earthquake Forecasting

    NASA Astrophysics Data System (ADS)

    Ouzounov, Dimitar; Pulinets, Sergey; Hattori, Katsumi; Lee, Lou; Liu, Tiger; Kafatos, Menas

    2015-04-01

    We present the latest developments in multi-sensor observations of short-term pre-earthquake phenomena preceding major earthquakes. Our challenge question is: "Are such pre-earthquake atmospheric/ionospheric signals significant, and could they be useful for early warning of large earthquakes?" To check the predictive potential of atmospheric pre-earthquake signals, we have started to validate anomalous ionospheric/atmospheric signals in retrospective and prospective modes. The integrated satellite and terrestrial framework (ISTF) is our method for validation and is based on a joint analysis of several physical and environmental parameters (satellite thermal infrared radiation (STIR), electron concentration in the ionosphere (GPS/TEC), radon/ion activities, air temperature, and seismicity patterns) that were found to be associated with earthquakes. The science rationale for multidisciplinary analysis is based on the concept of Lithosphere-Atmosphere-Ionosphere Coupling (LAIC) [Pulinets and Ouzounov, 2011], which explains the synergy of different geospace processes and anomalous variations, usually named short-term pre-earthquake anomalies. Our validation process consists of two steps: (1) a continuous retrospective analysis performed over two different regions with high seismicity, Taiwan and Japan, for 2003-2009; (2) prospective testing of STIR anomalies for potential M5.5+ events. The retrospective tests (100+ major earthquakes, M>5.9, Taiwan and Japan) show anomalous STIR behavior before all of these events with false negatives close to zero, and the false alarm ratio for false positives is less than 25%. The initial prospective testing for STIR shows a systematic appearance of anomalies 1-30 days in advance of the M5.5+ events for Taiwan, Kamchatka-Sakhalin (Russia) and Japan. Our initial prospective results suggest that our approach shows a systematic appearance of atmospheric anomalies one to several days prior to the largest earthquakes. This feature could be further studied and tested for prospective early warnings based on multi-sensor detection of pre-earthquake atmospheric signals.

  7. Calibration and validation of wearable monitors.

    PubMed

    Bassett, David R; Rowlands, Alex; Trost, Stewart G

    2012-01-01

    Wearable monitors are increasingly being used to objectively monitor physical activity in research studies within the field of exercise science. Calibration and validation of these devices are vital to obtaining accurate data. This article is aimed primarily at the physical activity measurement specialist, although the end user who is conducting studies with these devices also may benefit from knowing about this topic. Initially, wearable physical activity monitors should undergo unit calibration to ensure interinstrument reliability. The next step is to simultaneously collect both raw signal data (e.g., acceleration) from the wearable monitors and rates of energy expenditure, so that algorithms can be developed to convert the direct signals into energy expenditure. This process should use multiple wearable monitors and a large and diverse subject group and should include a wide range of physical activities commonly performed in daily life (from sedentary to vigorous). New methods of calibration now use "pattern recognition" approaches to train the algorithms on various activities, and they provide estimates of energy expenditure that are much better than those previously available with the single-regression approach. Once a method of predicting energy expenditure has been established, the next step is to examine its predictive accuracy by cross-validating it in other populations. In this article, we attempt to summarize the best practices for calibration and validation of wearable physical activity monitors. Finally, we conclude with some ideas for future research that will move the field of physical activity measurement forward.

  8. Comparison of lifetime-based methods for 2D phosphor thermometry in high-temperature environment

    NASA Astrophysics Data System (ADS)

    Peng, Di; Liu, Yingzheng; Zhao, Xiaofeng; Kim, Kyung Chun

    2016-09-01

    This paper discusses the currently available techniques for 2D phosphor thermometry, and compares the performance of two lifetime-based methods: high-speed imaging and the dual-gate method. High-speed imaging resolves luminescent decay with a fast frame rate, and has become a popular method for phosphor thermometry in recent years. However, it has disadvantages such as high equipment cost and long data processing time, and it would fail at sufficiently high temperature due to a low signal-to-noise ratio and short lifetime. The dual-gate method only requires two images on the decay curve and therefore greatly reduces cost in hardware and processing time. A dual-gate method for phosphor thermometry has been developed and compared with the high-speed imaging method through both calibration and a jet impingement experiment. Measurement uncertainty has been evaluated for a temperature range of 473-833 K. The effects of several key factors on uncertainty have been discussed, including the luminescent signal level, the decay lifetime and temperature sensitivity. The results show that both methods are valid for 2D temperature sensing within the given range. The high-speed imaging method shows less uncertainty at low temperatures where the signal level and the lifetime are both sufficient, but its performance is degraded at higher temperatures due to a rapidly reduced signal and lifetime. For T > 750 K, the dual-gate method outperforms the high-speed imaging method thanks to its superiority in signal-to-noise ratio and temperature sensitivity. The dual-gate method has great potential for applications in high-temperature environments where the high-speed imaging method is not applicable.
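
    For an ideal single-exponential decay, the dual-gate lifetime follows directly from two gated intensities; the sketch below ignores gate width and detector effects, which the full method must handle, and the numbers are purely illustrative.

```python
import numpy as np

def dual_gate_lifetime(i1, i2, t1, t2):
    """Decay lifetime from two gated intensities of an exponential decay.

    Assumes I(t) = I0 * exp(-t / tau) sampled at times t1 and t2; gate widths
    and detector noise are ignored in this idealized sketch.
    """
    return (t2 - t1) / np.log(i1 / i2)


# toy check: tau = 40 microseconds, gates at 10 and 60 microseconds
tau_true = 40e-6
t1, t2 = 10e-6, 60e-6
i1, i2 = np.exp(-t1 / tau_true), np.exp(-t2 / tau_true)
print(dual_gate_lifetime(i1, i2, t1, t2))   # ~4e-05 s
# Temperature then follows from a lifetime-vs-temperature calibration curve.
```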

  9. Forskolin-free cAMP assay for Gi-coupled receptors.

    PubMed

    Gilissen, Julie; Geubelle, Pierre; Dupuis, Nadine; Laschet, Céline; Pirotte, Bernard; Hanson, Julien

    2015-12-01

    G protein-coupled receptors (GPCRs) represent the most successful receptor family for treating human diseases. Many are poorly characterized with few ligands reported or remain complete orphans. Therefore, there is a growing need for screening-compatible and sensitive assays. Measurement of intracellular cyclic AMP (cAMP) levels is a validated strategy for measuring GPCR activation. However, agonist ligands for Gi-coupled receptors are difficult to track because inducers such as forskolin (FSK) must be used and are sources of variations and errors. We developed a method based on the GloSensor system, a kinetic assay that consists of a luciferase fused with a cAMP-binding domain. As a proof of concept, we selected the succinate receptor 1 (SUCNR1 or GPR91), which could be an attractive drug target. It has never been validated as such because very few ligands have been described. Following analyses of SUCNR1 signaling pathways, we show that the GloSensor system allows real-time, FSK-free detection of an agonist effect. This FSK-free agonist signal was confirmed on other Gi-coupled receptors such as CXCR4. In a test screening on SUCNR1, we compared the results obtained with FSK vs. FSK-free protocols and were able to identify agonists with both methods but with fewer false positives when measuring the basal levels. In this report, we validate a cAMP-inducer-free method for the detection of Gi-coupled receptor agonists compatible with high-throughput screening. This method will facilitate the study and screening of Gi-coupled receptors for active ligands. Copyright © 2015 Elsevier Inc. All rights reserved.

  10. Fault detection, isolation, and diagnosis of self-validating multifunctional sensors.

    PubMed

    Yang, Jing-Li; Chen, Yin-Sheng; Zhang, Li-Li; Sun, Zhen

    2016-06-01

    A novel fault detection, isolation, and diagnosis (FDID) strategy for self-validating multifunctional sensors is presented in this paper. The sparse non-negative matrix factorization-based method can effectively detect faults by using the squared prediction error (SPE) statistic, and the variable contribution plots based on the SPE statistic can help to locate and isolate the faulty sensitive units. The complete ensemble empirical mode decomposition is employed to decompose the fault signals into a series of intrinsic mode functions (IMFs) and a residual. The sample entropy (SampEn)-weighted energy values of each IMF and the residual are estimated to represent the characteristics of the fault signals. A multi-class support vector machine is introduced to identify the fault mode with the purpose of diagnosing the status of the faulty sensitive units. The performance of the proposed strategy is compared with other fault detection strategies such as principal component analysis and independent component analysis, and fault diagnosis strategies such as empirical mode decomposition coupled with support vector machine. The proposed strategy is fully evaluated in a real self-validating multifunctional sensor experimental system, and the experimental results demonstrate that the proposed strategy provides an excellent solution to the FDID research topic of self-validating multifunctional sensors.
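
    The SPE statistic at the core of the detection step can be sketched with an ordinary NMF model standing in for the sparse NMF of the paper; the simulated data, component count and percentile control limit are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(4)

# Training data: 200 samples of 8 non-negative "sensitive unit" outputs.
X_train = rng.gamma(shape=2.0, scale=1.0, size=(200, 8))

# Plain NMF stands in for the sparse NMF used in the paper (illustration only).
model = NMF(n_components=3, init="nndsvda", max_iter=500, random_state=0)
model.fit(X_train)

def spe(x, model):
    """Squared prediction error of a sample under the factorization model."""
    w = model.transform(x.reshape(1, -1))            # project onto learned components
    residual = x - (w @ model.components_).ravel()   # reconstruction error
    return float(residual @ residual)

# Control limit from the training SPE distribution (simple percentile threshold).
spe_train = np.array([spe(x, model) for x in X_train])
limit = np.percentile(spe_train, 99)

x_new = X_train[0].copy()
x_new[2] += 5.0                                      # simulate a faulty sensitive unit
print(spe(x_new, model) > limit)                     # fault flagged if SPE exceeds the limit
```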

  11. Validity of SW982 synovial cell line for studying the drugs against rheumatoid arthritis in fluvastatin-induced apoptosis signaling model

    PubMed Central

    Chang, Jae-Ho; Lee, Kyu-Jae; Kim, Soo-Ki; Yoo, Dae-Hyun; Kang, Tae-Young

    2014-01-01

    Background & objectives: To study the effects of drugs against rheumatoid arthritis (RA), synoviocytes or fibroblast-like synoviocytes (FLS) are used. To overcome the drawbacks of using FLS, this study was conducted to show the validity of the SW982 synovial cell line for RA studies. Methods: 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay, Annexin V propidium iodide (PI) staining, mitochondrial membrane potential assay, Triton X-114 phase partitioning, and immunoblotting for apoptosis signaling in the SW982 human synovial cell line were performed. Results: Fluvastatin induced apoptosis in a dose- and time-dependent manner in TNFα-stimulated SW982 human synovial cells. A geranylgeranylpyrophosphate (GGPP) inhibitor, but not a farnesylpyrophosphate (FPP) inhibitor, induced apoptosis, and fluvastatin-induced apoptosis was associated with the translocation of isoprenylated RhoA and Rac1 proteins from the cell membrane to the cytosol. Fluvastatin-induced downstream apoptotic signals were associated with inhibition of the phosphoinositide 3-kinase (PI3K)/Akt pathway. Accordingly, the 89 kDa apoptotic cleavage fragment of poly (ADP-ribose) polymerase (PARP) was detected. Interpretation & conclusions: Collectively, our data indicate that fluvastatin induces apoptotic cell death in TNFα-stimulated SW982 human synovial cells through the inactivation of the geranylgeranylated membrane fraction of RhoA and Rac1 proteins and the subsequent inhibition of the PI3K/Akt signaling pathway. This finding shows the validity of the SW982 cell line for RA studies. PMID:24604047

  12. Dual-responsive immunosensor that combines colorimetric recognition and electrochemical response for ultrasensitive detection of cancer biomarkers.

    PubMed

    Hong, Wooyoung; Lee, Sooyeon; Cho, Youngnam

    2016-12-15

    We developed a nanoroughened, biotin-doped polypyrrole immunosensor for the detection of tumor markers through dual (electrochemical and colorimetric) signal channels, and it demonstrates remarkable analytical performance. A rapid, one-step electric field-mediated method was employed to fabricate the immunosensor with nanoscale roughness by simply modulating the applied electrical potential. We demonstrated the successful detection of three tumor markers (CA125, CEA, and PSA) via double enzymatic signal amplification in the presence of a target antigen, ultimately leading to the desired diagnostic accuracy and reliability. The addition of multiple horseradish peroxidase (HRP)- and antibody-labeled nanoparticles greatly amplified the signal and simplified the measurement of cancer biomarker proteins by sequentially magnifying electrochemical and colorimetric signals in a single platform. The two parallel assays performed using the proposed immunosensor have yielded highly consistent and reproducible results. Additionally, for the analysis of plasma samples in a clinical setting, the values obtained with our immunosensor were validated by correlating the results with those of a standard radioimmunoassay (RIA), which gave very similar, clinically valid responses. Copyright © 2016 Elsevier B.V. All rights reserved.

  13. Video-based respiration monitoring with automatic region of interest detection.

    PubMed

    Janssen, Rik; Wang, Wenjin; Moço, Andreia; de Haan, Gerard

    2016-01-01

    Vital signs monitoring is ubiquitous in clinical environments and emerging in home-based healthcare applications. Still, since current monitoring methods require uncomfortable sensors, respiration rate remains the least measured vital sign. In this paper, we propose a video-based respiration monitoring method that automatically detects a respiratory region of interest (RoI) and signal using a camera. Based on the observation that respiration induced chest/abdomen motion is an independent motion system in a video, our basic idea is to exploit the intrinsic properties of respiration to find the respiratory RoI and extract the respiratory signal via motion factorization. We created a benchmark dataset containing 148 video sequences obtained on adults under challenging conditions and also neonates in the neonatal intensive care unit (NICU). The measurements obtained by the proposed video respiration monitoring (VRM) method are not significantly different from the reference methods (guided breathing or contact-based ECG; p-value  =  0.6), and explain more than 99% of the variance of the reference values with low limits of agreement (-2.67 to 2.81 bpm). VRM seems to provide a valid solution to ECG in confined motion scenarios, though precision may be reduced for neonates. More studies are needed to validate VRM under challenging recording conditions, including upper-body motion types.

  14. Improving quantitative gas chromatography-electron ionization mass spectrometry results using a modified ion source: demonstration for a pharmaceutical application.

    PubMed

    D'Autry, Ward; Wolfs, Kris; Hoogmartens, Jos; Adams, Erwin; Van Schepdael, Ann

    2011-07-01

    Gas chromatography-mass spectrometry is a well-established analytical technique. However, mass spectrometers with electron ionization sources may suffer from signal drift, thereby negatively influencing quantitative performance. To demonstrate this phenomenon for a real application, a static headspace-gas chromatography method in combination with electron ionization-quadrupole mass spectrometry was optimized for the determination of residual dichloromethane in coronary stent coatings. During method validation, the quantitative performance of an original stainless steel ion source was compared to that of a modified ion source. Ion source modification included the application of a gold coating on the repeller and exit plate. Several validation aspects such as limit of detection, limit of quantification, linearity and precision were evaluated using both ion sources. It was found that, as expected, the stainless steel ion source suffered from signal drift. As a consequence, non-linearity and high RSD values for repeated analyses were obtained. An additional experiment was performed to check whether an internal standard compound would lead to better results. It was found that the signal drift patterns of the analyte and internal standard were different, consequently leading to high RSD values for the response factor. With the modified ion source, however, a more stable signal was observed, resulting in acceptable linearity and precision. Moreover, it was also found that sensitivity improved compared to the stainless steel ion source. Finally, the optimized method with the modified ion source was applied to determine residual dichloromethane in the coating of coronary stents. The solvent was detected but found to be below the limit of quantification. Copyright © 2011 Elsevier B.V. All rights reserved.

  15. Natural image classification driven by human brain activity

    NASA Astrophysics Data System (ADS)

    Zhang, Dai; Peng, Hanyang; Wang, Jinqiao; Tang, Ming; Xue, Rong; Zuo, Zhentao

    2016-03-01

    Natural image classification has been a hot topic in the computer vision and pattern recognition research fields. Since the performance of an image classification system can be improved by feature selection, many image feature selection methods have been developed. However, the existing supervised feature selection methods are typically driven by class label information that is identical for different samples from the same class, ignoring within-class image variability and therefore degrading the feature selection performance. In this study, we propose a novel feature selection method driven by human brain activity signals collected using the fMRI technique while human subjects were viewing natural images of different categories. The fMRI signals associated with subjects viewing different images encode the human perception of natural images, and therefore may capture image variability within and across categories. We then select image features with the guidance of fMRI signals from brain regions with an active response to image viewing. In particular, bag-of-words features based on the GIST descriptor are extracted from natural images for classification, and a sparse-regression-based feature selection method is adapted to select image features that can best predict the fMRI signals. Finally, a classification model is built on the selected image features to classify images without fMRI signals. The validation experiments for classifying images from 4 categories for two subjects demonstrated that our method could achieve much better classification performance than classifiers built on image features selected by traditional feature selection methods.
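
    The sparse-regression selection step can be sketched with a Lasso that predicts an fMRI response from image features and keeps the features with non-zero coefficients; all data below are simulated and the regularization strength is an assumption.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(5)

# Toy setup: 120 images, 300 bag-of-words/GIST-style features, and one simulated
# fMRI response per image (the real study uses measured BOLD responses from
# visually active regions).
n_images, n_features = 120, 300
image_features = rng.normal(size=(n_images, n_features))
true_weights = np.zeros(n_features)
true_weights[:10] = rng.normal(size=10)            # only 10 features drive the response
fmri_response = image_features @ true_weights + rng.normal(0, 0.5, n_images)

# Sparse regression: features with non-zero coefficients are kept for the
# downstream image classifier, which no longer needs fMRI at test time.
lasso = Lasso(alpha=0.1).fit(image_features, fmri_response)
selected = np.flatnonzero(lasso.coef_)
print(f"{selected.size} features selected, first few indices: {selected[:10]}")
```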

  16. Validation of a Mobile Device for Acoustic Coordinated Reset Neuromodulation Tinnitus Therapy.

    PubMed

    Hauptmann, Christian; Wegener, Alexander; Poppe, Hendrik; Williams, Mark; Popelka, Gerald; Tass, Peter A

    2016-10-01

    Sound-based tinnitus intervention stimuli include broad-band noise signals with subjectively adjusted bandwidths used as maskers delivered by commercial devices or hearing aids, environmental sounds broadly described and delivered by both consumer devices and hearing aids, music recordings specifically modified and delivered in a variety of different ways, and other stimuli. Acoustic coordinated reset neuromodulation therapy for tinnitus reduction has unique and more stringent requirements compared to all other sound-based tinnitus interventions. These include precise characterization of tinnitus pitch and loudness, and effective provision of patient-controlled daily therapy signals at defined frequencies, levels, and durations outside of the clinic. The purpose of this study was to evaluate an approach to accommodate these requirements including evaluation of a mobile device, validation of an automated tinnitus pitch-matching algorithm and assessment of a patient's ability to control stimuli and collect repeated outcome measures. The experimental design involved direct laboratory measurements of the sound delivery capabilities of a mobile device, comparison of an automated, adaptive pitch-matching method to a traditional manual method and measures of a patient's ability to understand and manipulate a mobile device graphic user interface to both deliver the therapy signals and collect the outcome measures. This study consisted of 5 samples of a common mobile device for the laboratory measures and a total of 30 adult participants: 15 randomly selected normal-hearing participants with simulated tinnitus for validation of a tinnitus pitch-matching algorithm and 15 sequentially selected patients already undergoing tinnitus therapy for evaluation of patient usability. No tinnitus intervention(s) were specifically studied as a component of this study. Data collection involved laboratory measures of mobile devices, comparison of manual and automated adaptive tinnitus pitch-matching psychoacoustic procedures in the same participant analyzed for absolute differences (t test), variance differences (f test), and range comparisons, and assessment of patient usability including questionnaire measures and logs of patient observations. Mobile devices are able to reliably and accurately deliver the acoustic therapy signals. There was no difference in mean pitch matches (t test, p > 0.05) between an automated adaptive method compared to a traditional manual pitch-matching method. However, the variability of the automated pitch-matching method was much less (f test, p < 0.05) with twice as many matches within the predefined error range (±5%) compared to the manual pitch-matching method (80% versus 40%). After a short initial training, all participants were able to use the mobile device effectively and to perform the required tasks without further professional assistance. American Academy of Audiology

  17. Validations of calibration-free measurements of electron temperature using double-pass Thomson scattering diagnostics from theoretical and experimental aspects.

    PubMed

    Tojo, H; Yamada, I; Yasuhara, R; Ejiri, A; Hiratsuka, J; Togashi, H; Yatsuka, E; Hatae, T; Funaba, H; Hayashi, H; Takase, Y; Itami, K

    2016-09-01

    This paper evaluates the accuracy of electron temperature measurements and relative transmissivities of double-pass Thomson scattering diagnostics. The electron temperature (Te) is obtained from the ratio of signals from a double-pass scattering system, then relative transmissivities are calculated from the measured Te and intensity of the signals. How accurate the values are depends on the electron temperature (Te) and scattering angle (θ), and therefore the accuracy of the values was evaluated experimentally using the Large Helical Device (LHD) and the Tokyo spherical tokamak-2 (TST-2). Analyzing the data from the TST-2 indicates that a high Te and a large scattering angle (θ) yield accurate values. Indeed, the errors for scattering angle θ = 135° are approximately half of those for θ = 115°. The method of determining the Te in a wide Te range spanning over two orders of magnitude (0.01-1.5 keV) was validated using the experimental results of the LHD and TST-2. A simple method to provide relative transmissivities, which include inputs from collection optics, vacuum window, optical fibers, and polychromators, is also presented. The relative errors were less than approximately 10%. Numerical simulations also indicate that the Te measurements are valid under harsh radiation conditions. This method to obtain Te can be considered for the design of Thomson scattering systems where there is high-performance plasma that generates harsh radiation environments.

  18. Research and Analysis on the Localization of a 3-D Single Source in Lossy Medium Using Uniform Circular Array

    PubMed Central

    Xue, Bing; Qu, Xiaodong; Fang, Guangyou; Ji, Yicai

    2017-01-01

    In this paper, the methods and analysis for estimating the location of a three-dimensional (3-D) single source buried in a lossy medium are presented using a uniform circular array (UCA). The mathematical model of the signal in the lossy medium is proposed. Using information in the covariance matrix obtained from the sensors’ outputs, equations of the source location (azimuth angle, elevation angle, and range) are obtained. Then, the phase and amplitude of the covariance matrix function are used to process the source localization in the lossy medium. By analyzing the characteristics of the proposed methods and the multiple signal classification (MUSIC) method, the computational complexity and the valid scope of these methods are given. From the results, whether the loss is known or not, the best method can be chosen for the problem at hand (localization in a lossless or a lossy medium). PMID:28574467

  19. Real-time Tracking of DNA Fragment Separation by Smartphone.

    PubMed

    Tao, Chunxian; Yang, Bo; Li, Zhenqing; Zhang, Dawei; Yamaguchi, Yoshinori

    2017-06-01

    Slab gel electrophoresis (SGE) is the most common method for the separation of DNA fragments; thus, it is broadly applied in biology and other fields. However, the traditional SGE protocol is quite tedious, and the experiment takes a long time. Moreover, the chemical consumption in SGE experiments is very high. This work proposes a simple method for the separation of DNA fragments based on an SGE chip. The chip is made by an engraving machine. Two plastic sheets are used for the excitation and emission wavelengths of the optical signal. The fluorescence signal of the DNA bands is collected by a smartphone. To validate this method, 50, 100, and 1,000 bp DNA ladders were separated. The results demonstrate that a DNA ladder smaller than 5,000 bp can be resolved within 12 min and with high resolution when using this method, indicating that it is an ideal substitute for the traditional SGE method.

  20. Robust Foot Clearance Estimation Based on the Integration of Foot-Mounted IMU Acceleration Data

    PubMed Central

    Benoussaad, Mourad; Sijobert, Benoît; Mombaur, Katja; Azevedo Coste, Christine

    2015-01-01

    This paper introduces a method for the robust estimation of foot clearance during walking, using a single inertial measurement unit (IMU) placed on the subject’s foot. The proposed solution is based on double integration and drift cancellation of foot acceleration signals. The method is insensitive to misalignment of IMU axes with respect to foot axes. Details are provided regarding calibration and signal processing procedures. Experimental validation was performed on 10 healthy subjects under three walking conditions: normal, fast and with obstacles. Foot clearance estimation results were compared to measurements from an optical motion capture system. The mean error between them is significantly less than 15% under the various walking conditions. PMID:26703622
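
    The double-integration-with-drift-cancellation idea can be sketched for a single stride as follows; the zero-velocity and zero-height assumptions at the stride boundaries and the omission of IMU-to-foot alignment are simplifications relative to the full method.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

def foot_clearance(vertical_acc, fs):
    """Sketch of foot clearance from vertical acceleration over one stride.

    Assumes the stride starts and ends in a foot-flat phase (zero velocity and
    zero height), which allows a simple linear drift cancellation after each
    integration.
    """
    t = np.arange(vertical_acc.size) / fs
    vel = cumulative_trapezoid(vertical_acc, t, initial=0.0)
    vel -= t / t[-1] * vel[-1]                  # force zero velocity at stride end
    pos = cumulative_trapezoid(vel, t, initial=0.0)
    pos -= t / t[-1] * pos[-1]                  # force zero height at stride end
    return pos.max()                            # maximum clearance during the stride


# toy stride: smooth ~5 cm lift over 1 s, differentiated twice to get acceleration
fs = 200.0
t = np.arange(0, 1, 1 / fs)
height = 0.05 * np.sin(np.pi * t) ** 2
acc = np.gradient(np.gradient(height, 1 / fs), 1 / fs)
print(f"estimated clearance: {foot_clearance(acc, fs):.3f} m")   # expected ~0.05
```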

  1. Light field reconstruction robust to signal dependent noise

    NASA Astrophysics Data System (ADS)

    Ren, Kun; Bian, Liheng; Suo, Jinli; Dai, Qionghai

    2014-11-01

    Capturing four-dimensional light field data sequentially using a coded aperture camera is an effective approach but suffers from a low signal-to-noise ratio. Although multiplexing can help raise the acquisition quality, noise is still a big issue, especially for fast acquisition. To address this problem, this paper proposes a noise-robust light field reconstruction method. First, a scene-dependent noise model is studied and incorporated into the light field reconstruction framework. Then, we derive an optimization algorithm for the final reconstruction. We build a prototype by hacking an off-the-shelf camera for data capture and prove the concept. The effectiveness of this method is validated with experiments on the real captured data.

  2. A power saving protocol for impedance spectroscopy

    NASA Astrophysics Data System (ADS)

    Bîrlea, Nicolae Marius

    2017-12-01

    Because power saving is a main concern for wearable devices, we present here a transient method with a low power demand for impedance spectroscopy of the skin, although the idea is also valid for other test materials. The signal used is an electrical pulse (the ON period) followed by a pause (the OFF period) during which the electrodes do not draw current from the power supply. The method has the advantage of measuring the frequency characteristics of the impedance at once and is well suited to time-varying bioimpedance. In addition, this kind of measurement creates a more direct and explicit relationship between the lumped elements of the electrical model and the measured signal.

  3. EUROmediCAT signal detection: an evaluation of selected congenital anomaly‐medication associations

    PubMed Central

    Given, Joanne E.; Loane, Maria; Luteijn, Johannes M.; Morris, Joan K.; de Jong van den Berg, Lolkje T.W.; Garne, Ester; Addor, Marie‐Claude; Barisic, Ingeborg; de Walle, Hermien; Gatt, Miriam; Klungsoyr, Kari; Khoshnood, Babak; Latos‐Bielenska, Anna; Nelen, Vera; Neville, Amanda J.; O'Mahony, Mary; Pierini, Anna; Tucker, David; Wiesel, Awi

    2016-01-01

    Aims To evaluate congenital anomaly (CA)‐medication exposure associations produced by the new EUROmediCAT signal detection system and determine which require further investigation. Methods Data from 15 EUROCAT registries (1995–2011) with medication exposures at the chemical substance (5th level of Anatomic Therapeutic Chemical classification) and chemical subgroup (4th level) were analysed using a 50% false detection rate. After excluding antiepileptics, antidiabetics, antiasthmatics and SSRIs/psycholeptics already under investigation, 27 associations were evaluated. If evidence for a signal persisted after data validation, a literature review was conducted for prior evidence of human teratogenicity. Results Thirteen out of 27 CA‐medication exposure signals, based on 389 exposed cases, passed data validation. There was some prior evidence in the literature to support six signals (gastroschisis and levonorgestrel/ethinylestradiol (OR 4.10, 95% CI 1.70–8.53); congenital heart disease/pulmonary valve stenosis and nucleoside/tide reverse transcriptase inhibitors (OR 5.01, 95% CI 1.99–14.20/OR 28.20, 95% CI 4.63–122.24); complete absence of a limb and pregnen (4) derivatives (OR 6.60, 95% CI 1.70–22.93); hypospadias and pregnadien derivatives (OR 1.40, 95% CI 1.10–1.76); hypospadias and synthetic ovulation stimulants (OR 1.89, 95% CI 1.28–2.70)). Antipropulsives produced a signal for syndactyly while the literature revealed a signal for hypospadias. There was no prior evidence to support the remaining six signals involving the ordinary salt combinations, propulsives, bulk‐forming laxatives, hydrazinophthalazine derivatives, gonadotropin releasing hormone analogues and selective serotonin agonists. Conclusion Signals which strengthened prior evidence should be prioritized for further investigation, and independent evidence sought to confirm the remaining signals. Some chance associations are expected and confounding by indication is possible. PMID:27028286

  4. Permutation Entropy and Signal Energy Increase the Accuracy of Neuropathic Change Detection in Needle EMG

    PubMed Central

    2018-01-01

    Background and Objective. Needle electromyography can be used to detect the number of changes and morphological changes in motor unit potentials of patients with axonal neuropathy. General mathematical methods of pattern recognition and signal analysis were applied to recognize neuropathic changes. This study validates the possibility of extending and refining turns-amplitude analysis using permutation entropy and signal energy. Methods. In this study, we examined needle electromyography in 40 neuropathic individuals and 40 controls. The number of turns, amplitude between turns, signal energy, and "permutation entropy" were used as features for support vector machine classification. Results. The obtained results proved the superior classification performance of the combinations of all of the above-mentioned features compared to the combinations of fewer features. Of the tested feature combinations, peak-ratio analysis had the lowest accuracy. Conclusion. The combination of permutation entropy with signal energy, number of turns, and mean amplitude in SVM classification can be used to refine the diagnosis of polyneuropathies examined by needle electromyography. PMID:29606959
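
    A common (Bandt-Pompe style) formulation of permutation entropy is sketched below; the order and delay are analysis choices and not necessarily those used in this study.

```python
import math
from itertools import permutations

import numpy as np

def permutation_entropy(signal, order=3, delay=1):
    """Normalized permutation entropy of a 1-D signal."""
    patterns = {p: 0 for p in permutations(range(order))}
    n = len(signal) - (order - 1) * delay
    for i in range(n):
        window = signal[i:i + order * delay:delay]     # embedding vector
        patterns[tuple(np.argsort(window))] += 1       # count its ordinal pattern
    probs = np.array([c for c in patterns.values() if c > 0], dtype=float) / n
    # Shannon entropy of the pattern distribution, normalized to [0, 1].
    return float(-(probs * np.log2(probs)).sum() / math.log2(math.factorial(order)))


rng = np.random.default_rng(6)
noise = rng.normal(size=2000)                          # irregular signal -> entropy near 1
tone = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 2000)) # regular signal -> lower entropy
print(permutation_entropy(noise), permutation_entropy(tone))
```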

  5. Truncated RAP-MUSIC (TRAP-MUSIC) for MEG and EEG source localization.

    PubMed

    Mäkelä, Niko; Stenroos, Matti; Sarvas, Jukka; Ilmoniemi, Risto J

    2018-02-15

    Electrically active brain regions can be located by applying MUltiple SIgnal Classification (MUSIC) to magneto- or electroencephalographic (MEG; EEG) data. We introduce a new MUSIC method, called truncated recursively-applied-and-projected MUSIC (TRAP-MUSIC). It corrects a hidden deficiency of the conventional RAP-MUSIC algorithm, which prevents accurate estimation of the true number of brain-signal sources. The correction is done by applying a sequential dimension reduction to the signal-subspace projection. We show that TRAP-MUSIC significantly improves the performance of MUSIC-type localization; in particular, it successfully and robustly locates active brain regions and estimates their number. We compare TRAP-MUSIC and RAP-MUSIC in simulations with varying key parameters, e.g., signal-to-noise ratio, correlation between source time-courses, and initial estimate for the dimension of the signal space. In addition, we validate TRAP-MUSIC with measured MEG data. We suggest that with the proposed TRAP-MUSIC method, MUSIC-type localization could become more reliable and suitable for various online and offline MEG and EEG applications. Copyright © 2017 Elsevier Inc. All rights reserved.

  6. A Protein Turnover Signaling Motif Controls the Stimulus-Sensitivity of Stress Response Pathways

    PubMed Central

    Loriaux, Paul Michael; Hoffmann, Alexander

    2013-01-01

    Stimulus-induced perturbations from the steady state are a hallmark of signal transduction. In some signaling modules, the steady state is characterized by rapid synthesis and degradation of signaling proteins. Conspicuous among these are the p53 tumor suppressor, its negative regulator Mdm2, and the negative feedback regulator of NFκB, IκBα. We investigated the physiological importance of this turnover, or flux, using a computational method that allows flux to be systematically altered independently of the steady state protein abundances. Applying our method to a prototypical signaling module, we show that flux can precisely control the dynamic response to perturbation. Next, we applied our method to experimentally validated models of p53 and NFκB signaling. We find that high p53 flux is required for oscillations in response to a saturating dose of ionizing radiation (IR). In contrast, high flux of Mdm2 is not required for oscillations but preserves p53 sensitivity to sub-saturating doses of IR. In the NFκB system, degradation of NFκB-bound IκB by the IκB kinase (IKK) is required for activation in response to TNF, while high IKK-independent degradation prevents spurious activation in response to metabolic stress or low doses of TNF. Our work identifies flux pairs with opposing functional effects as a signaling motif that controls the stimulus-sensitivity of the p53 and NFκB stress-response pathways, and may constitute a general design principle in signaling pathways. PMID:23468615

  7. Development and validation of a liquid chromatography-tandem mass spectrometry method for the determination of pyridostigmine bromide from guinea pig plasma.

    PubMed

    Needham, Shane R; Ye, Binying; Smith, J Richard; Korte, William D

    2003-11-05

    An HPLC/MS/MS method was validated for the low-level analysis of pyridostigmine bromide (PB) from guinea pig plasma. An advantage of this strong-cation exchange HPLC/MS/MS method was the enhancement of the ESI-MS signal by providing good retention and good peak shape of PB with a mobile phase of 70% acetonitrile. In addition, the use of 70% acetonitrile in the mobile phase allowed the direct injection of the supernatant from the protein-precipitated sample extract. The assay was linear over the range of 0.1 to 50 ng/ml using only 25 microl of sample. The precision and accuracy of the assay were better than 9.1 and 113%, respectively.

  8. Health diagnosis of arch bridge suspender by acoustic emission technique

    NASA Astrophysics Data System (ADS)

    Li, Dongsheng; Ou, Jinping

    2007-01-01

    Conventional non-destructive methods cannot dynamically monitor the suspenders' damage levels and types, so the acoustic emission (AE) technique is proposed to monitor their activity. Valid signals are identified from the relationship between rise time and duration. Ambient noise is eliminated by using a floating threshold value and by placing a guard sensor. The damage levels of the cement mortar and steel strand are analyzed by the AE parameter method, and damage types are judged by a waveform analysis technique. Based on these methods, all the suspenders of the Sichuan Ebian Dadu river arch bridge have been monitored using AE techniques. The monitoring results show that AE signal amplitude, energy, and counts can visually display the suspenders' damage levels, while differences in waveform and frequency range indicate different damage types. The testing results coincide well with the actual condition of the suspenders.

  9. Integrated Enrichment Analysis of Variants and Pathways in Genome-Wide Association Studies Indicates Central Role for IL-2 Signaling Genes in Type 1 Diabetes, and Cytokine Signaling Genes in Crohn's Disease

    PubMed Central

    Carbonetto, Peter; Stephens, Matthew

    2013-01-01

    Pathway analyses of genome-wide association studies aggregate information over sets of related genes, such as genes in common pathways, to identify gene sets that are enriched for variants associated with disease. We develop a model-based approach to pathway analysis, and apply this approach to data from the Wellcome Trust Case Control Consortium (WTCCC) studies. Our method offers several benefits over existing approaches. First, our method not only interrogates pathways for enrichment of disease associations, but also estimates the level of enrichment, which yields a coherent way to promote variants in enriched pathways, enhancing discovery of genes underlying disease. Second, our approach allows for multiple enriched pathways, a feature that leads to novel findings in two diseases where the major histocompatibility complex (MHC) is a major determinant of disease susceptibility. Third, by modeling disease as the combined effect of multiple markers, our method automatically accounts for linkage disequilibrium among variants. Interrogation of pathways from eight pathway databases yields strong support for enriched pathways, indicating links between Crohn's disease (CD) and cytokine-driven networks that modulate immune responses; between rheumatoid arthritis (RA) and “Measles” pathway genes involved in immune responses triggered by measles infection; and between type 1 diabetes (T1D) and IL2-mediated signaling genes. Prioritizing variants in these enriched pathways yields many additional putative disease associations compared to analyses without enrichment. For CD and RA, 7 of 8 additional non-MHC associations are corroborated by other studies, providing validation for our approach. For T1D, prioritization of IL-2 signaling genes yields strong evidence for 7 additional non-MHC candidate disease loci, as well as suggestive evidence for several more. Of the 7 strongest associations, 4 are validated by other studies, and 3 (near IL-2 signaling genes RAF1, MAPK14, and FYN) constitute novel putative T1D loci for further study. PMID:24098138

  10. Simplified Model to Predict Deflection and Natural Frequency of Steel Pole Structures

    NASA Astrophysics Data System (ADS)

    Balagopal, R.; Prasad Rao, N.; Rokade, R. P.

    2018-04-01

    Steel pole structures are a suitable alternative to transmission line towers, given the difficulty of finding land for new rights of way to install lattice towers. Steel poles have a tapered cross section and are generally used for communication, power transmission and lighting purposes. Determining the deflection of a steel pole is important for assessing whether it meets its functional requirements, since excessive deflection can cause signal attenuation and short-circuiting problems in communication/transmission poles. In this paper, a simplified method is proposed to determine both primary and secondary deflection based on the dummy unit load/moment method. The deflection predicted by the proposed method is validated against full-scale experimental investigations conducted on 8 m and 30 m high lighting masts and on 132 and 400 kV transmission poles, and is found to be in close agreement with the measurements. Determination of the natural frequency is an important criterion for examining the dynamic sensitivity of the pole. A simplified semi-empirical method using the static deflection from the proposed method is formulated to determine the natural frequency. The natural frequency predicted by the proposed method is validated against FE analysis results, and the predictions are further validated against experimental results available in the literature.
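
    The core of the dummy unit load method — the tip deflection as the integral of the real and unit-load bending moments divided by the flexural rigidity — can be sketched numerically for a hypothetical tapered tubular pole. The geometry, load and material values below are illustrative and are not taken from the paper.

    ```python
    import numpy as np

    # Dummy-unit-load sketch: tip deflection of a tapered tubular cantilever
    # under a lateral tip load P, delta = integral of M(x)*m(x) / (E*I(x)) dx,
    # where m(x) is the bending moment due to a unit load at the tip.
    # All dimensions below are illustrative, not from the paper.

    E = 2.0e11                    # Pa, steel
    H = 30.0                      # m, pole height
    P = 1.0e3                     # N, lateral load at the tip
    d_base, d_tip = 0.60, 0.25    # m, outer diameters (linear taper)
    t_wall = 0.006                # m, wall thickness

    x = np.linspace(0.0, H, 2001)             # measured from the base
    d_out = d_base + (d_tip - d_base) * x / H
    d_in = d_out - 2.0 * t_wall
    I = np.pi * (d_out**4 - d_in**4) / 64.0   # second moment of area

    M = P * (H - x)               # bending moment from the real tip load
    m = 1.0 * (H - x)             # bending moment from a unit tip load

    delta = np.trapz(M * m / (E * I), x)
    print(f"estimated tip deflection: {delta * 1000:.1f} mm")
    ```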

  11. Multifractal detrended cross-correlation analysis for two nonstationary signals.

    PubMed

    Zhou, Wei-Xing

    2008-06-01

    We propose a method called multifractal detrended cross-correlation analysis to investigate the multifractal behaviors in the power-law cross-correlations between two time series or higher-dimensional quantities recorded simultaneously, which can be applied to diverse complex systems such as turbulence, finance, ecology, physiology, geophysics, and so on. The method is validated with cross-correlated one- and two-dimensional binomial measures and multifractal random walks. As an example, we illustrate the method by analyzing two financial time series.
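
    A minimal sketch of the fluctuation-function stage of multifractal detrended cross-correlation analysis is given below, assuming non-overlapping windows taken from the start of the profiles only (implementations usually also scan from the end); the toy input is a pair of correlated noise series rather than the binomial measures used for validation in the paper.

    ```python
    import numpy as np

    # Minimal MF-DCCA sketch: profiles, windowed polynomial detrending,
    # detrended covariance, and the q-th order fluctuation function F_q(s).

    def mfdcca_fq(x, y, scales, q_values, poly_order=1):
        x = np.asarray(x, float); y = np.asarray(y, float)
        X, Y = np.cumsum(x - x.mean()), np.cumsum(y - y.mean())      # profiles
        Fq = np.zeros((len(q_values), len(scales)))
        for j, s in enumerate(scales):
            f2 = []
            for v in range(len(X) // s):
                seg = slice(v * s, (v + 1) * s)
                t = np.arange(s)
                px = np.polyval(np.polyfit(t, X[seg], poly_order), t)
                py = np.polyval(np.polyfit(t, Y[seg], poly_order), t)
                f2.append(np.mean((X[seg] - px) * (Y[seg] - py)))
            f2 = np.abs(np.array(f2))
            for i, q in enumerate(q_values):
                Fq[i, j] = np.mean(f2 ** (q / 2.0)) ** (1.0 / q)
        return Fq

    rng = np.random.default_rng(0)
    eps = rng.standard_normal((2, 20000))
    x, y = eps[0], 0.7 * eps[0] + 0.3 * eps[1]                       # correlated noise
    scales = np.unique(np.logspace(1.2, 3.2, 12).astype(int))
    q_values = [1, 2, 4]
    for q, row in zip(q_values, mfdcca_fq(x, y, scales, q_values)):
        h = np.polyfit(np.log(scales), np.log(row), 1)[0]
        print(f"q = {q}: cross-correlation scaling exponent h_xy(q) ≈ {h:.2f}")
    ```

    For this monofractal toy the exponents should cluster near 0.5; a systematic spread of h_xy(q) with q is what would indicate multifractal cross-correlations.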

  12. MUSIC imaging method for electromagnetic inspection of composite multi-layers

    NASA Astrophysics Data System (ADS)

    Rodeghiero, Giacomo; Ding, Ping-Ping; Zhong, Yu; Lambert, Marc; Lesselier, Dominique

    2015-03-01

    A first-order asymptotic formulation of the electric field scattered by a small inclusion (with respect to the wavelength in dielectric regime or to the skin depth in conductive regime) embedded in composite material is given. It is validated by comparison with results obtained using a Method of Moments (MoM). A non-iterative MUltiple SIgnal Classification (MUSIC) imaging method is utilized in the same configuration to locate the position of small defects. The effectiveness of the imaging algorithm is illustrated through some numerical examples.
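
    The MUSIC step itself — building a pseudospectrum from the noise subspace of the multistatic response matrix — can be illustrated in a much simpler setting than the layered composite of the paper: a homogeneous 2-D free-space model with point scatterers and a simplified Green's function. Everything below (geometry, wavelength, Green's function) is an illustrative assumption.

    ```python
    import numpy as np

    # MUSIC sketch in homogeneous 2-D free space: point scatterers, co-located
    # transmit/receive antennas, a Born-approximation multistatic matrix K,
    # and a pseudospectrum built from the noise subspace of K.

    k0 = 2 * np.pi / 0.1                                  # wavenumber, 0.1 m wavelength
    rx = np.stack([np.linspace(-0.5, 0.5, 21), np.zeros(21)], axis=1)
    targets = np.array([[0.12, 0.35], [-0.20, 0.55]])

    def green(points, r):
        d = np.linalg.norm(points - r, axis=1)
        return np.exp(1j * k0 * d) / np.sqrt(d)           # simplified 2-D Green's function

    G = np.stack([green(rx, t) for t in targets], axis=1)  # (n_rx, n_targets)
    K = G @ G.T                                            # multistatic response matrix
    K += 1e-6 * (np.random.randn(*K.shape) + 1j * np.random.randn(*K.shape))

    U, s, _ = np.linalg.svd(K)
    noise = U[:, len(targets):]                            # noise subspace

    xs, ys = np.linspace(-0.6, 0.6, 121), np.linspace(0.1, 0.8, 71)
    pseudo = np.zeros((len(ys), len(xs)))
    for iy, yv in enumerate(ys):
        for ix, xv in enumerate(xs):
            g = green(rx, np.array([xv, yv]))
            g = g / np.linalg.norm(g)
            pseudo[iy, ix] = 1.0 / np.linalg.norm(noise.conj().T @ g) ** 2

    iy, ix = np.unravel_index(np.argmax(pseudo), pseudo.shape)
    print("strongest pseudospectrum peak near:", xs[ix], ys[iy])
    ```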

  13. Neural activity in the reward-related brain regions predicts implicit self-esteem: A novel validity test of psychological measures using neuroimaging.

    PubMed

    Izuma, Keise; Kennedy, Kate; Fitzjohn, Alexander; Sedikides, Constantine; Shibata, Kazuhisa

    2018-03-01

    Self-esteem, arguably the most important attitude an individual possesses, has been a premier research topic in psychology for more than a century. Following a surge of interest in implicit attitude measures in the 90s, researchers have tried to assess self-esteem implicitly to circumvent the influence of biases inherent in explicit measures. However, the validity of implicit self-esteem measures remains elusive. Critical tests are often inconclusive, as the validity of such measures is examined against the backdrop of imperfect behavioral measures. To overcome this serious limitation, we tested the neural validity of the most widely used implicit self-esteem measure, the implicit association test (IAT). Given the conceptualization of self-esteem as attitude toward the self, and neuroscience findings that the reward-related brain regions represent an individual's attitude or preference for an object when viewing its image, individual differences in implicit self-esteem should be associated with neural signals in the reward-related regions during passive viewing of the self-face (the most obvious representation of the self). Using multi-voxel pattern analysis (MVPA) on functional MRI (fMRI) data, we demonstrate that the neural signals in the reward-related regions were robustly associated with implicit (but not explicit) self-esteem, thus providing unique evidence for the neural validity of the self-esteem IAT. In addition, both implicit and explicit self-esteem were related, although differently, to neural signals in regions involved in self-processing. Our finding highlights the utility of neuroscience methods in addressing fundamental psychological questions and providing unique insights into important psychological constructs. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  14. [Determination of joint contact area using MRI].

    PubMed

    Yoshida, Hidenori; Kobayashi, Koichi; Sakamoto, Makoto; Tanabe, Yuji

    2009-10-20

    Elevated contact stress on the articular joints has been hypothesized to contribute to articular cartilage wear and joint pain. However, given the limitations of using contact stress and areas from human cadaver specimens to estimate articular joint stress, there is a need for an in vivo method to obtain such data. Magnetic resonance imaging (MRI) has been shown to be a valid method of quantifying the human joint contact area, indicating the potential for in vivo assessment. The purpose of this study was to describe a method of quantifying the tibiofemoral joint contact area using MRI. The validity of this technique was established in porcine cadaver specimens by comparing the contact area obtained from MRI with the contact area obtained using pressure-sensitive film (PSF). In particular, we assessed the actual condition of contact by using the ratio of signal intensity of MR images of the cartilage surfaces. Two fresh porcine cadaver knees were used. A custom loading apparatus was designed to apply a compressive load to the tibiofemoral joint. We measured the contact area using the MRI and PSF methods. When the ratio of signal intensity of the cartilage surface was 0.9, the error in the contact area between the MR image and PSF was about 6%. These results suggest that this MRI method may be a valuable tool for quantifying the joint contact area in vivo.

  15. Dynamic tracking down-conversion signal processing method based on reference signal for grating heterodyne interferometer

    NASA Astrophysics Data System (ADS)

    Wang, Guochao; Yan, Shuhua; Zhou, Weihong; Gu, Chenhui

    2012-08-01

    Traditional grating-based displacement measurement systems, which rely purely on fringe intensity for fringe counting and subdivision, place rigid demands on signal quality and measurement conditions, so it is not easy for them to achieve nanometer-precision measurement. Displacement measurement with a dual-wavelength, single-grating design takes advantage of single-grating diffraction theory and heterodyne interference theory, resolving quite well the contradiction between large range and high precision in grating displacement measurement. To obtain nanometer resolution and nanometer precision, high-power subdivision of the interference fringes must be realized accurately. A dynamic tracking down-conversion signal processing method based on the reference signal is proposed. Accordingly, a digital phase measurement module realizing high-power subdivision on a field programmable gate array (FPGA) was designed, as well as a dynamic tracking down-conversion module using a phase-locked loop (PLL). Experiments validated that the carrier signal after down-conversion remains close to 100 kHz, and that the phase-measurement resolution and precision reach 0.05 and 0.2 deg, respectively. The displacement resolution and displacement precision corresponding to these phase results are 0.139 and 0.556 nm, respectively.

  16. Dynamic measurement of speed of sound in n-Heptane by ultrasonics during fuel injections.

    PubMed

    Minnetti, Elisa; Pandarese, Giuseppe; Evangelisti, Piersavio; Verdugo, Francisco Rodriguez; Ungaro, Carmine; Bastari, Alessandro; Paone, Nicola

    2017-11-01

    The paper presents a technique to measure the speed of sound in fuels based on pulse-echo ultrasound. The method is applied inside the test chamber of a Zeuch-type instrument used for indirect measurement of the injection rate (Mexus). The paper outlines the pulse-echo method, considering probe installation, ultrasound beam propagation inside the test chamber and typical signals obtained, as well as different processing algorithms. The method is validated in static conditions by comparing the experimental results to the NIST database for both water and n-Heptane. The ultrasonic system is synchronized to the injector so that time-resolved samples of the speed of sound can be successfully acquired during a series of injections. Results at different operating conditions in n-Heptane are shown. An uncertainty analysis supports the interpretation of the results and allows the method to be validated. Experimental results show that the speed of sound variation during an injection event is less than 1%, so the Mexus model assumption of considering it constant during the injection is valid. Copyright © 2017 Elsevier B.V. All rights reserved.
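
    The underlying pulse-echo relation is c = 2L/Δt, with the round-trip delay Δt typically estimated from the cross-correlation between the transmitted pulse and its echo. The sketch below uses an illustrative path length, carrier frequency and noise level, not the Mexus chamber parameters.

    ```python
    import numpy as np
    from scipy.signal import correlate

    # Pulse-echo sketch: c = 2*L / dt, with the round-trip delay dt taken
    # from the peak of the cross-correlation between pulse and received trace.

    fs = 50e6                      # sample rate, Hz
    L = 0.02                       # one-way path length, m
    c_true = 1130.0                # m/s, roughly n-Heptane at room temperature
    dt_true = 2 * L / c_true

    t = np.arange(0, 80e-6, 1 / fs)
    pulse = np.sin(2 * np.pi * 5e6 * t) * np.exp(-((t - 1e-6) / 0.4e-6) ** 2)
    echo = 0.3 * np.sin(2 * np.pi * 5e6 * (t - dt_true)) \
           * np.exp(-((t - dt_true - 1e-6) / 0.4e-6) ** 2)
    trace = pulse + echo + 0.01 * np.random.randn(t.size)

    xc = correlate(trace, pulse, mode="full")
    lags = np.arange(-t.size + 1, t.size)
    valid = lags > int(5e-6 * fs)            # skip the zero-lag self-correlation
    dt_est = lags[valid][np.argmax(xc[valid])] / fs
    print(f"estimated speed of sound: {2 * L / dt_est:.1f} m/s (true {c_true} m/s)")
    ```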

  17. Time Reversal Method for Pipe Inspection with Guided Wave

    NASA Astrophysics Data System (ADS)

    Deng, Fei; He, Cunfu; Wu, Bin

    2008-02-01

    The temporal-spatial focusing effect of the time reversal method on guided wave inspection in pipes is investigated. A steel pipe model with an outer diameter of 70 mm and a wall thickness of 3.5 mm is numerically built to analyse the reflection coefficient of the L(0,2) mode when the time reversal method is applied to the model. The calculated results show that a synthetic time reversal array method is effective in improving the signal-to-noise ratio of a guided wave inspection system. As the intercepting window is widened, more energy can be included in the re-emitted signal, which leads to a larger reflection coefficient of the L(0,2) mode. It is also shown that when a time-reversed signal is reapplied to the pipe model, a defect can be identified by analysing the motion of the time-reversed wave propagating along the model. Therefore, it is demonstrated that the time reversal method can be used to locate the circumferential position of a defect in a pipe. Finally, in an experiment corresponding to the pipe model, the results show that the above-mentioned method is valid for pipe inspection.

  18. Application of optimized multiscale mathematical morphology for bearing fault diagnosis

    NASA Astrophysics Data System (ADS)

    Gong, Tingkai; Yuan, Yanbin; Yuan, Xiaohui; Wu, Xiaotao

    2017-04-01

    In order to suppress noise effectively and extract the impulsive features in the vibration signals of faulty rolling element bearings, an optimized multiscale morphology (OMM) based on conventional multiscale morphology (CMM) and iterative morphology (IM) is presented in this paper. First, the operator used in the IM method must be non-idempotent; therefore, an optimized difference (ODIF) operator has been designed. Furthermore, in the iterative process the current operation is performed on the basis of the previous one, which means that if a larger scale is employed, more fault features are inhibited. Therefore, a unit scale is proposed as the structuring element (SE) scale in IM. According to the above definitions, the IM method is applied to the results obtained by CMM over different scales. The validity of the proposed method is first evaluated on a simulated signal. Subsequently, for an outer race fault, two vibration signals sampled by different accelerometers are analyzed by OMM and CMM, respectively; the same is done for an inner race fault. The results show that the optimized method is effective in diagnosing the two bearing faults. Compared with the CMM method, the OMM method can extract many more fault features under a strong noise background.
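
    As a point of reference for the CMM stage, the sketch below applies a flat structuring element of increasing length to a simulated impulsive bearing signal and averages the closing-opening difference over scales; the paper's ODIF operator and unit-scale iterative refinement are not reproduced here, and the fault frequency and noise level are illustrative.

    ```python
    import numpy as np
    from scipy.ndimage import grey_closing, grey_opening

    # Generic multiscale morphological filtering: average the closing-opening
    # difference over a range of flat structuring-element (SE) lengths, then
    # inspect the envelope spectrum for the fault frequency.

    fs = 20000.0
    t = np.arange(0, 0.5, 1 / fs)
    phase = (t * 87.0) % 1.0                               # ~87 Hz fault period
    impacts = 1.5 * np.exp(-phase * 40.0) * np.sin(2 * np.pi * 3000 * t)
    signal = impacts + 0.5 * np.sin(2 * np.pi * 50 * t) + 0.2 * np.random.randn(t.size)

    scales = range(2, 10)
    out = np.zeros_like(signal)
    for s in scales:
        size = 2 * s + 1                                   # flat SE length in samples
        out += (grey_closing(signal, size=size)
                - grey_opening(signal, size=size)) / len(scales)

    spectrum = np.abs(np.fft.rfft(out - out.mean()))
    freqs = np.fft.rfftfreq(out.size, 1 / fs)
    band = (freqs > 10) & (freqs < 500)                    # look for the fault line
    print("dominant envelope frequency:",
          round(float(freqs[band][np.argmax(spectrum[band])]), 1), "Hz")
    ```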

  19. Automated model selection in covariance estimation and spatial whitening of MEG and EEG signals.

    PubMed

    Engemann, Denis A; Gramfort, Alexandre

    2015-03-01

    Magnetoencephalography and electroencephalography (M/EEG) measure non-invasively the weak electromagnetic fields induced by post-synaptic neural currents. The estimation of the spatial covariance of the signals recorded on M/EEG sensors is a building block of modern data analysis pipelines. Such covariance estimates are used in brain-computer interfaces (BCI) systems, in nearly all source localization methods for spatial whitening as well as for data covariance estimation in beamformers. The rationale for such models is that the signals can be modeled by a zero mean Gaussian distribution. While maximizing the Gaussian likelihood seems natural, it leads to a covariance estimate known as empirical covariance (EC). It turns out that the EC is a poor estimate of the true covariance when the number of samples is small. To address this issue the estimation needs to be regularized. The most common approach downweights off-diagonal coefficients, while more advanced regularization methods are based on shrinkage techniques or generative models with low rank assumptions: probabilistic PCA (PPCA) and factor analysis (FA). Using cross-validation all of these models can be tuned and compared based on Gaussian likelihood computed on unseen data. We investigated these models on simulations, one electroencephalography (EEG) dataset as well as magnetoencephalography (MEG) datasets from the most common MEG systems. First, our results demonstrate that different models can be the best, depending on the number of samples, heterogeneity of sensor types and noise properties. Second, we show that the models tuned by cross-validation are superior to models with hand-selected regularization. Hence, we propose an automated solution to the often overlooked problem of covariance estimation of M/EEG signals. The relevance of the procedure is demonstrated here for spatial whitening and source localization of MEG signals. Copyright © 2015 Elsevier Inc. All rights reserved.
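
    The model-selection idea can be sketched with scikit-learn's covariance estimators as stand-ins for the larger family (PPCA, FA, shrinkage variants) compared in the paper: each candidate is scored by the Gaussian log-likelihood of held-out samples, and the winner depends on how few samples are available. The sensor count, sample size and simulated covariance below are illustrative.

    ```python
    import numpy as np
    from sklearn.covariance import EmpiricalCovariance, LedoitWolf, ShrunkCovariance
    from sklearn.model_selection import KFold

    # Cross-validated covariance model selection: score each estimator by the
    # Gaussian log-likelihood of held-out samples.

    rng = np.random.default_rng(0)
    n_sensors, n_samples = 60, 150                  # few samples: EC is poorly conditioned
    A = rng.standard_normal((n_sensors, 10))
    true_cov = A @ A.T + np.eye(n_sensors)
    X = rng.multivariate_normal(np.zeros(n_sensors), true_cov, size=n_samples)

    models = {
        "empirical": EmpiricalCovariance(),
        "shrunk(0.1)": ShrunkCovariance(shrinkage=0.1),
        "ledoit-wolf": LedoitWolf(),
    }

    cv = KFold(n_splits=5, shuffle=True, random_state=0)
    for name, model in models.items():
        scores = []
        for train, test in cv.split(X):
            model.fit(X[train])
            scores.append(model.score(X[test]))     # Gaussian log-likelihood on held-out data
        print(f"{name:12s} held-out log-likelihood: {np.mean(scores):10.2f}")
    ```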

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klein, Levente

    Interpreting sensor data requires knowledge about sensor placement and the surrounding environment. For a single sensor measurement it is easy to document the context by visual observation; however, for millions of sensors reporting data back to a server, the contextual information needs to be extracted automatically, either from data analysis or by leveraging complementary data sources. Data layers that overlap spatially or temporally with sensor locations can be used to extract the context and to validate the measurement. To minimize the amount of data transmitted through the internet while preserving the signal information content, two methods are explored: computation at the edge and compressed sensing. We validate the above methods on wind and chemical sensor data by (1) eliminating redundant measurements from wind sensors and (2) extracting the peak value of a chemical sensor measuring a methane plume. We present a general cloud-based framework to validate sensor data based on statistical and physical modeling and on contextual data extracted from geospatial data.

  1. On the accuracy of aerosol photoacoustic spectrometer calibrations using absorption by ozone

    NASA Astrophysics Data System (ADS)

    Davies, Nicholas W.; Cotterell, Michael I.; Fox, Cathryn; Szpek, Kate; Haywood, Jim M.; Langridge, Justin M.

    2018-04-01

    In recent years, photoacoustic spectroscopy has emerged as an invaluable tool for the accurate measurement of light absorption by atmospheric aerosol. Photoacoustic instruments require calibration, which can be achieved by measuring the photoacoustic signal generated by known quantities of gaseous ozone. Recent work has questioned the validity of this approach at short visible wavelengths (404 nm), indicating systematic calibration errors of the order of a factor of 2. We revisit this result and test the validity of the ozone calibration method using a suite of multipass photoacoustic cells operating at wavelengths 405, 514 and 658 nm. Using aerosolised nigrosin with mobility-selected diameters in the range 250-425 nm, we demonstrate excellent agreement between measured and modelled ensemble absorption cross sections at all wavelengths, thus demonstrating the validity of the ozone-based calibration method for aerosol photoacoustic spectroscopy at visible wavelengths.

  2. A SSVEP Stimuli Encoding Method Using Trinary Frequency-Shift Keying Encoded SSVEP (TFSK-SSVEP)

    PubMed Central

    Zhao, Xing; Zhao, Dechun; Wang, Xia; Hou, Xiaorong

    2017-01-01

    SSVEP is a BCI technology with the advantage of a high information transfer rate. However, due to its nature, the frequencies that can be used as stimuli are scarce. To solve this problem, a stimulus encoding method that encodes the SSVEP signal using frequency-shift keying (FSK) is developed. In this method, each stimulus is controlled by an FSK signal that contains three different frequencies representing "Bit 0," "Bit 1" and "Bit 2," respectively. Unlike common BFSK in digital communication, "Bit 0" and "Bit 1" compose the unique identifier of a stimulus in binary bit-stream form, while "Bit 2" indicates the end of a stimulus encoding. The EEG signal is acquired on channels Oz, O1, O2, Pz, P3 and P4 using an ADS1299 at a sample rate of 250 SPS. Before the original EEG signal is quadrature demodulated, it is detrended and then band-pass filtered using FFT-based FIR filtering to remove interference. Valid peaks of the processed signal are obtained by calculating its derivative and are converted into a bit stream using a window method. Theoretically, this coding method could implement at least 2n−1 stimuli (where n is the length of the bit command) while keeping the ITR the same. This method is suitable for implementing stimuli on a monitor, for cases where the frequencies and phases available to code stimuli are limited, and for implementing portable BCI devices that are not capable of performing complex calculations. PMID:28626393

  3. Comprehensive quantification of signal-to-noise ratio and g-factor for image-based and k-space-based parallel imaging reconstructions.

    PubMed

    Robson, Philip M; Grant, Aaron K; Madhuranthakam, Ananth J; Lattanzi, Riccardo; Sodickson, Daniel K; McKenzie, Charles A

    2008-10-01

    Parallel imaging reconstructions result in spatially varying noise amplification characterized by the g-factor, precluding conventional measurements of noise from the final image. A simple Monte Carlo based method is proposed for all linear image reconstruction algorithms, which allows measurement of signal-to-noise ratio and g-factor and is demonstrated for SENSE and GRAPPA reconstructions for accelerated acquisitions that have not previously been amenable to such assessment. Only a simple "prescan" measurement of noise amplitude and correlation in the phased-array receiver, and a single accelerated image acquisition are required, allowing robust assessment of signal-to-noise ratio and g-factor. The "pseudo multiple replica" method has been rigorously validated in phantoms and in vivo, showing excellent agreement with true multiple replica and analytical methods. This method is universally applicable to the parallel imaging reconstruction techniques used in clinical applications and will allow pixel-by-pixel image noise measurements for all parallel imaging strategies, allowing quantitative comparison between arbitrary k-space trajectories, image reconstruction, or noise conditioning techniques. (c) 2008 Wiley-Liss, Inc.
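
    The replica mechanics can be sketched with a deliberately trivial reconstruction (fully sampled k-space, root-sum-of-squares coil combination) standing in for SENSE or GRAPPA: synthetic noise drawn with the prescan receiver covariance is added to the acquired k-space many times, the reconstruction is rerun, and the pixel-wise standard deviation across replicas gives the noise map used for SNR (and, with an unaccelerated reference, the g-factor). All sizes and the coil model below are illustrative.

    ```python
    import numpy as np

    # Pseudo multiple replica sketch: add noise with the measured receiver
    # covariance to the acquired k-space, rerun the (linear) reconstruction
    # many times, and use the pixel-wise std across replicas as the noise map.

    rng = np.random.default_rng(1)
    n_coils, N = 4, 64
    phantom = np.zeros((N, N)); phantom[16:48, 24:40] = 1.0
    yy = np.arange(N)[:, None] * np.ones((1, N))
    coil_maps = np.stack([np.exp(-((yy - c) ** 2) / (2 * 40.0 ** 2)) for c in (0, 21, 42, 63)])
    noise_cov = 0.05 * (0.7 * np.eye(n_coils) + 0.3 * np.ones((n_coils, n_coils)))
    L = np.linalg.cholesky(noise_cov)

    def coil_noise():
        n = rng.standard_normal((n_coils, N * N)) + 1j * rng.standard_normal((n_coils, N * N))
        return (L @ n).reshape(n_coils, N, N)

    def recon(kspace):
        imgs = np.fft.ifft2(kspace, axes=(-2, -1))
        return np.sqrt(np.sum(np.abs(imgs) ** 2, axis=0))   # root-sum-of-squares combination

    measured = np.fft.fft2(coil_maps * phantom, axes=(-2, -1)) + coil_noise()

    replicas = np.stack([recon(measured + coil_noise()) for _ in range(100)])
    noise_map = replicas.std(axis=0)
    snr_map = recon(measured) / (noise_map + 1e-12)
    print("median SNR inside the object:", round(float(np.median(snr_map[phantom > 0])), 1))
    ```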

  4. Modified ADALINE algorithm for harmonic estimation and selective harmonic elimination in inverters

    NASA Astrophysics Data System (ADS)

    Vasumathi, B.; Moorthi, S.

    2011-11-01

    In digital signal processing, algorithms are very well developed for the estimation of harmonic components. In power electronic applications, an objective such as fast system response is of primary importance. An effective method for the estimation of instantaneous harmonic components, along with a conventional harmonic elimination technique, is presented in this article. The primary function is to eliminate undesirable higher harmonic components from the selected signal (current or voltage), and it requires only knowledge of the frequency of the component to be eliminated. A signal processing technique using a modified ADALINE algorithm is proposed for harmonic estimation. The proposed method remains effective as it converges to a minimum error and yields a finer estimation. A conventional control based on pulse width modulation for selective harmonic elimination is used to eliminate harmonic components after their estimation. This method can be applied to a wide range of equipment. The validity of the proposed method for estimating and eliminating voltage harmonics is demonstrated with a dc/ac inverter as a simulation example. The results are then compared with the existing ADALINE algorithm to illustrate its effectiveness.
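
    For context, the conventional ADALINE harmonic estimator uses sine/cosine regressors at the tracked harmonic frequencies and a Widrow-Hoff (LMS) weight update; the amplitude of each harmonic is then the norm of its weight pair. The sketch below implements that baseline on a synthetic distorted waveform — the paper's modification to the update rule is not reproduced, and the frequencies, amplitudes and learning rate are illustrative.

    ```python
    import numpy as np

    # Conventional ADALINE harmonic estimator (Widrow-Hoff LMS) on a
    # synthetic distorted waveform.

    fs, f0 = 10000.0, 50.0
    t = np.arange(0, 0.4, 1 / fs)
    harmonics = [1, 3, 5, 7]
    true_amps = {1: 10.0, 3: 2.0, 5: 1.0, 7: 0.5}
    signal = sum(a * np.sin(2 * np.pi * k * f0 * t) for k, a in true_amps.items())
    signal += 0.1 * np.random.randn(t.size)

    # regressor: [sin(k*w*t), cos(k*w*t)] for each tracked harmonic
    w = np.zeros(2 * len(harmonics))
    eta = 0.02
    for n, tn in enumerate(t):
        x = np.concatenate([[np.sin(2 * np.pi * k * f0 * tn),
                             np.cos(2 * np.pi * k * f0 * tn)] for k in harmonics])
        e = signal[n] - w @ x
        w += eta * e * x                       # LMS weight update

    for i, k in enumerate(harmonics):
        amp = np.hypot(w[2 * i], w[2 * i + 1])
        print(f"harmonic {k}: estimated amplitude {amp:.2f} (true {true_amps[k]})")
    ```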

  5. EXTRAPOLATION METHOD FOR MAXIMAL AND 24-H AVERAGE LTE TDD EXPOSURE ESTIMATION.

    PubMed

    Franci, D; Grillo, E; Pavoncello, S; Coltellacci, S; Buccella, C; Aureli, T

    2018-01-01

    The Long-Term Evolution (LTE) system represents the evolution of the Universal Mobile Telecommunication System technology. This technology introduces two duplex modes: Frequency Division Duplex and Time Division Duplex (TDD). Although LTE TDD has experienced limited expansion in European countries since the debut of LTE technology, renewed commercial interest in it has recently emerged. The development of extrapolation procedures optimised for TDD systems therefore becomes crucial, especially for the regulatory authorities. This article presents an extrapolation method for assessing exposure to LTE TDD sources, based on the detection of the Cell-Specific Reference Signal power level. The method introduces a βTDD parameter intended to quantify the fraction of the LTE TDD frame duration reserved for downlink transmission. The method has been validated by experimental measurements performed on signals generated by both a vector signal generator and a test Base Transceiver Station installed at the Linkem S.p.A facility in Rome. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  6. A method for environmental acoustic analysis improvement based on individual evaluation of common sources in urban areas.

    PubMed

    López-Pacheco, María G; Sánchez-Fernández, Luis P; Molina-Lozano, Herón

    2014-01-15

    Noise levels of common sources such as vehicles, whistles, sirens, car horns and crowd sounds are mixed in urban soundscapes. Nowadays, environmental acoustic analysis is performed on mixture signals recorded by monitoring systems. These mixed signals make individual analysis difficult, although such analysis is useful for taking actions to reduce and control environmental noise. This paper aims at separating the noise sources individually from recorded mixtures in order to evaluate the noise level of each estimated source. A method based on blind deconvolution and blind source separation in the wavelet domain is proposed. This approach provides a basis for improving the results obtained in the monitoring and analysis of common noise sources in urban areas. The method is validated through experiments based on knowledge of the predominant noise sources in urban soundscapes. Actual recordings of common noise sources are used to acquire mixture signals using a microphone array in semi-controlled environments. The developed method has demonstrated great performance improvements in the identification, analysis and evaluation of common urban sources. © 2013 Elsevier B.V. All rights reserved.

  7. Spatial sound field synthesis and upmixing based on the equivalent source method.

    PubMed

    Bai, Mingsian R; Hsu, Hoshen; Wen, Jheng-Ciang

    2014-01-01

    Given a scarce number of recorded signals, spatial sound field synthesis with an extended sweet spot is a challenging problem in acoustic array signal processing. To address the problem, a synthesis and upmixing approach inspired by the equivalent source method (ESM) is proposed. The synthesis procedure is based on the pressure signals recorded by a microphone array and requires no source model; the array geometry can also be arbitrary. Four upmixing strategies are adopted to enhance the resolution of the reproduced sound field when there are more channels of loudspeakers than microphones. Multi-channel inverse filtering with regularization is exploited to deal with the ill-posedness of the reconstruction process. The distance between the microphone and loudspeaker arrays is optimized to achieve the best synthesis quality. To validate the proposed system, numerical simulations and subjective listening experiments are performed. The results demonstrated that all upmixing methods improved the quality of the reproduced target sound field over the original reproduction. In particular, the underdetermined ESM interpolation method yielded the best spatial sound field synthesis in terms of reproduction error, timbral quality, and spatial quality.
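
    The numerical core shared by ESM-based synthesis schemes — solving a regularized inverse problem for equivalent-source strengths from microphone pressures, then re-radiating the field to other points — can be sketched with free-field monopoles and Tikhonov regularization. The geometry, frequency and regularization choice below are illustrative and are not the paper's configuration.

    ```python
    import numpy as np

    # ESM core step: solve p = G q for equivalent-source strengths q with
    # Tikhonov regularization, then re-radiate the field to validation points.

    rng = np.random.default_rng(0)
    k = 2 * np.pi * 1000.0 / 343.0                          # wavenumber at 1 kHz

    def greens(receivers, sources):
        d = np.linalg.norm(receivers[:, None, :] - sources[None, :, :], axis=2)
        return np.exp(-1j * k * d) / (4 * np.pi * d)

    mics = np.column_stack([rng.uniform(-0.5, 0.5, (16, 2)), np.zeros(16)])     # z = 0
    eq_src = np.column_stack([rng.uniform(-0.6, 0.6, (40, 2)), np.full(40, -0.3)])
    true_src = np.array([[0.10, 0.20, -0.50]])              # primary source behind the ESM plane

    p = greens(mics, true_src)[:, 0]                        # "measured" pressures
    p += 0.01 * (rng.standard_normal(16) + 1j * rng.standard_normal(16)) * np.abs(p).mean()

    G = greens(mics, eq_src)
    lam = 1e-3 * np.linalg.norm(G, 2) ** 2                  # Tikhonov parameter
    q = np.linalg.solve(G.conj().T @ G + lam * np.eye(G.shape[1]), G.conj().T @ p)

    val = np.column_stack([rng.uniform(-0.4, 0.4, (5, 2)), np.full(5, 0.1)])    # z = 0.1
    err = np.linalg.norm(greens(val, eq_src) @ q - greens(val, true_src)[:, 0]) \
          / np.linalg.norm(greens(val, true_src)[:, 0])
    print(f"relative field reproduction error at validation points: {err:.1%}")
    ```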

  8. A canonical correlation analysis based EMG classification algorithm for eliminating electrode shift effect.

    PubMed

    Zhe Fan; Zhong Wang; Guanglin Li; Ruomei Wang

    2016-08-01

    Motion classification systems based on surface electromyography (sEMG) pattern recognition have achieved good results under experimental conditions, but clinical implementation and practical application remain a challenge. Many factors contribute to the difficulty of clinical use of EMG-based dexterous control. The most obvious and important is noise in the EMG signal caused by electrode shift, muscle fatigue, motion artifacts, the inherent instability of the signal, and biological signals such as the electrocardiogram. In this paper, a novel method based on Canonical Correlation Analysis (CCA) was developed to eliminate the reduction in classification accuracy caused by electrode shift. The average classification accuracy of our method was above 95% for the healthy subjects. In the process, we validated the influence of electrode shift on motion classification accuracy and found a strong correlation (correlation coefficient > 0.9) between shifted-position data and normal-position data.
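
    A minimal sketch of the alignment idea — learn a CCA mapping between paired recordings at the normal and shifted electrode positions and classify shifted data in the shared canonical space — is given below with synthetic features and scikit-learn's CCA and LDA. The feature model, the linear "shift" mixing and the classifier are illustrative stand-ins for the authors' sEMG pipeline.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import CCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    # Sketch: align shifted-position features with normal-position features
    # via CCA, then classify in the shared canonical space.

    rng = np.random.default_rng(0)
    n_per_class, n_feat, n_classes = 80, 12, 4
    centers = rng.normal(0, 3, size=(n_classes, n_feat))
    labels = np.repeat(np.arange(n_classes), n_per_class)
    X_normal = centers[labels] + rng.normal(0, 1.0, size=(labels.size, n_feat))

    # electrode shift modeled as an unknown linear mixing of the channels
    mix = np.eye(n_feat) + 0.4 * rng.normal(size=(n_feat, n_feat))
    X_shift = X_normal @ mix.T + rng.normal(0, 0.3, size=X_normal.shape)

    cca = CCA(n_components=8).fit(X_shift, X_normal)
    Z_shift, Z_normal = cca.transform(X_shift, X_normal)

    clf = LinearDiscriminantAnalysis().fit(Z_normal, labels)
    print("accuracy on CCA-aligned shifted data:", round(clf.score(Z_shift, labels), 3))

    clf_raw = LinearDiscriminantAnalysis().fit(X_normal, labels)
    print("accuracy on raw shifted data:       ", round(clf_raw.score(X_shift, labels), 3))
    ```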

  9. A Bayesian Active Learning Experimental Design for Inferring Signaling Networks.

    PubMed

    Ness, Robert O; Sachs, Karen; Mallick, Parag; Vitek, Olga

    2018-06-21

    Machine learning methods for learning network structure are applied to quantitative proteomics experiments to reverse-engineer intracellular signal transduction networks. They provide insight into the rewiring of signaling within the context of a disease or a phenotype. To learn the causal patterns of influence between proteins in the network, the methods require experiments that include targeted interventions that fix the activity of specific proteins. However, the interventions are costly and add experimental complexity. We describe an active learning strategy for selecting optimal interventions. Our approach takes as inputs pathway databases and historic data sets, expresses them in the form of prior probability distributions on network structures, and selects interventions that maximize their expected contribution to structure learning. Evaluations on simulated and real data show that the strategy reduces the detection error of validated edges as compared with an unguided choice of interventions and avoids redundant interventions, thereby increasing the effectiveness of the experiment.

  10. Precession missile feature extraction using sparse component analysis of radar measurements

    NASA Astrophysics Data System (ADS)

    Liu, Lihua; Du, Xiaoyong; Ghogho, Mounir; Hu, Weidong; McLernon, Des

    2012-12-01

    According to the working mode of the ballistic missile warning radar (BMWR), the radar return from the BMWR is usually sparse. To recognize and identify the warhead, it is necessary to extract the precession frequency and the locations of the scattering centers of the missile. This article first analyzes the radar signal model of the precessing conical missile during flight and develops the sparse dictionary which is parameterized by the unknown precession frequency. Based on the sparse dictionary, the sparse signal model is then established. A nonlinear least square estimation is first applied to roughly extract the precession frequency in the sparse dictionary. Based on the time segmented radar signal, a sparse component analysis method using the orthogonal matching pursuit algorithm is then proposed to jointly estimate the precession frequency and the scattering centers of the missile. Simulation results illustrate the validity of the proposed method.
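
    A much-simplified 1-D analogue of the dictionary idea is sketched below: each candidate precession frequency defines a dictionary of its harmonics, a few orthogonal-matching-pursuit iterations give a sparse fit, and the candidate with the smallest residual is kept. The toy sinusoidal "return", frequency grid and atom count are illustrative and ignore the scattering-center geometry handled in the paper.

    ```python
    import numpy as np

    # Sparse component analysis over a frequency-parameterized dictionary:
    # grid-search the precession frequency, fit each candidate with OMP,
    # and keep the candidate with the smallest residual.

    def omp(D, y, n_atoms):
        residual, support = y.copy(), []
        for _ in range(n_atoms):
            support.append(int(np.argmax(np.abs(D.T @ residual))))
            coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
            residual = y - D[:, support] @ coef
        return support, residual

    fs = 200.0
    t = np.arange(0, 4.0, 1 / fs)
    f_prec = 2.3                                             # true precession frequency
    y = (1.0 * np.cos(2 * np.pi * f_prec * t + 0.4)
         + 0.6 * np.cos(2 * np.pi * 3 * f_prec * t + 1.1)
         + 0.1 * np.random.randn(t.size))

    best = None
    for f in np.arange(1.5, 3.5, 0.01):                      # candidate frequencies
        # dictionary: cos/sin atoms for the first 5 harmonics of the candidate
        D = np.column_stack([trig(2 * np.pi * k * f * t)
                             for k in range(1, 6) for trig in (np.cos, np.sin)])
        D /= np.linalg.norm(D, axis=0)
        _, residual = omp(D, y, n_atoms=4)
        if best is None or np.linalg.norm(residual) < best[1]:
            best = (f, np.linalg.norm(residual))

    print(f"estimated precession frequency: {best[0]:.2f} Hz (true {f_prec} Hz)")
    ```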

  11. A general method for assessing brain-computer interface performance and its limitations

    NASA Astrophysics Data System (ADS)

    Hill, N. Jeremy; Häuser, Ann-Katrin; Schalk, Gerwin

    2014-04-01

    Objective. When researchers evaluate brain-computer interface (BCI) systems, we want quantitative answers to questions such as: How good is the system’s performance? How good does it need to be? and: Is it capable of reaching the desired level in future? In response to the current lack of objective, quantitative, study-independent approaches, we introduce methods that help to address such questions. We identified three challenges: (I) the need for efficient measurement techniques that adapt rapidly and reliably to capture a wide range of performance levels; (II) the need to express results in a way that allows comparison between similar but non-identical tasks; (III) the need to measure the extent to which certain components of a BCI system (e.g. the signal processing pipeline) not only support BCI performance, but also potentially restrict the maximum level it can reach. Approach. For challenge (I), we developed an automatic staircase method that adjusted task difficulty adaptively along a single abstract axis. For challenge (II), we used the rate of information gain between two Bernoulli distributions: one reflecting the observed success rate, the other reflecting chance performance estimated by a matched random-walk method. This measure includes Wolpaw’s information transfer rate as a special case, but addresses the latter’s limitations including its restriction to item-selection tasks. To validate our approach and address challenge (III), we compared four healthy subjects’ performance using an EEG-based BCI, a ‘Direct Controller’ (a high-performance hardware input device), and a ‘Pseudo-BCI Controller’ (the same input device, but with control signals processed by the BCI signal processing pipeline). Main results. Our results confirm the repeatability and validity of our measures, and indicate that our BCI signal processing pipeline reduced attainable performance by about 33% (21 bits min-1). Significance. Our approach provides a flexible basis for evaluating BCI performance and its limitations, across a wide range of tasks and task difficulties.
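
    One common way to write the two quantities mentioned here is sketched below (the paper's matched random-walk estimate of chance performance is not reproduced): the per-trial information gain as the KL divergence, in bits, between a Bernoulli at the observed success rate and a Bernoulli at chance, and Wolpaw's ITR for an N-choice selection task. Setting the chance rate to 1/N makes the two expressions coincide, which is the "special case" relationship noted in the abstract.

    ```python
    import numpy as np

    # Bits-per-trial measures (as commonly defined; illustrative, not the
    # paper's exact estimator): KL divergence between Bernoulli(p_observed)
    # and Bernoulli(p_chance), and Wolpaw's ITR for an N-choice task.

    def bernoulli_kl_bits(p, q):
        p, q = np.clip(p, 1e-12, 1 - 1e-12), np.clip(q, 1e-12, 1 - 1e-12)
        return p * np.log2(p / q) + (1 - p) * np.log2((1 - p) / (1 - q))

    def wolpaw_bits(p, n_choices):
        p = np.clip(p, 1e-12, 1 - 1e-12)
        return (np.log2(n_choices) + p * np.log2(p)
                + (1 - p) * np.log2((1 - p) / (n_choices - 1)))

    p_observed, n_choices = 0.85, 4
    print("KL-based information gain:", round(float(bernoulli_kl_bits(p_observed, 1 / n_choices)), 3))
    print("Wolpaw ITR (chance = 1/N):", round(float(wolpaw_bits(p_observed, n_choices)), 3))
    ```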

  12. Large-Signal Lyapunov-Based Stability Analysis of DC/AC Inverters and Inverter-Based Microgrids

    NASA Astrophysics Data System (ADS)

    Kabalan, Mahmoud

    Microgrid stability studies have been largely based on small-signal linearization techniques. However, the validity and magnitude of the linearization domain is limited to small perturbations. Thus, there is a need to examine microgrids with large-signal nonlinear techniques to fully understand and examine their stability. Large-signal stability analysis can be accomplished by Lyapunov-based mathematical methods. These Lyapunov methods estimate the domain of asymptotic stability of the studied system. A survey of Lyapunov-based large-signal stability studies showed that few large-signal studies have been completed on either individual systems (dc/ac inverters, dc/dc rectifiers, etc.) or microgrids. The research presented in this thesis addresses the large-signal stability of droop-controlled dc/ac inverters and inverter-based microgrids. Dc/ac power electronic inverters allow microgrids to be technically feasible. Thus, as a prelude to examining the stability of microgrids, the research presented in Chapter 3 analyzes the stability of inverters. First, the 13 th order large-signal nonlinear model of a droop-controlled dc/ac inverter connected to an infinite bus is presented. The singular perturbation method is used to decompose the nonlinear model into 11th, 9th, 7th, 5th, 3rd and 1st order models. Each model ignores certain control or structural components of the full order model. The aim of the study is to understand the accuracy and validity of the reduced order models in replicating the performance of the full order nonlinear model. The performance of each model is studied in three different areas: time domain simulations, Lyapunov's indirect method and domain of attraction estimation. The work aims to present the best model to use in each of the three domains of study. Results show that certain reduced order models are capable of accurately reproducing the performance of the full order model while others can be used to gain insights into those three areas of study. This will enable future studies to save computational effort and produce the most accurate results according to the needs of the study being performed. Moreover, the effect of grid (line) impedance on the accuracy of droop control is explored using the 5th order model. Simulation results show that traditional droop control is valid up to R/X line impedance value of 2. Furthermore, the 3rd order nonlinear model improves the currently available inverter-infinite bus models by accounting for grid impedance, active power-frequency droop and reactive power-voltage droop. Results show the 3rd order model's ability to account for voltage and reactive power changes during a transient event. Finally, the large-signal Lyapunov-based stability analysis is completed for a 3 bus microgrid system (made up of 2 inverters and 1 linear load). The thesis provides a systematic state space large-signal nonlinear mathematical modeling method of inverter-based microgrids. The inverters include the dc-side dynamics associated with dc sources. The mathematical model is then used to estimate the domain of asymptotic stability of the 3 bus microgrid. The three bus microgrid system was used as a case study to highlight the design and optimization capability of a large-signal-based approach. The study explores the effect of system component sizing, load transient and generation variations on the asymptotic stability of the microgrid. 
Essentially, this advancement gives microgrid designers and engineers the ability to manipulate the domain of asymptotic stability depending on performance requirements. Especially important, this research was able to couple the domain of asymptotic stability of the ac microgrid with that of the dc side voltage source. Time domain simulations were used to demonstrate the mathematical nonlinear analysis results.

  13. Capillary red blood cell velocimetry by phase-resolved optical coherence tomography.

    PubMed

    Tang, Jianbo; Erdener, Sefik Evren; Fu, Buyin; Boas, David A

    2017-10-01

    We present a phase-resolved optical coherence tomography (OCT) method to extend Doppler OCT for the accurate measurement of the red blood cell (RBC) velocity in cerebral capillaries. OCT data were acquired with an M-mode scanning strategy (repeated A-scans) to account for the single-file passage of RBCs in a capillary, which were then high-pass filtered to remove the stationary component of the signal to ensure an accurate measurement of phase shift of flowing RBCs. The angular frequency of the signal from flowing RBCs was then quantified from the dynamic component of the signal and used to calculate the axial speed of flowing RBCs in capillaries. We validated our measurement by RBC passage velocimetry using the signal magnitude of the same OCT time series data.

  14. Performance Investigation of Millimeter Wave Generation Reliant on Stimulated Brillouin Scattering

    NASA Astrophysics Data System (ADS)

    Tickoo, Sheetal; Gupta, Amit

    2018-04-01

    In this work, a photonic method of generating millimeter waves based on the Brillouin scattering effect in optical fiber is presented. Different approaches are proposed to obtain the maximum frequency shift in the mm-wave region using only pumps and radio signals with a Mach-Zehnder modulator. Moreover, for validation of the generated signal, the signals are modulated and sent over both wired and wireless media in the optical domain. It is observed that a maximum shift of 300 GHz is realized using a 60 GHz input sine wave. Essentially, a frequency doubler is proposed, which doubles the shift of the input frequency and provides a better SNR. For future-generation network systems, the generation of millimeter waves makes them well suited for reliable data transmission.

  15. Imagination and society: the role of visual sociology.

    PubMed

    Cipriani, Roberto; Del Re, Emanuela C

    2012-10-01

    The paper presents the field of Visual Sociology as an approach that makes use of photographs, films, documentaries and videos to capture and assess aspects of social life and social signals. It overviews some relevant works in the field, deals with methodological and epistemological issues by raising the question of the relation between the observer and the observed, and refers to some methods of analysis, such as those proposed by Grounded Theory, and to some connected tools for automatic qualitative analysis, like NVivo. The relevance of visual sociology to the study of social signals lies in the fact that it can validly integrate the information, introducing a multi-modal approach into the analysis of social signals.

  16. Desired Accuracy Estimation of Noise Function from ECG Signal by Fuzzy Approach

    PubMed Central

    Vahabi, Zahra; Kermani, Saeed

    2012-01-01

    Unknown noise and artifacts present in medical signals are estimated with a non-linear fuzzy filter and then removed. An adaptive neuro-fuzzy inference system, which has a non-linear structure, is presented for predicting the noise function from previous samples. This paper describes a neuro-fuzzy method to estimate the unknown noise of an electrocardiogram signal. An adaptive neural network is combined with a fuzzy system to construct a fuzzy predictor. For this system, parameters such as the number of membership functions for each input and output, the number of training epochs, the type of membership functions for each input and output, and the learning algorithm are determined from the training data. Finally, simulated experimental results are presented for proper validation. PMID:23717810

  17. Detection and recognition of mechanical, digging and vehicle signals in the optical fiber pre-warning system

    NASA Astrophysics Data System (ADS)

    Tian, Qing; Yang, Dan; Zhang, Yuan; Qu, Hongquan

    2018-04-01

    This paper presents a detection and recognition method to locate and identify harmful intrusions in an optical fiber pre-warning system (OFPS). Inspired by visual attention architecture (VAA), the processing flow is divided into two parts, i.e., a data-driven process and a task-driven process. First, the data-driven process takes all the measurements collected by the system as input signals, which are handled by the detection method to locate harmful intrusions in both the spatial and time domains. Then, the detected intrusion signals are taken over by the task-driven process. Specifically, the pitch period (PP) and duty cycle (DC) of the intrusion signals are used to identify mechanical and manual digging (MD) intrusions, respectively. For passing vehicle (PV) intrusions, their strong low-frequency component serves as a good feature. In general, since the harmful intrusion signals account for only a small part of the whole measurement set, the data-driven process considerably reduces the amount of input data for the subsequent task-driven process. Furthermore, the task-driven process handles the harmful intrusions in order of severity, which provides a priority mechanism for the system as well as targeted processing for the different types of harmful intrusion. Finally, real experiments are performed to validate the effectiveness of the method.

  18. Heart rate detection from an electronic weighing scale.

    PubMed

    González-Landaeta, R; Casas, O; Pallàs-Areny, R

    2007-01-01

    We propose a novel technique for heart rate detection on a subject who stands on a common electronic weighing scale. The detection relies on sensing force variations related to blood acceleration in the aorta, works even if the subject is wearing footwear, and does not require any sensors attached to the body. We applied our method to three different weighing scales and assessed whether their sensitivity and frequency response suited heart rate detection. Scale sensitivities ranged from 490 nV/V/N to 1670 nV/V/N, all had an underdamped transient response, and their dynamic gain error was below 19% at 10 Hz, which are acceptable values for heart rate estimation. We also designed a pulse detection system based on off-the-shelf integrated circuits, whose gain was about 70×10³ and which was able to sense force variations of about 240 mN. The signal-to-noise ratio (SNR) of the main peaks of the detected pulse signal was higher than 48 dB, which is large enough to estimate the heart rate by simple signal processing methods. To validate the method, the ECG and the force signal were simultaneously recorded on 12 volunteers. The maximal error between heart rates determined from these two signals was ±0.6 beats/minute.

  19. High sensitivity phase retrieval method in grating-based x-ray phase contrast imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Zhao; Gao, Kun; Chen, Jian

    2015-02-15

    Purpose: Grating-based x-ray phase contrast imaging is considered as one of the most promising techniques for future medical imaging. Many different methods have been developed to retrieve phase signal, among which the phase stepping (PS) method is widely used. However, further practical implementations are hindered, due to its complex scanning mode and high radiation dose. In contrast, the reverse projection (RP) method is a novel fast and low dose extraction approach. In this contribution, the authors present a quantitative analysis of the noise properties of the refraction signals retrieved by the two methods and compare their sensitivities. Methods: Using the error propagation formula, the authors analyze theoretically the signal-to-noise ratios (SNRs) of the refraction images retrieved by the two methods. Then, the sensitivities of the two extraction methods are compared under an identical exposure dose. Numerical experiments are performed to validate the theoretical results and provide some quantitative insight. Results: The SNRs of the two methods are both dependent on the system parameters, but in different ways. Comparison between their sensitivities reveals that for the refraction signal, the RP method possesses a higher sensitivity, especially in the case of high visibility and/or at the edge of the object. Conclusions: Compared with the PS method, the RP method has a superior sensitivity and provides refraction images with a higher SNR. Therefore, one can obtain highly sensitive refraction images in grating-based phase contrast imaging. This is very important for future preclinical and clinical implementations.

  20. Simultaneous spectrophotometric determination of indacaterol and glycopyrronium in a newly approved pharmaceutical formulation using different signal processing techniques of ratio spectra

    NASA Astrophysics Data System (ADS)

    Abdel Ghany, Maha F.; Hussein, Lobna A.; Magdy, Nancy; Yamani, Hend Z.

    2016-03-01

    Three spectrophotometric methods have been developed and validated for the determination of indacaterol (IND) and glycopyrronium (GLY) in binary mixtures and in a novel pharmaceutical dosage form. The proposed methods are the first to determine the investigated drugs simultaneously. The developed methods are based on different signal processing techniques applied to ratio spectra, namely Numerical Differentiation (ND), Savitzky-Golay (SG) and Fourier Transform (FT). The methods showed linearity over concentration ranges of 1-30 and 10-35 μg/mL for IND and GLY, respectively. The accuracy, calculated as percentage recoveries, was in the range of 99.00%-100.49% with a low RSD% (< 1.5%), demonstrating the excellent accuracy of the proposed methods. The developed methods were proved to be specific, sensitive and precise for quality control of the investigated drugs in their pharmaceutical dosage form without the need for any separation process.
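
    The shared workflow behind these ratio-spectra techniques can be sketched with synthetic Gaussian bands standing in for the IND and GLY spectra: divide the mixture spectrum by a divisor spectrum of one component, take a Savitzky-Golay first derivative to remove the constant term, and read a concentration-proportional amplitude. Band positions, divisor concentration and filter settings below are illustrative, not the validated parameters of the paper.

    ```python
    import numpy as np
    from scipy.signal import savgol_filter

    # Ratio-spectra + Savitzky-Golay first-derivative sketch with synthetic
    # Gaussian absorption bands. All spectra and settings are illustrative.

    wl = np.arange(210.0, 290.0, 0.5)                        # wavelength grid, nm

    def band(center, width, eps):
        return eps * np.exp(-((wl - center) / width) ** 2)

    eps_A = band(260, 18, 1.0)                               # "IND"-like spectrum per µg/mL
    eps_B = band(240, 25, 0.8)                               # "GLY"-like spectrum per µg/mL

    def sg_ratio_derivative(c_A, c_B, divisor_conc=10.0):
        mixture = c_A * eps_A + c_B * eps_B + 0.002 * np.random.randn(wl.size)
        ratio = mixture / (divisor_conc * eps_B)             # divide by a GLY divisor spectrum
        # the c_B term becomes a constant, so the first derivative removes it
        return savgol_filter(ratio, window_length=21, polyorder=3, deriv=1)

    # working wavelength where the IND contribution to the derivative is largest
    peak_idx = np.argmax(np.abs(sg_ratio_derivative(20.0, 15.0) - sg_ratio_derivative(0.0, 15.0)))
    for c_A in (5.0, 10.0, 20.0):
        amp = sg_ratio_derivative(c_A, 15.0)[peak_idx]
        print(f"IND {c_A:5.1f} µg/mL -> derivative amplitude {amp: .4f}")
    ```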

  1. A Bioassay System Using Bioelectric Signals from Small Fish

    NASA Astrophysics Data System (ADS)

    Terawaki, Mitsuru; Soh, Zu; Hirano, Akira; Tsuji, Toshio

    Although the quality of tap water is generally examined using chemical assay, this method cannot be used for examination in real time. Against such a background, the technique of fish bioassay has attracted attention as an approach that enables constant monitoring of aquatic contamination. The respiratory rhythms of fish are considered an efficient indicator for the ongoing assessment of water quality, since they are sensitive to chemicals and can be indirectly measured from bioelectric signals generated by breathing. In order to judge aquatic contamination accurately, it is necessary to measure bioelectric signals from fish swimming freely as well as to stably discriminate measured signals, which vary between individuals. However, no bioassay system meeting the above requirements has yet been established. This paper proposes a bioassay system using bioelectric signals generated from small fish in free-swimming conditions. The system records signals using multiple electrodes to cover the extensive measurement range required in a free-swimming environment, and automatically discriminates changes in water quality from signal frequency components. This discrimination is achieved through an ensemble classification method using probability neural networks to solve the problem of differences between individual fish. The paper also reports on the results of related validation experiments, which showed that the proposed system was able to stably discriminate between water conditions before and after bleach exposure.

  2. Unveiling the Biometric Potential of Finger-Based ECG Signals

    PubMed Central

    Lourenço, André; Silva, Hugo; Fred, Ana

    2011-01-01

    The ECG signal has been shown to contain relevant information for human identification. Even though results validate the potential of these signals, data acquisition methods and apparatus explored so far compromise user acceptability, requiring the acquisition of ECG at the chest. In this paper, we propose a finger-based ECG biometric system, that uses signals collected at the fingers, through a minimally intrusive 1-lead ECG setup recurring to Ag/AgCl electrodes without gel as interface with the skin. The collected signal is significantly more noisy than the ECG acquired at the chest, motivating the application of feature extraction and signal processing techniques to the problem. Time domain ECG signal processing is performed, which comprises the usual steps of filtering, peak detection, heartbeat waveform segmentation, and amplitude normalization, plus an additional step of time normalization. Through a simple minimum distance criterion between the test patterns and the enrollment database, results have revealed this to be a promising technique for biometric applications. PMID:21837235

  3. Unveiling the biometric potential of finger-based ECG signals.

    PubMed

    Lourenço, André; Silva, Hugo; Fred, Ana

    2011-01-01

    The ECG signal has been shown to contain relevant information for human identification. Even though results validate the potential of these signals, data acquisition methods and apparatus explored so far compromise user acceptability, requiring the acquisition of ECG at the chest. In this paper, we propose a finger-based ECG biometric system, that uses signals collected at the fingers, through a minimally intrusive 1-lead ECG setup recurring to Ag/AgCl electrodes without gel as interface with the skin. The collected signal is significantly more noisy than the ECG acquired at the chest, motivating the application of feature extraction and signal processing techniques to the problem. Time domain ECG signal processing is performed, which comprises the usual steps of filtering, peak detection, heartbeat waveform segmentation, and amplitude normalization, plus an additional step of time normalization. Through a simple minimum distance criterion between the test patterns and the enrollment database, results have revealed this to be a promising technique for biometric applications.
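
    The time-domain pipeline listed in the abstract (band-pass filtering, R-peak detection, heartbeat segmentation, amplitude and time normalization, minimum-distance matching) can be sketched with scipy on a synthetic ECG-like trace; the filter settings, segment lengths and matching rule below are plausible placeholders rather than the authors' exact choices.

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt, find_peaks, resample

    # Sketch of the time-domain steps named in the abstract, on synthetic data.

    fs = 500.0
    t = np.arange(0, 20.0, 1 / fs)
    ecg = np.zeros_like(t)
    for beat in np.arange(0.5, t[-1], 0.85):                   # crude R-wave train
        ecg += np.exp(-((t - beat) / 0.012) ** 2)
    ecg += 0.2 * np.sin(2 * np.pi * 0.3 * t) + 0.05 * np.random.randn(t.size)

    b, a = butter(3, [1.0 / (fs / 2), 40.0 / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, ecg)
    peaks, _ = find_peaks(filtered, height=0.5 * filtered.max(), distance=int(0.4 * fs))

    def heartbeats(sig, peaks, pre=0.25, post=0.45, n_out=300):
        beats = []
        for p in peaks:
            lo, hi = p - int(pre * fs), p + int(post * fs)
            if lo < 0 or hi > sig.size:
                continue
            seg = sig[lo:hi]
            seg = (seg - seg.mean()) / (np.abs(seg).max() + 1e-12)   # amplitude normalization
            beats.append(resample(seg, n_out))                        # time normalization
        return np.array(beats)

    beats = heartbeats(filtered, peaks)
    template = beats[: len(beats) // 2].mean(axis=0)            # enrollment template
    distances = np.linalg.norm(beats[len(beats) // 2:] - template, axis=1)
    print(f"{len(beats)} beats segmented; mean distance to enrolled template: {distances.mean():.3f}")
    ```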

  4. Monte Carlo investigation of transient acoustic fields in partially or completely bounded medium. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Thanedar, B. D.

    1972-01-01

    A simple repetitive calculation was used to investigate what happens to the field in terms of the signal paths of disturbances originating from the energy source. The computation allowed the field to be reconstructed as a function of space and time on a statistical basis. The suggested Monte Carlo method responds to the need for a numerical method, applicable to a bounded medium, to supplement analytical methods of solution that are valid only when the boundaries have simple shapes. For the analysis, a suitable model was created, from which an algorithm was developed for estimating acoustic pressure variations in the region under investigation. The validity of the technique was demonstrated by the analysis of simple physical models with the aid of a digital computer. The Monte Carlo method is applicable to a medium that is homogeneous and enclosed by either rectangular or curved boundaries.

  5. VALFAST: Secure Probabilistic Validation of Hundreds of Kepler Planet Candidates

    NASA Astrophysics Data System (ADS)

    Morton, Tim; Petigura, E.; Johnson, J. A.; Howard, A.; Marcy, G. W.; Baranec, C.; Law, N. M.; Riddle, R. L.; Ciardi, D. R.; Robo-AO Team

    2014-01-01

    The scope, scale, and tremendous success of the Kepler mission has necessitated the rapid development of probabilistic validation as a new conceptual framework for analyzing transiting planet candidate signals. While several planet validation methods have been independently developed and presented in the literature, none has yet come close to addressing the entire Kepler survey. I present the results of applying VALFAST---a planet validation code based on the methodology described in Morton (2012)---to every Kepler Object of Interest. VALFAST is unique in its combination of detail, completeness, and speed. Using the transit light curve shape, realistic population simulations, and (optionally) diverse follow-up observations, it calculates the probability that a transit candidate signal is the result of a true transiting planet or any of a number of astrophysical false positive scenarios, all in just a few minutes on a laptop computer. In addition to efficiently validating the planetary nature of hundreds of new KOIs, this broad application of VALFAST also demonstrates its ability to reliably identify likely false positives. This extensive validation effort is also the first to incorporate data from all of the largest Kepler follow-up observing efforts: the CKS survey of ~1000 KOIs with Keck/HIRES, the Robo-AO survey of >1700 KOIs, and high-resolution images obtained through the Kepler Follow-up Observing Program. In addition to enabling the core science that the Kepler mission was designed for, this methodology will be critical to obtain statistical results from future surveys such as TESS and PLATO.

  6. A Surrogate Technique for Investigating Deterministic Dynamics in Discrete Human Movement.

    PubMed

    Taylor, Paul G; Small, Michael; Lee, Kwee-Yum; Landeo, Raul; O'Meara, Damien M; Millett, Emma L

    2016-10-01

    Entropy is an effective tool for investigation of human movement variability. However, before applying entropy, it can be beneficial to employ analyses to confirm that observed data are not solely the result of stochastic processes. This can be achieved by contrasting observed data with that produced using surrogate methods. Unlike continuous movement, no appropriate method has been applied to discrete human movement. This article proposes a novel surrogate method for discrete movement data, outlining the processes for determining its critical values. The proposed technique reliably generated surrogates for discrete joint angle time series, destroying fine-scale dynamics of the observed signal, while maintaining macro structural characteristics. Comparison of entropy estimates indicated observed signals had greater regularity than surrogates and were not only the result of stochastic but also deterministic processes. The proposed surrogate method is both a valid and reliable technique to investigate determinism in other discrete human movement time series.

  7. A new approach to harmonic elimination based on a real-time comparison method

    NASA Astrophysics Data System (ADS)

    Gourisetti, Sri Nikhil Gupta

    Undesired harmonics are responsible for noise in transmission channels and for power loss in power electronics and motor control. Selective Harmonic Elimination (SHE) is a well-known method for eliminating or suppressing unwanted harmonics between the fundamental and the carrier-frequency component, but it has the disadvantage of being unsuitable for real-time applications. A novel reference-carrier comparison method has been developed that generates an SPWM signal suitable for real-time systems. A modified carrier signal is designed and tested for different carrier frequencies based on the FFT of the generated SPWM. Because the carrier signal may change for different fundamental-to-carrier ratios, the defining equations must otherwise be solved each time. An analysis to find all possible solutions for a particular carrier frequency and fundamental amplitude shows that there is no single global maximum; instead, several local maxima exist for a particular set of conditions, which makes the method less sensitive. Additionally, an attempt is made to find a universal solution valid for any carrier signal with a predefined fundamental amplitude. A uniform-distribution Monte Carlo sensitivity analysis is performed to bound the window of best and worst possible solutions. The simulations are performed in MATLAB and are supported by experimental results.
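
    The reference-carrier comparison itself is straightforward; the sketch below generates an SPWM gate signal by comparing a sinusoidal reference with a plain triangular carrier and inspects the resulting FFT. The paper's modified carrier and its chosen frequencies are not reproduced here; all parameter values are assumptions for illustration.

    ```python
    import numpy as np

    f_fund, f_carrier = 50.0, 2000.0        # fundamental and carrier frequencies (Hz), assumed
    m_a = 0.8                               # modulation index (fundamental amplitude)
    fs = 200_000                            # simulation sample rate (Hz)
    t = np.arange(0, 0.04, 1.0 / fs)        # two fundamental cycles

    reference = m_a * np.sin(2 * np.pi * f_fund * t)
    # plain triangular carrier in [-1, 1]; the paper's "modified carrier" would replace this
    carrier = 2.0 / np.pi * np.arcsin(np.sin(2 * np.pi * f_carrier * t))

    spwm = np.where(reference >= carrier, 1.0, -1.0)   # real-time comparison -> gate signal

    # inspect the harmonic content of the generated SPWM
    spectrum = np.abs(np.fft.rfft(spwm)) / len(spwm)
    freqs = np.fft.rfftfreq(len(spwm), 1.0 / fs)
    # recovered fundamental amplitude, approximately m_a in the linear modulation region
    print("fundamental component:", 2 * spectrum[np.argmin(np.abs(freqs - f_fund))])
    ```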

  8. Detection and Identification of Multiple Stationary Human Targets Via Bio-Radar Based on the Cross-Correlation Method

    PubMed Central

    Zhang, Yang; Chen, Fuming; Xue, Huijun; Li, Zhao; An, Qiang; Wang, Jianqi; Zhang, Yang

    2016-01-01

    Ultra-wideband (UWB) radar has been widely used for detecting human physiological signals (respiration, movement, etc.) in the fields of rescue, security, and medicine owing to its high penetrability and range resolution. In these applications, especially in rescue after disaster (earthquake, collapse, mine accident, etc.), the presence, number, and location of the trapped victims to be detected and rescued are the key issues of concern. Ample research has been done on the first issue, whereas the identification and localization of multi-targets remains a challenge. False positive and negative identification results are two common problems associated with the detection of multiple stationary human targets. This is mainly because the energy of the signal reflected from the target close to the receiving antenna is considerably stronger than those of the targets at further range, often leading to missing or false recognition if the identification method is based on the energy of the respiratory signal. Therefore, a novel method based on cross-correlation is proposed in this paper that is based on the relativity and periodicity of the signals, rather than on the energy. The validity of this method is confirmed through experiments using different scenarios; the results indicate a discernible improvement in the detection precision and identification of the multiple stationary targets. PMID:27801795
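
    A minimal sketch of a periodicity-based (rather than energy-based) detection criterion: each range bin is flagged if its slow-time signal shows a strong autocorrelation peak in the respiration band, so a weak far target is treated the same as a strong near one. The band limits and threshold are assumed values; the paper's exact cross-correlation statistic may differ.

    ```python
    import numpy as np

    def normalized_autocorr(x):
        x = (x - x.mean()) / (x.std() + 1e-12)
        ac = np.correlate(x, x, mode="full")[len(x) - 1:]
        return ac / ac[0]

    def is_breathing(slow_time, fs, band=(0.2, 0.5), threshold=0.4):
        """Flag a range bin as containing a respiring target if its slow-time signal
        has a strong periodic autocorrelation peak in the respiration band, a
        decision based on periodicity rather than signal energy (illustrative
        criterion; the paper's statistic may differ)."""
        ac = normalized_autocorr(slow_time)
        lags = np.arange(len(ac)) / fs
        lo, hi = 1.0 / band[1], 1.0 / band[0]          # lag range for 0.2-0.5 Hz breathing
        mask = (lags >= lo) & (lags <= hi)
        return bool(ac[mask].max() > threshold)

    # toy example: two targets at different ranges, the second one much weaker
    fs = 20.0
    t = np.arange(0, 60, 1 / fs)
    rng = np.random.default_rng(0)
    near = 1.00 * np.sin(2 * np.pi * 0.30 * t) + 0.20 * rng.normal(size=t.size)
    far  = 0.05 * np.sin(2 * np.pi * 0.25 * t) + 0.02 * rng.normal(size=t.size)
    print(is_breathing(near, fs), is_breathing(far, fs))   # both True despite the energy gap
    ```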

  9. Model-free and analytical EAP reconstruction via spherical polar Fourier diffusion MRI.

    PubMed

    Cheng, Jian; Ghosh, Aurobrata; Jiang, Tianzi; Deriche, Rachid

    2010-01-01

    How to estimate the diffusion Ensemble Average Propagator (EAP) from the DWI signals in q-space is an open problem in the diffusion MRI field. Many methods have been proposed to estimate the Orientation Distribution Function (ODF), which is used to describe fiber directions. However, the ODF is just one feature of the EAP; compared with the ODF, the EAP carries the full information about the diffusion process and thus reflects the complex tissue microstructure. Diffusion Orientation Transform (DOT) and Diffusion Spectrum Imaging (DSI) are two important methods for estimating the EAP from the signal. However, DOT is based on a mono-exponential decay assumption, and DSI requires many samples and very large b-values. In this paper, we propose Spherical Polar Fourier Imaging (SPFI), a novel model-free, fast, robust, analytical EAP reconstruction method that requires almost no assumptions about the data and relatively few samples. SPFI naturally combines DWI signals with different b-values. It is an analytical linear transformation from the q-space signal to the EAP profile represented by Spherical Harmonics (SH). We validated the proposed method on synthetic, phantom, and real data; it performs well in all experiments, especially for data with low SNR, low anisotropy, and non-exponential decay.

  10. Detection and Identification of Multiple Stationary Human Targets Via Bio-Radar Based on the Cross-Correlation Method.

    PubMed

    Zhang, Yang; Chen, Fuming; Xue, Huijun; Li, Zhao; An, Qiang; Wang, Jianqi; Zhang, Yang

    2016-10-27

    Ultra-wideband (UWB) radar has been widely used for detecting human physiological signals (respiration, movement, etc.) in the fields of rescue, security, and medicine owing to its high penetrability and range resolution. In these applications, especially in rescue after disaster (earthquake, collapse, mine accident, etc.), the presence, number, and location of the trapped victims to be detected and rescued are the key issues of concern. Ample research has been done on the first issue, whereas the identification and localization of multi-targets remains a challenge. False positive and negative identification results are two common problems associated with the detection of multiple stationary human targets. This is mainly because the energy of the signal reflected from the target close to the receiving antenna is considerably stronger than those of the targets at further range, often leading to missing or false recognition if the identification method is based on the energy of the respiratory signal. Therefore, a novel method based on cross-correlation is proposed in this paper that is based on the relativity and periodicity of the signals, rather than on the energy. The validity of this method is confirmed through experiments using different scenarios; the results indicate a discernible improvement in the detection precision and identification of the multiple stationary targets.

  11. Determining dark matter properties with a XENONnT/LZ signal and LHC Run 3 monojet searches

    NASA Astrophysics Data System (ADS)

    Baum, Sebastian; Catena, Riccardo; Conrad, Jan; Freese, Katherine; Krauss, Martin B.

    2018-04-01

    We develop a method to forecast the outcome of the LHC Run 3 based on the hypothetical detection of O(100) signal events at XENONnT. Our method relies on a systematic classification of renormalizable single-mediator models for dark matter-quark interactions and is valid for dark matter candidates of spin less than or equal to one. Applying our method to simulated data, we find that at the end of the LHC Run 3 only two mutually exclusive scenarios would be compatible with the detection of O(100) signal events at XENONnT. In the first scenario, the energy distribution of the signal events is featureless, as for canonical spin-independent interactions. In this case, if a monojet signal is detected at the LHC, dark matter must have spin 1/2 and interact with nucleons through a unique velocity-dependent operator. If a monojet signal is not detected, dark matter interacts with nucleons through canonical spin-independent interactions. In a second scenario, the spectral distribution of the signal events exhibits a bump at nonzero recoil energies. In this second case, a monojet signal can be detected at the LHC Run 3; dark matter must have spin 1/2 and interact with nucleons through a unique momentum-dependent operator. We therefore conclude that the observation of O(100) signal events at XENONnT combined with the detection, or the lack of detection, of a monojet signal at the LHC Run 3 would significantly narrow the range of possible dark matter-nucleon interactions. As we argued above, it can also provide key information on the dark matter particle spin.

  12. Accurate identification of motor unit discharge patterns from high-density surface EMG and validation with a novel signal-based performance metric

    NASA Astrophysics Data System (ADS)

    Holobar, A.; Minetto, M. A.; Farina, D.

    2014-02-01

    Objective. A signal-based metric for assessment of accuracy of motor unit (MU) identification from high-density surface electromyograms (EMG) is introduced. This metric, so-called pulse-to-noise-ratio (PNR), is computationally efficient, does not require any additional experimental costs and can be applied to every MU that is identified by the previously developed convolution kernel compensation technique. Approach. The analytical derivation of the newly introduced metric is provided, along with its extensive experimental validation on both synthetic and experimental surface EMG signals with signal-to-noise ratios ranging from 0 to 20 dB and muscle contraction forces from 5% to 70% of the maximum voluntary contraction. Main results. In all the experimental and simulated signals, the newly introduced metric correlated significantly with both sensitivity and false alarm rate in identification of MU discharges. Practically all the MUs with PNR > 30 dB exhibited sensitivity >90% and false alarm rates <2%. Therefore, a threshold of 30 dB in PNR can be used as a simple method for selecting only reliably decomposed units. Significance. The newly introduced metric is considered a robust and reliable indicator of accuracy of MU identification. The study also shows that high-density surface EMG can be reliably decomposed at contraction forces as high as 70% of the maximum.
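
    A sketch of one common formulation of such a pulse-to-noise ratio: the mean squared value of the estimated source signal at detected discharge instants relative to that at all other samples, in dB. The exact derivation in the paper may differ; the toy pulse train and noise level below are assumptions.

    ```python
    import numpy as np

    def pulse_to_noise_ratio(source, pulse_idx):
        """PNR of an estimated motor-unit source signal, in dB: the ratio of the mean
        squared source value at detected discharge instants to that at all other
        samples (one common formulation; see the paper for the exact derivation)."""
        pulses = np.zeros(len(source), dtype=bool)
        pulses[pulse_idx] = True
        p = np.mean(source[pulses] ** 2)
        n = np.mean(source[~pulses] ** 2)
        return 10.0 * np.log10(p / n)

    # toy example: a sparse, roughly regular pulse train buried in baseline noise
    rng = np.random.default_rng(0)
    s = 0.02 * rng.normal(size=10_000)
    idx = np.arange(100, 10_000, 120)
    s[idx] += 1.0
    print(f"PNR = {pulse_to_noise_ratio(s, idx):.1f} dB")   # > 30 dB -> keep the unit
    ```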

  13. A contactless approach for respiratory gating in PET using continuous-wave radar

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ersepke, Thomas, E-mail: Thomas.Ersepke@rub.de; Büther, Florian; Heß, Mirco

    Purpose: Respiratory gating is commonly used to reduce motion artifacts in positron emission tomography (PET). Clinically established methods for respiratory gating in PET require contact with the patient or a direct optical line between the sensor and the patient's torso, as well as time-consuming preparation. In this work, a contactless method for capturing a respiratory signal during PET is presented based on continuous-wave radar. Methods: The proposed method relies on the principle of emitting an electromagnetic wave and detecting the phase shift of the reflected wave, modulated by the respiratory movement of the patient's torso. A 24 GHz carrier frequency was chosen, allowing wave propagation through plastic and clothing with high reflections at the skin surface. A detector module and signal processing algorithms were developed to extract a quantitative respiratory signal. The sensor was validated using a high precision linear table. During volunteer measurements and [18F]FDG PET scans, the radar sensor was positioned inside the scanner bore of a PET/computed tomography scanner. As reference, pressure belt (one volunteer), depth camera-based (two volunteers, two patients), and PET data-driven (six patients) signals were acquired simultaneously and the signal correlation was quantified. Results: The developed system demonstrated a high measurement accuracy for movement detection within the submillimeter range. With the proposed method, small displacements of 25 μm could be detected, not considerably influenced by clothing or blankets. From the patient studies, the extracted respiratory radar signals revealed high correlation (Pearson correlation coefficient) to those derived from the external pressure belt and depth camera signals (r = 0.69-0.99) and moderate correlation to those of the internal data-driven signals (r = 0.53-0.70). In some cases, a cardiac signal could be visualized, due to the representation of the mechanical heart motion on the skin. Conclusions: Accurate respiratory signals were obtained successfully by the proposed method with high spatial and temporal resolution. By working without contact and passing through clothing and blankets, this approach minimizes preparation time and increases the convenience of the patient during the scan.
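
    The phase-shift principle can be illustrated with standard arctangent demodulation of the radar's I/Q channels, as sketched below; the detector module, filtering, and signal conditioning of the actual system are omitted, and the toy motion amplitude is an assumption.

    ```python
    import numpy as np

    def radar_displacement(i_sig, q_sig, carrier_hz=24e9):
        """Recover chest displacement from the I/Q channels of a CW radar by
        arctangent demodulation: the unwrapped phase of I + jQ is proportional to
        the round-trip path change (a standard technique; the paper's detector
        module and post-processing are not reproduced)."""
        c = 299_792_458.0
        wavelength = c / carrier_hz                 # ~12.5 mm at 24 GHz
        phase = np.unwrap(np.angle(i_sig + 1j * q_sig))
        return phase * wavelength / (4 * np.pi)     # metres of chest motion

    # toy example: 0.5 mm respiratory motion at 0.25 Hz (assumed values)
    fs = 100.0
    t = np.arange(0, 30, 1 / fs)
    x = 0.5e-3 * np.sin(2 * np.pi * 0.25 * t)       # true displacement (m)
    phi = 4 * np.pi * x / (299_792_458.0 / 24e9)
    i_sig, q_sig = np.cos(phi), np.sin(phi)
    est = radar_displacement(i_sig, q_sig)
    print(f"max reconstruction error: {np.max(np.abs(est - x)) * 1e6:.2f} um")
    ```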

  14. Guided Lamb wave based 2-D spiral phased array for structural health monitoring of thin panel structures

    NASA Astrophysics Data System (ADS)

    Yoo, Byungseok

    2011-12-01

    In nearly all mechanical, aerospace, and civil engineering industries, structural health monitoring (SHM) technology is essential for providing reliable information on the structural integrity of safety-critical structures, which can help reduce the risk of unexpected and sometimes catastrophic failures and also offer cost-effective inspection and maintenance of the structures. State-of-the-art SHM research on structural damage diagnosis is focused on developing global and real-time technologies to identify the existence, location, extent, and type of damage. In order to detect and monitor structural damage in plate-like structures, SHM technology based on guided Lamb wave (GLW) interrogation is becoming more attractive due to its potential benefits, such as coverage of a large inspection area in a short time, a simple inspection mechanism, and sensitivity to small damage. However, the GLW method has a few critical issues, such as its dispersive nature, mode conversion and separation, and the existence of multiple modes. The phased array technique, widely used across civil, military, scientific, and medical fields, may be employed to overcome these drawbacks of the GLW method. The GLW-based phased array approach is able to effectively examine and analyze complicated structural vibration responses in thin plate structures. Because the phased sensor array operates as a spatial filter for the GLW signals, the array signal processing method can enhance a desired signal component from a specific direction while suppressing signal components from other directions. This dissertation presents the development, experimental validation, and damage detection applications of an innovative signal processing algorithm based on a two-dimensional (2-D) spiral phased array in conjunction with the GLW interrogation technique. It starts with general background on SHM and the associated technology, including the GLW interrogation method. It then focuses on the fundamentals of the GLW-based phased array approach and the development of an innovative signal processing algorithm associated with the 2-D spiral phased sensor array. The SHM approach based on array responses determined by the proposed phased array algorithm is addressed. The experimental validation of the GLW-based 2-D spiral phased array technology and the associated damage detection applications to thin isotropic plate and anisotropic composite plate structures are presented.
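
    The spatial-filtering role of the phased array can be illustrated with generic delay-and-sum beamforming, sketched below for a handful of sensors laid out on a small spiral; this is not the dissertation's spiral-array algorithm, and the wave speed, frequencies, and geometry are assumed values.

    ```python
    import numpy as np

    def delay_and_sum(signals, sensor_xy, fs, c, theta):
        """Delay-and-sum beamforming for a 2-D sensor array: align the waveforms for
        a plane wave propagating toward azimuth `theta` (rad) and average. This is
        the basic spatial filtering a guided-wave phased array performs (a generic
        sketch, not the spiral-array algorithm of the dissertation)."""
        u = np.array([np.cos(theta), np.sin(theta)])
        shifts = np.round(sensor_xy @ u / c * fs).astype(int)   # per-sensor delay (samples)
        out = np.zeros(signals.shape[1])
        for sig, k in zip(signals, shifts):
            out += np.roll(sig, -k)                             # undo each sensor's delay
        return out / len(signals)

    # toy example: 8 sensors on a small spiral, 100 kHz burst propagating toward 40 degrees
    fs, c = 1.0e6, 5000.0                                       # sample rate (Hz), assumed wave speed (m/s)
    ang = np.linspace(0.5, 4 * np.pi, 8)
    sensor_xy = np.c_[0.01 * ang * np.cos(ang), 0.01 * ang * np.sin(ang)]
    burst = np.zeros(500)
    burst[:100] = np.sin(2 * np.pi * 100e3 * np.arange(100) / fs) * np.hanning(100)
    true_u = np.array([np.cos(np.deg2rad(40)), np.sin(np.deg2rad(40))])
    signals = np.array([np.roll(burst, int(round(xy @ true_u / c * fs))) for xy in sensor_xy])
    scan = [np.max(np.abs(delay_and_sum(signals, sensor_xy, fs, c, np.deg2rad(a))))
            for a in range(0, 360, 5)]
    print("estimated propagation azimuth:", 5 * int(np.argmax(scan)), "deg")
    ```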

  15. An operational modal analysis method in frequency and spatial domain

    NASA Astrophysics Data System (ADS)

    Wang, Tong; Zhang, Lingmi; Tamura, Yukio

    2005-12-01

    A frequency and spatial domain decomposition method (FSDD) for operational modal analysis (OMA) is presented in this paper, which is an extension of the complex mode indicator function (CMIF) method for experimental modal analysis (EMA). The theoretical background of the FSDD method is clarified. Singular value decomposition is adopted to separate the signal space from the noise space. Finally, an enhanced power spectrum density (PSD) is proposed to obtain more accurate modal parameters by curve fitting in the frequency domain. Moreover, a simulation case and an application case are used to validate this method.
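
    A minimal sketch of the CMIF/FDD core that FSDD builds on: form the cross power spectral density matrix of the multichannel outputs at each frequency and track its first singular value, whose peaks indicate modes. The enhanced-PSD curve fitting of the paper is not shown, and the toy two-mode response is an assumption.

    ```python
    import numpy as np
    from scipy.signal import csd

    def fdd_singular_spectrum(Y, fs, nperseg=1024):
        """Frequency-domain decomposition: build the cross-PSD matrix of the
        multichannel output Y (channels x samples) at each frequency and take its
        SVD. Peaks of the first singular value indicate modes; the corresponding
        singular vectors approximate mode shapes."""
        n_ch = Y.shape[0]
        f, _ = csd(Y[0], Y[0], fs=fs, nperseg=nperseg)
        G = np.zeros((len(f), n_ch, n_ch), dtype=complex)
        for i in range(n_ch):
            for j in range(n_ch):
                _, G[:, i, j] = csd(Y[i], Y[j], fs=fs, nperseg=nperseg)
        s1 = np.array([np.linalg.svd(G[k], compute_uv=False)[0] for k in range(len(f))])
        return f, s1

    # toy 2-DOF response: two decaying sinusoids mixed into 3 channels plus noise
    rng = np.random.default_rng(0)
    fs = 256.0
    t = np.arange(0, 60, 1 / fs)
    q1 = np.sin(2 * np.pi * 12.0 * t) * np.exp(-0.05 * t)
    q2 = np.sin(2 * np.pi * 31.0 * t) * np.exp(-0.05 * t)
    phi = np.array([[1.0, 0.5], [0.6, -1.0], [0.2, 0.8]])       # assumed mode shapes
    Y = phi @ np.vstack([q1, q2]) + 0.05 * rng.normal(size=(3, t.size))
    f, s1 = fdd_singular_spectrum(Y, fs)
    print("dominant singular-value peak (Hz):", float(f[np.argmax(s1)]))
    ```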

  16. Signal template generation from acquired mammographic images for the non-prewhitening model observer with eye-filter

    NASA Astrophysics Data System (ADS)

    Balta, Christiana; Bouwman, Ramona W.; Sechopoulos, Ioannis; Broeders, Mireille J. M.; Karssemeijer, Nico; van Engen, Ruben E.; Veldkamp, Wouter J. H.

    2017-03-01

    Model observers (MOs) are being investigated for image quality assessment in full-field digital mammography (FFDM). Signal templates for the non-prewhitening MO with eye filter (NPWE) were formed using acquired FFDM images. A signal template was generated by averaging multiple acquired exposures, resulting in a low-noise signal template. Noise elimination while preserving the signal was investigated, and a methodology that results in a noise-free template is proposed. In order to deal with signal location uncertainty, template shifting was implemented. The procedure to generate the template was evaluated on images of an anthropomorphic breast phantom containing microcalcification-related signals. Optimal reduction of the background noise was achieved without changing the signal. Based on a validation study in simulated images, the difference (bias) in MO performance from the ground truth signal was calculated and found to be <1%. As template generation is a building block of the entire image quality assessment framework, the proposed method to construct templates from acquired images facilitates the use of the NPWE MO in acquired images.
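
    A minimal sketch of the template-averaging step, assuming a synthetic microcalcification-like blob and Gaussian noise: repeated signal-present exposures are averaged and an averaged signal-absent background is subtracted, leaving a low-noise template. The paper's dedicated noise-elimination procedure and template shifting are not reproduced.

    ```python
    import numpy as np

    def build_signal_template(exposures, background):
        """Average repeated exposures of the same signal-present region and subtract
        an equally averaged signal-absent (background) region, leaving a low-noise
        signal template (a simplified sketch of the averaging idea only)."""
        return np.mean(exposures, axis=0) - np.mean(background, axis=0)

    # toy example: a faint Gaussian blob (microcalcification-like) in noisy images
    rng = np.random.default_rng(0)
    y, x = np.mgrid[-16:16, -16:16]
    truth = 2.0 * np.exp(-(x ** 2 + y ** 2) / (2 * 3.0 ** 2))
    exposures  = np.array([truth + rng.normal(scale=5.0, size=truth.shape) for _ in range(64)])
    background = np.array([rng.normal(scale=5.0, size=truth.shape) for _ in range(64)])
    template = build_signal_template(exposures, background)
    # residual noise shrinks with the number of averaged exposures (about 5/sqrt(32) here)
    print("residual RMS error:", float(np.sqrt(np.mean((template - truth) ** 2))))
    ```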

  17. Probing multi-scale self-similarity of tissue structures using light scattering spectroscopy: prospects in pre-cancer detection

    NASA Astrophysics Data System (ADS)

    Chatterjee, Subhasri; Das, Nandan K.; Kumar, Satish; Mohapatra, Sonali; Pradhan, Asima; Panigrahi, Prasanta K.; Ghosh, Nirmalya

    2013-02-01

    Multi-resolution analysis of the spatial refractive index inhomogeneities in the connective tissue regions of the human cervix reveals a clear signature of multifractality. We have thus developed an inverse analysis strategy for the extraction and quantification of the multifractality of spatial refractive index fluctuations from the recorded light scattering signal. The method is based on Fourier-domain pre-processing of the light scattering data using the Born approximation, and its subsequent analysis through a Multifractal Detrended Fluctuation Analysis model. The method has been validated on several mono- and multi-fractal scattering objects whose self-similar properties are user controlled and known a priori. Following successful validation, this approach has initially been explored for differentiating between different grades of precancerous human cervical tissues.
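
    For reference, a compact sketch of standard multifractal detrended fluctuation analysis (MFDFA), which underlies the inverse analysis: q-th order fluctuation functions are computed over a range of scales and their log-log slopes give the generalized Hurst exponents h(q). The Born-approximation pre-processing of the light scattering data is not shown.

    ```python
    import numpy as np

    def mfdfa(signal, scales, q_values, order=1):
        """Multifractal detrended fluctuation analysis: for each scale s, split the
        cumulative profile into segments, remove a polynomial trend of the given
        order from each, and form q-th order fluctuation functions F_q(s). The
        slopes of log F_q(s) versus log s give the generalized Hurst exponents h(q);
        a q-dependent h(q) signals multifractality."""
        profile = np.cumsum(signal - np.mean(signal))
        hq = []
        for q in q_values:
            logF = []
            for s in scales:
                n_seg = len(profile) // s
                fluct = []
                for v in range(n_seg):
                    seg = profile[v * s:(v + 1) * s]
                    i = np.arange(s)
                    trend = np.polyval(np.polyfit(i, seg, order), i)
                    fluct.append(np.mean((seg - trend) ** 2))
                fluct = np.asarray(fluct)
                if abs(q) < 1e-9:                              # q -> 0 limit
                    logF.append(0.5 * np.mean(np.log(fluct)))
                else:
                    logF.append(np.log(np.mean(fluct ** (q / 2))) / q)
            hq.append(np.polyfit(np.log(scales), logF, 1)[0])
        return np.asarray(hq)

    # white noise is (mono)fractal: h(q) should come out roughly 0.5 for all q
    rng = np.random.default_rng(0)
    x = rng.normal(size=8192)
    print(mfdfa(x, scales=[16, 32, 64, 128, 256], q_values=[-4, -2, 2, 4]))
    ```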

  18. Time series modeling of human operator dynamics in manual control tasks

    NASA Technical Reports Server (NTRS)

    Biezad, D. J.; Schmidt, D. K.

    1984-01-01

    A time-series technique is presented for identifying the dynamic characteristics of the human operator in manual control tasks from relatively short records of experimental data. Control of system excitation signals used in the identification is not required. The approach is a multi-channel identification technique for modeling multi-input/multi-output situations. The method presented includes statistical tests for validity, is designed for digital computation, and yields estimates for the frequency responses of the human operator. A comprehensive relative power analysis may also be performed for validated models. This method is applied to several sets of experimental data; the results are discussed and shown to compare favorably with previous research findings. New results are also presented for a multi-input task that has not been previously modeled to demonstrate the strengths of the method.

  19. Time Series Modeling of Human Operator Dynamics in Manual Control Tasks

    NASA Technical Reports Server (NTRS)

    Biezad, D. J.; Schmidt, D. K.

    1984-01-01

    A time-series technique is presented for identifying the dynamic characteristics of the human operator in manual control tasks from relatively short records of experimental data. Control of system excitation signals used in the identification is not required. The approach is a multi-channel identification technique for modeling multi-input/multi-output situations. The method presented includes statistical tests for validity, is designed for digital computation, and yields estimates for the frequency response of the human operator. A comprehensive relative power analysis may also be performed for validated models. This method is applied to several sets of experimental data; the results are discussed and shown to compare favorably with previous research findings. New results are also presented for a multi-input task that was previously modeled to demonstrate the strengths of the method.

  20. Predicting pathway cross-talks in ankylosing spondylitis through investigating the interactions among pathways.

    PubMed

    Gu, Xiang; Liu, Cong-Jian; Wei, Jian-Jie

    2017-11-13

    Given that the pathogenesis of ankylosing spondylitis (AS) remains unclear, the aim of this study was to detect potentially functional pathway cross-talk in AS to further reveal the pathogenesis of this disease. Using a microarray profile of AS and biological pathways as study objects, a Monte Carlo cross-validation method was used to identify significant pathway cross-talks. In the Monte Carlo cross-validation process, all steps were iterated 50 times. For each run, differentially expressed genes (DEGs) between the two groups were detected, and the potentially disrupted pathways enriched by the DEGs were then extracted. Subsequently, we established a discriminating score (DS) for each pathway pair according to the distribution of gene expression levels. After that, we utilized a random forest (RF) classification model to screen out the top 10 paired pathways with the highest area under the curve (AUC) values, computed using a 10-fold cross-validation approach. After 50 bootstraps, the best pairs of pathways were identified. According to their AUC values, the pathway pair of the antigen presentation pathway and fMLP signaling in neutrophils achieved the best AUC value of 1.000, which indicated that this pathway cross-talk could distinguish AS patients from normal subjects. Moreover, the paired pathways of SAPK/JNK signaling and mitochondrial dysfunction were involved in 5 bootstraps. Two paired pathways (the antigen presentation pathway and fMLP signaling in neutrophils, as well as SAPK/JNK signaling and mitochondrial dysfunction) can accurately distinguish AS and control samples. These paired pathways may be helpful for identifying patients with AS for early intervention.
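
    A toy sketch of the pipeline's final stage, under assumed data and an assumed per-sample score: a discriminating score is computed for a pathway pair (here simply the difference of the two pathways' mean expression, a stand-in for the paper's DS), and a random forest is evaluated with 10-fold cross-validated AUC.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)

    # toy expression matrix: 40 samples (20 AS, 20 control) x 200 genes (assumed data)
    X = rng.normal(size=(40, 200))
    y = np.r_[np.ones(20, dtype=int), np.zeros(20, dtype=int)]
    X[y == 1, :10] += 1.0                    # genes 0-9: "pathway A", up in AS
    X[y == 1, 10:20] -= 1.0                  # genes 10-19: "pathway B", down in AS

    def pair_score(X, genes_a, genes_b):
        """Per-sample discriminating score for a pathway pair: here simply the
        difference of the two pathways' mean expression (an illustrative stand-in;
        the paper defines its DS from the distribution of gene expression levels)."""
        return X[:, genes_a].mean(axis=1) - X[:, genes_b].mean(axis=1)

    ds = pair_score(X, list(range(10)), list(range(10, 20))).reshape(-1, 1)
    auc = cross_val_score(RandomForestClassifier(n_estimators=200, random_state=0),
                          ds, y, cv=10, scoring="roc_auc")
    print("10-fold AUC for the pathway pair:", round(float(auc.mean()), 3))
    ```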

  1. Hsa-miR-195 targets PCMT1 in hepatocellular carcinoma that increases tumor life span.

    PubMed

    Amer, Marwa; Elhefnawi, M; El-Ahwany, Eman; Awad, A F; Gawad, Nermen Abdel; Zada, Suher; Tawab, F M Abdel

    2014-11-01

    MicroRNAs are small RNAs of 19-25 nucleotides that have been shown to play important roles in the regulation of gene expression in many organisms. Downregulation or accumulation of miRNAs implies either tumor suppression or oncogenic activation. In this study, differentially expressed hsa-miR-195 in hepatocellular carcinoma (HCC) was identified and analyzed. Target prediction was done using a consensus of prediction tools, and validation was carried out at two levels, in silico and in vitro. FGF7, GHR, PCMT1, CITED2, PEX5, PEX13, NOVA1, AXIN2, and TSPYL2 were detected with high significance (P < 0.005). These genes are involved in important cancer-related pathways such as the MAPK signaling pathway, the Jak-STAT signaling pathway, regulation of the actin cytoskeleton, angiogenesis, the Wnt signaling pathway, and the TGF-beta signaling pathway. In vitro target validation was done for protein-L-isoaspartate (D-aspartate) O-methyltransferase (PCMT1). Co-transfection of pmirGLO-PCMT1 and pEGP-miR-195 showed highly significant results. Firefly luciferase was detected using a Lumiscensor, and t-test analysis was performed. Firefly luciferase expression was significantly decreased (P < 0.001) in comparison to the control. The low expression of firefly luciferase validates the target prediction method used in this work, with PCMT1 confirmed as a target of miR-195. Furthermore, the remaining predicted genes are suspected to be real targets of hsa-miR-195. These target genes control almost all the hallmarks of liver cancer and can be used as therapeutic targets in cancer treatment.

  2. Determination of the optimal number of components in independent components analysis.

    PubMed

    Kassouf, Amine; Jouan-Rimbaud Bouveresse, Delphine; Rutledge, Douglas N

    2018-03-01

    Independent components analysis (ICA) may be considered as one of the most established blind source separation techniques for the treatment of complex data sets in analytical chemistry. Like other similar methods, the determination of the optimal number of latent variables, in this case, independent components (ICs), is a crucial step before any modeling. Therefore, validation methods are required in order to decide about the optimal number of ICs to be used in the computation of the final model. In this paper, three new validation methods are formally presented. The first one, called Random_ICA, is a generalization of the ICA_by_blocks method. Its specificity resides in the random way of splitting the initial data matrix into two blocks, and then repeating this procedure several times, giving a broader perspective for the selection of the optimal number of ICs. The second method, called KMO_ICA_Residuals is based on the computation of the Kaiser-Meyer-Olkin (KMO) index of the transposed residual matrices obtained after progressive extraction of ICs. The third method, called ICA_corr_y, helps to select the optimal number of ICs by computing the correlations between calculated proportions and known physico-chemical information about samples, generally concentrations, or between a source signal known to be present in the mixture and the signals extracted by ICA. These three methods were tested using varied simulated and experimental data sets and compared, when necessary, to ICA_by_blocks. Results were relevant and in line with expected ones, proving the reliability of the three proposed methods. Copyright © 2017 Elsevier B.V. All rights reserved.
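
    A sketch of the random-splitting idea behind Random_ICA, under assumptions about the matching criterion: samples are repeatedly split into two halves, ICs are extracted from each half, and the stability of the best-matched spectral signatures across halves is tracked as the number of ICs grows.

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA

    def random_ica_stability(X, n_ics, n_repeats=10, rng=None):
        """Random_ICA-style check of the number of ICs: repeatedly split the samples
        (rows of X) into two random halves, extract n_ics from each half, and record
        how well the spectral signatures (columns of the mixing matrices) match
        between halves. Stable, well-matched ICs support that n_ics is not too large.
        (A sketch of the idea; the published method's exact criterion may differ.)"""
        rng = np.random.default_rng() if rng is None else rng
        worst_matches = []
        for _ in range(n_repeats):
            idx = rng.permutation(len(X))
            half = len(X) // 2
            mix = []
            for block in (idx[:half], idx[half:]):
                ica = FastICA(n_components=n_ics, random_state=0, max_iter=1000)
                ica.fit(X[block])
                mix.append(ica.mixing_)                     # (n_features, n_ics)
            corr = np.abs(np.corrcoef(mix[0].T, mix[1].T)[:n_ics, n_ics:])
            worst_matches.append(corr.max(axis=1).min())    # weakest best-match
        return float(np.mean(worst_matches))

    # toy mixture: 60 "spectra" built from 3 latent sources over 300 variables
    rng = np.random.default_rng(1)
    S = rng.laplace(size=(60, 3))                           # non-Gaussian proportions
    A = rng.normal(size=(3, 300))                           # pure-component signatures
    X = S @ A + 0.05 * rng.normal(size=(60, 300))
    for k in (2, 3, 4, 5):
        # stability typically stays high up to the true number of sources (3) and drops beyond
        print(k, round(random_ica_stability(X, k, rng=rng), 3))
    ```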

  3. Mapping the pathways of resistance to targeted therapies

    PubMed Central

    Wood, Kris C.

    2015-01-01

    Resistance substantially limits the depth and duration of clinical responses to targeted anticancer therapies. Through the use of complementary experimental approaches, investigators have revealed that cancer cells can achieve resistance through adaptation or selection driven by specific genetic, epigenetic, or microenvironmental alterations. Ultimately, these diverse alterations often lead to the activation of signaling pathways that, when co-opted, enable cancer cells to survive drug treatments. Recently developed methods enable the direct and scalable identification of the signaling pathways capable of driving resistance in specific contexts. Using these methods, novel pathways of resistance to clinically approved drugs have been identified and validated. By combining systematic resistance pathway mapping methods with studies revealing biomarkers of specific resistance pathways and pharmacological approaches to block these pathways, it may be possible to rationally construct drug combinations that yield more penetrant and lasting responses in patients. PMID:26392071

  4. Validation of brain-derived signals in near-infrared spectroscopy through multivoxel analysis of concurrent functional magnetic resonance imaging.

    PubMed

    Moriguchi, Yoshiya; Noda, Takamasa; Nakayashiki, Kosei; Takata, Yohei; Setoyama, Shiori; Kawasaki, Shingo; Kunisato, Yoshihiko; Mishima, Kazuo; Nakagome, Kazuyuki; Hanakawa, Takashi

    2017-10-01

    Near-infrared spectroscopy (NIRS) is a convenient and safe brain-mapping tool. However, its inevitable confounding with hemodynamic responses outside the brain, especially over the frontotemporal head, has called its validity into question. Some researchers have attempted to validate NIRS signals through concurrent measurements with functional magnetic resonance imaging (fMRI), but, counterintuitively, NIRS signals rarely correlate with local fMRI signals in NIRS channels, although both mapping techniques should measure the same hemoglobin concentration. Here, we tested the novel hypothesis that different voxels within the scalp and brain tissues might have substantially different hemoglobin absorption rates of near-infrared light, which might differentially contribute to NIRS signals across channels. We therefore applied a multivariate approach, partial least squares regression, to explain NIRS signals with multivoxel information from fMRI within the brain and the soft tissues of the head. We concurrently obtained fMRI and NIRS signals in 9 healthy human subjects engaging in an n-back task. The multivariate fMRI model was quite successfully able to predict the NIRS signals by cross-validation (interclass correlation coefficient = ∼0.85). This result confirmed that fMRI and NIRS indeed measure the same hemoglobin concentration. Additional Monte-Carlo permutation tests confirmed that the model reflects temporal and spatial hemodynamic information, not random noise. After this thorough validation, we calculated the ratios of the contributions of brain and soft-tissue hemodynamics to the NIRS signals and found that the contribution ratios differed considerably across NIRS channels, presumably because of the structural complexity of the frontotemporal regions. Hum Brain Mapp 38:5274-5291, 2017. © 2017 Wiley Periodicals, Inc.
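
    A minimal sketch of the multivariate approach on synthetic data: a partial least squares regression is trained to predict a NIRS-like time course from many fMRI-like voxel time courses, and prediction quality is assessed by cross-validation. The simulated mixing weights, noise level, and number of components are assumptions, not values from the study.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(0)

    # toy data: 240 time points, 200 "voxels"; the NIRS channel mixes a deep (brain)
    # voxel cluster and a shallow (scalp) cluster, plus measurement noise
    T, V = 240, 200
    fmri = rng.normal(size=(T, V))
    weights = np.zeros(V)
    weights[:40] = 1.0 / 40          # "brain" voxels
    weights[160:] = 0.5 / 40         # "scalp" voxels
    nirs = fmri @ weights + 0.05 * rng.normal(size=T)

    # multivariate model: predict the NIRS time course from multivoxel fMRI
    pred = cross_val_predict(PLSRegression(n_components=5), fmri, nirs, cv=5).ravel()
    print("cross-validated correlation:", round(float(np.corrcoef(pred, nirs)[0, 1]), 2))
    # once the model is validated, the fitted coefficients can be split by tissue class
    # to compare the relative contributions of brain and scalp voxels to the NIRS signal
    ```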

  5. Capture of microRNA-bound mRNAs identifies the tumor suppressor miR-34a as a regulator of growth factor signaling.

    PubMed

    Lal, Ashish; Thomas, Marshall P; Altschuler, Gabriel; Navarro, Francisco; O'Day, Elizabeth; Li, Xiao Ling; Concepcion, Carla; Han, Yoon-Chi; Thiery, Jerome; Rajani, Danielle K; Deutsch, Aaron; Hofmann, Oliver; Ventura, Andrea; Hide, Winston; Lieberman, Judy

    2011-11-01

    A simple biochemical method to isolate mRNAs pulled down with a transfected, biotinylated microRNA was used to identify direct target genes of miR-34a, a tumor suppressor gene. The method reidentified most of the known miR-34a regulated genes expressed in K562 and HCT116 cancer cell lines. Transcripts for 982 genes were enriched in the pull-down with miR-34a in both cell lines. Despite this large number, validation experiments suggested that ~90% of the genes identified in both cell lines can be directly regulated by miR-34a. Thus miR-34a is capable of regulating hundreds of genes. The transcripts pulled down with miR-34a were highly enriched for their roles in growth factor signaling and cell cycle progression. These genes form a dense network of interacting gene products that regulate multiple signal transduction pathways that orchestrate the proliferative response to external growth stimuli. Multiple candidate miR-34a-regulated genes participate in RAS-RAF-MAPK signaling. Ectopic miR-34a expression reduced basal ERK and AKT phosphorylation and enhanced sensitivity to serum growth factor withdrawal, while cells genetically deficient in miR-34a were less sensitive. Fourteen new direct targets of miR-34a were experimentally validated, including genes that participate in growth factor signaling (ARAF and PIK3R2) as well as genes that regulate cell cycle progression at various phases of the cell cycle (cyclins D3 and G2, MCM2 and MCM5, PLK1 and SMAD4). Thus miR-34a tempers the proliferative and pro-survival effect of growth factor stimulation by interfering with growth factor signal transduction and downstream pathways required for cell division.

  6. Local Wavelet-Based Filtering of Electromyographic Signals to Eliminate the Electrocardiographic-Induced Artifacts in Patients with Spinal Cord Injury

    PubMed Central

    Nitzken, Matthew; Bajaj, Nihit; Aslan, Sevda; Gimel’farb, Georgy; Ovechkin, Alexander

    2013-01-01

    Surface Electromyography (EMG) is a standard method used in clinical practice and research to assess motor function in order to help with the diagnosis of neuromuscular pathology in human and animal models. EMG recorded from trunk muscles involved in the activity of breathing can be used as a direct measure of respiratory motor function in patients with spinal cord injury (SCI) or other disorders associated with motor control deficits. However, EMG potentials recorded from these muscles are often contaminated with heart-induced electrocardiographic (ECG) signals. Elimination of these artifacts plays a critical role in the precise measure of the respiratory muscle electrical activity. This study was undertaken to find an optimal approach to eliminate the ECG artifacts from EMG recordings. Conventional global filtering can be used to decrease the ECG-induced artifact. However, this method can alter the EMG signal and changes physiologically relevant information. We hypothesize that, unlike global filtering, localized removal of ECG artifacts will not change the original EMG signals. We develop an approach to remove the ECG artifacts without altering the amplitude and frequency components of the EMG signal by using an externally recorded ECG signal as a mask to locate areas of the ECG spikes within EMG data. These segments containing ECG spikes were decomposed into 128 sub-wavelets by a custom-scaled Morlet Wavelet Transform. The ECG-related sub-wavelets at the ECG spike location were removed and a de-noised EMG signal was reconstructed. Validity of the proposed method was proven using mathematical simulated synthetic signals and EMG obtained from SCI patients. We compare the Root-mean Square Error and the Relative Change in Variance between this method, global, notch and adaptive filters. The results show that the localized wavelet-based filtering has the benefit of not introducing error in the native EMG signal and accurately removing ECG artifacts from EMG signals. PMID:24307920
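
    A simplified sketch of localized, mask-guided artifact removal, using a discrete wavelet transform (PyWavelets) as a stand-in for the authors' custom 128-sub-wavelet Morlet decomposition: ECG spike windows are located from the separately recorded ECG, and only low-frequency wavelet coefficients inside those windows are zeroed before reconstruction. All toy signal parameters are assumptions.

    ```python
    import numpy as np
    import pywt
    from scipy.signal import find_peaks

    def remove_ecg_artifacts(emg, ecg, fs, wavelet="db4", level=5, half_win=0.05):
        """Locate ECG spikes from the separately recorded ECG channel, then zero only
        the low-frequency wavelet coefficients (where the QRS energy lies) inside
        those windows before reconstructing. EMG content outside the windows, and
        its higher-frequency content inside them, is left untouched: a localized
        rather than global filter (a DWT-based stand-in for the paper's approach)."""
        peaks, _ = find_peaks(ecg, height=0.5 * np.max(ecg), distance=int(0.4 * fs))
        mask = np.zeros(len(emg), dtype=bool)
        w = int(half_win * fs)
        for p in peaks:
            mask[max(0, p - w):p + w] = True

        coeffs = pywt.wavedec(emg, wavelet, level=level)
        # coeffs = [cA_L, cD_L, cD_{L-1}, ..., cD_1]; clean cA_L and cD_L (below ~fs/2**level Hz)
        for band in (0, 1):
            step = 2 ** level                                  # samples per coefficient
            times = np.clip(np.arange(len(coeffs[band])) * step, 0, len(emg) - 1)
            coeffs[band] = np.where(mask[times], 0.0, coeffs[band])
        return pywt.waverec(coeffs, wavelet)[:len(emg)]

    # toy signals: broadband EMG plus smooth QRS-like pulses leaking into the EMG channel
    rng = np.random.default_rng(0)
    fs = 1000.0
    t = np.arange(0, 10, 1 / fs)
    emg_true = 0.3 * rng.normal(size=t.size)
    ecg = np.zeros(t.size)
    for beat in np.arange(0.5, 10, 0.9):
        ecg += np.exp(-((t - beat) / 0.015) ** 2)
    contaminated = emg_true + 2.0 * ecg
    cleaned = remove_ecg_artifacts(contaminated, ecg, fs)
    err = lambda x: float(np.sqrt(np.mean((x - emg_true) ** 2)))
    print(f"RMS error: {err(contaminated):.3f} (contaminated) -> {err(cleaned):.3f} (cleaned)")
    ```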

  7. Local Wavelet-Based Filtering of Electromyographic Signals to Eliminate the Electrocardiographic-Induced Artifacts in Patients with Spinal Cord Injury.

    PubMed

    Nitzken, Matthew; Bajaj, Nihit; Aslan, Sevda; Gimel'farb, Georgy; El-Baz, Ayman; Ovechkin, Alexander

    2013-07-18

    Surface Electromyography (EMG) is a standard method used in clinical practice and research to assess motor function in order to help with the diagnosis of neuromuscular pathology in human and animal models. EMG recorded from trunk muscles involved in the activity of breathing can be used as a direct measure of respiratory motor function in patients with spinal cord injury (SCI) or other disorders associated with motor control deficits. However, EMG potentials recorded from these muscles are often contaminated with heart-induced electrocardiographic (ECG) signals. Elimination of these artifacts plays a critical role in the precise measure of the respiratory muscle electrical activity. This study was undertaken to find an optimal approach to eliminate the ECG artifacts from EMG recordings. Conventional global filtering can be used to decrease the ECG-induced artifact. However, this method can alter the EMG signal and changes physiologically relevant information. We hypothesize that, unlike global filtering, localized removal of ECG artifacts will not change the original EMG signals. We develop an approach to remove the ECG artifacts without altering the amplitude and frequency components of the EMG signal by using an externally recorded ECG signal as a mask to locate areas of the ECG spikes within EMG data. These segments containing ECG spikes were decomposed into 128 sub-wavelets by a custom-scaled Morlet Wavelet Transform. The ECG-related sub-wavelets at the ECG spike location were removed and a de-noised EMG signal was reconstructed. Validity of the proposed method was proven using mathematical simulated synthetic signals and EMG obtained from SCI patients. We compare the Root-mean Square Error and the Relative Change in Variance between this method, global, notch and adaptive filters. The results show that the localized wavelet-based filtering has the benefit of not introducing error in the native EMG signal and accurately removing ECG artifacts from EMG signals.

  8. Evaluation of Secretion Prediction Highlights Differing Approaches Needed for Oomycete and Fungal Effectors.

    PubMed

    Sperschneider, Jana; Williams, Angela H; Hane, James K; Singh, Karam B; Taylor, Jennifer M

    2015-01-01

    The steadily increasing number of sequenced fungal and oomycete genomes has enabled detailed studies of how these eukaryotic microbes infect plants and cause devastating losses in food crops. During infection, fungal and oomycete pathogens secrete effector molecules which manipulate host plant cell processes to the pathogen's advantage. Proteinaceous effectors are synthesized intracellularly and must be externalized to interact with host cells. Computational prediction of secreted proteins from genomic sequences is an important technique to narrow down the candidate effector repertoire for subsequent experimental validation. In this study, we benchmark secretion prediction tools on experimentally validated fungal and oomycete effectors. We observe that for a set of fungal SwissProt protein sequences, SignalP 4 and the neural network predictors of SignalP 3 (D-score) and SignalP 2 perform best. For effector prediction in particular, the use of a sensitive method can be desirable to obtain the most complete candidate effector set. We show that the neural network predictors of SignalP 2 and 3, as well as TargetP were the most sensitive tools for fungal effector secretion prediction, whereas the hidden Markov model predictors of SignalP 2 and 3 were the most sensitive tools for oomycete effectors. Thus, previous versions of SignalP retain value for oomycete effector prediction, as the current version, SignalP 4, was unable to reliably predict the signal peptide of the oomycete Crinkler effectors in the test set. Our assessment of subcellular localization predictors shows that cytoplasmic effectors are often predicted as not extracellular. This limits the reliability of secretion predictions that depend on these tools. We present our assessment with a view to informing future pathogenomics studies and suggest revised pipelines for secretion prediction to obtain optimal effector predictions in fungi and oomycetes.

  9. Infrasound array criteria for automatic detection and front velocity estimation of snow avalanches: towards a real-time early-warning system

    NASA Astrophysics Data System (ADS)

    Marchetti, E.; Ripepe, M.; Ulivieri, G.; Kogelnig, A.

    2015-11-01

    Avalanche risk management is strongly related to the ability to identify and promptly report the occurrence of snow avalanches. Infrasound has been applied to avalanche research and monitoring for the last 20 years, but it has never turned into an operational tool for identifying clear signals related to avalanches. We present here a method based on the analysis of infrasound signals recorded by a small-aperture array in Ischgl (Austria), which provides a significant improvement toward overcoming this limit. The method is based on array-derived wave parameters, such as back azimuth and apparent velocity, and defines threshold criteria for automatic avalanche identification by treating an avalanche as a moving source of infrasound. We validate the efficiency of the automatic infrasound detection against continuous Doppler radar observations, and we show how the velocity of a snow avalanche along any given path around the array can be efficiently derived. Our results indicate that proper infrasound array analysis allows a robust, real-time, remote detection of snow avalanches, providing the number and time of occurrence of avalanches all around the array, which is key information for a proper validation of avalanche forecast models and for risk management in a given area.
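
    A textbook sketch of the array-derived wave parameters the detection criteria are built on: inter-sensor delays are measured by cross-correlation and a slowness vector is fitted by least squares, from which back azimuth and apparent velocity follow. The array geometry, source waveform, and noise level below are assumptions; the paper's automatic thresholds are not reproduced.

    ```python
    import numpy as np

    def array_back_azimuth(waveforms, sensor_xy, fs):
        """Estimate the back azimuth and apparent velocity of a plane wave crossing a
        small-aperture array: measure inter-sensor delays by cross-correlation, then
        fit the slowness vector s in tau_ij = (r_i - r_j) . s by least squares."""
        n = len(waveforms)
        rows, taus = [], []
        for i in range(n):
            for j in range(i + 1, n):
                xc = np.correlate(waveforms[i], waveforms[j], mode="full")
                lag = np.argmax(xc) - (len(waveforms[j]) - 1)   # samples by which i lags j
                taus.append(lag / fs)
                rows.append(sensor_xy[i] - sensor_xy[j])
        s, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(taus), rcond=None)
        back_az = np.degrees(np.arctan2(-s[1], -s[0])) % 360.0  # direction the wave comes FROM
        return back_az, 1.0 / np.linalg.norm(s)                 # (deg, apparent velocity m/s)

    # toy example: synthetic infrasonic wave packet from back azimuth 120 deg at 330 m/s
    rng = np.random.default_rng(0)
    fs = 100.0
    sensor_xy = np.array([[0.0, 0.0], [80.0, 10.0], [30.0, 90.0], [-60.0, 50.0]])
    t = np.arange(0, 20.0, 1 / fs)
    src = np.exp(-((t - 5.0) / 0.3) ** 2) * np.sin(2 * np.pi * 4.0 * t)
    u = np.array([np.cos(np.deg2rad(120)), np.sin(np.deg2rad(120))])   # toward the source
    s_true = -u / 330.0                                                 # slowness (propagation dir.)
    waveforms = [np.roll(src, int(round((xy @ s_true) * fs))) + 0.02 * rng.normal(size=t.size)
                 for xy in sensor_xy]
    print(array_back_azimuth(waveforms, sensor_xy, fs))   # ~ (120 deg, 330 m/s)
    ```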

  10. Objectivity and validity of EMG method in estimating anaerobic threshold.

    PubMed

    Kang, S-K; Kim, J; Kwon, M; Eom, H

    2014-08-01

    The purposes of this study were to verify and compare the performance of anaerobic threshold (AT) point estimates among different filtering intervals (9, 15, 20, 25, 30 s) and to investigate the interrelationships of AT point estimates obtained by the ventilatory threshold (VT) and by muscle fatigue thresholds using electromyographic (EMG) activity during incremental exercise on a cycle ergometer. 69 untrained male university students who nevertheless exercised regularly volunteered to participate in this study. The incremental exercise protocol was applied with a consistent stepwise increase in power output of 20 watts per minute until exhaustion. The AT point was also estimated in the same manner using the V-slope program with gas exchange parameters. In general, the estimated values of AT point-time computed by the EMG method were more consistent across the 5 filtering intervals and demonstrated higher correlations among themselves when compared with the values obtained by the VT method. The results of the present study suggest that EMG signals could be used as an alternative or a new option for estimating the AT point. The proposed computing procedure, implemented in Matlab for the analysis of EMG signals, also appeared to be valid and reliable, as it produced nearly identical values and high correlations with VT estimates. © Georg Thieme Verlag KG Stuttgart · New York.

  11. Optimization and Validation of Rotating Current Excitation with GMR Array Sensors for Riveted

    DTIC Science & Technology

    2016-09-16

    Simulation results for both an optimized coil and a conventional coil are generated using a 3D finite element method (FEM) model, which is used to analyze and compare the performance of the optimized and conventional coils.

  12. Validations of calibration-free measurements of electron temperature using double-pass Thomson scattering diagnostics from theoretical and experimental aspects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tojo, H., E-mail: tojo.hiroshi@qst.go.jp; Hiratsuka, J.; Yatsuka, E.

    2016-09-15

    This paper evaluates the accuracy of electron temperature measurements and relative transmissivities of double-pass Thomson scattering diagnostics. The electron temperature (Te) is obtained from the ratio of signals from a double-pass scattering system; relative transmissivities are then calculated from the measured Te and the intensity of the signals. How accurate the values are depends on the electron temperature (Te) and scattering angle (θ), and therefore the accuracy of the values was evaluated experimentally using the Large Helical Device (LHD) and the Tokyo spherical tokamak-2 (TST-2). Analysis of the TST-2 data indicates that a high Te and a large scattering angle (θ) yield accurate values. Indeed, the errors for scattering angle θ = 135° are approximately half of those for θ = 115°. The method of determining Te over a wide range spanning two orders of magnitude (0.01-1.5 keV) was validated using the experimental results of the LHD and TST-2. A simple method to provide relative transmissivities, which include inputs from the collection optics, vacuum window, optical fibers, and polychromators, is also presented. The relative errors were less than approximately 10%. Numerical simulations also indicate that the Te measurements are valid under harsh radiation conditions. This method of obtaining Te can be considered for the design of Thomson scattering systems where high-performance plasma generates harsh radiation environments.

  13. Full-wave reflection of lightning long-wave radio pulses from the ionospheric D region: Comparison with midday observations of broadband lightning signals

    NASA Astrophysics Data System (ADS)

    Jacobson, Abram R.; Shao, Xuan-Min; Holzworth, Robert

    2010-05-01

    We are developing and testing a steep-incidence D region sounding method for inferring profile information, principally regarding electron density. The method uses lightning emissions (in the band 5-500 kHz) as the probe signal. The data are interpreted by comparison against a newly developed single-reflection model of the radio wave's encounter with the lower ionosphere. The ultimate application of the method will be to study transient, localized disturbances of the nocturnal D region, including those instigated by lightning itself. Prior to applying the method to study lightning-induced perturbations of the nighttime D region, we have performed a validation test against more stable and predictable daytime observations, where the profile of electron density is largely determined by direct solar X-ray illumination. This article reports on the validation test. Predictions from our recently developed full-wave ionospheric-reflection model are compared to statistical summaries of daytime lightning radiated waveforms, recorded by the Los Alamos Sferic Array. The comparison is used to retrieve best fit parameters for an exponential profile of electron density in the ionospheric D region. The optimum parameter values are compared to those found elsewhere using a narrowband beacon technique, which used totally different measurements, ranges, and modeling approaches from those of the work reported here.

  14. Correlation between external and internal respiratory motion: a validation study.

    PubMed

    Ernst, Floris; Bruder, Ralf; Schlaefer, Alexander; Schweikard, Achim

    2012-05-01

    In motion-compensated image-guided radiotherapy, accurate tracking of the target region is required. This tracking process includes building a correlation model between external surrogate motion and the motion of the target region. A novel correlation method is presented and compared with the commonly used polynomial model. The CyberKnife system (Accuray, Inc., Sunnyvale/CA) uses a polynomial correlation model to relate externally measured surrogate data (optical fibres on the patient's chest emitting red light) to infrequently acquired internal measurements (X-ray data). A new correlation algorithm based on ε-Support Vector Regression (SVR) was developed. Validation and comparison testing were done with human volunteers using live 3D ultrasound and externally measured infrared light-emitting diodes (IR LEDs). Seven data sets (5:03-6:27 min long) were recorded from six volunteers. Polynomial correlation algorithms were compared to the SVR-based algorithm demonstrating an average increase in root mean square (RMS) accuracy of 21.3% (0.4 mm). For three signals, the increase was more than 29% and for one signal as much as 45.6% (corresponding to more than 1.5 mm RMS). Further analysis showed the improvement to be statistically significant. The new SVR-based correlation method outperforms traditional polynomial correlation methods for motion tracking. This method is suitable for clinical implementation and may improve the overall accuracy of targeted radiotherapy.
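
    A minimal sketch of an ε-SVR correlation model on synthetic data with a phase lag between surrogate and target motion, compared with a simple polynomial fit; the feature choice (surrogate value plus a delayed copy), kernel settings, and motion parameters are assumptions and are not those of the study.

    ```python
    import numpy as np
    from sklearn.svm import SVR

    rng = np.random.default_rng(0)

    # toy correlation-model data: external chest-marker amplitude (surrogate) versus the
    # internal target position, with a phase lag so a pure polynomial fit struggles
    t = np.arange(0, 120, 0.1)                       # 120 s at 10 Hz
    surrogate = np.sin(2 * np.pi * 0.25 * t)
    internal = 10.0 * np.sin(2 * np.pi * 0.25 * t - 0.4) + 0.3 * rng.normal(size=t.size)  # mm

    # features: current surrogate value plus a delayed copy (captures the phase lag)
    X = np.c_[surrogate, np.roll(surrogate, 3)]
    train, test = slice(0, 900), slice(900, 1200)

    svr = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(X[train], internal[train])
    rms_svr = np.sqrt(np.mean((svr.predict(X[test]) - internal[test]) ** 2))

    poly = np.polyfit(surrogate[train], internal[train], 2)          # baseline polynomial model
    rms_poly = np.sqrt(np.mean((np.polyval(poly, surrogate[test]) - internal[test]) ** 2))
    print(f"RMS error: polynomial {rms_poly:.2f} mm, epsilon-SVR {rms_svr:.2f} mm")
    ```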

  15. Computational Depth of Anesthesia via Multiple Vital Signs Based on Artificial Neural Networks.

    PubMed

    Sadrawi, Muammar; Fan, Shou-Zen; Abbod, Maysam F; Jen, Kuo-Kuang; Shieh, Jiann-Shing

    2015-01-01

    This study evaluated a depth of anesthesia (DoA) index using artificial neural networks (ANN) as the modeling technique. Data from 63 patients were used, 17 for modeling and 46 for testing. Empirical mode decomposition (EMD) is used to separate the electroencephalography (EEG) signal from noise. A sample entropy index is then extracted from the filtered EEG signal in 5-second windows and combined with the mean values of other vital signs, namely electromyography (EMG), heart rate (HR), pulse, systolic blood pressure (SBP), diastolic blood pressure (DBP), and a signal quality index (SQI), as inputs for evaluating the DoA index. The scores of 5 doctors are averaged to obtain the output index. The mean absolute error (MAE) is used as the performance measure, and 10-fold cross-validation is performed to generalize the model. The ANN model is compared with the bispectral index (BIS). The results show that the ANN produces a lower MAE than BIS and also a higher correlation coefficient on the 46-patient testing data. Sensitivity analysis and cross-validation indicate that EMG is the most influential input parameter.
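
    A sketch of the sample entropy feature extracted from each 5-second EEG window, using the standard SampEn definition (the ANN that fuses it with the other vital signs is not shown); the toy "deep" and "awake" signals and the parameters m and r are assumptions.

    ```python
    import numpy as np

    def sample_entropy(x, m=2, r_factor=0.2):
        """Sample entropy of a 1-D signal: the negative log of the conditional
        probability that sequences matching for m points (within tolerance r) also
        match for m+1 points. Lower values indicate a more regular signal."""
        x = np.asarray(x, dtype=float)
        r = r_factor * np.std(x)
        def matches(mm):
            templ = np.lib.stride_tricks.sliding_window_view(x, mm)[:len(x) - m]
            d = np.max(np.abs(templ[:, None, :] - templ[None, :, :]), axis=2)
            return np.sum(d <= r) - len(templ)          # exclude self-matches
        return -np.log(matches(m + 1) / matches(m))

    # 5-second windows of a toy "EEG": slow regular activity versus noise-dominated activity
    rng = np.random.default_rng(0)
    fs = 125.0
    t = np.arange(0, 5, 1 / fs)
    deep  = np.sin(2 * np.pi * 2.0 * t) + 0.1 * rng.normal(size=t.size)   # slow, regular
    awake = 0.2 * np.sin(2 * np.pi * 10.0 * t) + rng.normal(size=t.size)  # irregular
    print(sample_entropy(deep), sample_entropy(awake))   # lower SampEn for the regular signal
    ```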

  16. Computational Depth of Anesthesia via Multiple Vital Signs Based on Artificial Neural Networks

    PubMed Central

    Sadrawi, Muammar; Fan, Shou-Zen; Abbod, Maysam F.; Jen, Kuo-Kuang; Shieh, Jiann-Shing

    2015-01-01

    This study evaluated a depth of anesthesia (DoA) index using artificial neural networks (ANN) as the modeling technique. Data from 63 patients were used, 17 for modeling and 46 for testing. Empirical mode decomposition (EMD) is used to separate the electroencephalography (EEG) signal from noise. A sample entropy index is then extracted from the filtered EEG signal in 5-second windows and combined with the mean values of other vital signs, namely electromyography (EMG), heart rate (HR), pulse, systolic blood pressure (SBP), diastolic blood pressure (DBP), and a signal quality index (SQI), as inputs for evaluating the DoA index. The scores of 5 doctors are averaged to obtain the output index. The mean absolute error (MAE) is used as the performance measure, and 10-fold cross-validation is performed to generalize the model. The ANN model is compared with the bispectral index (BIS). The results show that the ANN produces a lower MAE than BIS and also a higher correlation coefficient on the 46-patient testing data. Sensitivity analysis and cross-validation indicate that EMG is the most influential input parameter. PMID:26568957

  17. Property-Based Monitoring of Analog and Mixed-Signal Systems

    NASA Astrophysics Data System (ADS)

    Havlicek, John; Little, Scott; Maler, Oded; Nickovic, Dejan

    In the recent past, there has been a steady growth of the market for consumer embedded devices such as cell phones, GPS and portable multimedia systems. In embedded systems, digital, analog and software components are combined on a single chip, resulting in increasingly complex designs that introduce richer functionality on smaller devices. As a consequence, the potential for errors to be introduced into a design becomes higher, yielding an increasing need for automated analog and mixed-signal validation tools. In the purely digital setting, formal verification based on properties expressed in industrial specification languages such as PSL and SVA is nowadays successfully integrated into the design flow. On the other hand, the validation of analog and mixed-signal systems still largely depends on simulation-based, ad hoc methods. In this tutorial, we consider some ingredients of the standard verification methodology that can be successfully exported from the digital to the analog and mixed-signal setting, in particular property-based monitoring techniques. Property-based monitoring is a lightweight alternative to formal verification in which the system is treated as a "black box" that generates sets of traces whose correctness is checked against a property, that is, its high-level specification. Although incomplete, monitoring is effectively used to catch faults in systems, without guaranteeing their full correctness.

  18. A simplified guide for charged aerosol detection of non-chromophoric compounds-Analytical method development and validation for the HPLC assay of aerosol particle size distribution for amikacin.

    PubMed

    Soliven, Arianne; Haidar Ahmad, Imad A; Tam, James; Kadrichu, Nani; Challoner, Pete; Markovich, Robert; Blasko, Andrei

    2017-09-05

    Amikacin, an aminoglycoside antibiotic lacking a UV chromophore, was developed into a drug product for delivery by inhalation. A robust method for amikacin assay analysis and aerosol particle size distribution (aPSD) determination, with performance comparable to a conventional UV detector, was developed using a charged aerosol detector (CAD). The CAD approach involved more parameters for optimization than UV detection because of its sensitivity to trace impurities, non-linear response, and narrow dynamic range of signal versus concentration. Through careful selection of the power transformation function value and the evaporation temperature, a wider linear dynamic range, improved signal-to-noise ratio, and high repeatability were obtained. The influences of mobile phase grade and of glassware binding of amikacin during sample preparation were addressed. A weighted (1/x²) least-squares regression was used for the calibration curve. The limit of quantitation (LOQ) and limit of detection (LOD) for this method were determined to be 5 μg/mL and 2 μg/mL, respectively. The method was validated over a concentration range of 0.05-2 mg/mL. The correlation coefficient for peak area versus concentration was 1.00 and the y-intercept was 0.2%. The recovery accuracies of triplicate preparations at 0.05, 1.0, and 2.0 mg/mL were in the range of 100-101%. The relative standard deviation (Srel) of six replicates at 1.0 mg/mL was 1%, and the Srel of five injections at the limit of quantitation was 4%. A robust HPLC-CAD method was developed and validated for the determination of the aPSD for amikacin. The CAD method development produced a simplified procedure with minimal variability in results during routine operation, transfer from one instrument to another, and between different analysts. Copyright © 2017 Elsevier B.V. All rights reserved.
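
    A minimal sketch of a weighted (1/x²) least-squares calibration fit and back-calculated recoveries on assumed toy data with proportional noise; the actual calibration levels, responses, and acceptance criteria of the validated method are not reproduced.

    ```python
    import numpy as np

    # toy calibration data spanning 0.05-2 mg/mL with proportional (heteroscedastic) noise,
    # which is the situation that motivates 1/x^2 weighting
    rng = np.random.default_rng(0)
    conc = np.array([0.05, 0.1, 0.25, 0.5, 1.0, 1.5, 2.0])             # mg/mL
    response = 100.0 * conc * (1 + 0.03 * rng.normal(size=conc.size))   # detector peak areas

    # weighted least squares with w = 1/x^2: scale both sides of y = a*x + b by sqrt(w) = 1/x
    A = np.c_[conc, np.ones_like(conc)] / conc[:, None]
    b = response / conc
    slope, intercept = np.linalg.lstsq(A, b, rcond=None)[0]

    back = (response - intercept) / slope        # back-calculated concentrations
    print("slope, intercept:", round(float(slope), 2), round(float(intercept), 3))
    print("recoveries (%):", np.round(100 * back / conc, 1))
    ```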

  19. A method for the measurement of dispersion curves of circumferential guided waves radiating from curved shells: experimental validation and application to a femoral neck mimicking phantom

    NASA Astrophysics Data System (ADS)

    Nauleau, Pierre; Minonzio, Jean-Gabriel; Chekroun, Mathieu; Cassereau, Didier; Laugier, Pascal; Prada, Claire; Grimal, Quentin

    2016-07-01

    Our long-term goal is to develop an ultrasonic method to characterize the thickness, stiffness and porosity of the cortical shell of the femoral neck, which could enhance hip fracture risk prediction. To this purpose, we proposed to adapt a technique based on the measurement of guided waves. We previously evidenced the feasibility of measuring circumferential guided waves in a bone-mimicking phantom of a circular cross-section of even thickness. The goal of this study is to investigate the impact of the complex geometry of the femoral neck on the measurement of guided waves. Two phantoms of an elliptical cross-section and one phantom of a realistic cross-section were investigated. A 128-element array was used to record the inter-element response matrix of these waveguides. This experiment was simulated using a custom-made hybrid code. The response matrices were analyzed using a technique based on the physics of wave propagation. This method yields portions of dispersion curves of the waveguides which were compared to reference dispersion curves. For the elliptical phantoms, three portions of dispersion curves were determined with a good agreement between experiment, simulation and theory. The method was thus validated. The characteristic dimensions of the shell were found to influence the identification of the circumferential wave signals. The method was then applied to the signals backscattered by the superior half of constant thickness of the realistic phantom. A cut-off frequency and some portions of modes were measured, with a good agreement with the theoretical curves of a plate waveguide. We also observed that the method cannot be applied directly to the signals backscattered by the lower half of varying thicknesses of the phantom. The proposed approach could then be considered to evaluate the properties of the superior part of the femoral neck, which is known to be a clinically relevant site.

  20. Correlation- and covariance-supported normalization method for estimating orthodontic trainer treatment for clenching activity.

    PubMed

    Akdenur, B; Okkesum, S; Kara, S; Günes, S

    2009-11-01

    In this study, electromyography signals sampled from children undergoing orthodontic treatment were used to estimate the effect of an orthodontic trainer on the anterior temporal muscle. A novel data normalization method, called the correlation- and covariance-supported normalization method (CCSNM), based on correlation and covariance between features in a data set, is proposed to provide predictive guidance to the orthodontic technique. The method was tested in two stages: first, data normalization using the CCSNM; second, prediction of normalized values of anterior temporal muscles using an artificial neural network (ANN) with a Levenberg-Marquardt learning algorithm. The data set consists of electromyography signals from right anterior temporal muscles, recorded from 20 children aged 8-13 years with class II malocclusion. The signals were recorded at the start and end of a 6-month treatment. In order to train and test the ANN, two-fold cross-validation was used. The CCSNM was compared with four normalization methods: minimum-maximum normalization, z score, decimal scaling, and line base normalization. To demonstrate the performance of the proposed method, prevalent performance measures were examined: the mean square error and mean absolute error as mathematical measures, the statistical relation factor R2, and the average deviation. The results show that the CCSNM was the best of the normalization methods examined for estimating the effect of the trainer.

  1. Baseline-free damage detection in composite plates based on the reciprocity principle

    NASA Astrophysics Data System (ADS)

    Huang, Liping; Zeng, Liang; Lin, Jing

    2018-01-01

    Lamb wave based damage detection techniques have been widely used in composite structures. In particular, these techniques usually rely on reference signals, which are significantly influenced by the operational and environmental conditions. To solve this issue, this paper presents a baseline-free damage inspection method based on the reciprocity principle. If a localized nonlinear scatterer exists along the wave path, the reciprocity breaks down. Through estimating the loss of reciprocity, the delamination can be detected. A reciprocity index (RI), which quantifies the discrepancy between the signal received in transducer B when emitting from transducer A and the signal received in A when the same source is located in B, is established to quantitatively analyze the reciprocity. Experimental results show that the RI value of a damaged path is much higher than that of a healthy path. In addition, the effects of the parameters of the excitation signal (i.e., central frequency and bandwidth) and the position of the delamination on the RI value are discussed. Furthermore, an RI-based probabilistic imaging algorithm is proposed for detecting delamination damage of composite plates without reference signals. Finally, the effectiveness of this baseline-free damage detection method is validated by an experimental example.
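    The abstract does not give the exact formula of the reciprocity index, so the sketch below uses one plausible definition (the normalized energy of the difference between the A-to-B and B-to-A signals); the toy tone-burst signal and the distortion model are illustrative only.

```python
import numpy as np

def reciprocity_index(s_ab, s_ba):
    """One plausible reciprocity index: normalized energy of the difference between the
    A-to-B and B-to-A signals (zero for a perfectly reciprocal, i.e. healthy, path).
    The exact definition used in the paper may differ."""
    s_ab, s_ba = np.asarray(s_ab, float), np.asarray(s_ba, float)
    return np.sum((s_ab - s_ba) ** 2) / np.sum(s_ab ** 2)

# Toy example: a healthy path is reciprocal; a damaged path adds a nonlinear distortion.
t = np.linspace(0.0, 1e-3, 2000)
burst = np.sin(2 * np.pi * 100e3 * t) * np.hanning(t.size)   # windowed tone burst
print(reciprocity_index(burst, burst))                        # ~0  (healthy path)
print(reciprocity_index(burst, burst + 0.1 * burst ** 2))     # > 0 (damaged path)
```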

  2. Phase retrieval via incremental truncated amplitude flow algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Quanbing; Wang, Zhifa; Wang, Linjie; Cheng, Shichao

    2017-10-01

    This paper considers the phase retrieval problem of recovering the unknown signal from the given quadratic measurements. A phase retrieval algorithm based on Incremental Truncated Amplitude Flow (ITAF), which combines the ITWF algorithm and the TAF algorithm, is proposed. The proposed ITAF algorithm enhances the initialization by performing both of the truncation methods used in ITWF and TAF, and improves the performance in the gradient stage by applying the incremental method proposed in ITWF to the loop stage of TAF. Moreover, the original sampling vector and measurements are preprocessed before initialization according to the variance of the sensing matrix. Simulation experiments verified the feasibility and validity of the proposed ITAF algorithm. The experimental results show that it can obtain a higher success rate and faster convergence speed compared with other algorithms. In particular, for noiseless random Gaussian signals, ITAF can recover any real-valued signal accurately from magnitude measurements whose number is about 2.5 times the signal length, which is close to the theoretical limit (about 2 times the signal length). It usually converges to the optimal solution within 20 iterations, which is far fewer than state-of-the-art algorithms require.
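    As a rough illustration of the amplitude-flow family of methods, the sketch below runs a plain, real-valued amplitude-flow gradient descent with a simple spectral initialization. It omits the truncation and incremental refinements of ITWF/TAF/ITAF and uses a generous oversampling of about 4 times the signal length, so it should not be read as a reproduction of the ITAF results; the step size and iteration count are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 64, 4 * 64                       # signal length and number of magnitude measurements
x_true = rng.standard_normal(n)
A = rng.standard_normal((m, n))         # real Gaussian sensing matrix
psi = np.abs(A @ x_true)                # magnitude-only measurements

# Plain spectral initialization (the paper's truncated / orthogonality-promoting
# variants are omitted here).
Y = (A.T * psi**2) @ A / m
z = np.linalg.eigh(Y)[1][:, -1] * np.sqrt(np.mean(psi**2))

# Plain amplitude-flow gradient descent on f(z) = (1/2m) * sum((|a_i.z| - psi_i)^2),
# without the truncation or incremental updates of ITWF/TAF/ITAF.
mu = 0.6
for _ in range(500):
    Az = A @ z
    z -= mu * (A.T @ ((np.abs(Az) - psi) * np.sign(Az))) / m

# Real-valued phase retrieval recovers x only up to a global sign.
err = min(np.linalg.norm(z - x_true), np.linalg.norm(z + x_true)) / np.linalg.norm(x_true)
print(f"relative reconstruction error: {err:.3f}")
```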

  3. A fault diagnosis scheme for rolling bearing based on local mean decomposition and improved multiscale fuzzy entropy

    NASA Astrophysics Data System (ADS)

    Li, Yongbo; Xu, Minqiang; Wang, Rixin; Huang, Wenhu

    2016-01-01

    This paper presents a new rolling bearing fault diagnosis method based on local mean decomposition (LMD), improved multiscale fuzzy entropy (IMFE), Laplacian score (LS) and improved support vector machine based binary tree (ISVM-BT). When a fault occurs in rolling bearings, the measured vibration signal is a multi-component amplitude-modulated and frequency-modulated (AM-FM) signal. LMD, a new self-adaptive time-frequency analysis method, can decompose any complicated signal into a series of product functions (PFs), each of which is exactly a mono-component AM-FM signal. Hence, LMD is introduced to preprocess the vibration signal. Furthermore, IMFE, which is designed to avoid the inaccurate estimation of fuzzy entropy, can be utilized to quantify the complexity and self-similarity of time series over a range of scales based on fuzzy entropy. In addition, the LS approach is introduced to refine the fault features by sorting the scale factors. Subsequently, the obtained features are fed into the multi-fault classifier ISVM-BT to automatically fulfill the fault pattern identification. The experimental results validate the effectiveness of the methodology and demonstrate that the proposed algorithm can be applied to recognize different categories and severities of rolling bearing faults.
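    The multiscale and fuzzy-entropy building blocks can be sketched compactly. The code below implements the standard coarse-graining step and a textbook fuzzy entropy (exponential membership over Chebyshev distances of baseline-removed embedding vectors); the specific refinements that distinguish IMFE from plain multiscale fuzzy entropy, and the LMD, LS and ISVM-BT stages, are not reproduced here.

```python
import numpy as np

def coarse_grain(x, tau):
    """Multiscale step: average consecutive, non-overlapping windows of length tau."""
    n = len(x) // tau
    return np.asarray(x[:n * tau], float).reshape(n, tau).mean(axis=1)

def fuzzy_entropy(x, m=2, r=0.15, n_fuzzy=2):
    """Textbook fuzzy entropy with exponential membership over Chebyshev distances of
    baseline-removed embedding vectors (r is scaled by the signal's standard deviation)."""
    x = np.asarray(x, float)
    r = r * x.std()
    N = len(x)

    def phi(dim):
        X = np.array([x[i:i + dim] for i in range(N - dim)])
        X -= X.mean(axis=1, keepdims=True)                    # remove local baseline
        d = np.max(np.abs(X[:, None, :] - X[None, :, :]), axis=2)
        mu = np.exp(-(d ** n_fuzzy) / r)                      # fuzzy membership
        np.fill_diagonal(mu, 0.0)                             # exclude self-matches
        return mu.sum() / ((N - dim) * (N - dim - 1))

    return np.log(phi(m) / phi(m + 1))

# Example: entropy of white noise over three scales.
sig = np.random.default_rng(0).standard_normal(1024)
print([round(fuzzy_entropy(coarse_grain(sig, tau)), 3) for tau in (1, 2, 3)])
```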

  4. Adaptive identification and control of structural dynamics systems using recursive lattice filters

    NASA Technical Reports Server (NTRS)

    Sundararajan, N.; Montgomery, R. C.; Williams, J. P.

    1985-01-01

    A new approach is presented for adaptive identification and control of structural dynamics systems using least-squares lattice filters that are widely used in the signal processing area. Testing procedures for interfacing the lattice filter identification methods with a modal control method for stable closed-loop adaptive control are presented. The methods are illustrated for a free-free beam and for a complex flexible grid, with the basic control objective being vibration suppression. The approach is validated by using both simulations and experimental facilities available at the Langley Research Center.

  5. Robust Smoothing: Smoothing Parameter Selection and Applications to Fluorescence Spectroscopy

    PubMed Central

    Lee, Jong Soo; Cox, Dennis D.

    2009-01-01

    Fluorescence spectroscopy has emerged in recent years as an effective way to detect cervical cancer. Investigation of the data preprocessing stage uncovered a need for a robust smoothing to extract the signal from the noise. Various robust smoothing methods for estimating fluorescence emission spectra are compared and data driven methods for the selection of smoothing parameter are suggested. The methods currently implemented in R for smoothing parameter selection proved to be unsatisfactory, and a computationally efficient procedure that approximates robust leave-one-out cross validation is presented. PMID:20729976

  6. Detection of Partial Discharge Sources Using UHF Sensors and Blind Signal Separation

    PubMed Central

    Boya, Carlos; Parrado-Hernández, Emilio

    2017-01-01

    The measurement of the emitted electromagnetic energy in the UHF region of the spectrum allows the detection of partial discharges and, thus, the on-line monitoring of the condition of the insulation of electrical equipment. Unfortunately, determining the affected asset is difficult when there are several simultaneous insulation defects. This paper proposes the use of an independent component analysis (ICA) algorithm to separate the signals coming from different partial discharge (PD) sources. The performance of the algorithm has been tested using UHF signals generated by test objects. The results are validated by two automatic classification techniques: support vector machines and similarity with class mean. Both methods corroborate the suitability of the algorithm to separate the signals emitted by each PD source even when they are generated by the same type of insulation defect. PMID:29140267
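    A minimal separation sketch in Python, assuming scikit-learn is available: two surrogate PD-like pulses are mixed linearly into two "sensor" channels and FastICA recovers them up to scaling and permutation. The pulse shapes, mixing matrix and sampling settings are invented for illustration and are not taken from the UHF measurements described above.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Surrogate PD-like pulses: two damped oscillations with different frequencies and
# arrival times, standing in for UHF emissions from two insulation defects.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1e-6, 5000)
s1 = np.exp(-t / 1e-7) * np.sin(2 * np.pi * 300e6 * t)
s2 = (t > 2e-7) * np.exp(-(t - 2e-7).clip(0) / 8e-8) * np.sin(2 * np.pi * 550e6 * t)
S = np.column_stack([s1, s2])

# Linear instantaneous mixture observed at two UHF sensors, plus measurement noise.
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])
X = S @ A.T + 0.01 * rng.standard_normal(S.shape)

ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)        # estimated sources, up to scaling and permutation
print(S_est.shape)                  # (5000, 2)
```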

  7. Laser Metrology Heterodyne Phase-Locked Loop

    NASA Technical Reports Server (NTRS)

    Loya, Frank; Halverson, Peter

    2009-01-01

    A method reduces sensitivity to noise in a signal from a laser heterodyne interferometer. The phase-locked loop (PLL) removes glitches that occur in a zero-crossing detector's output [which can happen if the signal-to-noise ratio (SNR) of the heterodyne signal is low] by the use of an internal oscillator that produces a square-wave signal at a frequency that is inherently close to the heterodyne frequency. It also contains phase-locking circuits that lock the phase of the oscillator to the output of the zero-crossing detector. Because the PLL output is an oscillator signal, it is glitch-free. This enables accurate phase measurements in spite of low SNR, creates immunity to phase error caused by shifts in the heterodyne frequency (i.e., if the target moves, causing a Doppler shift), and maintains a valid phase even when the signal drops out for brief periods of time, such as when the laser is blocked by a stray object.

  8. Development and validation of an HPLC–MS/MS method to determine clopidogrel in human plasma

    PubMed Central

    Liu, Gangyi; Dong, Chunxia; Shen, Weiwei; Lu, Xiaopei; Zhang, Mengqi; Gui, Yuzhou; Zhou, Qinyi; Yu, Chen

    2015-01-01

    A quantitative method for clopidogrel using online-SPE tandem LC–MS/MS was developed and fully validated according to the well-established FDA guidelines. The method achieves adequate sensitivity for pharmacokinetic studies, with lower limits of quantification (LLOQs) as low as 10 pg/mL. Chromatographic separations were performed on reversed-phase Kromasil Eternity-2.5-C18-UHPLC columns for both methods. Positive electrospray ionization in multiple reaction monitoring (MRM) mode was employed for signal detection and a deuterated analogue (clopidogrel-d4) was used as internal standard (IS). Adjustments in sample preparation, including the introduction of an online-SPE system, proved to be the most effective way to address analyte back-conversion in clinical samples. Pooled clinical samples (two levels) were prepared and successfully used as real-sample quality controls (QCs) in the validation of back-conversion testing under different conditions. The results showed that the real samples were stable at room temperature for 24 h. Linearity, precision, extraction recovery, matrix effect on spiked QC samples and stability tests on both spiked QCs and real-sample QCs stored under different conditions met the acceptance criteria. This online-SPE method was successfully applied to a bioequivalence study of 75 mg single-dose clopidogrel tablets in 48 healthy male subjects. PMID:26904399

  9. Application of artificial neural network to fMRI regression analysis.

    PubMed

    Misaki, Masaya; Miyauchi, Satoru

    2006-01-15

    We used an artificial neural network (ANN) to detect correlations between event sequences and fMRI (functional magnetic resonance imaging) signals. The layered feed-forward neural network, given a series of events as inputs and the fMRI signal as a supervised signal, performed a non-linear regression analysis. This type of ANN is capable of approximating any continuous function, and thus this analysis method can detect any fMRI signals that correlated with corresponding events. Because of the flexible nature of ANNs, fitting to autocorrelation noise is a problem in fMRI analyses. We avoided this problem by using cross-validation and an early stopping procedure. The results showed that the ANN could detect various responses with different time courses. The simulation analysis also indicated an additional advantage of ANN over non-parametric methods in detecting parametrically modulated responses, i.e., it can detect various types of parametric modulations without a priori assumptions. The ANN regression analysis is therefore beneficial for exploratory fMRI analyses in detecting continuous changes in responses modulated by changes in input values.
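    The idea of a small feed-forward regressor with early stopping to curb overfitting can be sketched with scikit-learn. The original work used a custom network and cross-validation; this stand-in relies on MLPRegressor's built-in validation split and toy event regressors, not fMRI data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Toy stand-in for an fMRI regression: y is a noisy nonlinear function of two
# event regressors (one column per event type).
rng = np.random.default_rng(0)
X = rng.standard_normal((400, 2))
y = np.tanh(1.5 * X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.3 * rng.standard_normal(400)

# Small feed-forward network; early_stopping holds out a validation fraction and stops
# training when the validation score no longer improves, limiting overfitting to noise.
model = MLPRegressor(hidden_layer_sizes=(16,), early_stopping=True,
                     validation_fraction=0.2, max_iter=5000, random_state=0)
model.fit(X, y)
print(f"training R^2: {model.score(X, y):.3f}")
```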

  10. Modeling of acoustic emission signal propagation in waveguides.

    PubMed

    Zelenyak, Andreea-Manuela; Hamstad, Marvin A; Sause, Markus G R

    2015-05-21

    Acoustic emission (AE) testing is a widely used nondestructive testing (NDT) method to investigate material failure. When environmental conditions are harmful for the operation of the sensors, waveguides are typically mounted in between the inspected structure and the sensor. Such waveguides can be built from different materials or have different designs in accordance with the experimental needs. All these variations can cause changes in the acoustic emission signals in terms of modal conversion, additional attenuation or shift in frequency content. A finite element method (FEM) was used to model acoustic emission signal propagation in an aluminum plate with an attached waveguide and was validated against experimental data. The geometry of the waveguide is systematically changed by varying the radius and height to investigate the influence on the detected signals. Different waveguide materials were implemented, and changes of material properties as a function of temperature were taken into account. The ability to model different waveguide options replaces the time-consuming and expensive trial-and-error alternative of experiments. Thus, the aim of this research has important implications for those who use waveguides for AE testing.

  11. Wave packet interferometry and quantum state reconstruction by acousto-optic phase modulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tekavec, Patrick F.; Dyke, Thomas R.; Marcus, Andrew H.

    2006-11-21

    Studies of wave packet dynamics often involve phase-selective measurements of coherent optical signals generated from sequences of ultrashort laser pulses. In wave packet interferometry (WPI), the separation between the temporal envelopes of the pulses must be precisely monitored or maintained. Here we introduce a new (and easy to implement) experimental scheme for phase-selective measurements that combines acousto-optic phase modulation with ultrashort laser excitation to produce an intensity-modulated fluorescence signal. Synchronous detection, with respect to an appropriately constructed reference, allows the signal to be simultaneously measured at two phases differing by 90 deg. Our method effectively decouples the relative temporal phase from the pulse envelopes of a collinear train of optical pulse pairs. We thus achieve a robust and high signal-to-noise scheme for WPI applications, such as quantum state reconstruction and electronic spectroscopy. The validity of the method is demonstrated, and state reconstruction is performed, on a model quantum system - atomic Rb vapor. Moreover, we show that our measurements recover the correct separation between the absorptive and dispersive contributions to the system susceptibility.

  12. Signaling pathway cloud regulation for in silico screening and ranking of the potential geroprotective drugs

    PubMed Central

    Zhavoronkov, Alex; Buzdin, Anton A.; Garazha, Andrey V.; Borisov, Nikolay M.; Moskalev, Alexey A.

    2014-01-01

    The major challenges of aging research include absence of the comprehensive set of aging biomarkers, the time it takes to evaluate the effects of various interventions on longevity in humans and the difficulty extrapolating the results from model organisms to humans. To address these challenges we propose the in silico method for screening and ranking the possible geroprotectors followed by the high-throughput in vivo and in vitro validation. The proposed method evaluates the changes in the collection of activated or suppressed signaling pathways involved in aging and longevity, termed signaling pathway cloud, constructed using the gene expression data and epigenetic profiles of young and old patients' tissues. The possible interventions are selected and rated according to their ability to regulate age-related changes and minimize differences in the signaling pathway cloud. While many algorithmic solutions to simulating the induction of the old into young metabolic profiles in silico are possible, this flexible and scalable approach may potentially be used to predict the efficacy of the many drugs that may extend human longevity before conducting pre-clinical work and expensive clinical trials. PMID:24624136

  13. Kurtosis Approach for Nonlinear Blind Source Separation

    NASA Technical Reports Server (NTRS)

    Duong, Vu A.; Stubbemd, Allen R.

    2005-01-01

    In this paper, we introduce a new algorithm for blind source signal separation for post-nonlinear mixtures. The mixtures are assumed to be linearly mixed from unknown sources first and then distorted by memoryless nonlinear functions. The nonlinear functions are assumed to be smooth and can be approximated by polynomials. Both the coefficients of the unknown mixing matrix and the coefficients of the approximated polynomials are estimated by the gradient descent method conditional on the higher order statistical requirements. The results of simulation experiments presented in this paper demonstrate the validity and usefulness of our approach for nonlinear blind source signal separation.

  14. Low-power system for the acquisition of the respiratory signal of neonates using diaphragmatic electromyography

    PubMed Central

    Torres, Róbinson; López-Isaza, Sergio; Mejía-Mejía, Elisa; Paniagua, Viviana; González, Víctor

    2017-01-01

    Introduction An apnea episode is defined as the cessation of breathing for ≥15 seconds or as any suspension of breathing accompanied by hypoxia and bradycardia. Obtaining information about the respiratory system in a neonate can be accomplished using electromyography signals from the diaphragm muscle. Objective The purpose of this paper is to illustrate a method by which the respiratory and electrocardiographic signals from neonates can be obtained using diaphragmatic electromyography. Materials and methods The system was developed using single-supply, micropower components, which deliver a low-power consumption system appropriate for the development of portable devices. The stages of the system were tested in both adult and neonate patients. Results The system delivers the expected signals in both patient groups and allows the acquisition of respiratory signals directly from the diaphragmatic electromyography. Conclusion This low-power system may present a good alternative for monitoring the cardiac and respiratory activity in newborn babies, both in the hospital and at home. Significance The system delivers good signals but needs to be validated for its use in neonates. It is being used in the Neonatal Intensive Care Unit of the Hospital General de Medellín Luz Castro de Gutiérrez. PMID:28260954

  15. Superresolution fluorescence imaging by pump-probe setup using repetitive stimulated transition process

    NASA Astrophysics Data System (ADS)

    Dake, Fumihiro; Fukutake, Naoki; Hayashi, Seri; Taki, Yusuke

    2018-02-01

    We proposed superresolution nonlinear fluorescence microscopy with a pump-probe setup that utilizes repetitive stimulated absorption and stimulated emission caused by two-color laser beams. The resulting nonlinear fluorescence that undergoes such a repetitive stimulated transition is detectable as a signal via the lock-in technique. As the nonlinear fluorescence signal is produced by the multi-ply combination of the incident beams, the optical resolution can be improved. A theoretical model of the nonlinear optical process is provided using rate equations, which offers a phenomenological interpretation of the nonlinear fluorescence and an estimation of the signal properties. The proposed method is shown to offer scalable optical resolution. The theoretical resolution and bead images are also estimated to validate the experimental results.

  16. Novel, continuous monitoring of fine‐scale movement using fixed‐position radiotelemetry arrays and random forest location fingerprinting

    USGS Publications Warehouse

    Harbicht, Andrew B.; Castro-Santos, Theodore R.; Ardren, William R.; Gorsky, Dimitry; Fraser, Dylan

    2017-01-01

    Radio‐tag signals from fixed‐position antennas are most often used to indicate presence or absence of individuals, or to estimate individual activity levels from signal strength variation within an antenna's detection zone. The potential of such systems to provide more precise information on tag location and movement has not been explored in great detail in an ecological setting. By reversing the roles that transmitters and receivers play in localization methods common to the telecommunications industry, we present a new telemetric tool for accurately estimating the location of tagged individuals from received signal strength values. The methods used to characterize the study area in terms of received signal strength are described, as is the random forest model used for localization. The resulting method is then validated using test data before being applied to true data collected from tagged individuals in the study site. Application of the localization method to test data withheld from the learning dataset indicated a low average error over the entire study area (<1 m), whereas application of the localization method to real data produced highly probable results consistent with field observations. This telemetric approach provided detailed movement data for tagged fish along a single axis (a migratory path) and is particularly useful for monitoring passage along migratory routes. The new methods applied in this study can also be expanded to include multiple axes (x, y, z) and multiple environments (aquatic and terrestrial) for remotely monitoring wildlife movement.
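    The fingerprinting step can be sketched as a regression from received-signal-strength (RSS) patterns to position. The snippet below trains a random forest on synthetic fingerprints generated from a toy path-loss model; the antenna positions, noise level and path-loss form are assumptions for illustration, not the field calibration described above.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic fingerprints: received signal strength (RSS, arbitrary dB scale) at four
# fixed antennas for tags placed at known positions along a 100 m transect.
rng = np.random.default_rng(0)
antennas = np.array([10.0, 35.0, 60.0, 90.0])
positions = rng.uniform(0.0, 100.0, 600)
rss = -20.0 * np.log10(np.abs(positions[:, None] - antennas) + 1.0)  # toy path-loss model
rss += rng.normal(0.0, 1.5, rss.shape)                               # measurement noise

# Random forest "location fingerprinting": learn position from the RSS pattern.
rf = RandomForestRegressor(n_estimators=300, random_state=0)
rf.fit(rss[:500], positions[:500])
err = np.abs(rf.predict(rss[500:]) - positions[500:])
print(f"median localization error on held-out points: {np.median(err):.2f} m")
```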

  17. SSVEP recognition using common feature analysis in brain-computer interface.

    PubMed

    Zhang, Yu; Zhou, Guoxu; Jin, Jing; Wang, Xingyu; Cichocki, Andrzej

    2015-04-15

    Canonical correlation analysis (CCA) has been successfully applied to steady-state visual evoked potential (SSVEP) recognition for brain-computer interface (BCI) application. Although the CCA method outperforms the traditional power spectral density analysis through multi-channel detection, it additionally requires pre-constructed reference signals of sine-cosine waves. Overfitting is likely when using a short time window, since the reference signals include no features from the training data. We consider that a group of electroencephalogram (EEG) data trials recorded at a certain stimulus frequency on a same subject should share some common features that may bear the real SSVEP characteristics. This study therefore proposes a common feature analysis (CFA)-based method to exploit the latent common features as natural reference signals in using correlation analysis for SSVEP recognition. Good performance of the CFA method for SSVEP recognition is validated with EEG data recorded from ten healthy subjects, in contrast to CCA and a multiway extension of CCA (MCCA). Experimental results indicate that the CFA method significantly outperformed the CCA and the MCCA methods for SSVEP recognition when using a short time window (i.e., less than 1 s). The superiority of the proposed CFA method suggests it is promising for the development of a real-time SSVEP-based BCI. Copyright © 2014 Elsevier B.V. All rights reserved.
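    A compact sketch of the standard CCA recognizer that CFA is compared against: build sine-cosine references at each candidate frequency, take the largest canonical correlation with the multichannel EEG, and pick the frequency with the highest score. It uses scikit-learn's CCA on a toy 3-channel signal; the CFA method itself is not reproduced here.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def cca_score(eeg, freq, fs, n_harmonics=2):
    """Largest canonical correlation between multichannel EEG (samples x channels)
    and a sine-cosine reference set at `freq` Hz."""
    t = np.arange(eeg.shape[0]) / fs
    ref = np.column_stack([f(2 * np.pi * (h + 1) * freq * t)
                           for h in range(n_harmonics) for f in (np.sin, np.cos)])
    u, v = CCA(n_components=1).fit_transform(eeg, ref)
    return np.corrcoef(u[:, 0], v[:, 0])[0, 1]

# Toy 3-channel EEG containing an 8 Hz SSVEP component; pick the frequency with the
# highest canonical correlation.
fs = 250
t = np.arange(fs) / fs                                  # 1 s window
rng = np.random.default_rng(0)
eeg = 0.8 * np.sin(2 * np.pi * 8 * t)[:, None] + rng.standard_normal((t.size, 3))
scores = {f: cca_score(eeg, f, fs) for f in (8, 10, 12, 15)}
print(max(scores, key=scores.get), scores)
```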

  18. Model-based Bayesian signal extraction algorithm for peripheral nerves

    NASA Astrophysics Data System (ADS)

    Eggers, Thomas E.; Dweiri, Yazan M.; McCallum, Grant A.; Durand, Dominique M.

    2017-10-01

    Objective. Multi-channel cuff electrodes have recently been investigated for extracting fascicular-level motor commands from mixed neural recordings. Such signals could provide volitional, intuitive control over a robotic prosthesis for amputee patients. Recent work has demonstrated success in extracting these signals in acute and chronic preparations using spatial filtering techniques. These extracted signals, however, had low signal-to-noise ratios and thus limited their utility to binary classification. In this work a new algorithm is proposed which combines previous source localization approaches to create a model based method which operates in real time. Approach. To validate this algorithm, a saline benchtop setup was created to allow the precise placement of artificial sources within a cuff and interference sources outside the cuff. The artificial source was taken from five seconds of chronic neural activity to replicate realistic recordings. The proposed algorithm, hybrid Bayesian signal extraction (HBSE), is then compared to previous algorithms, beamforming and a Bayesian spatial filtering method, on this test data. An example chronic neural recording is also analyzed with all three algorithms. Main results. The proposed algorithm improved the signal to noise and signal to interference ratio of extracted test signals two to three fold, as well as increased the correlation coefficient between the original and recovered signals by 10-20%. These improvements translated to the chronic recording example and increased the calculated bit rate between the recovered signals and the recorded motor activity. Significance. HBSE significantly outperforms previous algorithms in extracting realistic neural signals, even in the presence of external noise sources. These results demonstrate the feasibility of extracting dynamic motor signals from a multi-fascicled intact nerve trunk, which in turn could extract motor command signals from an amputee for the end goal of controlling a prosthetic limb.

  19. FPGA Implementation of Heart Rate Monitoring System.

    PubMed

    Panigrahy, D; Rakshit, M; Sahu, P K

    2016-03-01

    This paper describes a field programmable gate array (FPGA) implementation of a system that calculates the heart rate from the electrocardiogram (ECG) signal. After heart rate calculation, tachycardia, bradycardia or a normal heart rate can easily be detected. ECG is a diagnostic tool routinely used to assess the electrical activity and muscular function of the heart. Heart rate is calculated by detecting the R peaks from the ECG signal. Providing a portable, continuous heart rate monitoring system for patients using ECG requires dedicated hardware. An FPGA provides easy testability and allows faster implementation and verification of a new design. We have proposed a five-stage methodology using basic VHDL blocks such as addition, multiplication and data conversion (real to fixed point and vice versa). Our proposed heart rate calculation (R-peak detection) method has been validated using 48 first-channel ECG records of the MIT-BIH arrhythmia database. It shows an accuracy of 99.84%, a sensitivity of 99.94% and a positive predictive value of 99.89%. Our proposed method outperforms other well-known methods in the case of pathological ECG signals and was successfully implemented on an FPGA.
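    A software-only sketch of the same idea (R-peak detection followed by heart-rate computation), using SciPy's peak finder on a synthetic ECG-like trace; it is not the five-stage VHDL pipeline described above, and the thresholds are illustrative.

```python
import numpy as np
from scipy.signal import find_peaks

def heart_rate_bpm(ecg, fs):
    """Estimate heart rate from R peaks: prominent maxima separated by at least 0.3 s."""
    ecg = ecg - np.mean(ecg)
    peaks, _ = find_peaks(ecg, distance=int(0.3 * fs), height=0.6 * np.max(ecg))
    rr = np.diff(peaks) / fs                 # RR intervals in seconds
    return 60.0 / np.median(rr), peaks

# Synthetic ECG-like trace: a 1.2 Hz train of narrow Gaussian "R waves" plus noise.
fs, dur = 360, 10
t = np.arange(fs * dur) / fs
rng = np.random.default_rng(0)
ecg = sum(np.exp(-((t - k / 1.2) ** 2) / (2 * 0.01 ** 2)) for k in range(1, 12))
ecg += 0.05 * rng.standard_normal(t.size)
bpm, peaks = heart_rate_bpm(ecg, fs)
print(f"{bpm:.1f} bpm from {len(peaks)} detected R peaks")   # about 72 bpm
```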

  20. Inferring imagined speech using EEG signals: a new approach using Riemannian manifold features

    NASA Astrophysics Data System (ADS)

    Nguyen, Chuong H.; Karavas, George K.; Artemiadis, Panagiotis

    2018-02-01

    Objective. In this paper, we investigate the suitability of imagined speech for brain-computer interface (BCI) applications. Approach. A novel method based on covariance matrix descriptors, which lie in a Riemannian manifold, and the relevance vector machines classifier is proposed. The method is applied on electroencephalographic (EEG) signals and tested in multiple subjects. Main results. The method is shown to outperform other approaches in the field with respect to accuracy and robustness. The algorithm is validated on various categories of speech, such as imagined pronunciation of vowels, short words and long words. The classification accuracy of our methodology is in all cases significantly above chance level, reaching a maximum of 70% for cases where we classify three words and 95% for cases of two words. Significance. The results reveal certain aspects that may affect the success of speech imagery classification from EEG signals, such as sound, meaning and word complexity. This can potentially extend the capability of utilizing speech imagery in future BCI applications. The dataset of speech imagery collected from a total of 15 subjects is also published.
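    The covariance-descriptor part can be sketched with NumPy/SciPy: build a regularized spatial covariance matrix per trial and compare trials with the affine-invariant Riemannian distance. The relevance vector machine classifier and the actual EEG preprocessing are omitted; the channel counts and data below are toy values.

```python
import numpy as np
from scipy.linalg import sqrtm, logm, inv

def cov_descriptor(trial, shrink=1e-3):
    """Regularized spatial covariance matrix of one EEG trial (channels x samples)."""
    C = np.cov(trial)
    return C + shrink * np.trace(C) / C.shape[0] * np.eye(C.shape[0])

def airm_distance(A, B):
    """Affine-invariant Riemannian distance between two SPD covariance matrices."""
    A_inv_sqrt = inv(np.real(sqrtm(A)))
    M = A_inv_sqrt @ B @ A_inv_sqrt
    return np.linalg.norm(np.real(logm(M)), "fro")

# Two toy "imagined speech" trials (8 channels, 500 samples each).
rng = np.random.default_rng(0)
trial_1 = rng.standard_normal((8, 500))
trial_2 = 1.5 * rng.standard_normal((8, 500))     # different spatial power pattern
print(airm_distance(cov_descriptor(trial_1), cov_descriptor(trial_2)))
```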

  1. Neural network and wavelet average framing percentage energy for atrial fibrillation classification.

    PubMed

    Daqrouq, K; Alkhateeb, A; Ajour, M N; Morfeq, A

    2014-03-01

    ECG signals are an important source of information in the diagnosis of atrial conduction pathology. Nevertheless, diagnosis by visual inspection is a difficult task. This work introduces a novel wavelet feature extraction method for atrial fibrillation derived from the average framing percentage energy (AFE) of terminal wavelet packet transform (WPT) sub-signals. A probabilistic neural network (PNN) is used for classification. The presented method is shown to be a potentially effective discriminator in an automated diagnostic process. The ECG signals taken from the MIT-BIH database are used to classify different arrhythmias together with normal ECG. Several published methods were investigated for comparison. The best recognition rate was obtained with AFE. The classification achieved an accuracy of 97.92%. The presented system was also analyzed in an additive white Gaussian noise (AWGN) environment, achieving 55.14% at 0 dB and 92.53% at 5 dB. It was concluded that the proposed approach of automating classification is worth pursuing with larger samples to validate and extend the present study. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
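    A rough sketch of a wavelet-packet percentage-energy feature using PyWavelets: the signal is split into fixed-length frames, each frame is decomposed to the terminal nodes of a level-4 wavelet packet tree, and the percentage energies are averaged over frames. The exact framing and wavelet settings of the AFE feature in the paper may differ, and the PNN classifier is not included.

```python
import numpy as np
import pywt

def average_framing_percentage_energy(sig, wavelet="db4", level=4, frame_len=1024):
    """Percentage energy of the terminal wavelet-packet nodes, averaged over frames.
    An approximation of the AFE feature; the framing used in the paper may differ."""
    feats = []
    for start in range(0, len(sig) - frame_len + 1, frame_len):
        wp = pywt.WaveletPacket(data=sig[start:start + frame_len], wavelet=wavelet,
                                mode="symmetric", maxlevel=level)
        energies = np.array([np.sum(np.square(node.data))
                             for node in wp.get_level(level, order="freq")])
        feats.append(100.0 * energies / energies.sum())
    return np.mean(feats, axis=0)            # one value per terminal node (2**level)

# Example on a synthetic signal; real use would feed MIT-BIH ECG segments.
rng = np.random.default_rng(0)
t = np.arange(4096) / 360.0
sig = np.sin(2 * np.pi * 7 * t) + 0.3 * rng.standard_normal(t.size)
print(average_framing_percentage_energy(sig).round(2))   # 16 percentage-energy features
```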

  2. New methods of data calibration for high power-aperture lidar.

    PubMed

    Guan, Sai; Yang, Guotao; Chang, Qihai; Cheng, Xuewu; Yang, Yong; Gong, Shaohua; Wang, Jihong

    2013-03-25

    For high power-aperture lidar sounding of wide atmospheric dynamic ranges, as in middle-upper atmospheric probing, photomultiplier tube (PMT) pulse pile-up effects and signal-induced noise (SIN) complicate the extraction of information from the lidar return signal, especially from the metal layers' fluorescence signal. The pursuit of a detailed description of metal-layer characteristics at far range (80-130 km) with one PMT of high quantum efficiency (QE) and good SNR conflicts with the requirement for signals of wide linear dynamic range (approximately 10² to 10⁸ counts/s). In this article, substantial improvements in the experimental simulation of lidar signals affected by the PMT are reported to evaluate the PMT distortions in our high power-aperture sodium lidar system. A new method for pile-up calibration is proposed that treats the PMT and the high-speed data acquisition card as an integrated black box, together with a new experimental method for identifying and removing SIN from the raw lidar signals. The contradiction between the limited linear dynamic range of the raw signal (55-80 km) and the requirement for wider acceptable linearity has been effectively resolved, without complicating the current lidar system. The validity of these methods was demonstrated by applying the calibrated data to retrieve atmospheric parameters (atmospheric density, temperature and sodium absolute number density), in comparison with TIMED satellite measurements and an atmosphere model. Good agreement is obtained between results derived from the calibrated signal and the reference measurements: differences in atmospheric density are less than 5% in the stratosphere, and temperature differences are less than 10 K from 30 km up to the mesosphere. Additionally, changes of approximately 30% are shown in the sodium concentration at its peak value. By using the proposed methods to recover the true signal independently of the detectors, the authors reach a new balance between maintaining adequate signal linearity (20-110 km) and guaranteeing good SNR (about 10⁴:1 around 90 km) without degrading QE, in a single detection channel. For the first time, a PMT in photon-counting mode is independently applied to extract reliable information on atmospheric parameters with wide acceptable linearity over an altitude range from the stratosphere up to the lower thermosphere (20-110 km).

  3. Quantitative determination and validation of octreotide acetate using ¹H-NMR spectroscopy with internal standard method.

    PubMed

    Yu, Chen; Zhang, Qian; Xu, Peng-Yao; Bai, Yin; Shen, Wen-Bin; Di, Bin; Su, Meng-Xiang

    2018-01-01

    Quantitative nuclear magnetic resonance (qNMR) is a well-established technique in quantitative analysis. We presented a validated ¹H-qNMR method for the assay of octreotide acetate, a cyclic octapeptide. Deuterium oxide was used to remove the undesired exchangeable peaks, referred to as proton exchange, in order to make the quantitative signals isolated in the crowded spectrum of the peptide and ensure precise quantitative analysis. Gemcitabine hydrochloride was chosen as the suitable internal standard. Experimental conditions, including relaxation delay time, the number of scans, and pulse angle, were optimized first. Then method validation was carried out in terms of selectivity, stability, linearity, precision, and robustness. The assay result was compared with that obtained by high performance liquid chromatography, the method provided by the Chinese Pharmacopoeia. The statistical F test, Student's t test, and a nonparametric test at the 95% confidence level indicate that there was no significant difference between these two methods. qNMR is a simple and accurate quantitative tool with no need for specific corresponding reference standards. It has the potential for the quantitative analysis of other peptide drugs and the standardization of the corresponding reference standards. Copyright © 2017 John Wiley & Sons, Ltd.
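    For reference, the internal-standard relation that underlies this kind of assay can be written as a one-line calculation. The function below implements the standard qNMR purity equation; the numbers in the example call are placeholders, not the integrals, molar masses or weighings used in the study.

```python
def qnmr_purity(I_a, I_std, N_a, N_std, M_a, M_std, m_a, m_std, P_std):
    """Standard internal-standard qNMR relation: purity of the analyte from signal
    integrals (I), protons per quantified signal (N), molar masses (M), weighed
    masses (m) and the purity of the internal standard (P_std, as a fraction)."""
    return (I_a / I_std) * (N_std / N_a) * (M_a / M_std) * (m_std / m_a) * P_std

# Placeholder numbers only (not the integrals, masses or weighings from the study).
print(qnmr_purity(I_a=1.02, I_std=1.00, N_a=2, N_std=1,
                  M_a=1000.0, M_std=300.0, m_a=10.0, m_std=3.1, P_std=0.995))
```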

  4. Short-Circuit Fault Detection and Classification Using Empirical Wavelet Transform and Local Energy for Electric Transmission Line.

    PubMed

    Huang, Nantian; Qi, Jiajin; Li, Fuqing; Yang, Dongfeng; Cai, Guowei; Huang, Guilin; Zheng, Jian; Li, Zhenxin

    2017-09-16

    In order to improve the classification accuracy of recognizing short-circuit faults in electric transmission lines, a novel detection and diagnosis method based on empirical wavelet transform (EWT) and local energy (LE) is proposed. First, EWT is used to deal with the original short-circuit fault signals from photoelectric voltage transformers, before the amplitude modulated-frequency modulated (AM-FM) mode with a compactly supported Fourier spectrum is extracted. Subsequently, the fault occurrence time is detected according to the modulus maxima of intrinsic mode function (IMF₂) from three-phase voltage signals processed by EWT. After this process, the feature vectors are constructed by calculating the LE of the fundamental frequency based on the three-phase voltage signals of one period after the fault occurred. Finally, the classifier based on support vector machine (SVM) which was constructed with the LE feature vectors is used to classify 10 types of short-circuit fault signals. Compared with complementary ensemble empirical mode decomposition with adaptive noise (CEEMDAN) and improved CEEMDAN methods, the new method using EWT has a better ability to present the frequency in time. The difference in the characteristics of the energy distribution in the time domain between different types of short-circuit faults can be presented by the feature vectors of LE. Together, simulations and experiments on real signals demonstrate the validity and effectiveness of the new approach.

  5. Extraction of the respiratory signal from small-animal CT projections for a retrospective gating method

    NASA Astrophysics Data System (ADS)

    Chavarrías, C.; Vaquero, J. J.; Sisniega, A.; Rodríguez-Ruano, A.; Soto-Montenegro, M. L.; García-Barreno, P.; Desco, M.

    2008-09-01

    We propose a retrospective respiratory gating algorithm to generate dynamic CT studies. To this end, we compared three different methods of extracting the respiratory signal from the projections of small-animal cone-beam computed tomography (CBCT) scanners. Given a set of frames acquired from a certain axial angle, subtraction of their average image from each individual frame produces a set of difference images. Pixels in these images have positive or negative values (according to the respiratory phase) in those areas where there is lung movement. The respiratory signals were extracted by analysing the shape of the histogram of these difference images: we calculated the first four central and non-central moments. However, only odd-order moments produced the desired breathing signal, as the even-order moments lacked information about the phase. Each of these curves was compared to a reference signal recorded by means of a pneumatic pillow. Given the similar correlation coefficients yielded by all of them, we selected the mean to implement our retrospective protocol. Respiratory phase bins were separated, reconstructed independently and included in a dynamic sequence, suitable for cine playback. We validated our method in five adult rat studies by comparing profiles drawn across the diaphragm dome, with and without retrospective respiratory gating. Results showed a sharper transition in the gated reconstruction, with an average slope improvement of 60.7%.
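    The signal-extraction step can be sketched in a few lines of NumPy: subtract the average frame from each projection acquired at a given angle and take the first (non-central) moment, i.e. the mean, of each difference image. The toy "projections" below simulate a moving diaphragm-like edge and are purely illustrative.

```python
import numpy as np

def respiratory_signal(frames):
    """Subtract the average frame from each projection acquired at one axial angle and
    take the first (non-central) moment, i.e. the mean, of each difference image."""
    frames = np.asarray(frames, float)                 # shape (n_frames, H, W)
    diff = frames - frames.mean(axis=0)
    return diff.reshape(frames.shape[0], -1).mean(axis=1)

# Toy "projections": a diaphragm-like edge whose position oscillates with breathing.
n, H, W = 120, 64, 64
rows = np.arange(H)[None, :, None]
edge = 32 + 6 * np.sin(2 * np.pi * np.arange(n) / 30)[:, None, None]
frames = (rows < edge).astype(float) + 0.01 * np.random.default_rng(0).standard_normal((n, H, W))
resp = respiratory_signal(frames)                      # oscillates at the breathing frequency
print(resp.shape)                                      # (120,)
```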

  6. Short-Circuit Fault Detection and Classification Using Empirical Wavelet Transform and Local Energy for Electric Transmission Line

    PubMed Central

    Huang, Nantian; Qi, Jiajin; Li, Fuqing; Yang, Dongfeng; Cai, Guowei; Huang, Guilin; Zheng, Jian; Li, Zhenxin

    2017-01-01

    In order to improve the classification accuracy of recognizing short-circuit faults in electric transmission lines, a novel detection and diagnosis method based on empirical wavelet transform (EWT) and local energy (LE) is proposed. First, EWT is used to deal with the original short-circuit fault signals from photoelectric voltage transformers, before the amplitude modulated-frequency modulated (AM-FM) mode with a compactly supported Fourier spectrum is extracted. Subsequently, the fault occurrence time is detected according to the modulus maxima of intrinsic mode function (IMF2) from three-phase voltage signals processed by EWT. After this process, the feature vectors are constructed by calculating the LE of the fundamental frequency based on the three-phase voltage signals of one period after the fault occurred. Finally, the classifier based on support vector machine (SVM) which was constructed with the LE feature vectors is used to classify 10 types of short-circuit fault signals. Compared with complementary ensemble empirical mode decomposition with adaptive noise (CEEMDAN) and improved CEEMDAN methods, the new method using EWT has a better ability to present the frequency in time. The difference in the characteristics of the energy distribution in the time domain between different types of short-circuit faults can be presented by the feature vectors of LE. Together, simulations and experiments on real signals demonstrate the validity and effectiveness of the new approach. PMID:28926953

  7. SNR enhancement for downhole microseismic data based on scale classification shearlet transform

    NASA Astrophysics Data System (ADS)

    Li, Juan; Ji, Shuo; Li, Yue; Qian, Zhihong; Lu, Weili

    2018-06-01

    Shearlet transform (ST) can be effective in 2D signal processing, due to its parabolic scaling, high directional sensitivity, and optimal sparsity. ST combined with thresholding has been successfully applied to suppress random noise. However, because of the low magnitude and high frequency of a downhole microseismic signal, the coefficient values of valid signals and noise are similar in the shearlet domain. As a result, it is difficult to use for denoising. In this paper, we present a scale classification ST to solve this problem. The ST is used to decompose noisy microseismic data into several scales. By analyzing the spectrum and energy distribution of the shearlet coefficients of microseismic data, we divide the scales into two types: low-frequency scales which contain less useful signal and high-frequency scales which contain more useful signal. After classification, we use two different methods to deal with the coefficients on different scales. For the low-frequency scales, the noise is attenuated using a thresholding method. As for the high-frequency scales, we propose to use a generalized Gaussian distribution model-based non-local means filter, which takes advantage of the temporal and spatial similarity of microseismic data. The experimental results on both synthetic records and field data illustrate that our proposed method preserves the useful components and attenuates the noise well.

  8. Deconvolution imaging of weak reflective pipe defects using guided-wave signals captured by a scanning receiver.

    PubMed

    Sun, Zeqing; Sun, Anyu; Ju, Bing-Feng

    2017-02-01

    Guided-wave echoes from weak reflective pipe defects are usually contaminated by coherent noise and difficult to interpret. In this paper, a deconvolution imaging method is proposed to reconstruct defect images from synthetically focused guided-wave signals, with enhanced axial resolution. A compact transducer, circumferentially scanning around the pipe, is used to receive guided-wave echoes from discontinuities at a distance. This method achieves a higher circumferential sampling density than arrayed transducers (up to 72 sampling spots per lap for a pipe with a diameter of 180 mm). A noise suppression technique is used to enhance the signal-to-noise ratio. The enhancement in both signal-to-noise ratio and axial resolution of the method is experimentally validated by the detection of two kinds of artificial defects: a pitting defect of 5 mm in diameter and 0.9 mm in maximum depth, and iron pieces attached to the pipe surface. A reconstructed image of the pitting defect is obtained with a 5.87 dB signal-to-noise ratio. It is revealed that a high circumferential sampling density is important for the enhancement of the inspection sensitivity, by comparing the images reconstructed with different down-sampling ratios. A modified full width at half maximum is used as the criterion to evaluate the circumferential extent of the region where iron pieces are attached, which is applicable for defects with inhomogeneous reflection intensity.

  9. Identification of Load Categories in Rotor System Based on Vibration Analysis

    PubMed Central

    Yang, Zhaojian

    2017-01-01

    Rotating machinery is often subjected to variable loads during operation. Thus, monitoring and identifying different load types is important. Here, five typical load types have been qualitatively studied for a rotor system. A novel load category identification method for rotor systems based on vibration signals is proposed. This method is a combination of ensemble empirical mode decomposition (EEMD), energy feature extraction, and back propagation (BP) neural network. A dedicated load identification test bench for rotor systems was developed. According to load characteristics and test conditions, an experimental plan was formulated, and loading tests for five loads were conducted. Corresponding vibration signals of the rotor system were collected for each load condition via an eddy current displacement sensor. Signals were reconstructed using EEMD, and then features were extracted followed by energy calculations. Finally, the features were input to the BP neural network to identify different load types. Comparison and analysis of the identification results and test data revealed an overall identification rate of 94.54%, achieving high identification accuracy and good robustness. This shows that the proposed method is feasible. Due to reliable and experimentally validated theoretical results, this method can be applied to load identification and fault diagnosis for rotor equipment used in engineering applications. PMID:28726754
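    A sketch of the EEMD energy-feature stage, assuming the PyEMD package (installable as EMD-signal) is available; the BP-network classification stage and the real test-bench signals are not reproduced, and the toy vibration signal is invented for illustration.

```python
import numpy as np
from PyEMD import EEMD          # assumes the PyEMD package (pip install EMD-signal)

def eemd_energy_features(sig, max_imf=6, trials=50):
    """Normalized energies of the IMFs obtained by EEMD, usable as the input feature
    vector of a classifier (the BP-network stage of the paper is omitted here)."""
    imfs = EEMD(trials=trials).eemd(np.asarray(sig, float), max_imf=max_imf)
    energies = np.array([np.sum(imf ** 2) for imf in imfs])
    return energies / energies.sum()

# Toy vibration-like signal: two tones plus noise.
fs = 2000
t = np.arange(2 * fs) / fs
rng = np.random.default_rng(0)
sig = np.sin(2 * np.pi * 35 * t) + 0.5 * np.sin(2 * np.pi * 180 * t) \
      + 0.2 * rng.standard_normal(t.size)
print(eemd_energy_features(sig))
```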

  10. Self-homodyne free-space optical communication system based on orthogonally polarized binary phase shift keying.

    PubMed

    Cai, Guangyu; Sun, Jianfeng; Li, Guangyuan; Zhang, Guo; Xu, Mengmeng; Zhang, Bo; Yue, Chaolei; Liu, Liren

    2016-06-10

    A self-homodyne laser communication system based on orthogonally polarized binary phase shift keying is demonstrated. The working principles of this method and the structure of a transceiver are described using theoretical calculations. Moreover, the signal-to-noise ratio, sensitivity, and bit error rate are analyzed for the amplifier-noise-limited case. The reported experiment validates the feasibility of the proposed method and demonstrates its advantageous sensitivity as a self-homodyne communication system.

  11. Validating Coherence Measurements Using Aligned and Unaligned Coherence Functions

    NASA Technical Reports Server (NTRS)

    Miles, Jeffrey Hilton

    2006-01-01

    This paper describes a novel approach based on the use of coherence functions and statistical theory for sensor validation in a harsh environment. By the use of aligned and unaligned coherence functions and statistical theory one can test for sensor degradation, total sensor failure or changes in the signal. This advanced diagnostic approach and the novel data processing methodology discussed provide a single number that conveys this information. This number, as calculated with standard statistical procedures for comparing the means of two distributions, is compared with results obtained using Yuen's robust statistical method to create confidence intervals. Examination of experimental data from Kulite pressure transducers mounted in a Pratt & Whitney PW4098 combustor using spectrum analysis methods on aligned and unaligned time histories has verified the effectiveness of the proposed method. All the procedures produce good results, which demonstrates the robustness of the technique.
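    The aligned-versus-unaligned idea can be illustrated with SciPy's magnitude-squared coherence: two channels sharing a broadband component show high coherence when aligned, while deliberately misaligning one channel drives the coherence toward its bias floor. The signals and the shift below are toy values, and the statistical comparison of means (including Yuen's method) is not reproduced.

```python
import numpy as np
from scipy.signal import coherence

# Two channels sharing a broadband component plus independent noise.
fs = 2048
rng = np.random.default_rng(0)
common = rng.standard_normal(8 * fs)
x = common + 0.3 * rng.standard_normal(common.size)
y = common + 0.3 * rng.standard_normal(common.size)
y_unaligned = np.roll(y, fs)          # deliberate 1 s misalignment breaks the correlation

f, Cxy_aligned = coherence(x, y, fs=fs, nperseg=1024)
f, Cxy_unaligned = coherence(x, y_unaligned, fs=fs, nperseg=1024)
print(f"mean coherence, aligned:   {Cxy_aligned.mean():.2f}")    # high: shared component dominates
print(f"mean coherence, unaligned: {Cxy_unaligned.mean():.2f}")  # near the estimator's bias floor
```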

  12. Sensor sentinel computing device

    DOEpatents

    Damico, Joseph P.

    2016-08-02

    Technologies pertaining to authenticating data output by sensors in an industrial environment are described herein. A sensor sentinel computing device receives time-series data from a sensor by way of a wireline connection. The sensor sentinel computing device generates a validation signal that is a function of the time-series signal. The sensor sentinel computing device then transmits the validation signal to a programmable logic controller in the industrial environment.

  13. Compression of Born ratio for fluorescence molecular tomography/x-ray computed tomography hybrid imaging: methodology and in vivo validation.

    PubMed

    Mohajerani, Pouyan; Ntziachristos, Vasilis

    2013-07-01

    The 360° rotation geometry of the hybrid fluorescence molecular tomography/x-ray computed tomography modality allows for acquisition of very large datasets, which pose numerical limitations on the reconstruction. We propose a compression method that takes advantage of the correlation of the Born-normalized signal among sources in spatially formed clusters to reduce the size of system model. The proposed method has been validated using an ex vivo study and an in vivo study of a nude mouse with a subcutaneous 4T1 tumor, with and without inclusion of a priori anatomical information. Compression rates of up to two orders of magnitude with minimum distortion of reconstruction have been demonstrated, resulting in large reduction in weight matrix size and reconstruction time.

  14. Inter-comparison of Methods for Extracting Subsurface Layers from SHARAD Radargrams over Martian polar regions

    NASA Astrophysics Data System (ADS)

    Xiong, S.; Muller, J.-P.; Carretero, R. C.

    2017-09-01

    Subsurface layers are preserved in the polar regions on Mars, representing a record of past climate changes on Mars. Orbital radar instruments, such as the Mars Advanced Radar for Subsurface and Ionosphere Sounding (MARSIS) onboard ESA Mars Express (MEX) and the SHAllow RADar (SHARAD) onboard the Mars Reconnaissance Orbiter (MRO), transmit radar signals to Mars and receive a set of return signals from these subsurface regions. Layering is a prominent subsurface feature, which has been revealed by both MARSIS and SHARAD radargrams over both polar regions on Mars. Automatic extraction of these subsurface layers is becoming increasingly important as there are now over ten years of archived data. In this study, we investigate two different methods for extracting these subsurface layers from SHARAD data and compare the results against layers delineated manually to validate which method is better for extracting these layers automatically.

  15. A novel murmur-based heart sound feature extraction technique using envelope-morphological analysis

    NASA Astrophysics Data System (ADS)

    Yao, Hao-Dong; Ma, Jia-Li; Fu, Bin-Bin; Wang, Hai-Yang; Dong, Ming-Chui

    2015-07-01

    Auscultation of heart sound (HS) signals has served as an important primary approach to diagnosing cardiovascular diseases (CVDs) for centuries. Confronting the intrinsic drawbacks of traditional HS auscultation, computer-aided automatic HS auscultation based on feature extraction techniques has witnessed explosive development. Yet, most existing HS feature extraction methods adopt acoustic or time-frequency features which exhibit a poor relationship with diagnostic information, thus restricting the performance of further interpretation and analysis. Tackling this bottleneck problem, this paper proposes a novel murmur-based HS feature extraction method, since murmurs contain massive pathological information and are regarded as the first indications of pathological occurrences of heart valves. Using the discrete wavelet transform (DWT) and Shannon envelope, the envelope-morphological characteristics of murmurs are obtained and three features are extracted accordingly. Validated by discriminating normal HS and 5 various abnormal HS signals with the extracted features, the proposed method provides an attractive candidate for automatic HS auscultation.
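    The Shannon-envelope step can be sketched directly; the DWT sub-band processing and the three morphological features derived from the envelope are not reproduced, and the synthetic two-burst "heart sound" is purely illustrative.

```python
import numpy as np

def shannon_envelope(hs, fs, frame=0.02):
    """Shannon energy envelope: normalize, compute -x^2*log(x^2) per sample, then
    smooth with a short moving average of length `frame` seconds."""
    x = hs / np.max(np.abs(hs))
    e = -x ** 2 * np.log(x ** 2 + 1e-12)       # epsilon avoids log(0)
    win = int(frame * fs)
    return np.convolve(e, np.ones(win) / win, mode="same")

# Toy heart-sound-like trace: two short bursts (S1, S2) plus low-level noise.
fs = 2000
t = np.arange(fs) / fs
rng = np.random.default_rng(0)
burst = lambda c: np.exp(-((t - c) ** 2) / (2 * 0.01 ** 2)) * np.sin(2 * np.pi * 60 * t)
hs = burst(0.2) + 0.7 * burst(0.55) + 0.05 * rng.standard_normal(t.size)
env = shannon_envelope(hs, fs)
print(f"envelope peak at {env.argmax() / fs:.2f} s")   # close to 0.2 s (the stronger burst)
```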

  16. Evaluation of the Faraday angle by numerical methods and comparison with the Tore Supra and JET polarimeter electronics.

    PubMed

    Brault, C; Gil, C; Boboc, A; Spuig, P

    2011-04-01

    On the Tore Supra tokamak, a far infrared polarimeter diagnostic has been routinely used for diagnosing the current density by measuring the Faraday rotation angle. A high precision of measurement is needed to correctly reconstruct the current profile. To reach this precision, electronics used to compute the phase and the amplitude of the detected signals must have a good resilience to the noise in the measurement. In this article, the analogue card's response to the noise coming from the detectors and their impact on the Faraday angle measurements are analyzed, and we present numerical methods to calculate the phase and the amplitude. These validations have been done using real signals acquired by Tore Supra and JET experiments. These methods have been developed to be used in real-time in the future numerical cards that will replace the Tore Supra present analogue ones. © 2011 American Institute of Physics

  17. Application of Petri net based analysis techniques to signal transduction pathways.

    PubMed

    Sackmann, Andrea; Heiner, Monika; Koch, Ina

    2006-11-02

    Signal transduction pathways are usually modelled using classical quantitative methods, which are based on ordinary differential equations (ODEs). However, some difficulties are inherent in this approach. On the one hand, the kinetic parameters involved are often unknown and have to be estimated. With increasing size and complexity of signal transduction pathways, the estimation of missing kinetic data is not possible. On the other hand, ODE-based models do not support any explicit insights into possible (signal-) flows within the network. Moreover, a huge amount of qualitative data is available due to high-throughput techniques. In order to get information on the system's behaviour, qualitative analysis techniques have been developed. Applications of the known qualitative analysis methods concern mainly metabolic networks. Petri net theory provides a variety of established analysis techniques, which are also applicable to signal transduction models. In this context special properties have to be considered and new dedicated techniques have to be designed. We apply Petri net theory to model and analyse signal transduction pathways first qualitatively before continuing with quantitative analyses. This paper demonstrates how to build systematically a discrete model, which reflects provably the qualitative biological behaviour without any knowledge of kinetic parameters. The mating pheromone response pathway in Saccharomyces cerevisiae serves as a case study. We propose an approach for model validation of signal transduction pathways based on the network structure only. For this purpose, we introduce the new notion of feasible t-invariants, which represent minimal self-contained subnets being active under a given input situation. Each of these subnets stands for a signal flow in the system. We define maximal common transition sets (MCT-sets), which can be used for t-invariant examination and net decomposition into smallest biologically meaningful functional units. The paper demonstrates how Petri net analysis techniques can promote a deeper understanding of signal transduction pathways. The new concepts of feasible t-invariants and MCT-sets have been proven to be useful for model validation and the interpretation of the biological system behaviour. Whereas MCT-sets provide a decomposition of the net into disjunctive subnets, feasible t-invariants describe subnets, which generally overlap. This work contributes to qualitative modelling and to the analysis of large biological networks by their fully automatic decomposition into biologically meaningful modules.

  18. Automated pathologies detection in retina digital images based on complex continuous wavelet transform phase angles.

    PubMed

    Lahmiri, Salim; Gargour, Christian S; Gabrea, Marcel

    2014-10-01

    An automated diagnosis system that uses complex continuous wavelet transform (CWT) to process retina digital images and support vector machines (SVMs) for classification purposes is presented. In particular, each retina image is transformed into two one-dimensional signals by concatenating image rows and columns separately. The mathematical norm of phase angles found in each one-dimensional signal at each level of CWT decomposition are relied on to characterise the texture of normal images against abnormal images affected by exudates, drusen and microaneurysms. The leave-one-out cross-validation method was adopted to conduct experiments and the results from the SVM show that the proposed approach gives better results than those obtained by other methods based on the correct classification rate, sensitivity and specificity.
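    A sketch of the phase-angle feature using PyWavelets' complex continuous wavelet transform: a 1-D signal formed by concatenating image rows is decomposed with a complex Morlet wavelet and the norm of the phase angles at each scale is taken as a feature. The wavelet name, scale range and the random "image" are assumptions for illustration; the SVM classification stage is omitted.

```python
import numpy as np
import pywt

def phase_angle_norms(signal_1d, scales=None, wavelet="cmor1.5-1.0"):
    """Norm of the phase angles of complex CWT coefficients at each scale, used as a
    texture descriptor for a 1-D signal built from concatenated image rows or columns."""
    if scales is None:
        scales = np.arange(1, 33)
    coeffs, _ = pywt.cwt(signal_1d, scales, wavelet)   # complex coefficients
    return np.linalg.norm(np.angle(coeffs), axis=1)    # one value per scale

# Stand-in for a retina image: a random 64 x 64 array whose rows are concatenated.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
row_signal = img.reshape(-1)
print(phase_angle_norms(row_signal).shape)             # (32,) feature vector
```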

  19. EDGE COMPUTING AND CONTEXTUAL INFORMATION FOR THE INTERNET OF THINGS SENSORS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klein, Levente

    Interpreting sensor data requires knowledge about sensor placement and the surrounding environment. For a single sensor measurement, it is easy to document the context by visual observation; however, for millions of sensors reporting data back to a server, the contextual information needs to be automatically extracted either from data analysis or by leveraging complementary data sources. Data layers that overlap spatially or temporally with sensor locations can be used to extract the context and to validate the measurement. To minimize the amount of data transmitted through the internet, while preserving signal information content, two methods are explored: computation at the edge and compressed sensing. We validate the above methods on wind and chemical sensor data by (1) eliminating redundant measurements from wind sensors and (2) extracting the peak value of a chemical sensor measuring a methane plume. We present a general cloud-based framework to validate sensor data based on statistical and physical modeling and contextual data extracted from geospatial data.

  20. Quantitative analysis of Sudan dye adulteration in paprika powder using FTIR spectroscopy.

    PubMed

    Lohumi, Santosh; Joshi, Ritu; Kandpal, Lalit Mohan; Lee, Hoonsoo; Kim, Moon S; Cho, Hyunjeong; Mo, Changyeun; Seo, Young-Wook; Rahman, Anisur; Cho, Byoung-Kwan

    2017-05-01

    As adulteration of foodstuffs with Sudan dye, especially paprika- and chilli-containing products, has been reported with some frequency, this issue has become one focal point for addressing food safety. FTIR spectroscopy has been used extensively as an analytical method for quality control and safety determination for food products. Thus, the use of FTIR spectroscopy for rapid determination of Sudan dye in paprika powder was investigated in this study. A net analyte signal (NAS)-based methodology, named HLA/GO (hybrid linear analysis in the literature), was applied to FTIR spectral data to predict Sudan dye concentration. The calibration and validation sets were designed to evaluate the performance of the multivariate method. The obtained results had a high determination coefficient (R²) of 0.98 and low root mean square error (RMSE) of 0.026% for the calibration set, and an R² of 0.97 and RMSE of 0.05% for the validation set. The model was further validated using a second validation set and through the figures of merit, such as sensitivity, selectivity, and limits of detection and quantification. The proposed technique of FTIR combined with HLA/GO is rapid, simple and low cost, making this approach advantageous when compared with the main alternative methods based on liquid chromatography (LC) techniques.

  1. Impact of External Cue Validity on Driving Performance in Parkinson's Disease

    PubMed Central

    Scally, Karen; Charlton, Judith L.; Iansek, Robert; Bradshaw, John L.; Moss, Simon; Georgiou-Karistianis, Nellie

    2011-01-01

    This study sought to investigate the impact of external cue validity on simulated driving performance in 19 Parkinson's disease (PD) patients and 19 healthy age-matched controls. Braking points and the distance between deceleration point and braking point were analysed for red traffic signals preceded by Valid Cues (correctly predicting the signal), Invalid Cues (incorrectly predicting the signal), or No Cues. Results showed that PD drivers braked significantly later and travelled significantly further between deceleration and braking points compared with controls for the Invalid and No-Cue conditions. No significant group differences were observed for driving performance in response to Valid Cues. The benefit of Valid Cues relative to Invalid Cues and No Cues was significantly greater for PD drivers compared with controls. Trail Making Test (B-A) scores correlated with driving performance for PD drivers only. These results highlight the importance of external cues and higher cognitive functioning for driving performance in mild to moderate PD. PMID:21789275

  2. RIPPLELAB: A Comprehensive Application for the Detection, Analysis and Classification of High Frequency Oscillations in Electroencephalographic Signals

    PubMed Central

    Alvarado-Rojas, Catalina; Le Van Quyen, Michel; Valderrama, Mario

    2016-01-01

    High Frequency Oscillations (HFOs) in the brain have been associated with different physiological and pathological processes. In epilepsy, HFOs might reflect a mechanism of epileptic phenomena, serving as a biomarker of epileptogenesis and epileptogenicity. Despite the valuable information provided by HFOs, their correct identification is a challenging task. A comprehensive application, RIPPLELAB, was developed to facilitate the analysis of HFOs. RIPPLELAB provides a wide range of tools for manual and automatic HFO detection and visual validation, all of them accessible from an intuitive graphical user interface. Four methods for automated detection—as well as several options for visualization and validation of detected events—were implemented and integrated in the application. Analysis of multiple files and channels is possible, and new options can be added by users. All features and capabilities implemented in RIPPLELAB for automatic detection were tested through the analysis of simulated signals and intracranial EEG recordings from epileptic patients (n = 16; 3,471 analyzed hours). Visual validation was also tested, and detected events were classified into different categories. Unlike other available software packages for EEG analysis, RIPPLELAB uniquely provides the appropriate graphical and algorithmic environment for HFO detection (visual and automatic) and validation, in such a way that the power of elaborated detection methods is available to a wide range of users (experts and non-experts). We believe that this open-source tool will facilitate and promote collaboration between clinical and research centers working in the HFO field. The tool is available under a public license and is accessible through a dedicated web site. PMID:27341033

  3. Support vector machines for nuclear reactor state estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zavaljevski, N.; Gross, K. C.

    2000-02-14

    Validation of nuclear power reactor signals is often performed by comparing signal prototypes with the actual reactor signals. The signal prototypes are often computed based on empirical data. The implementation of an estimation algorithm which can make predictions on limited data is an important issue. A new machine learning algorithm called support vector machines (SVMs), recently developed by Vladimir Vapnik and his coworkers, enables a high level of generalization with finite high-dimensional data. The improved generalization in comparison with standard methods like neural networks is due mainly to the following characteristics of the method. The input data space is transformed into a high-dimensional feature space using a kernel function, and the learning problem is formulated as a convex quadratic programming problem with a unique solution. In this paper the authors have applied the SVM method for data-based state estimation in nuclear power reactors. In particular, they implemented and tested kernels developed at Argonne National Laboratory for the Multivariate State Estimation Technique (MSET), a nonlinear, nonparametric estimation technique with a wide range of applications in nuclear reactors. The methodology has been applied to three data sets from experimental and commercial nuclear power reactor applications. The results are promising. The combination of MSET kernels with the SVM method has better noise reduction and generalization properties than the standard MSET algorithm.
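
    An illustrative sketch only: the MSET kernels and reactor data of the record are not reproduced here; a generic RBF kernel and synthetic correlated channels stand in. The idea is to learn a signal prototype from correlated inputs and validate the measured signal against it through the residual.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))                                  # five correlated "sensor" inputs
y = X @ np.array([0.4, -0.2, 0.1, 0.3, 0.05]) + 0.01 * rng.normal(size=500)

# Support vector regression with an RBF kernel (stand-in for the MSET kernels)
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01))
model.fit(X[:400], y[:400])

prototype = model.predict(X[400:])                             # estimated signal prototype
residual = y[400:] - prototype                                 # residual used for signal validation
print("residual std:", residual.std())
```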

  4. In situ LTE exposure of the general public: Characterization and extrapolation.

    PubMed

    Joseph, Wout; Verloock, Leen; Goeminne, Francis; Vermeeren, Günter; Martens, Luc

    2012-09-01

    In situ radiofrequency (RF) exposure from different RF sources is characterized in Reading, United Kingdom, and an extrapolation method to estimate worst-case long-term evolution (LTE) exposure is proposed. All electric field levels satisfy the International Commission on Non-Ionizing Radiation Protection (ICNIRP) reference levels, with a maximal total electric field value of 4.5 V/m. The total values are dominated by frequency modulation (FM). Exposure levels for LTE of 0.2 V/m on average and 0.5 V/m maximally are obtained. Contributions of LTE to the total exposure are limited to 0.4% on average. Exposure ratios from 0.8% (LTE) to 12.5% (FM) are obtained. An extrapolation method is proposed and validated to assess the worst-case LTE exposure. For this method, the reference signal (RS) and secondary synchronization signal (S-SYNC) are measured and extrapolated to the worst-case value using an extrapolation factor. The influence of the traffic load and output power of the base station on the in situ RS and S-SYNC signals is lower than 1 dB for all power and traffic load settings, showing that these signals can be used for the extrapolation method. The maximal extrapolated field value for LTE exposure equals 1.9 V/m, which is 32 times below the ICNIRP reference levels for electric fields. Copyright © 2012 Wiley Periodicals, Inc.

  5. Causality within the Epileptic Network: An EEG-fMRI Study Validated by Intracranial EEG.

    PubMed

    Vaudano, Anna Elisabetta; Avanzini, Pietro; Tassi, Laura; Ruggieri, Andrea; Cantalupo, Gaetano; Benuzzi, Francesca; Nichelli, Paolo; Lemieux, Louis; Meletti, Stefano

    2013-01-01

    Accurate localization of the Seizure Onset Zone (SOZ) is crucial in patients with drug-resistant focal epilepsy. EEG with fMRI recording (EEG-fMRI) has been proposed as a complementary non-invasive tool, which can give useful additional information in the pre-surgical work-up. However, fMRI maps related to interictal epileptiform activities (IED) often show multiple regions of signal change, or "networks," rather than highly focal ones. Effective connectivity approaches like Dynamic Causal Modeling (DCM) applied to fMRI data potentially offer a framework to address which brain regions drive the generation of seizures and IED within an epileptic network. Here, we present a first attempt to validate DCM on EEG-fMRI data in one patient affected by frontal lobe epilepsy. Pre-surgical EEG-fMRI demonstrated two distinct clusters of blood oxygenation level dependent (BOLD) signal increases linked to IED, one located in the left frontal pole and the other in the ipsilateral dorso-lateral frontal cortex. DCM of the IED-related BOLD signal favored a model corresponding to the left dorso-lateral frontal cortex as the driver of changes in the fronto-polar region. The validity of DCM was supported by: (a) the results of two different non-invasive analyses obtained on the same dataset: EEG source imaging (ESI) and "psycho-physiological interaction" analysis; (b) the failure of a first surgical intervention limited to the fronto-polar region; (c) the results of the intracranial EEG monitoring performed after the first surgical intervention, confirming a SOZ located over the dorso-lateral frontal cortex. These results add evidence that EEG-fMRI together with advanced methods of BOLD signal analysis is a promising tool that can give relevant information within the epilepsy surgery diagnostic work-up.

  6. Signal processing and neural network toolbox and its application to failure diagnosis and prognosis

    NASA Astrophysics Data System (ADS)

    Tu, Fang; Wen, Fang; Willett, Peter K.; Pattipati, Krishna R.; Jordan, Eric H.

    2001-07-01

    Many systems are composed of components equipped with self-testing capability; however, if the system is complex, involves feedback, and the self-testing itself may occasionally be faulty, tracing faults to a single or multiple causes is difficult. Moreover, many sensors are incapable of reliable decision-making on their own. In such cases, a signal processing front-end that can match inference needs will be very helpful. This work is concerned with providing an object-oriented simulation environment for signal processing and neural network-based fault diagnosis and prognosis. In the toolbox, we implemented a wide range of spectral and statistical manipulation methods, such as filters, harmonic analyzers, transient detectors, and multi-resolution decomposition, to extract features for failure events from data collected by sensors. We then evaluated multiple learning paradigms for general classification, diagnosis and prognosis. The network models evaluated include Restricted Coulomb Energy (RCE) Neural Network, Learning Vector Quantization (LVQ), Decision Trees (C4.5), Fuzzy Adaptive Resonance Theory (FuzzyArtmap), Linear Discriminant Rule (LDR), Quadratic Discriminant Rule (QDR), Radial Basis Functions (RBF), Multiple Layer Perceptrons (MLP) and Single Layer Perceptrons (SLP). Validation techniques, such as N-fold cross-validation and bootstrap techniques, are employed for evaluating the robustness of the network models. The trained networks are evaluated for their performance using test data on the basis of percent error rates obtained via cross-validation, time efficiency, and generalization ability to unseen faults. Finally, the use of neural networks for the prediction of the residual life of turbine blades with thermal barrier coatings is described and the results are shown. The neural network toolbox has also been applied to fault diagnosis in mixed-signal circuits.
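
    A hedged sketch of the evaluation loop described above (the features and classifier set here are generic stand-ins, not the toolbox's RCE or FuzzyArtmap models): several classifiers are compared on extracted fault features via N-fold cross-validation and their percent error rates are reported.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                            QuadraticDiscriminantAnalysis)
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

# Placeholder fault-feature data standing in for spectral/statistical features
X, y = make_classification(n_samples=300, n_features=10, n_informative=6,
                           n_classes=3, random_state=0)

models = {
    "MLP": MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0),
    "Decision tree (C4.5-like)": DecisionTreeClassifier(random_state=0),
    "LDR": LinearDiscriminantAnalysis(),
    "QDR": QuadraticDiscriminantAnalysis(),
}
for name, clf in models.items():
    acc = cross_val_score(clf, X, y, cv=5)            # 5-fold cross-validation
    print(f"{name}: error rate = {100 * (1 - acc.mean()):.1f}%")
```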

  7. Garonne River monitoring from Signal-to-Noise Ratio data collected by a single geodetic receiver

    NASA Astrophysics Data System (ADS)

    Roussel, Nicolas; Frappart, Frédéric; Darrozes, José; Ramillien, Guillaume; Bonneton, Philippe; Bonneton, Natalie; Detandt, Guillaume; Roques, Manon; Orseau, Thomas

    2016-04-01

    GNSS-Reflectometry (GNSS-R) altimetry has demonstrated a strong potential for water level monitoring over the last decades. The Interference Pattern Technique (IPT), based on the analysis of the Signal-to-Noise Ratio (SNR) estimated by a GNSS receiver, presents the main advantage of being applicable everywhere using a single geodetic antenna and a classical GNSS receiver. This technique has already been tested in various configurations of acquisition of surface-reflected GNSS signals, with an accuracy of a few centimeters. Nevertheless, the classical SNR analysis method used to estimate the variations of the reflecting surface height h(t) has a limited domain of validity, because the variation rate dh/dt(t) is assumed to be negligible. In [1], the authors solve this problem with a "dynamic SNR method" that takes the dynamics of the surface into account to jointly estimate h(t) and dh/dt(t) over areas characterized by high tide amplitudes. Although the performance of this dynamic SNR method is already well established for ocean monitoring [1], it had not been validated in continental areas (i.e., river monitoring). We carried out a field study over 3 days in August and September 2015, using a GNSS antenna to measure the water level variations in the Garonne River (France) at Podensac, located 140 km downstream of the estuary mouth. At this site, the semi-diurnal tide amplitude reaches ~5 m. The antenna was located ~10 m above the water surface, and reflections of the GNSS electromagnetic waves on the Garonne River occur up to 140 m from the antenna. Both the classical and the dynamic SNR methods are tested and their results are compared. [1] N. Roussel, G. Ramillien, F. Frappart, J. Darrozes, A. Gay, R. Biancale, N. Striebig, V. Hanquiez, X. Bertin, D. Allain: "Sea level monitoring and sea state estimate using a single geodetic receiver", Remote Sensing of Environment 171 (2015) 261-277.
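
    A sketch of the classical (static) SNR analysis that the record contrasts with the dynamic method of [1]; the geometry and noise level below are assumed for illustration. The detrended SNR oscillates as cos(4πh/λ · sin(elev) + φ), so the dominant frequency of the SNR versus sin(elevation) yields the reflector height h, provided dh/dt is negligible.

```python
import numpy as np
from scipy.signal import lombscargle

lam = 0.1903                                   # GPS L1 wavelength [m]
elev = np.deg2rad(np.linspace(5, 25, 400))     # satellite elevation angles for one arc
h_true = 10.0                                  # simulated antenna height above water [m]
snr = np.cos(4 * np.pi * h_true / lam * np.sin(elev)) + 0.1 * np.random.randn(elev.size)

x = np.sin(elev)                               # the oscillation is periodic in sin(elevation)
heights = np.linspace(2, 20, 2000)             # candidate reflector heights [m]
omega = 4 * np.pi * heights / lam              # corresponding angular frequencies
pgram = lombscargle(x, snr - snr.mean(), omega)
print("estimated h:", heights[np.argmax(pgram)], "m")   # close to 10 m when dh/dt ~ 0
```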

  8. Cluster mass inference via random field theory.

    PubMed

    Zhang, Hui; Nichols, Thomas E; Johnson, Timothy D

    2009-01-01

    Cluster extent and voxel intensity are two widely used statistics in neuroimaging inference. Cluster extent is sensitive to spatially extended signals while voxel intensity is better for intense but focal signals. In order to leverage strength from both statistics, several nonparametric permutation methods have been proposed to combine the two methods. Simulation studies have shown that of the different cluster permutation methods, the cluster mass statistic is generally the best. However, to date, there is no parametric cluster mass inference available. In this paper, we propose a cluster mass inference method based on random field theory (RFT). We develop this method for Gaussian images, evaluate it on Gaussian and Gaussianized t-statistic images and investigate its statistical properties via simulation studies and real data. Simulation results show that the method is valid under the null hypothesis and demonstrate that it can be more powerful than the cluster extent inference method. Further, analyses with a single subject and a group fMRI dataset demonstrate better power than traditional cluster size inference, and good accuracy relative to a gold-standard permutation test.
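
    A minimal sketch of the cluster mass statistic itself (one common definition is assumed here: the sum of statistic values in excess of the cluster-forming threshold within each connected cluster); the RFT-based p-value computation of the paper is not reproduced.

```python
import numpy as np
from scipy import ndimage

def cluster_masses(stat_img, threshold):
    """Mass of each suprathreshold cluster in a statistic image."""
    supra = stat_img > threshold
    labels, n_clusters = ndimage.label(supra)              # connected suprathreshold clusters
    excess = stat_img - threshold
    return np.array([excess[labels == k].sum() for k in range(1, n_clusters + 1)])

stat_img = np.random.randn(32, 32, 32)                     # placeholder statistic image
print(cluster_masses(stat_img, threshold=2.3)[:5])         # masses of the first few clusters
```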

  9. Reliability and validity of an audio signal modified shuttle walk test.

    PubMed

    Singla, Rupak; Rai, Richa; Faye, Abhishek Anil; Jain, Anil Kumar; Chowdhury, Ranadip; Bandyopadhyay, Debdutta

    2017-01-01

    The audio signal in the conventionally accepted protocol of the shuttle walk test (SWT) is not well understood by patients, and modification of the audio signal may improve the performance of the test. The aim of this study is to assess the validity and reliability of an audio signal modified SWT, called the Singla-Richa modified SWT (SWTSR), in healthy normal adults. In the SWTSR, the audio signal was modified by adding reverse counting to it. A total of 54 healthy normal adults underwent the conventional SWT (CSWT) once and the SWTSR twice on the same day. Validity was assessed by comparing outcomes of the SWTSR to outcomes of the CSWT using the Pearson correlation coefficient and a Bland-Altman plot. Test-retest reliability of the SWTSR was assessed using the intraclass correlation coefficient (ICC). The acceptability of the modified test in comparison to the conventional test was assessed using a Likert scale. The distance walked (mean ± standard deviation) in the CSWT and the SWTSR test was 853.33 ± 217.33 m and 857.22 ± 219.56 m, respectively (Pearson correlation coefficient 0.98; P < 0.001), indicating the SWTSR to be a valid test. The SWTSR was found to be a reliable test, with an ICC of 0.98 (95% confidence interval: 0.97-0.99). The acceptability of the SWTSR was significantly higher than that of the CSWT. The SWTSR, with its audio signal modified by reverse counting, is a reliable as well as a valid test when compared with the CSWT in healthy normal adults, and it is better understood by subjects than the CSWT.

  10. The Novel Nonlinear Adaptive Doppler Shift Estimation Technique and the Coherent Doppler Lidar System Validation Lidar

    NASA Technical Reports Server (NTRS)

    Beyon, Jeffrey Y.; Koch, Grady J.

    2006-01-01

    The signal processing aspect of a 2-μm wavelength coherent Doppler lidar system under development at NASA Langley Research Center in Virginia is investigated in this paper. The lidar system is named VALIDAR (validation lidar), and its signal processing program estimates and displays various wind parameters in real time as data acquisition occurs. The goal is to improve the quality of the current estimates, such as power, Doppler shift, wind speed, and wind direction, especially in the low signal-to-noise-ratio (SNR) regime. A novel Nonlinear Adaptive Doppler Shift Estimation Technique (NADSET) is developed for this purpose, and its performance is analyzed using wind data acquired over a long period of time by VALIDAR. The quality of Doppler shift and power estimates by conventional Fourier-transform-based spectrum estimation methods deteriorates rapidly as SNR decreases. NADSET compensates for such deterioration in the quality of wind parameter estimates by adaptively utilizing the statistics of Doppler shift estimates in the strong-SNR range and identifying sporadic range bins where good Doppler shift estimates are found. The effectiveness of NADSET is established by comparing the trend of wind parameters with and without NADSET applied to the long-period lidar return data.

  11. Predicting Pharmacodynamic Drug-Drug Interactions through Signaling Propagation Interference on Protein-Protein Interaction Networks.

    PubMed

    Park, Kyunghyun; Kim, Docyong; Ha, Suhyun; Lee, Doheon

    2015-01-01

    As pharmacodynamic drug-drug interactions (PD DDIs) could lead to severe adverse effects in patients, it is important to identify potential PD DDIs in drug development. The signaling starting from drug targets is propagated through protein-protein interaction (PPI) networks. PD DDIs could occur by close interference on the same targets or within the same pathways as well as distant interference through cross-talking pathways. However, most of the previous approaches have considered only close interference by measuring distances between drug targets or comparing target neighbors. We have applied a random walk with restart algorithm to simulate signaling propagation from drug targets in order to capture the possibility of their distant interference. Cross validation with DrugBank and Kyoto Encyclopedia of Genes and Genomes DRUG shows that the proposed method outperforms the previous methods significantly. We also provide a web service with which PD DDIs for drug pairs can be analyzed at http://biosoft.kaist.ac.kr/targetrw.
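
    A hedged sketch of the propagation step only (the PPI network, drug-target sets, and scoring below are placeholders, not the authors' data or web service): a random walk with restart diffuses signal from each drug's targets over the network, and the overlap of the two steady-state distributions can be scored as a proxy for distant interference.

```python
import numpy as np

def random_walk_with_restart(A, seeds, restart=0.3, tol=1e-8):
    """Steady-state visiting probabilities of an RWR started from 'seeds'."""
    W = A / A.sum(axis=0, keepdims=True)        # column-normalized transition matrix
    p0 = np.zeros(A.shape[0])
    p0[seeds] = 1.0 / len(seeds)                # restart distribution on the drug targets
    p = p0.copy()
    while True:
        p_new = (1 - restart) * W @ p + restart * p0
        if np.abs(p_new - p).sum() < tol:
            return p_new
        p = p_new

A = np.array([[0, 1, 1, 0],                     # toy undirected PPI adjacency matrix
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
p_drug1 = random_walk_with_restart(A, seeds=[0])
p_drug2 = random_walk_with_restart(A, seeds=[3])
print("interference score:", float(p_drug1 @ p_drug2))   # simple overlap measure
```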

  12. Optimization of a Precolumn OPA Derivatization HPLC Assay for Monitoring of l-Asparagine Depletion in Serum during l-Asparaginase Therapy.

    PubMed

    Zhang, Mei; Zhang, Yong; Ren, Siqi; Zhang, Zunjian; Wang, Yongren; Song, Rui

    2018-06-06

    A method for monitoring l-asparagine (ASN) depletion in patients' serum using reversed-phase high-performance liquid chromatography with precolumn o-phthalaldehyde and ethanethiol (ET) derivatization is described. In order to improve the signal and stability of the analytes, several important factors, including the precipitant reagent, derivatization conditions and detection wavelengths, were optimized. The recovery of the analytes in the biological matrix was highest when 4% sulfosalicylic acid (1:1, v/v) was used as the precipitant reagent. Optimal fluorescence detection parameters were determined as λex = 340 nm and λem = 444 nm for maximal signal. The signal of the analytes was highest when the reagent ET and borate buffer of pH 9.9 were used in the derivatization solution, and the corresponding derivative products were stable for up to 19 h. The validated method was successfully applied to monitor ASN depletion and l-aspartic acid, l-glutamine and l-glutamic acid levels in pediatric patients during l-asparaginase therapy.

  13. A non-contact method based on multiple signal classification algorithm to reduce the measurement time for accurately heart rate detection

    NASA Astrophysics Data System (ADS)

    Bechet, P.; Mitran, R.; Munteanu, M.

    2013-08-01

    Non-contact methods for the assessment of vital signs are of great interest to specialists due to the benefits obtained in both medical and special applications, such as those for surveillance, monitoring, and search and rescue. This paper investigates the possibility of implementing a digital processing algorithm based on MUSIC (Multiple Signal Classification) parametric spectral estimation in order to reduce the observation time needed to accurately measure the heart rate. It demonstrates that, by properly dimensioning the signal subspace, the MUSIC algorithm can be optimized to accurately assess the heart rate during an 8-28 s time interval. The validation of the processing algorithm's performance was achieved by minimizing the mean error of the heart rate after performing simultaneous comparative measurements on several subjects. In order to calculate the error, the reference heart rate was measured using a classic measurement system based on direct contact.
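
    An illustrative MUSIC sketch (the signal model, snapshot length, and subspace dimension are assumptions, not the paper's configuration): the covariance of overlapping snapshots is eigendecomposed, and the pseudospectrum is evaluated only over the physiological heart-rate band so that a short observation window can suffice.

```python
import numpy as np

def music_pseudospectrum(x, fs, freqs, m=40, n_sources=2):
    N = len(x)
    X = np.array([x[i:i + m] for i in range(N - m)]).T     # snapshot matrix (m x snapshots)
    R = X @ X.conj().T / X.shape[1]                        # sample covariance
    eigval, eigvec = np.linalg.eigh(R)                     # ascending eigenvalues
    En = eigvec[:, :m - n_sources]                         # noise subspace
    k = np.arange(m)
    P = []
    for f in freqs:
        a = np.exp(2j * np.pi * f / fs * k)                # steering vector at frequency f
        P.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return np.array(P)

fs = 50.0                                                  # sample rate [Hz]
t = np.arange(0, 10, 1 / fs)                               # 10 s observation window
x = np.sin(2 * np.pi * 1.2 * t) + 0.5 * np.random.randn(t.size)   # ~72 bpm component + noise
freqs = np.linspace(0.8, 2.5, 500)                         # 48-150 bpm search band
f_hat = freqs[np.argmax(music_pseudospectrum(x, fs, freqs))]
print("estimated heart rate:", 60 * f_hat, "bpm")
```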

  14. Characterizing the spatial variability of local and background concentration signals for air pollution at the neighbourhood scale

    NASA Astrophysics Data System (ADS)

    Shairsingh, Kerolyn K.; Jeong, Cheol-Heon; Wang, Jonathan M.; Evans, Greg J.

    2018-06-01

    Vehicle emissions represent a major source of air pollution in urban districts, producing highly variable concentrations of some pollutants within cities. The main goal of this study was to identify a deconvolution method to characterize variability in local, neighbourhood and regional background concentration signals. This method was validated by examining how traffic-related and non-traffic-related sources influenced the different signals. Sampling with a mobile monitoring platform was conducted across the Greater Toronto Area over a seven-day period during summer 2015. This mobile monitoring platform was equipped with instruments for measuring a wide range of pollutants at time resolutions of 1 s (ultrafine particles, black carbon) to 20 s (nitric oxide, nitrogen oxides). The monitored neighbourhoods were selected based on their land use categories (e.g. industrial, commercial, parks and residential areas). The high time-resolution data allowed pollutant concentrations to be separated into signals representing background and local concentrations. The background signals were determined using a spline of minimums; local signals were derived by subtracting the background concentration from the total concentration. Our study showed that temporal scales of 500 s and 2400 s were associated with the neighbourhood and regional background signals, respectively. The percent contribution of the pollutant concentration attributed to local signals was highest for nitric oxide (NO) (37-95%) and lowest for ultrafine particles (9-58%); the ultrafine particles were predominantly regional (32-87%) in origin on these days. Local concentrations showed stronger associations than total concentrations with traffic intensity in a 100 m buffer (ρ: 0.21-0.44). The neighbourhood-scale signal also showed stronger associations with industrial facilities than the total concentrations. The fact that the different signals show stronger associations with different land uses suggests that resolving the ambient concentrations in this way differentiates which emission sources drive the variability in each signal. A further benefit of this deconvolution method is that it may reduce exposure misclassification when coupled with predictive models.
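
    A hedged sketch of the background/local separation (the window length follows the regional time scale quoted above, but the smoothing details and synthetic trace are assumptions): the regional background is taken as a smooth curve through rolling minimums of the 1 Hz series, and the local signal is the remainder after subtraction.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline
from scipy.ndimage import minimum_filter1d

def split_local_background(t, conc, window_s=2400, smooth=None):
    rolling_min = minimum_filter1d(conc, size=window_s, mode="nearest")
    background = UnivariateSpline(t, rolling_min, s=smooth)(t)   # spline of minimums
    background = np.minimum(background, conc)                    # keep background <= total
    local = conc - background
    return local, background

t = np.arange(0, 6 * 3600)                                       # 6 h of 1 Hz data [s]
conc = 10 + 5 * np.sin(t / 4000) + np.random.lognormal(0, 1, t.size)  # synthetic NO-like trace
local, background = split_local_background(t, conc)
print("local share of total: %.0f%%" % (100 * local.sum() / conc.sum()))
```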

  15. Is Abdominal Fetal Electrocardiography an Alternative to Doppler Ultrasound for FHR Variability Evaluation?

    PubMed Central

    Jezewski, Janusz; Wrobel, Janusz; Matonia, Adam; Horoba, Krzysztof; Martinek, Radek; Kupka, Tomasz; Jezewski, Michal

    2017-01-01

    Great expectations are connected with the application of indirect fetal electrocardiography (FECG), especially for home telemonitoring of pregnancy. Evaluation of fetal heart rate (FHR) variability, when determined from FECG, uses the same criteria as for the FHR signal acquired classically through the ultrasound Doppler method (US). Therefore, the equivalence of the two methods has to be confirmed, both in terms of recognizing classical FHR patterns (baseline, accelerations/decelerations (A/D), long-term variability (LTV)) and in terms of evaluating FHR variability with beat-to-beat accuracy (short-term variability, STV). The research material consisted of recordings collected from 60 patients in physiological and complicated pregnancy. FHR signals of at least 30 min duration were acquired dually, using two systems for fetal and maternal monitoring based on the US and FECG methods. Recordings were retrospectively divided into normal (41) and abnormal (19) fetal outcomes. A complex process of data synchronization and validation was performed. The low level of signal loss obtained (4.5% for the US and 1.8% for the FECG method) made it possible to perform both a direct comparison of the FHR signals and an indirect one based on clinically relevant parameters. The direct comparison showed that there is no measurement bias between the acquisition methods, whereas the mean absolute difference, important for both visual and computer-aided signal analysis, was equal to 1.2 bpm. Such low differences do not affect the visual assessment of the FHR signal. However, in the indirect comparison, inconsistencies of several percent were noted. This mainly affects the acceleration (7.8%) and particularly the deceleration (54%) patterns. In the signals acquired using electrocardiography, the obtained STV and LTV indices showed significant overestimation, by 10% and 50% respectively. It also turned out that the ability of the clinical parameters to distinguish between normal and abnormal groups does not depend on the acquisition method. The obtained results prove that abdominal FECG, considered as an alternative to the ultrasound approach, does not change the interpretation of the FHR signal, which was confirmed during both visual assessment and automated analysis. PMID:28559852

  16. Hetero-enzyme-based two-round signal amplification strategy for trace detection of aflatoxin B1 using an electrochemical aptasensor.

    PubMed

    Zheng, Wanli; Teng, Jun; Cheng, Lin; Ye, Yingwang; Pan, Daodong; Wu, Jingjing; Xue, Feng; Liu, Guodong; Chen, Wei

    2016-06-15

    An electrochemical aptasensor for trace detection of aflatoxin B1 (AFB1) was developed using an aptamer as the recognition unit while adopting a telomerase- and EXO III-based two-round signal amplification strategy as the signal enhancement unit. The telomerase amplification was used to elongate the ssDNA probes on the surface of gold nanoparticles, by which the signal response range of the signal-off model electrochemical aptasensor could be correspondingly enlarged. Then, the EXO III amplification was used to hydrolyze the 3'-end of the dsDNA after the recognition of target AFB1, which caused the release of bound AFB1 into the sensing system, where it participated in the next recognition-sensing cycle. With this two-round signal-amplified electrochemical aptasensor, target AFB1 was successfully measured at trace concentrations, with an excellent detection limit of 0.6 × 10⁻⁴ ppt and satisfactory specificity due to the excellent affinity of the aptamer for AFB1. Based on this designed two-round signal amplification strategy, both the sensing range and the detection limit were greatly improved. This proposed ultrasensitive electrochemical aptasensor method was also validated by comparison with classic instrumental methods. Importantly, this hetero-enzyme-based two-round signal-amplified electrochemical aptasensor offers a promising protocol for ultrasensitive detection of AFB1 and other mycotoxins by replacing the core recognition sequence of the aptamer. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. THD-Module Extractor: An Application for CEN Module Extraction and Interesting Gene Identification for Alzheimer's Disease.

    PubMed

    Kakati, Tulika; Kashyap, Hirak; Bhattacharyya, Dhruba K

    2016-11-30

    There exist many tools and methods for construction of co-expression networks from gene expression data and for extraction of densely connected gene modules. In this paper, a method is introduced to construct a co-expression network and to extract co-expressed modules having high biological significance. The proposed method has been validated on several well-known microarray datasets extracted from a diverse set of species, using statistical measures such as p and q values. The modules obtained in these studies are found to be biologically significant based on Gene Ontology enrichment analysis, pathway analysis, and KEGG enrichment analysis. Further, the method was applied to an Alzheimer's disease dataset and some interesting genes were found, which have high semantic similarity among them but are not significantly correlated in terms of expression similarity. Some of these interesting genes, such as MAPT, CASP2, and PSEN2, are linked with important aspects of Alzheimer's disease, such as dementia, increased cell death, and deposition of amyloid-beta proteins in Alzheimer's disease brains. The biological pathways associated with Alzheimer's disease, such as Wnt signaling, apoptosis, p53 signaling, and Notch signaling, incorporate these interesting genes. The proposed method is also evaluated against the existing literature.

  18. THD-Module Extractor: An Application for CEN Module Extraction and Interesting Gene Identification for Alzheimer’s Disease

    PubMed Central

    Kakati, Tulika; Kashyap, Hirak; Bhattacharyya, Dhruba K.

    2016-01-01

    There exist many tools and methods for construction of co-expression networks from gene expression data and for extraction of densely connected gene modules. In this paper, a method is introduced to construct a co-expression network and to extract co-expressed modules having high biological significance. The proposed method has been validated on several well-known microarray datasets extracted from a diverse set of species, using statistical measures such as p and q values. The modules obtained in these studies are found to be biologically significant based on Gene Ontology enrichment analysis, pathway analysis, and KEGG enrichment analysis. Further, the method was applied to an Alzheimer's disease dataset and some interesting genes were found, which have high semantic similarity among them but are not significantly correlated in terms of expression similarity. Some of these interesting genes, such as MAPT, CASP2, and PSEN2, are linked with important aspects of Alzheimer's disease, such as dementia, increased cell death, and deposition of amyloid-beta proteins in Alzheimer's disease brains. The biological pathways associated with Alzheimer's disease, such as Wnt signaling, apoptosis, p53 signaling, and Notch signaling, incorporate these interesting genes. The proposed method is also evaluated against the existing literature. PMID:27901073

  19. Aptamer-mediated colorimetric method for rapid and sensitive detection of chloramphenicol in food.

    PubMed

    Yan, Chao; Zhang, Jing; Yao, Li; Xue, Feng; Lu, Jianfeng; Li, Baoguang; Chen, Wei

    2018-09-15

    We report an aptamer-mediated colorimetric method for sensitive detection of chloramphenicol (CAP). The aptamer of CAP is immobilized by hybridization with a pre-immobilized capture probe in a microtiter plate. Horseradish peroxidase (HRP) is covalently attached to the aptamer via the biotin-streptavidin system for signal production. CAP preferentially binds to the aptamer owing to its high binding affinity, which leads to the release of the aptamer and HRP and thus affects the optical signal intensity. Quantitative determination of CAP is successfully achieved over the wide range from 0.001 to 1000 ng/mL with a detection limit of 0.0031 ng/mL, which is more sensitive than traditional immunoassays. The method is further validated by measuring the recovery of CAP spiked into two different food matrices (honey and fish). The aptamer-mediated colorimetric method can be a useful protocol for rapid and sensitive screening of CAP, and may be used as an alternative to traditional immunoassays. Copyright © 2018 Elsevier Ltd. All rights reserved.

  20. A Neural Network-Based Gait Phase Classification Method Using Sensors Equipped on Lower Limb Exoskeleton Robots

    PubMed Central

    Jung, Jun-Young; Heo, Wonho; Yang, Hyundae; Park, Hyunsub

    2015-01-01

    An exact classification of different gait phases is essential to enable the control of exoskeleton robots and detect the intentions of users. We propose a gait phase classification method based on neural networks using sensor signals from lower limb exoskeleton robots. In such robots, foot sensors with force-sensing resistors are commonly used to classify gait phases. We describe classifiers that use the orientation of each lower limb segment and the angular velocities of the joints to output the current gait phase. Experiments to obtain the input signals and desired outputs for the learning and validation process are conducted, and two neural network methods (a multilayer perceptron and nonlinear autoregressive with external inputs (NARX)) are used to develop an optimal classifier. Offline and online evaluations using four criteria are used to compare the performance of the classifiers. The proposed NARX-based method exhibits sufficiently good performance to replace foot sensors as a means of classifying gait phases. PMID:26528986

  1. A Neural Network-Based Gait Phase Classification Method Using Sensors Equipped on Lower Limb Exoskeleton Robots.

    PubMed

    Jung, Jun-Young; Heo, Wonho; Yang, Hyundae; Park, Hyunsub

    2015-10-30

    An exact classification of different gait phases is essential to enable the control of exoskeleton robots and detect the intentions of users. We propose a gait phase classification method based on neural networks using sensor signals from lower limb exoskeleton robots. In such robots, foot sensors with force-sensing resistors are commonly used to classify gait phases. We describe classifiers that use the orientation of each lower limb segment and the angular velocities of the joints to output the current gait phase. Experiments to obtain the input signals and desired outputs for the learning and validation process are conducted, and two neural network methods (a multilayer perceptron and nonlinear autoregressive with external inputs (NARX)) are used to develop an optimal classifier. Offline and online evaluations using four criteria are used to compare the performance of the classifiers. The proposed NARX-based method exhibits sufficiently good performance to replace foot sensors as a means of classifying gait phases.

  2. An Improved Time-Frequency Analysis Method in Interference Detection for GNSS Receivers

    PubMed Central

    Sun, Kewen; Jin, Tian; Yang, Dongkai

    2015-01-01

    In this paper, an improved joint time-frequency (TF) analysis method based on a reassigned smoothed pseudo Wigner–Ville distribution (RSPWVD) is proposed for interference detection in Global Navigation Satellite System (GNSS) receivers. In the RSPWVD, a two-dimensional low-pass smoothing function is introduced to eliminate the cross-terms present in the quadratic TF distribution, and at the same time the reassignment method is adopted to improve the TF concentration properties of the auto-terms of the signal components. The proposed interference detection method is evaluated by experiments on GPS L1 signals in disturbing scenarios and compared with state-of-the-art interference detection approaches. The analysis results show that the proposed technique effectively overcomes the cross-term problem while preserving good TF localization properties, and that it enhances the interference detection performance of GNSS receivers, particularly in jamming environments. PMID:25905704

  3. Evaluation of arterial stiffness by finger-toe pulse wave velocity: optimization of signal processing and clinical validation.

    PubMed

    Obeid, Hasan; Khettab, Hakim; Marais, Louise; Hallab, Magid; Laurent, Stéphane; Boutouyrie, Pierre

    2017-08-01

    Carotid-femoral pulse wave velocity (cf-PWV) is the gold standard for measuring aortic stiffness. Finger-toe PWV (ft-PWV) is a simpler noninvasive method for measuring arterial stiffness. Although the validity of the method has been previously assessed, its accuracy can be improved. ft-PWV is determined from a patented height chart for the distance and from the pulse transit time (PTT) between the finger and toe pulpar artery signals (ft-PTT). The objective of the first study, performed in 66 patients, was to compare different algorithms (intersecting tangents, maximum of the second derivative, 10% threshold and cross-correlation) for determining the foot of the arterial pulse wave, and thus the ft-PTT. The objective of the second study, performed in 101 patients, was to investigate different signal processing chains to improve the concordance of ft-PWV with the gold-standard cf-PWV. ft-PWV was calculated using the four algorithms. The best correlations relating ft-PWV to cf-PWV, and ft-PTT to carotid-femoral PTT, were obtained with the maximum of the second derivative algorithm [PWV: r = 0.56, P < 0.0001, root mean square error (RMSE) = 0.9 m/s; PTT: r = 0.61, P < 0.001, RMSE = 12 ms]. The three other algorithms showed lower correlations. The correlation between ft-PTT and carotid-femoral PTT further improved (r = 0.81, P < 0.0001, RMSE = 5.4 ms) when the maximum of the second derivative algorithm was combined with an optimized signal processing chain. Selecting the maximum of the second derivative algorithm for detecting the foot of the pressure waveform, and combining it with an optimized signal processing chain, improved the accuracy of ft-PWV measurement in the current population sample. This makes ft-PWV very promising for simple noninvasive determination of aortic stiffness in clinical practice.
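
    A minimal sketch of the foot-detection rule that performed best above (the sampling rate, pulse shapes and distance are placeholders, and no filtering chain is reproduced): within each beat, the foot is taken at the maximum of the second derivative, and ft-PTT is the difference between the two foot times.

```python
import numpy as np

def foot_time(pulse, fs):
    """Foot of the pulse wave: time of the maximum of the second derivative."""
    d2 = np.gradient(np.gradient(pulse))
    return np.argmax(d2) / fs

def toy_pulse(t, t0, tau=0.1):
    """Toy pulse with a fast upstroke at t0 and a slow decay (placeholder waveform)."""
    s = np.clip((t - t0) / tau, 0.0, None)
    return s * np.exp(-s)

fs = 1000.0                                    # sample rate [Hz] (assumption)
t = np.arange(0, 1, 1 / fs)
finger = toy_pulse(t, 0.05)                    # finger pulse
toe = toy_pulse(t, 0.20)                       # toe pulse, arriving later
ft_ptt = foot_time(toe, fs) - foot_time(finger, fs)
distance = 1.50                                # height-chart finger-toe distance [m] (placeholder)
print("ft-PTT: %.0f ms, ft-PWV: %.1f m/s" % (1000 * ft_ptt, distance / ft_ptt))
```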

  4. Towards Hydrological Applications of Stationary and Roving Cosmic-Ray Neutron Sensors in the Light of Spatial Sensitivity

    NASA Astrophysics Data System (ADS)

    Schrön, M.; Köhli, M.; Rosolem, R.; Baroni, G.; Bogena, H. R.; Brenner, J.; Zink, M.; Rebmann, C.; Oswald, S. E.; Dietrich, P.; Samaniego, L. E.; Zacharias, S.

    2017-12-01

    Cosmic-Ray Neutron Sensing (CRNS) has become a promising and unique method to monitor water content at an effective scale of tens of hectares in area and tens of centimeters in depth. The large footprint is particularly beneficial for hydrological models that operate at these scales. However, reliable estimates of average soil moisture require detailed knowledge about the sensitivity of the signal to spatial inhomogeneity within the footprint. From this perspective, the large integrating volume challenges data interpretation, validation, and calibration of the sensor. Can we still generate reliable data for hydrological applications? One of the main challenges of recent years has been to find out where the signal comes from and how sensitive it is to spatial variability of moisture. Neutron physics simulations have shown that the neutron signal represents a non-linearly weighted average of soil water in the footprint. With the help of so-called spatial sensitivity functions it is now possible to quantify the contribution of certain regions to the neutron signal. We present examples of how this knowledge can help (1) to understand the contribution of irrigated and sealed areas in the footprint, (2) to improve calibration and validation of the method, and (3) to reveal excess water storages, e.g. from ponding or rain interception. The spatial sensitivity concept can also explain the influence of dry roads on the neutron signal. Mobile surveys with the CRNS rover have been a common practice to measure soil moisture patterns at the kilometer scale. However, dedicated experiments across agricultural fields in Germany and England have revealed that field soil moisture is significantly underestimated when the sensor is moved on roads. We show that knowledge about the spatial sensitivity helps to correct survey data for these effects, depending on road material, width, and distance from the road. The recent methodological advances allow for improved signal interpretability and for more accurate derivation of hydrologically relevant features from CRNS data. In this way, the presented methods are an essential contribution to generating reliable CRNS products and an example of how combined efforts from the CRNS community help to turn the instrument into a highly capable tool for hydrological applications.

  5. An adaptive demodulation approach for bearing fault detection based on adaptive wavelet filtering and spectral subtraction

    NASA Astrophysics Data System (ADS)

    Zhang, Yan; Tang, Baoping; Liu, Ziran; Chen, Rengxiang

    2016-02-01

    Fault diagnosis of rolling element bearings is important for improving mechanical system reliability and performance. Vibration signals contain a wealth of complex information useful for state monitoring and fault diagnosis. However, any fault-related impulses in the original signal are often severely tainted by various noise sources and by interfering vibrations caused by other machine elements. Narrow-band amplitude demodulation has been an effective technique to detect bearing faults by identifying bearing fault characteristic frequencies. To achieve this, the key step is to remove the corrupting noise and interference, and to enhance the weak signatures of the bearing fault. In this paper, a new method based on adaptive wavelet filtering and spectral subtraction is proposed for fault diagnosis in bearings. First, to eliminate the frequencies associated with interfering vibrations, the vibration signal is bandpass filtered with a Morlet wavelet filter whose parameters (i.e. center frequency and bandwidth) are selected in separate steps. An alternative and efficient method of determining the center frequency is proposed that utilizes the statistical information contained in the production functions (PFs). The bandwidth parameter is optimized using a local ‘greedy’ scheme along with a Shannon wavelet entropy criterion. Then, to further reduce the residual in-band noise in the filtered signal, a spectral subtraction procedure is elaborated after wavelet filtering. Instead of resorting to a reference signal as in the majority of papers in the literature, the new method estimates the power spectral density of the in-band noise from the associated PF. The effectiveness of the proposed method is validated using simulated data, test rig data, and vibration data recorded from the transmission system of a helicopter. The experimental results and comparisons with other methods indicate that the proposed method is an effective approach to detecting the fault-related impulses hidden in vibration signals and performs well for bearing fault diagnosis.
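
    A hedged sketch of the demodulation back-end only (the adaptive parameter selection and spectral subtraction of the record are not reproduced, and the wavelet bandwidth-to-spread relation, fault frequency and resonance are assumptions): the signal is band-pass filtered by convolution with a complex Morlet wavelet and the spectrum of the resulting envelope is inspected near the bearing fault characteristic frequency.

```python
import numpy as np

def morlet_envelope_spectrum(x, fs, f_center, bandwidth):
    sigma = 1.0 / (np.pi * bandwidth)                       # time spread from bandwidth (assumed relation)
    t = np.arange(-4 * sigma, 4 * sigma, 1 / fs)
    wavelet = np.exp(2j * np.pi * f_center * t) * np.exp(-t**2 / (2 * sigma**2))
    analytic = np.convolve(x, wavelet, mode="same")         # band-limited analytic signal
    env = np.abs(analytic) - np.abs(analytic).mean()        # envelope, mean removed
    spec = np.abs(np.fft.rfft(env)) / len(env)
    freqs = np.fft.rfftfreq(len(env), 1 / fs)
    return freqs, spec

fs = 20000.0
t = np.arange(0, 1, 1 / fs)
bpfo = 87.0                                                 # fault characteristic frequency [Hz]
bursts = np.sin(2 * np.pi * 3000 * t) * (np.mod(t, 1 / bpfo) < 0.002)   # resonance excited by impacts
x = bursts + 0.2 * np.random.randn(t.size)
freqs, spec = morlet_envelope_spectrum(x, fs, f_center=3000, bandwidth=800)
band = (freqs >= 20) & (freqs <= 500)
peak = freqs[band][np.argmax(spec[band])]
print("envelope spectrum peak: %.0f Hz (fault frequency %.0f Hz)" % (peak, bpfo))
```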

  6. EEMD-MUSIC-Based Analysis for Natural Frequencies Identification of Structures Using Artificial and Natural Excitations

    PubMed Central

    Amezquita-Sanchez, Juan P.; Romero-Troncoso, Rene J.; Osornio-Rios, Roque A.; Garcia-Perez, Arturo

    2014-01-01

    This paper presents a new EEMD-MUSIC- (ensemble empirical mode decomposition-multiple signal classification-) based methodology to identify modal frequencies in structures from free and ambient vibration signals produced by artificial and natural excitations, also considering several factors such as nonstationary effects, close modal frequencies, and noisy environments, which are common situations in which several techniques reported in the literature fail. The EEMD and MUSIC methods are used to decompose the vibration signal into a set of IMFs (intrinsic mode functions) and to identify the natural frequencies of a structure, respectively. The effectiveness of the proposed methodology has been validated and tested with synthetic signals and under real operating conditions. The experiments are focused on extracting the natural frequencies of a truss-type scaled structure and of a bridge used for both highway traffic and pedestrians. Results show the proposed methodology to be a suitable solution for natural frequency identification of structures from free and ambient vibration signals. PMID:24683346

  7. EEMD-MUSIC-based analysis for natural frequencies identification of structures using artificial and natural excitations.

    PubMed

    Camarena-Martinez, David; Amezquita-Sanchez, Juan P; Valtierra-Rodriguez, Martin; Romero-Troncoso, Rene J; Osornio-Rios, Roque A; Garcia-Perez, Arturo

    2014-01-01

    This paper presents a new EEMD-MUSIC- (ensemble empirical mode decomposition-multiple signal classification-) based methodology to identify modal frequencies in structures from free and ambient vibration signals produced by artificial and natural excitations, also considering several factors such as nonstationary effects, close modal frequencies, and noisy environments, which are common situations in which several techniques reported in the literature fail. The EEMD and MUSIC methods are used to decompose the vibration signal into a set of IMFs (intrinsic mode functions) and to identify the natural frequencies of a structure, respectively. The effectiveness of the proposed methodology has been validated and tested with synthetic signals and under real operating conditions. The experiments are focused on extracting the natural frequencies of a truss-type scaled structure and of a bridge used for both highway traffic and pedestrians. Results show the proposed methodology to be a suitable solution for natural frequency identification of structures from free and ambient vibration signals.

  8. Impact of the Test Device on the Behavior of the Acoustic Emission Signals: Contribution of the Numerical Modeling to Signal Processing

    NASA Astrophysics Data System (ADS)

    Issiaka Traore, Oumar; Cristini, Paul; Favretto-Cristini, Nathalie; Pantera, Laurent; Viguier-Pla, Sylvie

    2018-01-01

    In the context of nuclear safety experiment monitoring with the non-destructive testing method of acoustic emission, we study the impact of the test device on the interpretation of the recorded physical signals by using spectral finite element modeling. The numerical results are validated by comparison with real acoustic emission data obtained from previous experiments. The results show that several parameters can have a significant impact on acoustic wave propagation and hence on the interpretation of the physical signals. The potential position of the source mechanism, the positions of the receivers and the nature of the coolant fluid have to be taken into account in the definition of a pre-processing strategy for the real acoustic emission signals. To show the relevance of such an approach, we use the results to propose an optimization of the positions of the acoustic emission sensors in order to reduce the estimation bias of the time delay and thus improve the localization of the source mechanisms.

  9. Early Warning Signals of Ecological Transitions: Methods for Spatial Patterns

    PubMed Central

    Brock, William A.; Carpenter, Stephen R.; Ellison, Aaron M.; Livina, Valerie N.; Seekell, David A.; Scheffer, Marten; van Nes, Egbert H.; Dakos, Vasilis

    2014-01-01

    A number of ecosystems can exhibit abrupt shifts between alternative stable states. Because of their important ecological and economic consequences, recent research has focused on devising early warning signals for anticipating such abrupt ecological transitions. In particular, theoretical studies show that changes in spatial characteristics of the system could provide early warnings of approaching transitions. However, the empirical validation of these indicators lags behind their theoretical development. Here, we summarize a range of currently available spatial early warning signals, suggest potential null models to interpret their trends, and apply them to three simulated spatial data sets of systems undergoing an abrupt transition. In addition to providing a step-by-step methodology for applying these signals to spatial data sets, we propose a statistical toolbox that may be used to help detect approaching transitions in a wide range of spatial data. We hope that our methodology together with the computer codes will stimulate the application and testing of spatial early warning signals on real spatial data. PMID:24658137
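
    A sketch of two commonly used spatial indicators (spatial variance and Moran's I with rook neighbours on a regular grid); the null models and the full toolbox of the record are not reproduced, and the smoothed field below is only a toy illustration of rising spatial autocorrelation near a transition.

```python
import numpy as np

def spatial_variance(grid):
    return np.var(grid)

def morans_i(grid):
    """Moran's I with rook (4-neighbour) weights on a regular grid."""
    z = grid - grid.mean()
    num = (z[:-1, :] * z[1:, :]).sum() + (z[:, :-1] * z[:, 1:]).sum()   # N-S and E-W pairs
    n_pairs = z[:-1, :].size + z[:, :-1].size
    return (z.size / n_pairs) * num / (z**2).sum()

rng = np.random.default_rng(1)
rough = rng.normal(size=(100, 100))                  # spatially uncorrelated field
smooth = rough.copy()
for _ in range(10):                                  # crude smoothing to add spatial correlation
    smooth = 0.2 * (np.roll(smooth, 1, 0) + np.roll(smooth, -1, 0) +
                    np.roll(smooth, 1, 1) + np.roll(smooth, -1, 1)) + 0.2 * smooth
print("Moran's I rough: %.2f, smooth: %.2f" % (morans_i(rough), morans_i(smooth)))
print("spatial variance rough: %.2f, smooth: %.2f" % (spatial_variance(rough), spatial_variance(smooth)))
```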

  10. Human Age Recognition by Electrocardiogram Signal Based on Artificial Neural Network

    NASA Astrophysics Data System (ADS)

    Dasgupta, Hirak

    2016-12-01

    The objective of this work is to build a neural network function approximation model to estimate human age from the electrocardiogram (ECG) signal. The input vector of the neural network consists of the Katz fractal dimension of the ECG signal, the frequencies in the QRS complex, the sex of the subject (male or female, represented by a numeric constant), and the average distance between successive R-R peaks of the particular ECG signal. The QRS complex is detected by a short-time Fourier transform algorithm. Successive R peaks are detected by first cutting the signal into periods using an autocorrelation method and then finding the point of highest absolute value in each period. The neural network used in this problem consists of two layers, with sigmoid neurons in the input layer and a linear neuron in the output layer. The results show mean errors of -0.49, 1.03 and 0.79 years and standard deviations of the errors of 1.81, 1.77 and 2.70 years during training, cross-validation and testing with unknown data sets, respectively.
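
    A minimal sketch of one of the input features only, the Katz fractal dimension of the waveform (unit sample spacing is assumed); the QRS detection, R-R statistics and the network itself are not reproduced.

```python
import numpy as np

def katz_fd(y):
    """Katz fractal dimension: log10(n) / (log10(d/L) + log10(n))."""
    n = len(y) - 1                                        # number of steps along the curve
    dy = np.diff(y)
    L = np.sum(np.sqrt(1.0 + dy**2))                      # total curve length (unit x spacing)
    d = np.max(np.sqrt(np.arange(1, len(y))**2 + (y[1:] - y[0])**2))  # planar extent from first point
    return np.log10(n) / (np.log10(d / L) + np.log10(n))

fs = 250.0
t = np.arange(0, 2, 1 / fs)
ecg_like = np.sin(2 * np.pi * 1.2 * t) + 0.05 * np.random.randn(t.size)  # placeholder ECG segment
print("Katz fractal dimension:", katz_fd(ecg_like))
```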

  11. A wavelet-based ECG delineation algorithm for 32-bit integer online processing

    PubMed Central

    2011-01-01

    Background: Since the first well-known electrocardiogram (ECG) delineator based on Wavelet Transform (WT) presented by Li et al. in 1995, a significant research effort has been devoted to the exploitation of this promising method. Its ability to reliably delineate the major waveform components (mono- or bi-phasic P wave, QRS, and mono- or bi-phasic T wave) would make it a suitable candidate for efficient online processing of ambulatory ECG signals. Unfortunately, previous implementations of this method adopt non-linear operators such as root mean square (RMS) or floating point algebra, which are computationally demanding. Methods: This paper presents a 32-bit integer, linear algebra advanced approach to online QRS detection and P-QRS-T waves delineation of a single lead ECG signal, based on WT. Results: The QRS detector performance was validated on the MIT-BIH Arrhythmia Database (sensitivity Se = 99.77%, positive predictive value P+ = 99.86%, on 109010 annotated beats) and on the European ST-T Database (Se = 99.81%, P+ = 99.56%, on 788050 annotated beats). The ECG delineator was validated on the QT Database, showing a mean error between manual and automatic annotation below 1.5 samples for all fiducial points: P-onset, P-peak, P-offset, QRS-onset, QRS-offset, T-peak, T-offset, and a mean standard deviation comparable to other established methods. Conclusions: The proposed algorithm exhibits reliable QRS detection as well as accurate ECG delineation, in spite of a simple structure built on integer linear algebra. PMID:21457580

  12. Patient-Specific Seizure Detection in Long-Term EEG Using Signal-Derived Empirical Mode Decomposition (EMD)-based Dictionary Approach.

    PubMed

    Kaleem, Muhammad; Gurve, Dharmendra; Guergachi, Aziz; Krishnan, Sridhar

    2018-06-25

    The objective of the work described in this paper is the development of a computationally efficient methodology for patient-specific automatic seizure detection in long-term multi-channel EEG recordings. Approach: A novel patient-specific seizure detection approach based on a signal-derived Empirical Mode Decomposition (EMD)-based dictionary is proposed. For this purpose, we use an empirical framework for EMD-based dictionary creation and learning, inspired by traditional dictionary learning methods, in which the EMD-based dictionary is learned from the multi-channel EEG data being analyzed for automatic seizure detection. We present the algorithm for dictionary creation and learning, whose purpose is to learn dictionaries with a small number of atoms. Using training signals belonging to seizure and non-seizure classes, an initial dictionary, termed the raw dictionary, is formed. The atoms of the raw dictionary are composed of intrinsic mode functions obtained after decomposition of the training signals using the empirical mode decomposition algorithm. The raw dictionary is then trained using a learning algorithm, resulting in a substantial decrease in the number of atoms in the trained dictionary. The trained dictionary is then used for automatic seizure detection, such that the coefficients of orthogonal projections of test signals against the trained dictionary form the features used for classification of test signals into seizure and non-seizure classes. Thus no hand-engineered features have to be extracted from the data, as in traditional seizure detection approaches. Main results: The performance of the proposed approach is validated using the CHB-MIT benchmark database, and averaged accuracy, sensitivity and specificity values of 92.9%, 94.3% and 91.5%, respectively, are obtained using a support vector machine classifier and five-fold cross-validation. These results are compared with other approaches using the same database, and the suitability of the approach for seizure detection in long-term multi-channel EEG recordings is discussed. Significance: The proposed approach describes a computationally efficient method for automatic seizure detection in long-term multi-channel EEG recordings. The method does not rely on hand-engineered features, as are required in traditional approaches. Furthermore, the approach is suitable for scenarios where the dictionary, once formed and trained, can be used for automatic seizure detection of newly recorded data, making the approach suitable for long-term multi-channel EEG recordings. © 2018 IOP Publishing Ltd.

  13. Testing for nonlinearity in non-stationary physiological time series.

    PubMed

    Guarín, Diego; Delgado, Edilson; Orozco, Álvaro

    2011-01-01

    Testing for nonlinearity is one of the most important preprocessing steps in nonlinear time series analysis. Typically, this is done by means of the linear surrogate data methods. However, it is a known fact that the validity of the results heavily depends on the stationarity of the time series. Since most physiological signals are non-stationary, it is easy to falsely detect nonlinearity using the linear surrogate data methods. In this document, we propose a methodology to extend the procedure for generating constrained surrogate time series in order to assess nonlinearity in non-stationary data. The method is based on band-phase-randomized surrogates, which consist (contrary to the linear surrogate data methods) in randomizing only a portion of the Fourier phases, namely those in the high-frequency domain. Analysis of simulated time series showed that, in comparison with the linear surrogate data method, our method is able to discriminate between linear stationary, linear non-stationary and nonlinear time series. Applying our methodology to heart rate variability (HRV) records of five healthy patients, we found that nonlinear correlations are present in these non-stationary physiological signals.
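
    A sketch of the band-phase-randomized surrogate idea described above (the cut-off frequency and sampling rate are assumptions): only the Fourier phases above a chosen frequency are randomized, so slow non-stationary trends are preserved while possible nonlinear structure at higher frequencies is destroyed.

```python
import numpy as np

def band_phase_randomized_surrogate(x, fs, f_cut, rng=None):
    rng = np.random.default_rng(rng)
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    phases = np.angle(X)
    high = freqs > f_cut
    phases[high] = rng.uniform(0, 2 * np.pi, high.sum())   # randomize high-frequency phases only
    X_surr = np.abs(X) * np.exp(1j * phases)               # keep the amplitude spectrum intact
    return np.fft.irfft(X_surr, n=len(x))

fs = 4.0                                                   # e.g. a 4 Hz resampled HRV series (assumption)
x = np.cumsum(np.random.randn(1024)) * 0.01 + np.sin(np.arange(1024) / 40)  # toy non-stationary series
surrogate = band_phase_randomized_surrogate(x, fs, f_cut=0.1)
# A nonlinearity statistic would now be compared between x and many such surrogates.
```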

  14. Ocean Wave Separation Using CEEMD-Wavelet in GPS Wave Measurement.

    PubMed

    Wang, Junjie; He, Xiufeng; Ferreira, Vagner G

    2015-08-07

    Monitoring ocean waves plays a crucial role in, for example, coastal environment and protection studies. Traditional methods for measuring ocean waves are based on ultrasonic sensors and accelerometers. However, the Global Positioning System (GPS) has been introduced recently and has the advantage of being smaller, less expensive, and not requiring calibration in comparison with the traditional methods. Therefore, for accurately measuring ocean waves using GPS, further research on the separation of the wave signals from the vertical displacements of the GPS-mounted carrier is still necessary. In order to contribute to this topic, we present a novel method that combines complementary ensemble empirical mode decomposition (CEEMD) with a wavelet threshold denoising model (i.e., CEEMD-Wavelet). This method seeks to extract wave signals with less residual noise and without losing useful information. Compared with the wave parameters derived from the moving average method, a high-pass filter and a wave gauge, the results show that the accuracy of the wave parameters for the proposed method was improved, with errors of about 2 cm and 0.2 s for mean wave height and mean period, respectively, verifying the validity of the proposed method.

  15. Monitoring tooth profile faults in epicyclic gearboxes using synchronously averaged motor currents: Mathematical modeling and experimental validation

    NASA Astrophysics Data System (ADS)

    Ottewill, J. R.; Ruszczyk, A.; Broda, D.

    2017-02-01

    Time-varying transmission paths and inaccessibility can increase the difficulty in both acquiring and processing vibration signals for the purpose of monitoring epicyclic gearboxes. Recent work has shown that the synchronous signal averaging approach may be applied to measured motor currents in order to diagnose tooth faults in parallel shaft gearboxes. In this paper we further develop the approach, so that it may also be applied to monitor tooth faults in epicyclic gearboxes. A low-degree-of-freedom model of an epicyclic gearbox which incorporates the possibility of simulating tooth faults, as well as any subsequent tooth contact loss due to these faults, is introduced. By combining this model with a simple space-phasor model of an induction motor it is possible to show that, in theory, tooth faults in epicyclic gearboxes may be identified from motor currents. Applying the synchronous averaging approach to experimentally recorded motor currents and angular displacements from a shaft-mounted encoder validates this finding. Comparison between experiments and theory highlights the influence of operating conditions, backlash and shaft couplings on the transient response excited in the currents by the tooth fault. The results obtained suggest that the method may be a viable alternative or complement to more traditional methods for monitoring gearboxes. However, general observations also indicate that further investigations into the sensitivity and robustness of the method would be beneficial.
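
    The synchronous (time-synchronous) averaging step referred to above can be sketched as follows: each shaft revolution, delimited by encoder-derived rotation boundaries, is resampled onto a fixed angular grid and the revolutions are averaged, which suppresses components not locked to the rotation. This is a generic TSA sketch with synthetic data, not the authors' exact processing chain.

        import numpy as np

        def synchronous_average(signal, t, rev_times, samples_per_rev=512):
            """Average a signal over shaft revolutions given rotation boundary times."""
            revolutions = []
            for t0, t1 in zip(rev_times[:-1], rev_times[1:]):
                # Resample one revolution onto a uniform angular grid.
                t_grid = np.linspace(t0, t1, samples_per_rev, endpoint=False)
                revolutions.append(np.interp(t_grid, t, signal))
            return np.mean(revolutions, axis=0)      # one averaged revolution (angle domain)

        # Example with a synthetic 25 Hz shaft speed and a 50 Hz current component.
        fs = 20000.0
        t = np.arange(0, 2.0, 1 / fs)
        rev_times = np.arange(0, 2.0, 1 / 25.0)      # assumed encoder zero-crossing times
        current = np.sin(2 * np.pi * 50 * t) + 0.05 * np.random.randn(t.size)
        tsa = synchronous_average(current, t, rev_times)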

  16. Nonlinear Simulation of the Tooth Enamel Spectrum for EPR Dosimetry

    NASA Astrophysics Data System (ADS)

    Kirillov, V. A.; Dubovsky, S. V.

    2016-07-01

    Software was developed where initial EPR spectra of tooth enamel were deconvoluted based on nonlinear simulation, line shapes and signal amplitudes in the model initial spectrum were calculated, the regression coefficient was evaluated, and individual spectra were summed. Software validation demonstrated that doses calculated using it agreed excellently with the applied radiation doses and the doses reconstructed by the method of additive doses.

  17. Novel optical scanning cryptography using Fresnel telescope imaging.

    PubMed

    Yan, Aimin; Sun, Jianfeng; Hu, Zhijuan; Zhang, Jingtao; Liu, Liren

    2015-07-13

    We propose a new method, called modified optical scanning cryptography, which uses a Fresnel telescope imaging technique for encryption and decryption of remote objects. An image or object can be optically encrypted on the fly by the Fresnel telescope scanning system together with an encryption key. For image decryption, the encrypted signals are received and processed with an optical coherent heterodyne detection system. The proposed method achieves strong performance through the use of secure Fresnel telescope scanning with orthogonally polarized beams and efficient all-optical information processing. The validity of the proposed method is demonstrated by numerical simulations and experimental results.

  18. An Illumination-Adaptive Colorimetric Measurement Using Color Image Sensor

    NASA Astrophysics Data System (ADS)

    Lee, Sung-Hak; Lee, Jong-Hyub; Sohng, Kyu-Ik

    An image sensor for use as a colorimeter is characterized based on the CIE standard colorimetric observer. We use the method of least squares to derive a colorimetric characterization matrix between RGB output signals and CIE XYZ tristimulus values. This paper proposes an adaptive measuring method to obtain the chromaticity of colored scenes and illumination through a 3×3 camera transfer matrix under a certain illuminant. Camera RGB outputs, sensor status values, and photoelectric characteristics are used to obtain the chromaticity. Experimental results show that the proposed method achieves valid measuring performance.
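
    The least-squares characterization step described above amounts to solving XYZ ≈ M·RGB for a 3×3 matrix M from a set of training patches measured under the chosen illuminant. The sketch below uses placeholder patch data and an illustrative reference matrix; it is not the paper's calibration data.

        import numpy as np

        def characterization_matrix(rgb, xyz):
            """Solve XYZ ≈ RGB @ M.T for the 3x3 colorimetric transfer matrix M (least squares)."""
            M_T, *_ = np.linalg.lstsq(rgb, xyz, rcond=None)    # shape (3, 3)
            return M_T.T

        rgb = np.random.rand(24, 3)                  # placeholder: linearized camera RGB of 24 patches
        reference = np.array([[0.41, 0.36, 0.18],
                              [0.21, 0.72, 0.07],
                              [0.02, 0.12, 0.95]])   # illustrative "true" RGB-to-XYZ mapping
        xyz = rgb @ reference.T                      # placeholder reference tristimulus values

        M = characterization_matrix(rgb, xyz)
        xyz_pred = rgb @ M.T                         # estimated tristimulus values for new patches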

  19. A novel fiber-optical vibration defending system with on-line intelligent identification function

    NASA Astrophysics Data System (ADS)

    Wu, Huijuan; Xie, Xin; Li, Hanyu; Li, Xiaoyu; Wu, Yu; Gong, Yuan; Rao, Yunjiang

    2013-09-01

    Network capacity is always a bottleneck for the novel FBG-based quasi-distributed fiber-optic defending system. In this paper, a highly sensitive sensing network with FBG vibration sensors is presented to relieve the pressure on capacity and system cost. However, higher sensitivity may cause higher Nuisance Alarm Rates (NARs) in practical use. It is necessary to further classify the intrusion pattern or threat level and determine the validity of an unexpected event. An intelligent identification method is therefore proposed, which extracts statistical features of the vibration signals in the time domain and feeds them into a 3-layer Back-Propagation (BP) Artificial Neural Network to classify the events of interest. Experiments with both simulated and field data are carried out to validate its effectiveness. The results show that the recognition rate reaches 100% for the simulation signals and as high as 96.03% in the real tests.
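
    A compact sketch of the identification stage described above, using common time-domain statistics as features and a single-hidden-layer feed-forward network (a 3-layer back-propagation network in the paper's terms). The specific feature set, layer size and placeholder data are illustrative assumptions, not the authors' configuration.

        import numpy as np
        from scipy.stats import kurtosis, skew
        from sklearn.neural_network import MLPClassifier

        def time_domain_features(frame):
            # Simple statistical descriptors of one vibration frame.
            return [frame.mean(), frame.std(), skew(frame), kurtosis(frame),
                    np.max(np.abs(frame)), np.mean(np.abs(frame))]

        rng = np.random.default_rng(1)
        frames = rng.standard_normal((300, 1024))    # placeholder vibration frames
        labels = rng.integers(0, 3, 300)             # e.g. climbing / knocking / nuisance event

        X = np.array([time_domain_features(f) for f in frames])
        clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000).fit(X, labels)
        predicted = clf.predict(X[:5])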

  20. Theory and simulations of covariance mapping in multiple dimensions for data analysis in high-event-rate experiments

    NASA Astrophysics Data System (ADS)

    Zhaunerchyk, V.; Frasinski, L. J.; Eland, J. H. D.; Feifel, R.

    2014-05-01

    Multidimensional covariance analysis and its validity for correlation of processes leading to multiple products are investigated from a theoretical point of view. The need to correct for false correlations induced by experimental parameters which fluctuate from shot to shot, such as the intensity of self-amplified spontaneous emission x-ray free-electron laser pulses, is emphasized. Threefold covariance analysis based on simple extension of the two-variable formulation is shown to be valid for variables exhibiting Poisson statistics. In this case, false correlations arising from fluctuations in an unstable experimental parameter that scale linearly with signals can be eliminated by threefold partial covariance analysis, as defined here. Fourfold covariance based on the same simple extension is found to be invalid in general. Where fluctuations in an unstable parameter induce nonlinear signal variations, a technique of contingent covariance analysis is proposed here to suppress false correlations. In this paper we also show a method to eliminate false correlations associated with fluctuations of several unstable experimental parameters.
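
    For the two-variable case, the partial covariance that removes false correlations driven by a fluctuating parameter I (for example, pulse intensity) takes the standard form pcov(X, Y; I) = cov(X, Y) − cov(X, I)·cov(I, Y)/cov(I, I). A shot-resolved sketch of that two-variable map is given below; the threefold and contingent variants introduced in the paper are not reproduced.

        import numpy as np

        def partial_covariance_map(spectra, intensity):
            """pcov(X, Y; I) over all pairs of spectral channels; shots along axis 0."""
            X = spectra - spectra.mean(axis=0)
            I = intensity - intensity.mean()
            n = len(I)
            cov_xy = X.T @ X / n                     # plain covariance map
            cov_xi = X.T @ I / n                     # covariance of each channel with I
            return cov_xy - np.outer(cov_xi, cov_xi) / I.var()

        shots = np.random.poisson(2.0, size=(5000, 64)).astype(float)   # toy shot-resolved spectra
        intensity = shots.sum(axis=1)                                   # proxy for pulse energy
        pcov = partial_covariance_map(shots, intensity)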

  1. A Multi-Class Proportional Myocontrol Algorithm for Upper Limb Prosthesis Control: Validation in Real-Life Scenarios on Amputees.

    PubMed

    Amsuess, Sebastian; Goebel, Peter; Graimann, Bernhard; Farina, Dario

    2015-09-01

    Functional replacement of upper limbs by means of dexterous prosthetic devices remains a technological challenge. While the mechanical design of prosthetic hands has advanced rapidly, the human-machine interfacing and the control strategies needed for the activation of multiple degrees of freedom are not reliable enough for restoring hand function successfully. Machine learning methods capable of inferring the user intent from EMG signals generated by the activation of the remnant muscles are regarded as a promising solution to this problem. However, the lack of robustness of the current methods impedes their routine clinical application. In this study, we propose a novel algorithm for controlling multiple degrees of freedom sequentially, inherently proportionally and with high robustness, allowing a good level of prosthetic hand function. The control algorithm is based on spatial linear combinations of amplitude-related EMG signal features. The weighting coefficients in this combination are derived from the optimization criterion of the common spatial patterns filters, which allow for maximal discriminability between movements. An important component of the study is the validation of the method, which was performed on both able-bodied and amputee subjects who used physical prostheses with customized sockets and performed three standardized functional tests mimicking daily-life activities of varying difficulty. Moreover, the new method was compared in the same conditions with one clinical/industrial and one academic state-of-the-art method. The novel algorithm significantly outperformed the state-of-the-art techniques in both subject groups for tests that required the activation of more than one degree of freedom. Because the evaluation was performed in real-time control on both able-bodied subjects and final users (amputees) wearing physical prostheses, the results allow direct extrapolation of the benefits of the proposed method to the end users. In conclusion, the method, proposed and validated in real-life use scenarios, makes multifunctional hand prostheses practically usable in an intuitive way, with significant advantages over previous systems.
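
    In its textbook form, the common-spatial-patterns optimization referred to above reduces to a generalized eigendecomposition of the two class covariance matrices; the resulting spatial weights are then applied to amplitude-related EMG features. The sketch below shows that standard CSP computation on placeholder data and is an illustration of the underlying criterion, not the authors' exact estimator.

        import numpy as np
        from scipy.linalg import eigh

        def csp_filters(trials_a, trials_b, n_filters=4):
            """CSP spatial filters from two classes of (trial, channel, sample) EMG data."""
            class_cov = lambda trials: np.mean([np.cov(t) for t in trials], axis=0)
            Ca, Cb = class_cov(trials_a), class_cov(trials_b)
            # Generalized eigenproblem Ca w = lambda (Ca + Cb) w.
            vals, vecs = eigh(Ca, Ca + Cb)
            order = np.argsort(vals)
            picks = np.r_[order[:n_filters // 2], order[-n_filters // 2:]]
            return vecs[:, picks].T                  # rows are spatial filters

        a = np.random.randn(40, 8, 400)              # placeholder: 8-channel EMG trials, class A
        b = np.random.randn(40, 8, 400)              # placeholder: class B
        W = csp_filters(a, b)
        projected = W @ a[0]                         # spatially filtered trial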

  2. An ECG signals compression method and its validation using NNs.

    PubMed

    Fira, Catalina Monica; Goras, Liviu

    2008-04-01

    This paper presents a new algorithm for electrocardiogram (ECG) signal compression based on local extrema extraction, adaptive hysteretic filtering and Lempel-Ziv-Welch (LZW) coding. The algorithm has been verified using eight of the most frequent normal and pathological types of cardiac beats and a multi-layer perceptron (MLP) neural network trained with original cardiac patterns and tested with reconstructed ones. Aspects regarding the possibility of using principal component analysis (PCA) for cardiac pattern classification have been investigated as well. A new compression measure called "quality score," which takes into account both the reconstruction errors and the compression ratio, is proposed.

  3. Kurtosis Approach Nonlinear Blind Source Separation

    NASA Technical Reports Server (NTRS)

    Duong, Vu A.; Stubberud, Allen R.

    2005-01-01

    In this paper, we introduce a new algorithm for blind source signal separation for post-nonlinear mixtures. The mixtures are assumed to be linearly mixed from unknown sources first and then distorted by memoryless nonlinear functions. The nonlinear functions are assumed to be smooth and can be approximated by polynomials. Both the coefficients of the unknown mixing matrix and the coefficients of the approximating polynomials are estimated by the gradient descent method, subject to higher-order statistical requirements. The results of simulation experiments presented in this paper demonstrate the validity and usefulness of our approach for nonlinear blind source signal separation. Keywords: Independent Component Analysis, Kurtosis, Higher order statistics.

  4. True logarithmic amplification of frequency clock in SS-OCT for calibration

    PubMed Central

    Liu, Bin; Azimi, Ehsan; Brezinski, Mark E.

    2011-01-01

    With swept source optical coherence tomography (SS-OCT), imprecise signal calibration prevents optimal imaging of biological tissues such as the coronary artery. This work demonstrates an approach using a true logarithmic amplifier to precondition the clock signal, in an effort to minimize noise and phase errors for optimal calibration. The method was validated and tested with a high-speed SS-OCT system. The experimental results demonstrate its superior ability to optimize the calibration and improve imaging performance. In particular, this hardware-based approach is suitable for real-time calibration in a high-speed system where computation time is constrained. PMID:21698036

  5. Interferometric millimeter wave and THz wave doppler radar

    DOEpatents

    Liao, Shaolin; Gopalsami, Nachappa; Bakhtiari, Sasan; Raptis, Apostolos C.; Elmer, Thomas

    2015-08-11

    A mixerless high-frequency interferometric Doppler radar system and associated methods have been invented, numerically validated and experimentally tested. A continuous wave source, phase modulator (e.g., a continuously oscillating reference mirror) and intensity detector are utilized. The intensity detector measures the intensity of the combined reflected Doppler signal and the modulated reference beam. Rigorous mathematical formulas have been developed to extract both amplitude and phase from the measured intensity signal. Software in Matlab has been developed and used to extract such amplitude and phase information from the experimental data. Both amplitude and phase are calculated and the Doppler frequency signature of the object is determined.

  6. Sector-Based Detection for Hands-Free Speech Enhancement in Cars

    NASA Astrophysics Data System (ADS)

    Lathoud, Guillaume; Bourgeois, Julien; Freudenberger, Jürgen

    2006-12-01

    Adaptation control of beamforming interference cancellation techniques is investigated for in-car speech acquisition. Two efficient adaptation control methods are proposed that avoid target cancellation. The "implicit" method varies the step-size continuously, based on the filtered output signal. The "explicit" method decides in a binary manner whether to adapt or not, based on a novel estimate of target and interference energies. It estimates the average delay-sum power within a volume of space, for the same cost as the classical delay-sum. Experiments on real in-car data validate both methods, including a case with background road noise recorded at driving speed.

  7. Explicit-Duration Hidden Markov Model Inference of UP-DOWN States from Continuous Signals

    PubMed Central

    McFarland, James M.; Hahn, Thomas T. G.; Mehta, Mayank R.

    2011-01-01

    Neocortical neurons show UP-DOWN state (UDS) oscillations under a variety of conditions. These UDS have been extensively studied because of the insight they can yield into the functioning of cortical networks, and their proposed role in putative memory formation. A key element in these studies is determining the precise duration and timing of the UDS. These states are typically determined from the membrane potential of one or a small number of cells, which is often not sufficient to reliably estimate the state of an ensemble of neocortical neurons. The local field potential (LFP) provides an attractive method for determining the state of a patch of cortex with high spatio-temporal resolution; however current methods for inferring UDS from LFP signals lack the robustness and flexibility to be applicable when UDS properties may vary substantially within and across experiments. Here we present an explicit-duration hidden Markov model (EDHMM) framework that is sufficiently general to allow statistically principled inference of UDS from different types of signals (membrane potential, LFP, EEG), combinations of signals (e.g., multichannel LFP recordings) and signal features over long recordings where substantial non-stationarities are present. Using cortical LFPs recorded from urethane-anesthetized mice, we demonstrate that the proposed method allows robust inference of UDS. To illustrate the flexibility of the algorithm we show that it performs well on EEG recordings as well. We then validate these results using simultaneous recordings of the LFP and membrane potential (MP) of nearby cortical neurons, showing that our method offers significant improvements over standard methods. These results could be useful for determining functional connectivity of different brain regions, as well as understanding network dynamics. PMID:21738730

  8. Deep neural networks: A promising tool for fault characteristic mining and intelligent diagnosis of rotating machinery with massive data

    NASA Astrophysics Data System (ADS)

    Jia, Feng; Lei, Yaguo; Lin, Jing; Zhou, Xin; Lu, Na

    2016-05-01

    Aiming to promptly process the massive fault data and automatically provide accurate diagnosis results, numerous studies have been conducted on intelligent fault diagnosis of rotating machinery. Among these studies, the methods based on artificial neural networks (ANNs) are commonly used, which employ signal processing techniques for extracting features and further input the features to ANNs for classifying faults. Though these methods did work in intelligent fault diagnosis of rotating machinery, they still have two deficiencies. (1) The features are manually extracted depending on much prior knowledge about signal processing techniques and diagnostic expertise. In addition, these manual features are extracted according to a specific diagnosis issue and probably unsuitable for other issues. (2) The ANNs adopted in these methods have shallow architectures, which limits the capacity of ANNs to learn the complex non-linear relationships in fault diagnosis issues. As a breakthrough in artificial intelligence, deep learning holds the potential to overcome the aforementioned deficiencies. Through deep learning, deep neural networks (DNNs) with deep architectures, instead of shallow ones, could be established to mine the useful information from raw data and approximate complex non-linear functions. Based on DNNs, a novel intelligent method is proposed in this paper to overcome the deficiencies of the aforementioned intelligent diagnosis methods. The effectiveness of the proposed method is validated using datasets from rolling element bearings and planetary gearboxes. These datasets contain massive measured signals involving different health conditions under various operating conditions. The diagnosis results show that the proposed method is able to not only adaptively mine available fault characteristics from the measured signals, but also obtain superior diagnosis accuracy compared with the existing methods.

  9. Cy5 total protein normalization in Western blot analysis.

    PubMed

    Hagner-McWhirter, Åsa; Laurin, Ylva; Larsson, Anita; Bjerneld, Erik J; Rönn, Ola

    2015-10-01

    Western blotting is a widely used method for analyzing specific target proteins in complex protein samples. Housekeeping proteins are often used for normalization to correct for uneven sample loads, but these require careful validation since expression levels may vary with cell type and treatment. We present a new, more reliable method for normalization using Cy5-prelabeled total protein as a loading control. We used a prelabeling protocol based on Cy5 N-hydroxysuccinimide ester labeling that produces a linear signal response. We obtained a low coefficient of variation (CV) of 7% between the ratio of extracellular signal-regulated kinase (ERK1/2) target to Cy5 total protein control signals over the whole loading range from 2.5 to 20.0μg of Chinese hamster ovary cell lysate protein. Corresponding experiments using actin or tubulin as controls for normalization resulted in CVs of 13 and 18%, respectively. Glyceraldehyde-3-phosphate dehydrogenase did not produce a proportional signal and was not suitable for normalization in these cells. A comparison of ERK1/2 signals from labeled and unlabeled samples showed that Cy5 prelabeling did not affect antibody binding. By using total protein normalization we analyzed PP2A and Smad2/3 levels with high confidence. Copyright © 2015 Elsevier Inc. All rights reserved.

  10. Bearing fault diagnosis using a whale optimization algorithm-optimized orthogonal matching pursuit with a combined time-frequency atom dictionary

    NASA Astrophysics Data System (ADS)

    Zhang, Xin; Liu, Zhiwen; Miao, Qiang; Wang, Lei

    2018-07-01

    Condition monitoring and fault diagnosis of rolling element bearings are significant to guarantee the reliability and functionality of a mechanical system, production efficiency, and plant safety. However, this is almost invariably a formidable challenge because the fault features are often buried by strong background noises and other unstable interference components. To satisfactorily extract the bearing fault features, a whale optimization algorithm (WOA)-optimized orthogonal matching pursuit (OMP) with a combined time-frequency atom dictionary is proposed in this paper. Firstly, a combined time-frequency atom dictionary, whose atoms combine Fourier dictionary atoms and impact time-frequency dictionary atoms, is designed according to the properties of bearing fault vibration signals. Furthermore, to improve the efficiency and accuracy of signal sparse representation, the WOA is introduced into the OMP algorithm to optimize the atom parameters for best approximating the original signal with the dictionary atoms. The proposed method is validated through analyzing a bearing fault simulation signal and real vibration signals collected from an experimental bearing and a wheelset bearing of high-speed trains. The comparisons with respect to the state of the art in the field are illustrated in detail, which highlights the advantages of the proposed method.
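
    The sparse-approximation core of the method above is orthogonal matching pursuit: atoms are greedily selected by correlation with the current residual and the coefficients are refit by least squares at each step. The sketch below is a plain OMP over an arbitrary normalized dictionary; the combined Fourier/impact atoms and the WOA parameter search are not included.

        import numpy as np

        def omp(signal, dictionary, n_atoms):
            """Greedy orthogonal matching pursuit; columns of `dictionary` are unit-norm atoms."""
            residual, support = signal.copy(), []
            for _ in range(n_atoms):
                correlations = dictionary.T @ residual
                support.append(int(np.argmax(np.abs(correlations))))
                D_s = dictionary[:, support]
                coef, *_ = np.linalg.lstsq(D_s, signal, rcond=None)   # refit on the support
                residual = signal - D_s @ coef
            return support, coef

        D = np.random.randn(512, 200)
        D /= np.linalg.norm(D, axis=0)               # normalize atoms
        x = 2.0 * D[:, 3] - 1.0 * D[:, 50] + 0.01 * np.random.randn(512)
        support, coef = omp(x, D, n_atoms=2)         # recovers atoms 3 and 50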

  11. Transferring Data from Smartwatch to Smartphone through Mechanical Wave Propagation.

    PubMed

    Kim, Seung-Chan; Lim, Soo-Chul

    2015-08-28

    Inspired by the mechanisms of bone conduction transmission, we present a novel sensor and actuation system that enables a smartwatch to securely communicate with a peripheral touch device, such as a smartphone. Our system regards hand structures as a mechanical waveguide that transmits particular signals through mechanical waves. As a signal, we used high-frequency vibrations (18.0-20.0 kHz) so that users cannot sense the signals either tactually or audibly. To this end, we adopted a commercial surface transducer, which was originally developed as a bone-conduction actuator, for mechanical signal generation. At the receiver side, a piezoelement was adopted for picking up the transferred mechanical signals. Experimental results have shown that the proposed system can successfully transfer data using mechanical waves. We also validated dual-frequency actuation, under which high-frequency signals (18.0-20.0 kHz) are generated along with low-frequency (up to 250 Hz) haptic vibrations. The proposed method has advantages in terms of security in that it does not reveal the signals outside the body, meaning that it is not possible for attackers to eavesdrop on the signals. To further illustrate the possible application spaces, we conclude with explorations of the proposed approach.

  12. An Embedded Sensory System for Worker Safety: Prototype Development and Evaluation

    PubMed Central

    Cho, Chunhee; Park, JeeWoong

    2018-01-01

    At a construction site, workers mainly rely on two senses, which are sight and sound, in order to perceive their physical surroundings. However, they are often hindered by the nature of most construction sites, which are usually dynamic, loud, and complicated. To overcome these challenges, this research explored a method using an embedded sensory system that might offer construction workers an artificial sensing ability to better perceive their surroundings. This study identified three parameters (i.e., intensity, signal length, and delay between consecutive pulses) needed for tactile-based signals for the construction workers to communicate quickly. We developed a prototype system based on these parameters, conducted experimental studies to quantify and validate the sensitivity of the parameters for quick communication, and analyzed test data to reveal what was added by this method in order to perceive information from the tactile signals. The findings disclosed that the parameters of tactile-based signals and their distinguishable ranges could be perceived in a short amount of time (i.e., a fraction of a second). Further experimentation demonstrated the capability of the identified unit signals combined with a signal mapping technique to effectively deliver simple information to individuals and offer an additional sense of awareness to the surroundings. The findings of this study could serve as a basis for future research in exploring advanced tactile-based messages to overcome challenges in environments for which communication is a struggle. PMID:29662008

  13. An Embedded Sensory System for Worker Safety: Prototype Development and Evaluation.

    PubMed

    Cho, Chunhee; Park, JeeWoong

    2018-04-14

    At a construction site, workers mainly rely on two senses, which are sight and sound, in order to perceive their physical surroundings. However, they are often hindered by the nature of most construction sites, which are usually dynamic, loud, and complicated. To overcome these challenges, this research explored a method using an embedded sensory system that might offer construction workers an artificial sensing ability to better perceive their surroundings. This study identified three parameters (i.e., intensity, signal length, and delay between consecutive pulses) needed for tactile-based signals for the construction workers to communicate quickly. We developed a prototype system based on these parameters, conducted experimental studies to quantify and validate the sensitivity of the parameters for quick communication, and analyzed test data to reveal what was added by this method in order to perceive information from the tactile signals. The findings disclosed that the parameters of tactile-based signals and their distinguishable ranges could be perceived in a short amount of time (i.e., a fraction of a second). Further experimentation demonstrated the capability of the identified unit signals combined with a signal mapping technique to effectively deliver simple information to individuals and offer an additional sense of awareness to the surroundings. The findings of this study could serve as a basis for future research in exploring advanced tactile-based messages to overcome challenges in environments for which communication is a struggle.

  14. Epileptic seizure detection in EEG signal using machine learning techniques.

    PubMed

    Jaiswal, Abeg Kumar; Banka, Haider

    2018-03-01

    Epilepsy is a well-known nervous system disorder characterized by seizures. Electroencephalograms (EEGs), which capture brain neural activity, can detect epilepsy. Traditional methods for analyzing an EEG signal for epileptic seizure detection are time-consuming. Recently, several automated seizure detection frameworks using machine learning techniques have been proposed to replace these traditional methods. The two basic steps involved in machine learning are feature extraction and classification. Feature extraction reduces the input pattern space by keeping informative features, and the classifier assigns the appropriate class label. In this paper, we propose two effective approaches involving subpattern-based PCA (SpPCA) and cross-subpattern correlation-based PCA (SubXPCA) with Support Vector Machine (SVM) for automated seizure detection in EEG signals. Feature extraction was performed using SpPCA and SubXPCA. Both techniques explore the subpattern correlation of EEG signals, which helps in the decision-making process. SVM is used for classification of seizure and non-seizure EEG signals. The SVM was trained with a radial basis kernel. All the experiments have been carried out on the benchmark epilepsy EEG dataset. The entire dataset consists of 500 EEG signals recorded under different scenarios. Seven different experimental cases for classification have been conducted. The classification accuracy was evaluated using tenfold cross-validation. The classification results of the proposed approaches have been compared with the results of some of the existing techniques proposed in the literature to establish the claim.
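
    A minimal sketch of the subpattern idea described above: each EEG segment is split into fixed-length subpatterns, PCA is applied per subpattern position, and the projections are concatenated into a feature vector for an RBF-SVM evaluated with ten-fold cross-validation. This is a simplified variant for illustration; the subpattern length, component counts and random placeholder data are assumptions, and the cross-subpattern (SubXPCA) stage is not shown.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        def subpattern_pca_features(segments, sub_len=256, n_components=4):
            """Concatenate PCA projections computed separately for each subpattern position."""
            n_sub = segments.shape[1] // sub_len
            parts = []
            for k in range(n_sub):
                sub = segments[:, k * sub_len:(k + 1) * sub_len]
                parts.append(PCA(n_components=n_components).fit_transform(sub))
            return np.hstack(parts)

        rng = np.random.default_rng(2)
        segments = rng.standard_normal((500, 4096))      # placeholder EEG segments
        labels = rng.integers(0, 2, 500)                 # seizure / non-seizure
        X = subpattern_pca_features(segments)
        accuracy = cross_val_score(SVC(kernel='rbf'), X, labels, cv=10).mean()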

  15. Soil Moisture Sensing Using Reflected GPS Signals: Description of the GPS Soil Moisture Product.

    NASA Astrophysics Data System (ADS)

    Larson, Kristine; Small, Eric; Chew, Clara

    2015-04-01

    As first demonstrated by the GPS reflections group in 2008, data from GPS networks can be used to monitor multiple parameters of the terrestrial water cycle. The GPS L-band signals take two paths: (1) the "direct" signal travels from the satellite to the antenna, which is typically located 2-3 meters above the ground; (2) the reflected signal interacts with the Earth's surface before traveling to the antenna. The direct signal is used by geophysicists and surveyors to measure the position of the antenna, while the effects of reflected signals are a source of error. If one focuses on the reflected signal rather than the positioning observables, one has a method that is sensitive to surface soil moisture (top 5 cm), vegetation water content, and snow depth. This method - known as GPS Interferometric Reflectometry (GPS-IR) - has a footprint of ~1000 m2 for most GPS sites. This is intermediate in scale to most in situ and satellite observations. A significant advantage of GPS-IR is that data from existing GPS networks can be used without any changes to the instrumentation. This means that there is a new source of cost-effective instrumentation for satellite validation and climate studies. This presentation will provide an overview of the GPS-IR methodology with an emphasis on the soil moisture product. GPS water cycle products are currently produced on a daily basis for a network of ~500 sites in the western United States; results are freely available at http://xenon.colorado.edu/portal. Plans to expand the GPS-IR method to the network of international GPS sites will also be discussed.

  16. Identification of the anti-tumor activity and mechanisms of nuciferine through a network pharmacology approach

    PubMed Central

    Qi, Quan; Li, Rui; Li, Hui-ying; Cao, Yu-bing; Bai, Ming; Fan, Xiao-jing; Wang, Shu-yan; Zhang, Bo; Li, Shao

    2016-01-01

    Aim: Nuciferine is an aporphine alkaloid extracted from lotus leaves, which are a raw material used in Chinese herbal medicine for weight loss. In this study we used a network pharmacology approach to identify the anti-tumor activity of nuciferine and the underlying mechanisms. Methods: The pharmacological activities and mechanisms of nuciferine were identified through target profile prediction, clustering analysis and functional enrichment analysis using our traditional Chinese medicine (TCM) network pharmacology platform. The anti-tumor activity of nuciferine was validated by in vitro and in vivo experiments. The anti-tumor mechanisms of nuciferine were predicted through network target analysis and verified by in vitro experiments. Results: The nuciferine target profile was enriched with signaling pathways and biological functions, including "regulation of lipase activity", "response to nicotine" and "regulation of cell proliferation". Target profile clustering results suggested that nuciferine exerts an anti-tumor effect. In experimental validation, nuciferine (0.8 mg/mL) markedly inhibited the viability of human neuroblastoma SY5Y cells and mouse colorectal cancer CT26 cells in vitro, and nuciferine (0.05 mg/mL) significantly suppressed the invasion of 6 cancer cell lines in vitro. Intraperitoneal injection of nuciferine (9.5 mg/mL, ip, 3 times a week for 3 weeks) significantly decreased the weight of SY5Y and CT26 tumor xenografts in nude mice. Network target analysis and experimental validation in SY5Y and CT26 cells showed that the anti-tumor effect of nuciferine was mediated through inhibiting the PI3K-AKT signaling pathway and IL-1 levels in SY5Y and CT26 cells. Conclusion: By using a TCM network pharmacology method, nuciferine is identified as an anti-tumor agent against human neuroblastoma and mouse colorectal cancer in vitro and in vivo, through inhibiting the PI3K-AKT signaling pathways and IL-1 levels. PMID:27180984

  17. Integration of Attributes from Non-Linear Characterization of Cardiovascular Time-Series for Prediction of Defibrillation Outcomes

    PubMed Central

    Shandilya, Sharad; Kurz, Michael C.; Ward, Kevin R.; Najarian, Kayvan

    2016-01-01

    Objective The timing of defibrillation is mostly at arbitrary intervals during cardio-pulmonary resuscitation (CPR), rather than during intervals when the out-of-hospital cardiac arrest (OOH-CA) patient is physiologically primed for successful countershock. Interruptions to CPR may negatively impact defibrillation success. Multiple defibrillations can be associated with decreased post-resuscitation myocardial function. We hypothesize that a more complete picture of the cardiovascular system can be gained through non-linear dynamics and integration of multiple physiologic measures from biomedical signals. Materials and Methods Retrospective analysis of 153 anonymized OOH-CA patients who received at least one defibrillation for ventricular fibrillation (VF) was undertaken. A machine learning model, termed Multiple Domain Integrative (MDI) model, was developed to predict defibrillation success. We explore the rationale for non-linear dynamics and statistically validate heuristics involved in feature extraction for model development. Performance of MDI is then compared to the amplitude spectrum area (AMSA) technique. Results 358 defibrillations were evaluated (218 unsuccessful and 140 successful). Non-linear properties (Lyapunov exponent > 0) of the ECG signals indicate a chaotic nature and validate the use of novel non-linear dynamic methods for feature extraction. Classification using MDI yielded ROC-AUC of 83.2% and accuracy of 78.8%, for the model built with ECG data only. Utilizing 10-fold cross-validation, at 80% specificity level, MDI (74% sensitivity) outperformed AMSA (53.6% sensitivity). At 90% specificity level, MDI had 68.4% sensitivity while AMSA had 43.3% sensitivity. Integrating available end-tidal carbon dioxide features into MDI, for the available 48 defibrillations, boosted ROC-AUC to 93.8% and accuracy to 83.3% at 80% sensitivity. Conclusion At clinically relevant sensitivity thresholds, the MDI provides improved performance as compared to AMSA, yielding fewer unsuccessful defibrillations. Addition of partial end-tidal carbon dioxide (PetCO2) signal improves accuracy and sensitivity of the MDI prediction model. PMID:26741805
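
    For reference, the AMSA baseline that MDI is compared against is computed from the amplitude spectrum of the pre-shock VF waveform as the sum of amplitude times frequency over a fixed band (commonly about 4-48 Hz). The sketch below uses that common definition as an assumption; the band, sampling rate and placeholder segment are illustrative rather than the study's settings.

        import numpy as np

        def amsa(ecg_segment, fs, band=(4.0, 48.0)):
            """Amplitude spectrum area of a VF segment (common 4-48 Hz definition assumed)."""
            spectrum = np.abs(np.fft.rfft(ecg_segment))
            freqs = np.fft.rfftfreq(len(ecg_segment), d=1.0 / fs)
            in_band = (freqs >= band[0]) & (freqs <= band[1])
            return np.sum(spectrum[in_band] * freqs[in_band])

        fs = 250.0
        segment = np.random.randn(int(4 * fs))       # placeholder 4-second pre-shock ECG window
        print("AMSA:", amsa(segment, fs))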

  18. Study on Unified Chaotic System-Based Wind Turbine Blade Fault Diagnostic System

    NASA Astrophysics Data System (ADS)

    Kuo, Ying-Che; Hsieh, Chin-Tsung; Yau, Her-Terng; Li, Yu-Chung

    At present, vibration signals are processed and analyzed mostly in the frequency domain. The spectrum clearly shows the signal structure and the specific characteristic frequency band is analyzed, but the number of calculations required is huge, resulting in delays. Therefore, this study uses the characteristics of a nonlinear system to load the complete vibration signal to the unified chaotic system, applying the dynamic error to analyze the wind turbine vibration signal, and adopting extenics theory for artificial intelligent fault diagnosis of the analysis signal. Hence, a fault diagnostor has been developed for wind turbine rotating blades. This study simulates three wind turbine blade states, namely stress rupture, screw loosening and blade loss, and validates the methods. The experimental results prove that the unified chaotic system used in this paper has a significant effect on vibration signal analysis. Thus, the operating conditions of wind turbines can be quickly known from this fault diagnostic system, and the maintenance schedule can be arranged before the faults worsen, making the management and implementation of wind turbines smoother, so as to reduce many unnecessary costs.

  19. Evaluation of magnetic resonance signal modification induced by hyaluronic acid therapy in chondromalacia patellae: a preliminary study.

    PubMed

    Magarelli, N; Palmieri, D; Ottaviano, L; Savastano, M; Barbato, M; Leone, A; Maggialetti, A; Ciampa, F P; Bonomo, L

    2008-01-01

    Hyaluronic Acid (HA) is an alternative method for the treatment of osteoarthritis (OA), which acts on pain through a double action: anti-inflammatory activity and synovial fluid (SF) visco-supplementation. Magnetic Resonance Imaging (MRI), using specific sequences, is a valid method for studying the initial phase of chondral damage. The data were analyzed by measuring the signal intensity within Regions of Interest (ROIs) positioned inside the lesion, and the differences before and after treatment with HA injected into the knee were determined. The results obtained six months and one year after the injection were statistically different from those obtained before, immediately after and three months after treatment. MRI represents a valid tool to evaluate the grade of chondromalacia patellae and also to follow the cartilage modification induced by HA therapy.

  20. Acoustic emission detection for mass fractions of materials based on wavelet packet technology.

    PubMed

    Wang, Xianghong; Xiang, Jianjun; Hu, Hongwei; Xie, Wei; Li, Xiongbing

    2015-07-01

    Materials are often damaged during the process of detecting mass fractions by traditional methods. Acoustic emission (AE) technology combined with wavelet packet analysis is used to evaluate the mass fractions of microcrystalline graphite/polyvinyl alcohol (PVA) composites in this study. Attenuation characteristics of AE signals across the composites with different mass fractions are investigated. The AE signals are decomposed by wavelet packet technology to obtain the relationships between the energy and amplitude attenuation coefficients of the feature wavelet packets and the mass fractions as well. Furthermore, the relationship is validated on a sample. A larger proportion of microcrystalline graphite corresponds to higher attenuation of energy and amplitude. The attenuation characteristics of the feature wavelet packets with the frequency range from 125 kHz to 171.85 kHz are more suitable for the detection of mass fractions than those of the original AE signals. The error of the mass fraction of microcrystalline graphite calculated from the feature wavelet packet (1.8%) is lower than that of the original signal (3.9%). Therefore, AE detection based on wavelet packet analysis is an ideal NDT method for evaluating the mass fractions of composite materials. Copyright © 2015 Elsevier B.V. All rights reserved.
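
    The wavelet-packet step described above can be sketched with PyWavelets: the AE signal is decomposed to a chosen level, the packets are ordered by frequency, and the energy of the packets overlapping the feature band (125-171.85 kHz in the paper) is summed. The sampling rate, wavelet and decomposition level below are assumptions.

        import numpy as np
        import pywt

        def band_energy(signal, fs, f_lo, f_hi, wavelet='db8', level=5):
            """Sum the energy of wavelet-packet nodes whose band overlaps [f_lo, f_hi]."""
            wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
            nodes = wp.get_level(level, order='freq')        # frequency-ordered packets
            width = (fs / 2.0) / len(nodes)                  # nominal bandwidth of each packet
            energy = 0.0
            for k, node in enumerate(nodes):
                lo, hi = k * width, (k + 1) * width
                if hi > f_lo and lo < f_hi:                  # packet overlaps the feature band
                    energy += float(np.sum(np.asarray(node.data) ** 2))
            return energy

        fs = 2.0e6                                           # assumed AE sampling rate (Hz)
        ae_burst = np.random.randn(int(0.01 * fs))           # placeholder AE burst
        feature_energy = band_energy(ae_burst, fs, 125e3, 171.85e3)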

  1. Cerebral capillary velocimetry based on temporal OCT speckle contrast.

    PubMed

    Choi, Woo June; Li, Yuandong; Qin, Wan; Wang, Ruikang K

    2016-12-01

    We propose a new optical coherence tomography (OCT) based method to measure red blood cell (RBC) velocities of single capillaries in the cortex of rodent brain. This OCT capillary velocimetry exploits quantitative laser speckle contrast analysis to estimate speckle decorrelation rate from the measured temporal OCT speckle signals, which is related to microcirculatory flow velocity. We hypothesize that OCT signal due to sub-surface capillary flow can be treated as the speckle signal in the single scattering regime and thus its time scale of speckle fluctuations can be subjected to single scattering laser speckle contrast analysis to derive characteristic decorrelation time. To validate this hypothesis, OCT measurements are conducted on a single capillary flow phantom operating at preset velocities, in which M-mode B-frames are acquired using a high-speed OCT system. Analysis is then performed on the time-varying OCT signals extracted at the capillary flow, exhibiting a typical inverse relationship between the estimated decorrelation time and absolute RBC velocity, which is then used to deduce the capillary velocities. We apply the method to in vivo measurements of mouse brain, demonstrating that the proposed approach provides additional useful information in the quantitative assessment of capillary hemodynamics, complementary to that of OCT angiography.
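
    The quantity at the heart of the method above is the temporal speckle contrast of the OCT intensity at each pixel, K = sigma_t / mean_t over the repeated M-mode frames; lower contrast corresponds to faster decorrelation and hence higher RBC velocity. A per-pixel sketch follows; the calibration from contrast to decorrelation time and absolute velocity is phantom-dependent and not reproduced here.

        import numpy as np

        def temporal_speckle_contrast(frames):
            """K = temporal std / temporal mean of OCT intensity, computed per pixel.

            `frames` has shape (n_repeats, depth, lateral): repeated frames at one location.
            """
            mean_i = frames.mean(axis=0)
            std_i = frames.std(axis=0)
            return std_i / np.maximum(mean_i, 1e-12)

        # Placeholder M-mode OCT intensity stack (Rayleigh-distributed speckle amplitudes).
        frames = np.random.rayleigh(1.0, size=(100, 128, 128))
        K = temporal_speckle_contrast(frames)        # low K where flow decorrelates the speckle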

  2. Hypothesis tests for the detection of constant speed radiation moving sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dumazert, Jonathan; Coulon, Romain; Kondrasovs, Vladimir

    2015-07-01

    Radiation Portal Monitors are deployed in linear networks to detect radiological material in motion. As a complement to single and multichannel detection algorithms, which are inefficient under too low signal-to-noise ratios, temporal correlation algorithms have been introduced. Hypothesis test methods based on empirically estimated mean and variance of the signals delivered by the different channels have shown significant gain in terms of a tradeoff between detection sensitivity and false alarm probability. This paper discloses the concept of a new hypothesis test for temporal correlation detection methods, taking advantage of the Poisson nature of the registered counting signals, and establishes a benchmark between this test and its empirical counterpart. The simulation study validates that in the four relevant configurations of a pedestrian source carrier under respectively high and low count rate radioactive background, and a vehicle source carrier under the same respectively high and low count rate radioactive background, the newly introduced hypothesis test ensures a significantly improved compromise between sensitivity and false alarm, while guaranteeing the stability of its optimization parameter regardless of signal-to-noise ratio variations between 2 and 0.8. (authors)

  3. Spectral Regression Based Fault Feature Extraction for Bearing Accelerometer Sensor Signals

    PubMed Central

    Xia, Zhanguo; Xia, Shixiong; Wan, Ling; Cai, Shiyu

    2012-01-01

    Bearings are not only the most important elements but also a common source of failures in rotary machinery. Bearing fault prognosis technology has been receiving more and more attention recently, in particular because it plays an increasingly important role in avoiding the occurrence of accidents. Therein, fault feature extraction (FFE) of bearing accelerometer sensor signals is essential to highlight representative features of bearing conditions for machinery fault diagnosis and prognosis. This paper proposes a spectral regression (SR)-based approach for fault feature extraction from original features including time, frequency and time-frequency domain features of bearing accelerometer sensor signals. SR is a novel regression framework for efficient regularized subspace learning and feature extraction, and it uses the least squares method to obtain the best projection direction, rather than computing the density matrix of features, so it also has an advantage in dimensionality reduction. The effectiveness of the SR-based method is validated experimentally by applying it to vibration signal data acquired from bearings. The experimental results indicate that SR can reduce the computation cost and preserve more structural information about different bearing faults and severities, and it is demonstrated that the proposed feature extraction scheme has an advantage over other similar approaches. PMID:23202017

  4. Improvement of the fringe analysis algorithm for wavelength scanning interferometry based on filter parameter optimization.

    PubMed

    Zhang, Tao; Gao, Feng; Muhamedsalih, Hussam; Lou, Shan; Martin, Haydn; Jiang, Xiangqian

    2018-03-20

    The phase slope method which estimates height through fringe pattern frequency and the algorithm which estimates height through the fringe phase are the fringe analysis algorithms widely used in interferometry. Generally they both extract the phase information by filtering the signal in frequency domain after Fourier transform. Among the numerous papers in the literature about these algorithms, it is found that the design of the filter, which plays an important role, has never been discussed in detail. This paper focuses on the filter design in these algorithms for wavelength scanning interferometry (WSI), trying to optimize the parameters to acquire the optimal results. The spectral characteristics of the interference signal are analyzed first. The effective signal is found to be narrow-band (near single frequency), and the central frequency is calculated theoretically. Therefore, the position of the filter pass-band is determined. The width of the filter window is optimized with the simulation to balance the elimination of the noise and the ringing of the filter. Experimental validation of the approach is provided, and the results agree very well with the simulation. The experiment shows that accuracy can be improved by optimizing the filter design, especially when the signal quality, i.e., the signal noise ratio (SNR), is low. The proposed method also shows the potential of improving the immunity to the environmental noise by adapting the signal to acquire the optimal results through designing an adaptive filter once the signal SNR can be estimated accurately.

  5. Automatic identification of epileptic seizures from EEG signals using linear programming boosting.

    PubMed

    Hassan, Ahnaf Rashik; Subasi, Abdulhamit

    2016-11-01

    Computerized epileptic seizure detection is essential for expediting epilepsy diagnosis and research and for assisting medical professionals. Moreover, the implementation of an epilepsy monitoring device that has low power and is portable requires a reliable and successful seizure detection scheme. In this work, the problem of automated epilepsy seizure detection using single-channel EEG signals has been addressed. At first, segments of EEG signals are decomposed using a newly proposed signal processing scheme, namely complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN). Six spectral moments are extracted from the CEEMDAN mode functions and train and test matrices are formed afterward. These matrices are fed into the classifier to identify epileptic seizures from EEG signal segments. In this work, we implement an ensemble learning based machine learning algorithm, namely linear programming boosting (LPBoost), to perform classification. The efficacy of spectral features in the CEEMDAN domain is validated by graphical and statistical analyses. The performance of CEEMDAN is compared to those of its predecessors to further inspect its suitability. The effectiveness and the appropriateness of LPBoost are demonstrated as opposed to the commonly used classification models. Resubstitution and 10-fold cross-validation error analyses confirm the superior algorithm performance of the proposed scheme. The algorithmic performance of our epilepsy seizure identification scheme is also evaluated against state-of-the-art works in the literature. Experimental outcomes manifest that the proposed seizure detection scheme performs better than the existing works in terms of accuracy, sensitivity, specificity, and Cohen's Kappa coefficient. It can be anticipated that owing to its use of only one channel of EEG signal, the proposed method will be suitable for device implementation, eliminate the onus of clinicians for analyzing a large bulk of data manually, and expedite epilepsy diagnosis. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  6. Weak Defect Identification for Centrifugal Compressor Blade Crack Based on Pressure Sensors and Genetic Algorithm.

    PubMed

    Li, Hongkun; He, Changbo; Malekian, Reza; Li, Zhixiong

    2018-04-19

    The centrifugal compressor is a key piece of equipment in petrochemical factories. As the core components of a compressor, the blades suffer periodic vibration and flow-induced excitation, which can lead to the occurrence of crack defects. Moreover, an induced blade defect usually has a serious impact on the normal operation of compressors and the safety of operators. Therefore, an effective blade crack identification method is particularly important for the reliable operation of compressors. Conventional non-destructive testing and evaluation (NDT&E) methods can detect the blade defect effectively; however, the compressors must shut down during the testing process, which is time-consuming and costly. In addition, these methods are not suitable for long-term on-line condition monitoring and cannot identify the blade defect in time. Therefore, an effective on-line condition monitoring and weak defect identification method should be further studied and proposed. Considering that the blade vibration information is difficult to measure directly, pressure sensors mounted on the casing are used in this paper to sample the airflow pressure pulsation signal on-line near the rotating impeller, in order to monitor the blade condition indirectly. A major problem is that the abnormal blade vibration amplitude induced by the crack is always small, and this feature information is much weaker still in the pressure signal. Therefore, it is usually difficult to identify the blade defect characteristic frequency embedded in the pressure pulsation signal by general signal processing methods, due to the weakness of the feature information and the interference of strong noise. In this paper, continuous wavelet transform (CWT) is used to pre-process the sampled signal first. Then, a bistable stochastic resonance (SR) method based on a Woods-Saxon and Gaussian (WSG) potential is applied to enhance the weak characteristic frequency contained in the pressure pulsation signal. A genetic algorithm (GA) is used to obtain optimal parameters for this SR system to improve its feature enhancement performance. The analysis of the experimental signal shows the validity of the proposed method for the enhancement and identification of the weak defect characteristic. In the end, a strain test is carried out to further verify the accuracy and reliability of the analysis result obtained from the pressure pulsation signal.

  7. Weak Defect Identification for Centrifugal Compressor Blade Crack Based on Pressure Sensors and Genetic Algorithm

    PubMed Central

    Li, Hongkun; He, Changbo

    2018-01-01

    The centrifugal compressor is a key piece of equipment in petrochemical factories. As the core components of a compressor, the blades suffer periodic vibration and flow-induced excitation, which can lead to the occurrence of crack defects. Moreover, an induced blade defect usually has a serious impact on the normal operation of compressors and the safety of operators. Therefore, an effective blade crack identification method is particularly important for the reliable operation of compressors. Conventional non-destructive testing and evaluation (NDT&E) methods can detect the blade defect effectively; however, the compressors must shut down during the testing process, which is time-consuming and costly. In addition, these methods are not suitable for long-term on-line condition monitoring and cannot identify the blade defect in time. Therefore, an effective on-line condition monitoring and weak defect identification method should be further studied and proposed. Considering that the blade vibration information is difficult to measure directly, pressure sensors mounted on the casing are used in this paper to sample the airflow pressure pulsation signal on-line near the rotating impeller, in order to monitor the blade condition indirectly. A major problem is that the abnormal blade vibration amplitude induced by the crack is always small, and this feature information is much weaker still in the pressure signal. Therefore, it is usually difficult to identify the blade defect characteristic frequency embedded in the pressure pulsation signal by general signal processing methods, due to the weakness of the feature information and the interference of strong noise. In this paper, continuous wavelet transform (CWT) is used to pre-process the sampled signal first. Then, a bistable stochastic resonance (SR) method based on a Woods-Saxon and Gaussian (WSG) potential is applied to enhance the weak characteristic frequency contained in the pressure pulsation signal. A genetic algorithm (GA) is used to obtain optimal parameters for this SR system to improve its feature enhancement performance. The analysis of the experimental signal shows the validity of the proposed method for the enhancement and identification of the weak defect characteristic. In the end, a strain test is carried out to further verify the accuracy and reliability of the analysis result obtained from the pressure pulsation signal. PMID:29671821

  8. Novel Tool for Complete Digitization of Paper Electrocardiography Data

    PubMed Central

    Harless, Chris; Shah, Amit J.; Wick, Carson A.; Mcclellan, James H.

    2013-01-01

    Objective: We present a Matlab-based tool to convert electrocardiography (ECG) information from paper charts into digital ECG signals. The tool can be used for long-term retrospective studies of cardiac patients to study the evolving features with prognostic value. Methods and procedures: To perform the conversion, we: 1) detect the graphical grid on ECG charts using grayscale thresholding; 2) digitize the ECG signal based on its contour using a column-wise pixel scan; and 3) use template-based optical character recognition to extract patient demographic information from the paper ECG in order to interface the data with the patients' medical record. To validate the digitization technique: 1) correlation between the digital signals and signals digitized from paper ECG are performed and 2) clinically significant ECG parameters are measured and compared from both the paper-based ECG signals and the digitized ECG. Results: The validation demonstrates a correlation value of 0.85–0.9 between the digital ECG signal and the signal digitized from the paper ECG. There is a high correlation in the clinical parameters between the ECG information from the paper charts and digitized signal, with intra-observer and inter-observer correlations of 0.8–0.9 (p < 0.05), and kappa statistics ranging from 0.85 (inter-observer) to 1.00 (intra-observer). Conclusion: The important features of the ECG signal, especially the QRST complex and the associated intervals, are preserved by obtaining the contour from the paper ECG. The differences between the measures of clinically important features extracted from the original signal and the reconstructed signal are insignificant, thus highlighting the accuracy of this technique. Clinical impact: Using this type of ECG digitization tool to carry out retrospective studies on large databases, which rely on paper ECG records, studies of emerging ECG features can be performed. In addition, this tool can be used to potentially integrate digitized ECG information with digital ECG analysis programs and with the patient's electronic medical record. PMID:26594601
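
    The trace-extraction step (item 2 above) amounts to a column-wise scan of the thresholded chart: for each image column, the row indices of the ink pixels are located and reduced to a single ordinate, which the grid calibration then converts to millivolts and seconds. The sketch below uses assumed calibration constants and a synthetic chart; grid detection and OCR are omitted, and the original tool is Matlab-based rather than Python.

        import numpy as np

        def trace_from_chart(gray, ink_threshold=0.5, mv_per_px=0.01, s_per_px=0.004):
            """Column-wise extraction of an ECG trace from a grayscale chart image in [0, 1].

            Dark pixels (< ink_threshold) are treated as ink; the calibration constants are
            placeholders that would normally be derived from the detected grid.
            """
            n_rows, n_cols = gray.shape
            ordinates = np.full(n_cols, np.nan)
            for col in range(n_cols):
                ink_rows = np.flatnonzero(gray[:, col] < ink_threshold)
                if ink_rows.size:
                    ordinates[col] = ink_rows.mean()          # centre of the trace in this column
            amplitude_mv = (n_rows - ordinates) * mv_per_px   # invert the image y-axis
            time_s = np.arange(n_cols) * s_per_px
            return time_s, amplitude_mv

        chart = np.ones((400, 1000))
        rows = (200 + 50 * np.sin(np.linspace(0, 20, 1000))).astype(int)
        chart[rows, np.arange(1000)] = 0.0                    # synthetic one-pixel trace
        t, ecg_mv = trace_from_chart(chart)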

  9. Joint estimation of subject motion and tracer kinetic parameters of dynamic PET data in an EM framework

    NASA Astrophysics Data System (ADS)

    Jiao, Jieqing; Salinas, Cristian A.; Searle, Graham E.; Gunn, Roger N.; Schnabel, Julia A.

    2012-02-01

    Dynamic Positron Emission Tomography is a powerful tool for quantitative imaging of in vivo biological processes. The long scan durations necessitate motion correction, to maintain the validity of the dynamic measurements, which can be particularly challenging due to the low signal-to-noise ratio (SNR) and spatial resolution, as well as the complex tracer behaviour in the dynamic PET data. In this paper we develop a novel automated expectation-maximisation image registration framework that incorporates temporal tracer kinetic information to correct for inter-frame subject motion during dynamic PET scans. We employ the Zubal human brain phantom to simulate dynamic PET data using SORTEO (a Monte Carlo-based simulator), in order to validate the proposed method for its ability to recover imposed rigid motion. We have conducted a range of simulations using different noise levels, and corrupted the data with a range of rigid motion artefacts. The performance of our motion correction method is compared with pairwise registration using normalised mutual information as a voxel similarity measure (an approach conventionally used to correct for dynamic PET inter-frame motion based solely on intensity information). To quantify registration accuracy, we calculate the target registration error across the images. The results show that our new dynamic image registration method based on tracer kinetics yields better realignment of the simulated datasets, halving the target registration error when compared to the conventional method at small motion levels, as well as yielding smaller residuals in translation and rotation parameters. We also show that our new method is less affected by the low signal in the first few frames, which the conventional method based on normalised mutual information fails to realign.

  10. A validated silver-nanoparticle-enhanced chemiluminescence method for the determination of citalopram in pharmaceutical preparations and human plasma.

    PubMed

    Khan, Muhammad Naeem; Jan, Muhammad Rasul; Shah, Jasmin; Lee, Sang Hak

    2014-05-01

    A simple and sensitive chemiluminescence (CL) method was developed for the determination of citalopram in pharmaceutical preparations and human plasma. The method is based on the enhancement of the weak CL signal of the luminol-H2O2 system. It was found that the CL signal arising from the reaction between alkaline luminol and H2O2 was greatly increased by the addition of silver nanoparticles in the presence of citalopram. Prepared silver nanoparticles (AgNPs) were characterized by UV-visible spectroscopy and transmission electron microscopy (TEM). Various experimental parameters affecting CL intensity were studied and optimized for the determination of citalopram. Under optimized experimental conditions, CL intensity was found to be proportional to the concentration of citalopram in the range 40-2500 ng/mL, with a correlation coefficient of 0.9997. The limit of detection (LOD) and limit of quantification (LOQ) of the devised method were 3.78 and 12.62 ng/mL, respectively. Furthermore, the developed method was found to have excellent reproducibility with a relative standard deviation (RSD) of 3.65% (n = 7). Potential interference by common excipients was also studied. The method was validated statistically using recovery studies and was successfully applied to the determination of citalopram in the pure form, in pharmaceutical preparations and in spiked human plasma samples. Percentage recoveries were found to range from 97.71 to 101.99% for the pure form, from 97.84 to 102.78% for pharmaceutical preparations and from 95.65 to 100.35% for spiked human plasma. Copyright © 2013 John Wiley & Sons, Ltd.
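    For context, the reported LOD and LOQ are consistent with the common 3σ/10σ-over-slope convention applied to a linear calibration. The short sketch below shows that calculation; the calibration points are invented for illustration and are not the authors' data.

```python
import numpy as np

# Illustrative calibration data (concentration in ng/mL vs. CL intensity);
# the numbers below are made up, not the authors' measurements.
conc = np.array([40, 100, 250, 500, 1000, 2500], dtype=float)
intensity = np.array([52, 131, 322, 648, 1290, 3230], dtype=float)

slope, intercept = np.polyfit(conc, intensity, 1)
residual_sd = np.std(intensity - (slope * conc + intercept), ddof=2)

lod = 3 * residual_sd / slope     # limit of detection (3*sigma/slope convention)
loq = 10 * residual_sd / slope    # limit of quantification (10*sigma/slope)
print(f"LOD = {lod:.2f} ng/mL, LOQ = {loq:.2f} ng/mL")
```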

  11. T1 mapping with the variable flip angle technique: A simple correction for insufficient spoiling of transverse magnetization.

    PubMed

    Baudrexel, Simon; Nöth, Ulrike; Schüre, Jan-Rüdiger; Deichmann, Ralf

    2018-06-01

    The variable flip angle method derives T1 maps from radiofrequency-spoiled gradient-echo data sets, acquired with different flip angles α. Because the method assumes validity of the Ernst equation, insufficient spoiling of transverse magnetization yields errors in T1 estimation, depending on the chosen radiofrequency-spoiling phase increment (Δϕ). This paper presents a versatile correction method that uses modified flip angles α' to restore the validity of the Ernst equation. Spoiled gradient-echo signals were simulated for three commonly used phase increments Δϕ (50°/117°/150°), different values of α, repetition time (TR), T1, and a T2 of 85 ms. For each parameter combination, α' (for which the Ernst equation yielded the same signal) and a correction factor CΔϕ(α, TR, T1) = α'/α were determined. CΔϕ was found to be independent of T1 and was fitted as a polynomial CΔϕ(α, TR), allowing α' to be calculated for any protocol using this Δϕ. The accuracy of the correction method for T2 values deviating from 85 ms was also determined. The method was tested in vitro and in vivo for variable flip angle scans with different acquisition parameters. The technique considerably improved the accuracy of variable flip angle-based T1 maps in vitro and in vivo. The proposed method allows for a simple correction of insufficient spoiling in gradient-echo data. The required polynomial parameters are supplied for three common Δϕ. Magn Reson Med 79:3082-3092, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
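    The correction slots naturally into the standard linearized variable-flip-angle fit, since only the flip angles change. The sketch below shows that fit with optional corrected angles α' = CΔϕ(α, TR)·α; the numeric values are illustrative, and the polynomial evaluation of CΔϕ itself is not reproduced here.

```python
import numpy as np

def vfa_t1(signals, flip_deg, tr_ms, corr=None):
    """Estimate T1 from spoiled gradient-echo data at several flip angles.

    Linearized Ernst-equation fit (S/sin(a) vs. S/tan(a)); `corr` holds
    multiplicative correction factors C(alpha, TR) as described above, so
    the fit uses the modified angles a' = C * a.  Values are illustrative.
    """
    a = np.deg2rad(np.asarray(flip_deg, float))
    if corr is not None:
        a = a * np.asarray(corr, float)       # restore Ernst-equation validity
    s = np.asarray(signals, float)
    y = s / np.sin(a)
    x = s / np.tan(a)
    slope, _ = np.polyfit(x, y, 1)            # slope = exp(-TR/T1)
    return -tr_ms / np.log(slope)

# Example: two-point VFA protocol (synthetic numbers, not from the paper)
print(vfa_t1([8.5, 12.0], flip_deg=[4, 18], tr_ms=16.4, corr=[0.97, 0.99]))
```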

  12. Capillary red blood cell velocimetry by phase-resolved optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Tang, Jianbo; Erdener, Sefik Evren; Fu, Buyin; Boas, David A.

    2018-02-01

    Quantitative measurement of blood flow velocity in capillaries is challenging due to their small size (around 5-10 μm) and the discontinuous, single-file flow of red blood cells (RBCs) in a capillary. In this work, we present a phase-resolved Optical Coherence Tomography (OCT) method for accurate measurement of RBC speed in cerebral capillaries. To account for the discontinuity of RBCs flowing in capillaries, we applied an M-mode scanning strategy that repeated A-scans at each scanning position for an extended time. As the capillary size is comparable to the OCT resolution (3.5×3.5×3.5 μm), we applied a high-pass filter to remove the stationary signal component so that the phase information of the dynamic component (i.e., from the moving RBC) could be enhanced to provide an accurate estimate of the RBC axial speed. The phase-resolved OCT method accurately quantifies the axial velocity of RBCs from the phase shift of the dynamic component of the signal. We validated our measurements against RBC passage velocimetry based on the signal magnitude of the same OCT time series data. The proposed method proved to be a robust way of mapping capillary RBC speeds across the micro-vascular network.
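    The axial speed follows from the Doppler relation v_z = λ0·Δφ/(4π·n·T) applied to the phase shift of the dynamic (high-pass-filtered) component. A minimal sketch, with mean subtraction as a crude stand-in for the high-pass filter and with illustrative system parameters rather than those of the instrument described above:

```python
import numpy as np

def rbc_axial_speed(complex_ascans, wavelength_m=1310e-9, n_tissue=1.35,
                    ascan_interval_s=1 / 100e3):
    """Axial RBC speed from repeated A-scans at one position (M-mode).

    complex_ascans: 1-D complex OCT signal at a capillary voxel over time.
    The numeric defaults are illustrative, not the system parameters used
    in the paper.
    """
    # remove the (quasi-)static scattering component (stand-in for the high-pass filter)
    dynamic = complex_ascans - np.mean(complex_ascans)
    # phase shift between successive A-scans (Kasai-style autocorrelation)
    dphi = np.angle(np.sum(dynamic[1:] * np.conj(dynamic[:-1])))
    # Doppler relation: v_z = lambda0 * dphi / (4 * pi * n * T)
    return wavelength_m * dphi / (4 * np.pi * n_tissue * ascan_interval_s)
```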

  13. Accurate determination of brain metabolite concentrations using ERETIC as external reference.

    PubMed

    Zoelch, Niklaus; Hock, Andreas; Heinzer-Schweizer, Susanne; Avdievitch, Nikolai; Henning, Anke

    2017-08-01

    Magnetic Resonance Spectroscopy (MRS) can provide in vivo metabolite concentrations in standard concentration units if a reliable reference signal is available. For 1H MRS in the human brain, typically the signal from the tissue water is used as the (internal) reference signal. However, a concentration determination based on the tissue water signal most often requires a reliable estimate of the water concentration present in the investigated tissue. Especially in clinically interesting cases, this estimation might be difficult. To avoid assumptions about the water in the investigated tissue, the Electric REference To access In vivo Concentrations (ERETIC) method has been proposed. In this approach, the metabolite signal is compared with a reference signal acquired in a phantom and potential coil-loading differences are corrected using a synthetic reference signal. The aim of this study, conducted with a transceiver quadrature head coil, was to increase the accuracy of the ERETIC method by correcting the influence of spatial B1 inhomogeneities and to simplify the quantification with ERETIC by incorporating an automatic phase correction for the ERETIC signal. Transmit field (B1+) differences are minimized with a volume-selective power optimization, whereas reception sensitivity changes are corrected using contrast-minimized images of the brain and by adapting the voxel location in the phantom measurement closely to the position measured in vivo. By applying the proposed B1 correction scheme, the mean metabolite concentrations determined with ERETIC in 21 healthy subjects at three different positions agree with concentrations derived with the tissue water signal as reference. In addition, brain water concentrations determined with ERETIC were in agreement with estimations derived using tissue segmentation and literature values for relative water densities. Based on the results, the ERETIC method presented here is a valid tool to derive in vivo metabolite concentrations, with potential advantages compared with internal water referencing in diseased tissue. Copyright © 2017 John Wiley & Sons, Ltd.

  14. Photoacoustic signals denoising of the glucose aqueous solutions using an improved wavelet threshold method

    NASA Astrophysics Data System (ADS)

    Ren, Zhong; Liu, Guodong; Xiong, Zhihua

    2016-10-01

    Denoising the photoacoustic signals of glucose is one of the most important steps in quality identification of fruit, because the real-time photoacoustic signals of glucose are easily interfered with by various kinds of noise. To remove the noise and some useless information, an improved wavelet threshold function is proposed. Compared with the traditional wavelet hard and soft threshold functions, the improved wavelet threshold function can overcome the pseudo-oscillation effect in the denoised photoacoustic signals owing to its continuity, and the error between the denoised signals and the original signals can be decreased. To validate the feasibility of denoising with the improved wavelet threshold function, denoising simulation experiments based on MATLAB programming were performed. In the simulation experiments, a standard test signal was used, and three other denoising methods were applied and compared with the improved wavelet threshold function. The signal-to-noise ratio (SNR) and root-mean-square error (RMSE) values were used to evaluate the denoising performance of the improved wavelet threshold function. The experimental results demonstrate that the SNR value of the improved wavelet threshold function is the largest and the RMSE value is the smallest, which verifies that denoising with the improved wavelet threshold function is feasible. Finally, the improved wavelet threshold function was used to remove the noise from the photoacoustic signals of the glucose solutions, where the denoising effect was also very good. Therefore, the improved wavelet threshold function denoising proposed in this paper has potential value in the field of denoising photoacoustic signals.
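    For reference, the sketch below shows wavelet-threshold denoising with a continuous threshold function that behaves like soft thresholding near the threshold and like hard thresholding for large coefficients. It is one common form of an "improved" threshold function, not necessarily the exact function proposed in the paper; wavelet, level and threshold choices are illustrative.

```python
import numpy as np
import pywt

def improved_threshold(w, lam, alpha=2.0):
    """A continuous compromise between hard and soft thresholding.

    Continuous at |w| = lam (avoids pseudo-oscillation), shrinks small
    coefficients like the soft rule and approaches the hard rule for
    large |w|.  Not necessarily the exact function proposed in the paper.
    """
    shrink = lam * np.exp(-alpha * (np.abs(w) - lam))
    out = np.sign(w) * (np.abs(w) - shrink)
    return np.where(np.abs(w) >= lam, out, 0.0)

def denoise(signal, wavelet='db4', level=4):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745           # noise estimate
    lam = sigma * np.sqrt(2 * np.log(len(signal)))           # universal threshold
    coeffs = [coeffs[0]] + [improved_threshold(c, lam) for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)
```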

  15. Nonlinear Blind Compensation for Array Signal Processing Application

    PubMed Central

    Ma, Hong; Jin, Jiang; Zhang, Hua

    2018-01-01

    Recently, the nonlinear blind compensation technique has attracted growing attention in array signal processing applications. However, due to the nonlinear distortion introduced by the array receiver, which consists of multi-channel radio frequency (RF) front-ends, it is difficult to estimate the parameters of the array signal accurately. A novel nonlinear blind compensation algorithm is proposed to mitigate the nonlinearity of the array receiver and improve its spurious-free dynamic range (SFDR), enabling more precise estimation of target-signal parameters such as their two-dimensional directions of arrival (2-D DOAs). The suggested method is designed as follows: the nonlinear model parameters of any single channel of the RF front-end are extracted and used to synchronously compensate the nonlinear distortion of the entire receiver. Furthermore, a verification experiment on the array signal from a uniform circular array (UCA) is used to test the validity of our approach. The real-world experimental results show that the SFDR of the receiver is enhanced, leading to a significant improvement of the 2-D DOA estimation performance for weak target signals. These results demonstrate that our nonlinear blind compensation algorithm is effective in estimating the parameters of weak array signals in the presence of strong jammers. PMID:29690571

  16. Halo-independent determination of the unmodulated WIMP signal in DAMA: the isotropic case

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gondolo, Paolo; Scopel, Stefano, E-mail: paolo.gondolo@utah.edu, E-mail: scopel@sogang.ac.kr

    2017-09-01

    We present a halo-independent determination of the unmodulated signal corresponding to the DAMA modulation if interpreted as due to dark matter weakly interacting massive particles (WIMPs). First we show how a modulated signal gives information on the WIMP velocity distribution function in the Galactic rest frame from which the unmodulated signal descends. Then we describe a mathematically-sound profile likelihood analysis in which the likelihood is profiled over a continuum of nuisance parameters (namely, the WIMP velocity distribution). As a first application of the method, which is very general and valid for any class of velocity distributions, we restrict the analysis to velocity distributions that are isotropic in the Galactic frame. In this way we obtain halo-independent maximum-likelihood estimates and confidence intervals for the DAMA unmodulated signal. We find that the estimated unmodulated signal is in line with expectations for a WIMP-induced modulation and is compatible with the DAMA background+signal rate. Specifically, for the isotropic case we find that the modulated amplitude ranges between a few percent and about 25% of the unmodulated amplitude, depending on the WIMP mass.

  17. Integration of new biological and physical retrospective dosimetry methods into EU emergency response plans - joint RENEB and EURADOS inter-laboratory comparisons.

    PubMed

    Ainsbury, Elizabeth; Badie, Christophe; Barnard, Stephen; Manning, Grainne; Moquet, Jayne; Abend, Michael; Antunes, Ana Catarina; Barrios, Lleonard; Bassinet, Celine; Beinke, Christina; Bortolin, Emanuela; Bossin, Lily; Bricknell, Clare; Brzoska, Kamil; Buraczewska, Iwona; Castaño, Carlos Huertas; Čemusová, Zina; Christiansson, Maria; Cordero, Santiago Mateos; Cosler, Guillaume; Monaca, Sara Della; Desangles, François; Discher, Michael; Dominguez, Inmaculada; Doucha-Senf, Sven; Eakins, Jon; Fattibene, Paola; Filippi, Silvia; Frenzel, Monika; Georgieva, Dimka; Gregoire, Eric; Guogyte, Kamile; Hadjidekova, Valeria; Hadjiiska, Ljubomira; Hristova, Rositsa; Karakosta, Maria; Kis, Enikő; Kriehuber, Ralf; Lee, Jungil; Lloyd, David; Lumniczky, Katalin; Lyng, Fiona; Macaeva, Ellina; Majewski, Matthaeus; Vanda Martins, S; McKeever, Stephen W S; Meade, Aidan; Medipally, Dinesh; Meschini, Roberta; M'kacher, Radhia; Gil, Octávia Monteiro; Montero, Alegria; Moreno, Mercedes; Noditi, Mihaela; Oestreicher, Ursula; Oskamp, Dominik; Palitti, Fabrizio; Palma, Valentina; Pantelias, Gabriel; Pateux, Jerome; Patrono, Clarice; Pepe, Gaetano; Port, Matthias; Prieto, María Jesús; Quattrini, Maria Cristina; Quintens, Roel; Ricoul, Michelle; Roy, Laurence; Sabatier, Laure; Sebastià, Natividad; Sholom, Sergey; Sommer, Sylwester; Staynova, Albena; Strunz, Sonja; Terzoudi, Georgia; Testa, Antonella; Trompier, Francois; Valente, Marco; Hoey, Olivier Van; Veronese, Ivan; Wojcik, Andrzej; Woda, Clemens

    2017-01-01

    RENEB, 'Realising the European Network of Biodosimetry and Physical Retrospective Dosimetry,' is a network for research and emergency response mutual assistance in biodosimetry within the EU. Within this extremely active network, a number of new dosimetry methods have recently been proposed or developed. There is a requirement to test and/or validate these candidate techniques, and inter-comparison exercises are a well-established method for such validation. The authors present details of inter-comparisons of four such new methods: dicentric chromosome analysis including telomere and centromere staining; the gene expression assay carried out in whole blood; Raman spectroscopy on blood lymphocytes; and detection of radiation-induced thermoluminescent signals in glass screens taken from mobile phones. In general, the results show good agreement between the laboratories and methods within the expected levels of uncertainty, and thus demonstrate that each of the candidate techniques has considerable potential. Further work is required before the new methods can be included within the suite of reliable dosimetry methods for use by RENEB partners and others in routine and emergency response scenarios.

  18. Locating low-frequency earthquakes using amplitude signals from seismograph stations: Examples from events at Montserrat, West Indies and from synthetic data

    NASA Astrophysics Data System (ADS)

    Jolly, A.; Jousset, P.; Neuberg, J.

    2003-04-01

    We determine locations for low-frequency earthquakes occurring prior to a collapse on June 25th, 1997 using signal amplitudes from a 7-station local seismograph network at the Soufriere Hills volcano on Montserrat, West Indies. Locations are determined by averaging the signal amplitude over the event waveform and inverting these data using an assumed amplitude decay model comprising geometrical spreading and attenuation. The resulting locations are centered beneath the active dome from 500 to 2000 m below sea level, assuming body-wave geometrical spreading and a quality factor of Q=22. Locations for the same events shift systematically shallower by about 500 m when surface-wave geometrical spreading is assumed. The locations are consistent with results obtained using arrival-time methods. The validity of the method is tested against synthetic low-frequency events constructed from a 2-D finite difference model including visco-elastic properties. Two example events are tested: one from a point source triggered in a low-velocity conduit ranging between 100-1100 m below the surface, and the second triggered in a conduit located 1500-2500 m below the surface. The resulting seismograms have emergent onsets and extended codas and include the effect of conduit resonance. Employing geometrical spreading and attenuation from the finite-difference modelling, we obtain locations within the respective model conduits, validating our approach. The location depths are sensitive to the assumed geometrical spreading and Q model. We can distinguish between two sources separated by about 1000 m only if we know the decay parameters.
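    The amplitude-based location idea can be sketched as a grid search: for each candidate source, undo geometrical spreading and attenuation at every station and keep the node where the implied source amplitude is most consistent. The decay model, Q value and other parameters below are illustrative defaults in the spirit of the abstract, not the authors' implementation.

```python
import numpy as np

def locate_by_amplitude(station_xyz, amplitudes, grid_xyz,
                        freq_hz=1.0, q_factor=22.0, velocity_m_s=1500.0,
                        surface_waves=False):
    """Grid-search source location from mean signal amplitudes.

    Assumes A_i = A0 * G(r_i) * exp(-pi*f*r_i/(Q*v)), with G(r) = 1/r for
    body waves and 1/sqrt(r) for surface waves.  The best grid node is the
    one for which the source term A0, back-computed at every station, is
    most consistent (smallest spread).  Parameter values are illustrative.
    """
    log_a = np.log(np.asarray(amplitudes, float))
    best, best_spread = None, np.inf
    for node in grid_xyz:
        r = np.linalg.norm(station_xyz - node, axis=1)
        geom = 0.5 * np.log(r) if surface_waves else np.log(r)
        atten = np.pi * freq_hz * r / (q_factor * velocity_m_s)
        log_a0 = log_a + geom + atten          # undo spreading and attenuation
        spread = np.var(log_a0)
        if spread < best_spread:
            best, best_spread = node, spread
    return best
```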

  19. Comparison of two weighted integration models for the cueing task: linear and likelihood

    NASA Technical Reports Server (NTRS)

    Shimozaki, Steven S.; Eckstein, Miguel P.; Abbey, Craig K.

    2003-01-01

    In a task in which the observer must detect a signal at two locations, presenting a precue that predicts the location of a signal leads to improved performance with a valid cue (signal location matches the cue) compared to an invalid cue (signal location does not match the cue). The cue validity effect has often been explained by a limited-capacity attentional mechanism improving the perceptual quality at the cued location. Alternatively, the cueing effect can also be explained by unlimited-capacity models that assume a weighted combination of noisy responses across the two locations. We compare two weighted integration models: a linear model and a sum of weighted likelihoods model based on a Bayesian observer. While qualitatively these models are similar, quantitatively they predict different cue validity effects as the signal-to-noise ratio (SNR) increases. To test these models, three observers performed a cued discrimination task of Gaussian targets with an 80% valid precue across a broad range of SNRs. A limited-capacity attentional switching model was also analyzed and rejected. The sum of weighted likelihoods model best described the psychophysical results, suggesting that human observers approximate a weighted combination of likelihoods, and not a weighted linear combination.
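    A minimal sketch of the sum-of-weighted-likelihoods decision variable for a two-location detection trial, assuming unit-variance Gaussian internal responses; the weights mirror the 80% cue validity. This is a sketch of the model class compared in the paper, not the authors' code.

```python
import numpy as np
from scipy.stats import norm

def weighted_likelihood_decision(x_cued, x_uncued, d_prime,
                                 w_cued=0.8, w_uncued=0.2):
    """Sum-of-weighted-likelihoods decision variable for one trial.

    x_cued, x_uncued: noisy internal responses at the two locations
    (unit-variance Gaussians); d_prime: the signal-to-noise ratio.
    The observer responds 'signal present' when the weighted sum of
    likelihood ratios exceeds a criterion.
    """
    def likelihood_ratio(x):            # signal-present vs. signal-absent
        return norm.pdf(x, loc=d_prime) / norm.pdf(x, loc=0.0)
    return w_cued * likelihood_ratio(x_cued) + w_uncued * likelihood_ratio(x_uncued)
```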

  20. A machine learning approach to multi-level ECG signal quality classification.

    PubMed

    Li, Qiao; Rajagopalan, Cadathur; Clifford, Gari D

    2014-12-01

    Current electrocardiogram (ECG) signal quality assessment studies have aimed to provide a two-level classification: clean or noisy. However, clinical usage demands more specific noise level classification for varying applications. This work outlines a five-level ECG signal quality classification algorithm. A total of 13 signal quality metrics were derived from segments of ECG waveforms, which were labeled by experts. A support vector machine (SVM) was trained to perform the classification, tested on a simulated dataset, and validated using data from the MIT-BIH arrhythmia database (MITDB). The simulated training and test datasets were created by selecting clean segments of the ECG in the 2011 PhysioNet/Computing in Cardiology Challenge database and adding three types of real ECG noise at different signal-to-noise ratio (SNR) levels from the MIT-BIH Noise Stress Test Database (NSTDB). The MITDB was re-annotated for five levels of signal quality. Different combinations of the 13 metrics were trained and tested on the simulated datasets, and the best combination, which produced the highest classification accuracy, was selected and validated on the MITDB. Performance was assessed using classification accuracy (Ac) and a single-class overlap accuracy (OAc), which assumes that a segment classified into an adjacent class is acceptable. An Ac of 80.26% and an OAc of 98.60% on the test set were obtained by selecting 10 metrics, while 57.26% (Ac) and 94.23% (OAc) were obtained for the unseen MITDB validation data without retraining. With fivefold cross-validation, an Ac of 88.07±0.32% and an OAc of 99.34±0.07% were obtained on the validation fold of the MITDB. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
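    A hedged sketch of the classification step with scikit-learn, assuming a feature matrix of quality metrics and expert labels on a 1-5 scale; the data below are synthetic placeholders and the SVM settings are illustrative, not the configuration used in the study.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# X: one row per ECG segment, columns = signal quality metrics (the paper
# used up to 13); y: expert quality labels 1..5.  Data here are synthetic.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 13))
y = rng.integers(1, 6, size=500)

clf = make_pipeline(StandardScaler(), SVC(kernel='rbf', C=1.0, gamma='scale'))
acc = cross_val_score(clf, X, y, cv=5).mean()      # fivefold cross-validation
print(f"mean CV accuracy: {acc:.3f}")
```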

  1. Prospective validation of pathologic complete response models in rectal cancer: Transferability and reproducibility.

    PubMed

    van Soest, Johan; Meldolesi, Elisa; van Stiphout, Ruud; Gatta, Roberto; Damiani, Andrea; Valentini, Vincenzo; Lambin, Philippe; Dekker, Andre

    2017-09-01

    Multiple models have been developed to predict pathologic complete response (pCR) in locally advanced rectal cancer patients. Unfortunately, validation of these models normally omits the implications of cohort differences on prediction model performance. In this work, we perform a prospective validation of three pCR models, including an assessment of whether this validation targets transferability or reproducibility (cohort differences) of the given models. We applied a novel methodology, the cohort differences model, to predict whether a patient belongs to the training or to the validation cohort. If the cohort differences model performs well, it suggests a large difference in cohort characteristics, meaning we would be validating the transferability of the model rather than its reproducibility. We tested our method in a prospective validation of three existing models for pCR prediction in 154 patients. Our results showed a large difference between training and validation cohorts for one of the three tested models [area under the receiver operating curve (AUC) of the cohort differences model: 0.85], signaling that the validation leans towards transferability. Two of the three models had a lower AUC in validation (0.66 and 0.58); one model showed a higher AUC in the validation cohort (0.70). We have successfully applied a new methodology in the validation of three prediction models, which allows us to indicate whether a validation targeted transferability (large differences between training/validation cohorts) or reproducibility (small cohort differences). © 2017 American Association of Physicists in Medicine.
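    The cohort differences model can be sketched as a classifier that tries to tell the two cohorts apart from the same patient features used by the prediction models: a high AUC means the cohorts differ substantially (the validation tests transferability), while an AUC near 0.5 means it mostly tests reproducibility. The snippet below is a sketch of that idea with an assumed logistic-regression classifier, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

def cohort_difference_auc(X_train_cohort, X_valid_cohort):
    """AUC of a classifier that predicts cohort membership from patient features."""
    X = np.vstack([X_train_cohort, X_valid_cohort])
    y = np.concatenate([np.zeros(len(X_train_cohort)),
                        np.ones(len(X_valid_cohort))])
    # out-of-fold probabilities so the AUC is not optimistically biased
    probs = cross_val_predict(LogisticRegression(max_iter=1000), X, y,
                              cv=5, method='predict_proba')[:, 1]
    return roc_auc_score(y, probs)
```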

  2. Fault diagnosis of rolling element bearing using a new optimal scale morphology analysis method.

    PubMed

    Yan, Xiaoan; Jia, Minping; Zhang, Wan; Zhu, Lin

    2018-02-01

    Periodic transient impulses are key indicators of rolling element bearing defects. Efficient extraction of the impact impulses associated with these defects is essential for precise detection of bearing faults. However, the transient features of rolling element bearings are generally immersed in stochastic noise and harmonic interference. Therefore, in this paper, a new optimal scale morphology analysis method, named adaptive multiscale combination morphological filter-hat transform (AMCMFH), is proposed for rolling element bearing fault diagnosis, which can both reduce stochastic noise and preserve signal details. In this method, firstly, an adaptive selection strategy based on the feature energy factor (FEF) is introduced to determine the optimal structuring element (SE) scale of the multiscale combination morphological filter-hat transform (MCMFH). Subsequently, MCMFH with the optimal SE scale is applied to obtain the impulse components from the bearing vibration signal. Finally, bearing fault types are confirmed by extracting the defect frequency from the envelope spectrum of the impulse components. The validity of the proposed method is verified through simulated analysis and bearing vibration data acquired from a laboratory bench. The results indicate that the proposed method has a good capability to recognize localized faults appearing on rolling element bearings from the vibration signal. The study supplies a novel technique for the detection of faulty bearings. Copyright © 2018. Published by Elsevier Ltd.
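    The final step, reading the defect frequency from the envelope spectrum of the extracted impulse components, can be sketched as follows; the AMCMFH filtering itself is not reproduced here, and the Hilbert-transform demodulation shown is the generic approach rather than the authors' specific pipeline.

```python
import numpy as np
from scipy.signal import hilbert

def envelope_spectrum(vibration, fs):
    """Envelope spectrum of a (filtered) bearing vibration signal.

    The defect type is confirmed by locating the characteristic defect
    frequency (and its harmonics) among the envelope-spectrum peaks.
    """
    envelope = np.abs(hilbert(vibration))           # demodulate the impulses
    envelope -= envelope.mean()
    spectrum = np.abs(np.fft.rfft(envelope)) / len(envelope)
    freqs = np.fft.rfftfreq(len(envelope), d=1.0 / fs)
    return freqs, spectrum
```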

  3. A systematic study of the effect of low pH acid treatment on anti-drug antibodies specific for a domain antibody therapeutic: Impact on drug tolerance, assay sensitivity and post-validation method assessment of ADA in clinical serum samples.

    PubMed

    Kavita, Uma; Duo, Jia; Crawford, Sean M; Liu, Rong; Valcin, Joan; Gleason, Carol; Dong, Huijin; Gadkari, Snaehal; Dodge, Robert W; Pillutla, Renuka C; DeSilva, Binodh S

    2017-09-01

    We developed a homogeneous bridging anti-drug antibody (ADA) assay on an electrochemiluminescent immunoassay (ECLIA) platform to support the immunogenicity evaluation of a dimeric domain antibody (dAb) therapeutic in clinical studies. During method development we evaluated the impact of different types of acid at various pH levels on polyclonal and monoclonal ADA controls of differing affinities and on/off rates. The data show for the first time that acids of different pH can have a differential effect on ADA of various affinities, and this in turn impacts assay sensitivity and drug tolerance as defined by these surrogate controls. Acid treatment led to a reduction in signal of intermediate- and low-affinity ADA, but not high-affinity or polyclonal ADA. We also found that acid pretreatment is a requisite for dissociation of drug-bound high-affinity ADA, but not for low-affinity ADA-drug complexes. Although we were unable to identify an acid that would allow 100% retrieval of the ADA signal post-treatment, use of glycine pH 3.0 enabled the detection of low-, intermediate- and high-affinity antibodies (Abs) to various extents. Following optimization, the ADA assay method was validated for clinical sample analysis. Consistencies within various parameters of the clinical data, such as dose-dependent increases in ADA rates and titers, were observed, indicating a reliable ADA method. Pre- and post-treatment ADA-negative or -positive clinical samples without detectable drug were reanalyzed in the absence of acid treatment or in the presence of added exogenous drug, respectively, to further assess the effectiveness of the final acid treatment procedure. The overall ADA results indicate that assay conditions developed and validated based on surrogate controls sufficed to provide a reliable clinical data set. The effect of low pH acid treatment on possible pre-existing ADA or soluble multimeric target in normal human serum was also evaluated, and preliminary data indicate that acid type and pH also affect drug-specific signal differentially in individual samples. The results presented here represent the most extensive analyses to date on acid treatment of a wide range of ADA affinities to explore sensitivity and drug tolerance issues. They have led to a refinement of our current best practices for ADA method development and provide a depth of data with which to interrogate low-pH-mediated immune complex dissociation. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Validated LC–MS-MS Method for Multiresidual Analysis of 13 Illicit Phenethylamines in Amniotic Fluid

    PubMed Central

    Burrai, Lucia; Nieddu, Maria; Carta, Antonio; Trignano, Claudia; Sanna, Raimonda; Boatto, Gianpiero

    2016-01-01

    A multi-residue analytical method was developed for the determination in amniotic fluid (AF) of 13 illicit phenethylamines, including 12 compounds never investigated in this matrix before. Samples were subjected to solid-phase extraction using hydrophilic–lipophilic balance cartridges, which gave good recoveries and low matrix effects on analysis of the extracts. The quantification was performed by liquid chromatography electrospray tandem mass spectrometry. The water–acetonitrile mobile phase containing 0.1% formic acid, used with a C18 reversed-phase column, provided adequate separation, resolution and signal-to-noise ratio for the analytes and the internal standard. The final optimized method was validated according to international guidelines. A monitoring campaign to assess fetal exposure to these 13 substances of abuse was performed on AF test samples obtained from pregnant women. All mothers (n = 194) reported no use of drugs of abuse during pregnancy, and this was confirmed by the analytical data. PMID:26755540

  5. Validation of powder X-ray diffraction following EN ISO/IEC 17025.

    PubMed

    Eckardt, Regina; Krupicka, Erik; Hofmeister, Wolfgang

    2012-05-01

    Powder X-ray diffraction (PXRD) is used widely in forensic science laboratories, with a main focus on qualitative phase identification. Little is found in the literature on the validation of PXRD in the field of forensic sciences. According to EN ISO/IEC 17025, the method has to be tested for several parameters. Trueness, specificity, and selectivity of PXRD were tested using certified reference materials or combinations thereof. All three tested parameters showed the secure performance of the method. Sample preparation errors were simulated to evaluate the robustness of the method. These errors were either easily detected by the operator or nonsignificant for phase identification. In the case of the detection limit, a statistical evaluation of the signal-to-noise ratio showed that a peak criterion of three sigma is inadequate, and recommendations for a more realistic peak criterion are given. Finally, the results of an international proficiency test confirmed the secure performance of PXRD. © 2012 American Academy of Forensic Sciences.

  6. Generalized ISAR--part II: interferometric techniques for three-dimensional location of scatterers.

    PubMed

    Given, James A; Schmidt, William R

    2005-11-01

    This paper is the second part of a study dedicated to optimizing diagnostic inverse synthetic aperture radar (ISAR) studies of large naval vessels. The method developed here provides accurate determination of the position of important radio-frequency scatterers by combining accurate knowledge of ship position and orientation with specialized signal processing. The method allows for the simultaneous presence of substantial Doppler returns from both change of roll angle and change of aspect angle by introducing generalized ISAR rates. The first paper provides two modes of interpreting ISAR plots, one valid when roll Doppler is dominant, the other valid when the aspect-angle Doppler is dominant. Here, we provide, for each type of ISAR plot technique, a corresponding interferometric ISAR (InSAR) technique. The former, aspect-angle dominated InSAR, is a generalization of standard InSAR; the latter, roll-angle dominated InSAR, seems to be new to this work. Both methods are shown to be efficient at identifying localized scatterers under simulation conditions.

  7. Nuclear magnetic resonance signal dynamics of liquids in the presence of distant dipolar fields, revisited

    PubMed Central

    Barros, Wilson; Gochberg, Daniel F.; Gore, John C.

    2009-01-01

    The description of the nuclear magnetic resonance magnetization dynamics in the presence of long-range dipolar interactions, which is based upon approximate solutions of Bloch–Torrey equations including the effect of a distant dipolar field, has been revisited. New experiments show that approximate analytic solutions have a broader regime of validity as well as dependencies on pulse-sequence parameters that seem to have been overlooked. In order to explain these experimental results, we developed a new method consisting of calculating the magnetization via an iterative formalism where both diffusion and distant dipolar field contributions are treated as integral operators incorporated into the Bloch–Torrey equations. The solution can be organized as a perturbative series, whereby access to higher order terms allows one to set better boundaries on validity regimes for analytic first-order approximations. Finally, the method legitimizes the use of simple analytic first-order approximations under less demanding experimental conditions, it predicts new pulse-sequence parameter dependencies for the range of validity, and clarifies weak points in previous calculations. PMID:19425789

  8. A universal approach to determine footfall timings from kinematics of a single foot marker in hoofed animals

    PubMed Central

    Clayton, Hilary M.

    2015-01-01

    The study of animal movement commonly requires the segmentation of continuous data streams into individual strides. The use of forceplates and foot-mounted accelerometers readily allows the detection of the foot-on and foot-off events that define a stride. However, when relying on optical methods such as motion capture, there is a lack of validated, robust, universally applicable stride event detection methods. To date, no method has been validated for movement on a circle, while algorithms are commonly specific to front/hind limbs or gait. In this study, we aimed to develop and validate kinematic stride segmentation methods applicable to movement on the straight line and circle at walk and trot, which rely exclusively on a single dorsal hoof marker. The advantage of such marker placement is its robustness to marker loss and occlusion. Eight horses walked and trotted on a straight line and in a circle over an array of multiple forceplates. Kinetic events were detected based on the vertical force profile and used as the reference values. Kinematic events were detected from the displacement, velocity or acceleration signals of the dorsal hoof marker using, depending on the algorithm, (i) defined thresholds applied to the derived movement signals or (ii) specific events in the derived movement signals. Method comparison was performed by calculating limits of agreement, accuracy, between-horse precision and within-horse precision based on the differences between kinetic and kinematic events. In addition, we examined the effect of force thresholds ranging from 50 to 150 N on the timings of kinetic events. The two approaches resulted in very good and comparable performance: of the 3,074 processed footfall events, 95% of individual foot-on and foot-off events differed by no more than 26 ms from the kinetic event, with average accuracy between −11 and 10 ms and average within- and between-horse precision ≤8 ms. While the event-based method may be less likely to suffer from scaling effects, on soft ground the threshold-based method may prove more valuable. While we found that the use of velocity thresholds for foot-on detection results in biased event estimates for the foot on the inside of the circle at trot, adjusting thresholds for this condition negated the effect. For the final four algorithms, we found no noteworthy bias between conditions or between front- and hind-foot timings. Different force thresholds in the range of 50 to 150 N had the greatest systematic effect on foot-off estimates in the hind limbs (up to 16 ms per condition on average), which was greater than the effect on foot-on estimates or foot-off estimates in the forelimbs (up to ±7 ms per condition on average). PMID:26157641
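    A minimal sketch of the threshold-based family of algorithms evaluated above, using a speed threshold on a single dorsal hoof marker to split the trial into stance and swing; the signal, axis and threshold value are illustrative and may differ from the validated algorithms.

```python
import numpy as np

def detect_footfalls(hoof_x, fs, v_thresh=0.05):
    """Threshold-based foot-on / foot-off detection from one hoof marker.

    hoof_x: horizontal displacement of the dorsal hoof marker (metres);
    fs: capture rate (Hz); v_thresh: speed below which the hoof is taken
    to be in stance (m/s).  Returns event times in seconds.
    """
    speed = np.abs(np.gradient(hoof_x) * fs)       # marker speed
    stance = speed < v_thresh                      # True while hoof is on the ground
    changes = np.flatnonzero(np.diff(stance.astype(int)))
    foot_on = changes[stance[changes + 1]] + 1     # swing -> stance transitions
    foot_off = changes[~stance[changes + 1]] + 1   # stance -> swing transitions
    return foot_on / fs, foot_off / fs
```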

  9. Synthesis of vibroarthrographic signals in knee osteoarthritis diagnosis training.

    PubMed

    Shieh, Chin-Shiuh; Tseng, Chin-Dar; Chang, Li-Yun; Lin, Wei-Chun; Wu, Li-Fu; Wang, Hung-Yu; Chao, Pei-Ju; Chiu, Chien-Liang; Lee, Tsair-Fwu

    2016-07-19

    Vibroarthrographic (VAG) signals are useful indicators of knee osteoarthritis (OA) status. The objective was to build a template database of knee crepitus sounds. Trainees can practice with the template database to shorten the time needed to learn to diagnose OA. Knee sound signals were obtained using an innovative stethoscope device with a goniometer. Each knee sound signal was recorded with a Kellgren-Lawrence (KL) grade. The sound signal was segmented according to the goniometer data. The signal was Fourier transformed on the correlated frequency segment, and an inverse Fourier transform was performed to obtain the time-domain signal. A Haar wavelet transform was then applied. The median and the mean of the wavelet coefficients were used to inverse-transform a synthesized signal for each KL category. The quality of the synthesized signals was assessed by a clinician. The sample signals were evaluated using the two algorithms (median and mean). The accuracy rate of the median-coefficient algorithm (93%) was better than that of the mean-coefficient algorithm (88%) when the synthesized VAG signals were cross-validated by a clinician. The artificial signals we synthesized have the potential to form the basis of a learning system for medical students, interns and paramedical personnel for the diagnosis of OA. Therefore, our method provides a feasible way to evaluate crepitus sounds that may assist in the diagnosis of knee OA.
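    A sketch of the template-synthesis step, assuming equal-length sound segments from one KL grade: decompose each with the Haar wavelet, take the median (or mean) of each coefficient across segments, and inverse-transform. The decomposition level and segment handling are illustrative choices, not the authors' exact settings.

```python
import numpy as np
import pywt

def synthesize_template(signals, wavelet='haar', level=5, stat=np.median):
    """Synthesize a crepitus template for one KL grade.

    signals: list of equal-length sound segments from the same KL category.
    Each is decomposed with the Haar wavelet; the median (or mean) of each
    coefficient across segments is inverse-transformed to give the template.
    """
    decomps = [pywt.wavedec(s, wavelet, level=level) for s in signals]
    merged = []
    for band in zip(*decomps):                 # same sub-band from every segment
        merged.append(stat(np.vstack(band), axis=0))
    return pywt.waverec(merged, wavelet)
```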

  10. Generation of a high-fidelity antibody against nerve growth factor using library scanning mutagenesis and validation with structures of the initial and optimized Fab-antigen complexes

    PubMed Central

    La Porte, Sherry L; Eigenbrot, Charles; Ultsch, Mark; Ho, Wei-Hsien; Foletti, Davide; Forgie, Alison; Lindquist, Kevin C; Shelton, David L; Pons, Jaume

    2014-01-01

    Nerve growth factor (NGF) is indispensable during normal embryonic development and critical for the amplification of pain signals in adults. Intervention in NGF signaling holds promise for the alleviation of pain resulting from human diseases such as osteoarthritis, cancer and chronic lower back disorders. We developed a fast, high-fidelity method to convert a hybridoma-derived NGF-targeted mouse antibody into a clinical candidate. This method, termed Library Scanning Mutagenesis (LSM), resulted in the ultra-high affinity antibody tanezumab, a first-in-class anti-hyperalgesic specific for an NGF epitope. Functional and structural comparisons between tanezumab and the mouse 911 precursor antibody using neurotrophin-specific cell survival assays and X-ray crystal structures of both Fab-antigen complexes illustrated high fidelity retention of the NGF epitope. These results suggest the potential for wide applicability of the LSM method for optimization of well-characterized antibodies during humanization. PMID:24830649

  11. Acoustic measurement of bubble size and position in a piezo driven inkjet printhead

    NASA Astrophysics Data System (ADS)

    van der Bos, Arjan; Jeurissen, Roger; de Jong, Jos; Stevens, Richard; Versluis, Michel; Reinten, Hans; van den Berg, Marc; Wijshoff, Herman; Lohse, Detlef

    2008-11-01

    A bubble can be entrained in the ink channel of a piezo-driven inkjet printhead, where it grows by rectified diffusion. If large enough, the bubble counteracts the pressure buildup at the nozzle, resulting in nozzle failure. Here an acoustic sizing method for the volume and position of the bubble is presented. The bubble response is detected by the piezo actuator itself, operating in a sensor mode. The method used to determine the volume and position of the bubble is based on a linear model in which the interaction between the bubble and the channel is included. This model predicts the acoustic signal for a given position and volume of the bubble. The inverse problem is to infer the position and volume of the bubble from the measured acoustic signal. By solving it, we can thus acoustically measure the size and position of the bubble. The validity of the presented method is supported by time-resolved optical observations of the dynamics of the bubble within an optically accessible ink-jet channel.

  12. Hierarchical clustering method for improved prostate cancer imaging in diffuse optical tomography

    NASA Astrophysics Data System (ADS)

    Kavuri, Venkaiah C.; Liu, Hanli

    2013-03-01

    We investigate the feasibility of transrectal near-infrared (NIR) diffuse optical tomography (DOT) for early detection of prostate cancer using a transrectal ultrasound (TRUS) compatible imaging probe. For this purpose, we designed a TRUS-compatible, NIR-based imaging system (780 nm), in which the photodiodes were placed on the transrectal probe. DC signals were recorded and used for estimating the absorption coefficient. We validated the system using laboratory phantoms. For further improvement, we also developed a hierarchical clustering method (HCM) to improve the accuracy of image reconstruction with limited prior information. We demonstrated the method using computer simulations and laboratory phantom experiments.

  13. An expandable crosstalk reduction method for inline fiber Fabry-Pérot sensor array based on fiber Bragg gratings

    NASA Astrophysics Data System (ADS)

    Jiang, Peng; Ma, Lina; Hu, Zhengliang; Hu, Yongming

    2016-07-01

    The inline time division multiplexing (TDM) fiber Fabry-Pérot (FFP) sensor array based on fiber Bragg gratings (FBGs) is attractive for many applications, but the intrinsic multi-reflection (MR)-induced crosstalk limits its use, especially in applications needing high resolution. In this paper we propose an expandable method for MR-induced crosstalk reduction. The method is based on complexing-exponent synthesis using the phase-generated carrier (PGC) scheme and the special common character of the impulse responses. The method improves demodulation stability simultaneously with the reduction of MR-induced crosstalk. A polarization-maintaining 3-TDM experimental system with an FBG reflectivity of about 5% was set up to validate the method. The experimental results showed that crosstalk reductions of 13 dB and 15 dB were achieved for sensor 2 and sensor 3, respectively, when a signal was applied to the first sensor, and a crosstalk reduction of 8 dB was achieved for sensor 3 when a signal was applied to sensor 2. The demodulation stability of the applied signal was improved as well: the standard deviations of the amplitude distributions of the demodulated signals were reduced from 0.0046 to 0.0021 for sensor 2 and from 0.0114 to 0.0044 for sensor 3. Because of the convenience of the linear operation of the complexing-exponent, and according to the common character of the impulse response we found, the method can be effectively extended to arrays with more TDM channels once the impulse response of the inline FFP sensor array with more TDM channels is derived. It offers the potential to develop a low-crosstalk inline FFP sensor array using the PGC interrogation technique with relatively high-reflectivity FBGs, which can guarantee sufficient light power at the photo-detector.

  14. WE-DE-207B-04: Quantitative Contrast-Enhanced Spectral Mammography Based On Photon-Counting Detectors: A Feasibility Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ding, H; Zhou, B; Beidokhti, D

    Purpose: To investigate the feasibility of accurate quantification of iodine mass thickness in contrast-enhanced spectral mammography. Methods: Experimental phantom studies were performed on a spectral mammography system based on Si strip photon-counting detectors. Dual-energy images were acquired using 40 kVp and a splitting energy of 34 keV with 3 mm Al pre-filtration. The initial calibration was done with glandular and adipose tissue equivalent phantoms of uniform thicknesses and iodine disk phantoms of various concentrations. A secondary calibration was carried out using the iodine signal obtained from the dual-energy decomposed images and the known background phantom thicknesses and densities. The iodine signal quantification method was validated using phantoms composed of a mixture of glandular and adipose materials, for various breast thicknesses and densities. Finally, the traditional dual-energy weighted subtraction method was also studied as a comparison. The measured iodine signal from both methods was compared to the known iodine concentrations of the disk phantoms to characterize the quantification accuracy. Results: There was good agreement between the iodine mass thicknesses measured using the proposed method and the known values. The root-mean-square (RMS) error was estimated to be 0.2 mg/cm2. The traditional weighted subtraction method also predicted a linear correlation between the measured signal and the known iodine mass thickness. However, the correlation slope and offset values were strongly dependent on the total breast thickness and density. Conclusion: The results of the current study suggest that iodine mass thickness can be accurately quantified with contrast-enhanced spectral mammography. The quantitative information can potentially improve the differentiation between benign and malignant lesions. Grant funding from Philips Medical Systems.

  15. High accuracy differential pressure measurements using fluid-filled catheters - A feasibility study in compliant tubes.

    PubMed

    Rotman, Oren Moshe; Weiss, Dar; Zaretsky, Uri; Shitzer, Avraham; Einav, Shmuel

    2015-09-18

    High accuracy differential pressure measurements are required in various biomedical and medical applications, such as in fluid-dynamic test systems or in the cath-lab. Differential pressure measurements using fluid-filled catheters are relatively inexpensive, yet may be subject to common mode pressure (CMP) errors, which can significantly reduce measurement accuracy. Recently, a novel correction method for high accuracy differential pressure measurements was presented and was shown to effectively remove CMP distortions from measurements acquired in rigid tubes. The purpose of the present study was to test the feasibility of this correction method inside compliant tubes, which effectively simulate arteries. Two tubes with varying compliance were tested under dynamic flow and pressure conditions to cover the physiological range of radial distensibility in coronary arteries. A third, compliant model with a 70% stenosis severity was additionally tested. Differential pressure measurements were acquired over a 3 cm tube length using a fluid-filled double-lumen catheter and were corrected using the proposed CMP correction method. Validation of the corrected differential pressure signals was performed by comparison to differential pressure recordings taken via a direct connection to the compliant tubes, and by comparison to the predicted differential pressure readings of matching fluid-structure interaction (FSI) computational simulations. The results show excellent agreement between the experimentally acquired and computationally determined differential pressure signals. This validates the application of the CMP correction method in compliant tubes across the physiological range, up to an intermediate stenosis severity of 70%. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. Method to calibrate phase fluctuation in polarization-sensitive swept-source optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Lu, Zenghai; Kasaragod, Deepa K.; Matcher, Stephen J.

    2011-07-01

    We present a phase fluctuation calibration method for polarization-sensitive swept-source optical coherence tomography (PS-SS-OCT) using continuous polarization modulation. The method uses a low-voltage broadband polarization modulator driven by a synchronized sinusoidal burst waveform rather than an asynchronous waveform, together with the removal of the global phases of the measured Jones matrices by the use of matrix normalization. This makes it possible to average the measured Jones matrices to remove the artifact due to the speckle noise of the signal in the sample without introducing auxiliary optical components into the sample arm. This method was validated on measurements of an equine tendon sample by the PS-SS-OCT system.

  17. The generalized Morse wavelet method to determine refractive index dispersion of dielectric films

    NASA Astrophysics Data System (ADS)

    Kocahan, Özlem; Özcan, Seçkin; Coşkun, Emre; Özder, Serhat

    2017-04-01

    The continuous wavelet transform (CWT) method is a useful tool for determining the refractive index dispersion of dielectric films. Mother wavelet selection is an important factor for the accuracy of the results when using the CWT. In this study, the generalized Morse wavelet (GMW) is proposed as the mother wavelet because it has two degrees of freedom. Simulation studies based on error calculations and Cauchy coefficient comparisons are presented, and a noisy signal was also tested with the CWT method using the GMW. The experimental validity of the method was checked using a D263 T Schott glass with a thickness of 100 μm, and the results were compared with the catalog values.

  18. An SVM-Based Classifier for Estimating the State of Various Rotating Components in Agro-Industrial Machinery with a Vibration Signal Acquired from a Single Point on the Machine Chassis

    PubMed Central

    Ruiz-Gonzalez, Ruben; Gomez-Gil, Jaime; Gomez-Gil, Francisco Javier; Martínez-Martínez, Víctor

    2014-01-01

    The goal of this article is to assess the feasibility of estimating the state of various rotating components in agro-industrial machinery by employing just one vibration signal acquired from a single point on the machine chassis. To do so, a Support Vector Machine (SVM)-based system is employed. Experimental tests evaluated this system by acquiring vibration data from a single point of an agricultural harvester, while varying several of its working conditions. The whole process included two major steps. Initially, the vibration data were preprocessed through twelve feature extraction algorithms, after which the Exhaustive Search method selected the most suitable features. Secondly, the SVM-based system accuracy was evaluated by using Leave-One-Out cross-validation, with the selected features as the input data. The results of this study provide evidence that (i) accurate estimation of the status of various rotating components in agro-industrial machinery is possible by processing the vibration signal acquired from a single point on the machine structure; (ii) the vibration signal can be acquired with a uniaxial accelerometer, the orientation of which does not significantly affect the classification accuracy; and, (iii) when using an SVM classifier, an 85% mean cross-validation accuracy can be reached, which only requires a maximum of seven features as its input, and no significant improvements are noted between the use of either nonlinear or linear kernels. PMID:25372618

  19. An SVM-based classifier for estimating the state of various rotating components in agro-industrial machinery with a vibration signal acquired from a single point on the machine chassis.

    PubMed

    Ruiz-Gonzalez, Ruben; Gomez-Gil, Jaime; Gomez-Gil, Francisco Javier; Martínez-Martínez, Víctor

    2014-11-03

    The goal of this article is to assess the feasibility of estimating the state of various rotating components in agro-industrial machinery by employing just one vibration signal acquired from a single point on the machine chassis. To do so, a Support Vector Machine (SVM)-based system is employed. Experimental tests evaluated this system by acquiring vibration data from a single point of an agricultural harvester, while varying several of its working conditions. The whole process included two major steps. Initially, the vibration data were preprocessed through twelve feature extraction algorithms, after which the Exhaustive Search method selected the most suitable features. Secondly, the SVM-based system accuracy was evaluated by using Leave-One-Out cross-validation, with the selected features as the input data. The results of this study provide evidence that (i) accurate estimation of the status of various rotating components in agro-industrial machinery is possible by processing the vibration signal acquired from a single point on the machine structure; (ii) the vibration signal can be acquired with a uniaxial accelerometer, the orientation of which does not significantly affect the classification accuracy; and, (iii) when using an SVM classifier, an 85% mean cross-validation accuracy can be reached, which only requires a maximum of seven features as its input, and no significant improvements are noted between the use of either nonlinear or linear kernels.

  20. A latent low-dimensional common input drives a pool of motor neurons: a probabilistic latent state-space model.

    PubMed

    Feeney, Daniel F; Meyer, François G; Noone, Nicholas; Enoka, Roger M

    2017-10-01

    Motor neurons appear to be activated with a common input signal that modulates the discharge activity of all neurons in the motor nucleus. It has proven difficult for neurophysiologists to quantify the variability in a common input signal, but characterization of such a signal may improve our understanding of how the activation signal varies across motor tasks. Contemporary methods of quantifying the common input to motor neurons rely on compiling discrete action potentials into continuous time series, assuming the motor pool acts as a linear filter, and requiring signals to be of sufficient duration for frequency analysis. We introduce a state-space model in which the discharge activity of motor neurons is modeled as inhomogeneous Poisson processes and propose a method to quantify an abstract latent trajectory that represents the common input received by motor neurons. The approach also approximates the variation in synaptic noise in the common input signal. The model is validated with four data sets: a simulation of 120 motor units, a pair of integrate-and-fire neurons with a Renshaw cell providing inhibitory feedback, the discharge activity of 10 integrate-and-fire neurons, and the discharge times of concurrently active motor units during an isometric voluntary contraction. The simulations revealed that a latent state-space model is able to quantify the trajectory and variability of the common input signal across all four conditions. When compared with the cumulative spike train method of characterizing common input, the state-space approach was more sensitive to the details of the common input current and was less influenced by the duration of the signal. The state-space approach appears to be capable of detecting rather modest changes in common input signals across conditions. NEW & NOTEWORTHY We propose a state-space model that explicitly delineates a common input signal sent to motor neurons and the physiological noise inherent in synaptic signal transmission. This is the first application of a deterministic state-space model to represent the discharge characteristics of motor units during voluntary contractions. Copyright © 2017 the American Physiological Society.

  1. Simulation of fMRI signals to validate dynamic causal modeling estimation

    NASA Astrophysics Data System (ADS)

    Anandwala, Mobin; Siadat, Mohamad-Reza; Hadi, Shamil M.

    2012-03-01

    During cognitive tasks, certain brain areas are activated and receive increased blood flow. This is modeled through a state system consisting of two separate parts: one that deals with the neural node stimulation and one that deals with the blood response during that stimulation. The rationale behind using this state system is to validate existing analysis methods such as DCM and to see what levels of noise they can handle. Using the forward Euler method, this system was approximated by a series of difference equations. The result was the hemodynamic response for each brain area, which was used to test an analysis tool that estimates functional connectivity between brain areas under a given amount of noise. The importance of modeling this system is not only to have a model of the neural response but also to provide a basis for comparison with actual data obtained through functional imaging scans.
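    A sketch of the kind of two-part state system described above, integrated with the forward Euler method. The equations are the standard balloon/Windkessel hemodynamic model used in DCM, with typical literature parameter values rather than the ones used in the cited simulation.

```python
import numpy as np

def simulate_bold(u, dt=0.1, kappa=0.65, gamma=0.41, tau=0.98,
                  alpha=0.32, rho=0.34, v0=0.04):
    """Forward-Euler simulation of one region's hemodynamic response.

    u: neural input time series (e.g. a boxcar for the cognitive task).
    Standard balloon/Windkessel equations with typical literature defaults;
    not the exact parameter values of the cited simulation.
    """
    s, f, v, q = 0.0, 1.0, 1.0, 1.0        # vasodilatory signal, flow, volume, dHb
    k1, k2, k3 = 7.0 * rho, 2.0, 2.0 * rho - 0.2
    bold = np.zeros(len(u))
    for t, ut in enumerate(u):
        ds = ut - kappa * s - gamma * (f - 1.0)
        df = s
        dv = (f - v ** (1.0 / alpha)) / tau
        dq = (f * (1.0 - (1.0 - rho) ** (1.0 / f)) / rho
              - v ** (1.0 / alpha) * q / v) / tau
        s, f, v, q = s + dt * ds, f + dt * df, v + dt * dv, q + dt * dq
        bold[t] = v0 * (k1 * (1 - q) + k2 * (1 - q / v) + k3 * (1 - v))
    return bold

# Example: 10 s of rest, 20 s of stimulation, 30 s of rest (dt = 0.1 s)
response = simulate_bold(np.r_[np.zeros(100), 0.5 * np.ones(200), np.zeros(300)])
```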

  2. A Noise Reduction Method for Dual-Mass Micro-Electromechanical Gyroscopes Based on Sample Entropy Empirical Mode Decomposition and Time-Frequency Peak Filtering

    PubMed Central

    Shen, Chong; Li, Jie; Zhang, Xiaoming; Shi, Yunbo; Tang, Jun; Cao, Huiliang; Liu, Jun

    2016-01-01

    The different noise components in a dual-mass micro-electromechanical system (MEMS) gyroscope structure are analyzed in this paper, including mechanical-thermal noise (MTN), electronic-thermal noise (ETN), flicker noise (FN) and Coriolis signal in-phase noise (IPN). The structure-equivalent electronic model is established, and an improved white Gaussian noise reduction method for dual-mass MEMS gyroscopes is proposed which is based on sample entropy empirical mode decomposition (SEEMD) and time-frequency peak filtering (TFPF). There is a contradiction in TFPF: selecting a short window length may lead to good preservation of signal amplitude but poor random noise reduction, whereas selecting a long window length may lead to serious attenuation of the signal amplitude but effective random noise reduction. In order to achieve a good tradeoff between valid signal amplitude preservation and random noise reduction, SEEMD is adopted to improve TFPF. Firstly, the original signal is decomposed into intrinsic mode functions (IMFs) by EMD, and the sample entropy (SE) of each IMF is calculated in order to classify the numerous IMFs into three different components; then short-window TFPF is employed for the low-frequency component of the IMFs, long-window TFPF is employed for the high-frequency component of the IMFs, and the noise component of the IMFs is discarded directly; finally, the denoised signal is obtained after reconstruction. Rotation and temperature experiments were carried out to verify the proposed SEEMD-TFPF algorithm; the verification and comparison results show that the denoising performance of SEEMD-TFPF is better than that achievable with the traditional wavelet, Kalman filter and fixed-window-length TFPF methods. PMID:27258276

  3. A Noise Reduction Method for Dual-Mass Micro-Electromechanical Gyroscopes Based on Sample Entropy Empirical Mode Decomposition and Time-Frequency Peak Filtering.

    PubMed

    Shen, Chong; Li, Jie; Zhang, Xiaoming; Shi, Yunbo; Tang, Jun; Cao, Huiliang; Liu, Jun

    2016-05-31

    The different noise components in a dual-mass micro-electromechanical system (MEMS) gyroscope structure are analyzed in this paper, including mechanical-thermal noise (MTN), electronic-thermal noise (ETN), flicker noise (FN) and Coriolis signal in-phase noise (IPN). The structure's equivalent electronic model is established, and an improved white Gaussian noise reduction method for dual-mass MEMS gyroscopes is proposed, based on sample entropy empirical mode decomposition (SEEMD) and time-frequency peak filtering (TFPF). There is a trade-off inherent in TFPF: selecting a short window length preserves the signal amplitude well but reduces random noise poorly, whereas selecting a long window length reduces random noise effectively but seriously attenuates the signal amplitude. In order to achieve a good tradeoff between valid signal amplitude preservation and random noise reduction, SEEMD is adopted to improve TFPF. First, the original signal is decomposed into intrinsic mode functions (IMFs) by EMD, and the sample entropy (SE) of each IMF is calculated in order to classify the IMFs into three different components; then short-window TFPF is applied to the low-frequency IMF components, long-window TFPF is applied to the high-frequency IMF components, and the noise components are discarded directly; finally, the de-noised signal is obtained after reconstruction. Rotation and temperature experiments were carried out to verify the proposed SEEMD-TFPF algorithm; the verification and comparison results show that the de-noising performance of SEEMD-TFPF is better than that achievable with traditional wavelet, Kalman filter and fixed-window-length TFPF methods.

  4. PARTICLE FILTERING WITH SEQUENTIAL PARAMETER LEARNING FOR NONLINEAR BOLD fMRI SIGNALS.

    PubMed

    Xia, Jing; Wang, Michelle Yongmei

    Analyzing the blood oxygenation level dependent (BOLD) effect in functional magnetic resonance imaging (fMRI) is typically based on recent ground-breaking time series analysis techniques. This work represents a significant improvement over existing approaches to system identification using nonlinear hemodynamic models. It is important for three reasons. First, instead of using linearized approximations of the dynamics, we present a nonlinear filtering approach based on the sequential Monte Carlo method to capture the inherent nonlinearities in the physiological system. Second, we simultaneously estimate the hidden physiological states and the system parameters through particle filtering with sequential parameter learning to fully take advantage of the dynamic information in the BOLD signals. Third, during the learning of the unknown static parameters, we employ low-dimensional sufficient statistics for efficiency and to avoid potential degeneration of the parameters. The performance of the proposed method is validated using both simulated data and real BOLD fMRI data.
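    A bare-bones bootstrap particle filter in the same spirit is sketched below for a generic one-dimensional nonlinear state-space model; the unknown static parameter is appended to each particle and jittered, a common simplification of sequential parameter learning (the paper's sufficient-statistics scheme and the hemodynamic model itself are not reproduced).

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def particle_filter(y, n_particles=500, q=0.1, r=0.2):
        """Bootstrap particle filter for x_t = theta * f(x_{t-1}) + noise, y_t = x_t + noise.

        theta (a scalar gain here) is carried along with each particle and jittered,
        a simple stand-in for full sequential parameter learning.
        """
        T = len(y)
        x = rng.normal(0.0, 1.0, n_particles)          # hidden-state particles
        theta = rng.uniform(0.5, 1.5, n_particles)     # unknown static parameter
        x_hat, theta_hat = np.zeros(T), np.zeros(T)
        for t in range(T):
            theta = theta + rng.normal(0.0, 0.01, n_particles)        # parameter jitter
            x = theta * np.tanh(x) + rng.normal(0.0, q, n_particles)  # placeholder nonlinear transition
            w = np.exp(-0.5 * ((y[t] - x) / r) ** 2)                  # Gaussian measurement likelihood
            w /= w.sum()
            idx = rng.choice(n_particles, n_particles, p=w)           # multinomial resampling
            x, theta = x[idx], theta[idx]
            x_hat[t], theta_hat[t] = x.mean(), theta.mean()
        return x_hat, theta_hat
    ```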

  5. BCI Competition IV – Data Set I: Learning Discriminative Patterns for Self-Paced EEG-Based Motor Imagery Detection

    PubMed Central

    Zhang, Haihong; Guan, Cuntai; Ang, Kai Keng; Wang, Chuanchu

    2012-01-01

    Detecting motor imagery activities versus non-control states in brain signals is the basis of self-paced brain-computer interfaces (BCIs), but it also poses a considerable challenge to signal processing due to the complex and non-stationary characteristics of both motor imagery and non-control states. This paper presents a self-paced BCI based on a robust learning mechanism that extracts and selects spatio-spectral features for differentiating multiple EEG classes. It also employs a non-linear regression and post-processing technique for predicting the time series of class labels from the spatio-spectral features. The method was validated in BCI Competition IV on Dataset I, where it produced the lowest prediction error for the continuous class labels. This report also presents and discusses an analysis of the method using the competition data set. PMID:22347153

  6. Coupling analysis of high Q resonators in add-drop configuration through cavity ringdown spectroscopy

    NASA Astrophysics Data System (ADS)

    Frigenti, G.; Arjmand, M.; Barucci, A.; Baldini, F.; Berneschi, S.; Farnesi, D.; Gianfreda, M.; Pelli, S.; Soria, S.; Aray, A.; Dumeige, Y.; Féron, P.; Nunzi Conti, G.

    2018-06-01

    An original method able to fully characterize high-Q resonators in an add-drop configuration has been implemented. The method is based on the study of two cavity ringdown (CRD) signals, which are produced at the transmission and drop ports by wavelength sweeping a resonance in a time interval comparable with the photon cavity lifetime. All the resonator parameters can be assessed with a single set of simultaneous measurements. We first developed a model describing the two CRD output signals and a fitting program able to deduce the key parameters from the measured profiles. We successfully validated the model with an experiment based on a fiber ring resonator of known characteristics. Finally, we characterized a high-Q, home-made, MgF2 whispering gallery mode disk resonator in the add-drop configuration, assessing its intrinsic and coupling parameters.

  7. Retrieving the Quantitative Chemical Information at Nanoscale from Scanning Electron Microscope Energy Dispersive X-ray Measurements by Machine Learning

    NASA Astrophysics Data System (ADS)

    Jany, B. R.; Janas, A.; Krok, F.

    2017-11-01

    The quantitative composition of metal alloy nanowires on an InSb(001) semiconductor surface and of gold nanostructures on a germanium surface is determined by the blind source separation (BSS) machine learning (ML) method using non-negative matrix factorization (NMF) from energy dispersive X-ray spectroscopy (EDX) spectrum image maps measured in a scanning electron microscope (SEM). The BSS method blindly decomposes the collected EDX spectrum image into three source components, which correspond directly to the X-ray signals coming from the supported metal nanostructures, the bulk semiconductor signal and the carbon background. The recovered quantitative composition is validated by detailed Monte Carlo simulations and is confirmed by separate cross-sectional TEM EDX measurements of the nanostructures. This shows that SEM EDX measurements together with machine learning blind source separation processing can be successfully used for quantitative determination of the chemical composition of nanostructures.
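    A minimal sketch of this kind of blind source separation with scikit-learn's NMF, assuming a hypothetical EDX spectrum image unfolded to a pixels-by-channels count matrix; the array sizes and the random test data are placeholders.

    ```python
    import numpy as np
    from sklearn.decomposition import NMF

    # Hypothetical EDX spectrum image: ny x nx pixels, each with n_channels X-ray counts.
    ny, nx, n_channels = 64, 64, 1024
    cube = np.random.poisson(1.0, size=(ny, nx, n_channels)).astype(float)

    # Unfold to (pixels x channels); non-negativity is natural for count data.
    X = cube.reshape(ny * nx, n_channels)

    # Blind decomposition into three components (nanostructure, bulk and carbon
    # background in the paper's case); the number of components is an analyst's choice.
    model = NMF(n_components=3, init="nndsvda", max_iter=500, random_state=0)
    abundances = model.fit_transform(X)              # (pixels x 3) mixing/abundance maps
    spectra = model.components_                      # (3 x channels) endmember spectra

    abundance_maps = abundances.reshape(ny, nx, 3)   # fold back into spatial maps
    ```

    In practice the number of components is usually chosen by inspecting reconstruction error or the physical interpretability of the recovered spectra rather than fixed in advance.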

  8. Method and Apparatus for the Portable Identification of Material Thickness and Defects Using Spatially Controlled Heat Application

    NASA Technical Reports Server (NTRS)

    Cramer, K. Elliott (Inventor); Winfree, William P. (Inventor)

    1999-01-01

    A method and a portable apparatus for the nondestructive identification of defects in structures. The apparatus comprises a heat source and a thermal imager that move at a constant speed past a test surface of a structure. The thermal imager is offset at a predetermined distance from the heat source. The heat source induces a constant surface temperature. The imager follows the heat source and produces a video image of the thermal characteristics of the test surface. Material defects produce deviations from the constant surface temperature that move at the inverse of the constant speed, whereas thermal noise produces deviations that move at random speed. Computer averaging of the digitized thermal image data with respect to the constant speed minimizes noise and improves the signal of valid defects. The motion of the thermographic equipment coupled with the high signal-to-noise ratio renders it suitable for portable, on-site analysis.

  9. A fault isolation method based on the incidence matrix of an augmented system

    NASA Astrophysics Data System (ADS)

    Chen, Changxiong; Chen, Liping; Ding, Jianwan; Wu, Yizhong

    2018-03-01

    In this paper, a new approach is proposed for isolating faults and quickly identifying the redundant sensors of a system. By introducing fault signals as additional state variables, an augmented system model is constructed from the original system model, the fault signals and the sensor measurement equations. The structural properties of the augmented system model are provided. From the viewpoint of evaluating the fault variables, the computational dependencies among the fault variables in the system can be found, which imply the fault isolation properties of the system. Compared with previous isolation approaches, the highlights of the new approach are that it can quickly find the faults that can be isolated using exclusive residuals and, at the same time, identify the redundant sensors in the system, which is useful for the design of a diagnosis system. The simulation of a four-tank system is reported to validate the proposed method.

  10. Calculation of susceptibility through multiple orientation sampling (COSMOS): a method for conditioning the inverse problem from measured magnetic field map to susceptibility source image in MRI.

    PubMed

    Liu, Tian; Spincemaille, Pascal; de Rochefort, Ludovic; Kressler, Bryan; Wang, Yi

    2009-01-01

    Magnetic susceptibility differs among tissues based on their contents of iron, calcium, contrast agent, and other molecular compositions. Susceptibility modifies the magnetic field detected in the MR signal phase. The determination of an arbitrary susceptibility distribution from the induced field shifts is a challenging, ill-posed inverse problem. A method called "calculation of susceptibility through multiple orientation sampling" (COSMOS) is proposed to stabilize this inverse problem. The field created by the susceptibility distribution is sampled at multiple orientations with respect to the polarization field, B0, and the susceptibility map is reconstructed by weighted linear least squares to account for field noise and the signal void region. Numerical simulations and phantom and in vitro imaging validations demonstrated that COSMOS is a stable and precise approach to quantify a susceptibility distribution using MRI.
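    A simplified, unweighted version of the multi-orientation inversion can be sketched as follows: for each orientation the Fourier-domain dipole kernel is D(k) = 1/3 - (k·b)^2/|k|^2, with b the unit B0 direction in the object frame, and the susceptibility spectrum is obtained by least squares across orientations at each k-point. The full method additionally applies noise weighting and handles signal-void regions, which this sketch omits.

    ```python
    import numpy as np

    def dipole_kernel(shape, b0_dir):
        """Fourier-domain dipole kernel D(k) = 1/3 - (k.b0)^2 / |k|^2 for one orientation."""
        kz, ky, kx = np.meshgrid(*[np.fft.fftfreq(n) for n in shape], indexing="ij")
        k = np.stack([kz, ky, kx])
        b = np.asarray(b0_dir, dtype=float)
        b = b / np.linalg.norm(b)
        k2 = (k ** 2).sum(axis=0)
        k2[0, 0, 0] = np.inf                       # keep the k = 0 term out of the division
        D = 1.0 / 3.0 - (b[:, None, None, None] * k).sum(axis=0) ** 2 / k2
        D[0, 0, 0] = 0.0                           # undefined at the origin; set to zero
        return D

    def cosmos_unweighted(field_maps, b0_dirs):
        """chi(k) = sum_i D_i(k) F{f_i}(k) / sum_i D_i(k)^2 (per-k-point least squares)."""
        num, den = 0.0, 0.0
        for f, b in zip(field_maps, b0_dirs):
            D = dipole_kernel(f.shape, b)
            num = num + D * np.fft.fftn(f)
            den = den + D ** 2
        chi_k = num / np.maximum(den, 1e-8)
        return np.real(np.fft.ifftn(chi_k))
    ```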

  11. Iterative generalized time-frequency reassignment for planetary gearbox fault diagnosis under nonstationary conditions

    NASA Astrophysics Data System (ADS)

    Chen, Xiaowang; Feng, Zhipeng

    2016-12-01

    Planetary gearboxes are widely used in many sorts of machinery for their large transmission ratio and high load-bearing capacity in a compact structure. Their fault diagnosis relies on effective identification of fault characteristic frequencies. However, in addition to the vibration complexity caused by intricate mechanical kinematics, volatile external conditions result in time-varying running speed and/or load, and therefore nonstationary vibration signals. This usually leads to time-varying complex fault characteristics and adds difficulty to planetary gearbox fault diagnosis. Time-frequency analysis is an effective approach to extracting the frequency components of nonstationary signals and their variation in time. Nevertheless, the commonly used time-frequency analysis methods suffer from poor time-frequency resolution as well as outer and inner interferences, which hinder accurate identification of time-varying fault characteristic frequencies. Although time-frequency reassignment improves the time-frequency readability, it is essentially subject to the constraints of mono-component signals and of time-frequency distributions that are symmetric about the true instantaneous frequency. Hence, it is still susceptible to erroneous energy reallocation, or even generates pseudo interferences, particularly for multi-component signals of highly nonlinear instantaneous frequency. In this paper, to overcome the limitations of time-frequency reassignment, we propose an improvement with fine time-frequency resolution and free from interferences for highly nonstationary multi-component signals, by exploiting the merits of iterative generalized demodulation. The signal is first decomposed into mono-components of constant frequency by iterative generalized demodulation. Time-frequency reassignment is then applied to each generalized demodulated mono-component, obtaining a fine time-frequency distribution. Finally, the time-frequency distribution of each signal component is restored and superposed to obtain the time-frequency distribution of the original signal. The proposed method is validated using both numerically simulated and lab experimental planetary gearbox vibration signals. The time-varying gear fault symptoms are successfully extracted, showing the effectiveness of the proposed iterative generalized time-frequency reassignment method in planetary gearbox fault diagnosis under nonstationary conditions.
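    The core generalized-demodulation step mentioned above can be written compactly: a component whose (estimated) instantaneous frequency follows f_inst(t) is mapped to a constant frequency f0 by multiplying the signal with a conjugate phase function. Estimating f_inst and the subsequent reassignment step are not shown; the function below is only an illustrative fragment.

    ```python
    import numpy as np

    def generalized_demodulate(x, fs, f_inst, f0=0.0):
        """Map a component with time-varying frequency f_inst(t) to constant frequency f0.

        x      : signal samples (real or analytic)
        fs     : sampling rate in Hz
        f_inst : estimated instantaneous frequency law of the target component, in Hz
        """
        phase = 2.0 * np.pi * np.cumsum(np.asarray(f_inst, dtype=float) - f0) / fs
        return np.asarray(x) * np.exp(-1j * phase)   # demodulated (complex-valued) signal
    ```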

  12. A wavelet-based ECG delineation algorithm for 32-bit integer online processing.

    PubMed

    Di Marco, Luigi Y; Chiari, Lorenzo

    2011-04-03

    Since the first well-known electrocardiogram (ECG) delineator based on the Wavelet Transform (WT), presented by Li et al. in 1995, a significant research effort has been devoted to the exploitation of this promising method. Its ability to reliably delineate the major waveform components (mono- or bi-phasic P wave, QRS, and mono- or bi-phasic T wave) would make it a suitable candidate for efficient online processing of ambulatory ECG signals. Unfortunately, previous implementations of this method adopt non-linear operators such as the root mean square (RMS) or floating-point algebra, which are computationally demanding. This paper presents a 32-bit integer, linear-algebra approach to online QRS detection and P-QRS-T wave delineation of a single-lead ECG signal, based on the WT. The QRS detector performance was validated on the MIT-BIH Arrhythmia Database (sensitivity Se = 99.77%, positive predictive value P+ = 99.86%, on 109010 annotated beats) and on the European ST-T Database (Se = 99.81%, P+ = 99.56%, on 788050 annotated beats). The ECG delineator was validated on the QT Database, showing a mean error between manual and automatic annotation below 1.5 samples for all fiducial points: P-onset, P-peak, P-offset, QRS-onset, QRS-offset, T-peak, T-offset, and a mean standard deviation comparable to other established methods. The proposed algorithm exhibits reliable QRS detection as well as accurate ECG delineation, in spite of a simple structure built on integer linear algebra.

  13. Analysis of calibration-free wavelength-scanned wavelength modulation spectroscopy for practical gas sensing using tunable diode lasers

    NASA Astrophysics Data System (ADS)

    Sun, K.; Chao, X.; Sur, R.; Goldenstein, C. S.; Jeffries, J. B.; Hanson, R. K.

    2013-12-01

    A novel strategy has been developed for the analysis of wavelength-scanned wavelength modulation spectroscopy (WMS) with tunable diode lasers (TDLs). The method simulates WMS signals and compares them with measurements to determine gas properties (e.g., temperature, pressure and concentration of the absorbing species). Injection-current-tuned TDLs have simultaneous wavelength and intensity variation, which severely complicates the Fourier expansion of the simulated WMS signal into harmonics of the modulation frequency (fm). The new method differs from previous WMS analysis strategies in two significant ways: (1) the measured laser intensity is used to simulate the transmitted laser intensity and (2) digital lock-in and low-pass filter software is used to expand both simulated and measured transmitted laser intensities into harmonics of the modulation frequency, WMS-nfm (n = 1, 2, 3, ...), avoiding the need for an analytic model of intensity modulation or a Fourier expansion of the simulated WMS harmonics. This analysis scheme is valid at any optical depth, any modulation index, and all values of scanned-laser wavelength. The method is demonstrated and validated with WMS of H2O dilute in air (1 atm, 296 K, near 1392 nm). WMS-nfm harmonics for n = 1 to 6 are extracted, and the simulation and measurements are found to be in good agreement over the entire WMS lineshape. The use of 1f-normalization strategies to realize calibration-free wavelength-scanned WMS is also discussed.
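    The digital lock-in plus low-pass step applied to both measured and simulated intensities can be sketched as follows; the filter order, cutoff and sampling parameters are illustrative choices, not the values used by the authors.

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt

    def wms_harmonic(intensity, fs, fm, n=2, cutoff=None):
        """Extract the magnitude of the n-th WMS harmonic with a software lock-in.

        intensity : transmitted (or simulated) laser intensity samples
        fs        : sampling rate, Hz
        fm        : modulation frequency, Hz
        n         : harmonic order (1f, 2f, ...)
        """
        t = np.arange(len(intensity)) / fs
        ref_x = np.cos(2 * np.pi * n * fm * t)      # in-phase reference
        ref_y = np.sin(2 * np.pi * n * fm * t)      # quadrature reference
        cutoff = cutoff or fm / 2                   # low-pass well below fm
        b, a = butter(4, cutoff / (fs / 2))
        X = filtfilt(b, a, intensity * ref_x)
        Y = filtfilt(b, a, intensity * ref_y)
        return np.hypot(X, Y)                       # nf magnitude along the wavelength scan
    ```

    Running the identical routine on the measured and the simulated intensity traces is what allows a direct, calibration-free comparison of the resulting harmonics.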

  14. A New Pulse Pileup Rejection Method Based on Position Shift Identification

    NASA Astrophysics Data System (ADS)

    Gu, Z.; Prout, D. L.; Taschereau, R.; Bai, B.; Chatziioannou, A. F.

    2016-02-01

    Pulse pileup events degrade the signal-to-noise ratio (SNR) of nuclear medicine data. When such events occur in multiplexed detectors, they cause spatial mispositioning, energy spectrum distortion and degraded timing resolution, which lead to image artifacts. Pulse pileup is pronounced in PETbox4, a bench-top PET scanner dedicated to high-sensitivity and high-resolution imaging of mice. In that system, the combination of high absolute sensitivity, long scintillator decay time (BGO) and highly multiplexed electronics leads to a significant fraction of pulse pileup, which is reached at lower total activity than in comparable instruments. In this manuscript, a new pulse pileup rejection method named position shift rejection (PSR) is introduced. The performance of PSR is compared with a conventional leading edge rejection (LER) method and with no pileup rejection implemented (NoPR). A comprehensive digital pulse library was developed for objective evaluation and optimization of the PSR and LER methods, in which pulse waveforms were directly recorded from real measurements, exactly representing the signals to be processed. Physical measurements including singles event acquisition, peak system sensitivity and the NEMA NU-4 image quality phantom were also performed on the PETbox4 system to validate and compare the different pulse pileup rejection methods. The evaluation of both physical measurements and model pulse trains demonstrated that the new PSR performs more accurate pileup event identification and avoids erroneous rejection of valid events. For the PETbox4 system, this improvement leads to a significant recovery of sensitivity at low count rates, amounting to about 1/4th of the expected true coincidence events, compared to the LER method. Furthermore, with the implementation of PSR, optimal image quality can be achieved near the peak noise equivalent count rate (NECR).

  15. Detection of driving fatigue by using noncontact EMG and ECG signals measurement system.

    PubMed

    Fu, Rongrong; Wang, Hong

    2014-05-01

    Driver fatigue can be detected by constructing a discriminant model using features obtained from physiological signals. There are two major challenges for this kind of method. One is how to collect physiological signals from subjects while they are driving without any interruption. The other is to find features of physiological signals that change in correspondence with the loss of attention caused by driver fatigue. Driving fatigue is detected based on the study of surface electromyography (EMG) and electrocardiography (ECG) signals recorded during the driving period. A noncontact data acquisition system was used to collect physiological signals from the biceps femoris of each subject to tackle the first challenge. Fast independent component analysis (FastICA) and digital filtering were utilized to process the original signals. Based on the statistical analysis results given by the Kolmogorov-Smirnov Z test, the peak factor of the EMG (p < 0.001) and the maximum of the cross-correlation curve of the EMG and ECG (p < 0.001) were selected as the combined characteristic to detect driver fatigue. The discriminant criterion for fatigue was obtained from the training samples by using the Mahalanobis distance, and the average classification accuracy was then given by 10-fold cross-validation. The results showed that the method proposed in this paper performs well in distinguishing the normal state from the fatigue state. The noncontact, onboard vehicle driver fatigue detection system was developed to reduce fatigue-related risks.
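    The Mahalanobis-distance discriminant described above reduces to a few lines; the two-dimensional feature vector (EMG peak factor and the EMG-ECG cross-correlation maximum), the synthetic training data and the threshold below are all hypothetical stand-ins.

    ```python
    import numpy as np

    def fit_reference(features):
        """Fit the reference (normal-state) distribution: mean vector and inverse covariance."""
        mu = features.mean(axis=0)
        cov = np.cov(features, rowvar=False)
        return mu, np.linalg.inv(cov)

    def mahalanobis(x, mu, cov_inv):
        d = x - mu
        return float(np.sqrt(d @ cov_inv @ d))

    # Hypothetical 2-D features per epoch: [EMG peak factor, max EMG-ECG cross-correlation]
    normal_train = np.random.normal([3.0, 0.4], [0.3, 0.05], size=(50, 2))
    mu, cov_inv = fit_reference(normal_train)

    test_epoch = np.array([4.2, 0.62])
    threshold = 3.0                                  # chosen from training data in practice
    is_fatigued = mahalanobis(test_epoch, mu, cov_inv) > threshold
    ```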

  16. Kinome-wide Decoding of Network-Attacking Mutations Rewiring Cancer Signaling

    PubMed Central

    Creixell, Pau; Schoof, Erwin M.; Simpson, Craig D.; Longden, James; Miller, Chad J.; Lou, Hua Jane; Perryman, Lara; Cox, Thomas R.; Zivanovic, Nevena; Palmeri, Antonio; Wesolowska-Andersen, Agata; Helmer-Citterich, Manuela; Ferkinghoff-Borg, Jesper; Itamochi, Hiroaki; Bodenmiller, Bernd; Erler, Janine T.; Turk, Benjamin E.; Linding, Rune

    2015-01-01

    Cancer cells acquire pathological phenotypes through accumulation of mutations that perturb signaling networks. However, global analysis of these events is currently limited. Here, we identify six types of network-attacking mutations (NAMs), including changes in kinase and SH2 modulation, network rewiring, and the genesis and extinction of phosphorylation sites. We developed a computational platform (ReKINect) to identify NAMs and systematically interpreted the exomes and quantitative (phospho-)proteomes of five ovarian cancer cell lines and the global cancer genome repository. We identified and experimentally validated several NAMs, including PKCγ M501I and PKD1 D665N, which encode specificity switches analogous to the appearance of kinases de novo within the kinome. We discover mutant molecular logic gates, a drift toward phospho-threonine signaling, weakening of phosphorylation motifs, and kinase-inactivating hotspots in cancer. Our method pinpoints functional NAMs, scales with the complexity of cancer genomes and cell signaling, and may enhance our capability to therapeutically target tumor-specific networks. PMID:26388441

  17. Developing a 'personalome' for precision medicine: emerging methods that compute interpretable effect sizes from single-subject transcriptomes.

    PubMed

    Vitali, Francesca; Li, Qike; Schissler, A Grant; Berghout, Joanne; Kenost, Colleen; Lussier, Yves A

    2017-12-18

    The development of computational methods capable of analyzing -omics data at the individual level is critical for the success of precision medicine. Although unprecedented opportunities now exist to gather data on an individual's -omics profile ('personalome'), interpreting and extracting meaningful information from single-subject -omics remain underdeveloped, particularly for quantitative non-sequence measurements, including complete transcriptome or proteome expression and metabolite abundance. Conventional bioinformatics approaches have largely been designed for making population-level inferences about 'average' disease processes; thus, they may not adequately capture and describe individual variability. Novel approaches intended to exploit a variety of -omics data are required for identifying individualized signals for meaningful interpretation. In this review, intended for biomedical researchers, computational biologists and bioinformaticians, we survey emerging computational and translational informatics methods capable of constructing a single subject's 'personalome' for predicting clinical outcomes or therapeutic responses, with an emphasis on methods that provide interpretable readouts. We find that (i) single-subject analytics of the transcriptome shows the greatest development to date and (ii) the methods were all validated in simulations, cross-validations or independent retrospective data sets. This survey uncovers a growing field that offers numerous opportunities for the development of novel validation methods and opens the door for future studies focusing on the interpretation of comprehensive 'personalomes' through the integration of multiple -omics, providing valuable insights into individual patient outcomes and treatments. © The Author 2017. Published by Oxford University Press.

  18. Rolling bearing fault diagnosis and health assessment using EEMD and the adjustment Mahalanobis-Taguchi system

    NASA Astrophysics Data System (ADS)

    Chen, Junxun; Cheng, Longsheng; Yu, Hui; Hu, Shaolin

    2018-01-01

    For the timely identification of the potential faults of a rolling bearing and to observe its health condition intuitively and accurately, a novel fault diagnosis and health assessment model for a rolling bearing based on the ensemble empirical mode decomposition (EEMD) method and the adjustment Mahalanobis-Taguchi system (AMTS) method is proposed. The specific steps are as follows: First, the vibration signal of a rolling bearing is decomposed by EEMD, and the extracted features are used as the input vectors of AMTS. Then, the AMTS method, which is designed to overcome the shortcomings of the traditional Mahalanobis-Taguchi system and to extract the key features, is proposed for fault diagnosis. Finally, a type of HI concept is proposed according to the results of the fault diagnosis to accomplish the health assessment of a bearing over its life cycle. To validate the superiority of the proposed approach, it is compared with other recent methods and is successfully validated on vibration data sets acquired from seeded defects and from an accelerated life test. The results show that this method represents the actual situation well and is able to accurately and effectively identify the fault type.

  19. Transferring Data from Smartwatch to Smartphone through Mechanical Wave Propagation

    PubMed Central

    Kim, Seung-Chan; Lim, Soo-Chul

    2015-01-01

    Inspired by the mechanisms of bone conduction transmission, we present a novel sensor and actuation system that enables a smartwatch to securely communicate with a peripheral touch device, such as a smartphone. Our system regards hand structures as a mechanical waveguide that transmits particular signals through mechanical waves. As a signal, we used high-frequency vibrations (18.0–20.0 kHz) so that users cannot sense the signals either tactually or audibly. To this end, we adopted a commercial surface transducer, originally developed as a bone-conduction actuator, for mechanical signal generation. At the receiver side, a piezoelement was adopted for picking up the transferred mechanical signals. Experimental results have shown that the proposed system can successfully transfer data using mechanical waves. We also validate dual-frequency actuations under which high-frequency signals (18.0–20.0 kHz) are generated along with low-frequency (up to 250 Hz) haptic vibrations. The proposed method has advantages in terms of security in that it does not reveal the signals outside the body, meaning that it is not possible for attackers to eavesdrop on the signals. To further illustrate the possible application spaces, we conclude with explorations of the proposed approach. PMID:26343674

  20. Magneto-acoustic imaging by continuous-wave excitation.

    PubMed

    Shunqi, Zhang; Zhou, Xiaoqing; Tao, Yin; Zhipeng, Liu

    2017-04-01

    The electrical characteristics of tissue yield valuable information for early diagnosis of pathological changes. Magneto-acoustic imaging is a functional approach for imaging of electrical conductivity. This study proposes a continuous-wave magneto-acoustic imaging method. A kHz-range continuous signal with an amplitude range of several volts is used to excite the magneto-acoustic signal and improve the signal-to-noise ratio. The magneto-acoustic signal amplitude and phase are measured to locate the acoustic source via lock-in technology. An optimisation algorithm incorporating nonlinear equations is used to reconstruct the magneto-acoustic source distribution based on the measured amplitude and phase at various frequencies. Validation simulations and experiments were performed in pork samples, and the experimental and simulation results agreed well. Although the excitation current was reduced to 10 mA, the acoustic signal magnitude reached 10⁻⁷ Pa. Experimental reconstruction of the pork tissue showed that the image resolution reached mm levels when the excitation signal was in the kHz range. The signal-to-noise ratio of the detected magneto-acoustic signal was improved by more than 25 dB at 5 kHz when compared to classical 1 MHz pulse excitation. The results reported here will aid further research into magneto-acoustic generation mechanisms and internal tissue conductivity imaging.
  21. Quantitative determination of 5-hydroxy-N-methylpyrrolidone in urine for biological monitoring of N-methylpyrrolidone exposure.

    PubMed

    Ligocka, D; Lison, D; Haufroid, V

    2002-10-05

    The aim of this work was to validate a sensitive method for quantitative analysis of 5-hydroxy-N-methylpyrrolidone (5-HNMP) in urine. This compound has been recommended as a marker for biological monitoring of N-methylpyrrolidone (NMP) exposure. Different solvents and alternative methods of extraction, including liquid-liquid extraction (LLE) on Chem Elut and solid-phase extraction (SPE) on Oasis HLB columns, were tested. The most efficient extraction of 5-HNMP from urine was LLE with Chem Elut columns and dichloromethane as a solvent (consistently 22% recovery). The urinary extracts were derivatized by bis(trimethylsilyl)trifluoroacetamide and analysed by gas chromatography-mass spectrometry (GC-MS) with tetradeuterated 5-HNMP as an internal standard. The detection limit of this method is 0.017 mg/l urine with an intra-assay precision of 1.6-2.6%. The proposed method of extraction is simple and reproducible. Four different m/z signal ratios of TMS-5-HNMP and tetralabelled TMS-5-HNMP have been validated and could be used interchangeably in case of unexpected impurities from the urine matrix. Copyright 2002 Elsevier Science B.V.

  22. Automatic identification of bird targets with radar via patterns produced by wing flapping.

    PubMed

    Zaugg, Serge; Saporta, Gilbert; van Loon, Emiel; Schmaljohann, Heiko; Liechti, Felix

    2008-09-06

    Bird identification with radar is important for bird migration research, environmental impact assessments (e.g. wind farms), aircraft security and radar meteorology. In a study on bird migration, radar signals from birds, insects and ground clutter were recorded. Signals from birds show a typical pattern due to wing flapping. The data were labelled by experts into the four classes BIRD, INSECT, CLUTTER and UFO (unidentifiable signals). We present a classification algorithm aimed at automatic recognition of bird targets. Variables related to signal intensity and wing flapping pattern were extracted (via continuous wavelet transform). We used support vector classifiers to build predictive models. We estimated classification performance via cross validation on four datasets. When data from the same dataset were used for training and testing the classifier, the classification performance was extremely to moderately high. When data from one dataset were used for training and the three remaining datasets were used as test sets, the performance was lower but still extremely to moderately high. This shows that the method generalizes well across different locations or times. Our method provides a substantial gain of time when birds must be identified in large collections of radar signals and it represents the first substantial step in developing a real-time bird identification radar system. We provide some guidelines and ideas for future research.

  23. Individual Biometric Identification Using Multi-Cycle Electrocardiographic Waveform Patterns.

    PubMed

    Lee, Wonki; Kim, Seulgee; Kim, Daeeun

    2018-03-28

    The electrocardiogram (ECG) waveform conveys information regarding the electrical property of the heart. The patterns vary depending on the individual heart characteristics. ECG features can be potentially used for biometric recognition. This study presents a new method using the entire ECG waveform pattern for matching and demonstrates that the approach can potentially be employed for individual biometric identification. Multi-cycle ECG signals were assessed using an ECG measuring circuit, and three electrodes can be patched on the wrists or fingers for various measurements. For biometric identification, four-fold cross validation was used in the experiments to assess how the results of a statistical analysis generalize to an independent data set. Four different pattern matching algorithms, i.e., cosine similarity, cross correlation, city block distance, and Euclidean distance, were tested to compare the individual identification performances with a single channel of ECG signal (3-wire ECG). To evaluate the pattern matching for biometric identification, the ECG recordings for each subject were partitioned into training and test sets. The suggested method obtained a maximum performance of 89.9% accuracy with two heartbeats of ECG signals measured on the wrist and 93.3% accuracy with three heartbeats for 55 subjects. The performance rate with ECG signals measured on the fingers improved up to 99.3% with two heartbeats and 100% with three heartbeats of signals for 20 subjects.
  24. Simulating return signals of a spaceborne high-spectral resolution lidar channel at 532 nm

    NASA Astrophysics Data System (ADS)

    Xiao, Yu; Binglong, Chen; Min, Min; Xingying, Zhang; Lilin, Yao; Yiming, Zhao; Lidong, Wang; Fu, Wang; Xiaobo, Deng

    2018-06-01

    A high spectral resolution lidar (HSRL) system employs a narrow spectral filter to separate the particulate (cloud/aerosol) and molecular scattering components in lidar return signals, which improves the quality of the retrieved cloud/aerosol optical properties. To better develop a future spaceborne HSRL system, a novel simulation technique was developed to simulate spaceborne HSRL return signals at 532 nm using the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) cloud/aerosol extinction coefficient product and numerical weather prediction data. For validating the simulated data, a mathematical particulate extinction coefficient retrieval method for spaceborne HSRL return signals is described here. We compare particulate extinction coefficient profiles from the CALIPSO operational product with simulated spaceborne HSRL data. Further uncertainty analysis shows that the relative uncertainties are acceptable for retrieving the optical properties of cloud and aerosol. The final results demonstrate that they agree well with each other, indicating that the return signals of the spaceborne HSRL molecular channel at 532 nm will be suitable for developing operational algorithms supporting a future spaceborne HSRL system.

  25. A new method to estimate global mass transport and its implication for sea level rise

    NASA Astrophysics Data System (ADS)

    Yi, S.; Heki, K.

    2017-12-01

    Estimates of changes in global land mass using GRACE observations can be achieved by two methods, a mascon method and a forward modeling method. However, results from these two methods show an inconsistent secular trend. The sea level budget can be adopted to validate the consistency among observations of sea level rise by altimetry, steric change by the Argo project, and mass change by GRACE. Mascon products from JPL, GSFC and CSR are compared here; we find that none of these three products achieves a reconciled sea level budget, while this problem can be solved by a new forward modeling method. We further investigate the origin of this difference and speculate that it is caused by signal leakage from the ocean mass. Generally, it is well recognized that land signals leak into oceans, but it also happens the other way around. We stress the importance of correcting for leakage from the ocean in the estimation of global land masses. Based on a reconciled sea level budget, we confirm that global sea level rise has been accelerating significantly over 2005-2015, as a result of the ongoing global temperature increase.

  26. Ocean Wave Separation Using CEEMD-Wavelet in GPS Wave Measurement

    PubMed Central

    Wang, Junjie; He, Xiufeng; Ferreira, Vagner G.

    2015-01-01

    Monitoring ocean waves plays a crucial role in, for example, coastal environmental and protection studies. Traditional methods for measuring ocean waves are based on ultrasonic sensors and accelerometers. However, the Global Positioning System (GPS) has been introduced recently and has the advantage of being smaller, less expensive, and not requiring calibration in comparison with the traditional methods. Therefore, for accurately measuring ocean waves using GPS, further research on the separation of the wave signals from the vertical GPS-mounted carrier displacements is still necessary. In order to contribute to this topic, we present a novel method that combines complementary ensemble empirical mode decomposition (CEEMD) with a wavelet threshold denoising model (i.e., CEEMD-Wavelet). This method seeks to extract wave signals with less residual noise and without losing useful information. Compared with the wave parameters derived from the moving average skill, high-pass filter and wave gauge, the results show that the accuracy of the wave parameters for the proposed method was improved, with errors of about 2 cm and 0.2 s for mean wave height and mean period, respectively, verifying the validity of the proposed method. PMID:26262620

  27. Maximum-likelihood fitting of data dominated by Poisson statistical uncertainties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stoneking, M.R.; Den Hartog, D.J.

    1996-06-01

    The fitting of data by χ²-minimization is valid only when the uncertainties in the data are normally distributed. When analyzing spectroscopic or particle counting data at very low signal levels (e.g., a Thomson scattering diagnostic), the uncertainties follow a Poisson distribution. The authors have developed a maximum-likelihood method for fitting data that correctly treats the Poisson statistical character of the uncertainties. This method maximizes the total probability that the observed data are drawn from the assumed fit function, using the Poisson probability function to determine the probability for each data point. The algorithm also returns uncertainty estimates for the fit parameters. They compare this method with a χ²-minimization routine applied to both simulated and real data. Differences in the returned fits are greater at low signal levels (less than ~20 counts per measurement). The maximum-likelihood method is found to be more accurate and robust, returning a narrower distribution of values for the fit parameters with fewer outliers.
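    The contrast drawn in this record can be reproduced with a short sketch: fit a generic peak-plus-background model to low-count data by minimizing the Poisson negative log-likelihood instead of χ². The model, starting values and optimizer choice are illustrative assumptions, not the authors' code.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def model(x, amp, mu, sigma, bg):
        """Simple peak-plus-background fit function (illustrative)."""
        return bg + amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

    def poisson_nll(params, x, counts):
        """Negative log-likelihood for Poisson-distributed counts (constant terms dropped)."""
        lam = np.clip(model(x, *params), 1e-12, None)
        return np.sum(lam - counts * np.log(lam))

    rng = np.random.default_rng(2)
    x = np.linspace(-5, 5, 60)
    true = (8.0, 0.0, 1.0, 1.0)                      # low-signal regime: ~20 counts at the peak
    counts = rng.poisson(model(x, *true))

    fit = minimize(poisson_nll, x0=(5.0, 0.5, 1.5, 0.5), args=(x, counts),
                   method="Nelder-Mead")
    print(fit.x)                                     # ML estimates of amp, mu, sigma, bg
    ```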
  28. High-accuracy peak picking of proteomics data using wavelet techniques.

    PubMed

    Lange, Eva; Gröpl, Clemens; Reinert, Knut; Kohlbacher, Oliver; Hildebrandt, Andreas

    2006-01-01

    A new peak picking algorithm for the analysis of mass spectrometric (MS) data is presented. It is independent of the underlying machine or ionization method, and is able to resolve highly convoluted and asymmetric signals. The method uses the multiscale nature of spectrometric data by first detecting the mass peaks in the wavelet-transformed signal before a given asymmetric peak function is fitted to the raw data. In an optional third stage, the resulting fit can be further improved using techniques from nonlinear optimization. In contrast to currently established techniques (e.g. SNAP, Apex), our algorithm is able to separate overlapping peaks of multiply charged peptides in ESI-MS data of low resolution. Its improved accuracy with respect to peak positions makes it a valuable preprocessing method for MS-based identification and quantification experiments. The method has been validated on a number of different annotated test cases, where it compares favorably in both runtime and accuracy with currently established techniques. An implementation of the algorithm is freely available in our open source framework OpenMS.

  29. Tensor Fukunaga-Koontz transform for small target detection in infrared images

    NASA Astrophysics Data System (ADS)

    Liu, Ruiming; Wang, Jingzhuo; Yang, Huizhen; Gong, Chenglong; Zhou, Yuanshen; Liu, Lipeng; Zhang, Zhen; Shen, Shuli

    2016-09-01

    Infrared small target detection plays a crucial role in warning and tracking systems. Some novel methods based on pattern recognition technology have attracted much attention from researchers. However, those classic methods must reshape images into vectors of high dimensionality, and vectorizing breaks the natural structure and correlations in the image data. Image representation based on tensors treats images as matrices and can hold the natural structure and correlation information, so tensor algorithms have better classification performance than vector algorithms. The Fukunaga-Koontz transform is one such classification algorithm, but it is a vector method and shares the disadvantage of all vector algorithms. In this paper, we first extend the Fukunaga-Koontz transform into its tensor version, the tensor Fukunaga-Koontz transform. We then design a method based on the tensor Fukunaga-Koontz transform for detecting targets and use it to detect small targets in infrared images. The experimental results, compared in terms of signal-to-clutter ratio, signal-to-clutter gain and background suppression factor, validate the advantage of target detection based on the tensor Fukunaga-Koontz transform over that based on the Fukunaga-Koontz transform.

  30. Synapse fits neuron: joint reduction by model inversion.

    PubMed

    van der Scheer, H T; Doelman, A

    2017-08-01

    In this paper, we introduce a novel simplification method for dealing with physical systems that can be thought to consist of two subsystems connected in series, such as a neuron and a synapse. The aim of our method is to help find a simple, yet convincing model of the full cascade-connected system, assuming that a satisfactory model of one of the subsystems, e.g., the neuron, is already given. Our method allows us to validate a candidate model of the full cascade against data at a finer scale. In our main example, we apply our method to part of the squid's giant fiber system. We first postulate a simple, hypothetical model of cell-to-cell signaling based on the squid's escape response. Then, given a FitzHugh-type neuron model, we derive the verifiable model of the squid giant synapse that this hypothesis implies. We show that the derived synapse model accurately reproduces synaptic recordings, hence lending support to the postulated, simple model of cell-to-cell signaling, which thus, in turn, can be used as a basic building block for network models.

  31. Automated Discrimination Method of Muscular and Subcutaneous Fat Layers Based on Tissue Elasticity

    NASA Astrophysics Data System (ADS)

    Inoue, Masahiro; Fukuda, Osamu; Tsubai, Masayoshi; Muraki, Satoshi; Okumura, Hiroshi; Arai, Kohei

    Balance between human body composition, e.g. bones, muscles, and fat, is a major and basic indicator of personal health. Body composition analysis using ultrasound has developed rapidly. However, interpretation of the echo signal is conducted manually, and accuracy and confidence in interpretation require experience. This paper proposes an automated discrimination method of tissue boundaries for measuring the thickness of subcutaneous fat and muscular layers. A portable one-dimensional ultrasound device was used in this study. The proposed method discriminated tissue boundaries based on tissue elasticity. Validity of the proposed method was evaluated in twenty-one subjects (twelve women, nine men; aged 20-70 yr) at three anatomical sites. Experimental results show that the proposed method can achieve considerably high discrimination performance.
  32. A systematic review of validated methods for identifying acute respiratory failure using administrative and claims data.

    PubMed

    Jones, Natalie; Schneider, Gary; Kachroo, Sumesh; Rotella, Philip; Avetisyan, Ruzan; Reynolds, Matthew W

    2012-01-01

    The Food and Drug Administration's (FDA) Mini-Sentinel pilot program initially aims to conduct active surveillance to refine safety signals that emerge for marketed medical products. A key facet of this surveillance is to develop and understand the validity of algorithms for identifying health outcomes of interest (HOIs) from administrative and claims data. This paper summarizes the process and findings of the algorithm review of acute respiratory failure (ARF). PubMed and Iowa Drug Information Service searches were conducted to identify citations applicable to the anaphylaxis HOI. Level 1 abstract reviews and Level 2 full-text reviews were conducted to find articles using administrative and claims data to identify ARF, including validation estimates of the coding algorithms. Our search revealed a deficiency of literature focusing on ARF algorithms and validation estimates. Only two studies provided codes for ARF, each using related yet different ICD-9 codes (i.e., ICD-9 codes 518.8, "other diseases of lung," and 518.81, "acute respiratory failure"). Neither study provided validation estimates. Research needs to be conducted on designing validation studies to test ARF algorithms and estimating their predictive power, sensitivity, and specificity. Copyright © 2012 John Wiley & Sons, Ltd.

  33. Prediction of endoplasmic reticulum resident proteins using fragmented amino acid composition and support vector machine.

    PubMed

    Kumar, Ravindra; Kumari, Bandana; Kumar, Manish

    2017-01-01

    The endoplasmic reticulum plays an important role in many cellular processes, which includes protein synthesis, folding and post-translational processing of newly synthesized proteins. It is also the site for quality control of misfolded proteins and the entry point of extracellular proteins to the secretory pathway. Hence, at any given point of time, the endoplasmic reticulum contains two different cohorts of proteins: (i) proteins involved in endoplasmic reticulum-specific functions, which reside in the lumen of the endoplasmic reticulum, called endoplasmic reticulum resident proteins, and (ii) proteins which are in the process of moving to the extracellular space. Thus, endoplasmic reticulum resident proteins must somehow be distinguished from newly synthesized secretory proteins, which pass through the endoplasmic reticulum on their way out of the cell. Approximately only 50% of the proteins used in this study as training data had an endoplasmic reticulum retention signal, which shows that these signals are not essentially present in all endoplasmic reticulum resident proteins. This also strongly indicates the role of additional factors in the retention of endoplasmic reticulum-specific proteins inside the endoplasmic reticulum. This is a support vector machine based method, where we used different forms of protein features as inputs to the support vector machine to develop the prediction models. During training, the leave-one-out approach of cross-validation was used. Maximum performance was obtained with a combination of amino acid compositions of different parts of the proteins. In this study, we have reported a novel support vector machine based method for predicting endoplasmic reticulum resident proteins, named ERPred. During training we achieved a maximum accuracy of 81.42% with the leave-one-out approach of cross-validation. When evaluated on an independent dataset, ERPred did prediction with a sensitivity of 72.31% and specificity of 83.69%. We have also annotated six different proteomes to predict the candidate endoplasmic reticulum resident proteins in them. A webserver, ERPred, was developed to make the method available to the scientific community, which can be accessed at http://proteininformatics.org/mkumar/erpred/index.html. We found that out of 124 proteins of the training dataset, only 66 proteins had endoplasmic reticulum retention signals, which shows that these signals are not an absolute necessity for endoplasmic reticulum resident proteins to remain inside the endoplasmic reticulum. This observation also strongly indicates the role of additional factors in the retention of proteins inside the endoplasmic reticulum. Our proposed predictor, ERPred, is a signal-independent tool. It is tuned for the prediction of endoplasmic reticulum resident proteins, even if the query protein does not contain a specific ER-retention signal.

  34. Using meta-differential evolution to enhance a calculation of a continuous blood glucose level.

    PubMed

    Koutny, Tomas

    2016-09-01

    We developed a new model of glucose dynamics. The model calculates the blood glucose level as a function of transcapillary glucose transport. In previous studies, we validated the model with animal experiments, using an analytical method to determine the model parameters. In this study, we validate the model with subjects with type 1 diabetes. In addition, we combine the analytic method with meta-differential evolution. To validate the model with human patients, we obtained a data set of a type 1 diabetes study that was coordinated by the Jaeb Center for Health Research. We calculated a continuous blood glucose level from the continuously measured interstitial fluid glucose level. We used 6 different scenarios to ensure robust validation of the calculation. Over 96% of the calculated blood glucose levels fit the A+B zones of the Clarke Error Grid. No data set required any correction of model parameters during the time course of measuring. We successfully verified the possibility of calculating a continuous blood glucose level for subjects with type 1 diabetes. This study signals a successful transition of our research from animal experiments to human patients. Researchers can test our model with their data on-line at https://diabetes.zcu.cz. Copyright © 2016 The Author. Published by Elsevier Ireland Ltd. All rights reserved.
No data set required any correction of model parameters during the time course of measuring. We successfully verified the possibility of calculating a continuous blood glucose level of subjects with type 1 diabetes. This study signals a successful transition of our research from an animal experiment to a human patient. Researchers can test our model with their data on-line at https://diabetes.zcu.cz. Copyright © 2016 The Author. Published by Elsevier Ireland Ltd.. All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=2745974','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=2745974"><span>Development, validation, and comparison of ICA-based gradient artifact reduction algorithms for simultaneous EEG-spiral in/out and echo-planar fMRI recordings</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Ryali, S; Glover, GH; Chang, C; Menon, V</p> <p>2009-01-01</p> <p>EEG data acquired in an MRI scanner are heavily contaminated by gradient artifacts that can significantly compromise signal quality. We developed two new methods based on Independent Component Analysis (ICA) for reducing gradient artifacts from spiral in-out and echo-planar pulse sequences at 3T, and compared our algorithms with four other commonly used methods: average artifact subtraction (Allen et al. 2000), principal component analysis (Niazy et al. 2005), Taylor series (Wan et al. 2006) and a conventional temporal ICA algorithm. Models of gradient artifacts were derived from simulations as well as a water phantom and performance of each method was evaluated on datasets constructed using visual event-related potentials (ERPs) as well as resting EEG. Our new methods recovered ERPs and resting EEG below the beta band (< 12.5 Hz) with high signal-to-noise ratio (SNR > 4). Our algorithms outperformed all of these methods on resting EEG in the theta- and alpha-bands (SNR > 4); however, for all methods, signal recovery was modest (SNR ~ 1) in the beta-band and poor (SNR < 0.3) in the gamma-band and above. We found that the conventional ICA algorithm performed poorly with uniformly low SNR (< 0.1). Taken together, our new ICA-based methods offer a more robust technique for gradient artifact reduction when scanning at 3T using spiral in-out and echo-planar pulse sequences. We provide new insights into the strengths and weaknesses of each method using a unified subspace framework. PMID:19580873</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://www.dtic.mil/docs/citations/ADA598461','DTIC-ST'); return false;" href="http://www.dtic.mil/docs/citations/ADA598461"><span>Real Time Location of Targets in Cluttered Environments</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.dtic.mil/">DTIC Science & Technology</a></p> <p></p> <p>2014-03-13</p> <p>7 Return Signal computation from a single wind turbine ...7 Return Signal From Multiple Wind Turbines With and Without Aircraft...signals to the far field. 2. Validated using analytical signals. 3. Inner field scattering from a wind turbine and aircraft is computed 4. 
Real Time Location of Targets in Cluttered Environments.

DTIC Science & Technology

2014-03-13

Only fragments of this record are available (table-of-contents and summary lines): return signal computation from a single wind turbine; return signal from multiple wind turbines with and without aircraft; propagation of signals to the far field; validation using analytical signals; computation of inner-field scattering from a wind turbine and aircraft.

Analytic Validation of RNA In Situ Hybridization (RISH) for AR and AR-V7 Expression in Human Prostate Cancer.

PubMed Central

Guedes, Liana B.; Morais, Carlos L.; Almutairi, Fawaz; Haffner, Michael C.; Zheng, Qizhi; Isaacs, John T.; Antonarakis, Emmanuel S.; Lu, Changxue; Tsai, Harrison; Luo, Jun; De Marzo, Angelo M.; Lotan, Tamara L.

2016-01-01

Purpose: RNA expression of androgen receptor splice variants may be a biomarker of resistance to novel androgen deprivation therapies in castrate-resistant prostate cancer (CRPC). We analytically validated an RNA in situ hybridization (RISH) assay for total AR and AR-V7 for use in formalin-fixed paraffin-embedded (FFPE) prostate tumors. Experimental Design: We used prostate cell lines and xenografts to validate chromogenic RISH to detect RNA containing AR exon 1 (AR-E1, surrogate for total AR RNA species) and cryptic exon 3 (AR-CE3, surrogate for AR-V7 expression). RISH signals were quantified in FFPE primary tumors and CRPC specimens and compared to known AR and AR-V7 status by immunohistochemistry and RT-PCR. Results: The quantified RISH results correlated significantly with total AR and AR-V7 levels by RT-PCR in cell lines, xenografts, and autopsy metastases. Both AR-E1 and AR-CE3 RISH signals were localized in nuclear punctae in addition to the expected cytoplasmic speckles. Compared to admixed benign glands, AR-E1 expression was significantly higher in primary tumor cells, with a median fold increase of 3.0 and 1.4 in two independent cohorts (p<0.0001 and p=0.04, respectively). While AR-CE3 expression was detectable in primary prostatic tumors, levels were substantially higher in a subset of CRPC metastases and cell lines and were correlated with AR-E1 expression. Conclusions: RISH for AR-E1 and AR-CE3 is an analytically valid method to examine total AR and AR-V7 RNA levels in FFPE tissues. Future clinical validation studies are required to determine whether AR RISH is a prognostic or predictive biomarker in specific clinical contexts. PMID:27166397
Relations between inductive reasoning and deductive reasoning.

PubMed

Heit, Evan; Rotello, Caren M

2010-05-01

One of the most important open questions in reasoning research is how inductive reasoning and deductive reasoning are related. In an effort to address this question, we applied methods and concepts from memory research. We used 2 experiments to examine the effects of logical validity and premise-conclusion similarity on the evaluation of arguments. Experiment 1 showed 2 dissociations: for a common set of arguments, deduction judgments were more affected by validity, and induction judgments were more affected by similarity. Moreover, Experiment 2 showed that fast deduction judgments were like induction judgments in terms of being more influenced by similarity and less influenced by validity, compared with slow deduction judgments. These novel results pose challenges for a 1-process account of reasoning and are interpreted in terms of a 2-process account of reasoning, which was implemented as a multidimensional signal detection model and applied to receiver operating characteristic data. PsycINFO Database Record (c) 2010 APA, all rights reserved.

Low-power system for the acquisition of the respiratory signal of neonates using diaphragmatic electromyography.

PubMed

Torres, Róbinson; López-Isaza, Sergio; Mejía-Mejía, Elisa; Paniagua, Viviana; González, Víctor

2017-01-01

An apnea episode is defined as the cessation of breathing for ≥15 seconds or as any suspension of breathing accompanied by hypoxia and bradycardia. Information about the respiratory system of a neonate can be obtained from electromyography signals of the diaphragm muscle. The purpose of this paper is to illustrate a method by which the respiratory and electrocardiographic signals of neonates can be obtained from diaphragmatic electromyography. The system was developed using single-supply, micropower components, yielding a low-power system appropriate for the development of portable devices. The stages of the system were tested in both adult and neonate patients. The system delivers the expected signals in both groups and allows the acquisition of respiratory signals directly from the diaphragmatic electromyography. This low-power system may be a good alternative for monitoring cardiac and respiratory activity in newborn babies, both in the hospital and at home. The system delivers good signals but still needs to be validated for use in neonates. It is being used in the Neonatal Intensive Care Unit of the Hospital General de Medellín Luz Castro de Gutiérrez.
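A respiratory waveform is commonly recovered from a diaphragmatic EMG by band-pass filtering, rectification, and low-pass envelope extraction; the cut-off frequencies and synthetic signal below are illustrative assumptions, not values taken from the paper above:

```python
# Hedged sketch: derive a respiration-like envelope from a surface EMG trace.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1000.0                                  # assumed sampling rate, Hz
t = np.arange(0, 30, 1 / fs)
rng = np.random.default_rng(0)

# Synthetic diaphragmatic EMG: noise bursts whose amplitude follows breathing (~0.8 Hz).
breathing = 0.5 * (1 + np.sin(2 * np.pi * 0.8 * t))
emg = breathing * rng.standard_normal(t.size)

def bandpass(x, lo, hi, fs, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def lowpass(x, fc, fs, order=4):
    b, a = butter(order, fc / (fs / 2), btype="low")
    return filtfilt(b, a, x)

emg_band = bandpass(emg, 20.0, 400.0, fs)      # keep the EMG band, drop drift
envelope = lowpass(np.abs(emg_band), 2.0, fs)  # rectify + smooth -> respiratory waveform

mask = (envelope > envelope.mean()).astype(int)
print("rising threshold crossings in 30 s:", int(np.sum(np.diff(mask) == 1)))
```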
We apply this hypothesis to GPCRs within the mammalian nervous system using quantum chemical modeling. We found that non-endogenous agonists of the serotonin receptor share a particular IET spectral aspect, both amongst each other and with the serotonin molecule: a peak whose intensity scales with the known agonist potencies. We propose an experimental validation of this model utilizing lysergic acid dimethylamide (DAM-57), an ergot derivative, and its deuterated isotopologues, and we provide theoretical predictions for comparison to experiment. If validated, our theory may provide new avenues for guided drug design and elevate methods of in silico potency/activity prediction. PMID:25909758

SU-E-QI-11: Measurement of Renal Pyruvate-To-Lactate Exchange with Hyperpolarized 13C MRI.

DOE Office of Scientific and Technical Information (OSTI.GOV)

Adamson, E; Johnson, K; Fain, S

Purpose: Previous work [1] modeling the metabolic flux between hyperpolarized [1-13C]pyruvate and [1-13C]lactate in magnetic resonance spectroscopic imaging (MRSI) experiments failed to account for vascular signal artifacts. Here, we investigate a method to minimize the vascular signal and its impact on the fidelity of metabolic modeling. Methods: MRSI was simulated for renal metabolism in MATLAB both with and without bipolar gradients. The resulting data were fit to a two-site exchange model [1], and the effects of vascular partial-volume artifacts on kinetic modeling were assessed. Bipolar gradients were then incorporated into a gradient-echo sequence to validate the simulations experimentally. The degree of diffusion weighting (b = 32 s/mm^2) was determined empirically from 1H imaging of murine renal vascular signal. The method was then tested in vivo using MRSI with bipolar gradients following injection of hyperpolarized [1-13C]pyruvate (∼80 mM at 20% polarization). Results: In simulations, vascular signal contaminated the renal metabolic signal at resolutions as high as 2 × 2 mm^2 due to partial-volume effects. The apparent exchange rate from pyruvate to lactate (k_p) was underestimated in the presence of these artifacts due to contaminating pyruvate signal. Incorporation of bipolar gradients suppressed vascular signal and improved the accuracy of k_p estimation. Experimentally, the in vivo results supported the ability of bipolar gradients to suppress vascular signal. The in vivo exchange rate increased, as predicted in simulations, from k_p = 0.012 s^-1 to k_p = 0.020 s^-1 after vascular signal suppression. Conclusion: We have demonstrated the limited accuracy of the two-site exchange model in the presence of vascular partial-volume artifacts. The addition of bipolar gradients suppressed vascular signal and improved model accuracy in simulations. Bipolar gradients largely affected k_p estimation in vivo. Currently, slow-flowing spins in small vessels and capillaries are only partially suppressed, so further improvement is possible. Funding support: Seed Grant from the Radiological Society of North America, GE Healthcare, University of Wisconsin Graduate School.
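As a hedged sketch of the two-site exchange idea referenced here (a generic form, not the exact model of reference [1]), the lactate pool can be modeled as being fed from pyruvate at rate k_p while both pools relax, and k_p recovered by least-squares fitting; the relaxation constants and time grid below are assumptions for illustration:

```python
# Hedged sketch: fit the apparent pyruvate-to-lactate exchange rate k_p
# from a simulated hyperpolarized lactate curve using a two-site exchange model.
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import curve_fit

t = np.linspace(0, 60, 61)          # seconds
R1p, R1l = 1 / 30.0, 1 / 25.0       # assumed effective relaxation rates (1/s)

def two_site(y, t, kp):
    P, L = y
    dP = -(R1p + kp) * P            # pyruvate decays and converts
    dL = kp * P - R1l * L           # lactate is fed by pyruvate and decays
    return [dP, dL]

def lactate_curve(t, kp, pyr0):
    sol = odeint(two_site, [pyr0, 0.0], t, args=(kp,))
    return sol[:, 1]

rng = np.random.default_rng(0)
true_kp, true_pyr0 = 0.02, 100.0
lac = lactate_curve(t, true_kp, true_pyr0) + 0.2 * rng.standard_normal(t.size)

popt, _ = curve_fit(lactate_curve, t, lac, p0=[0.01, 80.0])
print("fitted k_p = %.4f s^-1 (true 0.0200)" % popt[0])
```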
Network reconstruction based on proteomic data and prior knowledge of protein connectivity using graph theory.

PubMed

Stavrakas, Vassilis; Melas, Ioannis N; Sakellaropoulos, Theodore; Alexopoulos, Leonidas G

2015-01-01

Modeling of signal transduction pathways is instrumental for understanding cells' function. Researchers have been tackling the modeling of signaling pathways in order to accurately represent the signaling events inside cells' biochemical microenvironment in a way meaningful to scientists in the biological field. In this article, we propose a method to interrogate such pathways in order to produce cell-specific signaling models. We integrate available prior knowledge of protein connectivity, in the form of a Prior Knowledge Network (PKN), with phosphoproteomic data to construct predictive models of the protein connectivity of the interrogated cell type. Several computational methodologies focusing on pathway logic modeling using optimization formulations or machine learning algorithms have been published on this front over the past few years. Here, we introduce a light and fast approach that uses a breadth-first traversal of the graph to identify the shortest pathways and score proteins in the PKN, fitting the dependencies extracted from the experimental design. The pathways are then combined through a heuristic formulation to produce a final topology that handles inconsistencies between the PKN and the experimental scenarios. Our results show that the algorithm we developed is efficient and accurate for the construction of medium and large scale signaling networks. We demonstrate the applicability of the proposed approach by interrogating a manually curated interaction graph model of EGF/TNFA stimulation against made-up experimental data. To avoid the possibility of erroneous predictions, we performed a cross-validation analysis. Finally, we validate that the introduced approach generates predictive topologies, comparable to the ILP formulation. Overall, an efficient approach based on graph theory is presented herein to interrogate protein-protein interaction networks and to provide meaningful biological insights.
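A minimal sketch of the breadth-first-traversal step described above (over a toy prior-knowledge network invented for illustration, not the authors' curated PKN or their scoring heuristic) might look like this:

```python
# Hedged sketch: breadth-first search over a small directed prior-knowledge
# network to find one shortest signaling path from a stimulus to a readout.
from collections import deque

# Toy PKN: directed edges "A activates B" (illustrative, not a curated network).
pkn = {
    "EGF": ["EGFR"],
    "EGFR": ["RAS", "PI3K"],
    "RAS": ["RAF"],
    "RAF": ["MEK"],
    "MEK": ["ERK"],
    "PI3K": ["AKT"],
    "AKT": ["ERK"],
}

def shortest_path(graph, source, target):
    """Return one shortest path source -> target, or None if unreachable."""
    parents = {source: None}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        if node == target:
            path = []
            while node is not None:
                path.append(node)
                node = parents[node]
            return path[::-1]
        for nxt in graph.get(node, []):
            if nxt not in parents:
                parents[nxt] = node
                queue.append(nxt)
    return None

print(shortest_path(pkn, "EGF", "ERK"))   # ['EGF', 'EGFR', 'PI3K', 'AKT', 'ERK']
```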
A robust hypothesis test for the sensitive detection of constant speed radiation moving sources.

NASA Astrophysics Data System (ADS)

Dumazert, Jonathan; Coulon, Romain; Kondrasovs, Vladimir; Boudergui, Karim; Moline, Yoann; Sannié, Guillaume; Gameiro, Jordan; Normand, Stéphane; Méchin, Laurence

2015-09-01

Radiation Portal Monitors are deployed in linear networks to detect radiological material in motion. As a complement to single- and multichannel detection algorithms, which are inefficient under too low signal-to-noise ratios, temporal correlation algorithms have been introduced. Hypothesis test methods based on empirically estimated means and variances of the signals delivered by the different channels have shown significant gains in the tradeoff between detection sensitivity and false alarm probability. This paper discloses the concept of a new hypothesis test for temporal correlation detection methods that takes advantage of the Poisson nature of the registered counting signals, and establishes a benchmark between this test and its empirical counterpart. The simulation study validates that, in the four relevant configurations of a pedestrian source carrier under respectively high and low count rate radioactive backgrounds, and a vehicle source carrier under the same respectively high and low count rate radioactive backgrounds, the newly introduced hypothesis test ensures a significantly improved compromise between sensitivity and false alarm. It also guarantees that the optimal coverage factor for this compromise remains stable regardless of signal-to-noise ratio variations between 2 and 0.8, therefore allowing the final user to parametrize the test with the sole prior knowledge of the background amplitude.
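The abstract does not specify the test itself, but the general flavour of a Poisson-based decision rule can be sketched as follows (a plain single-channel significance test against a known background rate, invented here for illustration, not the paper's multichannel correlation test):

```python
# Hedged sketch: flag a counting interval when the observed counts are unlikely
# under a Poisson background-only hypothesis.
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(0)
background_rate = 40.0      # expected background counts per interval (assumed known)
alpha = 1e-3                # target false-alarm probability per interval

# Simulate a passage: mostly background, with a weak source in intervals 10-12.
counts = rng.poisson(background_rate, size=25)
counts[10:13] += rng.poisson(25.0, size=3)

# p-value of observing >= n counts under the background-only hypothesis.
p_values = poisson.sf(counts - 1, background_rate)
alarms = np.flatnonzero(p_values < alpha)
print("alarm intervals:", alarms)
```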
Fluorescence-based assay probing regulator of G protein signaling partner proteins.

PubMed

Huang, Po-Shiun; Yeh, Hsin-Sung; Yi, Hsiu-Ping; Lin, Chain-Jia; Yang, Chii-Shen

2012-04-01

The regulator of G protein signaling (RGS) proteins are essential modulators of the G protein system. Besides regulating G protein signaling by accelerating the GTPase activity of Gα subunits, RGS proteins are implicated in exerting other functions and are known to be involved in several diseases. Moreover, the existence of a single RGS protein in plants and the discovery of its seven-transmembrane domain in 2003 triggered efforts to unveil detailed structural and functional information on RGS proteins. We present a method for real-time examination of the protein-protein interactions between RGS and Gα subunits. AtRGS1 from plants and RGS4 from mammals were site-directedly labeled with the fluorescent probe Lucifer yellow on engineered cysteine residues and used to interact with different Gα subunits. The physical interactions can be revealed by monitoring the real-time fluorescence changes (8.6% fluorescence increase in mammals and 27.6% in plants); their correlation with function was shown with a GTPase-accelerating activity assay and further confirmed by measurement of K(d). We validate the effectiveness of this method and suggest its application to the exploration of more RGS signaling partner proteins in physiological and pathological studies. Copyright © 2012 Elsevier Inc. All rights reserved.

Bayesian approach for counting experiment statistics applied to a neutrino point source analysis.

NASA Astrophysics Data System (ADS)

Bose, D.; Brayeur, L.; Casier, M.; de Vries, K. D.; Golup, G.; van Eijndhoven, N.

2013-12-01

In this paper we present a model-independent analysis method following Bayesian statistics to analyse data from a generic counting experiment, and apply it to the search for neutrinos from point sources. We discuss a test statistic defined following a Bayesian framework that will be used in the search for a signal. In case no signal is found, we derive an upper limit without the introduction of approximations. The Bayesian approach allows us to obtain the full probability density function for both the background and the signal rate. As such, we have direct access to any signal upper limit. The upper limit derivation directly compares with a frequentist approach and is robust in the case of low-counting observations. Furthermore, it also allows previous upper limits obtained by other analyses to be accounted for via the concept of prior information, without the need for the ad hoc application of trial factors. To investigate the validity of the presented Bayesian approach, we have applied this method to the public IceCube 40-string configuration data for 10 nearby blazars and we have obtained a flux upper limit, which is in agreement with the upper limits determined via a frequentist approach. Furthermore, the upper limit obtained compares well with the previously published result of IceCube, using the same data set.
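A compact illustration of the kind of Bayesian upper-limit computation described above (a single-bin Poisson counting experiment with a known expected background and a flat prior on the signal rate; the numbers are made up and this is not the IceCube analysis itself) is:

```python
# Hedged sketch: posterior for a Poisson signal rate s given n observed events
# and an expected background b, and the corresponding 90% credible upper limit.
import numpy as np
from scipy.stats import poisson

n_obs = 5          # observed events (made-up)
b = 3.2            # expected background events (made-up)

s_grid = np.linspace(0.0, 30.0, 3001)
ds = s_grid[1] - s_grid[0]

posterior = poisson.pmf(n_obs, s_grid + b)   # likelihood times a flat prior on s >= 0
posterior /= posterior.sum() * ds            # normalize to a probability density

cdf = np.cumsum(posterior) * ds
upper_limit_90 = s_grid[np.searchsorted(cdf, 0.90)]
print(f"90% credible upper limit on the signal rate: {upper_limit_90:.2f} events")
```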
Novel laser Doppler flowmeter for pulpal blood flow measurements.

NASA Astrophysics Data System (ADS)

Zang, De Yu; Millerd, James E.; Wilder-Smith, Petra B. B.; Arrastia-Jitosho, Anna-Marie A.

1996-04-01

We have proposed and experimentally demonstrated a new configuration of laser Doppler flowmetry for dental pulpal blood flow measurements. To date, the vitality of a tooth can be determined only by subjective thermal or electric tests, which are of questionable reliability and may induce pain in the patient. Non-invasive techniques for determining pulpal vascular reactions to injury, treatment, and medication are in great demand. The laser Doppler flowmetry technique is non-invasive; however, clinical studies have shown that, when used to measure pulpal blood flow, the conventional back-scattering Doppler method suffers from a low signal-to-noise ratio (SNR) and unreliable flux readings, rendering it impossible to calibrate. A simplified theoretical model indicates that by using a forward-scattered geometry the detected signal has a much higher SNR and can be calibrated. The forward-scattered signal is readily detectable because teeth are relatively thin organs with moderate optical loss. A preliminary experiment comparing forward-scattered detection with conventional back-scattered detection was carried out using an extracted human molar. The results validated the findings of the simple theoretical model and clearly showed the utility of the forward-scattering geometry. The back-scattering method had readings that fluctuated by as much as 187% in response to small changes in sensor position relative to the tooth. The forward-scattered method had consistent readings (within 10%) that were independent of the sensor position, a signal-to-noise ratio at least 5.6 times higher than the back-scattering method, and a linear response to flow rate.
Theoretical and experimental study of low-finesse extrinsic Fabry-Perot interferometric fiber optic sensors.

NASA Astrophysics Data System (ADS)

Han, Ming

In this dissertation, a detailed and systematic theoretical and experimental study of low-finesse extrinsic Fabry-Perot interferometric (EFPI) fiber optic sensors, together with their signal processing methods for white-light systems, is presented. The work aims to provide a better understanding of the operational principle of EFPI fiber optic sensors and is useful and important in the design, optimization, fabrication, and application of single-mode fiber (SMF) EFPI (SMF-EFPI) and multimode fiber (MMF) EFPI (MMF-EFPI) sensor systems. The cases of SMF-EFPI and MMF-EFPI sensors are considered separately. In the analysis of SMF-EFPI sensors, the light transmitted in the fiber is approximated by a Gaussian beam, and the obtained spectral transfer function of the sensors includes an extra phase shift due to the light coupling at the fiber end-face. This extra phase shift has not been addressed by previous researchers and is of great importance for high-accuracy, high-resolution signal processing of white-light SMF-EFPI systems. Fringe visibility degradation due to gap-length increase and sensor imperfections is studied. The results indicate that the fringe visibility of a SMF-EFPI sensor is relatively insensitive to gap-length changes and sensor imperfections. Based on the spectral fringe pattern predicted by the theory of SMF-EFPI sensors, a novel curve-fitting signal processing method (the Type 1 curve-fitting method) is presented for white-light SMF-EFPI sensor systems. Other spectral-domain signal processing methods, including the wavelength-tracking, Type 2-3 curve-fitting, Fourier transform, and two-point interrogation methods, are reviewed and systematically analyzed. Experiments were carried out to compare the performances of these signal processing methods. The results show that the Type 1 curve-fitting method simultaneously achieves high accuracy, high resolution, large dynamic range, and the capability of absolute measurement, while the others either have lower resolution or are not capable of absolute measurement. Previous mathematical models for MMF-EFPI sensors are all based on geometric optics, and therefore their applications have many limitations. In this dissertation, a modal theory is developed that can be used in any situation and is more accurate. The mathematical description of the spectral fringes of MMF-EFPI sensors is obtained from the modal theory. The effect of system parameters, including the sensor head structure, the fiber parameters, and the mode power distribution in the MMF, on the fringe visibility of MMF-EFPI sensors is analyzed. Experiments were carried out to validate the theory. The fundamental mechanisms that cause the degradation of fringe visibility in MMF-EFPI sensors are revealed. It is shown that, in situations where fringe visibility is important and difficult to achieve, a simple method of launching the light into the MMF-EFPI sensor system from the output of a SMF can be used to improve the fringe visibility and to ease the fabrication difficulties of MMF-EFPI sensors. Signal processing methods that are well understood in white-light SMF-EFPI sensor systems may exhibit new aspects when applied to white-light MMF-EFPI sensor systems. This dissertation reveals that variations of the mode power distribution (MPD) in the MMF can cause phase variations of the spectral fringes from a MMF-EFPI sensor and introduce measurement errors for any signal processing method in which the phase information is used. This MPD effect on the wavelength-tracking method in white-light MMF-EFPI sensors is theoretically analyzed. The fringe phase changes caused by MPD variations were experimentally observed, and the MPD effect is thus validated.
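The spectral-domain curve-fitting idea can be illustrated with a generic low-finesse two-beam fringe model, I(λ) ≈ A + B·cos(4πL/λ + φ), fitted for the gap length L; the constants, wavelength range, and initial guess below are assumptions, and this is not the dissertation's Type 1 algorithm itself:

```python
# Hedged sketch: estimate an EFPI gap length by fitting a two-beam
# interference fringe model to a white-light spectrum.
import numpy as np
from scipy.optimize import curve_fit

lam = np.linspace(1.50e-6, 1.60e-6, 500)      # wavelength grid, meters

def fringe(lam, A, B, L, phi):
    return A + B * np.cos(4 * np.pi * L / lam + phi)

rng = np.random.default_rng(0)
true = dict(A=1.0, B=0.4, L=50e-6, phi=0.3)   # 50 micrometer cavity
spectrum = fringe(lam, **true) + 0.01 * rng.standard_normal(lam.size)

# The fit needs a coarse initial estimate of L (in practice e.g. from an FFT of
# the spectrum versus wavenumber); here we simply start close to the true value.
p0 = [1.0, 0.3, 49.9e-6, 0.0]
popt, _ = curve_fit(fringe, lam, spectrum, p0=p0)
print("estimated gap length: %.2f um" % (popt[2] * 1e6))   # should refine toward 50 um
```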
Advances in heart rate variability signal analysis: joint position statement by the e-Cardiology ESC Working Group and the European Heart Rhythm Association co-endorsed by the Asia Pacific Heart Rhythm Society.

PubMed

Sassi, Roberto; Cerutti, Sergio; Lombardi, Federico; Malik, Marek; Huikuri, Heikki V; Peng, Chung-Kang; Schmidt, Georg; Yamamoto, Yoshiharu

2015-09-01

Following the publication of the Task Force document on heart rate variability (HRV) in 1996, a number of articles have been published to describe new HRV methodologies and their application in different physiological and clinical studies. This document presents a critical review of the new methods. Particular attention has been paid to methodologies that were not reported in the 1996 standardization document but have been more recently tested in sufficiently sized populations. The following methods were considered: long-range correlation and fractal analysis; short-term complexity; entropy and regularity; and nonlinear dynamical systems and chaotic behaviour. For each of these methods, technical aspects, clinical achievements, and suggestions for clinical application were reviewed. While the novel approaches have contributed to the technical understanding of the signal character of HRV, their success in developing new clinical tools, such as those for the identification of high-risk patients, has been rather limited. Available results obtained in selected populations of patients by specialized laboratories are nevertheless of interest, but new prospective studies are needed. The investigation of new parameters, descriptive of the complex regulation mechanisms of heart rate, is to be encouraged because not all information in the HRV signal is captured by traditional methods. The new technologies could thus provide, after proper validation, additional physiological and clinical meaning. Multidisciplinary dialogue and specialized courses combining clinical cardiology and complex signal processing methods seem warranted for further advances in studies of cardiac oscillations and in the understanding of normal and abnormal cardiac control processes. Published on behalf of the European Society of Cardiology. All rights reserved. © The Author 2015. For permissions please email: journals.permissions@oup.com.
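One of the "long-range correlation and fractal analysis" tools reviewed in such documents is detrended fluctuation analysis (DFA); a compact, generic implementation (applied to a synthetic RR-interval series, with box sizes chosen arbitrarily for illustration) looks like this:

```python
# Hedged sketch: detrended fluctuation analysis of a synthetic RR-interval series.
import numpy as np

def dfa_alpha(x, scales):
    """Return the DFA scaling exponent alpha estimated over the given box sizes."""
    y = np.cumsum(x - np.mean(x))                 # integrated (profile) series
    fluct = []
    for n in scales:
        n_boxes = len(y) // n
        rms = []
        for i in range(n_boxes):
            seg = y[i * n:(i + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # linear detrending per box
            rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
        fluct.append(np.mean(rms))
    # Slope of log F(n) versus log n is the scaling exponent alpha.
    alpha, _ = np.polyfit(np.log(scales), np.log(fluct), 1)
    return alpha

rng = np.random.default_rng(0)
rr = 0.8 + 0.05 * rng.standard_normal(4096)       # uncorrelated "white" RR series
scales = np.array([4, 8, 16, 32, 64, 128])
print("DFA alpha:", round(dfa_alpha(rr, scales), 2))  # ~0.5 for uncorrelated data
```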
The contactless detection of local normal transitions in superconducting coils by using Poynting's vector method.

NASA Astrophysics Data System (ADS)

Habu, K.; Kaminohara, S.; Kimoto, T.; Kawagoe, A.; Sumiyoshi, F.; Okamoto, H.

2010-11-01

We have developed a new monitoring system to detect unusual events in superconducting coils without direct contact with the coils, using Poynting's vector method. In this system, potential leads and pickup coils are set around the superconducting coils to measure local electric and magnetic fields, respectively. By measuring the sets of magnetic and electric fields, the Poynting vectors around the coil can be obtained, and an unusual event in the coil can be detected as a change of the Poynting vector. This system has no risk of the voltage breakdown that may occur with the balance voltage method, because no direct contacts on the coil windings are needed. In a previous paper, we demonstrated with a small test system that our method can detect normal transitions in a Bi-2223 coil without direct contact with the coil windings. For the system to be applied to practical devices, it must be able to detect local normal transitions in the coils so that unusual events can be detected early. The signal voltages of the small sensors used to measure local magnetic and electric fields are small. Although the signals of the pickup coils can easily be increased by increasing the number of turns of the pickup coils, an increase in the signals of the potential leads is not easily attained. In this paper, a new method to amplify the signal of local electric fields around the coil is proposed. The validity of the method has been confirmed by measuring local electric fields around a Bi-2223 coil.

Instantaneous speed jitter detection via encoder signal and its application for the diagnosis of planetary gearbox.

NASA Astrophysics Data System (ADS)

Zhao, Ming; Jia, Xiaodong; Lin, Jing; Lei, Yaguo; Lee, Jay

2018-01-01

In modern rotating machinery, rotary encoders are widely used for positioning and dynamic control. The study in this paper indicates that the encoder signal, after proper processing, can also be used effectively for the health monitoring of rotating machines. In this work, a kurtosis-guided local polynomial differentiator (KLPD) is proposed to estimate the instantaneous angular speed (IAS) of rotating machines from the encoder signal. Compared with the central difference method, the KLPD is more robust to noise and is able to precisely capture the weak speed jitters introduced by mechanical defects. The fault diagnosis of planetary gearboxes has proven to be a challenging issue in both industry and academia. Based on the proposed KLPD, a systematic method for the fault diagnosis of planetary gearboxes is proposed. In this method, residual time synchronous averaging (RTSA) is first employed to remove the operation-related IAS components that come from normal gear meshing and non-stationary load variations; the KLPD is then utilized to detect and enhance the speed jitter from the IAS residual in a data-driven manner. The effectiveness of the proposed method has been validated with both simulated and experimental data. The results demonstrate that the proposed KLPD-RTSA could not only detect fault signatures but also identify defective components, thus providing a promising tool for the health monitoring of planetary gearboxes.
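A local polynomial differentiator of the general kind mentioned above can be approximated with an off-the-shelf Savitzky-Golay derivative filter (a plain fixed-window stand-in, not the kurtosis-guided KLPD of the paper; the sampling rate, window length, and jitter model are assumptions):

```python
# Hedged sketch: estimate instantaneous angular speed from an encoder angle
# signal with a local-polynomial (Savitzky-Golay) derivative, versus raw np.diff.
import numpy as np
from scipy.signal import savgol_filter

fs = 10_000.0                      # assumed encoder sampling rate, Hz
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(0)

speed_true = 2 * np.pi * 25 * np.ones_like(t)            # 25 rev/s nominal speed
speed_true += 0.5 * np.sin(2 * np.pi * 180 * t)          # weak gear-mesh-like jitter
angle = np.cumsum(speed_true) / fs
angle_meas = angle + 1e-4 * rng.standard_normal(t.size)  # quantization-like noise

ias_diff = np.diff(angle_meas) * fs                       # naive differentiation
ias_sg = savgol_filter(angle_meas, window_length=21, polyorder=3,
                       deriv=1, delta=1 / fs)             # local polynomial derivative

print("error std, naive diff:   %.3f rad/s" % np.std(ias_diff - speed_true[:-1]))
print("error std, Sav-Gol deriv: %.3f rad/s" % np.std(ias_sg - speed_true))
```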
Electrically-evoked frequency-following response (EFFR) in the auditory brainstem of guinea pigs.

PubMed

He, Wenxin; Ding, Xiuyong; Zhang, Ruxiang; Chen, Jing; Zhang, Daoxing; Wu, Xihong

2014-01-01

It is still a difficult clinical issue to decide whether a patient is a suitable candidate for a cochlear implant and to plan postoperative rehabilitation, especially for some special cases, such as auditory neuropathy. A partial solution to these problems is to preoperatively evaluate the functional integrity of the auditory neural pathways. To evaluate the strength of phase-locking of auditory neurons, which is not reflected in previous methods based on the electrically evoked auditory brainstem response (EABR), a new method for recording phase-locking-related auditory responses to electrical stimulation, called the electrically evoked frequency-following response (EFFR), was developed and evaluated using guinea pigs. The main objective was to assess the feasibility of the method by testing whether the recorded signals reflected auditory neural responses or artifacts. The results showed the following: 1) the recorded signals were evoked by neural responses rather than by artifact; 2) responses evoked by periodic signals were significantly higher than those evoked by white noise; 3) the latency of the responses fell in the expected range; 4) the responses decreased significantly after death of the guinea pigs; and 5) the responses decreased significantly when the animal was replaced by an electrical resistance. All of these results suggest the method is valid. Recordings obtained using complex tones with a missing fundamental component and using pure tones of various frequencies were consistent with those obtained using acoustic stimulation in previous studies.
An identification method of orbit responses rooting in vibration analysis of rotor during touchdowns of active magnetic bearings.

NASA Astrophysics Data System (ADS)

Liu, Tao; Lyu, Mindong; Wang, Zixi; Yan, Shaoze

2018-02-01

Identification of orbit responses can make active protection operations easier to realize for active magnetic bearings (AMB) in case of touchdowns. This paper presents an identification method for the orbit responses rooted in signal processing of the rotor displacements during touchdowns. The recognition method consists of two major steps. First, the combined rub and bouncing is distinguished from the other orbit responses by the mathematical expectation of the axis displacements of the rotor, because when combined rub and bouncing occurs, the rotor of the AMB is not always close to the touchdown bearings (TDB). Second, the pendulum vibration and the full rub are recognized from the Fourier spectrum of the displacement in the horizontal direction, as the frequency characteristics of the two responses differ. The principle of the whole identification algorithm is illustrated with two sets of signals generated by a dynamic model of the specific rotor-TDB system, and the universality of the method is validated with four further sets of signals. The adaptability to noise is also tested by adding white noise of different strengths, and the result is promising. As the mathematical expectation and the discrete Fourier transform are the major calculations of the algorithm, its computational load is low, so it is fast and easily implemented and embedded in the AMB controller, which is of significant engineering value for the protection of AMBs during touchdowns.

A Framework for the Comparative Assessment of Neuronal Spike Sorting Algorithms towards More Accurate Off-Line and On-Line Microelectrode Arrays Data Analysis.

PubMed

Regalia, Giulia; Coelli, Stefania; Biffi, Emilia; Ferrigno, Giancarlo; Pedrocchi, Alessandra

2016-01-01

Neuronal spike sorting algorithms are designed to retrieve neuronal network activity on a single-cell level from extracellular multiunit recordings with Microelectrode Arrays (MEAs). In typical analysis of MEA data, one spike sorting algorithm is applied indiscriminately to all electrode signals. However, this approach neglects the dependency of algorithms' performances on the neuronal signal properties at each channel, which calls for data-centric methods. Moreover, sorting is commonly performed off-line, which is time and memory consuming and prevents researchers from having an immediate glance at ongoing experiments. The aim of this work is to provide a versatile framework to support the evaluation and comparison of different spike classification algorithms suitable for both off-line and on-line analysis. We incorporated different spike sorting "building blocks" into a Matlab-based software, including 4 feature extraction methods, 3 feature clustering methods, and 1 template matching classifier. The framework was validated by applying different algorithms to simulated and real signals from neuronal cultures coupled to MEAs. Moreover, the system has been proven effective in running on-line analysis on a standard desktop computer, after the selection of the most suitable sorting methods. This work provides a useful and versatile instrument for a supported comparison of different options for spike sorting towards more accurate off-line and on-line MEA data analysis. PMID:27239191
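A minimal illustration of the "feature extraction plus clustering" building blocks such a framework compares (here PCA features and k-means on synthetic spike waveforms; the waveform shapes and cluster count are invented for the example and do not correspond to the framework's specific methods) is:

```python
# Hedged sketch: sort synthetic spike waveforms with PCA features + k-means.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_samples = 32                                  # samples per spike waveform
t = np.linspace(0, 1, n_samples)

# Two made-up spike templates plus additive noise.
template_a = -np.exp(-((t - 0.30) / 0.05) ** 2)
template_b = -0.6 * np.exp(-((t - 0.35) / 0.10) ** 2) + 0.3 * np.exp(-((t - 0.6) / 0.1) ** 2)
spikes = np.vstack([template_a + 0.05 * rng.standard_normal(n_samples) for _ in range(100)] +
                   [template_b + 0.05 * rng.standard_normal(n_samples) for _ in range(100)])

features = PCA(n_components=3).fit_transform(spikes)              # feature extraction
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)

# With well-separated templates, each cluster should map onto one template.
print("cluster sizes:", np.bincount(labels))
```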
Self adaptive multi-scale morphology AVG-Hat filter and its application to fault feature extraction for wheel bearing.

NASA Astrophysics Data System (ADS)

Deng, Feiyue; Yang, Shaopu; Tang, Guiji; Hao, Rujiang; Zhang, Mingliang

2017-04-01

Wheel bearings are essential mechanical components of trains, and fault detection of the wheel bearing is of great significance for avoiding economic loss and casualties. However, considering the operating conditions, detection and extraction of the fault features hidden in the heavy noise of the vibration signal is a challenging task. Therefore, a novel method called the adaptive multi-scale AVG-Hat morphology filter (MF) is proposed to solve it. The morphology AVG-Hat operator not only suppresses the interference of strong background noise greatly but also enhances the ability to extract fault features. The improved envelope spectrum sparsity (IESS) is proposed as a new evaluation index to select the optimal filtered signal processed by the multi-scale AVG-Hat MF. It provides a comprehensive evaluation of the intensity of the fault impulses relative to the background noise. The weighted coefficients of the different-scale structural elements (SEs) in the multi-scale MF are adaptively determined by the particle swarm optimization (PSO) algorithm. The effectiveness of the method is validated by analyzing real wheel bearing fault vibration signals (outer race fault, inner race fault, and rolling element fault). The results show that the proposed method improves the extraction of fault features effectively compared with the multi-scale combined morphological filter (CMF) and multi-scale morphology gradient filter (MGF) methods.
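The exact AVG-Hat operator is not defined in the abstract, but the underlying idea of morphological hat filtering for impulse enhancement can be sketched with standard grey-scale opening and closing (the averaging of white and black top-hats below, the structuring-element sizes, and the test signal are all assumptions made for illustration, not the paper's operator):

```python
# Hedged sketch: enhance fault-like impulses buried under a strong harmonic
# component using multi-scale morphological top-hat style filtering.
import numpy as np
from scipy.ndimage import grey_opening, grey_closing

rng = np.random.default_rng(0)
fs = 10_000
t = np.arange(0, 1, 1 / fs)
signal = 2.0 * np.sin(2 * np.pi * 30 * t) + 0.1 * rng.standard_normal(t.size)
signal[::833] += 3.0                        # fault-like impulses every ~83 ms

def hat_filter(x, size):
    white = x - grey_opening(x, size=size)  # narrow peaks above the local baseline
    black = grey_closing(x, size=size) - x  # narrow valleys below the local baseline
    return 0.5 * (white + black)            # assumed "average hat" combination

scales = [3, 5, 9, 15]                      # structuring element lengths (samples)
enhanced = np.mean([hat_filter(signal, s) for s in scales], axis=0)

print("peak-to-rms before: %.1f  after: %.1f"
      % (signal.max() / signal.std(), enhanced.max() / enhanced.std()))
```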
Computational prediction of type III and IV secreted effectors in Gram-negative bacteria.

DOE Office of Scientific and Technical Information (OSTI.GOV)

McDermott, Jason E.; Corrigan, Abigail L.; Peterson, Elena S.

2011-01-01

In this review, we provide an overview of the methods employed by four recent papers that described novel methods for computational prediction of secreted effectors from type III and IV secretion systems in Gram-negative bacteria. We summarize the results of the studies in terms of performance at accurately predicting secreted effectors and the similarities found between secretion signals that may reflect biologically relevant features for recognition. We discuss the web-based tools for secreted effector prediction described in these studies and announce the availability of our tool, the SIEVEserver (http://www.biopilot.org). Finally, we assess the accuracy of the three type III effector prediction methods on a small set of proteins not known prior to the development of these tools, which we have recently discovered and validated using both experimental and computational approaches. Our comparison shows that all methods use similar approaches and, in general, arrive at similar conclusions. We discuss the possibility of an order-dependent motif in the secretion signal, which was a point of disagreement among the studies. Our results show that there may be classes of effectors in which the signal has a loosely defined motif, and others in which secretion depends only on compositional biases. Computational prediction of secreted effectors from protein sequences represents an important step toward better understanding the interaction between pathogens and hosts.
Simultaneous estimation of deterministic and fractal stochastic components in non-stationary time series.

NASA Astrophysics Data System (ADS)

García, Constantino A.; Otero, Abraham; Félix, Paulo; Presedo, Jesús; Márquez, David G.

2018-07-01

In the past few decades, it has been recognized that 1/f fluctuations are ubiquitous in nature. The most widely used mathematical models for capturing the long-term memory properties of 1/f fluctuations have been stochastic fractal models. However, physical systems do not usually consist of just stochastic fractal dynamics; they often also show some degree of deterministic behavior. The present paper proposes a model based on fractal stochastic and deterministic components that can provide a valuable basis for the study of complex systems with long-term correlations. The fractal stochastic component is assumed to be a fractional Brownian motion process and the deterministic component is assumed to be a band-limited signal. We also provide a method that, under the assumptions of this model, is able to characterize the fractal stochastic component and to provide an estimate of the deterministic components present in a given time series. The method is based on a Bayesian wavelet shrinkage procedure that exploits the self-similar properties of fractal processes in the wavelet domain. The method has been validated on simulated signals and on real signals with economic and biological origins. The real examples illustrate how our model may be useful for exploring the deterministic-stochastic duality of complex systems and for uncovering interesting patterns present in time series.

A Simple Noise Correction Scheme for Diffusional Kurtosis Imaging.

PubMed Central

Glenn, G. Russell; Tabesh, Ali; Jensen, Jens H.

2014-01-01

Purpose: Diffusional kurtosis imaging (DKI) is sensitive to the effects of signal noise due to strong diffusion weightings and higher-order modeling of the diffusion-weighted signal. A simple noise correction scheme is proposed to remove the majority of the noise bias in the estimated diffusional kurtosis. Methods: Weighted linear least squares (WLLS) fitting, together with a voxel-wise, subtraction-based noise correction from multiple independent acquisitions, is employed to reduce noise bias in DKI data. The method is validated in phantom experiments and demonstrated for DKI-derived parameter estimates in in vivo human brain. Results: As long as the signal-to-noise ratio (SNR) for the most heavily diffusion-weighted images is greater than 2.1, errors in phantom diffusional kurtosis estimates are found to be less than 5 percent with noise correction, but as high as 44 percent for uncorrected estimates. In human brain, noise correction is also shown to improve diffusional kurtosis estimates derived from measurements made with low SNR. Conclusion: The proposed correction technique removes the majority of noise bias from diffusional kurtosis estimates in noisy phantom data and is applicable to DKI of human brain. Features of the method include computational simplicity and ease of integration into standard WLLS DKI post-processing algorithms. PMID:25172990
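The wavelet-domain self-similarity that the shrinkage procedure above exploits can be illustrated with a rough Hurst-exponent estimate from the scaling of wavelet detail variances (a generic estimator assumed here for illustration, not the authors' Bayesian shrinkage; ordinary Brownian motion is used as the test signal because its Hurst exponent is known to be 0.5):

```python
# Hedged sketch: estimate the Hurst exponent of a (fractional) Brownian motion
# from the scaling of wavelet detail-coefficient variances across levels.
import numpy as np
import pywt

rng = np.random.default_rng(0)
x = np.cumsum(rng.standard_normal(2 ** 14))     # ordinary Brownian motion, H = 0.5

coeffs = pywt.wavedec(x, "db4", level=8)        # [cA8, cD8, cD7, ..., cD1]
details = coeffs[1:][::-1]                      # reorder as levels 1 (fine) .. 8 (coarse)

levels = np.arange(1, len(details) + 1)
log_var = np.log2([np.var(d) for d in details])

# For fBm, Var(detail at level j) ~ 2**(j * (2H + 1)), so the slope gives H.
slope, _ = np.polyfit(levels, log_var, 1)
H_est = (slope - 1) / 2
print("estimated Hurst exponent: %.2f (expected ~0.5)" % H_est)
```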
HEVC for high dynamic range services.

NASA Astrophysics Data System (ADS)

Kim, Seung-Hwan; Zhao, Jie; Misra, Kiran; Segall, Andrew

2015-09-01

Displays capable of showing a greater range of luminance values can render content containing high dynamic range information in a way that gives viewers a more immersive experience. This paper introduces the design aspects of a high dynamic range (HDR) system and examines the performance of the HDR processing chain in terms of compression efficiency. Specifically, it examines the relation between the recently introduced Society of Motion Picture and Television Engineers (SMPTE) ST 2084 transfer function and the High Efficiency Video Coding (HEVC) standard. SMPTE ST 2084 is designed to cover the full range of an HDR signal from 0 to 10,000 nits; however, in many situations the valid signal range of actual video is smaller than the range supported by SMPTE ST 2084. This restricted signal range results in a restricted range of code values for the input video data and adversely impacts compression efficiency. In this paper, we propose a code value remapping method that extends the restricted-range code values to full-range code values so that existing standards such as HEVC may better compress the video content. The paper also identifies the related non-normative, encoder-only changes that are required by the remapping method for a fair comparison with the anchor. Results are presented comparing the efficiency of the current approach with the proposed remapping method for HM-16.2.

Adaptive model reduction for continuous systems via recursive rational interpolation.

NASA Technical Reports Server (NTRS)

Lilly, John H.

1994-01-01

A method for adaptive identification of reduced-order models for continuous stable SISO and MIMO plants is presented. The method recursively finds a model whose transfer function (matrix) matches that of the plant on a set of frequencies chosen by the designer. The algorithm utilizes the Moving Discrete Fourier Transform (MDFT) to continuously monitor the frequency-domain profile of the system input and output signals. The MDFT is an efficient method of monitoring discrete points in the frequency domain of an evolving function of time. The model parameters are estimated from MDFT data using standard recursive parameter estimation techniques. The algorithm has been shown in simulations to be quite robust to additive noise in the inputs and outputs. A significant advantage of the method is that it enables a type of on-line model validation. This is accomplished by simultaneously identifying a number of models and comparing each with the plant in the frequency domain. Simulations of the method applied to an 8th-order SISO plant and a 10-state 2-input 2-output plant are presented. An example of on-line model validation applied to the SISO plant is also presented.
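The remapping idea in the HEVC record above is essentially a stretch of the occupied code-value interval onto the full legal range before encoding, with the inverse applied after decoding; the 10-bit range, the linear form of the mapping, and the example limits below are illustrative assumptions rather than the exact mapping of the paper:

```python
# Hedged sketch: stretch restricted-range 10-bit code values to the full range
# before encoding, and invert the mapping after decoding.
import numpy as np

FULL_MIN, FULL_MAX = 0, 1023          # full 10-bit code value range

def remap(code, used_min, used_max):
    """Map code values occupying [used_min, used_max] onto the full range."""
    scale = (FULL_MAX - FULL_MIN) / (used_max - used_min)
    return np.clip(np.round((code - used_min) * scale + FULL_MIN), FULL_MIN, FULL_MAX)

def inverse_remap(code, used_min, used_max):
    scale = (used_max - used_min) / (FULL_MAX - FULL_MIN)
    return np.round((code - FULL_MIN) * scale + used_min)

# Example: content that only occupies code values 64..600 (made-up limits).
codes = np.array([64, 100, 300, 600])
stretched = remap(codes, 64, 600)
restored = inverse_remap(stretched, 64, 600)
print(stretched, restored)   # restored values match the originals up to rounding
```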