Sample records for signal processing techniques

  1. Contemporary ultrasonic signal processing approaches for nondestructive evaluation of multilayered structures

    NASA Astrophysics Data System (ADS)

    Zhang, Guang-Ming; Harvey, David M.

    2012-03-01

    Various signal processing techniques have been used for the enhancement of defect detection and defect characterisation. Cross-correlation, filtering, autoregressive analysis, deconvolution, neural network, wavelet transform and sparse signal representations have all been applied in attempts to analyse ultrasonic signals. In ultrasonic nondestructive evaluation (NDE) applications, a large number of materials have multilayered structures. NDE of multilayered structures leads to some specific problems, such as penetration, echo overlap, high attenuation and low signal-to-noise ratio. The signals recorded from a multilayered structure are a class of very special signals comprised of limited echoes. Such signals can be assumed to have a sparse representation in a proper signal dictionary. Recently, a number of digital signal processing techniques have been developed by exploiting the sparse constraint. This paper presents a review of research to date, showing the up-to-date developments of signal processing techniques made in ultrasonic NDE. A few typical ultrasonic signal processing techniques used for NDE of multilayered structures are elaborated. The practical applications and limitations of different signal processing methods in ultrasonic NDE of multilayered structures are analysed.
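
    The sparse-representation idea summarized above can be illustrated with a minimal matching-pursuit sketch: a greedy loop that repeatedly picks the dictionary atom best correlated with the residual. The Gabor-atom dictionary, sampling rate, and echo positions below are illustrative assumptions, not values taken from the reviewed papers.

```python
import numpy as np

def gabor_atom(n, center, width, freq, fs):
    """One Gabor atom: a windowed sinusoid, normalized to unit energy."""
    t = (np.arange(n) - center) / fs
    g = np.exp(-(t / width) ** 2) * np.cos(2 * np.pi * freq * t)
    return g / np.linalg.norm(g)

def matching_pursuit(x, dictionary, n_iter=5):
    """Greedy sparse decomposition: pick the best-matching atom,
    subtract its contribution from the residual, repeat."""
    residual = x.astype(float).copy()
    atoms = []
    for _ in range(n_iter):
        corr = dictionary @ residual            # inner products with all atoms
        k = int(np.argmax(np.abs(corr)))        # index of the best atom
        atoms.append((k, corr[k]))
        residual -= corr[k] * dictionary[k]     # peel off that echo
    return atoms, residual

if __name__ == "__main__":
    fs, n = 50e6, 1024                          # 50 MHz sampling, 1024 samples (assumed)
    centers = np.arange(50, n, 25)              # candidate echo positions
    D = np.array([gabor_atom(n, c, 0.4e-6, 5e6, fs) for c in centers])
    # Synthetic two-echo A-scan from a layered structure, plus noise
    x = 1.0 * D[4] + 0.5 * D[20] + 0.05 * np.random.randn(n)
    atoms, res = matching_pursuit(x, D, n_iter=2)
    print("recovered atoms (index, amplitude):", atoms)
```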

  2. Digital Signal Processing Based Biotelemetry Receivers

    NASA Technical Reports Server (NTRS)

    Singh, Avtar; Hines, John; Somps, Chris

    1997-01-01

    This is an attempt to develop a biotelemetry receiver using digital signal processing technology and techniques. The receiver developed in this work is based on recovering signals that have been encoded using either Pulse Position Modulation (PPM) or Pulse Code Modulation (PCM) technique. A prototype has been developed using state-of-the-art digital signal processing technology. A Printed Circuit Board (PCB) is being developed based on the technique and technology described here. This board is intended to be used in the UCSF Fetal Monitoring system developed at NASA. The board is capable of handling a variety of PPM and PCM signals encoding signals such as ECG, temperature, and pressure. A signal processing program has also been developed to analyze the received ECG signal to determine heart rate. This system provides a base for using digital signal processing in biotelemetry receivers and other similar applications.
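
    The heart-rate computation mentioned at the end of the abstract typically reduces to band-limiting the ECG and detecting R peaks. The sketch below is a hedged, generic version of that step, not the authors' code; the filter band and the synthetic beat train are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def heart_rate_bpm(ecg, fs):
    """Estimate mean heart rate from an ECG trace by detecting R peaks."""
    # Band-pass around the QRS energy (roughly 5-15 Hz) to suppress
    # baseline wander and high-frequency noise.
    b, a = butter(2, [5 / (fs / 2), 15 / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, ecg)
    # R peaks: prominent maxima at least 0.3 s apart (max ~200 bpm).
    peaks, _ = find_peaks(filtered, distance=int(0.3 * fs),
                          prominence=np.std(filtered))
    rr = np.diff(peaks) / fs                  # R-R intervals in seconds
    return 60.0 / np.mean(rr) if rr.size else float("nan")

# Example with a synthetic 60 bpm pulse train plus noise
fs = 250.0
t = np.arange(0, 10, 1 / fs)
ecg = np.maximum(0.0, np.sin(2 * np.pi * 1.0 * t)) ** 63
ecg += 0.05 * np.random.randn(t.size)
print(f"estimated heart rate: {heart_rate_bpm(ecg, fs):.1f} bpm")
```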

  3. A Survey on Optimal Signal Processing Techniques Applied to Improve the Performance of Mechanical Sensors in Automotive Applications

    PubMed Central

    Hernandez, Wilmar

    2007-01-01

    In this paper a survey of recent applications of optimal signal processing techniques to improve the performance of mechanical sensors is presented. A comparison between classical filters and optimal filters for automotive sensors is made, and the current state of the art in applying robust and optimal control and signal processing techniques to the design of the intelligent (or smart) sensors that today's cars need is presented through several experimental results, which show that the fusion of intelligent sensors and optimal signal processing techniques is the clear way to go. However, the switch between the traditional methods of designing automotive sensors and the new ones cannot be made overnight, because there are open research issues that have to be solved. This paper draws attention to one of these open research issues and tries to arouse researchers' interest in the fusion of intelligent sensors and optimal signal processing techniques.

  4. Integrated Data and Control Level Fault Tolerance Techniques for Signal Processing Computer Design

    DTIC Science & Technology

    1990-09-01

    TOLERANCE TECHNIQUES FOR SIGNAL PROCESSING COMPUTER DESIGN, G. Robert Redinbo. I. INTRODUCTION: High-speed signal processing is an important application of ... techniques and mathematical approaches will be expanded later to the situation where hardware errors and roundoff and quantization noise affect all ... detect errors equal in number to the degree of g(X), the maximum permitted by the Singleton bound [13]. Real cyclic codes, primarily applicable to

  5. Jitter model and signal processing techniques for pulse width modulation optical recording

    NASA Technical Reports Server (NTRS)

    Liu, Max M.-K.

    1991-01-01

    A jitter model and signal processing techniques are discussed for data recovery in Pulse Width Modulation (PWM) optical recording. In PWM, information is stored through modulating sizes of sequential marks alternating in magnetic polarization or in material structure. Jitter, defined as the deviation from the original mark size in the time domain, will result in error detection if it is excessively large. A new approach is taken in data recovery by first using a high speed counter clock to convert time marks to amplitude marks, and signal processing techniques are used to minimize jitter according to the jitter model. The signal processing techniques include motor speed and intersymbol interference equalization, differential and additive detection, and differential and additive modulation.

  6. Electrocardiogram signal denoising based on empirical mode decomposition technique: an overview

    NASA Astrophysics Data System (ADS)

    Han, G.; Lin, B.; Xu, Z.

    2017-03-01

    The electrocardiogram (ECG) signal is a nonlinear, non-stationary, weak signal which reflects whether the heart is functioning normally or abnormally. The ECG signal is susceptible to various kinds of noise such as high/low frequency noise, powerline interference and baseline wander. Hence, the removal of noise from the ECG signal becomes a vital link in ECG signal processing and plays a significant role in the detection and diagnosis of heart diseases. The review describes recent developments in ECG signal denoising based on the Empirical Mode Decomposition (EMD) technique, including high frequency noise removal, powerline interference separation, baseline wander correction, the combination of EMD with other methods, and the EEMD technique. The EMD technique is a promising but not yet perfect method for processing nonlinear and non-stationary signals such as the ECG. EMD combined with other algorithms is a good solution for improving the performance of noise cancellation. The pros and cons of the EMD technique in ECG signal denoising are discussed in detail. Finally, future work and challenges in ECG signal denoising based on the EMD technique are clarified.
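
    A minimal sketch of the partial-reconstruction idea, dropping the first, noise-dominated IMFs and summing the rest, is shown below. It assumes the third-party PyEMD package and its emd() call; the reviewed work does not prescribe any particular implementation.

```python
import numpy as np
from PyEMD import EMD   # third-party package (pip install EMD-signal); API assumed

def emd_denoise(x, drop_first=2):
    """Decompose into intrinsic mode functions (IMFs) and rebuild the signal
    without the first few IMFs, which usually carry the high-frequency noise."""
    imfs = EMD().emd(x)   # rows are IMFs; with PyEMD the last row typically holds the residue/trend
    return np.sum(imfs[drop_first:], axis=0)

# Example: noisy low-frequency waveform standing in for an ECG record
fs = 360.0
t = np.arange(0, 5, 1 / fs)
clean = np.sin(2 * np.pi * 1.2 * t)
noisy = clean + 0.2 * np.random.randn(t.size)
denoised = emd_denoise(noisy, drop_first=1)
print("noise power before/after:",
      np.mean((noisy - clean) ** 2), np.mean((denoised - clean) ** 2))
```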

  7. Fast and economic signal processing technique of laser diode self-mixing interferometry for nanoparticle size measurement

    NASA Astrophysics Data System (ADS)

    Wang, Huarui; Shen, Jianqi

    2014-05-01

    The size of nanoparticles is measured by laser diode self-mixing interferometry, which employs a sensitive, compact, and simple optical setup. However, the signal processing of the interferometry is slow or expensive. In this article, a fast and economic signal processing technique is introduced, in which the self-mixing AC signal is transformed into DC signals with an analog circuit consisting of 16 channels. These DC signals are obtained as a spectrum from which the size of nanoparticles can be retrieved. The technique is examined by measuring the standard nanoparticles. Further experiments are performed to compare the skimmed milk and whole milk, and also the fresh skimmed milk and rotten skimmed milk.

  8. The physics of bat echolocation: Signal processing techniques

    NASA Astrophysics Data System (ADS)

    Denny, Mark

    2004-12-01

    The physical principles and signal processing techniques underlying bat echolocation are investigated. It is shown, by calculation and simulation, how the measured echolocation performance of bats can be achieved.

  9. A survey of signal processing algorithms in brain-computer interfaces based on electrical brain signals.

    PubMed

    Bashashati, Ali; Fatourechi, Mehrdad; Ward, Rabab K; Birch, Gary E

    2007-06-01

    Brain-computer interfaces (BCIs) aim at providing a non-muscular channel for sending commands to the external world using the electroencephalographic activity or other electrophysiological measures of the brain function. An essential factor in the successful operation of BCI systems is the methods used to process the brain signals. In the BCI literature, however, there is no comprehensive review of the signal processing techniques used. This work presents the first such comprehensive survey of all BCI designs using electrical signal recordings published prior to January 2006. Detailed results from this survey are presented and discussed. The following key research questions are addressed: (1) what are the key signal processing components of a BCI, (2) what signal processing algorithms have been used in BCIs and (3) which signal processing techniques have received more attention?

  10. TOPICAL REVIEW: A survey of signal processing algorithms in brain computer interfaces based on electrical brain signals

    NASA Astrophysics Data System (ADS)

    Bashashati, Ali; Fatourechi, Mehrdad; Ward, Rabab K.; Birch, Gary E.

    2007-06-01

    Brain computer interfaces (BCIs) aim at providing a non-muscular channel for sending commands to the external world using the electroencephalographic activity or other electrophysiological measures of the brain function. An essential factor in the successful operation of BCI systems is the methods used to process the brain signals. In the BCI literature, however, there is no comprehensive review of the signal processing techniques used. This work presents the first such comprehensive survey of all BCI designs using electrical signal recordings published prior to January 2006. Detailed results from this survey are presented and discussed. The following key research questions are addressed: (1) what are the key signal processing components of a BCI, (2) what signal processing algorithms have been used in BCIs and (3) which signal processing techniques have received more attention?

  11. An overview of data acquisition, signal coding and data analysis techniques for MST radars

    NASA Technical Reports Server (NTRS)

    Rastogi, P. K.

    1986-01-01

    An overview is given of the data acquisition, signal processing, and data analysis techniques that are currently in use with high power MST/ST (mesosphere stratosphere troposphere/stratosphere troposphere) radars. This review supplements the works of Rastogi (1983) and Farley (1984) presented at previous MAP workshops. A general description is given of data acquisition and signal processing operations and they are characterized on the basis of their disparate time scales. Then signal coding, a brief description of frequently used codes, and their limitations are discussed, and finally, several aspects of statistical data processing such as signal statistics, power spectrum and autocovariance analysis, outlier removal techniques are discussed.
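
    The statistical data-processing step mentioned last (power-spectrum analysis) usually reduces to estimating the low-order spectral moments of each Doppler spectrum: echo power, mean radial velocity, and spectral width. The numpy sketch below shows that standard computation; it is illustrative and not code from the cited works.

```python
import numpy as np

def spectral_moments(spectrum, velocities):
    """Zeroth/first/second moments of a Doppler spectrum after removing an
    estimated noise floor: echo power, mean radial velocity, spectral width."""
    noise = np.median(spectrum)               # crude noise-floor estimate
    s = np.clip(spectrum - noise, 0.0, None)  # keep the signal part only
    p0 = np.sum(s)                            # total power (zeroth moment)
    v_mean = np.sum(s * velocities) / p0      # mean Doppler velocity
    width = np.sqrt(np.sum(s * (velocities - v_mean) ** 2) / p0)
    return p0, v_mean, width

# Example: Gaussian echo centred at +3 m/s on a noisy spectrum
v = np.linspace(-20, 20, 256)
spec = 5 * np.exp(-0.5 * ((v - 3) / 1.5) ** 2) + 0.3 + 0.05 * np.random.rand(v.size)
print(spectral_moments(spec, v))
```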

  12. Hybrid Signal Processing Technique to Improve the Defect Estimation in Ultrasonic Non-Destructive Testing of Composite Structures

    PubMed Central

    Raisutis, Renaldas; Samaitis, Vykintas

    2017-01-01

    This work proposes a novel hybrid signal processing technique to extract information on disbond-type defects from a single B-scan in the non-destructive testing (NDT) of glass fiber reinforced plastic (GFRP) material using ultrasonic guided waves (GW). The selected GFRP sample was a segment of a wind turbine blade with an aerodynamic shape. Two disbond-type defects with diameters of 15 mm and 25 mm were artificially created on its trailing edge. The experiment was performed using the low-frequency ultrasonic system developed at the Ultrasound Institute of Kaunas University of Technology, and only one side of the sample was accessed. A special configuration of the transmitting and receiving transducers, fixed on a movable panel with a separation distance of 50 mm, was proposed for recording the ultrasonic guided wave signals at each one-millimeter step along a scanning distance of up to 500 mm. Finally, a hybrid signal processing technique combining the valuable features of the three most promising signal processing techniques (cross-correlation, wavelet transform, and Hilbert–Huang transform) was applied to the received signals for the extraction of defect information from a single B-scan image. The wavelet transform and cross-correlation techniques were combined in order to extract the approximate size and location of the defects and measurements of time delays. Thereafter, the Hilbert–Huang transform was applied to the wavelet-transformed signal to compare the variation of instantaneous frequencies and instantaneous amplitudes of the defect-free and defective signals. PMID:29232845
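
    Two of the building blocks named above, cross-correlation for time-delay estimation and a Hilbert-transform envelope for locating wave packets, can be sketched in a few lines. The sampling rate and the synthetic guided-wave bursts below are assumptions for illustration only.

```python
import numpy as np
from scipy.signal import correlate, hilbert

fs = 1e6                                      # 1 MHz sampling (assumed)
t = np.arange(0, 2e-3, 1 / fs)

def burst(t0, f0=50e3, cycles=5):
    """Windowed tone burst centred at t0, a stand-in for a guided-wave packet."""
    w = np.exp(-((t - t0) * f0 / cycles * 2) ** 2)
    return w * np.sin(2 * np.pi * f0 * (t - t0))

reference = burst(0.2e-3)                               # direct (defect-free) arrival
received = burst(0.2e-3) + 0.4 * burst(0.9e-3)          # plus an echo from a defect

# Time delay of the strongest match via cross-correlation
xc = correlate(received, reference, mode="full")
lags = np.arange(-len(t) + 1, len(t)) / fs
print("delay of best match: %.1f us" % (lags[np.argmax(xc)] * 1e6))

# Envelope via the analytic signal (Hilbert transform); local maxima of the
# envelope give the arrival times of the individual wave packets.
envelope = np.abs(hilbert(received))
print("envelope peak at %.1f us" % (t[np.argmax(envelope)] * 1e6))
```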

  13. Applying traditional signal processing techniques to social media exploitation for situational understanding

    NASA Astrophysics Data System (ADS)

    Abdelzaher, Tarek; Roy, Heather; Wang, Shiguang; Giridhar, Prasanna; Al Amin, Md. Tanvir; Bowman, Elizabeth K.; Kolodny, Michael A.

    2016-05-01

    Signal processing techniques such as filtering, detection, estimation and frequency domain analysis have long been applied to extract information from noisy sensor data. This paper describes the exploitation of these signal processing techniques to extract information from social networks, such as Twitter and Instagram. Specifically, we view social networks as noisy sensors that report events in the physical world. We then present a data processing stack for detection, localization, tracking, and veracity analysis of reported events using social network data. We show using a controlled experiment that the behavior of social sources as information relays varies dramatically depending on context. In benign contexts, there is general agreement on events, whereas in conflict scenarios, a significant amount of collective filtering is introduced by conflicted groups, creating a large data distortion. We describe signal processing techniques that mitigate such distortion, resulting in meaningful approximations of actual ground truth, given noisy reported observations. Finally, we briefly present an implementation of the aforementioned social network data processing stack in a sensor network analysis toolkit, called Apollo. Experiences with Apollo show that our techniques are successful at identifying and tracking credible events in the physical world.

  14. Correlation processing for correction of phase distortions in subaperture imaging.

    PubMed

    Tavh, B; Karaman, M

    1999-01-01

    Ultrasonic subaperture imaging combines synthetic aperture and phased array approaches and permits low-cost systems with improved image quality. In subaperture processing, a large array is synthesized using echo signals collected from a number of receive subapertures by multiple firings of a phased transmit subaperture. Tissue inhomogeneities and displacements in subaperture imaging may cause significant phase distortions on received echo signals. Correlation processing on reference echo signals can be used for correction of the phase distortions, for which the accuracy and robustness are critically limited by the signal correlation. In this study, we explore correlation processing techniques for adaptive subaperture imaging with phase correction for motion and tissue inhomogeneities. The proposed techniques use new subaperture data acquisition schemes to produce reference signal sets with improved signal correlation. The experimental test results were obtained using raw radio frequency (RF) data acquired from two different phantoms with 3.5 MHz, 128-element transducer array. The results show that phase distortions can effectively be compensated by the proposed techniques in real-time adaptive subaperture imaging.

  15. Real-time Nyquist signaling with dynamic precision and flexible non-integer oversampling.

    PubMed

    Schmogrow, R; Meyer, M; Schindler, P C; Nebendahl, B; Dreschmann, M; Meyer, J; Josten, A; Hillerkuss, D; Ben-Ezra, S; Becker, J; Koos, C; Freude, W; Leuthold, J

    2014-01-13

    We demonstrate two efficient processing techniques for Nyquist signals, namely computation of signals using dynamic precision as well as arbitrary rational oversampling factors. With these techniques along with massively parallel processing it becomes possible to generate and receive high data rate Nyquist signals with flexible symbol rates and bandwidths, a feature which is highly desirable for novel flexgrid networks. We achieved maximum bit rates of 252 Gbit/s in real-time.

  16. Frequency-wavenumber processing for infrasound distributed arrays.

    PubMed

    Costley, R Daniel; Frazier, W Garth; Dillion, Kevin; Picucci, Jennifer R; Williams, Jay E; McKenna, Mihan H

    2013-10-01

    The work described herein discusses the application of a frequency-wavenumber signal processing technique to signals from rectangular infrasound arrays for detection and estimation of the direction of travel of infrasound. Arrays of 100 sensors were arranged in square configurations with sensor spacing of 2 m. Wind noise data were collected at one site. Synthetic infrasound signals were superposed on top of the wind noise to determine the accuracy and sensitivity of the technique with respect to signal-to-noise ratio. The technique was then applied to an impulsive event recorded at a different site. Preliminary results demonstrated the feasibility of this approach.
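
    A bare-bones frequency-wavenumber (f-k) sketch for a square sensor grid is given below: FFT over time, select the frequency bin of interest, take a zero-padded 2-D spatial FFT, and read the direction of travel off the wavenumber-domain peak. The grid size, sampling rate, and synthetic plane wave are assumptions, not the experiment's parameters.

```python
import numpy as np

# Assumed geometry: a 10 x 10 grid with 2 m spacing (the paper used 100 sensors)
nx = ny = 10
dx = 2.0                        # sensor spacing, m
fs, nt = 100.0, 512             # sampling rate (Hz) and record length (assumed)
c, f0 = 340.0, 5.0              # sound speed (m/s) and infrasound tone (Hz)
az_true = np.deg2rad(60.0)      # true propagation azimuth of the synthetic wave

x = np.arange(nx) * dx
y = np.arange(ny) * dx
t = np.arange(nt) / fs
kx_true = 2 * np.pi * f0 / c * np.cos(az_true)
ky_true = 2 * np.pi * f0 / c * np.sin(az_true)

# Synthetic plane wave plus wind-like noise on every sensor
X, Y, T = np.meshgrid(x, y, t, indexing="ij")
p = np.cos(2 * np.pi * f0 * T - kx_true * X - ky_true * Y)
p += 0.5 * np.random.randn(*p.shape)

# f-k processing: FFT over time, pick the bin nearest f0, then a zero-padded
# 2-D spatial FFT; the wavenumber-domain peak gives the direction of travel.
P = np.fft.rfft(p, axis=2)
freqs = np.fft.rfftfreq(nt, 1 / fs)
slab = np.conj(P[:, :, np.argmin(np.abs(freqs - f0))])
K = np.fft.fftshift(np.fft.fft2(slab, s=(256, 256)))
k_axis = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(256, dx))
ix, iy = np.unravel_index(np.argmax(np.abs(K)), K.shape)
print("estimated azimuth: %.1f deg" % np.degrees(np.arctan2(k_axis[iy], k_axis[ix])))
```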

  17. Application of Mathematical Signal Processing Techniques to Mission Systems. (l’Application des techniques mathematiques du traitement du signal aux systemes de conduite des missions)

    DTIC Science & Technology

    1999-11-01

    represents the linear time invariant (LTI) response of the combined analysis/synthesis system while the second represents the aliasing introduced into ... effectively to implement voice scrambling systems based on time-frequency permutation. The most general form of such a system is shown in Fig. 22 where ... 92201 Neuilly-sur-Seine Cedex, France. RTO Lecture Series 216, Application of Mathematical Signal Processing Techniques to Mission Systems

  18. Punch stretching process monitoring using acoustic emission signal analysis. II - Application of frequency domain deconvolution

    NASA Technical Reports Server (NTRS)

    Liang, Steven Y.; Dornfeld, David A.; Nickerson, Jackson A.

    1987-01-01

    The coloring effect on the acoustic emission signal due to the frequency response of the data acquisition/processing instrumentation may bias the interpretation of AE signal characteristics. In this paper, a frequency domain deconvolution technique, which involves the identification of the instrumentation transfer functions and multiplication of the AE signal spectrum by the inverse of these system functions, has been carried out. In this way, the change in AE signal characteristics can be better interpreted as the result of the change in only the states of the process. Punch stretching process was used as an example to demonstrate the application of the technique. Results showed that, through the deconvolution, the frequency characteristics of AE signals generated during the stretching became more distinctive and can be more effectively used as tools for process monitoring.
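
    The deconvolution step described above, dividing the AE spectrum by the instrumentation transfer function, is usually stabilized with a small water-level (Wiener-style) term so the division does not blow up where the transfer function is weak. A hedged numpy sketch with an assumed synthetic instrumentation response follows.

```python
import numpy as np

def deconvolve(measured, impulse_response, eps=1e-2):
    """Frequency-domain deconvolution: divide the measured spectrum by the
    instrumentation transfer function, with a water-level term to keep the
    division stable where the transfer function is small."""
    n = len(measured)
    H = np.fft.rfft(impulse_response, n)
    Y = np.fft.rfft(measured, n)
    inv = np.conj(H) / (np.abs(H) ** 2 + eps * np.max(np.abs(H) ** 2))
    return np.fft.irfft(Y * inv, n)

# Example: a sharp AE-like source colored by a decaying-oscillator response
fs = 1e6
t = np.arange(0, 2e-3, 1 / fs)
h = np.exp(-t * 2e4) * np.sin(2 * np.pi * 1.5e5 * t)   # instrumentation response (assumed)
source = np.zeros_like(t)
source[100] = 1.0                                       # impulsive AE event
measured = np.convolve(source, h)[:t.size] + 1e-3 * np.random.randn(t.size)
recovered = deconvolve(measured, h)
print("recovered impulse index:", np.argmax(np.abs(recovered)))  # ~100
```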

  19. All-optical signal processing using dynamic Brillouin gratings

    PubMed Central

    Santagiustina, Marco; Chin, Sanghoon; Primerov, Nicolay; Ursini, Leonora; Thévenaz, Luc

    2013-01-01

    The manipulation of dynamic Brillouin gratings in optical fibers is demonstrated to be an extremely flexible technique to achieve, with a single experimental setup, several all-optical signal processing functions. In particular, all-optical time differentiation, time integration and true time reversal are theoretically predicted, and then numerically and experimentally demonstrated. The technique can be exploited to process both photonic and ultra-wide band microwave signals, so enabling many applications in photonics and in radio science. PMID:23549159

  20. Acoustic emission signal processing technique to characterize reactor in-pile phenomena

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agarwal, Vivek, E-mail: vivek.agarwal@inl.gov; Tawfik, Magdy S., E-mail: magdy.tawfik@inl.gov; Smith, James A., E-mail: james.smith@inl.gov

    2015-03-31

    Existing and developing advanced sensor technologies and instrumentation will allow non-intrusive in-pile measurement of temperature, extension, and fission gases when coupled with advanced signal processing algorithms. The measured sensor signals transmitted from inside to outside the containment structure are corrupted by noise and are attenuated, thereby reducing the signal strength and the signal-to-noise ratio. Identification and extraction of the actual signal (representative of an in-pile phenomenon) is a challenging and complicated process. In the paper, the empirical mode decomposition technique is utilized to reconstruct the actual sensor signal by partially combining intrinsic mode functions. The reconstructed signal will correspond to phenomena and/or failure modes occurring inside the reactor. In addition, it allows accurate non-intrusive monitoring and trending of in-pile phenomena.

  1. Asymmetric Dual-Band Tracking Technique for Optimal Joint Processing of BDS B1I and B1C Signals

    PubMed Central

    Wang, Chuhan; Cui, Xiaowei; Ma, Tianyi; Lu, Mingquan

    2017-01-01

    Along with the rapid development of the Global Navigation Satellite System (GNSS), satellite navigation signals have become more diversified, complex, and agile in adapting to increasing market demands. Various techniques have been developed for processing multiple navigation signals to achieve better performance in terms of accuracy, sensitivity, and robustness. This paper focuses on a technique for processing two signals with separate but adjacent center frequencies, such as B1I and B1C signals in the BeiDou global system. The two signals may differ in modulation scheme, power, and initial phase relation and can be processed independently by user receivers; however, the propagation delays of the two signals from a satellite are nearly identical as they are modulated on adjacent frequencies, share the same reference clock, and undergo nearly identical propagation paths to the receiver, resulting in strong coherence between the two signals. Joint processing of these signals can achieve optimal measurement performance due to the increased Gabor bandwidth and power. In this paper, we propose a universal scheme of asymmetric dual-band tracking (ASYM-DBT) to take advantage of the strong coherence, the increased Gabor bandwidth, and power of the two signals in achieving much-reduced thermal noise and more accurate ranging results when compared with the traditional single-band algorithm. PMID:29035350

  2. Asymmetric Dual-Band Tracking Technique for Optimal Joint Processing of BDS B1I and B1C Signals.

    PubMed

    Wang, Chuhan; Cui, Xiaowei; Ma, Tianyi; Zhao, Sihao; Lu, Mingquan

    2017-10-16

    Along with the rapid development of the Global Navigation Satellite System (GNSS), satellite navigation signals have become more diversified, complex, and agile in adapting to increasing market demands. Various techniques have been developed for processing multiple navigation signals to achieve better performance in terms of accuracy, sensitivity, and robustness. This paper focuses on a technique for processing two signals with separate but adjacent center frequencies, such as B1I and B1C signals in the BeiDou global system. The two signals may differ in modulation scheme, power, and initial phase relation and can be processed independently by user receivers; however, the propagation delays of the two signals from a satellite are nearly identical as they are modulated on adjacent frequencies, share the same reference clock, and undergo nearly identical propagation paths to the receiver, resulting in strong coherence between the two signals. Joint processing of these signals can achieve optimal measurement performance due to the increased Gabor bandwidth and power. In this paper, we propose a universal scheme of asymmetric dual-band tracking (ASYM-DBT) to take advantage of the strong coherence, the increased Gabor bandwidth, and power of the two signals in achieving much-reduced thermal noise and more accurate ranging results when compared with the traditional single-band algorithm.

  3. An intelligent signal processing and pattern recognition technique for defect identification using an active sensor network

    NASA Astrophysics Data System (ADS)

    Su, Zhongqing; Ye, Lin

    2004-08-01

    The practical utilization of elastic waves, e.g. Rayleigh-Lamb waves, in high-performance structural health monitoring techniques is somewhat impeded due to the complicated wave dispersion phenomena, the existence of multiple wave modes, the high susceptibility to diverse interferences, the bulky sampled data and the difficulty in signal interpretation. An intelligent signal processing and pattern recognition (ISPPR) approach using the wavelet transform and artificial neural network algorithms was developed; this was actualized in a signal processing package (SPP). The ISPPR technique comprehensively functions as signal filtration, data compression, characteristic extraction, information mapping and pattern recognition, capable of extracting essential yet concise features from acquired raw wave signals and further assisting in structural health evaluation. For validation, the SPP was applied to the prediction of crack growth in an alloy structural beam and construction of a damage parameter database for defect identification in CF/EP composite structures. It was clearly apparent that the elastic wave propagation-based damage assessment could be dramatically streamlined by introduction of the ISPPR technique.

  4. Real time automatic detection of bearing fault in induction machine using kurtogram analysis.

    PubMed

    Tafinine, Farid; Mokrani, Karim

    2012-11-01

    A proposed signal processing technique for incipient real time bearing fault detection based on kurtogram analysis is presented in this paper. The kurtogram is a fourth-order spectral analysis tool introduced for detecting and characterizing non-stationarities in a signal. This technique starts from investigating the resonance signatures over selected frequency bands to extract the representative features. The traditional spectral analysis is not appropriate for non-stationary vibration signal and for real time diagnosis. The performance of the proposed technique is examined by a series of experimental tests corresponding to different bearing conditions. Test results show that this signal processing technique is an effective bearing fault automatic detection method and gives a good basis for an integrated induction machine condition monitor.
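
    A heavily simplified stand-in for the kurtogram idea, computing the kurtosis of the band-filtered envelope over a small filter bank and picking the most impulsive band, can be sketched as follows; the filter-bank edges and synthetic impact signal are assumptions, not the paper's parameters.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert
from scipy.stats import kurtosis

def best_band_by_kurtosis(x, fs, bands):
    """Return the (low, high) band whose filtered envelope has the highest
    kurtosis; impulsive bearing faults concentrate energy in such a band."""
    scores = []
    for lo, hi in bands:
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        y = filtfilt(b, a, x)
        scores.append(kurtosis(np.abs(hilbert(y))))
    return bands[int(np.argmax(scores))], scores

# Synthetic vibration: periodic impacts ringing at 3 kHz, buried in noise
fs = 20_000
t = np.arange(0, 1.0, 1 / fs)
x = 0.5 * np.random.randn(t.size)
for t0 in np.arange(0.05, 1.0, 0.1):                  # one impact every 0.1 s
    idx = t >= t0
    x[idx] += np.exp(-(t[idx] - t0) * 400) * np.sin(2 * np.pi * 3000 * (t[idx] - t0))

bands = [(500, 1500), (1500, 2500), (2500, 3500), (3500, 4500)]
best, scores = best_band_by_kurtosis(x, fs, bands)
print("most impulsive band:", best)                    # expect (2500, 3500)
```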

  5. Application of Advanced Signal Processing Techniques to Angle of Arrival Estimation in ATC Navigation and Surveillance Systems

    DTIC Science & Technology

    1982-06-23

    Administration, Systems Research and Development Service, Washington, D.C. 20591. The work reported in this document was ... consider sophisticated signal processing techniques as an alternative method of improving system performance. Some work in this area has already taken place ... demands on the frequency spectrum. As noted in Table 1-1, there has been considerable work on advanced signal processing in the MLS context

  6. New signal processing technique for density profile reconstruction using reflectometry.

    PubMed

    Clairet, F; Ricaud, B; Briolle, F; Heuraux, S; Bottereau, C

    2011-08-01

    Reflectometry profile measurement requires an accurate determination of the plasma reflected signal. Along with a good resolution and a high signal to noise ratio of the phase measurement, adequate data analysis is required. A new data processing based on time-frequency tomographic representation is used. It provides a clearer separation between multiple components and improves isolation of the relevant signals. In this paper, this data processing technique is applied to two sets of signals coming from two different reflectometer devices used on the Tore Supra tokamak. For the standard density profile reflectometry, it improves the initialization process and its reliability, providing a more accurate profile determination in the far scrape-off layer with density measurements as low as 10^16 m^-3. For a second reflectometer, which provides measurements in front of a lower hybrid launcher, this method improves the separation of the relevant plasma signal from multi-reflection processes due to the proximity of the plasma.

  7. Non-invasive assessment of skeletal muscle activity

    NASA Astrophysics Data System (ADS)

    Merletti, Roberto; Orizio, Claudio; di Prampero, Pietro E.; Tesch, Per

    2005-10-01

    After the first 3 years (2002-2005), the MAP project has made available:
    - systems for electrodes, signal conditioning and digital processing for multichannel, simultaneously detected EMG and MMG, as well as for simultaneous electrical stimulation and EMG detection with artifact cancellation;
    - innovative non-invasive techniques for the extraction of individual motor unit action potentials (MUAPs) and of individual motor and MMG contributions from the surface EMG interference signal and the MMG signal;
    - processing techniques for the extraction of indicators of progressive fatigue from the electrically elicited (M-wave) EMG signal;
    - techniques for the analysis of dynamic multichannel EMG during cyclic or explosive exercise (in collaboration with project EXER/MAP-MED-027).

  8. Device design and signal processing for multiple-input multiple-output multimode fiber links

    NASA Astrophysics Data System (ADS)

    Appaiah, Kumar; Vishwanath, Sriram; Bank, Seth R.

    2012-01-01

    Multimode fibers (MMFs) are limited in data rate capabilities owing to modal dispersion. However, their large core diameter simplifies alignment and packaging, and makes them attractive for short and medium length links. Recent research has shown that the use of signal processing and techniques such as multiple-input multiple-output (MIMO) can greatly improve the data rate capabilities of multimode fibers. In this paper, we review recent experimental work using MIMO and signal processing for multimode fibers, and the improvements in data rates achievable with these techniques. We then present models to design as well as simulate the performance benefits obtainable with arrays of lasers and detectors in conjunction with MIMO, using channel capacity as the metric to optimize. We also discuss some aspects related to complexity of the algorithms needed for signal processing and discuss techniques for low complexity implementation.

  9. Unfolding and unfoldability of digital pulses in the z-domain

    NASA Astrophysics Data System (ADS)

    Regadío, Alberto; Sánchez-Prieto, Sebastián

    2018-04-01

    The unfolding (or deconvolution) technique is used in the development of digital pulse processing systems applied to particle detection. This technique is applied to digital signals obtained by digitization of analog signals that represent the combined response of the particle detectors and the associated signal conditioning electronics. This work describes a technique to determine if the signal is unfoldable. For unfoldable signals the characteristics of the unfolding system (unfolder) are presented. Finally, examples of the method applied to real experimental setup are discussed.
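
    One way to read "unfoldability" is as a question about where the zeros of the pulse-shaping transfer function lie in the z-plane: the unfolding filter swaps numerator and denominator, and it is stable only if those zeros sit inside the unit circle. The sketch below illustrates that check and the unfolding step for an assumed single-pole exponential shaper, which is not the authors' detector model.

```python
import numpy as np
from scipy.signal import lfilter

# Assumed pulse shaper: a simple exponential decay, H(z) = 1 / (1 - p z^-1)
p = 0.95
b_shaper, a_shaper = [1.0], [1.0, -p]

def is_unfoldable(b):
    """The unfolder 1/H(z) swaps b and a; it is stable only if the zeros
    of H(z) (the roots of b) lie strictly inside the unit circle."""
    return bool(np.all(np.abs(np.roots(b)) < 1.0)) if len(b) > 1 else True

def unfold(y, b, a):
    """Apply the inverse filter A(z)/B(z) to recover the detector signal."""
    return lfilter(a, b, y)

# Example: an impulsive detector signal shaped by H(z), then unfolded back
x = np.zeros(50)
x[10] = 1.0                                    # original impulse from the detector
shaped = lfilter(b_shaper, a_shaper, x)        # long exponential tail
print("unfoldable:", is_unfoldable(b_shaper))
recovered = unfold(shaped, b_shaper, a_shaper)
print("max reconstruction error:", np.max(np.abs(recovered - x)))
```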

  10. Employment of adaptive learning techniques for the discrimination of acoustic emissions

    NASA Astrophysics Data System (ADS)

    Erkes, J. W.; McDonald, J. F.; Scarton, H. A.; Tam, K. C.; Kraft, R. P.

    1983-11-01

    The following aspects of this study on the discrimination of acoustic emissions (AE) were examined: (1) The analytical development and assessment of digital signal processing techniques for AE signal dereverberation, noise reduction, and source characterization; (2) The modeling and verification of some aspects of key selected techniques through a computer-based simulation; and (3) The study of signal propagation physics and their effect on received signal characteristics for relevant physical situations.

  11. UCMS - A new signal parameter measurement system using digital signal processing techniques. [User Constraint Measurement System

    NASA Technical Reports Server (NTRS)

    Choi, H. J.; Su, Y. T.

    1986-01-01

    The User Constraint Measurement System (UCMS) is a hardware/software package developed by NASA Goddard to measure the signal parameter constraints of the user transponder in the TDRSS environment by means of an all-digital signal sampling technique. An account is presently given of the features of UCMS design and of its performance capabilities and applications; attention is given to such important aspects of the system as RF interface parameter definitions, hardware minimization, the emphasis on offline software signal processing, and end-to-end link performance. Applications to the measurement of other signal parameters are also discussed.

  12. Optical rangefinding applications using communications modulation technique

    NASA Astrophysics Data System (ADS)

    Caplan, William D.; Morcom, Christopher John

    2010-10-01

    A novel range detection technique combines optical pulse modulation patterns with signal cross-correlation to produce an accurate range estimate from low power signals. The cross-correlation peak is analyzed by a post-processing algorithm such that the phase delay is proportional to the range to target. This technique produces a stable range estimate from noisy signals. The advantage is higher accuracy obtained with relatively low optical power transmitted. The technique is useful for low cost, low power and low mass sensors suitable for tactical use. The signal coding technique allows applications including IFF and battlefield identification systems.
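
    A minimal sketch of the correlation-based range estimate described above: correlate the received waveform with the transmitted pseudorandom modulation pattern, locate the correlation peak, and convert the lag to range. The code length, sampling rate, and noise level are assumptions for illustration.

```python
import numpy as np
from scipy.signal import correlate

c = 3e8                                    # speed of light, m/s
fs = 100e6                                 # 100 MHz sampling (assumed)

# Transmitted modulation pattern: a pseudorandom +/-1 chip sequence
rng = np.random.default_rng(0)
code = rng.choice([-1.0, 1.0], size=1024)

# Received signal: attenuated, delayed copy of the code buried in noise
true_delay = 333                           # samples
rx = np.zeros(4096)
rx[true_delay:true_delay + code.size] += 0.05 * code
rx += 0.2 * rng.standard_normal(rx.size)

# Cross-correlate and locate the peak; the lag gives the round-trip delay
xc = correlate(rx, code, mode="valid")
delay = int(np.argmax(xc))                 # in samples
range_m = 0.5 * c * delay / fs             # two-way path, so divide by 2
print(f"estimated delay {delay} samples, range {range_m:.1f} m")
```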

  13. Polarization-insensitive techniques for optical signal processing

    NASA Astrophysics Data System (ADS)

    Salem, Reza

    2006-12-01

    This thesis investigates polarization-insensitive methods for optical signal processing. Two signal processing techniques are studied: clock recovery based on two-photon absorption in silicon and demultiplexing based on cross-phase modulation in highly nonlinear fiber. The clock recovery system is tested at an 80 Gb/s data rate for both back-to-back and transmission experiments. The demultiplexer is tested at a 160 Gb/s data rate in a back-to-back experiment. We experimentally demonstrate methods for eliminating polarization dependence in both systems. Our experimental results are confirmed by theoretical and numerical analysis.

  14. In-line mixing states monitoring of suspensions using ultrasonic reflection technique.

    PubMed

    Zhan, Xiaobin; Yang, Yili; Liang, Jian; Zou, Dajun; Zhang, Jiaqi; Feng, Luyi; Shi, Tielin; Li, Xiwen

    2016-02-01

    Based on the measurement of echo signal changes caused by different concentration distributions in the mixing process, a simple ultrasonic reflection technique is proposed for in-line monitoring of the mixing states of suspensions in an agitated tank in this study. The relation between the echo signals and the concentration of suspensions is studied, and the mixing process of suspensions is tracked by in-line measurement of ultrasonic echo signals using two ultrasonic sensors. Through the analysis of echo signals over time, the mixing states of suspensions are obtained, and the homogeneity of suspensions is quantified. With the proposed technique, the effects of impeller diameter and agitation speed on the mixing process are studied, and the optimal agitation speed and the minimum mixing time to achieve the maximum homogeneity are acquired under different operating conditions and design parameters. The proposed technique is stable and feasible and shows great potential for in-line monitoring of mixing states of suspensions. Copyright © 2015 Elsevier B.V. All rights reserved.

  15. Adaptive frequency-difference matched field processing for high frequency source localization in a noisy shallow ocean.

    PubMed

    Worthmann, Brian M; Song, H C; Dowling, David R

    2017-01-01

    Remote source localization in the shallow ocean at frequencies significantly above 1 kHz is virtually impossible for conventional array signal processing techniques due to environmental mismatch. A recently proposed technique called frequency-difference matched field processing (Δf-MFP) [Worthmann, Song, and Dowling (2015). J. Acoust. Soc. Am. 138(6), 3549-3562] overcomes imperfect environmental knowledge by shifting the signal processing to frequencies below the signal's band through the use of a quadratic product of frequency-domain signal amplitudes called the autoproduct. This paper extends these prior Δf-MFP results to various adaptive MFP processors found in the literature, with particular emphasis on minimum variance distortionless response, multiple constraint method, multiple signal classification, and matched mode processing at signal-to-noise ratios (SNRs) from -20 to +20 dB. Using measurements from the 2011 Kauai Acoustic Communications Multiple University Research Initiative experiment, the localization performance of these techniques is analyzed and compared to Bartlett Δf-MFP. The results show that a source broadcasting a frequency sweep from 11.2 to 26.2 kHz through a 106-m-deep sound channel over a distance of 3 km and recorded on a 16-element sparse vertical array can be localized using Δf-MFP techniques within average range and depth errors of 200 and 10 m, respectively, at SNRs down to 0 dB.
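
    The quadratic autoproduct mentioned above can be illustrated with a short numpy sketch: for each receiver, the complex spectrum at two frequencies separated by Δf is multiplied (one factor conjugated) and averaged over the band, producing a surrogate field sample at the difference frequency that a low-frequency processor can then use. The band edges, array geometry, and averaging below are illustrative assumptions, not the paper's exact recipe.

```python
import numpy as np

def frequency_difference_autoproduct(signals, fs, delta_f, band):
    """Per-channel autoproduct AP = P(f + delta_f) * conj(P(f)), averaged over
    in-band frequencies f. `signals` has shape (n_channels, n_samples)."""
    n = signals.shape[1]
    P = np.fft.rfft(signals, axis=1)
    freqs = np.fft.rfftfreq(n, 1 / fs)
    shift = int(round(delta_f / (freqs[1] - freqs[0])))       # bins per delta_f
    idx = np.where((freqs >= band[0]) & (freqs <= band[1] - delta_f))[0]
    return (P[:, idx + shift] * np.conj(P[:, idx])).mean(axis=1)

# Example: two in-band tones arriving with a 20 us per-element delay across a
# 16-element array (all values assumed for illustration only).
fs, n, nch = 100_000, 4096, 16
t = np.arange(n) / fs
delays = np.arange(nch) * 20e-6
sig = np.array([np.sin(2 * np.pi * 15_000 * (t - d)) +
                np.sin(2 * np.pi * 20_000 * (t - d)) for d in delays])
ap = frequency_difference_autoproduct(sig, fs, delta_f=5_000.0, band=(11_200.0, 26_200.0))
# The autoproduct's phase progression across the array matches that of a genuine
# 5 kHz field: about -2*pi*5000*20e-6 rad (~ -0.63 rad) per element.
print(np.diff(np.unwrap(np.angle(ap))))
```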

  16. Optical signal processing

    NASA Technical Reports Server (NTRS)

    Casasent, D.

    1978-01-01

    The article discusses several optical configurations used for signal processing. Electronic-to-optical transducers are outlined, noting fixed window transducers and moving window acousto-optic transducers. Folded spectrum techniques are considered, with reference to wideband RF signal analysis, fetal electroencephalogram analysis, engine vibration analysis, signal buried in noise, and spatial filtering. Various methods for radar signal processing are described, such as phased-array antennas, the optical processing of phased-array data, pulsed Doppler and FM radar systems, a multichannel one-dimensional optical correlator, correlations with long coded waveforms, and Doppler signal processing. Means for noncoherent optical signal processing are noted, including an optical correlator for speech recognition and a noncoherent optical correlator.

  17. Signal processing techniques for damage detection with piezoelectric wafer active sensors and embedded ultrasonic structural radar

    NASA Astrophysics Data System (ADS)

    Yu, Lingyu; Bao, Jingjing; Giurgiutiu, Victor

    2004-07-01

    Embedded ultrasonic structural radar (EUSR) algorithm is developed for using a piezoelectric wafer active sensor (PWAS) array to detect defects within a large area of a thin-plate specimen. Signal processing techniques are used to extract the time of flight of the wave packets, and thereby to determine the location of the defects with the EUSR algorithm. In our research, the transient tone-burst wave propagation signals are generated and collected by the embedded PWAS. Then, with signal processing, the frequency contents of the signals and the time of flight of individual frequencies are determined. This paper starts with an introduction of the embedded ultrasonic structural radar algorithm. Then we describe the signal processing methods used to extract the time of flight of the wave packets. The signal processing methods used include wavelet denoising, cross-correlation, and the Hilbert transform. Though the hardware can provide an averaging function to eliminate the noise coming from the signal collection process, wavelet denoising is included to ensure better signal quality for applications in real, severe environments. For better recognition of the time of flight, the cross-correlation method is used. The Hilbert transform is applied to the signals after cross-correlation in order to extract the envelope of the signals. Signal processing and EUSR are both implemented by developing a graphical user-friendly interface program in LabView. We conclude with a description of our vision for applying EUSR signal analysis to structural health monitoring and embedded nondestructive evaluation. To this end, we envisage an automatic damage detection application utilizing embedded PWAS, EUSR, and advanced signal processing.

  18. Compressed sensing system considerations for ECG and EMG wireless biosensors.

    PubMed

    Dixon, Anna M R; Allstot, Emily G; Gangopadhyay, Daibashish; Allstot, David J

    2012-04-01

    Compressed sensing (CS) is an emerging signal processing paradigm that enables sub-Nyquist processing of sparse signals such as electrocardiogram (ECG) and electromyogram (EMG) biosignals. Consequently, it can be applied to biosignal acquisition systems to reduce the data rate to realize ultra-low-power performance. CS is compared to conventional and adaptive sampling techniques and several system-level design considerations are presented for CS acquisition systems including sparsity and compression limits, thresholding techniques, encoder bit-precision requirements, and signal recovery algorithms. Simulation studies show that compression factors greater than 16X are achievable for ECG and EMG signals with signal-to-quantization noise ratios greater than 60 dB.
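
    A toy end-to-end compressed-sensing sketch, random sub-Nyquist measurements of a sparse vector followed by iterative soft-thresholding (ISTA) recovery, illustrates the paradigm discussed above. The matrix sizes, sparsity level, and threshold are illustrative assumptions, not the systems evaluated in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 256, 64, 5                       # signal length, measurements, sparsity

# Sparse "biosignal-like" vector: k nonzero samples
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

# Random Bernoulli sensing matrix (simple to realize in hardware)
phi = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)
y = phi @ x                                # compressed measurements (4x fewer)

def ista(y, phi, lam=0.01, n_iter=500):
    """Iterative soft-thresholding for min ||y - phi x||^2 + lam ||x||_1."""
    x_hat = np.zeros(phi.shape[1])
    step = 1.0 / np.linalg.norm(phi, 2) ** 2       # 1 / Lipschitz constant
    for _ in range(n_iter):
        grad = phi.T @ (phi @ x_hat - y)
        z = x_hat - step * grad
        x_hat = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)
    return x_hat

x_hat = ista(y, phi)
print("reconstruction SNR: %.1f dB" %
      (10 * np.log10(np.sum(x ** 2) / np.sum((x - x_hat) ** 2))))
```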

  19. Fetal Electrocardiogram Extraction and Analysis Using Adaptive Noise Cancellation and Wavelet Transformation Techniques.

    PubMed

    Sutha, P; Jayanthi, V E

    2017-12-08

    Birth defect-related demise is mainly due to congenital heart defects. In the earlier stages of pregnancy, fetal problems can be identified by gathering information about the fetus in order to avoid stillbirths. The gold standard used to monitor the health status of the fetus, cardiotocography (CTG), cannot be used for long-duration or continuous monitoring. There is a need for continuous, long-duration monitoring of fetal ECG signals to study the progressive health status of the fetus using portable devices. The non-invasive method of electrocardiogram recording is one of the best methods to diagnose fetal cardiac problems, rather than invasive methods. Monitoring of the fECG requires the development of miniaturized hardware and efficient signal processing algorithms to extract the fECG embedded in the maternal ECG. The paper discusses a prototype hardware developed to monitor and record the raw maternal ECG signal containing the fECG, and a signal processing algorithm to extract the fetal electrocardiogram signal. We have proposed two methods of signal processing: the first is based on the Least Mean Square (LMS) adaptive noise cancellation technique, and the other is based on the wavelet transformation technique. A prototype hardware was designed and developed to acquire the raw ECG signal containing the maternal and fetal ECG, the signal processing techniques were used to eliminate noise and extract the fetal ECG, and the fetal heart rate variability was studied. Both methods were evaluated with signals acquired from a fetal ECG simulator, from the PhysioNet database, and from the subject. Both methods are evaluated by finding the heart rate and its variability, the amplitude spectrum, and the mean value of the extracted fetal ECG. The accuracy, sensitivity, and positive predictive value are also determined for the fetal QRS detection technique. In this paper, the adaptive filtering technique uses the sign-sign LMS algorithm, and the wavelet technique uses the Daubechies wavelet, employed along with denoising techniques for the extraction of the fetal electrocardiogram. Both methods have good sensitivity and accuracy: for the adaptive method, the sensitivity is 96.83 and the accuracy is 89.87; for the wavelet method, the sensitivity is 95.97 and the accuracy is 88.5. Additionally, time-domain parameters from the plot of heart rate variability of mother and fetus are analyzed.
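
    The sign-sign LMS adaptive noise canceller named in the abstract can be written in a few lines; the sketch below is a generic textbook formulation with an assumed step size, filter length, and toy maternal/fetal components, not the authors' implementation.

```python
import numpy as np

def sign_sign_lms(primary, reference, n_taps=16, mu=1e-3):
    """Adaptive noise cancellation with the sign-sign LMS update:
    w += mu * sign(error) * sign(reference window).
    primary   : abdominal recording = fetal ECG + maternal interference
    reference : chest recording dominated by the maternal ECG
    Returns the error signal, i.e. the estimate of the fetal ECG."""
    w = np.zeros(n_taps)
    out = np.zeros(primary.size)
    for i in range(n_taps, primary.size):
        window = reference[i - n_taps:i][::-1]
        e = primary[i] - w @ window            # cancel the maternal component
        w += mu * np.sign(e) * np.sign(window)
        out[i] = e
    return out

# Toy example: "maternal" 1.2 Hz and "fetal" 2.3 Hz components (assumed rates)
fs = 250.0
t = np.arange(0, 10, 1 / fs)
maternal = np.sin(2 * np.pi * 1.2 * t)
fetal = 0.2 * np.sin(2 * np.pi * 2.3 * t)
primary = fetal + 0.8 * maternal + 0.01 * np.random.randn(t.size)
fetal_estimate = sign_sign_lms(primary, reference=maternal)
print("residual error power after convergence:",
      np.mean((fetal_estimate[-500:] - fetal[-500:]) ** 2))
```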

  20. Processing techniques for correlation of LDA and thermocouple signals

    NASA Astrophysics Data System (ADS)

    Nina, M. N. R.; Pita, G. P. A.

    1986-11-01

    A technique was developed to enable the evaluation of the correlation between velocity and temperature, with laser Doppler anemometer (LDA) as the source of velocity signals and fine wire thermocouple as that of flow temperature. The discontinuous nature of LDA signals requires a special technique for correlation, in particular when few seeding particles are present in the flow. The thermocouple signal was analog compensated in frequency and the effect of the value of time constant on the velocity temperature correlation was studied.

  1. Signal processing for distributed sensor concept: DISCO

    NASA Astrophysics Data System (ADS)

    Rafailov, Michael K.

    2007-04-01

    The Distributed Sensor Concept (DISCO) is proposed for multiplying individual sensor capabilities through cooperative target engagement. DISCO relies on the ability of signal processing software to format, process, transmit, and receive sensor data and to exploit those data in the signal synthesis process. Each sensor's data is synchronized, formatted, signal-to-noise ratio (SNR) enhanced, and distributed inside the sensor network. The signal processing technique for DISCO is the Recursive Adaptive Frame Integration of Limited data (RAFIL) technique, initially proposed [1] as a way to improve the SNR, reduce the data rate, and mitigate FPA correlated noise in an individual sensor's digital video-signal processing. In the Distributed Sensor Concept, the RAFIL technique is used in a segmented way, with the constituents of the technique spatially and/or temporally separated between transmitters and receivers. Those constituents include, though are not limited to, two thresholds (one tuned for optimum probability of detection, the other to manage the required false alarm rate), limited frame integration placed somewhere between the thresholds, as well as formatters, conventional integrators, and more. RAFIL allows a non-linear integration that, along with SNR gain, provides system designers more capability where cost, weight, or power considerations limit system data rate, processing, or memory capability [2]. The DISCO architecture allows flexible optimization of SNR gain, data rates, and noise suppression on the sensor side, and limited integration, re-formatting, and the final threshold on the node side. DISCO with Recursive Adaptive Frame Integration of Limited data may have a flexible architecture that allows segmenting the hardware and software to best suit specific DISCO applications and sensing needs, whether air- or space-based platforms, ground terminals, or integration of sensor networks.

  2. Signal processing in ultrasound. [for diagnostic medicine

    NASA Technical Reports Server (NTRS)

    Le Croissette, D. H.; Gammell, P. M.

    1978-01-01

    Signal is the term used to denote the characteristic in the time or frequency domain of the probing energy of the system. Processing of this signal in diagnostic ultrasound occurs as the signal travels through the ultrasonic and electrical sections of the apparatus. The paper discusses current signal processing methods, postreception processing, display devices, real-time imaging, and quantitative measurements in noninvasive cardiology. The possibility of using deconvolution in a single transducer system is examined, and some future developments using digital techniques are outlined.

  3. Passive signal processing for a miniature Fabry-Perot interferometric sensor with a multimode laser-diode source

    NASA Astrophysics Data System (ADS)

    Ezbiri, A.; Tatam, R. P.

    1995-09-01

    A passive signal-processing technique for addressing a miniature low-finesse fiber Fabry-Perot interferometric sensor with a multimode laser diode is reported. Two modes of a multimode laser diode separated by 3 nm are used to obtain quadrature outputs from an approximately 20 μm cavity. Wavelength-division demultiplexing combined with digital signal processing is used to recover the measurand-induced phase change. The technique is demonstrated for the measurement of vibration. The signal-to-noise ratio is approximately 70 dB at 500 Hz for an approximately π/2 rad displacement of the mirror, which results in a minimum detectable signal of approximately 200 μrad Hz^(-1/2). A quantitative discussion of miscalibration and systematic errors is presented.
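
    Recovering the measurand-induced phase from two interferometer outputs in quadrature reduces to an arctangent followed by phase unwrapping. A hedged sketch of that step follows; the simulated vibration and noise levels are assumptions.

```python
import numpy as np

def quadrature_demodulate(i_signal, q_signal):
    """Recover interferometric phase from two outputs in quadrature:
    I ~ cos(phi), Q ~ sin(phi). np.unwrap removes the 2*pi jumps."""
    return np.unwrap(np.arctan2(q_signal, i_signal))

# Simulated vibration: a 500 Hz phase modulation of ~pi/2 rad amplitude
fs = 50_000.0
t = np.arange(0, 0.02, 1 / fs)
phi = (np.pi / 2) * np.sin(2 * np.pi * 500 * t)
i_out = np.cos(phi) + 0.01 * np.random.randn(t.size)   # output from one laser mode
q_out = np.sin(phi) + 0.01 * np.random.randn(t.size)   # quadrature output
phi_hat = quadrature_demodulate(i_out, q_out)
print("recovered phase amplitude: %.2f rad" % ((phi_hat.max() - phi_hat.min()) / 2))
```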

  4. Digital television system design study

    NASA Technical Reports Server (NTRS)

    Huth, G. K.

    1976-01-01

    The use of digital techniques for transmission of pictorial data is discussed for multi-frame images (television). Video signals are processed in a manner which includes quantization and coding such that they are separable from the noise introduced into the channel. The performance of digital television systems is determined by the nature of the processing techniques (i.e., whether the video signal itself or, instead, something related to the video signal is quantized and coded) and to the quantization and coding schemes employed.

  5. Signal processing methods for MFE plasma diagnostics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Candy, J.V.; Casper, T.; Kane, R.

    1985-02-01

    The application of various signal processing methods to extract energy storage information from plasma diamagnetism sensors during physics experiments on the Tandem Mirror Experiment-Upgrade (TMX-U) is discussed. We show how these processing techniques can be used to decrease the uncertainty in the corresponding sensor measurements. The algorithms suggested are implemented using SIG, an interactive signal processing package developed at LLNL.

  6. Low cost MATLAB-based pulse oximeter for deployment in research and development applications.

    PubMed

    Shokouhian, M; Morling, R C S; Kale, I

    2013-01-01

    Problems such as motion artifacts and the effects of ambient light have forced developers to design different signal processing techniques and algorithms to increase the reliability and accuracy of the conventional pulse oximeter device. To evaluate the robustness of these techniques, they are applied either to recorded data or are implemented on chip to be applied to real-time data. Recorded data is the most common method of evaluation; however, it is not as reliable as real-time measurement. On the other hand, hardware implementation can be both expensive and time consuming. This paper presents a low cost MATLAB-based pulse oximeter that can be used for rapid evaluation of newly developed signal processing techniques and algorithms. The flexibility to apply different signal processing techniques, the provision of both processed and unprocessed data, and the low implementation cost are the important features of this design, which make it ideal for research and development purposes, as well as commercial, hospital and healthcare applications.

  7. Parameterization of spectra

    NASA Technical Reports Server (NTRS)

    Cornish, C. R.

    1983-01-01

    Following reception and analog-to-digital (A/D) conversion, atmospheric radar backscatter echoes need to be processed so as to obtain desired information about atmospheric processes and to eliminate or minimize contaminating contributions from other sources. Various signal processing techniques have been implemented at mesosphere-stratosphere-troposphere (MST) radar facilities to estimate parameters of interest from received spectra. Such estimation techniques need to be both accurate and sufficiently efficient to be within the capabilities of the particular data-processing system. The various techniques used to parameterize the spectra of received signals are reviewed herein. Noise estimation, electromagnetic interference, data smoothing, correlation, and the Doppler effect are among the specific points addressed.

  8. Advanced Signal Processing for High Temperatures Health Monitoring of Condensed Water Height in Steam Pipes

    NASA Technical Reports Server (NTRS)

    Lih, Shyh-Shiuh; Bar-Cohen, Yoseph; Lee, Hyeong Jae; Takano, Nobuyuki; Bao, Xiaoqi

    2013-01-01

    An advanced signal processing methodology is being developed to monitor the height of condensed water through the wall of a steel pipe while operating at temperatures as high as 250°C. Using existing techniques, a previous study indicated that, when the water height is low or there is disturbance in the environment, the predicted water height may not be accurate. In recent years, the use of autocorrelation and envelope techniques in signal processing has been demonstrated to be a very useful tool for practical applications. In this paper, various signal processing techniques including autocorrelation, the Hilbert transform, and the Shannon energy envelope method were studied and implemented to determine the water height in the steam pipe. The results have shown that the developed method provides a good capability for monitoring the water height under regular conditions. An alternative solution for shallow-water or no-water conditions, based on a hybrid method using the Hilbert transform (HT) with a high pass filter and an optimized windowing technique, is suggested. Further development of the reported methods would provide a powerful tool for the identification of disturbances of the water height inside the pipe.

  9. Applied digital signal processing systems for vortex flowmeter with digital signal processing.

    PubMed

    Xu, Ke-Jun; Zhu, Zhi-Hai; Zhou, Yang; Wang, Xiao-Fen; Liu, San-Shan; Huang, Yun-Zhi; Chen, Zhi-Yuan

    2009-02-01

    The spectral analysis is combined with a digital filter to process the vortex sensor signal, reducing the effect of low-frequency disturbances from pipe vibrations and increasing the turndown ratio. Using a digital signal processing chip, two kinds of digital signal processing systems are developed to implement these algorithms: one is an integrative system, and the other is a separated system. A limiting amplifier is designed in the input analog conditioning circuit to accommodate large amplitude variations of the sensor signal. Some technical measures are taken to improve the accuracy of the output pulse, speed up the response time of the meter, and reduce the fluctuation of the output signal. The experimental results demonstrate the validity of the digital signal processing systems.

  10. Signal Detection Techniques for Diagnostic Monitoring of Space Shuttle Main Engine Turbomachinery

    NASA Technical Reports Server (NTRS)

    Coffin, Thomas; Jong, Jen-Yi

    1986-01-01

    An investigation to develop, implement, and evaluate signal analysis techniques for the detection and classification of incipient mechanical failures in turbomachinery is reviewed. A brief description of the Space Shuttle Main Engine (SSME) test/measurement program is presented. Signal analysis techniques available to describe dynamic measurement characteristics are reviewed. Time domain and spectral methods are described, and statistical classification in terms of moments is discussed. Several of these waveform analysis techniques have been implemented on a computer and applied to dynamic signals. A laboratory evaluation of the methods with respect to signal detection capability is described. A unique coherence function (the hyper-coherence) was developed through the course of this investigation, which appears promising as a diagnostic tool. This technique and several other non-linear methods of signal analysis are presented and illustrated by application. Software for application of these techniques has been installed on the signal processing system at the NASA/MSFC Systems Dynamics Laboratory.

  11. High efficiency processing for reduced amplitude zones detection in the HRECG signal

    NASA Astrophysics Data System (ADS)

    Dugarte, N.; Álvarez, A.; Balacco, J.; Mercado, G.; Gonzalez, A.; Dugarte, E.; Olivares, A.

    2016-04-01

    This article presents part of a broader research effort proposed for the medium to long term, with the intention of establishing a new philosophy of surface electrocardiogram analysis. The research aims to find indicators of cardiovascular disease at an early stage that may go unnoticed with conventional electrocardiography. This paper reports the development of processing software that collects some existing techniques and incorporates novel methods for the detection of reduced amplitude zones (RAZ) in the high-resolution electrocardiographic (HRECG) signal. The algorithm consists of three stages: an efficient QRS detection process, an averaging filter using correlation techniques, and a stage for RAZ detection. Preliminary results show the efficiency of the system and point to the incorporation of new signal analysis techniques involving all 12 leads.

  12. Signal processing for ION mobility spectrometers

    NASA Technical Reports Server (NTRS)

    Taylor, S.; Hinton, M.; Turner, R.

    1995-01-01

    Signal processing techniques for systems based upon Ion Mobility Spectrometry will be discussed in the light of 10 years of experience in the design of real-time IMS. Among the topics to be covered are compensation techniques for variations in the number density of the gas - the use of an internal standard (a reference peak) or pressure and temperature sensors. Sources of noise and methods for noise reduction will be discussed together with resolution limitations and the ability of deconvolution techniques to improve resolving power. The use of neural networks (either by themselves or as a component part of a processing system) will be reviewed.

  13. Signal Processing and Interpretation Using Multilevel Signal Abstractions.

    DTIC Science & Technology

    1986-06-01

    mappings expressed in the Fourier domain. Previously proposed causal analysis techniques for diagnosis are based on the analysis of intermediate data ... can be processed either as individual one-dimensional waveforms or as multichannel data ... for source detection and direction ... microphone data. The signal processing for both spectral analysis of microphone signals and direction determination of acoustic sources involves

  14. Analog integrated circuits design for processing physiological signals.

    PubMed

    Li, Yan; Poon, Carmen C Y; Zhang, Yuan-Ting

    2010-01-01

    Analog integrated circuits (ICs) designed for processing physiological signals are important building blocks of wearable and implantable medical devices used for health monitoring or restoring lost body functions. Due to the nature of physiological signals and the corresponding application scenarios, the ICs designed for these applications should have low power consumption, low cutoff frequency, and low input-referred noise. In this paper, techniques for designing the analog front-end circuits with these three characteristics will be reviewed, including subthreshold circuits, bulk-driven MOSFETs, floating gate MOSFETs, and log-domain circuits to reduce power consumption; methods for designing fully integrated low cutoff frequency circuits; as well as chopper stabilization (CHS) and other techniques that can be used to achieve a high signal-to-noise performance. Novel applications using these techniques will also be discussed.

  15. Platform for Post-Processing Waveform-Based NDE

    NASA Technical Reports Server (NTRS)

    Roth, Don J.

    2010-01-01

    Signal- and image-processing methods are commonly needed to extract information from the waves, improve the resolution of an image, and highlight defects in it. Since some similarity exists for all waveform-based nondestructive evaluation (NDE) methods, it would seem that a common software platform containing multiple signal- and image-processing techniques to process the waveforms and images makes sense where multiple techniques, scientists, engineers, and organizations are involved. NDE Wave & Image Processor Version 2.0 software provides a single, integrated signal- and image-processing and analysis environment for total NDE data processing and analysis. It brings some of the most useful algorithms developed for NDE over the past 20 years into a commercial-grade product. The software can import signal/spectroscopic data, image data, and image series data. This software offers the user hundreds of basic and advanced signal- and image-processing capabilities including esoteric 1D and 2D wavelet-based de-noising, de-trending, and filtering. Batch processing is included for signal- and image-processing capability so that an optimized sequence of processing operations can be applied to entire folders of signals, spectra, and images. Additionally, an extensive interactive model-based curve-fitting facility has been included to allow fitting of spectroscopy data such as from Raman spectroscopy. An extensive joint time-frequency module is included for analysis of non-stationary or transient data such as that from acoustic emission, vibration, or earthquake data.

  16. [Multi-channel in vivo recording techniques: signal processing of action potentials and local field potentials].

    PubMed

    Xu, Jia-Min; Wang, Ce-Qun; Lin, Long-Nian

    2014-06-25

    Multi-channel in vivo recording techniques are used to record ensemble neuronal activity and local field potentials (LFP) simultaneously. One of the key points of the technique is how to process these two sets of recorded neural signals properly so that data accuracy can be assured. We introduce data processing approaches for action potentials and LFP based on the original data collected through a multi-channel recording system. Action potential signals are high-frequency signals, hence a high sampling rate of 40 kHz is normally chosen for recording. Based on the waveforms of extracellularly recorded action potentials, tetrode technology combined with principal component analysis can be used to discriminate spiking signals from differently spatially distributed neurons, in order to obtain accurate single-neuron spiking activity. LFPs are low-frequency signals (below 300 Hz), hence a sampling rate of 1 kHz is used. Digital filtering is required for LFP analysis to isolate different frequency oscillations, including theta oscillation (4-12 Hz), which is dominant in active exploration and rapid-eye-movement (REM) sleep; gamma oscillation (30-80 Hz), which accompanies theta oscillation during cognitive processing; and high-frequency ripple oscillation (100-250 Hz) in awake immobility and slow-wave sleep (SWS) in the rodent hippocampus. For the obtained signals, common data post-processing methods include inter-spike interval analysis, spike auto-correlation analysis, spike cross-correlation analysis, power spectral density analysis, and spectrogram analysis.
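
    The digital filtering step described for LFPs can be illustrated with zero-phase Butterworth band-pass filters over the three bands quoted in the abstract; this is a generic sketch, not the authors' pipeline, and the filter order is an assumption.

```python
import numpy as np
from scipy.signal import butter, filtfilt

LFP_BANDS = {"theta": (4, 12), "gamma": (30, 80), "ripple": (100, 250)}  # Hz

def split_lfp_bands(lfp, fs=1000.0, bands=LFP_BANDS, order=4):
    """Split a 1 kHz LFP trace into theta, gamma and ripple components
    with zero-phase Butterworth band-pass filters."""
    out = {}
    for name, (lo, hi) in bands.items():
        b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        out[name] = filtfilt(b, a, lfp)
    return out
```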

  17. Symmetric Phase Only Filtering for Improved DPIV Data Processing

    NASA Technical Reports Server (NTRS)

    Wernet, Mark P.

    2006-01-01

    The standard approach in Digital Particle Image Velocimetry (DPIV) data processing is to use Fast Fourier Transforms to obtain the cross-correlation of two single-exposure subregions, where the location of the cross-correlation peak is representative of the most probable particle displacement across the subregion. This standard DPIV processing technique is analogous to Matched Spatial Filtering, a technique commonly used in optical correlators to perform the cross-correlation operation. Phase only filtering is a well known variation of Matched Spatial Filtering, which when used to process DPIV image data yields correlation peaks which are narrower and up to an order of magnitude larger than those obtained using traditional DPIV processing. In addition to possessing desirable correlation plane features, phase only filters also provide superior performance in the presence of DC noise in the correlation subregion. When DPIV image subregions contaminated with surface flare light or high background noise levels are processed using phase only filters, the correlation peak pertaining only to the particle displacement is readily detected above any signal stemming from the DC objects. Tedious image masking or background image subtraction are not required. Both theoretical and experimental analyses of the signal-to-noise ratio performance of the filter functions are presented. In addition, a new Symmetric Phase Only Filtering (SPOF) technique, which is a variation on the traditional phase only filtering technique, is described and demonstrated. The SPOF technique exceeds the performance of the traditionally accepted phase only filtering techniques and is easily implemented in standard DPIV FFT-based correlation processing with no significant computational performance penalty. An "Automatic" SPOF algorithm is presented which determines when the SPOF is able to provide better signal-to-noise results than traditional PIV processing. The SPOF-based optical correlation processing approach is presented as a new paradigm for more robust cross-correlation processing of low signal-to-noise ratio DPIV image data.
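
    To make the idea concrete, the sketch below shows a generic phase-only correlation of two subregions, in which the cross-spectrum is normalised to unit magnitude before the inverse FFT. It illustrates the principle rather than the exact SPOF filter of the paper, and it assumes the subregions are real-valued image patches of equal size.

```python
import numpy as np

def phase_only_correlation(sub_a, sub_b):
    """FFT cross-correlation of two PIV subregions with a unit-magnitude
    (phase-only) cross-spectrum, which suppresses DC/background content
    and sharpens the displacement peak."""
    cross = np.fft.fft2(sub_a) * np.conj(np.fft.fft2(sub_b))
    cross /= np.abs(cross) + 1e-12                    # keep phase information only
    corr = np.fft.fftshift(np.real(np.fft.ifft2(cross)))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    return np.array(peak) - np.array(corr.shape) // 2  # integer-pixel displacement
```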

  18. A simple iterative independent component analysis algorithm for vibration source signal identification of complex structures

    NASA Astrophysics Data System (ADS)

    Lee, Dong-Sup; Cho, Dae-Seung; Kim, Kookhyun; Jeon, Jae-Jin; Jung, Woo-Jin; Kang, Myeng-Hwan; Kim, Jae-Ho

    2015-01-01

    Independent Component Analysis (ICA), one of the blind source separation methods, can be applied to extract unknown source signals from received signals alone. This is accomplished by finding statistical independence of signal mixtures and has been successfully applied to myriad fields such as medical science, image processing, and numerous others. Nevertheless, inherent problems have been reported when using this technique: instability and invalid ordering of separated signals, particularly when a conventional ICA technique is used for vibratory source signal identification of complex structures. In this study, a simple iterative algorithm based on conventional ICA has been proposed to mitigate these problems. To extract more stable source signals in a valid order, the proposed method iteratively reorders the extracted mixing matrix and reconstructs the finally converged source signals, referring to the magnitudes of the correlation coefficients between the intermediately separated signals and the signals measured on or near the sources. In order to review the problems of the conventional ICA technique and to validate the proposed method, numerical analyses have been carried out for a virtual response model and a 30 m class submarine model. Moreover, in order to investigate the applicability of the proposed method to a real problem of a complex structure, an experiment has been carried out with a scaled submarine mockup. The results show that the proposed method can resolve the inherent problems of a conventional ICA technique.
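
    A minimal sketch of the reordering idea is given below, using scikit-learn's FastICA as a stand-in for the conventional ICA step; the iterative convergence loop of the proposed method is not reproduced, and the array shapes (channels x samples for the mixtures, one reference per candidate source) are assumptions.

```python
import numpy as np
from sklearn.decomposition import FastICA

def separate_and_reorder(mixtures, references, n_components=None):
    """Separate vibration mixtures with FastICA, then reorder (and sign-fix)
    the components by their correlation with signals measured on or near
    the candidate sources."""
    ica = FastICA(n_components=n_components, random_state=0)
    sources = ica.fit_transform(mixtures.T).T          # (n_components, n_samples)
    ordered = np.empty((references.shape[0], sources.shape[1]))
    for i, ref in enumerate(references):
        r = np.array([np.corrcoef(ref, s)[0, 1] for s in sources])
        j = int(np.argmax(np.abs(r)))
        ordered[i] = np.sign(r[j]) * sources[j]        # align sign with the reference
    return ordered
```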

  19. Ridding fMRI data of motion-related influences: Removal of signals with distinct spatial and physical bases in multiecho data.

    PubMed

    Power, Jonathan D; Plitt, Mark; Gotts, Stephen J; Kundu, Prantik; Voon, Valerie; Bandettini, Peter A; Martin, Alex

    2018-02-27

    "Functional connectivity" techniques are commonplace tools for studying brain organization. A critical element of these analyses is to distinguish variance due to neurobiological signals from variance due to nonneurobiological signals. Multiecho fMRI techniques are a promising means for making such distinctions based on signal decay properties. Here, we report that multiecho fMRI techniques enable excellent removal of certain kinds of artifactual variance, namely, spatially focal artifacts due to motion. By removing these artifacts, multiecho techniques reveal frequent, large-amplitude blood oxygen level-dependent (BOLD) signal changes present across all gray matter that are also linked to motion. These whole-brain BOLD signals could reflect widespread neural processes or other processes, such as alterations in blood partial pressure of carbon dioxide (pCO 2 ) due to ventilation changes. By acquiring multiecho data while monitoring breathing, we demonstrate that whole-brain BOLD signals in the resting state are often caused by changes in breathing that co-occur with head motion. These widespread respiratory fMRI signals cannot be isolated from neurobiological signals by multiecho techniques because they occur via the same BOLD mechanism. Respiratory signals must therefore be removed by some other technique to isolate neurobiological covariance in fMRI time series. Several methods for removing global artifacts are demonstrated and compared, and were found to yield fMRI time series essentially free of motion-related influences. These results identify two kinds of motion-associated fMRI variance, with different physical mechanisms and spatial profiles, each of which strongly and differentially influences functional connectivity patterns. Distance-dependent patterns in covariance are nearly entirely attributable to non-BOLD artifacts.

  20. A study of FM threshold extension techniques

    NASA Technical Reports Server (NTRS)

    Arndt, G. D.; Loch, F. J.

    1972-01-01

    The characteristics of three postdetection threshold extension techniques are evaluated with respect to the ability of such techniques to improve the performance of a phase lock loop demodulator. These techniques include impulse-noise elimination, signal correlation for the detection of impulse noise, and delta modulation signal processing. Experimental results from signal to noise ratio data and bit error rate data indicate that a 2- to 3-decibel threshold extension is readily achievable by using the various techniques. This threshold improvement is in addition to the threshold extension that is usually achieved through the use of a phase lock loop demodulator.

  1. Data Unfolding with Wiener-SVD Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, W.; Li, X.; Qian, X.

    Here, data unfolding is a common analysis technique used in HEP data analysis. Inspired by the deconvolution technique in digital signal processing, a new unfolding technique based on the SVD technique and the well-known Wiener filter is introduced. The Wiener-SVD unfolding approach achieves the unfolding by maximizing the signal-to-noise ratio in the effective frequency domain given expectations of signal and noise, and is free of a regularization parameter. Through a couple of examples, the pros and cons of the Wiener-SVD approach as well as the nature of the unfolded results are discussed.

  2. Data Unfolding with Wiener-SVD Method

    DOE PAGES

    Tang, W.; Li, X.; Qian, X.; ...

    2017-10-04

    Here, data unfolding is a common analysis technique used in HEP data analysis. Inspired by the deconvolution technique in digital signal processing, a new unfolding technique based on the SVD technique and the well-known Wiener filter is introduced. The Wiener-SVD unfolding approach achieves the unfolding by maximizing the signal-to-noise ratio in the effective frequency domain given expectations of signal and noise, and is free of a regularization parameter. Through a couple of examples, the pros and cons of the Wiener-SVD approach as well as the nature of the unfolded results are discussed.
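
    The Wiener filter that the method carries into the SVD "effective frequency" domain is, in its plain frequency-domain form, the familiar deconvolution filter sketched below. This illustrates the ingredient, not the Wiener-SVD unfolding itself, and it assumes the detector response and the signal and noise spectra are given on the same FFT grid as the measurement.

```python
import numpy as np

def wiener_deconvolve(measured, response, signal_psd, noise_psd):
    """Frequency-domain Wiener deconvolution given expectations of the
    signal and noise power spectra (arrays on the full FFT grid, or scalars)."""
    h = np.fft.fft(response, n=measured.size)
    m = np.fft.fft(measured)
    g = np.conj(h) * signal_psd / (np.abs(h) ** 2 * signal_psd + noise_psd)
    return np.real(np.fft.ifft(g * m))
```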

  3. Reducing Noise by Repetition: Introduction to Signal Averaging

    ERIC Educational Resources Information Center

    Hassan, Umer; Anwar, Muhammad Sabieh

    2010-01-01

    This paper describes theory and experiments, taken from biophysics and physiological measurements, to illustrate the technique of signal averaging. In the process, students are introduced to the basic concepts of signal processing, such as digital filtering, Fourier transformation, baseline correction, pink and Gaussian noise, and the cross- and…
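
    The averaging principle itself is easy to demonstrate with synthetic data: averaging N repetitions of a repeatable waveform buried in independent noise improves the SNR roughly as sqrt(N). The sketch below is a generic illustration, not the authors' experiment.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 500)
clean = np.sin(2 * np.pi * 5 * t)          # repeatable "physiological" waveform

def average_trials(n_trials, noise_sd=1.0):
    """Average n_trials noisy repetitions of the same waveform."""
    trials = clean + noise_sd * rng.standard_normal((n_trials, t.size))
    return trials.mean(axis=0)

for n in (1, 16, 256):
    resid = average_trials(n) - clean
    print(f"{n:4d} trials: residual RMS = {np.sqrt(np.mean(resid ** 2)):.3f}")
```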

  4. Methods for automatically analyzing humpback song units.

    PubMed

    Rickwood, Peter; Taylor, Andrew

    2008-03-01

    This paper presents mathematical techniques for automatically extracting and analyzing bioacoustic signals. Automatic techniques are described for isolation of target signals from background noise, extraction of features from target signals and unsupervised classification (clustering) of the target signals based on these features. The only user-provided input, other than raw sound, is an initial set of signal processing and control parameters. Of particular note is that the number of signal categories is determined automatically. The techniques, applied to hydrophone recordings of humpback whales (Megaptera novaeangliae), produce promising initial results, suggesting that they may be of use in automated analysis of not only humpbacks, but possibly also in other bioacoustic settings where automated analysis is desirable.

  5. Novel sonar signal processing tool using Shannon entropy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quazi, A.H.

    1996-06-01

    Traditionally, conventional signal processing extracts information from sonar signals using amplitude, signal energy or frequency domain quantities obtained using spectral analysis techniques. The object is to investigate an alternate approach which is entirely different from that of traditional signal processing. This alternate approach is to utilize the Shannon entropy as a tool for the processing of sonar signals, with emphasis on detection, classification, and localization leading to superior sonar system performance. Traditionally, sonar signals are processed coherently, semi-coherently, and incoherently, depending upon the a priori knowledge of the signals and noise. Here, the detection, classification, and localization technique will be based on the concept of the entropy of the random process. Under a constant energy constraint, the entropy of a received process bearing a finite number of sample points is maximum when hypothesis H0 (that the received process consists of noise alone) is true and decreases when a correlated signal is present (H1). Therefore, the strategy used for detection is: (I) calculate the entropy of the received data; (II) compare the entropy with the maximum value; and (III) make the decision: H1 is assumed if the difference is large compared to a pre-assigned threshold, and H0 is assumed otherwise. The test statistics will be different between entropies under H0 and H1. Here, we show simulated results for detecting stationary and non-stationary signals in noise, and results on detection of defects in a Plexiglas bar using an ultrasonic experiment conducted by Hughes. © 1996 American Institute of Physics.
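
    One simple way to realise an entropy-based detector of this kind is to compute the Shannon entropy of the normalised power spectrum: noise alone gives a nearly flat spectrum and hence near-maximal entropy, while a correlated signal concentrates power and lowers it. The sketch below is a generic spectral-entropy detector under that interpretation, not the statistic used in the paper, and the threshold is a hypothetical value.

```python
import numpy as np
from scipy.signal import welch

def spectral_entropy(x, fs, nperseg=256):
    """Shannon entropy (bits) of the normalised Welch power spectrum."""
    _, pxx = welch(x, fs=fs, nperseg=nperseg)
    p = pxx / pxx.sum()
    return -np.sum(p * np.log2(p + 1e-15))

def detect_signal(x, fs, threshold_bits=0.5, nperseg=256):
    """Declare H1 (signal present) when the entropy drops more than
    threshold_bits below the flat-spectrum maximum expected under H0."""
    h_max = np.log2(nperseg // 2 + 1)
    return (h_max - spectral_entropy(x, fs, nperseg)) > threshold_bits
```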

  6. S-192 analysis: Conventional and special data processing techniques. [Michigan

    NASA Technical Reports Server (NTRS)

    Nalepka, R. F. (Principal Investigator); Morganstern, J.; Cicone, R.; Sarno, J.; Lambeck, P.; Malila, W.

    1975-01-01

    The author has identified the following significant results. Multispectral scanner data gathered over test sites in southeast Michigan were analyzed. This analysis showed the data to be somewhat deficient, especially in terms of the limited signal range in most SDOs and also in regard to SDO-SDO misregistration. Further analysis showed that the scan line straightening algorithm increased the misregistration of the data. Data were processed using the conic format. The effects of such misregistration on classification accuracy were analyzed via simulation and found to be significant. Results of employing conventional as well as special, unresolved-object processing techniques were disappointing due, at least in part, to the limited signal range and noise content of the data. Application of a second class of special processing techniques, signature extension techniques, yielded better results. Two of the more basic signature extension techniques seemed to be useful in spite of the difficulties.

  7. AOD furnace splash soft-sensor in the smelting process based on improved BP neural network

    NASA Astrophysics Data System (ADS)

    Ma, Haitao; Wang, Shanshan; Wu, Libin; Yu, Ying

    2017-11-01

    For the argon-oxygen refining process used to produce low-carbon ferrochrome, splash during smelting is taken as the research object. Based on an analysis of the splash mechanism in the smelting process, a soft-sensor approach using multi-sensor information fusion and BP neural network modeling is proposed in this paper. The vibration signal, the audio signal, and the flame image signal from the furnace are used as characteristic signals of splash; these signals are fused and modeled to reconstruct the splash signal and realize soft measurement of splash in the smelting process. The simulation results show that the method can accurately forecast the splash type during smelting, providing a new measurement method for splash forecasting and more accurate information for splash control.

  8. A quantitative image cytometry technique for time series or population analyses of signaling networks.

    PubMed

    Ozaki, Yu-ichi; Uda, Shinsuke; Saito, Takeshi H; Chung, Jaehoon; Kubota, Hiroyuki; Kuroda, Shinya

    2010-04-01

    Modeling of cellular functions on the basis of experimental observation is increasingly common in the field of cellular signaling. However, such modeling requires a large amount of quantitative data of signaling events with high spatio-temporal resolution. A novel technique which allows us to obtain such data is needed for systems biology of cellular signaling. We developed a fully automatable assay technique, termed quantitative image cytometry (QIC), which integrates a quantitative immunostaining technique and a high precision image-processing algorithm for cell identification. With the aid of an automated sample preparation system, this device can quantify protein expression, phosphorylation and localization with subcellular resolution at one-minute intervals. The signaling activities quantified by the assay system showed good correlation with, as well as comparable reproducibility to, western blot analysis. Taking advantage of the high spatio-temporal resolution, we investigated the signaling dynamics of the ERK pathway in PC12 cells. The QIC technique appears as a highly quantitative and versatile technique, which can be a convenient replacement for the most conventional techniques including western blot, flow cytometry and live cell imaging. Thus, the QIC technique can be a powerful tool for investigating the systems biology of cellular signaling.

  9. Study of photon correlation techniques for processing of laser velocimeter signals

    NASA Technical Reports Server (NTRS)

    Mayo, W. T., Jr.

    1977-01-01

    The objective was to provide the theory and a system design for a new type of photon counting processor for low level dual scatter laser velocimeter (LV) signals which would be capable of both the first order measurements of mean flow and turbulence intensity and also the second order time statistics: cross-correlation, auto-correlation, and related spectra. A general Poisson process model for low level LV signals and noise which is valid from the photon-resolved regime all the way to the limiting case of nonstationary Gaussian noise was used. Computer simulation algorithms and higher order statistical moment analysis of Poisson processes were derived and applied to the analysis of photon correlation techniques. A system design using a unique dual correlate and subtract frequency discriminator technique is postulated and analyzed. Expectation analysis indicates that the objective measurements are feasible.

  10. Hybrid photonic signal processing

    NASA Astrophysics Data System (ADS)

    Ghauri, Farzan Naseer

    This thesis proposes research of novel hybrid photonic signal processing systems in the areas of optical communications, test and measurement, RF signal processing and extreme environment optical sensors. It will be shown that use of innovative hybrid techniques allows design of photonic signal processing systems with superior performance parameters and enhanced capabilities. These applications can be divided into domains of analog-digital hybrid signal processing applications and free-space---fiber-coupled hybrid optical sensors. The analog-digital hybrid signal processing applications include a high-performance analog-digital hybrid MEMS variable optical attenuator that can simultaneously provide high dynamic range as well as high resolution attenuation controls; an analog-digital hybrid MEMS beam profiler that allows high-power watt-level laser beam profiling and also provides both submicron-level high resolution and wide area profiling coverage; and all optical transversal RF filters that operate on the principle of broadband optical spectral control using MEMS and/or Acousto-Optic tunable Filters (AOTF) devices which can provide continuous, digital or hybrid signal time delay and weight selection. The hybrid optical sensors presented in the thesis are extreme environment pressure sensors and dual temperature-pressure sensors. The sensors employ hybrid free-space and fiber-coupled techniques for remotely monitoring a system under simultaneous extremely high temperatures and pressures.

  11. A method for compression of intra-cortically-recorded neural signals dedicated to implantable brain-machine interfaces.

    PubMed

    Shaeri, Mohammad Ali; Sodagar, Amir M

    2015-05-01

    This paper proposes an efficient data compression technique dedicated to implantable intra-cortical neural recording devices. The proposed technique benefits from processing neural signals in the Discrete Haar Wavelet Transform space, a new spike extraction approach, and a novel data framing scheme to telemeter the recorded neural information to the outside world. Based on the proposed technique, a 64-channel neural signal processor was designed and prototyped as a part of a wireless implantable extra-cellular neural recording microsystem. Designed in a 0.13-μm standard CMOS process, the 64-channel neural signal processor reported in this paper occupies ~0.206 mm(2) of silicon area, and consumes 94.18 μW when operating under a 1.2-V supply voltage at a master clock frequency of 1.28 MHz.
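
    A toy version of the wavelet-domain compression step can be sketched with PyWavelets: transform a frame of samples with the Haar wavelet, keep only the largest coefficients, and reconstruct. The spike-extraction and data-framing schemes of the paper are not reproduced, the keep fraction is an arbitrary example value, and frames whose length is a power of two are assumed.

```python
import numpy as np
import pywt

def haar_compress(frame, level=4, keep=0.1):
    """Discrete Haar wavelet compression of a neural-recording frame:
    zero all but the largest `keep` fraction of coefficients (the kept
    coefficients and their indices are what would be telemetered),
    then reconstruct for comparison."""
    coeffs = pywt.wavedec(frame, "haar", level=level)
    flat, slices = pywt.coeffs_to_array(coeffs)
    k = max(1, int(keep * flat.size))
    flat[np.argsort(np.abs(flat))[:-k]] = 0.0      # drop the smallest coefficients
    kept = pywt.array_to_coeffs(flat, slices, output_format="wavedec")
    return pywt.waverec(kept, "haar")
```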

  12. Liquid argon TPC signal formation, signal processing and reconstruction techniques

    NASA Astrophysics Data System (ADS)

    Baller, B.

    2017-07-01

    This document describes a reconstruction chain that was developed for the ArgoNeuT and MicroBooNE experiments at Fermilab. These experiments study accelerator neutrino interactions that occur in a Liquid Argon Time Projection Chamber. Reconstructing the properties of particles produced in these interactions benefits from the knowledge of the micro-physics processes that affect the creation and transport of ionization electrons to the readout system. A wire signal deconvolution technique was developed to convert wire signals to a standard form for hit reconstruction, to remove artifacts in the electronics chain and to remove coherent noise. A unique clustering algorithm reconstructs line-like trajectories and vertices in two dimensions which are then matched to create 3D objects. These techniques and algorithms are available to all experiments that use the LArSoft suite of software.

  13. Method and Apparatus for Reducing Noise from Near Ocean Surface Sources

    DTIC Science & Technology

    2001-10-01

    reducing the acoustic noise from near-surface sources using an array processing technique that utilizes Multiple Signal Classification (MUSIC) ... sources without degrading the signal level and quality of the TOI. The present invention utilizes a unique application of the MUSIC beamforming ... specific algorithm that utilizes a MUSIC technique and estimates the direction of arrival (DOA) of the acoustic signals and generates output
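
    The MUSIC step referred to in this record is a standard subspace method; a minimal pseudo-spectrum for a uniform linear array is sketched below. The array geometry, element spacing, and snapshot format are assumptions made for illustration, not details of the patented system.

```python
import numpy as np

def music_spectrum(snapshots, n_sources, d_over_lambda=0.5,
                   angles_deg=np.linspace(-90.0, 90.0, 361)):
    """MUSIC pseudo-spectrum for a uniform linear array.
    snapshots: complex array of shape (n_elements, n_snapshots)."""
    m = snapshots.shape[0]
    r = snapshots @ snapshots.conj().T / snapshots.shape[1]   # sample covariance
    _, v = np.linalg.eigh(r)                                  # ascending eigenvalues
    en = v[:, : m - n_sources]                                # noise subspace
    theta = np.deg2rad(angles_deg)
    a = np.exp(-2j * np.pi * d_over_lambda *
               np.arange(m)[:, None] * np.sin(theta))         # steering matrix
    denom = np.sum(np.abs(en.conj().T @ a) ** 2, axis=0)
    return angles_deg, 1.0 / denom        # peaks indicate directions of arrival
```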

  14. Applications of digital processing for noise removal from plasma diagnostics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kane, R.J.; Candy, J.V.; Casper, T.A.

    1985-11-11

    The use of digital signal processing techniques for removal of noise components present in plasma diagnostic signals is discussed, particularly with reference to diamagnetic loop signals. These signals contain noise due to power supply ripple in addition to plasma characteristics. The application of noise canceling techniques, such as adaptive noise canceling and model-based estimation, is discussed. The use of computer codes such as SIG is described. 19 refs., 5 figs.
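
    Adaptive noise cancelling of the kind mentioned here is commonly implemented with an LMS filter driven by a reference channel that picks up the interference (for example, the power-supply ripple). The sketch below is a textbook LMS canceller, not the SIG code referenced in the abstract; the tap count and step size are arbitrary example values.

```python
import numpy as np

def lms_cancel(primary, reference, n_taps=32, mu=1e-3):
    """Adaptive noise cancellation: filter the reference channel to predict
    the interference in the primary signal; the prediction error is the
    cleaned diagnostic signal."""
    w = np.zeros(n_taps)
    cleaned = np.zeros_like(primary, dtype=float)
    for n in range(n_taps, primary.size):
        x = reference[n - n_taps:n][::-1]     # most recent reference samples first
        e = primary[n] - w @ x                # error = estimate of the true signal
        w += 2.0 * mu * e * x                 # LMS weight update
        cleaned[n] = e
    return cleaned
```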

  15. A New High-Resolution Direction Finding Architecture Using Photonics and Neural Network Signal Processing for Miniature Air Vehicle Applications

    DTIC Science & Technology

    2015-09-01

    ... intercept. This alignment gives the shortest mean-time-to-intercept and can be less than one second. ... For single signals ... at each antenna element. For multiple signals, superresolution DF techniques are often used. These techniques can be broken down into beamforming

  16. Programmable rate modem utilizing digital signal processing techniques

    NASA Technical Reports Server (NTRS)

    Naveh, Arad

    1992-01-01

    The need for a Programmable Rate Digital Satellite Modem capable of supporting both burst and continuous transmission modes with either Binary Phase Shift Keying (BPSK) or Quadrature Phase Shift Keying (QPSK) modulation is discussed. The preferred implementation technique is an all-digital one which utilizes as much digital signal processing (DSP) as possible. The design trade-offs in each portion of the modulator and demodulator subsystem are outlined.
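
    The baseband portion of a BPSK modem of this kind reduces to a map-and-upsample step on transmit and an integrate-and-dump decision on receive. The sketch below illustrates only that baseband core under additive noise, with arbitrary example parameters, and omits carrier recovery, pulse shaping, and the burst/continuous mode logic of the actual modem.

```python
import numpy as np

def bpsk_modulate(bits, sps=8):
    """Map bits {0,1} to symbols {-1,+1} and upsample by sps."""
    return np.repeat(2.0 * np.asarray(bits) - 1.0, sps)

def bpsk_demodulate(baseband, sps=8):
    """Integrate-and-dump matched filter followed by a sign decision."""
    n_sym = baseband.size // sps
    chunks = baseband[: n_sym * sps].reshape(n_sym, sps)
    return (chunks.sum(axis=1) > 0).astype(int)

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, 64)
rx = bpsk_modulate(bits) + 0.5 * rng.standard_normal(bits.size * 8)
print("bit errors:", int(np.sum(bpsk_demodulate(rx) != bits)))
```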

  17. Picosecond time-resolved photoluminescence using picosecond excitation correlation spectroscopy

    NASA Astrophysics Data System (ADS)

    Johnson, M. B.; McGill, T. C.; Hunter, A. T.

    1988-03-01

    We present a study of the temporal decay of photoluminescence (PL) as detected by picosecond excitation correlation spectroscopy (PECS). We analyze the correlation signal that is obtained from two simple models: one where radiative recombination dominates, the other where trapping processes dominate. It is found that radiative recombination alone does not lead to a correlation signal. Parallel trapping type processes are found to be required to see a signal. To illustrate this technique, we examine the temporal decay of the PL signal for In-alloyed, semi-insulating GaAs substrates. We find that the PL signal indicates a carrier lifetime of roughly 100 ps, for excitation densities of 1×10(16)-5×10(17) cm(-3). PECS is shown to be an easy technique to measure the ultrafast temporal behavior of PL processes because it requires no ultrafast photon detection. It is particularly well suited to measuring carrier lifetimes.

  18. Research on motor rotational speed measurement in regenerative braking system of electric vehicle

    NASA Astrophysics Data System (ADS)

    Pan, Chaofeng; Chen, Liao; Chen, Long; Jiang, Haobin; Li, Zhongxing; Wang, Shaohua

    2016-01-01

    Rotational speed signal acquisition and processing techniques are widely used in rotating machinery. In order to realize precise and real-time control of the motor drive and the regenerative braking process, rotational speed measurement techniques are needed in electric vehicles. Obtaining an accurate motor rotational speed signal contributes to steady control of the regenerative braking force and a higher energy recovery rate. This paper aims to develop a method that provides instantaneous speed information in the form of motor rotation. It first addresses the principles of motor rotational speed measurement in the regenerative braking systems of electric vehicles. The paper then presents the characteristics of ideal and actual Hall position sensor signals, and the relation between the motor rotational speed and the Hall position sensor signals is revealed. Finally, a Hall position sensor signal conditioning and processing circuit and a program for motor rotational speed measurement have been developed based on measurement error analysis.

  19. Trends in non-stationary signal processing techniques applied to vibration analysis of wind turbine drive train - A contemporary survey

    NASA Astrophysics Data System (ADS)

    Uma Maheswari, R.; Umamaheswari, R.

    2017-02-01

    Condition Monitoring Systems (CMS) provide substantial economic benefits and enable prognostic maintenance for wind turbine-generator failure prevention. Vibration monitoring and analysis is a powerful tool in drive train CMS, which enables the early detection of impending failure or damage. In variable speed drives such as wind turbine-generator drive trains, the acquired vibration signal is non-stationary and non-linear. Traditional stationary signal processing techniques are inefficient for diagnosing machine faults under time-varying conditions. The current research trend in CMS for drive trains focuses on developing and improving non-linear, non-stationary feature extraction and fault classification algorithms to improve fault detection/prediction sensitivity and selectivity and thereby reduce misdetection and false alarm rates. In the literature, stationary signal processing algorithms employed in vibration analysis have been reviewed extensively. In this paper, an attempt is made to review the recent research advances in non-linear, non-stationary signal processing algorithms particularly suited to variable speed wind turbines.

  20. Ultrafast chirped optical waveform recorder using a time microscope

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bennett, Corey Vincent

    2015-04-21

    A new technique for capturing both the amplitude and phase of an optical waveform is presented. This technique can capture signals with many THz of bandwidth in a single shot (e.g., temporal resolution of about 44 fs), or be operated repetitively at a high rate. That is, each temporal window (or frame) is captured in a single shot, in real time, but the process may be run once or repeatedly. By also including a variety of possible demultiplexing techniques, this process is scalable to recording continuous signals.

  1. Enhanced automatic artifact detection based on independent component analysis and Renyi's entropy.

    PubMed

    Mammone, Nadia; Morabito, Francesco Carlo

    2008-09-01

    Artifacts are disturbances that may occur during signal acquisition and may affect subsequent processing. The aim of this paper is to propose a technique for automatically detecting artifacts in electroencephalographic (EEG) recordings. In particular, a technique based on Independent Component Analysis (ICA) to extract artifactual signals and on Renyi's entropy to automatically detect them is presented. This technique is compared to the widely known approach based on ICA and the joint use of kurtosis and Shannon's entropy. The novel processing technique is shown to detect on average 92.6% of the artifactual signals, against the average 68.7% of the previous technique, on the studied available database. Moreover, Renyi's entropy is shown to be able to detect muscle and very low frequency activity as well as to discriminate them from other kinds of artifacts. In order to achieve an efficient rejection of the artifacts while minimizing the information loss, future efforts will be devoted to the improvement of blind artifact separation from EEG in order to ensure a very efficient isolation of the artifactual activity from any signals deriving from other brain tasks.
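
    A minimal version of the entropy scoring can be sketched as follows: estimate the Renyi entropy of each independent component from an amplitude histogram and flag components whose entropy deviates strongly from the ensemble. The histogram estimator, the order alpha, and the z-score threshold are illustrative assumptions, not the paper's exact criteria.

```python
import numpy as np

def renyi_entropy(component, alpha=2, n_bins=100):
    """Renyi entropy of one independent component, estimated from a
    normalised amplitude histogram."""
    hist, _ = np.histogram(component, bins=n_bins)
    p = hist / hist.sum()
    p = p[p > 0]
    if alpha == 1:
        return float(-np.sum(p * np.log(p)))        # Shannon limit
    return float(np.log(np.sum(p ** alpha)) / (1 - alpha))

def flag_artifact_components(components, z_thresh=2.0):
    """Flag components whose entropy is an outlier relative to the rest."""
    h = np.array([renyi_entropy(c) for c in components])
    z = (h - h.mean()) / (h.std() + 1e-12)
    return np.abs(z) > z_thresh
```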

  2. Digital signal processing techniques for pitch shifting and time scaling of audio signals

    NASA Astrophysics Data System (ADS)

    Buś, Szymon; Jedrzejewski, Konrad

    2016-09-01

    In this paper, we present the techniques used for modifying the spectral content (pitch shifting) and for changing the time duration (time scaling) of an audio signal. A short introduction gives a necessary background for understanding the discussed issues and contains explanations of the terms used in the paper. In subsequent sections we present three different techniques appropriate both for pitch shifting and for time scaling. These techniques use three different time-frequency representations of a signal, namely short-time Fourier transform (STFT), continuous wavelet transform (CWT) and constant-Q transform (CQT). The results of simulation studies devoted to comparison of the properties of these methods are presented and discussed in the paper.

  3. Analysis of cutting force signals by wavelet packet transform for surface roughness monitoring in CNC turning

    NASA Astrophysics Data System (ADS)

    García Plaza, E.; Núñez López, P. J.

    2018-01-01

    On-line monitoring of surface finish in machining processes has proven to be a substantial advancement over traditional post-process quality control techniques by reducing inspection times and costs and by avoiding the manufacture of defective products. This study applied techniques for processing cutting force signals based on the wavelet packet transform (WPT) method for the monitoring of surface finish in computer numerical control (CNC) turning operations. The behaviour of 40 mother wavelets was analysed using three techniques: global packet analysis (G-WPT), and the application of two packet reduction criteria: maximum energy (E-WPT) and maximum entropy (SE-WPT). The optimum signal decomposition level (Lj) was determined to eliminate noise and to obtain information correlated to surface finish. The results obtained with the G-WPT method provided an in-depth analysis of cutting force signals, and frequency ranges and signal characteristics were correlated to surface finish with excellent results in the accuracy and reliability of the predictive models. The radial and tangential cutting force components at low frequency provided most of the information for the monitoring of surface finish. The E-WPT and SE-WPT packet reduction criteria substantially reduced signal processing time, but at the expense of discarding packets with relevant information, which impoverished the results. The G-WPT method was observed to be an ideal procedure for processing cutting force signals applied to the real-time monitoring of surface finish, and was estimated to be highly accurate and reliable at a low analytical-computational cost.
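
    For orientation, a global wavelet-packet decomposition of a cutting-force channel and the per-packet energies it yields (typical inputs to a roughness model) can be computed with PyWavelets as sketched below; the mother wavelet and decomposition level are example choices, not the optimal ones identified in the study.

```python
import numpy as np
import pywt

def wpt_band_energies(force_signal, wavelet="db4", level=3):
    """Wavelet packet transform of a cutting-force signal and the energy
    of every terminal packet, ordered by frequency band."""
    wp = pywt.WaveletPacket(data=force_signal, wavelet=wavelet,
                            mode="symmetric", maxlevel=level)
    nodes = wp.get_level(level, order="freq")
    return {node.path: float(np.sum(np.square(node.data))) for node in nodes}
```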

  4. High Frequency Direction Finding Using Structurally Integrated Antennas on a Large Airborne Platform

    DTIC Science & Technology

    2011-03-24

    signal processing techniques, including superresolution techniques, as a possible way to extend the airborne DF capability to the HF band. Structurally ... electrically or mechanically scanned beams has been diminished by array processing techniques [4]. The implementation of superresolution algorithms

  5. Eddy current technique for predicting burst pressure

    DOEpatents

    Petri, Mark C.; Kupperman, David S.; Morman, James A.; Reifman, Jaques; Wei, Thomas Y. C.

    2003-01-01

    A signal processing technique which correlates eddy current inspection data from a tube having a critical tubing defect with a range of predicted burst pressures for the tube is provided. The method can directly correlate the raw eddy current inspection data representing the critical tubing defect with the range of burst pressures using a regression technique, preferably an artificial neural network. Alternatively, the technique deconvolves the raw eddy current inspection data into a set of undistorted signals, each of which represents a separate defect of the tube. The undistorted defect signal which represents the critical tubing defect is related to a range of burst pressures utilizing a regression technique.

  6. Artifact Noise Removal Techniques on Seismocardiogram Using Two Tri-Axial Accelerometers

    PubMed Central

    Luu, Loc; Dinh, Anh

    2018-01-01

    The aim of this study is to investigate motion noise removal techniques using a two-accelerometer sensor system and various placements of the sensors during gentle movement and walking of the patients. A Wi-Fi based data acquisition system and a framework in Matlab were developed to collect and process data while the subjects are in motion. The tests include eight volunteers who have no record of heart disease. The walking and running data are analyzed to find the minimal-noise bandwidth of the SCG signal. This bandwidth is used to design filters for the motion noise removal techniques and peak signal detection. There are two main techniques for combining signals from the two sensors to mitigate the motion artifact: analog processing and digital processing. The analog processing comprises analog circuits performing adding or subtracting functions and a bandpass filter to remove artifact noise before entering the data acquisition system. The digital processing processes all the data using combinations of total acceleration and z-axis-only acceleration. The two techniques are tested on three placements of the accelerometer sensors (horizontal, vertical, and diagonal) during gentle motion and walking. In general, the total acceleration and z-axis acceleration are the best techniques for gentle motion on all sensor placements, improving average systolic signal-to-noise ratio (SNR) around 2 times and average diastolic SNR around 3 times compared to traditional methods using only one accelerometer. With walking motion, the ADDER and z-axis acceleration are the best techniques on all placements of the sensors on the body, enhancing average systolic SNR about 7 times and average diastolic SNR about 11 times compared to the one-accelerometer method. Among the sensor placements, the performance of the horizontal placement of the sensors is outstanding compared with the other positions for all motions. PMID:29614821

  7. Preface to the special issue on "Integrated Microwave Photonic Signal Processing"

    NASA Astrophysics Data System (ADS)

    Azaña, José; Yao, Jianping

    2016-08-01

    As Guest Editors, we are pleased to introduce this special issue on "Integrated Microwave Photonic Signal Processing" published by the Elsevier journal Optics Communications. Microwave photonics is a field of growing importance from both scientific and practical application perspectives. The field of microwave photonics is devoted to the study, development and application of optics-based techniques and technologies aimed at the generation, processing, control, characterization and/or distribution of microwave signals, including signals well into the millimeter-wave frequency range. The use of photonic technologies for these microwave applications translates into a number of key advantages, such as the possibility of dealing with high-frequency, wide bandwidth signals with minimal losses and reduced electromagnetic interference, and the potential for enhanced reconfigurability. The central purpose of this special issue is to provide an overview of the state of the art of generation, processing and characterization technologies for high-frequency microwave signals. It is now widely accepted that the practical success of microwave photonics at a large scale will essentially depend on the realization of high-performance microwave-photonic signal-processing engines in compact and integrated formats, preferably on a chip. Thus, the focus of the issue is on techniques implemented using integrated photonic technologies, with the goal of providing an update of the most recent advances toward realization of this vision.

  8. Fourier analysis and signal processing by use of the Moebius inversion formula

    NASA Technical Reports Server (NTRS)

    Reed, Irving S.; Yu, Xiaoli; Shih, Ming-Tang; Tufts, Donald W.; Truong, T. K.

    1990-01-01

    A novel Fourier technique for digital signal processing is developed. This approach to Fourier analysis is based on the number-theoretic method of the Moebius inversion of series. The Fourier transform method developed is shown also to yield the convolution of two signals. A computer simulation shows that this method for finding Fourier coefficients is quite suitable for digital signal processing. It competes with the classical FFT (fast Fourier transform) approach in terms of accuracy, complexity, and speed.

  9. Application of advanced signal processing techniques to the rectification and registration of spaceborne imagery. [technology transfer, data transmission

    NASA Technical Reports Server (NTRS)

    Caron, R. H.; Rifman, S. S.; Simon, K. W.

    1974-01-01

    The development of an ERTS/MSS image processing system responsive to the needs of the user community is discussed. An overview of the TRW ERTS/MSS processor is presented, followed by a more detailed discussion of image processing functions satisfied by the system. The particular functions chosen for discussion are evolved from advanced signal processing techniques rooted in the areas of communication and control. These examples show how classical aerospace technology can be transferred to solve the more contemporary problems confronting the users of spaceborne imagery.

  10. Dysphagia Screening: Contributions of Cervical Auscultation Signals and Modern Signal-Processing Techniques

    PubMed Central

    Dudik, Joshua M.; Coyle, James L.

    2015-01-01

    Cervical auscultation is the recording of sounds and vibrations caused by the human body from the throat during swallowing. While traditionally done by a trained clinician with a stethoscope, much work has been put towards developing more sensitive and clinically useful methods to characterize the data obtained with this technique. The eventual goal of the field is to improve the effectiveness of screening algorithms designed to predict the risk that swallowing disorders pose to individual patients’ health and safety. This paper provides an overview of these signal processing techniques and summarizes recent advances made with digital transducers in hopes of organizing the highly varied research on cervical auscultation. It investigates where on the body these transducers are placed in order to record a signal as well as the collection of analog and digital filtering techniques used to further improve the signal quality. It also presents the wide array of methods and features used to characterize these signals, ranging from simply counting the number of swallows that occur over a period of time to calculating various descriptive features in the time, frequency, and phase space domains. Finally, this paper presents the algorithms that have been used to classify this data into ‘normal’ and ‘abnormal’ categories. Both linear as well as non-linear techniques are presented in this regard. PMID:26213659

  11. Burst design and signal processing for the speed of sound measurement of fluids with the pulse-echo technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dubberke, Frithjof H.; Baumhögger, Elmar; Vrabec, Jadran, E-mail: jadran.vrabec@upb.de

    2015-05-15

    The pulse-echo technique determines the propagation time of acoustic wave bursts in a fluid over a known propagation distance. It is limited by the signal quality of the received echoes of the acoustic wave bursts, which degrades with decreasing density of the fluid due to acoustic impedance and attenuation effects. Signal sampling is significantly improved in this work by burst design and signal processing such that a wider range of thermodynamic states can be investigated. Applying a Fourier transformation based digital filter on acoustic wave signals increases their signal-to-noise ratio and enhances their time and amplitude resolutions, improving the overall measurement accuracy. In addition, burst design leads to technical advantages for determining the propagation time due to the associated conditioning of the echo. It is shown that the according operation procedure enlarges the measuring range of the pulse-echo technique for supercritical argon and nitrogen at 300 K down to 5 MPa, where it was limited to around 20 MPa before.
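
    The Fourier-domain filtering step described here can be illustrated with a simple zero-phase band-pass around the burst carrier, followed by a cross-correlation between two echo windows to obtain the propagation-time difference. The carrier frequency, bandwidth, and the sample index used to split the two echoes are hypothetical parameters; the actual burst design and timing logic of the instrument are not reproduced.

```python
import numpy as np

def bandpass_fft(x, fs, f0, bw):
    """Zero-phase band-pass filter in the Fourier domain, keeping only a
    band of width bw centred on the burst carrier f0."""
    spec = np.fft.rfft(x)
    f = np.fft.rfftfreq(x.size, 1.0 / fs)
    spec[np.abs(f - f0) > bw / 2.0] = 0.0
    return np.fft.irfft(spec, n=x.size)

def echo_time_difference(trace, fs, f0, bw, split):
    """Time between two echoes in one trace: filter, split the trace at
    sample index `split`, and locate the cross-correlation maximum."""
    y = bandpass_fft(trace, fs, f0, bw)
    e1, e2 = y[:split], y[split:]
    xc = np.correlate(e2, e1, mode="full")
    lag = np.argmax(np.abs(xc)) - (e1.size - 1) + split
    return lag / fs
```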

  12. Modern Techniques in Acoustical Signal and Image Processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Candy, J V

    2002-04-04

    Acoustical signal processing problems can lead to some complex and intricate techniques to extract the desired information from noisy, sometimes inadequate, measurements. The challenge is to formulate a meaningful strategy that is aimed at performing the processing required even in the face of uncertainties. This strategy can be as simple as a transformation of the measured data to another domain for analysis or as complex as embedding a full-scale propagation model into the processor. The aims of both approaches are the same--to extract the desired information and reject the extraneous, that is, develop a signal processing scheme to achieve this goal. In this paper, we briefly discuss this underlying philosophy from a ''bottom-up'' approach enabling the problem to dictate the solution rather than vice versa.

  13. Book: Marine Bioacoustic Signal Processing and Analysis

    DTIC Science & Technology

    2011-09-30

    physicists, and mathematicians. However, more and more biologists and psychologists are starting to use advanced signal processing techniques and ... chapters than it should be, since the project must be finished by Dec. 31. I have started setting aside 2 hours of uninterrupted time per workday to work

  14. Digital signal processing in microwave radiometers

    NASA Technical Reports Server (NTRS)

    Lawrence, R. W.; Stanley, W. D.; Harrington, R. F.

    1980-01-01

    A microprocessor based digital signal processing unit has been proposed to replace analog sections of a microwave radiometer. A brief introduction to the radiometer system involved and a description of problems encountered in the use of digital techniques in radiometer design are discussed. An analysis of the digital signal processor as part of the radiometer is then presented.

  15. Improving EEG-Based Motor Imagery Classification for Real-Time Applications Using the QSA Method.

    PubMed

    Batres-Mendoza, Patricia; Ibarra-Manzano, Mario A; Guerra-Hernandez, Erick I; Almanza-Ojeda, Dora L; Montoro-Sanjose, Carlos R; Romero-Troncoso, Rene J; Rostro-Gonzalez, Horacio

    2017-01-01

    We present an improvement to the quaternion-based signal analysis (QSA) technique to extract electroencephalography (EEG) signal features with a view to developing real-time applications, particularly in motor imagery (IM) cognitive processes. The proposed methodology (iQSA, improved QSA) extracts features such as the average, variance, homogeneity, and contrast of EEG signals related to motor imagery in a more efficient manner (i.e., by reducing the number of samples needed to classify the signal and improving the classification percentage) compared to the original QSA technique. Specifically, we can sample the signal in variable time periods (from 0.5 s to 3 s, in half-a-second intervals) to determine the relationship between the number of samples and their effectiveness in classifying signals. In addition, to strengthen the classification process a number of boosting-technique-based decision trees were implemented. The results show an 82.30% accuracy rate for 0.5 s samples and 73.16% for 3 s samples. This is a significant improvement compared to the original QSA technique that offered results from 33.31% to 40.82% without sampling window and from 33.44% to 41.07% with sampling window, respectively. We can thus conclude that iQSA is better suited to develop real-time applications.

  16. Improving EEG-Based Motor Imagery Classification for Real-Time Applications Using the QSA Method

    PubMed Central

    Batres-Mendoza, Patricia; Guerra-Hernandez, Erick I.; Almanza-Ojeda, Dora L.; Montoro-Sanjose, Carlos R.

    2017-01-01

    We present an improvement to the quaternion-based signal analysis (QSA) technique to extract electroencephalography (EEG) signal features with a view to developing real-time applications, particularly in motor imagery (IM) cognitive processes. The proposed methodology (iQSA, improved QSA) extracts features such as the average, variance, homogeneity, and contrast of EEG signals related to motor imagery in a more efficient manner (i.e., by reducing the number of samples needed to classify the signal and improving the classification percentage) compared to the original QSA technique. Specifically, we can sample the signal in variable time periods (from 0.5 s to 3 s, in half-a-second intervals) to determine the relationship between the number of samples and their effectiveness in classifying signals. In addition, to strengthen the classification process a number of boosting-technique-based decision trees were implemented. The results show an 82.30% accuracy rate for 0.5 s samples and 73.16% for 3 s samples. This is a significant improvement compared to the original QSA technique that offered results from 33.31% to 40.82% without sampling window and from 33.44% to 41.07% with sampling window, respectively. We can thus conclude that iQSA is better suited to develop real-time applications. PMID:29348744

  17. Signal quality enhancement using higher order wavelets for ultrasonic TOFD signals from austenitic stainless steel welds.

    PubMed

    Praveen, Angam; Vijayarekha, K; Abraham, Saju T; Venkatraman, B

    2013-09-01

    Time of flight diffraction (TOFD) is a well-developed ultrasonic non-destructive testing (NDT) method and has been applied successfully for accurate sizing of defects in metallic materials. The technique, developed in the early 1970s as a means for accurate sizing and positioning of cracks in nuclear components, became very popular in the late 1990s and is today widely used in various industries for weld inspection. One of the main advantages of TOFD is that, apart from being a fast technique, it provides a higher probability of detection for linear defects. Since TOFD is based on diffraction of sound waves from the extremities of the defect, rather than reflection from planar faces as in pulse echo and phased array, the resultant signal is quite weak and the signal-to-noise ratio (SNR) low. In many cases the defect signal is submerged in this noise, making detection, positioning and sizing difficult. Several signal processing methods such as digital filtering, Split Spectrum Processing (SSP), Hilbert Transform and correlation techniques have been developed to suppress unwanted noise and enhance the quality of the defect signal, which can then be used for characterization of defects and the material. Wavelet Transform based thresholding techniques have been applied largely for de-noising of ultrasonic signals. In this paper, however, higher order wavelets are used for analyzing the de-noising performance for TOFD signals obtained from austenitic stainless steel welds. It is observed that higher order wavelets give greater SNR improvement compared to lower order wavelets. Copyright © 2013 Elsevier B.V. All rights reserved.
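
    A standard wavelet-thresholding de-noising pass of the kind evaluated in the paper can be sketched with PyWavelets; the soft universal threshold and the choice of db8 as a representative higher-order wavelet are illustrative assumptions rather than the exact settings used by the authors.

```python
import numpy as np
import pywt

def wavelet_denoise(tofd_ascan, wavelet="db8", level=5):
    """De-noise a TOFD A-scan by soft-thresholding its wavelet detail
    coefficients with the universal threshold."""
    coeffs = pywt.wavedec(tofd_ascan, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # noise estimate
    thr = sigma * np.sqrt(2.0 * np.log(len(tofd_ascan)))    # universal threshold
    shrunk = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(shrunk, wavelet)
```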

  18. Rapid Structured Volume Grid Smoothing and Adaption Technique

    NASA Technical Reports Server (NTRS)

    Alter, Stephen J.

    2006-01-01

    A rapid, structured volume grid smoothing and adaption technique, based on signal processing methods, was developed and applied to the Shuttle Orbiter at hypervelocity flight conditions in support of the Columbia Accident Investigation. Because of the fast pace of the investigation, computational aerothermodynamicists, applying hypersonic viscous flow solving computational fluid dynamic (CFD) codes, refined and enhanced a grid for an undamaged baseline vehicle to assess a variety of damage scenarios. Of the many methods available to modify a structured grid, most are time-consuming and require significant user interaction. By casting the grid data into different coordinate systems, specifically two computational coordinates with arclength as the third coordinate, signal processing methods are used for filtering the data [Taubin, CG v/29 1995]. Using a reverse transformation, the processed data are used to smooth the Cartesian coordinates of the structured grids. By coupling the signal processing method with existing grid operations within the Volume Grid Manipulator tool, problems related to grid smoothing are solved efficiently and with minimal user interaction. Examples of these smoothing operations are illustrated for reductions in grid stretching and volume grid adaptation. In each of these examples, other techniques existed at the time of the Columbia accident, but the incorporation of signal processing techniques reduced the time to perform the corrections by nearly 60%. This reduction in time to perform the corrections therefore enabled the assessment of approximately twice the number of damage scenarios than previously possible during the allocated investigation time.
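
    The signal-processing filter cited from Taubin is, in one dimension, the familiar lambda-mu smoothing pass; a minimal sketch applied to a single grid line (an open polyline of points) is given below. The coefficients and iteration count are generic values, and the coordinate transformations and Volume Grid Manipulator coupling described in the abstract are not reproduced.

```python
import numpy as np

def taubin_smooth(points, lam=0.5, mu=-0.53, n_iter=20):
    """Taubin lambda-mu smoothing of an open polyline of grid points
    (array of shape (n, 3)). Alternating a positive (lam) and a slightly
    larger negative (mu) Laplacian step smooths without the shrinkage of
    plain Laplacian smoothing; the endpoints are held fixed."""
    p = np.array(points, dtype=float)
    for _ in range(n_iter):
        for factor in (lam, mu):
            lap = np.zeros_like(p)
            lap[1:-1] = 0.5 * (p[:-2] + p[2:]) - p[1:-1]   # umbrella operator
            p += factor * lap
    return p
```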

  19. Rapid Structured Volume Grid Smoothing and Adaption Technique

    NASA Technical Reports Server (NTRS)

    Alter, Stephen J.

    2004-01-01

    A rapid, structured volume grid smoothing and adaption technique, based on signal processing methods, was developed and applied to the Shuttle Orbiter at hypervelocity flight conditions in support of the Columbia Accident Investigation. Because of the fast pace of the investigation, computational aerothermodynamicists, applying hypersonic viscous flow solving computational fluid dynamic (CFD) codes, refined and enhanced a grid for an undamaged baseline vehicle to assess a variety of damage scenarios. Of the many methods available to modify a structured grid, most are time-consuming and require significant user interaction. By casting the grid data into different coordinate systems, specifically two computational coordinates with arclength as the third coordinate, signal processing methods are used for filtering the data [Taubin, CG v/29 1995]. Using a reverse transformation, the processed data are used to smooth the Cartesian coordinates of the structured grids. By coupling the signal processing method with existing grid operations within the Volume Grid Manipulator tool, problems related to grid smoothing are solved efficiently and with minimal user interaction. Examples of these smoothing operations are illustrated for reduction in grid stretching and volume grid adaptation. In each of these examples, other techniques existed at the time of the Columbia accident, but the incorporation of signal processing techniques reduced the time to perform the corrections by nearly 60%. This reduction in time to perform the corrections therefore enabled the assessment of approximately twice the number of damage scenarios than previously possible during the allocated investigation time.

  20. Controlling basins of attraction in a neural network-based telemetry monitor

    NASA Technical Reports Server (NTRS)

    Bell, Benjamin; Eilbert, James L.

    1988-01-01

    The size of the basins of attraction around fixed points in recurrent neural nets (NNs) can be modified by a training process. Controlling these attractive regions by presenting training data with various amounts of noise added to the prototype signal vectors is discussed. Application of this technique to signal processing results in a classification system whose sensitivity can be controlled. This new technique is applied to the classification of temporal sequences in telemetry data.

  1. Experimental quantification of the true efficiency of carbon nanotube thin-film thermophones.

    PubMed

    Bouman, Troy M; Barnard, Andrew R; Asgarisabet, Mahsa

    2016-03-01

    Carbon nanotube thermophones can create acoustic waves from 1 Hz to 100 kHz. The thermoacoustic effect that allows for this non-vibrating sound source is naturally inefficient. Prior efforts have not explored their true efficiency (i.e., the ratio of the total acoustic power to the electrical input power). All previous works have used the ratio of sound pressure to input electrical power. A method for true power efficiency measurement is shown using a fully anechoic technique. True efficiency data are presented for three different drive signal processing techniques: standard alternating current (AC), direct current added to alternating current (DCAC), and amplitude modulation of an alternating current (AMAC) signal. These signal processing techniques are needed to limit the frequency-doubling nonlinear effects inherent to carbon nanotube thermophones. Each type of processing affects the true efficiency differently. Using a 72 W (rms) input signal, the measured efficiency ranges were 4.3 × 10⁻⁶ to 319 × 10⁻⁶, 1.7 × 10⁻⁶ to 308 × 10⁻⁶, and 1.2 × 10⁻⁶ to 228 × 10⁻⁶% for AC, DCAC, and AMAC, respectively. These data were measured in the frequency range of 100 Hz to 10 kHz. In addition, the effects of these processing techniques relative to sound quality are presented in terms of total harmonic distortion.
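
    To make the role of the three drive schemes concrete, the sketch below synthesizes AC, DCAC and AMAC drive waveforms and inspects the spectrum of their instantaneous electrical power, which is a rough proxy for the thermoacoustic output. The sample rate, carrier frequency and modulation depths are assumed values, not those used in the measurements.

    ```python
    # Minimal sketch of the three drive-signal schemes named in the abstract.
    # Because thermoacoustic output tracks instantaneous electrical power (~i^2),
    # a plain AC drive at f radiates at 2f; DCAC and AMAC place energy back at f.
    # Amplitudes, frequencies and modulation depths below are illustrative.
    import numpy as np

    fs = 200_000                       # sample rate, Hz (assumed)
    t = np.arange(0, 0.01, 1 / fs)     # 10 ms of drive signal
    f_target = 1_000.0                 # desired acoustic frequency, Hz

    ac = np.cos(2 * np.pi * f_target * t)                        # sound appears at 2*f_target
    dcac = 1.0 + 0.5 * np.cos(2 * np.pi * f_target * t)          # DC bias + AC: sound at f_target
    carrier = np.cos(2 * np.pi * 20_000.0 * t)                   # ultrasonic carrier (assumed 20 kHz)
    amac = (1.0 + 0.8 * np.cos(2 * np.pi * f_target * t)) * carrier  # AM of the carrier

    for name, drive in (("AC", ac), ("DCAC", dcac), ("AMAC", amac)):
        power = drive ** 2                                       # proxy for heating power
        spectrum = np.abs(np.fft.rfft(power - power.mean()))
        freqs = np.fft.rfftfreq(power.size, 1 / fs)
        print(name, "dominant acoustic component near",
              round(freqs[np.argmax(spectrum)]), "Hz")
    ```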

  2. Develop Advanced Nonlinear Signal Analysis Topographical Mapping System

    NASA Technical Reports Server (NTRS)

    Jong, Jen-Yi

    1997-01-01

    During the development of the SSME, a hierarchy of advanced signal analysis techniques for mechanical signature analysis was developed by NASA and AI Signal Research Inc. (ASRI) to improve the safety and reliability of Space Shuttle operations. These techniques can process and identify intelligent information hidden in a measured signal which is often unidentifiable using conventional signal analysis methods. Currently, due to the highly interactive processing requirements and the volume of dynamic data involved, detailed diagnostic analysis is performed manually, which requires immense man-hours and extensive human interaction. To overcome this manual process, NASA implemented this program to develop an Advanced Nonlinear Signal Analysis Topographical Mapping System (ATMS) to provide automatic/unsupervised engine diagnostic capabilities. The ATMS utilizes a rule-based CLIPS expert system to supervise a hierarchy of diagnostic signature analysis techniques in the Advanced Signal Analysis Library (ASAL). ASAL performs automatic signal processing, archiving, and anomaly detection/identification tasks in order to provide an intelligent and fully automated engine diagnostic capability. The ATMS has been successfully developed under this contract. In summary, the program objectives to design, develop, test and conduct performance evaluation for an automated engine diagnostic system have been successfully achieved. Software implementation of the entire ATMS system on MSFC's OISPS computer has been completed. The significance of the ATMS developed under this program is attributed to the fully automated coherence analysis capability for anomaly detection and identification, which can greatly enhance the power and reliability of engine diagnostic evaluation. The results have demonstrated that ATMS can significantly save time and man-hours in performing engine test/flight data analysis and performance evaluation of large volumes of dynamic test data.

  3. Quantitative Aspects of Single Molecule Microscopy

    PubMed Central

    Ober, Raimund J.; Tahmasbi, Amir; Ram, Sripad; Lin, Zhiping; Ward, E. Sally

    2015-01-01

    Single molecule microscopy is a relatively new optical microscopy technique that allows the detection of individual molecules such as proteins in a cellular context. This technique has generated significant interest among biologists, biophysicists and biochemists, as it holds the promise to provide novel insights into subcellular processes and structures that otherwise cannot be gained through traditional experimental approaches. Single molecule experiments place stringent demands on experimental and algorithmic tools due to the low signal levels and the presence of significant extraneous noise sources. Consequently, this has necessitated the use of advanced statistical signal and image processing techniques for the design and analysis of single molecule experiments. In this tutorial paper, we provide an overview of single molecule microscopy from early works to current applications and challenges. Specific emphasis will be on the quantitative aspects of this imaging modality, in particular single molecule localization and resolvability, which will be discussed from an information theoretic perspective. We review the stochastic framework for image formation, different types of estimation techniques and expressions for the Fisher information matrix. We also discuss several open problems in the field that demand highly non-trivial signal processing algorithms. PMID:26167102

  4. Frequency domain laser velocimeter signal processor: A new signal processing scheme

    NASA Technical Reports Server (NTRS)

    Meyers, James F.; Clemmons, James I., Jr.

    1987-01-01

    A new scheme for processing signals from laser velocimeter systems is described. The technique utilizes the capabilities of advanced digital electronics to yield a smart instrument that is able to configure itself, based on the characteristics of the input signals, for optimum measurement accuracy. The signal processor is composed of a high-speed 2-bit transient recorder for signal capture and a combination of adaptive digital filters with energy and/or zero crossing detection signal processing. The system is designed to accept signals with frequencies up to 100 MHz with standard deviations up to 20 percent of the average signal frequency. Results from comparative simulation studies indicate measurement accuracies 2.5 times better than with a high-speed burst counter, from signals with as few as 150 photons per burst.

  5. Non Destructive Analysis of Fsw Welds using Ultrasonic Signal Analysis

    NASA Astrophysics Data System (ADS)

    Pavan Kumar, T.; Prabhakar Reddy, P.

    2017-08-01

    Friction stir welding is an evolving metal joining technique and is mostly used for joining materials that cannot easily be joined by other available welding techniques, including dissimilar materials. The strength of the weld joint is determined by the way in which the materials mix with each other; since no filler material is used in the welding process, this intermixing is of particular importance. The complication with the friction stir welding process is that many process parameters affect the intermixing, such as tool geometry, tool rotation speed and traverse speed. In this study an attempt is made to compare the material flow and weld quality of various weldments by changing these parameters. Ultrasonic signal analysis is used to characterize the microstructure of the weldments; the use of ultrasonic waves is a non-destructive, accurate and fast way of characterizing microstructure. In this method the relationships between the measured ultrasonic parameters and the microstructure are evaluated using background-echo and backscattered-signal processing techniques. The ultrasonic velocity and attenuation measurements depend on the elastic modulus, and any change in the microstructure is reflected in the ultrasonic velocity. An insight into material flow is essential to determine the quality of the weld. Hence experiments are conducted to weld dissimilar aluminium alloys, and the weldments are characterized using ultrasonic signal processing to relate tool geometry to the pattern of material flow and the resulting weld quality. Characterization is also done using scanning electron microscopy, and a good correlation is observed between the ultrasonic signal processing results and the scanning electron microscopy observations of the precipitates. Tensile and hardness tests are conducted on the weldments and compared to determine the weld quality.

  6. Ultra high speed image processing techniques. [electronic packaging techniques

    NASA Technical Reports Server (NTRS)

    Anthony, T.; Hoeschele, D. F.; Connery, R.; Ehland, J.; Billings, J.

    1981-01-01

    Packaging techniques for ultra high speed image processing were developed. These techniques involve the development of a signal feedthrough technique through LSI/VLSI sapphire substrates. This allows the stacking of LSI/VLSI circuit substrates in a 3-dimensional package with a greatly reduced length of interconnecting lines between the LSI/VLSI circuits. The reduced parasitic capacitances result in higher LSI/VLSI computational speeds at significantly reduced power consumption levels.

  7. Digital processing of RF signals from optical frequency combs

    NASA Astrophysics Data System (ADS)

    Cizek, Martin; Smid, Radek; Buchta, Zdeněk; Mikel, Břetislav; Lazar, Josef; Cip, Ondřej

    2013-01-01

    The presented work is focused on digital processing of beat note signals from a femtosecond optical frequency comb. The levels of mixing products of single spectral components of the comb with CW laser sources are usually very low compared to products of mixing all the comb components together. RF counters are more likely to measure the frequency of the strongest spectral component rather than a weak beat note. The proposed experimental digital signal processing system solves this problem by analyzing the whole spectrum of the output RF signal and using software defined radio (SDR) algorithms. Our efforts concentrate in two main areas: firstly, using digital servo-loop techniques for locking free-running continuous laser sources on single components of the fs comb spectrum; secondly, experimenting with digital signal processing of the RF beat note spectrum produced by the f-2f technique used for assessing the offset and repetition frequencies of the comb, resulting in digital servo-loop stabilization of the fs comb. Software capable of computing and analyzing the beat-note RF spectra using FFT and peak detection was developed. An SDR algorithm performing phase demodulation on the f-2f signal is used as a regulation error signal source for a digital phase-locked loop stabilizing the offset frequency of the fs comb.

  8. Digital processing of signals from femtosecond combs

    NASA Astrophysics Data System (ADS)

    Čížek, Martin; Šmíd, Radek; Buchta, Zdeněk; Mikel, Břetislav; Lazar, Josef; Číp, Ondrej

    2012-01-01

    The presented work is focused on digital processing of beat note signals from a femtosecond optical frequency comb. The levels of mixing products of single spectral components of the comb with CW laser sources are usually very low compared to products of mixing all the comb components together. RF counters are more likely to measure the frequency of the strongest spectral component rather than a weak beat note. The proposed experimental digital signal processing system solves this problem by analyzing the whole spectrum of the output RF signal and using software defined radio (SDR) algorithms. Our efforts concentrate in two main areas: firstly, experimenting with digital signal processing of the RF beat note spectrum produced by the f-2f technique and with fully digital servo-loop stabilization of the fs comb; secondly, using digital servo-loop techniques for locking free-running continuous laser sources on single components of the fs comb spectrum. Software capable of computing and analyzing the beat-note RF spectra using FFT and peak detection was developed. An SDR algorithm performing phase demodulation on the f-2f signal is used as a regulation error signal source for a digital phase-locked loop stabilizing the offset and repetition frequencies of the fs comb.

  9. Parametric Techniques for Multichannel Signal Processing.

    DTIC Science & Technology

    1985-10-01

    Report AD-A165 649, Systems Control Technology, Inc., 1801 Page Mill Road, Palo Alto, CA 94304; B. Friedlander, October 1985; prepared under Contract No. DAAG29-83-C-0027. Approved for public release.

  10. Unveiling the Biometric Potential of Finger-Based ECG Signals

    PubMed Central

    Lourenço, André; Silva, Hugo; Fred, Ana

    2011-01-01

    The ECG signal has been shown to contain relevant information for human identification. Even though results validate the potential of these signals, the data acquisition methods and apparatus explored so far compromise user acceptability, requiring the acquisition of ECG at the chest. In this paper, we propose a finger-based ECG biometric system that uses signals collected at the fingers through a minimally intrusive 1-lead ECG setup, using Ag/AgCl electrodes without gel as the interface with the skin. The collected signal is significantly noisier than the ECG acquired at the chest, motivating the application of feature extraction and signal processing techniques to the problem. Time domain ECG signal processing is performed, which comprises the usual steps of filtering, peak detection, heartbeat waveform segmentation, and amplitude normalization, plus an additional step of time normalization. Through a simple minimum distance criterion between the test patterns and the enrollment database, results have revealed this to be a promising technique for biometric applications. PMID:21837235

  11. Unveiling the biometric potential of finger-based ECG signals.

    PubMed

    Lourenço, André; Silva, Hugo; Fred, Ana

    2011-01-01

    The ECG signal has been shown to contain relevant information for human identification. Even though results validate the potential of these signals, the data acquisition methods and apparatus explored so far compromise user acceptability, requiring the acquisition of ECG at the chest. In this paper, we propose a finger-based ECG biometric system that uses signals collected at the fingers through a minimally intrusive 1-lead ECG setup, using Ag/AgCl electrodes without gel as the interface with the skin. The collected signal is significantly noisier than the ECG acquired at the chest, motivating the application of feature extraction and signal processing techniques to the problem. Time domain ECG signal processing is performed, which comprises the usual steps of filtering, peak detection, heartbeat waveform segmentation, and amplitude normalization, plus an additional step of time normalization. Through a simple minimum distance criterion between the test patterns and the enrollment database, results have revealed this to be a promising technique for biometric applications.

  12. High-resolution correlation

    NASA Astrophysics Data System (ADS)

    Nelson, D. J.

    2007-09-01

    In the basic correlation process, a sequence of time-lag-indexed correlation coefficients is computed as the inner or dot product of segments of two signals. The time lag(s) for which the magnitude of the correlation coefficient sequence is maximized is the estimated relative time delay of the two signals. For discrete sampled signals, the delay estimated in this manner is quantized with the same relative accuracy as the clock used in sampling the signals. In addition, the correlation coefficients are real if the input signals are real. Many methods have been proposed to estimate signal delay to greater accuracy than the sample interval of the digitizer clock, with some success. These methods include interpolation of the correlation coefficients, estimation of the signal delay from the group delay function, and beam forming techniques such as the MUSIC algorithm. For spectral estimation, techniques based on phase differentiation have been popular, but these techniques have apparently not been applied to the correlation problem. We propose a phase-based delay estimation method (PBDEM) based on the phase of the correlation function that provides a significant improvement in the accuracy of time delay estimation. In this process, the standard correlation function is first calculated. A time lag error function is then calculated from the correlation phase and is used to interpolate the correlation function. The signal delay is shown to be accurately estimated as the zero crossing of the correlation phase near the index of the peak correlation magnitude. This process is nearly as fast as the conventional correlation function on which it is based. For real valued signals, a simple modification is provided, which results in the same correlation accuracy as is obtained for complex valued signals.
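
    A minimal sketch of the phase-based idea is given below: the analytic signals are cross-correlated, the integer-lag peak is located, and the zero crossing of the correlation phase near that peak is interpolated to obtain a sub-sample delay. The test waveform, noise level and 5.3-sample true delay are illustrative, and the simple linear interpolation stands in for the paper's time-lag error function.

    ```python
    # Minimal sketch of phase-based sub-sample delay estimation: correlate the
    # analytic signals, find the peak magnitude, then interpolate the zero
    # crossing of the correlation phase near that peak. Signal and delay are
    # illustrative, not the method's exact formulation.
    import numpy as np
    from scipy.signal import hilbert, fftconvolve

    fs = 1000.0
    t = np.arange(0, 1.0, 1 / fs)
    rng = np.random.default_rng(2)
    base = np.sin(2 * np.pi * 50 * t) * np.exp(-((t - 0.5) ** 2) / 0.01)

    true_delay = 5.3 / fs                          # 5.3 samples
    n = t.size
    freqs = np.fft.fftfreq(n, 1 / fs)
    delayed = np.real(np.fft.ifft(np.fft.fft(base) * np.exp(-2j * np.pi * freqs * true_delay)))

    x = hilbert(base + 0.02 * rng.standard_normal(n))
    y = hilbert(delayed + 0.02 * rng.standard_normal(n))

    # complex cross-correlation R[L] = sum_k y[k] * conj(x[k - L])
    corr = fftconvolve(y, np.conj(x[::-1]), mode="full")
    lags = np.arange(-n + 1, n)

    k = np.argmax(np.abs(corr))                    # coarse (integer-lag) estimate
    phase = np.angle(corr)
    # the phase crosses zero at the true delay; interpolate between adjacent lags
    k0, k1 = (k - 1, k) if phase[k] > 0 else (k, k + 1)
    frac = -phase[k0] / (phase[k1] - phase[k0])
    delay_est = (lags[k0] + frac) / fs
    print(f"true delay {true_delay * fs:.2f} samples, estimated {delay_est * fs:.2f} samples")
    ```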

  13. A Software Platform for Post-Processing Waveform-Based NDE

    NASA Technical Reports Server (NTRS)

    Roth, Donald J.; Martin, Richard E.; Seebo, Jeff P.; Trinh, Long B.; Walker, James L.; Winfree, William P.

    2007-01-01

    Ultrasonic, microwave, and terahertz nondestructive evaluation imaging systems generally require the acquisition of waveforms at each scan point to form an image. For such systems, signal and image processing methods are commonly needed to extract information from the waves and improve resolution of, and highlight, defects in the image. Since some similarity exists for all waveform-based NDE methods, it would seem a common software platform containing multiple signal and image processing techniques to process the waveforms and images makes sense where multiple techniques, scientists, engineers, and organizations are involved. This presentation describes NASA Glenn Research Center's approach in developing a common software platform for processing waveform-based NDE signals and images. This platform is currently in use at NASA Glenn and at Lockheed Martin Michoud Assembly Facility for processing of pulsed terahertz and ultrasonic data. Highlights of the software operation will be given. A case study will be shown for use with terahertz data. The authors also request scientists and engineers who are interested in sharing customized signal and image processing algorithms to contribute to this effort by letting the authors code up and include these algorithms in future releases.

  14. A NOVEL TECHNIQUE APPLYING SPECTRAL ESTIMATION TO JOHNSON NOISE THERMOMETRY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ezell, N Dianne Bull; Britton Jr, Charles L; Roberts, Michael

    Johnson noise thermometry (JNT) is one of many important measurements used to monitor the safety levels and stability in a nuclear reactor. However, this measurement is very dependent on the electromagnetic environment. Properly removing unwanted electromagnetic interference (EMI) is critical for accurate drift free temperature measurements. The two techniques developed by Oak Ridge National Laboratory (ORNL) to remove transient and periodic EMI are briefly discussed in this document. Spectral estimation is a key component in the signal processing algorithm utilized for EMI removal and temperature calculation. Applying these techniques requires the simple addition of the electronics and signal processing to existing resistive thermometers.

  15. Transient high frequency signal estimation: A model-based processing approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barnes, F.L.

    1985-03-22

    By utilizing the superposition property of linear systems, a method of estimating the incident signal from reflective nondispersive data is developed. One of the basic merits of this approach is that the reflections were removed by direct application of a Wiener-type estimation algorithm after the appropriate input was synthesized. The structure of the nondispersive signal model is well documented, and thus its credence is established. The model is stated, and more effort is devoted to practical methods of estimating the model parameters. Though a general approach was developed for obtaining the reflection weights, a simpler approach was employed here, since a fairly good reflection model is available. The technique essentially consists of calculating ratios of the autocorrelation function at lag zero and at the lag where the incident signal and first reflection coincide. We initially performed our processing procedure on a measurement of a single signal. Multiple application of the processing procedure was required when we applied the reflection removal technique to a measurement containing information from the interaction of two physical phenomena. All processing was performed using SIG, an interactive signal processing package. One of the many consequences of using SIG was that repetitive operations were, for the most part, automated. A custom menu was designed to perform the deconvolution process.

  16. A low-rank matrix recovery approach for energy efficient EEG acquisition for a wireless body area network.

    PubMed

    Majumdar, Angshul; Gogna, Anupriya; Ward, Rabab

    2014-08-25

    We address the problem of acquiring and transmitting EEG signals in Wireless Body Area Networks (WBAN) in an energy efficient fashion. In WBANs, the energy is consumed by three operations: sensing (sampling), processing and transmission. Previous studies only addressed the problem of reducing the transmission energy. For the first time, in this work, we propose a technique to reduce sensing and processing energy as well: this is achieved by randomly under-sampling the EEG signal. We depart from previous Compressed Sensing based approaches and formulate signal recovery (from under-sampled measurements) as a matrix completion problem. A new algorithm to solve the matrix completion problem is derived here. We test our proposed method and find that the reconstruction accuracy of our method is significantly better than state-of-the-art techniques; and we achieve this while saving sensing, processing and transmission energy. Simple power analysis shows that our proposed methodology consumes considerably less power compared to previous CS based techniques.
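
    The sketch below illustrates the general matrix-completion formulation with a plain iterative singular-value thresholding solver applied to a synthetic low-rank multichannel record; the paper derives its own algorithm, so this is only a stand-in under assumed dimensions, sampling ratio and threshold.

    ```python
    # Minimal sketch of recovering randomly under-sampled multichannel EEG by
    # low-rank matrix completion (plain iterative singular-value thresholding);
    # the paper's own solver is not reproduced, and the data are synthetic.
    import numpy as np

    rng = np.random.default_rng(3)
    n_ch, n_t, rank = 16, 256, 3
    # synthetic low-rank "EEG": a few latent sources mixed into many channels
    X = rng.standard_normal((n_ch, rank)) @ rng.standard_normal((rank, n_t))

    mask = rng.random(X.shape) < 0.4          # keep 40% of the samples
    Y = np.where(mask, X, 0.0)                # observed (under-sampled) data

    def svt_complete(Y, mask, tau=5.0, n_iter=300):
        """Iterate: shrink singular values, then re-impose the observed entries."""
        Z = Y.copy()
        for _ in range(n_iter):
            U, s, Vt = np.linalg.svd(Z, full_matrices=False)
            Z = (U * np.maximum(s - tau, 0.0)) @ Vt   # singular-value shrinkage
            Z[mask] = Y[mask]                         # data-consistency step
        return Z

    X_hat = svt_complete(Y, mask)
    err = np.linalg.norm(X_hat - X) / np.linalg.norm(X)
    print(f"relative recovery error: {err:.3f}")
    ```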

  17. Fault detection and isolation in the challenging Tennessee Eastman process by using image processing techniques.

    PubMed

    Hajihosseini, Payman; Anzehaee, Mohammad Mousavi; Behnam, Behzad

    2018-05-22

    The early fault detection and isolation in industrial systems is a critical factor in preventing equipment damage. In the proposed method, instead of using the time signals of sensors, the 2D image obtained by placing these signals next to each other in a matrix has been used; and then a novel fault detection and isolation procedure has been carried out based on image processing techniques. Different features including texture, wavelet transform, mean and standard deviation of the image accompanied with MLP and RBF neural networks based classifiers have been used for this purpose. Obtained results indicate the notable efficacy and success of the proposed method in detecting and isolating faults of the Tennessee Eastman benchmark process and its superiority over previous techniques. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  18. A comparative analysis of frequency modulation threshold extension techniques

    NASA Technical Reports Server (NTRS)

    Arndt, G. D.; Loch, F. J.

    1970-01-01

    FM threshold extension for system performance improvement, comparing impulse noise elimination, correlation detection and delta modulation signal processing techniques implemented at demodulator output

  19. An Overview Of Wideband Signal Analysis Techniques

    NASA Astrophysics Data System (ADS)

    Speiser, Jeffrey M.; Whitehouse, Harper J.

    1989-11-01

    This paper provides a unifying perspective for several narrowband and wideband signal processing techniques. It considers narrowband ambiguity functions and Wigner-Ville distributions, together with the wideband ambiguity function and several proposed approaches to a wideband version of the Wigner-Ville distribution (WVD). A unifying perspective is provided by the methodology of unitary representations and ray representations of transformation groups.

  20. Teaching Earth Signals Analysis Using the Java-DSP Earth Systems Edition: Modern and Past Climate Change

    ERIC Educational Resources Information Center

    Ramamurthy, Karthikeyan Natesan; Hinnov, Linda A.; Spanias, Andreas S.

    2014-01-01

    Modern data collection in the Earth Sciences has propelled the need for understanding signal processing and time-series analysis techniques. However, there is an educational disconnect in the lack of instruction of time-series analysis techniques in many Earth Science academic departments. Furthermore, there are no platform-independent freeware…

  1. Comparative of signal processing techniques for micro-Doppler signature extraction with automotive radar systems

    NASA Astrophysics Data System (ADS)

    Rodriguez-Hervas, Berta; Maile, Michael; Flores, Benjamin C.

    2014-05-01

    In recent years, the automotive industry has experienced an evolution toward more powerful driver assistance systems that provide enhanced vehicle safety. These systems typically operate in the optical and microwave regions of the electromagnetic spectrum and have demonstrated high efficiency in collision and risk avoidance. Microwave radar systems are particularly relevant due to their operational robustness under adverse weather or illumination conditions. Our objective is to study different signal processing techniques suitable for extraction of accurate micro-Doppler signatures of slow moving objects in dense urban environments. Selection of the appropriate signal processing technique is crucial for the extraction of accurate micro-Doppler signatures that will lead to better results in a radar classifier system. For this purpose, we perform simulations of typical radar detection responses in common driving situations and conduct the analysis with several signal processing algorithms, including short time Fourier Transform, continuous wavelet or Kernel based analysis methods. We take into account factors such as the relative movement between the host vehicle and the target, and the non-stationary nature of the target's movement. A comparison of results reveals that short time Fourier Transform would be the best approach for detection and tracking purposes, while the continuous wavelet would be the best suited for classification purposes.

  2. Fiber fault location utilizing traffic signal in optical network.

    PubMed

    Zhao, Tong; Wang, Anbang; Wang, Yuncai; Zhang, Mingjiang; Chang, Xiaoming; Xiong, Lijuan; Hao, Yi

    2013-10-07

    We propose and experimentally demonstrate a method for fault location in optical communication network. This method utilizes the traffic signal transmitted across the network as probe signal, and then locates the fault by correlation technique. Compared with conventional techniques, our method has a simple structure and low operation expenditure, because no additional device is used, such as light source, modulator and signal generator. The correlation detection in this method overcomes the tradeoff between spatial resolution and measurement range in pulse ranging technique. Moreover, signal extraction process can improve the location result considerably. Experimental results show that we achieve a spatial resolution of 8 cm and detection range of over 23 km with -8-dBm mean launched power in optical network based on synchronous digital hierarchy protocols.
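
    The ranging step can be pictured with the sketch below: a pseudo-random traffic-like probe is cross-correlated with a weak delayed echo, and the peak lag is converted to distance through the fiber group index. The sample rate, group index, reflection strength and 2.5 km fault distance are assumed values, not the experimental settings.

    ```python
    # Minimal sketch of correlation ranging: cross-correlate the launched
    # traffic-like signal with the returned signal and map the peak lag to a
    # fault distance via the group velocity. All numbers are illustrative.
    import numpy as np

    c = 2.998e8                # vacuum speed of light, m/s
    n_g = 1.468                # group index of standard single-mode fiber (assumed)
    fs = 1.0e9                 # 1 GS/s acquisition (assumed)

    rng = np.random.default_rng(4)
    traffic = np.sign(rng.standard_normal(200_000))     # pseudo-random traffic bits as probe

    fault_dist = 2500.0                                  # metres
    round_trip = 2 * fault_dist * n_g / c                # seconds
    lag = int(round(round_trip * fs))                    # samples

    echo = np.zeros_like(traffic)
    echo[lag:] = 0.05 * traffic[:-lag]                   # weak reflection from the fault
    echo += 0.02 * rng.standard_normal(echo.size)        # receiver noise

    # FFT-based circular cross-correlation
    spec = np.fft.rfft(echo) * np.conj(np.fft.rfft(traffic))
    xcorr = np.fft.irfft(spec, n=traffic.size)
    est_lag = int(np.argmax(xcorr))
    est_dist = est_lag * c / (2 * n_g * fs)
    print(f"estimated fault distance: {est_dist:.1f} m (true {fault_dist} m)")
    ```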

  3. Digital Signal Processing in Acoustics--Part 2.

    ERIC Educational Resources Information Center

    Davies, H.; McNeill, D. J.

    1986-01-01

    Reviews the potential of a data acquisition system for illustrating the nature and significance of ideas in digital signal processing. Focuses on the fast Fourier transform and the utility of its two-channel format, emphasizing cross-correlation and its two-microphone technique of acoustic intensity measurement. Includes programing format. (ML)

  4. A Virtual Laboratory for Digital Signal Processing

    ERIC Educational Resources Information Center

    Dow, Chyi-Ren; Li, Yi-Hsung; Bai, Jin-Yu

    2006-01-01

    This work designs and implements a virtual digital signal processing laboratory, VDSPL. VDSPL consists of four parts: mobile agent execution environments, mobile agents, DSP development software, and DSP experimental platforms. The network capability of VDSPL is created by using mobile agent and wrapper techniques without modifying the source code…

  5. Displays, memories, and signal processing: A compilation

    NASA Technical Reports Server (NTRS)

    1975-01-01

    Articles on electronics systems and techniques were presented. The first section is on displays and other electro-optical systems; the second section is devoted to signal processing. The third section presented several new memory devices for digital equipment, including articles on holographic memories. The latest patent information available is also given.

  6. Evaluation of Ultrasonic Fiber Structure Extraction Technique Using Autopsy Specimens of Liver

    NASA Astrophysics Data System (ADS)

    Yamaguchi, Tadashi; Hirai, Kazuki; Yamada, Hiroyuki; Ebara, Masaaki; Hachiya, Hiroyuki

    2005-06-01

    It is very important to diagnose liver cirrhosis noninvasively and correctly. In our previous studies, we proposed a processing technique to detect changes in liver tissue in vivo. In this paper, we propose evaluating the relationship between liver disease and echo information using autopsy specimens of a human liver in vitro. In vitro experiments make it possible to verify the function of a processing parameter clearly and to compare the processing result with the actual human liver tissue structure. Using our processing technique, information that does not obey a Rayleigh distribution was extracted from the echo signal of the autopsy liver specimens, depending on changes in a particular processing parameter. The fiber tissue structure of the same specimen was extracted from a number of histological images of stained tissue. We constructed 3D structures using the information extracted from the echo signal and the fiber structure of the stained tissue and compared the two. By comparing the 3D structures, it is possible to evaluate the relationship between the information that does not obey a Rayleigh distribution in the echo signal and the fibrosis structure.

  7. Investigation of charge coupled device correlation techniques

    NASA Technical Reports Server (NTRS)

    Lampe, D. R.; Lin, H. C.; Shutt, T. J.

    1978-01-01

    Analog charge transfer devices (CTDs) offer unique advantages to signal processing systems, which often have large development costs, making it desirable to define those devices which can be developed for general systems use. Such devices are best identified and developed early to give systems designers some interchangeable subsystem blocks that do not require additional individual development for each new signal processing system. The objective of this work is to describe a discrete analog signal processing device with reasonably broad system use and to implement its design, fabrication, and testing.

  8. Relationships between digital signal processing and control and estimation theory

    NASA Technical Reports Server (NTRS)

    Willsky, A. S.

    1978-01-01

    Research directions in the fields of digital signal processing and modern control and estimation theory are discussed. Stability theory, linear prediction and parameter identification, system synthesis and implementation, two-dimensional filtering, decentralized control and estimation, and image processing are considered in order to uncover some of the basic similarities and differences in the goals, techniques, and philosophy of the disciplines.

  9. Video-signal improvement using comb filtering techniques.

    NASA Technical Reports Server (NTRS)

    Arndt, G. D.; Stuber, F. M.; Panneton, R. J.

    1973-01-01

    Significant improvement in the signal-to-noise performance of television signals has been obtained through the application of comb filtering techniques. This improvement is achieved by removing the inherent redundancy in the television signal through linear prediction and by utilizing the unique noise-rejection characteristics of the receiver comb filter. Theoretical and experimental results describe the signal-to-noise ratio and picture-quality improvement obtained through the use of baseband comb filters and the implementation of a comb network as the loop filter in a phase-lock-loop demodulator. Attention is given to the fact that noise becomes correlated when processed by the receiver comb filter.
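
    A one-line-delay feed-forward comb captures the redundancy argument made in this record; the sketch below shows the roughly 3 dB noise reduction on a synthetic line-periodic signal. The line length and test content are made up, and the feedback comb used as the loop filter in the phase-lock-loop demodulator is not modelled.

    ```python
    # Minimal sketch of the comb-filtering idea: television lines are highly
    # redundant, so averaging a sample with the one a full line earlier
    # (y[n] = 0.5*(x[n] + x[n-N])) keeps the signal and rejects about half of
    # the uncorrelated noise power. The line length and content are made up.
    import numpy as np

    rng = np.random.default_rng(5)
    N = 512                                    # samples per "line" (assumed)
    lines = 200
    line = np.sin(2 * np.pi * 7 * np.arange(N) / N)    # one line of video-like content
    signal = np.tile(line, lines)                      # line-periodic signal
    noise = 0.5 * rng.standard_normal(signal.size)
    x = signal + noise

    y = x.copy()
    y[N:] = 0.5 * (x[N:] + x[:-N])             # feed-forward comb over one line delay

    def snr_db(sig, total):
        return 10 * np.log10(np.mean(sig ** 2) / np.mean((total - sig) ** 2))

    print(f"input SNR:  {snr_db(signal, x):.1f} dB")
    print(f"output SNR: {snr_db(signal[N:], y[N:]):.1f} dB   (~3 dB better)")
    ```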

  10. Investigation of signal processing algorithms for an embedded microcontroller-based wearable pulse oximeter.

    PubMed

    Johnston, W S; Mendelson, Y

    2006-01-01

    Despite steady progress in the miniaturization of pulse oximeters over the years, significant challenges remain since advanced signal processing must be implemented efficiently in real-time by a relatively small size wearable device. The goal of this study was to investigate several potential digital signal processing algorithms for computing arterial oxygen saturation (SpO2) and heart rate (HR) in a battery-operated wearable reflectance pulse oximeter that is being developed in our laboratory for use by medics and first responders in the field. We found that a differential measurement approach, combined with a low-pass filter (LPF), yielded the most suitable signal processing technique for estimating SpO2, while a signal derivative approach produced the most accurate HR measurements.
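
    The sketch below illustrates the two estimates discussed in this record with generic, textbook-style processing: SpO2 from a low-pass-filtered red/infrared ratio of ratios with a hypothetical linear calibration, and heart rate from peaks of the signal derivative. The synthetic PPG waveforms and all constants are assumptions, not the algorithms of the device described in the paper.

    ```python
    # Minimal sketch of SpO2 via a low-pass-filtered ratio of ratios and HR via
    # derivative peaks. Synthetic PPG data; the 110 - 25*R calibration line is
    # a generic placeholder, not a validated curve.
    import numpy as np
    from scipy.signal import butter, filtfilt, find_peaks

    fs = 100.0
    t = np.arange(0, 30, 1 / fs)
    hr_true = 72.0                                     # beats per minute
    pulse = 0.02 * np.sin(2 * np.pi * hr_true / 60 * t)
    rng = np.random.default_rng(6)
    red = 1.00 + 0.5 * pulse + 0.002 * rng.standard_normal(t.size)   # smaller AC/DC ratio
    ir = 1.20 + 1.2 * pulse + 0.002 * rng.standard_normal(t.size)    # larger AC/DC ratio

    b, a = butter(2, 5.0 / (fs / 2), btype="low")      # 5 Hz low-pass filter
    red_f, ir_f = filtfilt(b, a, red), filtfilt(b, a, ir)

    def ac_dc(x):
        return (np.percentile(x, 95) - np.percentile(x, 5)) / np.mean(x)

    R = ac_dc(red_f) / ac_dc(ir_f)                     # ratio of ratios
    spo2 = 110.0 - 25.0 * R                            # hypothetical calibration curve
    peaks, _ = find_peaks(np.gradient(ir_f), distance=fs * 0.4)
    hr = 60.0 * fs / np.median(np.diff(peaks))
    print(f"R = {R:.2f}, SpO2 ~ {spo2:.1f} %, HR ~ {hr:.0f} bpm")
    ```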

  11. The technique of entropy optimization in motor current signature analysis and its application in the fault diagnosis of gear transmission

    NASA Astrophysics Data System (ADS)

    Chen, Xiaoguang; Liang, Lin; Liu, Fei; Xu, Guanghua; Luo, Ailing; Zhang, Sicong

    2012-05-01

    Nowadays, motor current signature analysis (MCSA) is widely used in the fault diagnosis and condition monitoring of machine tools. However, because the current signal has a low SNR (signal-to-noise ratio) and the feature frequencies are often dense and overlapping, it is difficult to identify the feature frequencies of machine tools from the complex current spectrum using traditional signal processing methods such as the FFT. In studying MCSA, it is found that entropy, which is associated with the probability distribution of any random variable, is of importance for frequency identification and therefore plays an important role in signal processing. In order to solve the problem that the feature frequencies are difficult to identify, an entropy optimization technique based on the motor current signal is presented in this paper for extracting the typical feature frequencies of machine tools, which can effectively suppress the disturbances. Simulated current signals were generated in MATLAB, and a current signal was obtained from a complex gearbox of an iron works in Luxembourg. In the diagnosis, MCSA is combined with entropy optimization. Both simulated and experimental results show that this technique is efficient, accurate and reliable enough to extract the feature frequencies of the current signal, which provides a new strategy for the fault diagnosis and condition monitoring of machine tools.
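
    The sketch below shows spectral entropy being used as a measure of how identifiable dense feature frequencies are, which is the kind of quantity an entropy-optimization scheme would act on. The simulated current signal (written in Python rather than MATLAB) and the noise levels are illustrative and do not reproduce the paper's gearbox data or its optimization procedure.

    ```python
    # Minimal sketch of spectral entropy as a feature-identifiability measure:
    # the normalised power spectrum of a simulated stator current (mains line
    # plus weak, closely spaced fault sidebands) is treated as a probability
    # distribution and its Shannon entropy is computed. Broadband disturbance
    # spreads the distribution and raises the entropy. Signal model is made up.
    import numpy as np

    fs = 5000.0
    t = np.arange(0, 4.0, 1 / fs)
    rng = np.random.default_rng(7)
    current = (np.sin(2 * np.pi * 50 * t)               # supply component
               + 0.02 * np.sin(2 * np.pi * 47.5 * t)    # lower fault sideband
               + 0.02 * np.sin(2 * np.pi * 52.5 * t)    # upper fault sideband
               + 0.05 * rng.standard_normal(t.size))    # mild measurement noise

    def spectral_entropy(x):
        """Shannon entropy of the normalised windowed power spectrum."""
        p = np.abs(np.fft.rfft(x * np.hanning(x.size))) ** 2
        p /= p.sum()
        return -np.sum(p * np.log(p + 1e-15))

    disturbed = current + 1.0 * rng.standard_normal(t.size)   # heavy broadband disturbance
    print(f"entropy, mild noise:  {spectral_entropy(current):.2f}")
    print(f"entropy, heavy noise: {spectral_entropy(disturbed):.2f} (higher: sidebands harder to identify)")
    ```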

  12. Monitoring of dispersed smoke-plume layers by determining locations of the data-point clusters

    NASA Astrophysics Data System (ADS)

    Kovalev, Vladimir; Wold, Cyle; Petkov, Alexander; Min Hao, Wei

    2018-04-01

    A modified data-processing technique for the signals recorded by a zenith-directed lidar operating in a smoke-polluted atmosphere is discussed. The technique is based on simple transformations of the lidar backscatter signal and the determination of the spatial location of the data point clusters. The technique allows more reliable detection of the location of dispersed smoke layering. Examples of typical results obtained with lidar in a smoke-polluted atmosphere are presented.

  13. Frequency domain laser velocimeter signal processor

    NASA Technical Reports Server (NTRS)

    Meyers, James F.; Murphy, R. Jay

    1991-01-01

    A new scheme for processing signals from laser velocimeter systems is described. The technique utilizes the capabilities of advanced digital electronics to yield a signal processor capable of operating in the frequency domain maximizing the information obtainable from each signal burst. This allows a sophisticated approach to signal detection and processing, with a more accurate measurement of the chirp frequency resulting in an eight-fold increase in measurable signals over the present high-speed burst counter technology. Further, the required signal-to-noise ratio is reduced by a factor of 32, allowing measurements within boundary layers of wind tunnel models. Measurement accuracy is also increased up to a factor of five.

  14. Multipath interference test method using synthesized chirped signal from directly modulated DFB-LD with digital-signal-processing technique.

    PubMed

    Aida, Kazuo; Sugie, Toshihiko

    2011-12-12

    We propose a method of testing transmission fiber lines and distributed amplifiers. Multipath interference (MPI) is detected as a beat spectrum between a multipath signal and a direct signal using a synthesized chirped test signal with lightwave frequencies of f1 and f2 periodically emitted from a distributed feedback laser diode (DFB-LD). This chirped test pulse is generated using a directly modulated DFB-LD with a drive signal calculated using a digital signal processing technique (DSP). A receiver consisting of a photodiode and an electrical spectrum analyzer (ESA) detects a baseband power spectrum peak appearing at the frequency of the test signal frequency deviation (f1 - f2) as a beat spectrum of self-heterodyne detection. Multipath interference is converted from the spectrum peak power. This method improved the minimum detectable MPI to as low as -78 dB. We discuss the detailed design and performance of the proposed test method, including a DFB-LD drive signal calculation algorithm with DSP for synthesis of the chirped test signal and experiments on single-mode fibers with discrete reflections. © 2011 Optical Society of America

  15. An Introduction to Data Analysis in Asteroseismology

    NASA Astrophysics Data System (ADS)

    Campante, Tiago L.

    A practical guide is presented to some of the main data analysis concepts and techniques employed contemporarily in the asteroseismic study of stars exhibiting solar-like oscillations. The subjects of digital signal processing and spectral analysis are introduced first. These concern the acquisition of continuous physical signals to be subsequently digitally analyzed. A number of specific concepts and techniques relevant to asteroseismology are then presented as we follow the typical workflow of the data analysis process, namely, the extraction of global asteroseismic parameters and individual mode parameters (also known as peak-bagging) from the oscillation spectrum.

  16. Deterring watermark collusion attacks using signal processing techniques

    NASA Astrophysics Data System (ADS)

    Lemma, Aweke N.; van der Veen, Michiel

    2007-02-01

    Collusion attack is a malicious watermark removal attack in which the hacker has access to multiple copies of the same content with different watermarks and tries to remove the watermark using averaging. In the literature, several solutions to collusion attacks have been reported. The mainstream solutions aim at designing watermark codes that are inherently resistant to collusion attacks. The other approaches propose signal processing based solutions that aim at modifying the watermarked signals in such a way that averaging multiple copies of the content leads to a significant degradation of the content quality. In this paper, we present a signal processing based technique that may be deployed for deterring collusion attacks. We formulate the problem in the context of electronic music distribution, where the content is generally available in the compressed domain. Thus, we first extend the collusion resistance principles to bit stream signals and secondly present an experiment-based analysis to estimate a bound on the maximum number of modified versions of a content that satisfy a good perceptibility requirement on the one hand and the destructive averaging property on the other hand.

  17. A Comparison of Inductive Sensors in the Characterization of Partial Discharges and Electrical Noise Using the Chromatic Technique.

    PubMed

    Ardila-Rey, Jorge Alfredo; Montaña, Johny; de Castro, Bruno Albuquerque; Schurch, Roger; Covolan Ulson, José Alfredo; Muhammad-Sukki, Firdaus; Bani, Nurul Aini

    2018-03-29

    Partial discharges (PDs) are one of the most important classes of ageing processes that occur within electrical insulation. PD detection is a standardized technique to qualify the state of the insulation in electric assets such as machines and power cables. Generally, the classical phase-resolved partial discharge (PRPD) patterns are used to perform the identification of the type of PD source when they are related to a specific degradation process and when the electrical noise level is low compared to the magnitudes of the PD signals. However, in practical applications such as measurements carried out in the field or in industrial environments, several PD sources and large noise signals are usually present simultaneously. In this study, three different inductive sensors have been used to evaluate and compare their performance in the detection and separation of multiple PD sources by applying the chromatic technique to each of the measured signals.

  18. A SOUND SOURCE LOCALIZATION TECHNIQUE TO SUPPORT SEARCH AND RESCUE IN LOUD NOISE ENVIRONMENTS

    NASA Astrophysics Data System (ADS)

    Yoshinaga, Hiroshi; Mizutani, Koichi; Wakatsuki, Naoto

    At some sites of earthquakes and other disasters, rescuers search for people buried under rubble by listening for the sounds which they make. Thus developing a technique to localize sound sources amidst loud noise will support such search and rescue operations. In this paper, we discuss an experiment performed to test an array signal processing technique which searches for unperceivable sound in loud noise environments. Two speakers simultaneously played a noise of a generator and a voice decreased by 20 dB (= 1/100 of power) from the generator noise at an outdoor space where cicadas were making noise. The sound signal was received by a horizontally set linear microphone array 1.05 m in length and consisting of 15 microphones. The direction and the distance of the voice were computed and the sound of the voice was extracted and played back as an audible sound by array signal processing.

  19. Automated Monitoring with a BSP Fault-Detection Test

    NASA Technical Reports Server (NTRS)

    Bickford, Randall L.; Herzog, James P.

    2003-01-01

    The figure schematically illustrates a method and procedure for automated monitoring of an asset, as well as a hardware- and-software system that implements the method and procedure. As used here, asset could signify an industrial process, power plant, medical instrument, aircraft, or any of a variety of other systems that generate electronic signals (e.g., sensor outputs). In automated monitoring, the signals are digitized and then processed in order to detect faults and otherwise monitor operational status and integrity of the monitored asset. The major distinguishing feature of the present method is that the fault-detection function is implemented by use of a Bayesian sequential probability (BSP) technique. This technique is superior to other techniques for automated monitoring because it affords sensitivity, not only to disturbances in the mean values, but also to very subtle changes in the statistical characteristics (variance, skewness, and bias) of the monitored signals.
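
    The closely related classical sequential probability ratio test is sketched below for a subtle mean shift in a Gaussian signal, with thresholds from Wald's approximations. The noise statistics, shift size and error-rate targets are illustrative, and the sketch does not reproduce the BSP test's handling of variance, skewness and bias.

    ```python
    # Minimal sketch of a sequential probability ratio test for a mean shift in
    # a Gaussian sensor signal, the classical building block behind sequential
    # fault-detection monitors like the BSP test named above. All parameters
    # and the synthetic signal are illustrative.
    import numpy as np

    rng = np.random.default_rng(8)
    sigma, m0, m1 = 1.0, 0.0, 0.5             # noise std, healthy mean, faulted mean
    alpha, beta = 1e-3, 1e-3                  # false-alarm and missed-alarm targets
    upper = np.log((1 - beta) / alpha)        # Wald decision thresholds
    lower = np.log(beta / (1 - alpha))

    # healthy behaviour for 300 samples, then a subtle +0.5*sigma mean shift
    signal = np.concatenate([m0 + sigma * rng.standard_normal(300),
                             m1 + sigma * rng.standard_normal(300)])

    llr = 0.0
    for n, x in enumerate(signal):
        # log-likelihood ratio increment for H1 (mean m1) versus H0 (mean m0)
        llr += (x - (m0 + m1) / 2) * (m1 - m0) / sigma ** 2
        if llr >= upper:
            print(f"fault declared at sample {n}")
            break
        if llr <= lower:
            llr = 0.0                         # accept H0 and restart the test
    else:
        print("no fault declared")
    ```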

  20. Acoustic impulse response method as a source of undergraduate research projects and advanced laboratory experiments.

    PubMed

    Robertson, W M; Parker, J M

    2012-03-01

    A straightforward and inexpensive implementation of acoustic impulse response measurement is described utilizing the signal processing technique of coherent averaging. The technique is capable of high signal-to-noise measurements with personal computer data acquisition equipment, an amplifier/speaker, and a high quality microphone. When coupled with simple waveguide test systems fabricated from commercial PVC plumbing pipe, impulse response measurement has proven to be ideal for undergraduate research projects-often of publishable quality-or for advanced laboratory experiments. The technique provides important learning objectives for science or engineering students in areas such as interfacing and computer control of experiments; analog-to-digital conversion and sampling; time and frequency analysis using Fourier transforms; signal processing; and insight into a variety of current research areas such as acoustic bandgap materials, acoustic metamaterials, and fast and slow wave manipulation. © 2012 Acoustical Society of America

  1. Digital audio watermarking using moment-preserving thresholding

    NASA Astrophysics Data System (ADS)

    Choi, DooSeop; Jung, Hae Kyung; Choi, Hyuk; Kim, Taejeong

    2007-09-01

    The Moment-Preserving Thresholding (MPT) technique for digital images has been used in digital image processing for decades, especially in image binarization and image compression. Its main strength lies in the fact that the binary values that the MPT produces as a result, called representative values, are usually unaffected when the signal being thresholded goes through a signal processing operation. The two representative values in MPT, together with the threshold value, are obtained by solving the system of preservation equations for the first, second, and third moments. Relying on this robustness of the representative values to various signal processing attacks considered in the watermarking context, this paper proposes a new watermarking scheme for audio signals. The watermark is embedded in the root-sum-square (RSS) of the two representative values of each signal block using the quantization technique. As a result, the RSS values are modified by scaling the signal according to the watermark bit sequence under the constraint of inaudibility relative to the human psycho-acoustic model. We also address and suggest solutions to the problem of synchronization and power scaling attacks. Experimental results show that the proposed scheme maintains high audio quality and robustness to various attacks including MP3 compression, re-sampling, jittering, and DA/AD conversion.
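
    The sketch below shows the moment-preserving computation for one audio block: the two representative values and their fractions are obtained in closed form from the first three sample moments (Tsai's quadratic), and their root-sum-square is the quantity that the scheme above quantizes. The block content and block length are illustrative.

    ```python
    # Minimal sketch of moment-preserving thresholding for one audio block:
    # choose representative values z0, z1 and fractions p0, p1 so that the
    # binary approximation preserves the block's first three moments.
    import numpy as np

    def moment_preserving(block):
        m1 = np.mean(block)
        m2 = np.mean(block ** 2)
        m3 = np.mean(block ** 3)
        cd = m2 - m1 ** 2                               # > 0 for non-constant blocks
        c0 = (m1 * m3 - m2 ** 2) / cd
        c1 = (m1 * m2 - m3) / cd
        z0, z1 = np.sort(np.roots([1.0, c1, c0]).real)  # representative values
        p0 = (z1 - m1) / (z1 - z0)                      # fraction assigned to z0
        return z0, z1, p0

    rng = np.random.default_rng(9)
    t = np.arange(2048) / 44100.0
    block = 0.3 * np.sin(2 * np.pi * 440 * t) + 0.05 * rng.standard_normal(t.size)

    z0, z1, p0 = moment_preserving(block)
    rss = np.hypot(z0, z1)                              # root-sum-square carries the watermark bit
    print(f"z0={z0:.3f}, z1={z1:.3f}, p0={p0:.2f}, RSS={rss:.3f}")
    ```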

  2. Radar transponder apparatus and signal processing technique

    DOEpatents

    Axline, Jr., Robert M.; Sloan, George R.; Spalding, Richard E.

    1996-01-01

    An active, phase-coded, time-grating transponder and a synthetic-aperture radar (SAR) and signal processor means, in combination, allow the recognition and location of the transponder (tag) in the SAR image and allow communication of information messages from the transponder to the SAR. The SAR is an illuminating radar having special processing modifications in an image-formation processor to receive an echo from a remote transponder, after the transponder receives and retransmits the SAR illuminations, and to enhance the transponder's echo relative to surrounding ground clutter by recognizing special transponder modulations in the phase-shifted transponder retransmissions. The remote radio-frequency tag also transmits information to the SAR through a single antenna that also serves to receive the SAR illuminations. Unique tag-modulation and SAR signal processing techniques, in combination, allow the detection and precise geographical location of the tag through the reduction of interfering signals from ground clutter, and allow communication of environmental and status information from said tag to said SAR.

  3. Radar transponder apparatus and signal processing technique

    DOEpatents

    Axline, R.M. Jr.; Sloan, G.R.; Spalding, R.E.

    1996-01-23

    An active, phase-coded, time-grating transponder and a synthetic-aperture radar (SAR) and signal processor means, in combination, allow the recognition and location of the transponder (tag) in the SAR image and allow communication of information messages from the transponder to the SAR. The SAR is an illuminating radar having special processing modifications in an image-formation processor to receive an echo from a remote transponder, after the transponder receives and retransmits the SAR illuminations, and to enhance the transponder's echo relative to surrounding ground clutter by recognizing special transponder modulations in the phase-shifted transponder retransmissions. The remote radio-frequency tag also transmits information to the SAR through a single antenna that also serves to receive the SAR illuminations. Unique tag-modulation and SAR signal processing techniques, in combination, allow the detection and precise geographical location of the tag through the reduction of interfering signals from ground clutter, and allow communication of environmental and status information from said tag to said SAR. 4 figs.

  4. ADAPTIVE WATER SENSOR SIGNAL PROCESSING: EXPERIMENTAL RESULTS AND IMPLICATIONS FOR ONLINE CONTAMINANT WARNING SYSTEMS

    EPA Science Inventory

    A contaminant detection technique and its optimization algorithms have two principal functions. One is the adaptive signal treatment that suppresses background noise and enhances contaminant signals, leading to a promising detection of water quality changes at a false rate as low...

  5. High-performance wavelet engine

    NASA Astrophysics Data System (ADS)

    Taylor, Fred J.; Mellot, Jonathon D.; Strom, Erik; Koren, Iztok; Lewis, Michael P.

    1993-11-01

    Wavelet processing has shown great promise for a variety of image and signal processing applications. Wavelets are also among the most computationally expensive techniques in signal processing. It is demonstrated that a wavelet engine constructed with residue number system arithmetic elements offers significant advantages over commercially available wavelet accelerators based upon conventional arithmetic elements. Analysis is presented predicting the dynamic range requirements of the reported residue number system based wavelet accelerator.

  6. Detection of buried mines with seismic sonar

    NASA Astrophysics Data System (ADS)

    Muir, Thomas G.; Baker, Steven R.; Gaghan, Frederick E.; Fitzpatrick, Sean M.; Hall, Patrick W.; Sheetz, Kraig E.; Guy, Jeremie

    2003-10-01

    Prior research on seismo-acoustic sonar for detection of buried targets [J. Acoust. Soc. Am. 103, 2333-2343 (1998)] has continued with examination of the target strengths of buried test targets as well as targets of interest, and has also examined detection and confirmatory classification of these, all using arrays of seismic sources and receivers as well as signal processing techniques to enhance target recognition. The target strengths of two test targets (one a steel gas bottle, the other an aluminum powder keg), buried in a sand beach, were examined as a function of internal mass load, to evaluate theory developed for seismic sonar target strength [J. Acoust. Soc. Am. 103, 2344-2353 (1998)]. The detection of buried naval and military targets of interest was achieved with an array of 7 shaker sources and 5 three-axis seismometers, at a range of 5 m. Vector polarization filtering was the main signal processing technique for detection. It capitalizes on the fact that the vertical and horizontal components in Rayleigh wave echoes are 90 deg out of phase, enabling complex variable processing to obtain the imaginary component of the signal power versus time, which is unique to Rayleigh waves. Gabor matrix processing of this signal component was the main technique used to determine whether the target was man-made or just a natural target in the environment. [Work sponsored by ONR.]
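
    The quadrature relationship exploited by the vector polarization filter can be sketched as below: the imaginary part of the analytic cross power between the vertical and horizontal channels is large for an elliptically polarized (Rayleigh-wave) arrival and small for a linearly polarized disturbance. The synthetic wavelets, arrival times and noise level are illustrative, and the Gabor-matrix classification stage is not included.

    ```python
    # Minimal sketch of vector polarization filtering: for a Rayleigh wave the
    # vertical and horizontal particle motions are 90 degrees out of phase, so
    # Im{V * conj(H)} of the analytic signals is large for Rayleigh echoes and
    # near zero for in-phase (linearly polarized) arrivals. Data are synthetic.
    import numpy as np
    from scipy.signal import hilbert

    fs = 1000.0
    t = np.arange(0, 2.0, 1 / fs)
    rng = np.random.default_rng(10)

    def wavelet(t, t0, f0=30.0):
        return np.exp(-((t - t0) / 0.05) ** 2) * np.cos(2 * np.pi * f0 * (t - t0))

    def wavelet_q(t, t0, f0=30.0):
        # 90-degree phase-shifted (quadrature) version of the same wavelet
        return np.exp(-((t - t0) / 0.05) ** 2) * np.sin(2 * np.pi * f0 * (t - t0))

    # Rayleigh-wave echo at 1.2 s: vertical and horizontal components in quadrature
    vert = wavelet(t, 1.2) + 0.2 * rng.standard_normal(t.size)
    horiz = 0.7 * wavelet_q(t, 1.2) + 0.2 * rng.standard_normal(t.size)
    # linearly polarized disturbance at 0.5 s appears in phase on both channels
    vert += 0.8 * wavelet(t, 0.5)
    horiz += 0.8 * wavelet(t, 0.5)

    V, H = hilbert(vert), hilbert(horiz)
    rayleigh_power = np.imag(V * np.conj(H))       # suppresses the in-phase arrival
    for t0 in (0.5, 1.2):
        window = (t > t0 - 0.1) & (t < t0 + 0.1)
        print(f"mean |Im{{V H*}}| around {t0:.1f} s: {np.mean(np.abs(rayleigh_power[window])):.3f}")
    ```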

  7. Mandarin Chinese Tone Identification in Cochlear Implants: Predictions from Acoustic Models

    PubMed Central

    Morton, Kenneth D.; Torrione, Peter A.; Throckmorton, Chandra S.; Collins, Leslie M.

    2015-01-01

    It has been established that current cochlear implants do not supply adequate spectral information for perception of tonal languages. Comprehension of a tonal language, such as Mandarin Chinese, requires recognition of lexical tones. New strategies of cochlear stimulation such as variable stimulation rate and current steering may provide the means of delivering more spectral information and thus may provide the auditory fine structure required for tone recognition. Several cochlear implant signal processing strategies are examined in this study, the continuous interleaved sampling (CIS) algorithm, the frequency amplitude modulation encoding (FAME) algorithm, and the multiple carrier frequency algorithm (MCFA). These strategies provide different types and amounts of spectral information. Pattern recognition techniques can be applied to data from Mandarin Chinese tone recognition tasks using acoustic models as a means of testing the abilities of these algorithms to transmit the changes in fundamental frequency indicative of the four lexical tones. The ability of processed Mandarin Chinese tones to be correctly classified may predict trends in the effectiveness of different signal processing algorithms in cochlear implants. The proposed techniques can predict trends in performance of the signal processing techniques in quiet conditions but fail to do so in noise. PMID:18706497

  8. Optical frequency upconversion technique for transmission of wireless MIMO-type signals over optical fiber.

    PubMed

    Shaddad, R Q; Mohammad, A B; Al-Gailani, S A; Al-Hetar, A M

    2014-01-01

    The optical fiber is well adapted to pass multiple wireless signals having different carrier frequencies by using the radio-over-fiber (ROF) technique. However, multiple wireless signals which have the same carrier frequency cannot propagate over a single optical fiber, such as wireless multi-input multi-output (MIMO) signals feeding multiple antennas in the fiber wireless (FiWi) system. A novel optical frequency upconversion (OFU) technique is proposed to solve this problem. In this paper, the novel OFU approach is used to transmit three wireless MIMO signals over a 20 km standard single mode fiber (SMF). The OFU technique exploits one optical source to produce multiple wavelengths by delivering it to a LiNbO3 external optical modulator. The wireless MIMO signals are then modulated by LiNbO3 optical intensity modulators separately using the generated optical carriers from the OFU process. These modulators use the optical single-sideband with carrier (OSSB+C) modulation scheme to optimize the system performance against the fiber dispersion effect. Each wireless MIMO signal has a 2.4 GHz or 5 GHz carrier frequency, a 1 Gb/s data rate, and 16-quadrature amplitude modulation (QAM). The crosstalk between the wireless MIMO signals is highly suppressed, since each wireless MIMO signal is carried on a specific optical wavelength.

  9. Tracking radar advanced signal processing and computing for Kwajalein Atoll (KA) application

    NASA Astrophysics Data System (ADS)

    Cottrill, Stanley D.

    1992-11-01

    Two means are examined whereby the operations of KMR during mission execution may be improved through the introduction of advanced signal processing techniques. In the first approach, the addition of real time coherent signal processing technology to the FPQ-19 radar is considered. In the second approach, the incorporation of the MMW radar, with its very fine range precision, to the MMS system is considered. The former appears very attractive and a Phase 2 SBIR has been proposed. The latter does not appear promising enough to warrant further development.

  10. Noncoherent sampling technique for communications parameter estimations

    NASA Technical Reports Server (NTRS)

    Su, Y. T.; Choi, H. J.

    1985-01-01

    This paper presents a method of noncoherent demodulation of the PSK signal for signal distortion analysis at the RF interface. The received RF signal is downconverted and noncoherently sampled for further off-line processing. Any mismatch in phase and frequency is then compensated for by the software using the estimation techniques to extract the baseband waveform, which is needed in measuring various signal parameters. In this way, various kinds of modulated signals can be treated uniformly, independent of modulation format, and additional distortions introduced by the receiver or the hardware measurement instruments can thus be eliminated. Quantization errors incurred by digital sampling and ensuing software manipulations are analyzed and related numerical results are presented also.
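
    A generic software compensation step of the kind described can be sketched as follows (this is an illustration under assumed waveform parameters, not the paper's estimator): a noncoherently sampled BPSK record is squared to strip the modulation, the resulting spectral line gives the residual frequency and phase offsets, and the samples are derotated to recover the baseband waveform.

```python
import numpy as np

fs = 1.0e6                                  # sample rate, Hz (assumed)
n = np.arange(20000)
f_off, ph_off = 1250.0, 0.7                 # residual carrier offset and phase, unknown to the receiver

bits = np.random.choice([-1.0, 1.0], size=n.size)     # one sample per symbol, for brevity
noise = 0.05*(np.random.randn(n.size) + 1j*np.random.randn(n.size))
r = bits*np.exp(1j*(2*np.pi*f_off*n/fs + ph_off)) + noise

# Squaring removes the +/-1 modulation and leaves a spectral line at 2*f_off
sq = r**2
freqs = np.fft.fftfreq(n.size, d=1/fs)
f_est = freqs[np.argmax(np.abs(np.fft.fft(sq)))]/2.0   # bin-limited; interpolate in practice

# Phase estimate (modulo pi because of the squaring; a known preamble resolves the ambiguity)
ph_est = 0.5*np.angle(np.sum(sq*np.exp(-1j*4*np.pi*f_est*n/fs)))

baseband = r*np.exp(-1j*(2*np.pi*f_est*n/fs + ph_est))  # software mismatch compensation
print("f_off estimate = %.1f Hz, phase estimate = %.2f rad" % (f_est, ph_est))
```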

  11. Ultrasonic sensor based defect detection and characterisation of ceramics.

    PubMed

    Kesharaju, Manasa; Nagarajah, Romesh; Zhang, Tonzhua; Crouch, Ian

    2014-01-01

    Ceramic tiles, used in body armour systems, are currently inspected visually offline using an X-ray technique that is both time consuming and very expensive. The aim of this research is to develop a methodology to detect, locate and classify various manufacturing defects in Reaction Sintered Silicon Carbide (RSSC) ceramic tiles, using an ultrasonic sensing technique. Defects such as free silicon, un-sintered silicon carbide material and conventional porosity are often difficult to detect using conventional X-radiography. An alternative inspection system was developed to detect defects in ceramic components using an Artificial Neural Network (ANN) based signal processing technique. The inspection methodology proposed focuses on pre-processing of signals, de-noising, wavelet decomposition, feature extraction and post-processing of the signals for classification purposes. This research contributes to developing an on-line inspection system that would be far more cost effective than present methods and, moreover, assist manufacturers in checking the location of high density areas, defects and enable real time quality control, including the implementation of accept/reject criteria. Copyright © 2013 Elsevier B.V. All rights reserved.
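
    As a rough illustration of the signal-processing chain (wavelet decomposition followed by feature extraction for a classifier), the sketch below uses a Haar wavelet and a hand-picked feature set; the study's actual wavelet family, de-noising steps, features, and ANN architecture are not specified here, and the synthetic A-scan model is an assumption.

```python
import numpy as np

def haar_dwt_level(x):
    """One level of the Haar wavelet transform: approximation and detail."""
    x = x[:len(x) - (len(x) % 2)]
    a = (x[0::2] + x[1::2])/np.sqrt(2.0)
    d = (x[0::2] - x[1::2])/np.sqrt(2.0)
    return a, d

def wavelet_features(signal, levels=4):
    """Multi-level Haar decomposition followed by per-band features
    (energy, standard deviation, excess kurtosis) for a classifier."""
    feats, a = [], np.asarray(signal, dtype=float)
    for _ in range(levels):
        a, d = haar_dwt_level(a)
        z = (d - d.mean())/(d.std() + 1e-12)
        feats += [np.sum(d**2), d.std(), np.mean(z**4) - 3.0]
    feats += [np.sum(a**2), a.std()]
    return np.array(feats)

# Example: a clean ultrasonic A-scan vs. one containing a small defect echo
fs = 50e6
t = np.arange(0, 20e-6, 1/fs)
backwall = np.exp(-((t - 15e-6)/0.4e-6)**2)*np.sin(2*np.pi*5e6*t)
defect   = 0.3*np.exp(-((t - 8e-6)/0.3e-6)**2)*np.sin(2*np.pi*5e6*t)
clean  = backwall + 0.02*np.random.randn(t.size)
flawed = backwall + defect + 0.02*np.random.randn(t.size)

print(np.round(wavelet_features(clean), 3))
print(np.round(wavelet_features(flawed), 3))
# These feature vectors would be the inputs to an ANN classifier.
```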

  12. Application of HFCT and UHF Sensors in On-Line Partial Discharge Measurements for Insulation Diagnosis of High Voltage Equipment

    PubMed Central

    Álvarez, Fernando; Garnacho, Fernando; Ortego, Javier; Sánchez-Urán, Miguel Ángel

    2015-01-01

    Partial discharge (PD) measurements provide valuable information for assessing the condition of high voltage (HV) insulation systems, contributing to their quality assurance. Different PD measuring techniques, specially designed to perform on-line measurements, have been developed in recent years. Non-conventional PD methods operating in high frequency bands are usually used when this type of test is carried out. In PD measurements the signal acquisition, the subsequent signal processing and the capability to obtain an accurate diagnosis are conditioned by the selection of a suitable detection technique and by the implementation of effective signal processing tools. This paper proposes an optimized electromagnetic detection method based on the combined use of wideband PD sensors for measurements performed in the HF and UHF frequency ranges, together with the implementation of powerful processing tools. The effectiveness of the measuring techniques proposed is demonstrated through an example, where several PD sources are measured simultaneously in an HV installation consisting of a cable system connected by a plug-in terminal to a gas insulated substation (GIS) compartment. PMID:25815452

  13. Optical modulation techniques for analog signal processing and CMOS compatible electro-optic modulation

    NASA Astrophysics Data System (ADS)

    Gill, Douglas M.; Rasras, Mahmoud; Tu, Kun-Yii; Chen, Young-Kai; White, Alice E.; Patel, Sanjay S.; Carothers, Daniel; Pomerene, Andrew; Kamocsai, Robert; Beattie, James; Kopa, Anthony; Apsel, Alyssa; Beals, Mark; Mitchel, Jurgen; Liu, Jifeng; Kimerling, Lionel C.

    2008-02-01

    Integrating electronic and photonic functions onto a single silicon-based chip using techniques compatible with mass-production CMOS electronics will enable new design paradigms for existing system architectures and open new opportunities for electro-optic applications with the potential to dramatically change the management, cost, footprint, weight, and power consumption of today's communication systems. While broadband analog system applications represent a smaller volume market than that for digital data transmission, there are significant deployments of analog electro-optic systems for commercial and military applications. Broadband linear modulation is a critical building block in optical analog signal processing and also could have significant applications in digital communication systems. Recently, broadband electro-optic modulators on a silicon platform have been demonstrated based on the plasma dispersion effect. The use of the plasma dispersion effect within a CMOS compatible waveguide creates new challenges and opportunities for analog signal processing since the index and propagation loss change within the waveguide during modulation. We will review the current status of silicon-based electrooptic modulators and also linearization techniques for optical modulation.

  14. Architecture and settings optimization procedure of a TES frequency domain multiplexed readout firmware

    NASA Astrophysics Data System (ADS)

    Clenet, A.; Ravera, L.; Bertrand, B.; den Hartog, R.; Jackson, B.; van Leeuwen, B.-J.; van Loon, D.; Parot, Y.; Pointecouteau, E.; Sournac, A.

    2014-11-01

    IRAP is developing the readout electronics of SPICA-SAFARI's TES bolometer arrays. Based on the frequency domain multiplexing technique, the readout electronics provides the AC signals to voltage-bias the detectors; it demodulates the data; and it computes a feedback to linearize the detection chain. The feedback is computed with a specific technique, the so-called baseband feedback (BBFB), which ensures that the loop is stable even with long propagation and processing delays (i.e. several μs) and with fast signals (i.e. frequency carriers of the order of 5 MHz). To optimize the power consumption, we took advantage of the reduced science signal bandwidth to decouple the signal sampling frequency and the data processing rate. This technique allowed a reduction of the power consumption of the circuit by a factor of 10. Beyond the firmware architecture, the optimization of the instrument concerns the characterization routines and the definition of the optimal parameters. Indeed, to operate a TES array one has to properly define about 21000 parameters. We defined a set of procedures to automatically characterize these parameters and find out the optimal settings.

  15. Nanometer-scale displacement sensing using self-mixing interferometry with a correlation-based signal processing technique

    NASA Astrophysics Data System (ADS)

    Hast, J.; Okkonen, M.; Heikkinen, H.; Krehut, L.; Myllylä, R.

    2006-06-01

    A self-mixing interferometer is proposed to measure nanometre-scale optical path length changes in the interferometer's external cavity. As light source, the developed technique uses a blue emitting GaN laser diode. An external reflector, a silicon mirror, driven by a piezo nanopositioner is used to produce an interference signal which is detected with the monitor photodiode of the laser diode. Changing the optical path length of the external cavity introduces a phase difference to the interference signal. This phase difference is detected using a signal processing algorithm based on Pearson's correlation coefficient and cubic spline interpolation techniques. The results show that the average deviation between the measured and actual displacements of the silicon mirror is 3.1 nm in the 0-110 nm displacement range. Moreover, the measured displacements follow linearly the actual displacement of the silicon mirror. Finally, the paper considers the effects produced by the temperature and current stability of the laser diode as well as dispersion effects in the external cavity of the interferometer. These reduce the sensor's measurement accuracy especially in long-term measurements.
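
    The correlation-based step can be illustrated generically: Pearson's r is evaluated between a reference fringe segment and lag-shifted copies of the measured interference signal, and a cubic spline through r(lag) locates the sub-sample shift. The fringe frequency, sample rate, the 405 nm wavelength, and the shift-to-displacement conversion (half a wavelength of external-cavity change per fringe) are all illustrative assumptions rather than the authors' parameters.

```python
import numpy as np
from scipy.interpolate import CubicSpline

fs = 100_000.0                      # samples per second (assumed)
f_fringe = 500.0                    # fringe frequency during the ramp (assumed)
lam = 405e-9                        # blue GaN laser wavelength (assumed)
t = np.arange(0, 0.02, 1/fs)

true_shift = 3.6                    # true shift, in samples (the sub-sample part is the target)
ref  = np.sin(2*np.pi*f_fringe*t) + 0.02*np.random.randn(t.size)
meas = np.sin(2*np.pi*f_fringe*(t - true_shift/fs)) + 0.02*np.random.randn(t.size)

# Pearson correlation coefficient at integer lags
lags = np.arange(-10, 11)
r = np.array([np.corrcoef(ref[10:-10], np.roll(meas, -k)[10:-10])[0, 1] for k in lags])

# Cubic-spline interpolation of r(lag) to locate the sub-sample maximum
cs = CubicSpline(lags, r)
fine = np.linspace(lags[0], lags[-1], 20001)
shift = fine[np.argmax(cs(fine))]                  # estimated shift in samples

samples_per_fringe = fs/f_fringe
displacement = (shift/samples_per_fringe)*(lam/2)  # one fringe per lambda/2 (assumed)
print("estimated shift: %.3f samples -> %.1f nm" % (shift, displacement*1e9))
```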

  16. Analysis of a crossed Bragg cell acousto-optical spectrometer for SETI

    NASA Technical Reports Server (NTRS)

    Gulkis, S.

    1989-01-01

    The search for radio signals from extraterrestrial intelligent beings (SETI) requires the use of large instantaneous bandwidth (500 MHz) and high resolution (20 Hz) spectrometers. Digital systems with a high degree of modularity can be used to provide this capability, and this method has been widely discussed. Another technique for meeting the SETI requirement is to use a crossed Bragg cell spectrometer as described by Psaltis and Casasent. This technique makes use of the Folded Spectrum concept, introduced by Thomas. The Folded Spectrum is a 2-D Fourier Transform of a raster scanned 1-D signal. It is directly related to the long 1-D spectrum of the original signal and is ideally suited for optical signal processing. The folded spectrum technique has received little attention to date, primarily because early systems made use of photographic film which are unsuitable for the real time data analysis and voluminous data requirements of SETI. An analysis of the crossed Bragg cell spectrometer is presented as a method to achieve the spectral processing requirements for SETI. Systematic noise contributions unique to the Bragg cell system will be discussed.
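
    The folded-spectrum idea is easy to sketch numerically: the long 1-D record is raster-scanned into a 2-D array and a 2-D FFT is taken, so that one axis resolves coarse frequency and the other the fine (vernier) frequency. In the instrument the two transforms are performed optically by the crossed Bragg cells; the array sizes and test tone below are arbitrary.

```python
import numpy as np

R, C = 64, 256                 # raster: R lines of C samples each (arbitrary)
N = R*C
f_true = 0.31327               # tone frequency, cycles per sample

n = np.arange(N)
x = np.exp(2j*np.pi*f_true*n)  # complex tone (a real tone simply adds a mirror peak)

# Raster scan: consecutive samples fill each row (the "fast" axis)
raster = x.reshape(R, C)

# 2-D FFT = folded spectrum: columns give coarse frequency, rows give fine frequency
F = np.fft.fft2(raster)
k_r, k_c = np.unravel_index(np.argmax(np.abs(F)), F.shape)

# Map the 2-D peak back to a 1-D frequency: the fine index is a signed fraction of a coarse bin
frac = k_r/R if k_r < R/2 else (k_r - R)/R
f_est = (k_c + frac)/C

print("true f = %.5f, folded-spectrum estimate = %.5f" % (f_true, f_est))
# Resolution of about 1/N over the full bandwidth, obtained from two small transforms.
```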

  17. Analysis of a crossed Bragg cell acousto-optical spectrometer for SETI.

    PubMed

    Gulkis, S

    1989-01-01

    The search for radio signals from extraterrestrial intelligent beings (SETI) requires the use of large instantaneous bandwidth (500 MHz) and high resolution (20 Hz) spectrometers. Digital systems with a high degree of modularity can be used to provide this capability, and this method has been widely discussed. Another technique for meeting the SETI requirement is to use a crossed Bragg cell spectrometer as described by Psaltis and Casasent. This technique makes use of the Folded Spectrum concept, introduced by Thomas. The Folded Spectrum is a 2-D Fourier Transform of a raster scanned 1-D signal. It is directly related to the long 1-D spectrum of the original signal and is ideally suited for optical signal processing. The folded spectrum technique has received little attention to date, primarily because early systems made use of photographic film which are unsuitable for the real time data analysis and voluminous data requirements of SETI. An analysis of the crossed Bragg cell spectrometer is presented as a method to achieve the spectral processing requirements for SETI. Systematic noise contributions unique to the Bragg cell system will be discussed.

  18. Analysis of a crossed Bragg cell acousto-optical spectrometer for SETI

    NASA Astrophysics Data System (ADS)

    Gulkis, Samuel

    The search for radio signals from extraterrestrial intelligent beings (SETI) requires the use of large instantaneous bandwidth (500 MHz) and high resolution (20 Hz) spectrometers. Digital systems with a high degree of modularity can be used to provide this capability, and this method has been widely discussed. Another technique for meeting the SETI requirement is to use a crossed Bragg cell spectrometer as described by Psaltis and Casasent. This technique makes use of the Folded Spectrum concept, introduced by Thomas. The Folded Spectrum is a 2-D Fourier Transform of a raster scanned 1-D signal. It is directly related to the long 1-D spectrum of the original signal and is ideally suited for optical signal processing. The folded spectrum technique has received little attention to date, primarily because early systems made use of photographic film which are unsuitable for the real time data analysis and voluminous data requirements of SETI. An analysis of the crossed Bragg cell spectrometer is presented as a method to achieve the spectral processing requirements for SETI. Systematic noise contributions unique to the Bragg cell system will be discussed.

  19. Phase processing for quantitative susceptibility mapping of regions with large susceptibility and lack of signal.

    PubMed

    Fortier, Véronique; Levesque, Ives R

    2018-06-01

    Phase processing impacts the accuracy of quantitative susceptibility mapping (QSM). Techniques for phase unwrapping and background removal have been proposed and demonstrated mostly in brain. In this work, phase processing was evaluated in the context of large susceptibility variations (Δχ) and negligible signal, in particular for susceptibility estimation using the iterative phase replacement (IPR) algorithm. Continuous Laplacian, region-growing, and quality-guided unwrapping were evaluated. For background removal, Laplacian boundary value (LBV), projection onto dipole fields (PDF), sophisticated harmonic artifact reduction for phase data (SHARP), variable-kernel sophisticated harmonic artifact reduction for phase data (V-SHARP), regularization enabled sophisticated harmonic artifact reduction for phase data (RESHARP), and 3D quadratic polynomial field removal were studied. Each algorithm was quantitatively evaluated in simulation and qualitatively in vivo. Additionally, IPR-QSM maps were produced to evaluate the impact of phase processing on the susceptibility in the context of large Δχ with negligible signal. Quality-guided unwrapping was the most accurate technique, whereas continuous Laplacian performed poorly in this context. All background removal algorithms tested resulted in important phase inaccuracies, suggesting that techniques used for brain do not translate well to situations where large Δχ and no or low signal are expected. LBV produced the smallest errors, followed closely by PDF. Results suggest that quality-guided unwrapping should be preferred, with PDF or LBV for background removal, for QSM in regions with large Δχ and negligible signal. This reduces the susceptibility inaccuracy introduced by phase processing. Accurate background removal remains an open question. Magn Reson Med 79:3103-3113, 2017. © 2017 International Society for Magnetic Resonance in Medicine.

  20. Two-dimensional signal processing with application to image restoration

    NASA Technical Reports Server (NTRS)

    Assefi, T.

    1974-01-01

    A recursive technique for modeling and estimating a two-dimensional signal contaminated by noise is presented. A two-dimensional signal is assumed to be an undistorted picture, where the noise introduces the distortion. Both the signal and the noise are assumed to be wide-sense stationary processes with known statistics. Thus, to estimate the two-dimensional signal is to enhance the picture. The picture representing the two-dimensional signal is converted to one dimension by scanning the image horizontally one line at a time. The scanner output becomes a nonstationary random process due to the periodic nature of the scanner operation. Procedures to obtain a dynamical model corresponding to the autocorrelation function of the scanner output are derived. Utilizing the model, a discrete Kalman estimator is designed to enhance the image.
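
    The recursive estimation step can be sketched with a scalar first-order Gauss-Markov signal model; the AR(1) parameters and the neglect of the scanner-induced nonstationarity are simplifications of the modeling procedure described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
rows, cols = 64, 256
a = 0.95                           # AR(1) signal model x[n+1] = a*x[n] + w[n]
q = 1.0 - a*a                      # process-noise variance for unit signal variance
r = 0.5                            # measurement-noise variance

# Synthetic "picture" scanned row by row into one long 1-D record, observed in noise
x = np.zeros(rows*cols)
for n in range(1, x.size):
    x[n] = a*x[n-1] + np.sqrt(q)*rng.standard_normal()
y = x + np.sqrt(r)*rng.standard_normal(x.size)        # scanner output

# Discrete (scalar) Kalman filter along the scan
x_hat = np.zeros_like(y)
P, est = 1.0, 0.0
for n, yn in enumerate(y):
    est_p = a*est                  # predict
    P_p = a*a*P + q
    K = P_p/(P_p + r)              # Kalman gain
    est = est_p + K*(yn - est_p)   # update with the new scanned sample
    P = (1.0 - K)*P_p
    x_hat[n] = est

enhanced = x_hat.reshape(rows, cols)                   # back to a 2-D picture
print("noisy MSE    = %.3f" % np.mean((y - x)**2))
print("filtered MSE = %.3f" % np.mean((x_hat - x)**2))
```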

  1. Ultrabroadband phased-array radio frequency (RF) receivers based on optical techniques

    NASA Astrophysics Data System (ADS)

    Overmiller, Brock M.; Schuetz, Christopher A.; Schneider, Garrett; Murakowski, Janusz; Prather, Dennis W.

    2014-03-01

    Military operations require the ability to locate and identify electronic emissions in the battlefield environment. However, recent developments in radio detection and ranging (RADAR) and communications technology are making it harder to effectively identify such emissions. Phased array systems aid in discriminating emitters in the scene by virtue of their relatively high-gain beam steering and nulling capabilities. For the purpose of locating emitters, we present an approach to realize a broadband receiver based on optical processing techniques applied to the response of detectors in conformal antenna arrays. This approach utilizes photonic techniques that enable us to capture, route, and process the incoming signals. Optical modulators convert incoming signals at frequencies up to and exceeding 110 GHz with appreciable conversion efficiency and route these signals via fiber optics to a central processing location. This central processor consists of a closed loop phase control system which compensates for phase fluctuations induced on the fibers due to thermal or acoustic vibrations as well as an optical heterodyne approach for signal conversion down to baseband. Our optical heterodyne approach uses injection-locked paired optical sources to perform heterodyne downconversion/frequency identification of the detected emission. Preliminary geolocation and frequency identification testing of electronic emissions has been performed demonstrating the capabilities of our RF receiver.

  2. Processing of pulse oximeter signals using adaptive filtering and autocorrelation to isolate perfusion and oxygenation components

    NASA Astrophysics Data System (ADS)

    Ibey, Bennett; Subramanian, Hariharan; Ericson, Nance; Xu, Weijian; Wilson, Mark; Cote, Gerard L.

    2005-03-01

    A blood perfusion and oxygenation sensor has been developed for in situ monitoring of transplanted organs. In processing in situ data, motion artifacts due to increased perfusion can create invalid oxygenation saturation values. In order to remove the unwanted artifacts from the pulsatile signal, adaptive filtering was employed using a third wavelength source centered at 810nm as a reference signal. The 810 nm source resides approximately at the isosbestic point in the hemoglobin absorption curve where the absorbance of light is nearly equal for oxygenated and deoxygenated hemoglobin. Using an autocorrelation based algorithm oxygenation saturation values can be obtained without the need for large sampling data sets allowing for near real-time processing. This technique has been shown to be more reliable than traditional techniques and proven to adequately improve the measurement of oxygenation values in varying perfusion states.
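
    A generic version of the two processing stages can be sketched as follows: an LMS adaptive filter uses the 810 nm (isosbestic) channel as the reference to cancel the motion/perfusion artifact, and the autocorrelation of the cleaned pulsatile signal gives the pulse period without long averaging. The signal models, filter length, and step size are assumptions, not the authors' implementation.

```python
import numpy as np

fs = 100.0                          # sample rate, Hz (assumed)
t = np.arange(0, 30, 1/fs)
hr = 1.2                            # pulse rate, Hz (72 bpm)

pulse  = 0.5*np.sin(2*np.pi*hr*t)                         # pulsatile (arterial) component
motion = 1.5*np.sin(2*np.pi*0.3*t + 1.0)                  # perfusion/motion artifact

red_ch = pulse + motion + 0.05*np.random.randn(t.size)    # wavelength carrying saturation info
ref_ch = 0.9*motion + 0.05*np.random.randn(t.size)        # 810 nm (isosbestic) reference

# LMS adaptive filter: predict the artifact in red_ch from ref_ch and subtract it
L, mu = 16, 0.01
w = np.zeros(L)
clean = np.zeros_like(red_ch)
for n in range(L, t.size):
    u = ref_ch[n-L:n][::-1]
    e = red_ch[n] - w @ u            # error = cleaned pulsatile signal
    w += 2*mu*e*u                    # LMS weight update
    clean[n] = e

# Autocorrelation of the cleaned signal gives the pulse period without long data sets
seg = clean[5*int(fs):] - np.mean(clean[5*int(fs):])
ac = np.correlate(seg, seg, mode='full')[seg.size-1:]
lo, hi = int(fs/3.0), int(fs/0.7)    # search 0.7-3 Hz pulse rates
period = (np.argmax(ac[lo:hi]) + lo)/fs
print("estimated pulse rate: %.1f bpm" % (60.0/period))
```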

  3. Real-time, in situ monitoring of nanoporation using electric field-induced acoustic signal

    NASA Astrophysics Data System (ADS)

    Zarafshani, Ali; Faiz, Rowzat; Samant, Pratik; Zheng, Bin; Xiang, Liangzhong

    2018-02-01

    The use of nanoporation in reversible or irreversible electroporation, e.g. cancer ablation, is rapidly growing. This technique uses an ultra-short and intense electric pulse to increase the membrane permeability, allowing non-permeant drugs and genes access to the cytosol via nanopores in the plasma membrane. It is vital to create a real-time in situ monitoring technique to characterize this process and to meet the needs of successful electroporation procedures in cancer treatment. All currently suggested monitoring techniques for electroporation apply to pre- and post-stimulation exposure, with no real-time monitoring during electric field exposure. This study was aimed at developing an innovative technology for real-time in situ monitoring of electroporation based on the acoustic emissions induced by the cells' exposure to the electric field. The acoustic signals are the result of the electric field, and can themselves be used in real time to characterize the process of electroporation. We varied the electric field distribution by varying the electric pulse duration from 1 μs to 100 ns and the voltage intensity from 0 to 1.2 kV to energize two electrodes in a bipolar set-up. An ultrasound transducer was used for collecting acoustic signals around the subject under test. We determined the relative location of the acoustic signals by varying the position of the electrodes relative to the transducer and varying the electric field distribution between the electrodes to capture a variety of acoustic signals. Therefore, the electric field that is utilized in the nanoporation technique also produces a series of corresponding acoustic signals. This offers a novel imaging technique for the real-time in situ monitoring of electroporation that may directly improve treatment efficiency.

  4. Vibration based condition monitoring of a multistage epicyclic gearbox in lifting cranes

    NASA Astrophysics Data System (ADS)

    Assaad, Bassel; Eltabach, Mario; Antoni, Jérôme

    2014-01-01

    This paper proposes a model-based technique for detecting wear in a multistage planetary gearbox used by lifting cranes. The proposed method establishes a vibration signal model which deals with cyclostationary and autoregressive models. First-order cyclostationarity is addressed by the analysis of the time synchronous average (TSA) of the angularly resampled vibration signal. Then an autoregressive model (AR) is applied to the TSA part in order to extract a residual signal containing pertinent fault signatures. The paper also explores a number of methods commonly used in vibration monitoring of planetary gearboxes, in order to make comparisons. In the experimental part of this study, these techniques are applied to accelerated lifetime test bench data for the lifting winch. After processing raw signals recorded with an accelerometer mounted on the outside of the gearbox, a number of condition indicators (CIs) are derived from the TSA signal, the residual autoregressive signal and other signals derived using standard signal processing methods. The goal is to check the evolution of the CIs during the accelerated lifetime test (ALT). The clarity and fluctuation level of the historical trends are finally used as criteria for comparing the extracted CIs.
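
    A minimal sketch of the TSA and AR-residual stages and of two derived condition indicators is shown below; the synthetic gear signal, AR order, and indicator choices (rms and kurtosis) are illustrative assumptions rather than the paper's configuration.

```python
import numpy as np

def tsa(signal, samples_per_rev):
    """Time synchronous average: fold the angular-resampled signal into
    revolutions and average, suppressing content not locked to the shaft."""
    n_rev = len(signal)//samples_per_rev
    return signal[:n_rev*samples_per_rev].reshape(n_rev, samples_per_rev).mean(axis=0)

def ar_residual(x, order=20):
    """Fit an AR model by least squares and return the prediction residual,
    which retains the non-deterministic (fault-related) content."""
    X = np.column_stack([x[order-k-1:len(x)-k-1] for k in range(order)])
    y = x[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ coeffs

def condition_indicators(x):
    rms = np.sqrt(np.mean(x**2))
    z = (x - x.mean())/(x.std() + 1e-12)
    return rms, np.mean(z**4)          # rms and kurtosis

# Synthetic example: gear-mesh vibration with a small once-per-rev fault impulse
spr, n_rev = 512, 200                  # samples per revolution, number of revolutions
theta = np.arange(n_rev*spr)
vib = np.sin(2*np.pi*32*theta/spr)     # 32-teeth mesh tone
vib += 0.4*np.random.randn(vib.size)   # broadband noise
vib[spr//3::spr] += 1.5                # incipient local fault, once per revolution

avg = tsa(vib, spr)
res = ar_residual(avg)
rms_a, kurt_a = condition_indicators(avg)
rms_r, kurt_r = condition_indicators(res)
print("TSA      CIs: rms=%.2f  kurtosis=%.2f" % (rms_a, kurt_a))
print("Residual CIs: rms=%.2f  kurtosis=%.2f" % (rms_r, kurt_r))
# Trending such CIs over the accelerated lifetime test reveals progressing wear.
```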

  5. A labview-based GUI for the measurement of otoacoustic emissions.

    PubMed

    Wu, Ye; McNamara, D M; Ziarani, A K

    2006-01-01

    This paper presents the outcome of a software development project aimed at creating a stand-alone user-friendly signal processing algorithm for the estimation of distortion product otoacoustic emission (OAE) signals. OAE testing is one of the most commonly used methods of first screening of newborns' hearing. Most of the currently available commercial devices rely upon averaging long strings of data and subsequent discrete Fourier analysis to estimate low level OAE signals from within the background noise in the presence of the strong stimuli. The main shortcoming of the presently employed technology is the need for long measurement time and its low noise immunity. The result of the software development project presented here is a graphical user interface (GUI) module that implements a recently introduced adaptive technique of OAE signal estimation. This software module is easy to use and is freely disseminated on the Internet for the use of the hearing research community. This GUI module allows loading of the a priori recorded OAE signals into the workspace, and provides the user with interactive instructions for the OAE signal estimation. Moreover, the user can generate simulated OAE signals to objectively evaluate the performance capability of the implemented signal processing technique.

  6. Single-Molecule Imaging of Cellular Signaling

    NASA Astrophysics Data System (ADS)

    De Keijzer, Sandra; Snaar-Jagalska, B. Ewa; Spaink, Herman P.; Schmidt, Thomas

    Single-molecule microscopy is an emerging technique to understand the function of a protein in the context of its natural environment. In our laboratory this technique has been used to study the dynamics of signal transduction in vivo. A multitude of signal transduction cascades are initiated by interactions between proteins in the plasma membrane. These cascades start by binding a ligand to its receptor, thereby activating downstream signaling pathways which finally result in complex cellular responses. To fully understand these processes it is important to study the initial steps of the signaling cascades. Standard biological assays mostly call for overexpression of the proteins and high concentrations of ligand. This sets severe limits to the interpretation of, for instance, the time-course of the observations, given the large temporal spread caused by the diffusion-limited binding processes. Methods and limitations of single-molecule microscopy for the study of cell signaling are discussed on the example of the chemotactic signaling of the slime-mold Dictyostelium discoideum. Single-molecule studies, as reviewed in this chapter, appear to be one of the essential methodologies for the full spatiotemporal clarification of cellular signaling, one of the ultimate goals in cell biology.

  7. A new adaptive algorithm for automated feature extraction in exponentially damped signals for health monitoring of smart structures

    NASA Astrophysics Data System (ADS)

    Qarib, Hossein; Adeli, Hojjat

    2015-12-01

    In this paper, the authors introduce a new adaptive signal processing technique for feature extraction and parameter estimation in noisy exponentially damped signals. The iterative 3-stage method is based on the adroit integration of the strengths of parametric and nonparametric methods such as the multiple signal classification, matrix pencil, and empirical mode decomposition algorithms. The first stage is a new adaptive filtration or noise removal scheme. The second stage is a hybrid parametric-nonparametric signal parameter estimation technique based on an output-only system identification technique. The third stage is optimization of the estimated parameters using a combination of the primal-dual path-following interior point algorithm and a genetic algorithm. The methodology is evaluated using a synthetic signal and a signal obtained experimentally from transverse vibrations of a steel cantilever beam. The method is successful in estimating the frequencies accurately. Further, it estimates the damping exponents. The proposed adaptive filtration method does not include any frequency domain manipulation. Consequently, the time domain signal is not affected as a result of frequency domain and inverse transformations.
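
    The matrix pencil stage mentioned above can be illustrated with a compact implementation that estimates the frequencies and damping exponents of exponentially damped sinusoids; the pencil parameter and SVD truncation below are conventional choices, not necessarily those used in the paper.

```python
import numpy as np

def matrix_pencil(x, M, dt):
    """Estimate frequencies (Hz) and damping exponents (1/s) of M complex
    modes in x via the matrix pencil method with SVD-based noise filtering."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    L = N//3                                     # pencil parameter (typical choice)
    Y = np.array([x[i:i+L+1] for i in range(N - L)])
    Y0, Y1 = Y[:, :-1], Y[:, 1:]
    U, s, Vh = np.linalg.svd(Y0, full_matrices=False)
    U, s, V = U[:, :M], s[:M], Vh[:M, :].conj().T
    Z = np.diag(1.0/s) @ U.conj().T @ Y1 @ V     # M x M reduced pencil
    z = np.linalg.eigvals(Z)                     # poles z_i = exp((alpha + j*2*pi*f)*dt)
    return np.angle(z)/(2*np.pi*dt), np.log(np.abs(z))/dt

# Example: a damped 5 Hz mode in light noise (real signal -> conjugate pole pair, M = 2)
dt = 0.01
t = np.arange(0, 4, dt)
sig = np.exp(-0.12*t)*np.cos(2*np.pi*5.0*t) + 0.01*np.random.randn(t.size)

freqs, damps = matrix_pencil(sig, M=2, dt=dt)
print("frequencies (Hz):", np.round(freqs, 3))   # approximately +5 and -5
print("damping (1/s):   ", np.round(damps, 3))   # approximately -0.12 for both poles
```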

  8. A Method for Implementing Force-Limited Vibration Control

    NASA Technical Reports Server (NTRS)

    Worth, Daniel B.

    1997-01-01

    NASA/GSFC has implemented force-limited vibration control on a controller which can only accept one profile. The method uses a personal computer based digital signal processing board to convert force and/or moment signals into what appears to he an acceleration signal to the controller. This technique allows test centers with older controllers to use the latest force-limited control techniques for random vibration testing. The paper describes the method, hardware, and test procedures used. An example from a test performed at NASA/GSFC is used as a guide.

  9. Surface Electromyography Signal Processing and Classification Techniques

    PubMed Central

    Chowdhury, Rubana H.; Reaz, Mamun B. I.; Ali, Mohd Alauddin Bin Mohd; Bakar, Ashrif A. A.; Chellappan, Kalaivani; Chang, Tae. G.

    2013-01-01

    Electromyography (EMG) signals are becoming increasingly important in many applications, including clinical/biomedical, prosthesis or rehabilitation devices, human machine interactions, and more. However, noisy EMG signals are the major hurdles to be overcome in order to achieve improved performance in the above applications. Detection, processing and classification analysis in electromyography (EMG) is very desirable because it allows a more standardized and precise evaluation of the neurophysiological, rehabilitative and assistive technological findings. This paper reviews two prominent areas: first, the pre-processing methods for eliminating possible artifacts via appropriate preparation at the time of recording EMG signals, and second, a brief explanation of the different methods for processing and classifying EMG signals. This study then compares the numerous methods of analyzing EMG signals, in terms of their performance. The crux of this paper is to review the most recent developments and research studies related to the issues mentioned above. PMID:24048337

  10. System for monitoring an industrial process and determining sensor status

    DOEpatents

    Gross, K.C.; Hoyer, K.K.; Humenik, K.E.

    1995-10-17

    A method and system for monitoring an industrial process and a sensor are disclosed. The method and system include generating a first and second signal characteristic of an industrial process variable. One of the signals can be an artificial signal generated by an auto regressive moving average technique. After obtaining two signals associated with one physical variable, a difference function is obtained by determining the arithmetic difference between the two pairs of signals over time. A frequency domain transformation is made of the difference function to obtain Fourier modes describing a composite function. A residual function is obtained by subtracting the composite function from the difference function and the residual function (free of nonwhite noise) is analyzed by a statistical probability ratio test. 17 figs.
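
    A minimal sketch of the monitoring chain described in these patent records is given below: two signals for the same variable are differenced, the serially correlated part of the difference is captured by its dominant Fourier modes (characterized here on a fault-free training span), and the remaining residual is screened with Wald's sequential probability ratio test. The fault model, mode count, and test thresholds are illustrative assumptions, not the patented procedure.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4096
t = np.arange(n)

# Two signals characteristic of the same process variable: a real sensor and a
# redundant (or ARMA-generated artificial) signal; differencing removes the process itself.
process = 5.0 + 0.5*np.sin(2*np.pi*t/600)
s1 = process + 0.10*np.sin(2*np.pi*t/50) + 0.05*rng.standard_normal(n)
s2 = process + 0.05*rng.standard_normal(n)
s1[n//2:] += 0.08                              # simulated slow sensor drift (the fault)

diff = s1 - s2                                 # difference function

# Composite function: dominant Fourier modes fitted on a fault-free training span
train = diff[:n//2 - 256]
spec = np.fft.rfft(train)
freqs = np.fft.rfftfreq(train.size)            # cycles per sample
keep = np.argsort(np.abs(spec))[-4:]           # strongest modes (includes DC)
composite = np.zeros(n)
for k in keep:
    w = 1.0 if k == 0 else 2.0
    composite += w*np.real(spec[k]*np.exp(2j*np.pi*freqs[k]*t))/train.size

residual = diff - composite                    # near-white when the sensor is healthy

# Wald sequential probability ratio test on the residual mean (H0: 0, H1: mu1)
sigma = residual[:train.size].std()
mu1 = sigma                                    # hypothesized degradation magnitude (assumed)
alpha, beta = 1e-4, 1e-3
upper, lower = np.log((1 - beta)/alpha), np.log(beta/(1 - alpha))
llr, alarm_at = 0.0, None
for i, r in enumerate(residual):
    llr += (mu1*r - 0.5*mu1**2)/sigma**2       # Gaussian log-likelihood ratio increment
    if llr >= upper:
        alarm_at = i                           # sensor degradation declared
        break
    if llr <= lower:
        llr = 0.0                              # accept H0 for this test and restart
print("fault injected at sample", n//2, "- SPRT alarm at sample", alarm_at)
```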

  11. System for monitoring an industrial process and determining sensor status

    DOEpatents

    Gross, K.C.; Hoyer, K.K.; Humenik, K.E.

    1997-05-13

    A method and system are disclosed for monitoring an industrial process and a sensor. The method and system include generating a first and second signal characteristic of an industrial process variable. One of the signals can be an artificial signal generated by an auto regressive moving average technique. After obtaining two signals associated with one physical variable, a difference function is obtained by determining the arithmetic difference between the two pairs of signals over time. A frequency domain transformation is made of the difference function to obtain Fourier modes describing a composite function. A residual function is obtained by subtracting the composite function from the difference function and the residual function (free of nonwhite noise) is analyzed by a statistical probability ratio test. 17 figs.

  12. System for monitoring an industrial process and determining sensor status

    DOEpatents

    Gross, Kenneth C.; Hoyer, Kristin K.; Humenik, Keith E.

    1995-01-01

    A method and system for monitoring an industrial process and a sensor. The method and system include generating a first and second signal characteristic of an industrial process variable. One of the signals can be an artificial signal generated by an auto regressive moving average technique. After obtaining two signals associated with one physical variable, a difference function is obtained by determining the arithmetic difference between the two pairs of signals over time. A frequency domain transformation is made of the difference function to obtain Fourier modes describing a composite function. A residual function is obtained by subtracting the composite function from the difference function and the residual function (free of nonwhite noise) is analyzed by a statistical probability ratio test.

  13. System for monitoring an industrial process and determining sensor status

    DOEpatents

    Gross, Kenneth C.; Hoyer, Kristin K.; Humenik, Keith E.

    1997-01-01

    A method and system for monitoring an industrial process and a sensor. The method and system include generating a first and second signal characteristic of an industrial process variable. One of the signals can be an artificial signal generated by an auto regressive moving average technique. After obtaining two signals associated with one physical variable, a difference function is obtained by determining the arithmetic difference between the two pairs of signals over time. A frequency domain transformation is made of the difference function to obtain Fourier modes describing a composite function. A residual function is obtained by subtracting the composite function from the difference function and the residual function (free of nonwhite noise) is analyzed by a statistical probability ratio test.

  14. Monitoring temperatures in coal conversion and combustion processes via ultrasound

    NASA Astrophysics Data System (ADS)

    Gopalsami, N.; Raptis, A. C.; Mulcahey, T. P.

    1980-02-01

    The state of the art of instrumentation for monitoring temperatures in coal conversion and combustion systems is examined. The instrumentation types studied include thermocouples, radiation pyrometers, and acoustical thermometers. The capabilities and limitations of each type are reviewed. A feasibility study of the ultrasonic thermometry is described. A mathematical model of a pulse-echo ultrasonic temperature measurement system is developed using linear system theory. The mathematical model lends itself to the adaptation of generalized correlation techniques for the estimation of propagation delays. Computer simulations are made to test the efficacy of the signal processing techniques for noise-free as well as noisy signals. Based on the theoretical study, acoustic techniques to measure temperature in reactors and combustors are feasible.
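
    The correlation-based delay estimation at the heart of the feasibility study can be sketched as follows; the pulse shape, noise level, and path length are assumptions, and the mapping from sound speed to temperature would come from the sensor's calibration rather than from this snippet.

```python
import numpy as np

fs = 10e6                                      # sample rate, Hz (assumed)
t = np.arange(0, 20e-6, 1/fs)

# Transmitted ultrasonic pulse: Gaussian-windowed tone burst (assumed shape)
f0 = 1e6
pulse = np.exp(-((t - 2e-6)/0.5e-6)**2)*np.sin(2*np.pi*f0*t)

# Received echo: attenuated, delayed copy in noise; the propagation delay encodes temperature
true_delay = 8.70e-6
echo = 0.3*np.interp(t - true_delay, t, pulse, left=0.0) + 0.05*np.random.randn(t.size)

# Correlation-based delay estimate (plain cross-correlation shown here)
xc = np.correlate(echo, pulse, mode='full')
lags = np.arange(-(pulse.size - 1), echo.size)
d_hat = lags[np.argmax(xc)]/fs

# Path length and delay give the sound speed, which a calibration maps to temperature
L_path = 0.05                                  # metres (assumed)
c_hat = L_path/d_hat
print("estimated delay %.2f us, sound speed %.0f m/s" % (d_hat*1e6, c_hat))
```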

  15. Improving Signal Detection using Allan and Theo Variances

    NASA Astrophysics Data System (ADS)

    Hardy, Andrew; Broering, Mark; Korsch, Wolfgang

    2017-09-01

    Precision measurements often deal with small signals buried within electronic noise. Extraction of these signals can be enhanced through digital signal processing, and improving these techniques improves the achievable signal-to-noise ratio. Studies presently performed at the University of Kentucky are utilizing the electro-optic Kerr effect to understand cell charging effects within ultra-cold neutron storage cells. This work is relevant for the neutron electric dipole moment (nEDM) experiment at Oak Ridge National Laboratory. These investigations, and future investigations in general, will benefit from the illustrated improved analysis techniques. This project will showcase various methods for determining the optimum duration over which data should be gathered. Typically, extending the measuring time of an experimental run reduces the averaged noise. However, experiments also encounter drift due to fluctuations, which mitigates the benefits of extended data gathering. By comparing FFT averaging techniques along with Allan and Theo variance measurements, quantifiable differences in signal detection will be presented. This research is supported by DOE Grants: DE-FG02-99ER411001, DE-AC05-00OR22725.
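
    The Allan variance computation itself is compact; the sketch below (non-overlapping form, with an assumed white-noise-plus-drift test signal) shows how the variance first falls with averaging time and then rises once drift dominates, which is what identifies the optimum measurement duration. The Theo variance, which extends the analysis to longer averaging times, is not shown.

```python
import numpy as np

def allan_variance(y, fs, m_list):
    """Non-overlapping Allan variance of a sampled signal y for a list of
    averaging factors m (averaging time tau = m/fs)."""
    taus, avars = [], []
    for m in m_list:
        k = len(y)//m
        if k < 3:
            break
        bins = y[:k*m].reshape(k, m).mean(axis=1)       # cluster averages
        avars.append(0.5*np.mean(np.diff(bins)**2))
        taus.append(m/fs)
    return np.array(taus), np.array(avars)

# Example: white noise plus a slow drift. The Allan deviation first falls with tau
# (averaging helps) and then rises once drift dominates, revealing the optimum
# data-gathering duration.
fs = 100.0
t = np.arange(0, 600, 1/fs)
y = 1e-3*np.random.randn(t.size) + 1e-5*t

taus, avars = allan_variance(y, fs, [1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048])
for tau, av in zip(taus, avars):
    print("tau = %7.2f s   Allan deviation = %.2e" % (tau, np.sqrt(av)))
```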

  16. Comparison of digital signal-signal beat interference compensation techniques in direct-detection subcarrier modulation systems.

    PubMed

    Li, Zhe; Erkilinc, M Sezer; Galdino, Lidia; Shi, Kai; Thomsen, Benn C; Bayvel, Polina; Killey, Robert I

    2016-12-12

    Single-polarization direct-detection transceivers may offer advantages compared to digital coherent technology for some metro, back-haul, access and inter-data center applications since they offer low-cost, low-complexity solutions. However, a direct-detection receiver introduces nonlinearity upon photo detection, since it is a square-law device, which results in signal distortion due to signal-signal beat interference (SSBI). Consequently, it is desirable to develop effective and low-cost SSBI compensation techniques to improve the performance of such transceivers. In this paper, we compare the performance of a number of recently proposed digital signal processing-based SSBI compensation schemes, including the use of single- and two-stage linearization filters, an iterative linearization filter and an SSBI estimation and cancellation technique. Their performance is assessed experimentally using a 7 × 25 Gb/s wavelength division multiplexed (WDM) single-sideband 16-QAM Nyquist-subcarrier modulation system operating at a net information spectral density of 2.3 (b/s)/Hz.

  17. Envelope filter sequence to delete blinks and overshoots.

    PubMed

    Merino, Manuel; Gómez, Isabel María; Molina, Alberto J

    2015-05-30

    Eye movements have been used in control interfaces and as indicators of somnolence, workload and concentration. Different techniques can be used to detect them: we focus on the electrooculogram (EOG) in which two kinds of interference occur: blinks and overshoots. While they both draw bell-shaped waveforms, blinks are caused by the eyelid, whereas overshoots occur due to target localization error and are placed on saccade. They need to be extracted from the EOG to increase processing effectiveness. This paper describes off- and online processing implementations based on lower envelope for removing bell-shaped noise; they are compared with a 300-ms-median filter. Techniques were analyzed using two kinds of EOG data: those modeled from our own design, and real signals. Using a model signal allowed to compare filtered outputs with ideal data, so that it was possible to quantify processing precision to remove noise caused by blinks, overshoots, and general interferences. We analyzed the ability to delete blinks and overshoots, and waveform preservation. Our technique had a high capacity for reducing interference amplitudes (>97%), even exceeding median filter (MF) results. However, the MF obtained better waveform preservation, with a smaller dependence on fixation width. The proposed technique is better at deleting blinks and overshoots than the MF in model and real EOG signals.
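
    The comparison can be illustrated with a simplified stand-in for the envelope sequence: a greyscale opening (rolling minimum followed by rolling maximum) removes upward bell-shaped bumps narrower than the window, and a 300 ms median filter serves as the reference. The synthetic EOG, window length, and the opening-based approximation of the paper's lower-envelope filter are assumptions.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def rolling(x, win, func):
    """Apply func over a centred sliding window with edge padding."""
    pad = win//2
    xp = np.pad(x, pad, mode='edge')
    return func(sliding_window_view(xp, win)[:len(x)], axis=1)

fs = 250.0                                     # EOG sample rate, Hz (assumed)
t = np.arange(0, 8, 1/fs)

# Synthetic EOG: two saccade steps plus bell-shaped blink/overshoot artifacts
eog = np.where(t > 2.0, 200.0, 0.0) + np.where(t > 5.0, -150.0, 0.0)
for t0, amp, width in [(1.0, 400.0, 0.06), (3.5, 350.0, 0.05), (5.02, 120.0, 0.04)]:
    eog += amp*np.exp(-((t - t0)/width)**2)
eog += 5.0*np.random.randn(t.size)

win = int(0.3*fs)                              # 300 ms window

# Median filter reference
medf = rolling(eog, win, np.median)

# Lower-envelope-style filter: rolling minimum then rolling maximum (greyscale opening)
# removes upward bell-shaped bumps narrower than the window but keeps saccade steps.
lower_env = rolling(rolling(eog, win, np.min), win, np.max)

for name, y in [("median", medf), ("lower envelope", lower_env)]:
    blink_residual = np.max(np.abs(y[(t > 0.8) & (t < 1.2)]))
    print("%-15s residual near blink: %.1f uV" % (name, blink_residual))
```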

  18. A high performance biometric signal and image processing method to reveal blood perfusion towards 3D oxygen saturation mapping

    NASA Astrophysics Data System (ADS)

    Imms, Ryan; Hu, Sijung; Azorin-Peris, Vicente; Trico, Michaël.; Summers, Ron

    2014-03-01

    Non-contact imaging photoplethysmography (PPG) is a recent development in the field of physiological data acquisition, currently undergoing a large amount of research to characterize and define the range of its capabilities. Contact-based PPG techniques have been broadly used in clinical scenarios for a number of years to obtain direct information about the degree of oxygen saturation for patients. With the advent of imaging techniques, there is strong potential to enable access to additional information such as multi-dimensional blood perfusion and saturation mapping. The further development of effective opto-physiological monitoring techniques is dependent upon novel modelling techniques coupled with improved sensor design and effective signal processing methodologies. The biometric signal and imaging processing platform (bSIPP) provides a comprehensive set of features for extraction and analysis of recorded iPPG data, enabling direct comparison with other biomedical diagnostic tools such as ECG and EEG. Additionally, utilizing information about the nature of tissue structure has enabled the generation of an engineering model describing the behaviour of light during its travel through the biological tissue. This enables the estimation of the relative oxygen saturation and blood perfusion in different layers of the tissue to be calculated, which has the potential to be a useful diagnostic tool.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pierre, John W.; Wies, Richard; Trudnowski, Daniel

    Time-synchronized measurements provide rich information for estimating a power-system's electromechanical modal properties via advanced signal processing. This information is becoming critical for the improved operational reliability of interconnected grids. A given mode's properties are described by its frequency, damping, and shape. Modal frequencies and damping are useful indicators of power-system stress, usually declining with increased load or reduced grid capacity. Mode shape provides critical information for operational control actions. This project investigated many advanced techniques for power system identification from measured data, focusing on mode frequency and damping ratio estimation. Investigators from the three universities coordinated their effort with Pacific Northwest National Laboratory (PNNL). Significant progress was made on developing appropriate techniques for system identification with confidence intervals and testing those techniques on field measured data and through simulation. Experimental data from the western area power system was provided by PNNL and Bonneville Power Administration (BPA) for both ambient conditions and for signal injection tests. Three large-scale tests were conducted for the western area in 2005 and 2006. Measured field PMU (Phasor Measurement Unit) data was provided to the three universities. A 19-machine simulation model was enhanced for testing the system identification algorithms. Extensive simulations were run with this model to test the performance of the algorithms. University of Wyoming researchers participated in four primary activities: (1) block and adaptive processing techniques for mode estimation from ambient signals and probing signals, (2) confidence interval estimation, (3) probing signal design and injection method analysis, and (4) performance assessment and validation from simulated and field measured data. Subspace-based methods have been used to improve previous results from block processing techniques. Bootstrap techniques have been developed to estimate confidence intervals for the electromechanical modes from field measured data. Results were obtained using injected signal data provided by BPA. A new probing signal was designed that puts more strength into the signal for a given maximum peak-to-peak swing. Further simulations were conducted on a model based on measured data and with the modifications of the 19-machine simulation model. Montana Tech researchers participated in two primary activities: (1) continued development of the 19-machine simulation test system to include a DC line; and (2) extensive simulation analysis of the various system identification algorithms and bootstrap techniques using the 19-machine model. Researchers at the University of Alaska-Fairbanks focused on the development and testing of adaptive filter algorithms for mode estimation using data generated from simulation models and on data provided in collaboration with BPA and PNNL. Their efforts consisted of pre-processing field data and testing and refining adaptive filter techniques (specifically the Least Mean Squares (LMS), the Adaptive Step-size LMS (ASLMS), and Error Tracking (ET) algorithms). They also improved convergence of the adaptive algorithms by using an initial estimate from a block-processing AR method to initialize the weight vector for LMS. Extensive testing was performed on simulated data from the 19-machine model.
This project was also extensively involved in the WECC (Western Electricity Coordinating Council) system-wide tests carried out in 2005 and 2006. These tests involved injecting known probing signals into the western power grid. One of the primary goals of these tests was the reliable estimation of electromechanical mode properties from measured PMU data. Applied to the system were three types of probing inputs: (1) activation of the Chief Joseph Dynamic Brake, (2) mid-level probing at the Pacific DC Intertie (PDCI), and (3) low-level probing on the PDCI. The Chief Joseph Dynamic Brake is a 1400 MW disturbance to the system and is injected for half a second. For the mid and low-level probing, the Celilo terminal of the PDCI is modulated with a known probing signal. Similar but less extensive tests were conducted in June of 2000. The low-level probing signals were designed at the University of Wyoming. A number of important design factors are considered. The designed low-level probing signal used in the tests is a multi-sine signal. Its frequency content is focused in the range of the inter-area electromechanical modes. The most frequently used of these low-level multi-sine signals had a period of over two minutes, a root-mean-square (rms) value of 14 MW, and a peak magnitude of 20 MW. Up to 15 cycles of this probing signal were injected into the system resulting in a processing gain of 15. The resulting measured response at points throughout the system was not much larger than the ambient noise present in the measurements.
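
The generic recipe behind such a low-level probing signal can be sketched as follows: sum harmonics spanning the inter-area band (taken here as 0.1-1.0 Hz, an assumption), apply Schroeder phases to limit the crest factor, scale to the 14 MW rms level, and check the peak against the 20 MW limit. This is only an illustration of the design constraints; the actual signal used in the WECC tests involved further optimization.

```python
import numpy as np

T = 140.0                      # period of the probing waveform, s (just over two minutes)
fs = 30.0                      # sample rate, Hz (assumed)
t = np.arange(0, T, 1/fs)

# Harmonics of 1/T that fall inside the assumed inter-area band 0.1-1.0 Hz
k = np.arange(1, int(T*1.0) + 1)
k = k[k/T >= 0.1]
K = len(k)

# Schroeder phases keep the crest factor of a flat multisine low
phases = -np.pi*k*(k - 1)/K

x = np.sum(np.cos(2*np.pi*np.outer(k/T, t) + phases[:, None]), axis=0)

# Scale to the target rms modulation level and check the peak constraint
target_rms, peak_limit = 14.0, 20.0            # MW
x *= target_rms/np.sqrt(np.mean(x**2))
print("rms = %.1f MW, peak = %.1f MW (limit %.1f MW)" % (
    target_rms, np.max(np.abs(x)), peak_limit))
# In practice the phases are optimized further (or the signal clipped and re-filtered)
# until the peak constraint is met at the desired rms level.
```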

  20. Introduction to acoustic emission

    NASA Technical Reports Server (NTRS)

    Possa, G.

    1983-01-01

    Typical acoustic emission signal characteristics are described and techniques which localize the signal source by processing the acoustic delay data from multiple sensors are discussed. The instrumentation, which includes sensors, amplifiers, pulse counters, a minicomputer and output devices is examined. Applications are reviewed.

  1. Cancelation and its simulation using Matlab according to active noise control case study of automotive noise silencer

    NASA Astrophysics Data System (ADS)

    Alfisyahrin; Isranuri, I.

    2018-02-01

    Active noise control is a technique for countering noise with noise, that is, sound countered with sound or, in signal processing terms, a signal countered with an opposing signal. The technique can be used to attenuate the relevant noise as required by the engineering task, here reducing automotive muffler noise to a minimum. The objective of this study is to develop an active noise control scheme that cancels the noise of an automotive exhaust (silencer) through signal processing simulation. The noise generator of the active noise control system produces an opposing signal matched to the amplitude and frequency of the automotive noise. The steps are as follows. First, the noise of the automotive silencer was measured to characterize its amplitude and frequency; the field data were captured on the muffler (silencer) of a 2009 Toyota Kijang Capsule assembly and compared with Fourier-transform simulation calculations. MATLAB is used to simulate, via the FFT, how the noise generated by the exhaust (silencer) is processed. The opposing signal, which has a character similar to the source signal, is generated by a signal/noise generator as a copy of the source signal inverted in phase by 180°. The noise cancellation process was examined through computer simulation. The result is that the sound attenuation (noise cancellation) amounts to a difference of 33.7%, obtained from the comparison of the source signal value and the opposing signal value. It can therefore be concluded that the noisy signal can be attenuated by 33.7%.
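
    The cancellation principle can be sketched by superposing the source signal with a 180°-inverted copy whose amplitude and phase are imperfectly matched; the harmonic content and mismatch values below are illustrative assumptions and are not intended to reproduce the 33.7% figure reported in the study.

```python
import numpy as np

fs = 8000.0
t = np.arange(0, 1.0, 1/fs)

# Exhaust-style noise: firing-frequency fundamental plus harmonics (assumed values)
noise = (1.0*np.sin(2*np.pi*90*t) + 0.5*np.sin(2*np.pi*180*t) +
         0.3*np.sin(2*np.pi*270*t))

# FFT characterization of the noise (frequency of the dominant component)
spec = np.fft.rfft(noise)
freqs = np.fft.rfftfreq(noise.size, 1/fs)
print("dominant component: %.0f Hz" % freqs[np.argmax(np.abs(spec))])

# Anti-noise: the source signal inverted in phase by 180 degrees. Imperfect amplitude
# and phase matching (assumed here) is what limits the achievable cancellation.
gain_err, phase_err = 0.85, np.deg2rad(12.0)
anti = -(gain_err*np.sin(2*np.pi*90*t + phase_err) +
         gain_err*0.5*np.sin(2*np.pi*180*t + phase_err) +
         gain_err*0.3*np.sin(2*np.pi*270*t + phase_err))

residual = noise + anti
attenuation = 100.0*(1.0 - np.sqrt(np.mean(residual**2))/np.sqrt(np.mean(noise**2)))
print("rms attenuation: %.1f %%" % attenuation)
```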

  2. A Surrogate Technique for Investigating Deterministic Dynamics in Discrete Human Movement.

    PubMed

    Taylor, Paul G; Small, Michael; Lee, Kwee-Yum; Landeo, Raul; O'Meara, Damien M; Millett, Emma L

    2016-10-01

    Entropy is an effective tool for investigation of human movement variability. However, before applying entropy, it can be beneficial to employ analyses to confirm that observed data are not solely the result of stochastic processes. This can be achieved by contrasting observed data with that produced using surrogate methods. Unlike continuous movement, no appropriate method has been applied to discrete human movement. This article proposes a novel surrogate method for discrete movement data, outlining the processes for determining its critical values. The proposed technique reliably generated surrogates for discrete joint angle time series, destroying fine-scale dynamics of the observed signal, while maintaining macro structural characteristics. Comparison of entropy estimates indicated observed signals had greater regularity than surrogates and were not only the result of stochastic but also deterministic processes. The proposed surrogate method is both a valid and reliable technique to investigate determinism in other discrete human movement time series.

  3. White-Light Optical Information Processing and Holography.

    DTIC Science & Technology

    1982-05-03

    ...artifact noise. However, the deblurring spatial filter that we used was a narrow spectral band centered at 5154 A green light. To compensate for the scaling... Keywords: Processing, White-Light Holography, Image Processing, Optical Signal Processing, Image Subtraction, Image Deblurring. Abstract (continued): ...optical processing technique, we had shown that the incoherent source technique provides better image quality and very low coherent artifact noise.

  4. Pulse-echo probe of rock permeability near oil wells

    NASA Technical Reports Server (NTRS)

    Narasimhan, K. Y.; Parthasarathy, S. P.

    1978-01-01

    Processing method involves sequential insonifications of borehole wall at number of different frequencies. Return signals are normalized in amplitude, and root-mean-square (rms) value of each signal is determined. Values can be processed to yield information on size and number density of microfractures at various depths in rock matrix by using averaging methods developed for pulse-echo technique.

  5. Implementation and Performance of GaAs Digital Signal Processing ASICs

    NASA Technical Reports Server (NTRS)

    Whitaker, William D.; Buchanan, Jeffrey R.; Burke, Gary R.; Chow, Terrance W.; Graham, J. Scott; Kowalski, James E.; Lam, Barbara; Siavoshi, Fardad; Thompson, Matthew S.; Johnson, Robert A.

    1993-01-01

    The feasibility of performing high speed digital signal processing in GaAs gate array technology has been demonstrated with the successful implementation of a VLSI communications chip set for NASA's Deep Space Network. This paper describes the techniques developed to solve some of the technology and implementation problems associated with large scale integration of GaAs gate arrays.

  6. The heart sound preprocessor

    NASA Technical Reports Server (NTRS)

    Chen, W. T.

    1972-01-01

    Technology developed for signal and data processing was applied to diagnostic techniques in the area of phonocardiography (pcg), the graphic recording of the sounds of the heart generated by the functioning of the aortic and ventricular valves. The relatively broad bandwidth of the PCG signal (20 to 2000 Hz) was reduced to less than 100 Hz by the use of a heart sound envelope. The process involves full-wave rectification of the PCG signal, envelope detection of the rectified wave, and low pass filtering of the resultant envelope.
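
    The envelope processing chain (full-wave rectification followed by low-pass envelope detection) can be sketched directly; the synthetic PCG, smoothing window, and detection threshold below are assumptions.

```python
import numpy as np

fs = 4000.0                                    # PCG sample rate, Hz (assumed)
t = np.arange(0, 2.0, 1/fs)

# Crude synthetic PCG: two heart cycles, each with S1 and S2 bursts
pcg = np.zeros_like(t)
for beat in (0.1, 0.9):                        # beat onsets, s
    for onset, f0, amp in [(beat, 60.0, 1.0), (beat + 0.30, 110.0, 0.7)]:
        pcg += amp*np.exp(-((t - onset)/0.02)**2)*np.sin(2*np.pi*f0*t)
pcg += 0.02*np.random.randn(t.size)

# 1) Full-wave rectification
rect = np.abs(pcg)

# 2) Envelope detection / low-pass filtering (moving average of ~25 ms, i.e. <100 Hz content)
win = int(0.025*fs)
envelope = np.convolve(rect, np.ones(win)/win, mode='same')

# The envelope bandwidth is far below the 20-2000 Hz raw PCG bandwidth, which is the
# bandwidth reduction described in the abstract.
above = envelope > 0.3*envelope.max()
n_bursts = np.sum(np.diff(above.astype(int)) == 1) + int(above[0])
print("detected %d heart-sound bursts" % n_bursts)
```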

  7. Thermography-based blood flow imaging in human skin of the hands and feet: a spectral filtering approach.

    PubMed

    Sagaidachnyi, A A; Fomin, A V; Usanov, D A; Skripal, A V

    2017-02-01

    The determination of the relationship between skin blood flow and skin temperature dynamics is the main problem in thermography-based blood flow imaging. Oscillations in skin blood flow are the source of thermal waves propagating from micro-vessels toward the skin's surface, as assumed in this study. This hypothesis allows us to use equations for the attenuation and dispersion of thermal waves for converting the temperature signal into the blood flow signal, and vice versa. We developed a spectral filtering approach (SFA), which is a new technique for thermography-based blood flow imaging. In contrast to other processing techniques, the SFA implies calculations in the spectral domain rather than in the time domain. Therefore, it eliminates the need to solve differential equations. The developed technique was verified within 0.005-0.1 Hz, including the endothelial, neurogenic and myogenic frequency bands of blood flow oscillations. The algorithm for an inverse conversion of the blood flow signal into the skin temperature signal is addressed. The examples of blood flow imaging of hands during cuff occlusion and feet during heating of the back are illustrated. The processing of infrared (IR) thermograms using the SFA allowed us to restore the blood flow signals and achieve correlations of about 0.8 with a waveform of a photoplethysmographic signal. The prospective applications of the thermography-based blood flow imaging technique include non-contact monitoring of the blood supply during engraftment of skin flaps and burns healing, as well the use of contact temperature sensors to monitor low-frequency oscillations of peripheral blood flow.

  8. Psycho-physiological training approach for amputee rehabilitation.

    PubMed

    Dhal, Chandan; Wahi, Akshat

    2015-01-01

    Electromyography (EMG) signals are very noisy and difficult to acquire. Conventional techniques involve amplification and filtering through analog circuits, which makes the system very unstable. Surface EMG signals lie in the frequency range of 6 Hz to 600 Hz, with the dominant content between 20 Hz and 150 Hz. Our project aimed to analyze an EMG signal effectively over its complete frequency range. To address these shortcomings, we designed what we think is an easy, effective, and reliable signal processing technique. We performed spectrum analysis so that all of the processing, such as amplification, filtering, and thresholding, could be carried out on an Arduino Uno board, hence removing the need for analog amplifiers and filtering circuits, which have stability issues. Converting a signal from the time domain to the frequency domain yields detailed data about the signal set. Our main aim is to use this useful data for an alternative methodology for rehabilitation called a psychophysiological approach to rehabilitation in prosthesis, which can reduce the cost of the myoelectric arm, as well as increase its efficiency. This method allows the user to gain control over their muscle sets in a less stressful environment. Further, we also have described how our approach is viable and can benefit the rehabilitation process. We used our DSP EMG signals to play an online game and showed how this approach can be used in rehabilitation.

  9. The application of digital signal processing techniques to a teleoperator radar system

    NASA Technical Reports Server (NTRS)

    Pujol, A.

    1982-01-01

    A digital signal processing system was studied for determining the spectral frequency distribution of echo signals from a teleoperator radar system. The system consisted of a sample-and-hold circuit, an analog-to-digital converter, a digital filter, and a Fast Fourier Transform stage. The system is interfaced to a 16-bit microprocessor, which is programmed to control the complete digital signal processing chain. The digital filtering and Fast Fourier Transform functions are implemented by an S2815 digital filter/utility peripheral chip and an S2814A Fast Fourier Transform chip. The S2815 initially simulates a low-pass Butterworth filter, with later expansion to synthesis of complete filter circuits (bandpass and highpass).
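
    A hedged software analogue of the two digital stages (low-pass Butterworth filtering followed by an FFT) is sketched below; the sample rate, filter order and cutoff are assumptions for illustration, not parameters of the original hardware.

    ```python
    import numpy as np
    from scipy import signal

    fs = 10_000.0          # sample rate in Hz (assumed)
    cutoff = 1_000.0       # Butterworth cutoff in Hz (assumed)

    # 4th-order Butterworth low-pass: a software analogue of the filter stage
    b, a = signal.butter(4, cutoff, btype="low", fs=fs)

    t = np.arange(0.0, 0.1, 1.0 / fs)
    echo = np.sin(2 * np.pi * 400 * t) + 0.5 * np.random.randn(t.size)  # synthetic echo

    filtered = signal.lfilter(b, a, echo)               # digital filtering stage
    spectrum = np.abs(np.fft.rfft(filtered)) / t.size   # FFT stage
    freqs = np.fft.rfftfreq(t.size, 1.0 / fs)           # spectral frequency axis
    ```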

  10. Real-time optical signal processors employing optical feedback: amplitude and phase control.

    PubMed

    Gallagher, N C

    1976-04-01

    The development of real-time coherent optical signal processors has increased the appeal of optical computing techniques in signal processing applications. A major limitation of these real-time systems is the fact that the optical processing material is generally of a phase-only type. The result is that the spatial filters synthesized with these systems must be either phase-only filters or amplitude-only filters. The main concern of this paper is the application of optical feedback techniques to obtain simultaneous and independent amplitude and phase control of the light passing through the system. It is shown that optical feedback techniques may be employed with phase-only spatial filters to obtain this amplitude and phase control. The feedback system with phase-only filters is compared with other feedback systems that employ combinations of phase-only and amplitude-only filters; it is found that the phase-only system is substantially more flexible than the other two systems investigated.

  11. A Comparison of Inductive Sensors in the Characterization of Partial Discharges and Electrical Noise Using the Chromatic Technique

    PubMed Central

    Ardila-Rey, Jorge Alfredo; Montaña, Johny; Schurch, Roger; Covolan Ulson, José Alfredo; Bani, Nurul Aini

    2018-01-01

    Partial discharges (PDs) are one of the most important classes of ageing processes that occur within electrical insulation. PD detection is a standardized technique to qualify the state of the insulation in electric assets such as machines and power cables. Generally, the classical phase-resolved partial discharge (PRPD) patterns are used to perform the identification of the type of PD source when they are related to a specific degradation process and when the electrical noise level is low compared to the magnitudes of the PD signals. However, in practical applications such as measurements carried out in the field or in industrial environments, several PD sources and large noise signals are usually present simultaneously. In this study, three different inductive sensors have been used to evaluate and compare their performance in the detection and separation of multiple PD sources by applying the chromatic technique to each of the measured signals. PMID:29596337

  12. Digital processing of array seismic recordings

    USGS Publications Warehouse

    Ryall, Alan; Birtill, John

    1962-01-01

    This technical letter contains a brief review of the operations involved in digital processing of array seismic recordings by the methods of velocity filtering, summation, cross-multiplication and integration, and by combinations of these operations (the "UK Method" and multiple correlation). Examples are presented of analyses by the several techniques on array recordings obtained by the U.S. Geological Survey during chemical and nuclear explosions in the western United States. Seismograms are synthesized using actual noise and Pn-signal recordings, such that the signal-to-noise ratio, onset time and velocity of the signal are predetermined for the synthetic record. These records are then analyzed by summation, cross-multiplication, multiple correlation and the UK technique, and the results are compared. For all of the examples presented, analysis by the non-linear techniques of multiple correlation and cross-multiplication of the traces on an array recording is preferred to analysis by the linear operations involved in summation and the UK Method.
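
    The contrast between the linear and non-linear operations can be illustrated with a short sketch on delay-aligned traces; the moveout delays and array geometry are assumed to be known, which in practice would come from the velocity-filtering step.

    ```python
    import numpy as np

    def align(traces, delays_samples):
        """Shift each trace by its moveout delay (velocity-filtering step)."""
        return np.array([np.roll(tr, -d) for tr, d in zip(traces, delays_samples)])

    def summation(traces, delays_samples):
        """Linear delay-and-sum of the array traces."""
        return align(traces, delays_samples).mean(axis=0)

    def cross_multiplication(traces, delays_samples):
        """Non-linear product of the aligned traces: coherent arrivals survive,
        while incoherent noise is suppressed more strongly than by summation."""
        return np.prod(align(traces, delays_samples), axis=0)
    ```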

  13. Fault Detection of Roller-Bearings Using Signal Processing and Optimization Algorithms

    PubMed Central

    Kwak, Dae-Ho; Lee, Dong-Han; Ahn, Jong-Hyo; Koh, Bong-Hwan

    2014-01-01

    This study presents fault detection of roller bearings through signal processing and optimization techniques. After the occurrence of scratch-type defects on the inner race of bearings, variations of kurtosis values are investigated in terms of two different data processing techniques: minimum entropy deconvolution (MED) and the Teager-Kaiser Energy Operator (TKEO). MED and the TKEO are employed to qualitatively enhance the discrimination of defect-induced repeating peaks in bearing vibration data with measurement noise. By considering the execution sequence of MED and the TKEO, the study found that the kurtosis sensitivity toward a bearing defect could be greatly improved. Also, the vibration signal from both healthy and damaged bearings is decomposed into multiple intrinsic mode functions (IMFs) through empirical mode decomposition (EMD). The weight vectors of the IMFs become design variables for a genetic algorithm (GA). The weights of each IMF can be optimized through the genetic algorithm to enhance the sensitivity of kurtosis on damaged bearing signals. Experimental results show that the EMD-GA approach successfully improved the resolution of detectability between a roller bearing with a defect and an intact system. PMID:24368701
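
    A minimal sketch of two of the ingredients named above, the Teager-Kaiser Energy Operator and the kurtosis indicator, is given below; the MED and EMD-GA stages of the study are omitted.

    ```python
    import numpy as np
    from scipy.stats import kurtosis

    def tkeo(x):
        """Teager-Kaiser Energy Operator: psi[n] = x[n]^2 - x[n-1] * x[n+1].
        Emphasises short, impulsive, defect-induced peaks in the vibration record."""
        x = np.asarray(x, dtype=float)
        return x[1:-1] ** 2 - x[:-2] * x[2:]

    def defect_indicator(vibration):
        """Kurtosis of the TKEO-enhanced signal; higher values suggest the
        repeating impacts typical of a scratched inner race."""
        return kurtosis(tkeo(vibration), fisher=False)
    ```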

  14. Optical Frequency Upconversion Technique for Transmission of Wireless MIMO-Type Signals over Optical Fiber

    PubMed Central

    Shaddad, R. Q.; Mohammad, A. B.; Al-Gailani, S. A.; Al-Hetar, A. M.

    2014-01-01

    The optical fiber is well adapted to pass multiple wireless signals having different carrier frequencies by using the radio-over-fiber (ROF) technique. However, multiple wireless signals that have the same carrier frequency cannot propagate over a single optical fiber, such as wireless multi-input multi-output (MIMO) signals feeding multiple antennas in a fiber-wireless (FiWi) system. A novel optical frequency upconversion (OFU) technique is proposed to solve this problem. In this paper, the novel OFU approach is used to transmit three wireless MIMO signals over a 20 km standard single mode fiber (SMF). The OFU technique exploits one optical source to produce multiple wavelengths by delivering it to a LiNbO3 external optical modulator. The wireless MIMO signals are then modulated by separate LiNbO3 optical intensity modulators using the optical carriers generated by the OFU process. These modulators use the optical single-sideband with carrier (OSSB+C) modulation scheme to optimize the system performance against the fiber dispersion effect. Each wireless MIMO signal has a 2.4 GHz or 5 GHz carrier frequency, a 1 Gb/s data rate, and 16-quadrature amplitude modulation (16-QAM). Crosstalk between the wireless MIMO signals is strongly suppressed, since each wireless MIMO signal is carried on a specific optical wavelength. PMID:24772009

  15. Audio signal analysis for tool wear monitoring in sheet metal stamping

    NASA Astrophysics Data System (ADS)

    Ubhayaratne, Indivarie; Pereira, Michael P.; Xiang, Yong; Rolfe, Bernard F.

    2017-02-01

    Stamping tool wear can significantly degrade product quality, and hence online tool condition monitoring is a timely need in many manufacturing industries. Even though a large amount of research has been conducted employing different sensor signals, there is still an unmet demand for a low-cost, easy-to-set-up condition monitoring system. Audio signal analysis is a simple method that has the potential to meet this demand, but it has not previously been used for stamping process monitoring. Hence, this paper studies the existence and the significance of the correlation between emitted sound signals and the wear state of sheet metal stamping tools. The corrupting sources generated by the tooling of the stamping press and surrounding machinery have higher amplitudes than the sound emitted by the stamping operation itself. Therefore, a newly developed semi-blind signal extraction technique was employed as a pre-processing step to mitigate the contribution of these corrupting sources. The spectral analysis results of the raw and extracted signals demonstrate a significant qualitative relationship between wear progression and the emitted sound signature. This study lays the basis for employing low-cost audio signal analysis in the development of a real-time industrial tool condition monitoring system.

  16. Vibro-acoustic condition monitoring of Internal Combustion Engines: A critical review of existing techniques

    NASA Astrophysics Data System (ADS)

    Delvecchio, S.; Bonfiglio, P.; Pompoli, F.

    2018-01-01

    This paper deals with state-of-the-art strategies and techniques based on vibro-acoustic signals that can monitor and diagnose malfunctions in Internal Combustion Engines (ICEs) under both test bench and vehicle operating conditions. Over recent years, several authors have summarized the field in critical reviews focused mainly on reciprocating machines in general or on specific signal processing techniques; no review dedicated to IC engine condition monitoring has been made. This paper first gives a brief summary of the generation of sound and vibration in ICEs in order to place the subsequent discussion of vibro-acoustic fault diagnosis in context. An overview of the monitoring and diagnostic techniques described in the literature using both vibration and acoustic signals is also provided. Different faulty conditions are described which affect the combustion, mechanics and aerodynamics of ICEs. Particular importance is given to measuring acoustic signals, as opposed to vibration signals, because the former seem more suitable for implementation in on-board monitoring systems in view of their non-intrusive behaviour, their capability to simultaneously capture signatures from several mechanical components, and the possibility of detecting faults affecting airborne transmission paths. In view of the recent needs of industry to (i) optimize component structural durability by adopting long life cycles, (ii) verify the final engine status at the end of the assembly line and (iii) reduce maintenance costs by monitoring the ICE during vehicle operation, demand for monitoring and diagnostic systems is growing continuously. The present review can be considered a useful guideline for test engineers in understanding which types of fault can be diagnosed using vibro-acoustic signals in sufficient time, in both test bench and operating conditions, and which transducer and signal processing technique (for which the essential background theory is reported here) could be considered the most reliable and informative for the fault in question.

  17. Unveiling the signals from extremely noisy microseismic data for high-resolution hydraulic fracturing monitoring.

    PubMed

    Huang, Weilin; Wang, Runqiu; Li, Huijian; Chen, Yangkang

    2017-09-20

    The microseismic method is an essential technique for monitoring the dynamic status of hydraulic fracturing during the development of unconventional reservoirs. However, one of the challenges in microseismic monitoring is that the seismic signals generated by microseismicity have extremely low amplitudes. We develop a methodology to unveil the signals that are smeared in strong ambient noise and thus facilitate more accurate arrival-time picking, which will ultimately improve the localization accuracy. In the proposed technique, we decompose the recorded data into several morphological multi-scale components. In order to unveil the weak signal, we propose an orthogonalization operator that acts as a time-varying weighting in the morphological reconstruction. The orthogonalization operator is obtained using an inversion process, and the orthogonalized morphological reconstruction can be interpreted as a projection of a higher-dimensional vector. We first test the proposed technique using a synthetic dataset. The technique is then applied to a field dataset recorded in a project in China, in which the signals induced by hydraulic fracturing are recorded by twelve three-component (3-C) geophones in a monitoring well. The result demonstrates that the orthogonalized morphological reconstruction can make extremely weak microseismic signals detectable.

  18. Non-Intrusive Cable Tester

    NASA Technical Reports Server (NTRS)

    Medelius, Pedro J. (Inventor); Simpson, Howard J. (Inventor)

    1999-01-01

    A cable tester is described for low frequency testing of a cable for faults. The tester allows testing a cable beyond a point where a signal conditioner is installed, minimizing the number of connections that have to be disconnected. A magnetic pickup coil is described for detecting a test signal injected into the cable. A narrow bandpass filter is described for increasing detection of the test signal. The bandpass filter reduces noise so that the high-gain amplifier provided for detecting the test signal is not completely saturated by noise. To further increase the accuracy of the cable tester, processing gain is achieved by comparing the signal from the amplifier with at least one reference signal emulating the low frequency input signal injected into the cable. Different processing techniques for evaluating a detected signal are described.

  19. Liquid Argon TPC Signal Formation, Signal Processing and Hit Reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baller, Bruce

    2017-03-11

    This document describes the early stage of the reconstruction chain that was developed for the ArgoNeuT and MicroBooNE experiments at Fermilab. These experiments study accelerator neutrino interactions that occur in a Liquid Argon Time Projection Chamber. Reconstructing the properties of particles produced in these interactions requires knowledge of the micro-physics processes that affect the creation and transport of ionization electrons to the readout system. A wire signal deconvolution technique was developed to convert wire signals to a standard form for hit reconstruction, to remove artifacts in the electronics chain and to remove coherent noise.
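
    The deconvolution step can be sketched as a regularised frequency-domain division; this is only an illustration of the general idea, not the experiments' actual response functions or noise filtering, and the regularisation constant is an assumption.

    ```python
    import numpy as np

    def deconvolve_wire_signal(measured, response, reg=1e-3):
        """Regularised frequency-domain deconvolution: divide out an assumed
        electronics/field response to put wire signals into a standard form
        for hit finding. `reg` stands in for the noise filtering a real
        implementation would apply."""
        n = len(measured)
        m_spec = np.fft.rfft(measured)
        r_spec = np.fft.rfft(response, n=n)
        inverse = np.conj(r_spec) / (np.abs(r_spec) ** 2 + reg)  # regularised inverse
        return np.fft.irfft(m_spec * inverse, n=n)
    ```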

  20. Improved Signal Processing Technique Leads to More Robust Self Diagnostic Accelerometer System

    NASA Technical Reports Server (NTRS)

    Tokars, Roger; Lekki, John; Jaros, Dave; Riggs, Terrence; Evans, Kenneth P.

    2010-01-01

    The self diagnostic accelerometer (SDA) is a sensor system designed to actively monitor the health of an accelerometer. In this case an accelerometer is considered healthy if it can be determined that it is operating correctly and its measurements may be relied upon. The SDA system accomplishes this by actively monitoring the accelerometer for a variety of failure conditions including accelerometer structural damage, an electrical open circuit, and most importantly accelerometer detachment. In recent testing of the SDA system in emulated engine operating conditions it has been found that a more robust signal processing technique was necessary. An improved accelerometer diagnostic technique and test results of the SDA system utilizing this technique are presented here. Furthermore, the real time, autonomous capability of the SDA system to concurrently compensate for effects from real operating conditions such as temperature changes and mechanical noise, while monitoring the condition of the accelerometer health and attachment, will be demonstrated.

  1. A review of signals used in sleep analysis

    PubMed Central

    Roebuck, A; Monasterio, V; Gederi, E; Osipov, M; Behar, J; Malhotra, A; Penzel, T; Clifford, GD

    2014-01-01

    This article presents a review of signals used for measuring physiology and activity during sleep and techniques for extracting information from these signals. We examine both clinical needs and biomedical signal processing approaches across a range of sensor types. Issues with recording and analysing the signals are discussed, together with their applicability to various clinical disorders. Both univariate and data fusion (exploiting the diverse characteristics of the primary recorded signals) approaches are discussed, together with a comparison of automated methods for analysing sleep. PMID:24346125

  2. Signal Processing for Determining Water Height in Steam Pipes with Dynamic Surface Conditions

    NASA Technical Reports Server (NTRS)

    Lih, Shyh-Shiuh; Lee, Hyeong Jae; Bar-Cohen, Yoseph

    2015-01-01

    An enhanced signal processing method, based on the filtered Hilbert envelope of the auto-correlation function of the wave signal, has been developed to monitor the height of condensed water through the steel wall of steam pipes with dynamic surface conditions. The developed signal processing algorithm can also be used to estimate the thickness of the pipe, which determines the cut-off frequency of the low-pass filter applied to the Hilbert envelope. Testing and analysis results obtained using the developed technique under dynamic surface conditions are presented. A multiple-transducer array setup and methodology are proposed for both pulse-echo and pitch-catch signals to monitor the fluctuation of the water height due to disturbances, water flow, and other anomalous conditions.
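
    The processing chain described above (auto-correlation, Hilbert envelope, low-pass filtering) can be sketched as follows; the cutoff frequency here is an arbitrary placeholder, whereas in the method it is derived from the estimated pipe-wall thickness.

    ```python
    import numpy as np
    from scipy.signal import butter, correlate, filtfilt, hilbert

    def filtered_hilbert_envelope(echo, fs, cutoff):
        """Filtered Hilbert envelope of the auto-correlation of an ultrasonic
        record; `cutoff` would be tied to the pipe-wall thickness in practice."""
        ac = correlate(echo, echo, mode="full")[len(echo) - 1:]  # auto-correlation
        envelope = np.abs(hilbert(ac))                           # Hilbert envelope
        b, a = butter(4, cutoff, btype="low", fs=fs)
        return filtfilt(b, a, envelope)                          # low-pass filtering
    ```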

  3. Profiling of poorly stratified smoky atmospheres with scanning lidar

    Treesearch

    Vladimir Kovalev; Cyle Wold; Alexander Petkov; Wei Min Hao

    2012-01-01

    A multiangle data processing technique is considered, based on using the signal measured at zenith (or close to zenith) as the core source for extracting information about the vertical atmospheric aerosol loading. The multiangle signals are used as auxiliary data to extract the vertical transmittance profile from the zenith signal. Simulated and experimental...

  4. Agile waveforms for joint SAR-GMTI processing

    NASA Astrophysics Data System (ADS)

    Jaroszewski, Steven; Corbeil, Allan; McMurray, Stephen; Majumder, Uttam; Bell, Mark R.; Corbeil, Jeffrey; Minardi, Michael

    2016-05-01

    Wideband radar waveforms that employ spread-spectrum techniques were investigated and experimentally tested. The waveforms combine bi-phase coding with a traditional LFM chirp and are applicable to joint SAR-GMTI processing. After de-spreading, the received signals can be processed to support simultaneous GMTI and high resolution SAR imaging missions by airborne radars. The spread spectrum coding techniques can provide nearly orthogonal waveforms and offer enhanced operation in some environments by distributing the transmitted energy over a large instantaneous bandwidth. The LFM component offers the desired Doppler tolerance. In this paper, the waveforms are formulated and a shift-register approach for de-spreading the received signals is described. Hardware loop-back testing has shown the feasibility of using these waveforms in an experimental radar test bed.

  5. Signal Processing Studies of a Simulated Laser Doppler Velocimetry-Based Acoustic Sensor

    DTIC Science & Technology

    1990-10-17

    investigated using spectral correlation methods. Results indicate that it may be possible to extend demonstrated LDV-based acoustic sensor sensitivities using higher order processing techniques. (Author)

  6. Ultrafast chirped optical waveform recording using referenced heterodyning and a time microscope

    DOEpatents

    Bennett, Corey Vincent

    2010-06-15

    A new technique for capturing both the amplitude and phase of an optical waveform is presented. This technique can capture signals with many THz of bandwidth in a single shot (e.g., temporal resolution of about 44 fs), or be operated repetitively at a high rate. That is, each temporal window (or frame) is captured single shot, in real time, but the process may be run repeatedly or single-shot. This invention expands upon previous work in temporal imaging by adding heterodyning, which can be self-referenced for improved precision and stability, to convert frequency chirp (the second derivative of phase with respect to time) into a time-varying intensity modulation. By also including a variety of possible demultiplexing techniques, this process is scalable to recording continuous signals.

  7. Ultrafast chirped optical waveform recorder using referenced heterodyning and a time microscope

    DOEpatents

    Bennett, Corey Vincent [Livermore, CA

    2011-11-22

    A new technique for capturing both the amplitude and phase of an optical waveform is presented. This technique can capture signals with many THz of bandwidth in a single shot (e.g., temporal resolution of about 44 fs), or be operated repetitively at a high rate. That is, each temporal window (or frame) is captured single shot, in real time, but the process may be run repeatedly or single-shot. This invention expands upon previous work in temporal imaging by adding heterodyning, which can be self-referenced for improved precision and stability, to convert frequency chirp (the second derivative of phase with respect to time) into a time-varying intensity modulation. By also including a variety of possible demultiplexing techniques, this process is scalable to recording continuous signals.

  8. Physiological correlates of mental workload

    NASA Technical Reports Server (NTRS)

    Zacharias, G. L.

    1980-01-01

    A literature review was conducted to assess the basis of and techniques for physiological assessment of mental workload. The study findings reviewed had shortcomings involving one or more of the following basic problems: (1) physiologic arousal can easily be driven by nonworkload factors, confounding any proposed metric; (2) the profound absence of underlying physiologic models has promulgated a multiplicity of seemingly arbitrary signal processing techniques; (3) the unspecified multidimensional nature of physiological "state" has given rise to a broad spectrum of competing, noncommensurate metrics; and (4) the lack of an adequate definition of workload compels physiologic correlations to suffer either from the vagueness of implicit workload measures or from the variance of explicit subjective assessments. Using specific studies as examples, two basic signal processing/data reduction techniques in current use, time and ensemble averaging, are discussed; a short sketch of both follows.
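
    The two reduction techniques can be contrasted with a brief, hedged sketch: time averaging smooths a single record, while ensemble averaging combines epochs locked to repeated task events; the epoch length and event indices are placeholders.

    ```python
    import numpy as np

    def time_average(signal, window):
        """Moving (time) average of a single physiological record."""
        kernel = np.ones(window) / window
        return np.convolve(signal, kernel, mode="same")

    def ensemble_average(signal, event_indices, window):
        """Average of fixed-length epochs locked to task events
        (ensemble averaging); `window` is the epoch length in samples."""
        epochs = [signal[i:i + window] for i in event_indices
                  if i + window <= len(signal)]
        return np.mean(epochs, axis=0)
    ```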

  9. EFQPSK Versus CERN: A Comparative Study

    NASA Technical Reports Server (NTRS)

    Borah, Deva K.; Horan, Stephen

    2001-01-01

    This report presents a comparative study of Enhanced Feher's Quadrature Phase Shift Keying (EFQPSK) and Constrained Envelope Root Nyquist (CERN) techniques. These two techniques have been developed in recent times to provide high spectral and power efficiencies in nonlinear amplifier environments. The purpose of this study is to gain insights into these techniques and to help system planners and designers with an appropriate set of guidelines for using them. The comparative study presented in this report relies on effective simulation models and procedures. Therefore, a significant part of this report is devoted to understanding the mathematical and simulation models of the techniques and their set-up procedures. In particular, mathematical models of EFQPSK and CERN, effects of the sampling rate in discrete time signal representation, and modeling of nonlinear amplifiers and predistorters have been considered in detail. The results of this study show that both EFQPSK and CERN signals provide spectrally efficient communications compared to filtered conventional linear modulation techniques when a nonlinear power amplifier is used. However, there are important differences. The spectral efficiency of CERN signals, with a small amount of input backoff, is significantly better than that of EFQPSK signals if the nonlinear amplifier is an ideal clipper. However, to achieve such spectral efficiencies with a practical nonlinear amplifier, CERN processing requires a predistorter which effectively translates the amplifier's characteristics close to those of an ideal clipper. Thus, the spectral performance of CERN signals strongly depends on the predistorter. EFQPSK signals, on the other hand, do not need such predistorters since their spectra are almost unaffected by the nonlinear amplifier. This report discusses several receiver structures for EFQPSK signals. It is observed that optimal receiver structures can be realized for both coded and uncoded EFQPSK signals without much increase in computational complexity. When a nonlinear amplifier is used, the bit error rate (BER) performance of CERN signals with a matched filter receiver is found to be more than one decibel (dB) worse than the bit error performance of EFQPSK signals. Although channel coding is found to provide BER performance improvement for both EFQPSK and CERN signals, the performance of EFQPSK signals remains better than that of CERN. Optimal receiver structures for CERN signals with nonlinear equalization are left as possible future work. Based on the numerical results, it is concluded that, in nonlinear channels, CERN processing leads toward better bandwidth efficiency with a compromise in power efficiency. Hence, for bandwidth-efficient communication needs, CERN is a good solution provided effective adaptive predistorters can be realized. On the other hand, EFQPSK signals provide a good power-efficient solution with a compromise in bandwidth efficiency.

  10. Hyperpolarized NMR: d-DNP, PHIP, and SABRE.

    PubMed

    Kovtunov, Kirill Viktorovich; Pokochueva, Ekaterina; Salnikov, Oleg; Cousin, Samuel; Kurzbach, Dennis; Vuichoud, Basile; Jannin, Sami; Chekmenev, Eduard; Goodson, Boyd; Barskiy, Danila; Koptyug, Igor

    2018-05-23

    NMR signal intensities can be enhanced by several orders of magnitude via techniques for the hyperpolarization of different molecules, allowing one to overcome the main sensitivity challenge of modern NMR/MRI techniques. Hyperpolarized fluids can be successfully used in different applications of materials science and biomedicine. This focus review covers the fundamentals of the preparation of hyperpolarized liquids and gases via dissolution dynamic nuclear polarization (d-DNP) and parahydrogen-based techniques such as signal amplification by reversible exchange (SABRE) and parahydrogen-induced polarization (PHIP) in both heterogeneous and homogeneous processes. The different novel aspects of the formation and utilization of hyperpolarized fluids are described, along with the possibility of observing NMR signal enhancement.

  11. Remote photoacoustic detection of liquid contamination of a surface.

    PubMed

    Perrett, Brian; Harris, Michael; Pearson, Guy N; Willetts, David V; Pitter, Mark C

    2003-08-20

    A method for the remote detection and identification of liquid chemicals at ranges of tens of meters is presented. The technique uses pulsed indirect photoacoustic spectroscopy in the 10-microm wavelength region. Enhanced sensitivity is brought about by three main system developments: (1) increased laser-pulse energy (150 microJ/pulse), leading to increased strength of the generated photoacoustic signal; (2) increased microphone sensitivity and improved directionality by the use of a 60-cm-diameter parabolic dish; and (3) signal processing that allows improved discrimination of the signal from noise levels through prior knowledge of the pulse shape and pulse-repetition frequency. The practical aspects of applying the technique in a field environment are briefly examined, and possible applications of this technique are discussed.

  12. A Versatile Multichannel Digital Signal Processing Module for Microcalorimeter Arrays

    NASA Astrophysics Data System (ADS)

    Tan, H.; Collins, J. W.; Walby, M.; Hennig, W.; Warburton, W. K.; Grudberg, P.

    2012-06-01

    Different techniques have been developed for reading out microcalorimeter sensor arrays: individual outputs for small arrays, and time-division or frequency-division or code-division multiplexing for large arrays. Typically, raw waveform data are first read out from the arrays using one of these techniques and then stored on computer hard drives for offline optimum filtering, leading not only to requirements for large storage space but also limitations on achievable count rate. Thus, a read-out module that is capable of processing microcalorimeter signals in real time will be highly desirable. We have developed multichannel digital signal processing electronics that are capable of on-board, real time processing of microcalorimeter sensor signals from multiplexed or individual pixel arrays. It is a 3U PXI module consisting of a standardized core processor board and a set of daughter boards. Each daughter board is designed to interface a specific type of microcalorimeter array to the core processor. The combination of the standardized core plus this set of easily designed and modified daughter boards results in a versatile data acquisition module that not only can easily expand to future detector systems, but is also low cost. In this paper, we first present the core processor/daughter board architecture, and then report the performance of an 8-channel daughter board, which digitizes individual pixel outputs at 1 MSPS with 16-bit precision. We will also introduce a time-division multiplexing type daughter board, which takes in time-division multiplexing signals through fiber-optic cables and then processes the digital signals to generate energy spectra in real time.

  13. Data-derived symbol synchronization of MASK and QASK signals. [for multilevel digital communication systems

    NASA Technical Reports Server (NTRS)

    Simon, M. K.

    1974-01-01

    Multilevel amplitude-shift-keying (MASK) and quadrature amplitude-shift-keying (QASK) as signaling techniques for multilevel digital communications systems, and the problem of providing symbol synchronization in the receivers of such systems are discussed. A technique is presented for extracting symbol sync from an MASK or QASK signal. The scheme is a generalization of the data transition tracking loop used in PSK systems. The performance of the loop was analyzed in terms of its mean-squared jitter and its effects on the data detection process in MASK and QASK systems.

  14. A Minicomputer Based Scheme for Turbulence Measurements with Pulsed Doppler Ultrasound

    PubMed Central

    Craig, J. I.; Saxena, Vijay; Giddens, D. P.

    1979-01-01

    The present paper describes the design and performance of a digitally based Doppler signal processing system that is currently being used in hemodynamics research on arteriosclerosis. The major emphasis is on the development of the digital signal processing technique and its implementation in a small but powerful minicomputer. The work reported here is part of a larger ongoing effort by the authors to study the structure of turbulence in blood flow and its relation to arteriosclerosis. Some of the techniques and instruments developed are felt to have broad applicability to fluid mechanics, and especially to pipe flow.

  15. Advanced Communication Processing Techniques

    NASA Astrophysics Data System (ADS)

    Scholtz, Robert A.

    This document contains the proceedings of the workshop Advanced Communication Processing Techniques, held May 14 to 17, 1989, near Ruidoso, New Mexico. Sponsored by the Army Research Office (under Contract DAAL03-89-G-0016) and organized by the Communication Sciences Institute of the University of Southern California, the workshop had as its objective to determine those applications of intelligent/adaptive communication signal processing that have been realized and to define areas of future research. We at the Communication Sciences Institute believe that there are two emerging areas which deserve considerably more study in the near future: (1) Modulation characterization, i.e., the automation of modulation format recognition so that a receiver can reliably demodulate a signal without using a priori information concerning the signal's structure, and (2) the incorporation of adaptive coding into communication links and networks. (Encoders and decoders which can operate with a wide variety of codes exist, but the way to utilize and control them in links and networks is an issue). To support these two new interest areas, one must have both a knowledge of (3) the kinds of channels and environments in which the systems must operate, and of (4) the latest adaptive equalization techniques which might be employed in these efforts.

  16. Beamforming array techniques for acoustic emission monitoring of large concrete structures

    NASA Astrophysics Data System (ADS)

    McLaskey, Gregory C.; Glaser, Steven D.; Grosse, Christian U.

    2010-06-01

    This paper introduces a novel method of acoustic emission (AE) analysis which is particularly suited for field applications on large plate-like reinforced concrete structures, such as walls and bridge decks. Similar to phased-array signal processing techniques developed for other non-destructive evaluation methods, this technique adapts beamforming tools developed for passive sonar and seismological applications for use in AE source localization and signal discrimination analyses. Instead of relying on the relatively weak P-wave, this method uses the energy-rich Rayleigh wave and requires only a small array of 4-8 sensors. Tests on an in-service reinforced concrete structure demonstrate that the azimuth of an artificial AE source can be determined via this method for sources located up to 3.8 m from the sensor array, even when the P-wave is undetectable. The beamforming array geometry also allows additional signal processing tools to be implemented, such as the VESPA process (VElocity SPectral Analysis), whereby the arrivals of different wave phases are identified by their apparent velocity of propagation. Beamforming AE can reduce sampling rate and time synchronization requirements between spatially distant sensors which in turn facilitates the use of wireless sensor networks for this application.
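
    A hedged sketch of the underlying delay-and-sum idea for a small surface array is given below; the Rayleigh-wave speed, sampling rate and array geometry are illustrative assumptions, and the VESPA stage is not shown.

    ```python
    import numpy as np

    def beamform_azimuth(traces, sensor_xy, fs, wave_speed=2200.0, n_az=360):
        """Delay-and-sum beamforming over azimuth for a small AE array.
        traces: (n_sensors, n_samples); sensor_xy: (n_sensors, 2) in metres.
        wave_speed is an assumed Rayleigh-wave velocity for concrete."""
        azimuths = np.linspace(0.0, 2 * np.pi, n_az, endpoint=False)
        centre = sensor_xy.mean(axis=0)
        powers = []
        for az in azimuths:
            direction = np.array([np.cos(az), np.sin(az)])
            # Plane-wave delay of each sensor relative to the array centre
            delays = (sensor_xy - centre) @ direction / wave_speed
            shifts = np.round(delays * fs).astype(int)
            stack = np.mean([np.roll(tr, -s) for tr, s in zip(traces, shifts)],
                            axis=0)
            powers.append(np.sum(stack ** 2))
        return azimuths[int(np.argmax(powers))]  # azimuth of maximum beam power
    ```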

  17. Advances in studying phasic dopamine signaling in brain reward mechanisms

    PubMed Central

    Wickham, Robert J.; Solecki, Wojciech; Rathbun, Liza R.; Neugebauer, Nichole M.; Wightman, R. Mark; Addy, Nii A.

    2013-01-01

    The last sixty years of research have provided extraordinary advances in our knowledge of the reward system. Since its initial discovery as a neurotransmitter by Carlsson and colleagues (Carlsson et al., 1957), dopamine (DA) has emerged as an important mediator of reward processing. As a result, a number of electrochemical techniques have been developed to directly measure DA levels in the brain using various preparations. Many of these techniques and preparations differ in the types of questions that they can address. Together, these techniques have begun to elucidate the complex roles of tonic and phasic DA signaling in reward processing and in addiction. In this review, we will first provide a guide to the most commonly used electrochemical methods for DA detection and describe their utility in furthering our knowledge about DA's role in reward and addiction. Second, we will review the value of common in vitro and in vivo preparations and describe their ability to address different types of questions. Last, we will review recent data that have provided new insight into the mechanisms of in vivo phasic DA signaling and its role in reward processing and reward-mediated behavior. PMID:23747914

  18. High frequency source localization in a shallow ocean sound channel using frequency difference matched field processing.

    PubMed

    Worthmann, Brian M; Song, H C; Dowling, David R

    2015-12-01

    Matched field processing (MFP) is an established technique for source localization in known multipath acoustic environments. Unfortunately, in many situations, particularly those involving high frequency signals, imperfect knowledge of the actual propagation environment prevents accurate propagation modeling, and source localization via MFP fails. For beamforming applications, this actual-to-model mismatch problem was mitigated through a frequency downshift, made possible by a nonlinear array-signal-processing technique called frequency difference beamforming [Abadi, Song, and Dowling (2012). J. Acoust. Soc. Am. 132, 3018-3029]. Here, this technique is extended to conventional (Bartlett) MFP using simulations and measurements from the 2011 Kauai Acoustic Communications MURI experiment (KAM11) to produce ambiguity surfaces at frequencies well below the signal bandwidth, where the detrimental effects of mismatch are reduced. Both the simulation and experimental results suggest that frequency difference MFP can be more robust against environmental mismatch than conventional MFP. In particular, signals of frequency 11.2 kHz to 32.8 kHz were broadcast 3 km through a 106-m-deep shallow ocean sound channel to a sparse 16-element vertical receiving array. Frequency difference MFP unambiguously localized the source in several experimental data sets with an average peak-to-side-lobe ratio of 0.9 dB, an average absolute-value range error of 170 m, and an average absolute-value depth error of 10 m.

  19. Quadrature demodulation based circuit implementation of pulse stream for ultrasonic signal FRI sparse sampling

    NASA Astrophysics Data System (ADS)

    Shoupeng, Song; Zhou, Jiang

    2017-03-01

    Converting an ultrasonic signal into an ultrasonic pulse stream is the key step of finite rate of innovation (FRI) sparse sampling. At present, ultrasonic pulse-stream-forming techniques are mainly based on digital algorithms; no hardware circuit that can achieve this has been reported. This paper proposes a new quadrature demodulation (QD) based circuit implementation for forming an ultrasonic pulse stream. After an outline of FRI sparse sampling theory, the processing of the ultrasonic signal is explained, followed by a discussion and analysis of ultrasonic pulse-stream-forming methods. In contrast to ultrasonic signal envelope extraction techniques, a quadrature demodulation method (QDM) is proposed. Simulation experiments were performed to determine its performance at various signal-to-noise ratios (SNRs). The circuit was then designed, with a mixing module, an oscillator, a low-pass filter (LPF), and a root-of-square-sum module. Finally, application experiments were carried out on ultrasonic flaw testing of a pipeline sample. The experimental results indicate that the QDM can accurately convert an ultrasonic signal into an ultrasonic pulse stream and recover the original signal information, such as pulse width, amplitude, and time of arrival. This technique lays the foundation for ultrasonic signal FRI sparse sampling directly with hardware circuitry.
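
    The QDM chain (mixing with quadrature carriers, low-pass filtering, root of the sum of squares) can be sketched in software as below; the transducer centre frequency and filter cutoff are assumptions, and the hardware circuit described above performs the same steps with analog components.

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt

    def quadrature_demodulate(x, fs, f_carrier, cutoff):
        """Quadrature demodulation of an ultrasonic record: mix with cosine and
        sine carriers, low-pass both branches, then take the root of the sum of
        squares to obtain the pulse envelope (pulse stream)."""
        t = np.arange(len(x)) / fs
        i_branch = x * np.cos(2 * np.pi * f_carrier * t)   # in-phase mixer
        q_branch = x * np.sin(2 * np.pi * f_carrier * t)   # quadrature mixer
        b, a = butter(4, cutoff, btype="low", fs=fs)
        i_lp = filtfilt(b, a, i_branch)                    # low-pass filter
        q_lp = filtfilt(b, a, q_branch)
        return 2.0 * np.sqrt(i_lp ** 2 + q_lp ** 2)        # root of square sum
    ```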

  20. Maximum-likelihood spectral estimation and adaptive filtering techniques with application to airborne Doppler weather radar. Thesis Technical Report No. 20

    NASA Technical Reports Server (NTRS)

    Lai, Jonathan Y.

    1994-01-01

    This dissertation focuses on the signal processing problems associated with the detection of hazardous windshears using airborne Doppler radar when weak weather returns are in the presence of strong clutter returns. In light of the frequent inadequacy of spectral-processing-oriented clutter suppression methods, we model a clutter signal as multiple sinusoids plus Gaussian noise and propose adaptive filtering approaches that better capture the temporal characteristics of the signal process. This idea leads to two research topics in signal processing: (1) signal modeling and parameter estimation, and (2) adaptive filtering in this particular signal environment. A high-resolution, low-SNR-threshold maximum likelihood (ML) frequency estimation and signal modeling algorithm is devised and proves capable of delineating both the spectral and temporal nature of the clutter return. Furthermore, the performance of the Least Mean Square (LMS) based adaptive filter for the proposed signal model is investigated, and promising simulation results testify to its potential for clutter rejection, leading to more accurate estimation of windspeed and thus a better assessment of the windshear hazard.
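
    A minimal LMS adaptive filter of the kind investigated can be sketched as below; the filter order and step size are illustrative, and the use of a separate clutter reference channel is an assumption of this sketch rather than a detail taken from the thesis.

    ```python
    import numpy as np

    def lms_cancel(reference, primary, order=8, mu=0.01):
        """Least Mean Square adaptive filter: estimate the clutter-like part of
        `primary` from `reference` and return the residual (weather estimate)
        together with the final weights. `order` and `mu` are illustrative."""
        n = len(primary)
        w = np.zeros(order)
        residual = np.zeros(n)
        for k in range(order, n):
            x = reference[k - order:k][::-1]  # most recent reference samples first
            y = w @ x                         # adaptive clutter estimate
            residual[k] = primary[k] - y      # error signal drives the adaptation
            w += 2.0 * mu * residual[k] * x   # LMS weight update
        return residual, w
    ```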

  1. An enhanced multi-channel bacterial foraging optimization algorithm for MIMO communication system

    NASA Astrophysics Data System (ADS)

    Palanimuthu, Senthilkumar Jayalakshmi; Muthial, Chandrasekaran

    2017-04-01

    Channel estimation and optimisation are the main challenging tasks in Multi Input Multi Output (MIMO) wireless communication systems. In this work, a Multi-Channel Bacterial Foraging Optimization Algorithm approach is proposed for antenna selection in a transmission area. The main advantage of this method is that it effectively reduces the loss of bandwidth during data transmission. Here, channel estimation and optimisation are considered for improving the transmission speed and reducing unused bandwidth. Initially, the message is given to the input of the communication system. Then, symbol mapping is performed to convert the message into signals, which are encoded using a space-time encoding technique. Here, the single signal is divided into multiple signals that are fed to the input of a space-time precoder, and multiplexing is applied for transmission channel estimation. In this paper, the Rayleigh channel, a Gaussian-distribution-based channel model, is selected according to the bandwidth range. Demultiplexing, the reverse of multiplexing, is then applied to the received signal to split the combined signal arriving from the medium back into the original information signals. Furthermore, the long-term evolution technique is used for scheduling the time allocated to channels during transmission, and a hidden Markov model is employed to predict the channel state information. Finally, the signals are decoded and the reconstructed signal is obtained after the scheduling process. The experimental results evaluate the performance of the proposed MIMO communication system in terms of bit error rate, mean squared error, average throughput, outage capacity and signal-to-interference-plus-noise ratio.

  2. BPSK Demodulation Using Digital Signal Processing

    NASA Technical Reports Server (NTRS)

    Garcia, Thomas R.

    1996-01-01

    A digital communications signal is a sinusoidal waveform that is modified by a binary (digital) information signal. The sinusoidal waveform is called the carrier. The carrier may be modified in amplitude, frequency, phase, or a combination of these. In this project, a binary phase shift keyed (BPSK) signal is the communication signal. In a BPSK signal the phase of the carrier is set to one of two states, 180 degrees apart, by a binary (i.e., 1 or 0) information signal. A digital signal is a sampled version of a "real world" time-continuous signal, generated by sampling the continuous signal at discrete points in time. The rate at which the signal is sampled is called the sampling rate (f(s)); the device that performs this operation is called an analog-to-digital (A/D) converter or digitizer. The digital signal is composed of the sequence of individual values of the sampled BPSK signal. Digital signal processing (DSP) is the modification of the digital signal by mathematical operations, and a device that performs this processing is called a digital signal processor. After processing, the digital signal may be converted back to an analog signal using a digital-to-analog (D/A) converter. The goal of this project is to develop a system that will recover the digital information from a BPSK signal using DSP techniques. The project is broken down into the following steps: (1) development of the algorithms required to demodulate the BPSK signal; (2) simulation of the system; and (3) implementation of a BPSK receiver using digital signal processing hardware.
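
    A hedged end-to-end sketch of the demodulation steps is shown below, assuming ideal carrier phase and symbol timing (a practical receiver must also recover these); all numeric parameters are placeholders rather than values from the project.

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt

    def bpsk_demodulate(x, fs, f_carrier, symbol_rate):
        """Recover bits from a sampled BPSK signal assuming ideal carrier and
        symbol synchronization."""
        t = np.arange(len(x)) / fs
        mixed = x * np.cos(2 * np.pi * f_carrier * t)   # coherent mixing
        b, a = butter(4, symbol_rate, btype="low", fs=fs)
        baseband = filtfilt(b, a, mixed)                # remove the 2*fc component
        sps = int(fs / symbol_rate)                     # samples per symbol
        n_sym = len(baseband) // sps
        # Integrate-and-dump over each symbol, then hard decision on the sign
        sums = baseband[:n_sym * sps].reshape(n_sym, sps).sum(axis=1)
        return (sums > 0).astype(int)
    ```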

  3. Time-frequency analysis of pediatric murmurs

    NASA Astrophysics Data System (ADS)

    Lombardo, Joseph S.; Blodgett, Lisa A.; Rosen, Ron S.; Najmi, Amir-Homayoon; Thompson, W. Reid

    1998-05-01

    Technology has provided many new tools to assist in the diagnosis of pathologic conditions of the heart; echocardiography, ultrafast CT, and MRI are just a few. While these tools are a valuable resource, they are typically too expensive, large, and complex in operation for use in rural, home-care, and physician's office settings. Recent advances in computer performance, miniaturization, and acoustic signal processing have yielded new technologies that, when applied to heart sounds, can provide low-cost screening for pathologic conditions. The short duration and transient nature of these signals require processing techniques that provide high resolution in both time and frequency. Short-time Fourier transforms, Wigner distributions, and wavelet transforms have been applied to signals from hearts with various pathologic conditions. While no single technique provides the ideal solution, the combination of tools provides a good representation of the acoustic features of the selected pathologies.
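
    One of the named tools, the short-time Fourier transform, is easy to sketch on a synthetic stand-in for a recorded heart sound; the sample rate, window length and murmur frequency below are assumptions for illustration only.

    ```python
    import numpy as np
    from scipy.signal import stft

    fs = 4000                                  # sample rate in Hz (assumed)
    t = np.arange(0.0, 2.0, 1.0 / fs)
    # Synthetic stand-in for a heart sound with a brief higher-frequency murmur
    sound = np.sin(2 * np.pi * 60 * t) + 0.3 * np.random.randn(t.size)
    span = slice(int(0.5 * fs), int(0.7 * fs))
    sound[span] += 0.5 * np.sin(2 * np.pi * 300 * t[span])

    # 50 ms windows give a workable time-frequency trade-off for short transients
    f, frames, Z = stft(sound, fs=fs, nperseg=int(0.05 * fs))
    tf_map = np.abs(Z)   # time-frequency magnitude map for visual inspection
    ```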

  4. Techniques of EMG signal analysis: detection, processing, classification and applications

    PubMed Central

    Hussain, M.S.; Mohd-Yasin, F.

    2006-01-01

    Electromyography (EMG) signals can be used for clinical/biomedical applications, Evolvable Hardware Chip (EHW) development, and modern human-computer interaction. EMG signals acquired from muscles require advanced methods for detection, decomposition, processing, and classification. The purpose of this paper is to illustrate the various methodologies and algorithms for EMG signal analysis and to provide efficient and effective ways of understanding the signal and its nature. We further point out some of the hardware implementations using EMG, focusing on applications related to prosthetic hand control, grasp recognition, and human-computer interaction. A comparison study is also given to show the performance of various EMG signal analysis methods. This paper provides researchers with a good understanding of the EMG signal and its analysis procedures. This knowledge will help them develop more powerful, flexible, and efficient applications. PMID:16799694

  5. Refinement and application of acoustic impulse technique to study nozzle transmission characteristics

    NASA Technical Reports Server (NTRS)

    Salikuddin, M.; Brown, W. H.; Ramakrishnan, R.; Tanna, H. K.

    1983-01-01

    An improved acoustic impulse technique was developed and used to study the transmission characteristics of duct/nozzle systems. To accomplish this objective, various problems associated with the existing spark-discharge impulse technique were first studied. These included (1) the nonlinear behavior of high intensity pulses, (2) the contamination of the signal with flow noise, (3) low signal-to-noise ratio at high exhaust velocities, and (4) the inability to control or shape the signal generated by the source, especially when multiple spark points were used. The first step in resolving these problems was the replacement of the spark-discharge source with electroacoustic driver(s). Further improvements included (1) synthesizing an acoustic impulse with acoustic driver(s) to control and shape the output signal, (2) time domain signal averaging to remove flow noise from the contaminated signal, (3) signal editing to remove unwanted portions of the time history, (4) spectral averaging, and (5) numerical smoothing. The acoustic power measurement technique was improved by taking multiple in-duct measurements and by a modal decomposition process to account for the contribution of higher-order modes in the power computation. The improved acoustic impulse technique was then validated by comparing the results with those derived by an impedance tube method. The mechanism of acoustic power loss that occurs when sound is transmitted through nozzle terminations was investigated. Finally, the refined impulse technique was applied to obtain more accurate results for the acoustic transmission characteristics of a conical nozzle and a multi-lobe, multi-tube suppressor nozzle.

  6. Method of recording bioelectrical signals using a capacitive coupling

    NASA Astrophysics Data System (ADS)

    Simon, V. A.; Gerasimov, V. A.; Kostrin, D. K.; Selivanov, L. M.; Uhov, A. A.

    2017-11-01

    In this article, a technique for acquiring bioelectrical signals by means of capacitive sensors is described. A feedback loop for ultra-high-impedance biasing of the input instrumentation amplifier, which enables reception of the electrical cardiac signal (ECS) through a capacitive coupling, is proposed. The 50/60 Hz mains noise is suppressed by a narrow-band stop filter with independent tuning of the notch frequency and quality factor. The filter output is connected to a ΣΔ analog-to-digital converter (ADC), which acquires the filtered signal with 24-bit resolution. The signal processing board is connected through a universal serial bus interface to a personal computer, where the ECS is recorded and processed in digital form.
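
    A software analogue of the mains-rejection stage can be sketched with a standard IIR notch filter; the notch frequency and quality factor below mirror the independent tuning described above, but the values themselves are placeholders.

    ```python
    from scipy.signal import filtfilt, iirnotch

    def remove_mains(ecs, fs, notch_freq=50.0, q=30.0):
        """Suppress 50/60 Hz mains interference from the capacitively coupled
        cardiac signal with a narrow-band notch filter."""
        b, a = iirnotch(notch_freq, q, fs=fs)
        return filtfilt(b, a, ecs)
    ```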

  7. Emg Signal Analysis of Healthy and Neuropathic Individuals

    NASA Astrophysics Data System (ADS)

    Gupta, Ashutosh; Sayed, Tabassum; Garg, Ridhi; Shreyam, Richa

    2017-08-01

    Electromyography is a method of evaluating levels of muscle activity. When a muscle contracts, an action potential is generated and propagates along the muscle fibers. In electromyography, electrodes are attached to the skin, the electrical activity of the muscles is measured, and a graph is plotted. The surface EMG signals picked up during muscular activity are interfaced with a system. The EMG signals obtained from an individual suffering from neuropathy and from a healthy individual are processed and analyzed using signal processing techniques. This project includes the investigation and interpretation of EMG signals of healthy and neuropathic individuals using MATLAB. A prospective use of this study is in developing prosthetic devices for people with neuropathic disabilities.

  8. Non-parametric PCM to ADM conversion. [Pulse Code to Adaptive Delta Modulation

    NASA Technical Reports Server (NTRS)

    Locicero, J. L.; Schilling, D. L.

    1977-01-01

    An all-digital technique to convert pulse code modulated (PCM) signals into adaptive delta modulation (ADM) format is presented. The converter developed is shown to be independent of the statistical parameters of the encoded signal and can be constructed with only standard digital hardware. The structure of the converter is simple enough to be fabricated on a large scale integrated circuit where the advantages of reliability and cost can be optimized. A concise evaluation of this PCM to ADM translation technique is presented and several converters are simulated on a digital computer. A family of performance curves is given which displays the signal-to-noise ratio for sinusoidal test signals subjected to the conversion process, as a function of input signal power for several ratios of ADM rate to Nyquist rate.

  9. Digital signal processing the Tevatron BPM signals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cancelo, G.; James, E.; Wolbers, S.

    2005-05-01

    The Beam Position Monitor (TeV BPM) readout system at Fermilab's Tevatron has been updated and is currently being commissioned. The new BPMs use new analog and digital hardware to achieve better beam position measurement resolution. The new system reads signals from both ends of the existing directional stripline pickups to provide simultaneous proton and antiproton measurements. The signals provided by the two ends of the BPM pickups are processed by analog band-pass filters and sampled by 14-bit ADCs at 74.3 MHz. A crucial part of this work has been the design of digital filters that process the signal. This paper describes the digital processing and estimation techniques used to optimize the beam position measurement. The BPM electronics must operate in narrow-band and wide-band modes to enable measurements of closed-orbit and turn-by-turn positions. The filtering and timing conditions of the signals are tuned accordingly for the operational modes. The analysis and the optimized result for each mode are presented.

  10. Nonlinear Real-Time Optical Signal Processing

    DTIC Science & Technology

    1990-09-01

    This effort concerns nonlinear real-time optical signal processing for pattern recognition. Additional work concerns the relationship of parallel computation paradigms to optical computing, and halftone screen techniques for implementing general nonlinear functions, including degradation and compensation models for nonlinear optical processing with halftones.

  11. Noncausal telemetry data recovery techniques

    NASA Technical Reports Server (NTRS)

    Tsou, H.; Lee, R.; Mileant, A.; Hinedi, S.

    1995-01-01

    Cost efficiency is becoming a major driver in future space missions. Because of the constraints on total cost, including design, implementation, and operation, future spacecraft are limited in terms of their size, power, and complexity. Consequently, it is expected that future missions will operate on marginal space-to-ground communication links that, in turn, can pose an additional risk to the successful scientific data return of these missions. For low data-rate and low downlink-margin missions, buffering the telemetry signal for further signal processing to improve data return is a possible strategy; it has been adopted for the Galileo S-band mission. This article describes techniques used for postprocessing of buffered telemetry signal segments (called gaps) to recover data lost during acquisition and resynchronization. Two methods, one for a closed-loop and the other for an open-loop configuration, are discussed. Both can be used in either forward or backward processing of signal segments, depending on where a gap is situated in a pass.

  12. Rounding Technique for High-Speed Digital Signal Processing

    NASA Technical Reports Server (NTRS)

    Wechsler, E. R.

    1983-01-01

    An arithmetic technique facilitates high-speed rounding of 2's complement binary data. Conventional rounding of 2's complement numbers presents problems in high-speed digital circuits. The proposed technique consists of truncating K + 1 bits and then attaching a bit in the least significant position. The mean output error is zero, eliminating the need to introduce a voltage offset at the input.
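
    A bit-level sketch of the rule, as read from the abstract, is given below; Python integers use arithmetic shifts, which matches 2's complement behaviour, and the interpretation of "attaching a bit" as jamming a constant 1 into the new least significant position is an assumption of this sketch.

    ```python
    def round_2s_complement(x, k):
        """Truncate k + 1 low-order bits of a 2's complement value, then attach
        a 1 as the new least significant bit. The result is expressed in units
        of 2**k, i.e. the word has been shortened by k bits overall."""
        truncated = x >> (k + 1)       # drop k + 1 bits (sign-preserving shift)
        return (truncated << 1) | 1    # attach a bit in the LSB position

    # Example: shortening 16-bit samples by k = 4 bits
    samples = [1234, -1234, 32767, -32768]
    rounded = [round_2s_complement(s, 4) for s in samples]
    ```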

  13. Processing Techniques for Intelligibility Improvement to Speech with Co-Channel Interference.

    DTIC Science & Technology

    1983-09-01

    Intelligibility after processing was found to be always less than in the original unprocessed co-channel signal; also, as the length of the comb filter increased, the ...

  14. Polynomial-interpolation algorithm for van der Pauw Hall measurement in a metal hydride film

    NASA Astrophysics Data System (ADS)

    Koon, D. W.; Ares, J. R.; Leardini, F.; Fernández, J. F.; Ferrer, I. J.

    2008-10-01

    We apply a four-term polynomial-interpolation extension of the van der Pauw Hall measurement technique to a 330 nm Mg-Pd bilayer during both absorption and desorption of hydrogen at room temperature. We show that standard versions of the van der Pauw DC Hall measurement technique produce an error of over 100% due to a drifting offset signal and can lead to unphysical interpretations of the physical processes occurring in this film. The four-term technique effectively removes this source of error, even when the offset signal is drifting by an amount larger than the Hall signal in the time interval between successive measurements. This technique can be used to increase the resolution of transport studies of any material in which the resistivity is rapidly changing, particularly when the material is changing from metallic to insulating behavior.

  15. Principal and independent component analysis of concomitant functional near infrared spectroscopy and magnetic resonance imaging data

    NASA Astrophysics Data System (ADS)

    Schelkanova, Irina; Toronov, Vladislav

    2011-07-01

    Although near infrared spectroscopy (NIRS) is now widely used both in emerging clinical techniques and in cognitive neuroscience, the development of the apparatuses and signal processing methods for these applications is still a hot research topic. The main unresolved problem in functional NIRS is the separation of functional signals from contamination by systemic and local physiological fluctuations. This problem has been approached using various signal processing methods, including blind signal separation techniques. In particular, principal component analysis (PCA) and independent component analysis (ICA) have been applied to data acquired at the same wavelength and at multiple sites on human or animal heads during functional activation. These signal processing procedures resulted in a number of principal or independent components that could be attributed to functional activity, but their physiological meaning remained unknown. On the other hand, the best physiological specificity is provided by broadband NIRS. Also, a comparison with functional magnetic resonance imaging (fMRI) makes it possible to determine the spatial origin of fNIRS signals. In this study we applied PCA and ICA to broadband NIRS data to distill the components correlating with the breath-hold activation paradigm and compared them with the simultaneously acquired fMRI signals. Breath holding was used because it generates blood carbon dioxide (CO2), which increases the blood-oxygen-level-dependent (BOLD) signal as CO2 acts as a cerebral vasodilator. Vasodilation causes increased cerebral blood flow, which washes deoxyhaemoglobin out of the cerebral capillary bed, thus increasing both the cerebral blood volume and oxygenation. Although the original signals were quite diverse, we found only a few distinct components, which corresponded to fMRI signals at different locations in the brain and to different physiological chromophores.
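    As a generic illustration of how PCA and ICA are typically applied to multichannel optical recordings of this kind, the scikit-learn sketch below decomposes a synthetic mixture; the channel count, sampling rate, mixing, and component number are placeholder assumptions, not values from the study.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA, FastICA

    # Hypothetical multichannel recording: rows = time samples, columns = channels.
    rng = np.random.default_rng(0)
    n_samples, n_channels = 3000, 16
    t = np.arange(n_samples) / 10.0                       # assume 10 Hz sampling
    sources = np.stack([np.sin(2 * np.pi * 0.02 * t),     # slow "functional" component
                        np.sin(2 * np.pi * 1.0 * t),      # cardiac-like oscillation
                        rng.standard_normal(n_samples)])  # broadband noise
    data = (sources.T @ rng.standard_normal((3, n_channels))
            + 0.1 * rng.standard_normal((n_samples, n_channels)))

    # PCA: orthogonal components ordered by explained variance.
    pca = PCA(n_components=3)
    pc_timecourses = pca.fit_transform(data)
    print(pca.explained_variance_ratio_)

    # ICA: statistically independent components (ordering is arbitrary).
    ica = FastICA(n_components=3, random_state=0)
    ic_timecourses = ica.fit_transform(data)
    mixing_matrix = ica.mixing_                           # channel weights per component

    # Each component time course can then be correlated with the task timing
    # (e.g., a breath-hold block design) to flag functionally relevant components.
    ```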

  16. Parallel optimization of signal detection in active magnetospheric signal injection experiments

    NASA Astrophysics Data System (ADS)

    Gowanlock, Michael; Li, Justin D.; Rude, Cody M.; Pankratius, Victor

    2018-05-01

    Signal detection and extraction requires substantial manual parameter tuning at different stages in the processing pipeline. Time-series data depends on domain-specific signal properties, necessitating unique parameter selection for a given problem. The large potential search space makes this parameter selection process time-consuming and subject to variability. We introduce a technique to search and prune such parameter search spaces in parallel and select parameters for time series filters using breadth- and depth-first search strategies to increase the likelihood of detecting signals of interest in the field of magnetospheric physics. We focus on studying geomagnetic activity in the extremely and very low frequency ranges (ELF/VLF) using ELF/VLF transmissions from Siple Station, Antarctica, received at Québec, Canada. Our technique successfully detects amplified transmissions and achieves substantial speedup performance gains as compared to an exhaustive parameter search. We present examples where our algorithmic approach reduces the search from hundreds of seconds down to less than 1 s, with a ranked signal detection in the top 99th percentile, thus making it valuable for real-time monitoring. We also present empirical performance models quantifying the trade-off between the quality of signal recovered and the algorithm response time required for signal extraction. In the future, improved signal extraction in scenarios like the Siple experiment will enable better real-time diagnostics of conditions of the Earth's magnetosphere for monitoring space weather activity.
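    The published pipeline is not reproduced here, but the coarse-to-fine idea (a breadth-first sweep of a parameter grid followed by depth-first refinement around the best candidates, with the rest of the space pruned) can be sketched roughly as below; the scoring function, parameter names, and ranges are placeholders, and the sketch is serial whereas the paper evaluates candidates in parallel.

    ```python
    import itertools
    import numpy as np

    def detection_score(series, low_hz, high_hz, threshold):
        """Placeholder metric: a real pipeline would band-pass filter the series,
        apply the detection threshold, and score how well the transmission is
        recovered. Here it only stands in for that (expensive) evaluation."""
        seed = abs(hash((round(low_hz, 3), round(high_hz, 3), round(threshold, 3))))
        return np.random.default_rng(seed % (2**32)).random()

    def coarse_to_fine(series, grid, keep=5, refine=4):
        """Breadth-first sweep of a coarse parameter grid, then depth-first local
        refinement around the best few candidates; everything else is pruned."""
        names = list(grid)
        coarse = list(itertools.product(*grid.values()))
        coarse.sort(key=lambda p: detection_score(series, *p), reverse=True)
        best = []
        for params in coarse[:keep]:
            axes = [np.linspace(0.9 * v, 1.1 * v, refine) for v in params]
            for cand in itertools.product(*axes):
                best.append((detection_score(series, *cand), dict(zip(names, cand))))
        return max(best, key=lambda item: item[0])

    series = np.zeros(1024)                               # stand-in ELF/VLF time series
    grid = {"low_hz": [1.0, 2.0, 4.0], "high_hz": [5.0, 10.0, 20.0],
            "threshold": [0.5, 1.0, 2.0]}
    print(coarse_to_fine(series, grid))
    ```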

  17. Neurofeedback Training for BCI Control

    NASA Astrophysics Data System (ADS)

    Neuper, Christa; Pfurtscheller, Gert

    Brain-computer interface (BCI) systems detect changes in brain signals that reflect human intention, then translate these signals to control monitors or external devices (for a comprehensive review, see [1]). BCIs typically measure electrical signals resulting from neural firing (i.e., neuronal action potentials, the electrocorticogram (ECoG), or the electroencephalogram (EEG)). Sophisticated pattern recognition and classification algorithms convert neural activity into the required control signals. BCI research has focused heavily on developing powerful signal processing and machine learning techniques to accurately classify neural activity [2-4].

  18. Experimental photonic generation of chirped pulses using nonlinear dispersion-based incoherent processing.

    PubMed

    Rius, Manuel; Bolea, Mario; Mora, José; Ortega, Beatriz; Capmany, José

    2015-05-18

    We experimentally demonstrate, for the first time, a chirped microwave pulse generator based on the processing of an incoherent optical signal by means of a nonlinear dispersive element. Different capabilities have been demonstrated, such as control of the time-bandwidth product and frequency tuning, which increase the flexibility of the generated waveform compared to coherent techniques. Moreover, the use of differential detection considerably mitigates the signal-to-noise ratio limitation associated with incoherent processing.

  19. Digital signal processing and control and estimation theory -- Points of tangency, area of intersection, and parallel directions

    NASA Technical Reports Server (NTRS)

    Willsky, A. S.

    1976-01-01

    A number of current research directions in the fields of digital signal processing and modern control and estimation theory were studied. Topics such as stability theory, linear prediction and parameter identification, system analysis and implementation, two-dimensional filtering, decentralized control and estimation, image processing, and nonlinear system theory were examined in order to uncover some of the basic similarities and differences in the goals, techniques, and philosophy of the two disciplines. An extensive bibliography is included.

  20. Detection of delamination defects in CFRP materials using ultrasonic signal processing.

    PubMed

    Benammar, Abdessalem; Drai, Redouane; Guessoum, Abderrezak

    2008-12-01

    In this paper, signal processing techniques are tested for their ability to resolve echoes associated with delaminations in carbon fiber-reinforced polymer (CFRP) multi-layered composite materials detected by ultrasonic methods. These methods include split spectrum processing (SSP) and the expectation-maximization (EM) algorithm. A simulation study on defect detection was performed, and the results were validated experimentally on CFRP specimens, taken from aircraft, with and without delamination defects. A comparison of the methods' ability to resolve echoes is made.

  1. Parallel Implementation of the Wideband DOA Algorithm on the IBM Cell BE Processor

    DTIC Science & Technology

    2010-05-01

    The Multiple Signal Classification (MUSIC) algorithm is a powerful technique for determining the Direction of Arrival (DOA) of signals... Broadband Engine Processor (Cell BE). The process of adapting the serial-based MUSIC algorithm to the Cell BE will be analyzed in terms of parallelism and... using the Multiple Signal Classification (MUSIC) algorithm [4]: computation of the focus matrix, computation of the number of sources, separation of the signal...
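    Since the record above survives only as fragments, a generic narrowband MUSIC sketch may help recall what the algorithm computes; it is unrelated to the Cell BE port itself, and the array geometry, source angles, and noise level below are assumed for illustration only.

    ```python
    import numpy as np

    def music_spectrum(snapshots, n_sources, scan_deg):
        """Narrowband MUSIC pseudospectrum for a uniform linear array with
        half-wavelength spacing; snapshots is (n_elements, n_snapshots) complex data."""
        n_elements = snapshots.shape[0]
        R = snapshots @ snapshots.conj().T / snapshots.shape[1]   # sample covariance
        _, eigvecs = np.linalg.eigh(R)                            # ascending eigenvalues
        noise_sub = eigvecs[:, : n_elements - n_sources]          # noise subspace
        spectrum = []
        for theta in np.deg2rad(scan_deg):
            a = np.exp(1j * np.pi * np.arange(n_elements) * np.sin(theta))
            proj = noise_sub.conj().T @ a
            spectrum.append(1.0 / np.real(np.vdot(proj, proj)))   # peaks at source DOAs
        return np.array(spectrum)

    # Two plane waves at -20 and +30 degrees on an 8-element array, plus noise.
    rng = np.random.default_rng(1)
    m, n_snap = 8, 200
    angles = np.deg2rad([-20.0, 30.0])
    A = np.exp(1j * np.pi * np.outer(np.arange(m), np.sin(angles)))
    S = rng.standard_normal((2, n_snap)) + 1j * rng.standard_normal((2, n_snap))
    X = A @ S + 0.1 * (rng.standard_normal((m, n_snap))
                       + 1j * rng.standard_normal((m, n_snap)))
    scan = np.linspace(-90, 90, 361)
    P = music_spectrum(X, n_sources=2, scan_deg=scan)
    print(scan[np.argsort(P)[-2:]])   # crude read-out of the two largest values
    ```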

  2. Processing of Antenna-Array Signals on the Basis of the Interference Model Including a Rank-Deficient Correlation Matrix

    NASA Astrophysics Data System (ADS)

    Rodionov, A. A.; Turchin, V. I.

    2017-06-01

    We propose a new method of signal processing in antenna arrays, called Maximum-Likelihood Signal Classification. The proposed method is based on a model in which the interference includes a component with a rank-deficient correlation matrix. Using numerical simulation, we show that the proposed method yields a variance of the estimated arrival angle of the plane wave that is close to the Cramér-Rao lower bound and is more efficient than the well-known MUSIC method. It is also shown that the proposed technique can be used efficiently for estimating the time dependence of the useful signal.

  3. Seismic data fusion anomaly detection

    NASA Astrophysics Data System (ADS)

    Harrity, Kyle; Blasch, Erik; Alford, Mark; Ezekiel, Soundararajan; Ferris, David

    2014-06-01

    Detecting anomalies in non-stationary signals has valuable applications in many fields, including medicine and meteorology, such as identifying possible heart conditions from electrocardiography (ECG) signals or predicting earthquakes via seismographic data. Given the many available anomaly detection algorithms, it is important to compare candidate methods. In this paper, we examine and compare two approaches to anomaly detection and see how data fusion methods may improve performance. The first approach uses an artificial neural network (ANN) to detect anomalies in a wavelet de-noised signal. The other method uses a perspective neural network (PNN) to analyze an arbitrary number of "perspectives", or transformations, of the observed signal for anomalies. Possible perspectives include wavelet de-noising, the Fourier transform, peak filtering, etc. In order to evaluate these techniques via signal fusion metrics, we apply signal preprocessing techniques such as de-noising to the original signal and then use a neural network to find anomalies in the resulting signal. From this secondary result it is possible to use data fusion techniques that can be evaluated via existing data fusion metrics for single and multiple perspectives. The results show which anomaly detection method, according to the metrics, is better suited overall for anomaly detection applications. The method used in this study could be applied to compare other signal processing algorithms.

  4. Solar loading thermography: Time-lapsed thermographic survey and advanced thermographic signal processing for the inspection of civil engineering and cultural heritage structures

    NASA Astrophysics Data System (ADS)

    Ibarra-Castanedo, Clemente; Sfarra, Stefano; Klein, Matthieu; Maldague, Xavier

    2017-05-01

    Experimental results from infrared thermography surveys of the externally exposed walls of two buildings are presented. Data acquisition was performed in a static configuration by recording direct and indirect solar loading over several days, and the data were processed using advanced signal processing techniques in order to increase the signal-to-noise ratio and the signature contrast of the elements of interest. It is demonstrated that it is possible to detect the thermal signature of large internal structures as well as surface features under such thermographic scenarios. Results from a long-wave microbolometer compared favorably to those from a mid-wave cooled infrared camera for the detection of large subsurface features in unprocessed images. In both cases, however, advanced signal processing greatly improved the contrast of the internal features.

  5. Improving the signal analysis for in vivo photoacoustic flow cytometry

    NASA Astrophysics Data System (ADS)

    Niu, Zhenyu; Yang, Ping; Wei, Dan; Tang, Shuo; Wei, Xunbin

    2015-03-01

    At the early stage of cancer, a small number of circulating tumor cells (CTCs) appear in the blood circulation. Thus, early detection of malignant circulating tumor cells has great significance for timely treatment to reduce the cancer death rate. We have developed an in vivo photoacoustic flow cytometry (PAFC) system to monitor the metastatic process of CTCs and record the signals from target cells. Information about target cells that is helpful for early therapy can be obtained by analyzing and processing these signals. The raw signal detected from target cells often contains noise caused by electronic devices, such as background noise and thermal noise. We chose the wavelet denoising method to effectively distinguish the target signal from background noise. Processing in the time domain and the frequency domain is combined to analyze the signal after denoising. The algorithm contains a time-domain filter and a frequency transformation. The frequency spectrum of the signal contains distinctive features that can be used to analyze the properties of target cells or particles. The PAFC technique can detect signals from circulating tumor cells or other particles, and the processing methods have great potential for analyzing signals accurately and rapidly.
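    A generic wavelet-denoising recipe of the kind described (decompose, threshold the detail coefficients, reconstruct, then inspect the spectrum) is sketched below with PyWavelets; the wavelet, decomposition level, and threshold rule are common defaults, not necessarily the authors' settings, and the test trace is synthetic.

    ```python
    import numpy as np
    import pywt

    def wavelet_denoise(signal, wavelet="db4", level=4):
        """Soft-threshold wavelet denoising: estimate the noise level from the finest
        detail band, shrink all detail coefficients, and reconstruct."""
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # robust noise estimate
        thresh = sigma * np.sqrt(2 * np.log(len(signal)))       # universal threshold
        coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
        return pywt.waverec(coeffs, wavelet)[: len(signal)]

    # Synthetic photoacoustic-like trace: a short transient buried in noise.
    t = np.linspace(0, 1, 2048)
    clean = np.exp(-((t - 0.5) ** 2) / 1e-4) * np.sin(2 * np.pi * 200 * t)
    noisy = clean + 0.3 * np.random.default_rng(0).standard_normal(t.size)
    denoised = wavelet_denoise(noisy)
    # A frequency-domain view of the denoised transient can then be inspected:
    spectrum = np.abs(np.fft.rfft(denoised))
    ```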

  6. Scintillating glasses for total absorption dual readout calorimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bonvicini, V.; Driutti, A.; Cauz, D.

    2012-01-01

    Scintillating glasses are a potentially cheaper alternative to crystal-based calorimetry, with common problems related to light collection, detection and processing. As such, their use and development are part of more extensive R&D aimed at investigating the potential of total absorption, combined with the dual readout (DR) technique, for hadron calorimetry. A recent series of measurements, using cosmic rays and particle beams from the Fermilab test beam facility and scintillating glass with the characteristics required for application of the DR technique, serve to illustrate the problems addressed and the progress achieved by this R&D. Alternative solutions for light collection (conventional and silicon photomultipliers) and signal processing are compared, the separate contributions of scintillation and Cherenkov processes to the signal are evaluated, and results are compared to simulation.

  7. Model-based tomographic reconstruction

    DOEpatents

    Chambers, David H; Lehman, Sean K; Goodman, Dennis M

    2012-06-26

    A model-based approach to estimating wall positions for a building is developed and tested using simulated data. It borrows two techniques from geophysical inversion problems, layer stripping and stacking, and combines them with a model-based estimation algorithm that minimizes the mean-square error between the predicted signal and the data. The technique is designed to process multiple looks from an ultra wideband radar array. The processed signal is time-gated and each section processed to detect the presence of a wall and estimate its position, thickness, and material parameters. The floor plan of a building is determined by moving the array around the outside of the building. In this paper we describe how the stacking and layer stripping algorithms are combined and show the results from a simple numerical example of three parallel walls.

  8. Analysis of a crossed Bragg-cell acousto optical spectrometer for SETI

    NASA Technical Reports Server (NTRS)

    Gulkis, S.

    1986-01-01

    The search for radio signals from extraterrestrial intelligent (SETI) beings requires the use of large instantaneous bandwidth (500 MHz) and high resolution (20 Hz) spectrometers. Digital systems with a high degree of modularity can be used to provide this capability, and this method has been widely discussed. Another technique for meeting the SETI requirement is to use a crossed Bragg-cell spectrometer as described by Psaltis and Casasent (1979). This technique makes use of the Folded Spectrum concept, introduced by Thomas (1966). The Folded Spectrum is a two-dimensional Fourier Transform of a raster scanned one-dimensional signal. It is directly related to the long one-dimensional spectrum of the original signal and is ideally suited for optical signal processing.

  9. Analysis of a crossed Bragg-cell acousto optical spectrometer for SETI

    NASA Astrophysics Data System (ADS)

    Gulkis, S.

    1986-10-01

    The search for radio signals from extraterrestrial intelligent (SETI) beings requires the use of large instantaneous bandwidth (500 MHz) and high resolution (20 Hz) spectrometers. Digital systems with a high degree of modularity can be used to provide this capability, and this method has been widely discussed. Another technique for meeting the SETI requirement is to use a crossed Bragg-cell spectrometer as described by Psaltis and Casasent (1979). This technique makes use of the Folded Spectrum concept, introduced by Thomas (1966). The Folded Spectrum is a two-dimensional Fourier Transform of a raster scanned one-dimensional signal. It is directly related to the long one-dimensional spectrum of the original signal and is ideally suited for optical signal processing.

  10. Effect of Energy Equalization on the Intelligibility of Speech in Fluctuating Background Interference for Listeners With Hearing Impairment

    PubMed Central

    D’Aquila, Laura A.; Desloge, Joseph G.; Braida, Louis D.

    2017-01-01

    The masking release (MR; i.e., better speech recognition in fluctuating compared with continuous noise backgrounds) that is evident for listeners with normal hearing (NH) is generally reduced or absent for listeners with sensorineural hearing impairment (HI). In this study, a real-time signal-processing technique was developed to improve MR in listeners with HI and offer insight into the mechanisms influencing the size of MR. This technique compares short-term and long-term estimates of energy, increases the level of short-term segments whose energy is below the average energy, and normalizes the overall energy of the processed signal to be equivalent to that of the original long-term estimate. This signal-processing algorithm was used to create two types of energy-equalized (EEQ) signals: EEQ1, which operated on the wideband speech plus noise signal, and EEQ4, which operated independently on each of four bands with equal logarithmic width. Consonant identification was tested in backgrounds of continuous and various types of fluctuating speech-shaped Gaussian noise including those with both regularly and irregularly spaced temporal fluctuations. Listeners with HI achieved similar scores for EEQ and the original (unprocessed) stimuli in continuous-noise backgrounds, while superior performance was obtained for the EEQ signals in fluctuating background noises that had regular temporal gaps but not for those with irregularly spaced fluctuations. Thus, in noise backgrounds with regularly spaced temporal fluctuations, the energy-normalized signals led to larger values of MR and higher intelligibility than obtained with unprocessed signals. PMID:28602128
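    A rough sketch of the wideband (EEQ1-style) processing described above is given below; the frame length and gain rule are illustrative assumptions rather than the published parameters, and the EEQ4 variant would apply the same rule independently in four logarithmically spaced bands before recombining them.

    ```python
    import numpy as np

    def energy_equalize(x, fs, frame_ms=10.0):
        """Wideband energy equalization sketch: frames whose short-term energy falls
        below the long-term average are boosted toward that average, then the whole
        signal is rescaled so its overall energy matches the input."""
        frame = int(fs * frame_ms / 1000)
        n_frames = len(x) // frame
        y = np.asarray(x[: n_frames * frame], dtype=float).reshape(n_frames, frame).copy()
        long_term = np.mean(y ** 2)                        # long-term energy estimate
        short_term = np.mean(y ** 2, axis=1)               # per-frame (short-term) energy
        gain = np.sqrt(long_term / np.maximum(short_term, 1e-12))
        gain = np.maximum(gain, 1.0)                       # only boost quiet frames
        y *= gain[:, None]
        y *= np.sqrt(long_term / np.mean(y ** 2))          # normalize overall energy
        return y.reshape(-1)

    fs = 16000
    x = np.random.default_rng(0).standard_normal(fs)       # stand-in for speech + noise
    equalized = energy_equalize(x, fs)
    ```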

  11. Subranging technique using superconducting technology

    DOEpatents

    Gupta, Deepnarayan

    2003-01-01

    Subranging techniques using "digital SQUIDs" are used to design systems with large dynamic range, high resolution and large bandwidth. Analog-to-digital converters (ADCs) embodying the invention include a first SQUID based "coarse" resolution circuit and a second SQUID based "fine" resolution circuit to convert an analog input signal into "coarse" and "fine" digital signals for subsequent processing. In one embodiment, an ADC includes circuitry for supplying an analog input signal to an input coil having at least a first inductive section and a second inductive section. A first superconducting quantum interference device (SQUID) is coupled to the first inductive section and a second SQUID is coupled to the second inductive section. The first SQUID is designed to produce "coarse" (large amplitude, low resolution) output signals and the second SQUID is designed to produce "fine" (low amplitude, high resolution) output signals in response to the analog input signals.

  12. Optical-wireless-optical full link for polarization multiplexing quadrature amplitude/phase modulation signal transmission.

    PubMed

    Li, Xinying; Yu, Jianjun; Chi, Nan; Zhang, Junwen

    2013-11-15

    We propose and experimentally demonstrate an optical wireless integration system at the Q-band, in which up to 40 Gb/s polarization multiplexing multilevel quadrature amplitude/phase modulation (PM-QAM) signal can be first transmitted over 20 km single-mode fiber-28 (SMF-28), then delivered over a 2 m 2 × 2 multiple-input multiple-output wireless link, and finally transmitted over another 20 km SMF-28. The PM-QAM modulated wireless millimeter-wave (mm-wave) signal at 40 GHz is generated based on the remote heterodyning technique, and demodulated by the radio-frequency transparent photonic technique based on homodyne coherent detection and baseband digital signal processing. The classic constant modulus algorithm equalization is used at the receiver to realize polarization demultiplexing of the PM-QAM signal. For the first time, to the best of our knowledge, we realize the conversion of the PM-QAM modulated wireless mm-wave signal to the optical signal as well as 20 km fiber transmission of the converted optical signal.

  13. Developments and advances concerning the hyperpolarisation technique SABRE.

    PubMed

    Mewis, Ryan E

    2015-10-01

    To overcome the inherent sensitivity issue in NMR and MRI, hyperpolarisation techniques are used. Signal Amplification By Reversible Exchange (SABRE) is a hyperpolarisation technique that utilises parahydrogen, a molecule that possesses a nuclear singlet state, as the source of polarisation. A metal complex is required to break the singlet order of parahydrogen and, by doing so, facilitates polarisation transfer to analyte molecules ligated to the same complex through the J-coupled network that exists. The increased signal intensities that the analyte molecules possess as a result of this process have led to investigations whereby their potential as MRI contrast agents has been probed and to understand the fundamental processes underpinning the polarisation transfer mechanism. As well as discussing literature relevant to both of these areas, the chemical structure of the complex, the physical constraints of the polarisation transfer process and the successes of implementing SABRE at low and high magnetic fields are discussed. Copyright © 2015 John Wiley & Sons, Ltd.

  14. A class of temporal boundaries derived by quantifying the sense of separation.

    PubMed

    Paine, Llewyn Elise; Gilden, David L

    2013-12-01

    The perception of moment-to-moment environmental flux as being composed of meaningful events requires that memory processes coordinate with cues that signify beginnings and endings. We have constructed a technique that allows this coordination to be monitored indirectly. This technique works by embedding a sequential priming task into the event under study. Memory and perception must be coordinated to resolve temporal flux into scenes. The implicit memory processes inherent in sequential priming are able to effectively shadow then mirror scene-forming processes. Certain temporal boundaries are found to weaken the strength of irrelevant feature priming, a signal which can then be used in more ambiguous cases to infer how people segment time. Over the course of 13 independent studies, we were able to calibrate the technique and then use it to measure the strength of event segmentation in several instructive contexts that involved both visual and auditory modalities. The signal generated by sequential priming may permit the sense of separation between events to be measured as an extensive psychophysical quantity.

  15. Dictionary-based image reconstruction for superresolution in integrated circuit imaging.

    PubMed

    Cilingiroglu, T Berkin; Uyar, Aydan; Tuysuzoglu, Ahmet; Karl, W Clem; Konrad, Janusz; Goldberg, Bennett B; Ünlü, M Selim

    2015-06-01

    Resolution improvement through signal processing techniques for integrated circuit imaging is becoming more crucial as the rapid decrease in integrated circuit dimensions continues. Although there is a significant effort to push the limits of optical resolution for backside fault analysis through the use of solid immersion lenses, higher order laser beams, and beam apodization, signal processing techniques are required for additional improvement. In this work, we propose a sparse image reconstruction framework which couples overcomplete dictionary-based representation with a physics-based forward model to improve resolution and localization accuracy in high numerical aperture confocal microscopy systems for backside optical integrated circuit analysis. The effectiveness of the framework is demonstrated on experimental data.

  16. Signal processing techniques for the U.S. Army Research Laboratory stepped frequency ultra-wideband radar

    NASA Astrophysics Data System (ADS)

    Nguyen, Lam

    2017-05-01

    The U.S. Army Research Laboratory (ARL) recently designed and tested a new prototype radar, the Spectrally Agile Frequency-Incrementing Reconfigurable (SAFIRE) radar system, based on a stepped-frequency architecture to address issues associated with our previous impulse-based radars. This is a low-frequency ultra-wideband (UWB) radar with frequencies spanning from 300 to 2000 MHz. Mounted on a vehicle, the radar can be configured in either side-looking or forward-looking synthetic aperture radar (SAR) mode. We recently conducted our first experiment at Yuma Proving Ground (YPG). This paper summarizes the radar configurations, parameters, and SAR geometry. The radar data and associated noise sources, including self-interference signals and radio-frequency interference (RFI), are presented and characterized in both the raw (pre-focus) and SAR imagery domains. This paper also describes our signal processing techniques for extracting noise from radar data, as well as the SAR imaging algorithms for forming SAR imagery in both forward- and side-looking modes. Finally, this paper demonstrates our spectral recovery technique and results for a radar operating in a spectrally restricted environment.

  17. siGnum: graphical user interface for EMG signal analysis.

    PubMed

    Kaur, Manvinder; Mathur, Shilpi; Bhatia, Dinesh; Verma, Suresh

    2015-01-01

    Electromyography (EMG) signals, which represent the electrical activity of muscles, can be used for various clinical and biomedical applications. These are complicated and highly varying signals that depend on the anatomical location and physiological properties of the muscles. EMG signals acquired from the muscles require advanced methods for detection, decomposition and processing. This paper proposes siGnum, a novel graphical user interface (GUI) developed in MATLAB that applies efficient and effective techniques to process raw EMG signals and decompose them in a simpler manner. It can be used independently of the MATLAB software by employing a deploy tool. This enables researchers to gain a good understanding of the EMG signal and its analysis procedures, which can be utilized for more powerful, flexible and efficient applications in the near future.

  18. Cognitive measure on different profiles.

    PubMed

    Spindola, Marilda; Carra, Giovani; Balbinot, Alexandre; Zaro, Milton A

    2010-01-01

    Based on neurology and cognitive science, many studies have been developed to understand the human mental model and how human cognition works, especially learning processes that involve complex content and spatial-logical reasoning. The event-related potential (ERP) is a basic and non-invasive method of electrophysiological investigation. It can be used to assess aspects of human cognitive processing, since changes in the rhythm of the brain's frequency bands indicate some type of processing or neuronal behavior. This paper focuses on the ERP technique to help understand the cognitive pathway in subjects from different areas of knowledge when they are exposed to an external visual stimulus. In the experiment we used 2D and 3D visual stimuli in the same picture. The signals were captured using a ten-channel electroencephalogram (EEG) system developed for this project and interfaced to an analog-to-digital converter (ADC) board with the LabVIEW system (National Instruments). The research was performed using the design of experiments (DOE) technique. Signal processing (mathematical and statistical techniques) was then carried out, showing the relationship between cognitive pathways within and between groups.

  19. AnyWave: a cross-platform and modular software for visualizing and processing electrophysiological signals.

    PubMed

    Colombet, B; Woodman, M; Badier, J M; Bénar, C G

    2015-03-15

    The importance of digital signal processing in clinical neurophysiology is growing steadily, involving clinical researchers and methodologists. There is a need for crossing the gap between these communities by providing efficient delivery of newly designed algorithms to end users. We have developed such a tool which both visualizes and processes data and, additionally, acts as a software development platform. AnyWave was designed to run on all common operating systems. It provides access to a variety of data formats and it employs high fidelity visualization techniques. It also allows using external tools as plug-ins, which can be developed in languages including C++, MATLAB and Python. In the current version, plug-ins allow computation of connectivity graphs (non-linear correlation h2) and time-frequency representation (Morlet wavelets). The software is freely available under the LGPL3 license. AnyWave is designed as an open, highly extensible solution, with an architecture that permits rapid delivery of new techniques to end users. We have developed AnyWave software as an efficient neurophysiological data visualizer able to integrate state of the art techniques. AnyWave offers an interface well suited to the needs of clinical research and an architecture designed for integrating new tools. We expect this software to strengthen the collaboration between clinical neurophysiologists and researchers in biomedical engineering and signal processing. Copyright © 2015 Elsevier B.V. All rights reserved.

  20. Topics in the Detection of Gravitational Waves from Compact Binary Inspirals

    NASA Astrophysics Data System (ADS)

    Kapadia, Shasvath Jagat

    Orbiting compact binaries - such as binary black holes, binary neutron stars and neutron star-black hole binaries - are among the most promising sources of gravitational waves observable by ground-based interferometric detectors. Despite numerous sophisticated engineering techniques, the gravitational wave signals will be buried deep within noise generated by various instrumental and environmental processes, and need to be extracted via a signal processing technique referred to as matched filtering. Matched filtering requires large banks of signal templates that are faithful representations of the true gravitational waveforms produced by astrophysical binaries. The accurate and efficient production of templates is thus crucial to the success of signal processing and data analysis. To that end, the dissertation presents a numerical technique that calibrates existing analytical (post-Newtonian) waveforms, which are relatively inexpensive, to more accurate fiducial waveforms that are computationally expensive to generate. The resulting waveform family is significantly more accurate than the analytical waveforms, without incurring additional computational costs of production. Certain kinds of transient background noise artefacts, called "glitches", can masquerade as gravitational wave signals for short durations and throw off the matched-filter algorithm. Distinguishing glitches from true gravitational wave signals is a highly non-trivial exercise in data analysis which has been attempted with varying degrees of success. We present here a machine-learning based approach that exploits the various attributes of glitches and signals within detector data to provide a classification scheme that is a significant improvement over previous methods. The dissertation concludes by investigating the possibility of detecting a non-linear DC imprint, called the Christodoulou memory, produced in the arms of ground-based interferometers by the recently detected gravitational waves. The memory, which is even smaller in amplitude than the primary (detected) gravitational waves, will almost certainly not be seen in the current detection event. Nevertheless, future space-based detectors will likely be sensitive enough to observe the memory.

  1. Development of Advanced Signal Processing and Source Imaging Methods for Superparamagnetic Relaxometry

    PubMed Central

    Huang, Ming-Xiong; Anderson, Bill; Huang, Charles W.; Kunde, Gerd J.; Vreeland, Erika C.; Huang, Jeffrey W.; Matlashov, Andrei N.; Karaulanov, Todor; Nettles, Christopher P.; Gomez, Andrew; Minser, Kayla; Weldon, Caroline; Paciotti, Giulio; Harsh, Michael; Lee, Roland R.; Flynn, Edward R.

    2017-01-01

    Superparamagnetic Relaxometry (SPMR) is a highly sensitive technique for the in vivo detection of tumor cells and may improve early stage detection of cancers. SPMR employs superparamagnetic iron oxide nanoparticles (SPION). After a brief magnetizing pulse is used to align the SPION, SPMR measures the time decay of the SPION using Superconducting Quantum Interference Device (SQUID) sensors. Substantial research has been carried out in developing the SQUID hardware and in improving the properties of the SPION. However, little research has been done on the pre-processing of sensor signals and post-processing source modeling in SPMR. In the present study, we illustrate new pre-processing tools that were developed to: 1) remove trials contaminated with artifacts, 2) evaluate and ensure that a single decay process associated with bound SPION exists in the data, 3) automatically detect and correct flux jumps, and 4) accurately fit the sensor signals with different decay models. Furthermore, we developed an automated approach based on a multi-start dipole imaging technique to obtain the locations and magnitudes of multiple magnetic sources, without initial guesses from the users. A regularization process was implemented to solve the ambiguity issue related to the SPMR source variables. A procedure based on a reduced chi-square cost function was introduced to objectively obtain the adequate number of dipoles that describe the data. The new pre-processing tools and multi-start source imaging approach have been successfully evaluated using phantom data. In conclusion, these tools and the multi-start source modeling approach substantially enhance the accuracy and sensitivity in detecting and localizing sources from SPMR signals. Furthermore, the multi-start approach with regularization provided robust and accurate solutions under a poor SNR condition corresponding to the SPMR detection sensitivity, on the order of 1000 cells. We believe such algorithms will help establish industrial standards for SPMR when applying the technique in pre-clinical and clinical settings. PMID:28072579
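    As an illustration of the decay-model fitting and reduced chi-square comparison mentioned above, the sketch below fits two candidate decay models to a synthetic trace with SciPy; the model forms, noise level, and time base are placeholders rather than the SPMR-specific models used in the paper.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def exp_decay(t, a, tau, c):
        return a * np.exp(-t / tau) + c

    def stretched_exp(t, a, tau, beta, c):
        return a * np.exp(-(t / tau) ** beta) + c

    # Synthetic decay trace standing in for one sensor channel after the pulse.
    rng = np.random.default_rng(0)
    t = np.linspace(0.05, 2.0, 400)                       # seconds (placeholder)
    data = exp_decay(t, 1.0, 0.4, 0.02) + 0.01 * rng.standard_normal(t.size)

    models = {"exponential": (exp_decay, [1.0, 0.5, 0.0]),
              "stretched": (stretched_exp, [1.0, 0.5, 1.0, 0.0])}
    for name, (model, p0) in models.items():
        popt, _ = curve_fit(model, t, data, p0=p0, maxfev=10000)
        resid = data - model(t, *popt)
        red_chi2 = np.sum((resid / 0.01) ** 2) / (t.size - len(popt))
        print(name, popt, red_chi2)       # prefer the model whose reduced chi2 is ~1
    ```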

  2. Processing Functional Near Infrared Spectroscopy Signal with a Kalman Filter to Assess Working Memory during Simulated Flight.

    PubMed

    Durantin, Gautier; Scannella, Sébastien; Gateau, Thibault; Delorme, Arnaud; Dehais, Frédéric

    2015-01-01

    Working memory (WM) is a key executive function for operating aircraft, especially when pilots have to recall series of air traffic control instructions. There is a need to implement tools to monitor WM as its limitation may jeopardize flight safety. An innovative way to address this issue is to adopt a Neuroergonomics approach that merges knowledge and methods from Human Factors, System Engineering, and Neuroscience. A challenge of great importance for Neuroergonomics is to implement efficient brain imaging techniques to measure the brain at work and to design Brain Computer Interfaces (BCI). We used functional near infrared spectroscopy as it has been already successfully tested to measure WM capacity in complex environment with air traffic controllers (ATC), pilots, or unmanned vehicle operators. However, the extraction of relevant features from the raw signal in ecological environment is still a critical issue due to the complexity of implementing real-time signal processing techniques without a priori knowledge. We proposed to implement the Kalman filtering approach, a signal processing technique that is efficient when the dynamics of the signal can be modeled. We based our approach on the Boynton model of hemodynamic response. We conducted a first experiment with nine participants involving a basic WM task to estimate the noise covariances of the Kalman filter. We then conducted a more ecological experiment in our flight simulator with 18 pilots who interacted with ATC instructions (two levels of difficulty). The data was processed with the same Kalman filter settings implemented in the first experiment. This filter was benchmarked with a classical pass-band IIR filter and a Moving Average Convergence Divergence (MACD) filter. Statistical analysis revealed that the Kalman filter was the most efficient to separate the two levels of load, by increasing the observed effect size in prefrontal areas involved in WM. In addition, the use of a Kalman filter increased the performance of the classification of WM levels based on brain signal. The results suggest that Kalman filter is a suitable approach for real-time improvement of near infrared spectroscopy signal in ecological situations and the development of BCI.

  3. Processing Functional Near Infrared Spectroscopy Signal with a Kalman Filter to Assess Working Memory during Simulated Flight

    PubMed Central

    Durantin, Gautier; Scannella, Sébastien; Gateau, Thibault; Delorme, Arnaud; Dehais, Frédéric

    2016-01-01

    Working memory (WM) is a key executive function for operating aircraft, especially when pilots have to recall series of air traffic control instructions. There is a need to implement tools to monitor WM as its limitation may jeopardize flight safety. An innovative way to address this issue is to adopt a Neuroergonomics approach that merges knowledge and methods from Human Factors, System Engineering, and Neuroscience. A challenge of great importance for Neuroergonomics is to implement efficient brain imaging techniques to measure the brain at work and to design Brain Computer Interfaces (BCI). We used functional near infrared spectroscopy as it has been already successfully tested to measure WM capacity in complex environment with air traffic controllers (ATC), pilots, or unmanned vehicle operators. However, the extraction of relevant features from the raw signal in ecological environment is still a critical issue due to the complexity of implementing real-time signal processing techniques without a priori knowledge. We proposed to implement the Kalman filtering approach, a signal processing technique that is efficient when the dynamics of the signal can be modeled. We based our approach on the Boynton model of hemodynamic response. We conducted a first experiment with nine participants involving a basic WM task to estimate the noise covariances of the Kalman filter. We then conducted a more ecological experiment in our flight simulator with 18 pilots who interacted with ATC instructions (two levels of difficulty). The data was processed with the same Kalman filter settings implemented in the first experiment. This filter was benchmarked with a classical pass-band IIR filter and a Moving Average Convergence Divergence (MACD) filter. Statistical analysis revealed that the Kalman filter was the most efficient to separate the two levels of load, by increasing the observed effect size in prefrontal areas involved in WM. In addition, the use of a Kalman filter increased the performance of the classification of WM levels based on brain signal. The results suggest that Kalman filter is a suitable approach for real-time improvement of near infrared spectroscopy signal in ecological situations and the development of BCI. PMID:26834607
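    To make the filtering idea concrete, a minimal scalar Kalman filter with a random-walk state model is sketched below; the study's own filter is built on the Boynton hemodynamic response model, which is not reproduced here, and the sampling rate and noise variances are assumed values.

    ```python
    import numpy as np

    def kalman_smooth(y, q=1e-4, r=1e-2):
        """Minimal scalar Kalman filter with a random-walk state model, shown only to
        illustrate model-based real-time filtering of a noisy physiological trace.
        q and r are the process and measurement noise variances (assumed)."""
        x_est, p_est = 0.0, 1.0
        out = np.empty_like(y, dtype=float)
        for i, z in enumerate(y):
            p_pred = p_est + q                    # predict: state ~ constant + noise
            k = p_pred / (p_pred + r)             # Kalman gain
            x_est = x_est + k * (z - x_est)       # update with the new measurement
            p_est = (1.0 - k) * p_pred
            out[i] = x_est
        return out

    fs = 10.0                                      # Hz, assumed fNIRS sampling rate
    t = np.arange(0, 60, 1 / fs)
    hrf_like = 0.5 * (1 - np.cos(2 * np.pi * t / 60))   # slow activation-shaped trend
    noisy = hrf_like + 0.2 * np.random.default_rng(0).standard_normal(t.size)
    filtered = kalman_smooth(noisy)
    ```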

  4. A new methodology for vibration error compensation of optical encoders.

    PubMed

    Lopez, Jesus; Artes, Mariano

    2012-01-01

    Optical encoders are sensors based on grating interference patterns. Tolerances inherent to the manufacturing process can induce errors in the position accuracy, as the measurement signals deviate from ideal conditions. If the encoder is working under vibration, the oscillating movement of the scanning head is registered by the encoder system as a displacement, introducing an error into the counter that adds to graduation, system and installation errors. Behavior can be improved by different techniques that try to compensate for the error through processing of the measurement signals. In this work a new "ad hoc" methodology is presented to compensate the error of the encoder when it is working under the influence of vibration. The methodology is based on fitting techniques applied to the Lissajous figure of the deteriorated measurement signals and the use of a look-up table, resulting in a compensation procedure that yields higher sensor accuracy.

  5. Information integration and diagnosis analysis of equipment status and production quality for machining process

    NASA Astrophysics Data System (ADS)

    Zan, Tao; Wang, Min; Hu, Jianzhong

    2010-12-01

    Multi-sensor machining status monitoring techniques can acquire and analyze machining process information to implement abnormality diagnosis and fault warning. Statistical quality control is normally used to distinguish abnormal fluctuations from normal ones. In this paper, by comparing the advantages and disadvantages of the two methods, the necessity and feasibility of their integration and fusion are introduced. An approach that integrates multi-sensor status monitoring and statistical process control, based on artificial intelligence, internet, and database techniques, is then brought forward. Based on virtual instrument techniques, the authors developed the machining quality assurance system MoniSysOnline, which has been used to monitor the grinding process. By analyzing the quality data and acoustic emission (AE) signal information of the wheel dressing process, the cause of machining quality fluctuation was identified. The experimental results indicate that the approach is suitable for status monitoring and analysis of the machining process.

  6. Materials, devices, techniques, and applications for Z-plane focal plane array technology II; Proceedings of the Meeting, San Diego, CA, July 12, 13, 1990

    NASA Astrophysics Data System (ADS)

    Carson, John C.

    1990-11-01

    Various papers on materials, devices, techniques, and applications for Z-plane focal plane array technology are presented. Individual topics addressed include: application of Z-plane technology to the remote sensing of the earth from GEO, applications of smart neuromorphic focal planes, image-processing of Z-plane technology, neural network Z-plane implementation with very high interconnection rates, using a small IR surveillance satellite for tactical applications, establishing requirements for homing applications, Z-plane technology. Also discussed are: on-array spike suppression signal processing, algorithms for on-focal-plane gamma circumvention and time-delay integration, current HYMOSS Z-technology, packaging of electrons for on- and off-FPA signal processing, space/performance qualification of tape automated bonded devices, automation in tape automated bonding, high-speed/high-volume radiometric testing of Z-technology focal planes, 128-layer HYMOSS-module fabrication issues, automation of IRFPA production processes.

  7. Signal Decomposition of High Resolution Time Series River Data to Separate Local and Regional Components of Conductivity

    EPA Science Inventory

    Signal processing techniques were applied to high-resolution time series data obtained from conductivity loggers placed upstream and downstream of a wastewater treatment facility along a river. Data was collected over 14-60 days, and several seasons. The power spectral densit...

  8. Detection of cavitation vortex in hydraulic turbines using acoustic techniques

    NASA Astrophysics Data System (ADS)

    Candel, I.; Bunea, F.; Dunca, G.; Bucur, D. M.; Ioana, C.; Reeb, B.; Ciocan, G. D.

    2014-03-01

    Cavitation phenomena are known for their destructive capacity in hydraulic machinery and are caused by a pressure decrease followed by an implosion when the cavitation bubbles encounter an adverse pressure gradient. A helical vortex appears in the turbine diffuser cone at partial flow rate operation and can be cavitating in its core. Cavity volumes and vortex frequencies vary with the under-pressure level. If the vortex frequency comes close to one of the eigenfrequencies of the turbine, a resonance phenomenon may occur, the unsteady fluctuations can be amplified, and significant damage to the turbine and the hydraulic circuit can result. Conventional cavitation vortex detection techniques are based on passive devices (pressure sensors or accelerometers). Limited sensor bandwidths and low frequency response limit the vortex detection and characterization information provided by the passive techniques. In order to go beyond these techniques and develop a new active one that removes these drawbacks, previous work in the field has shown that techniques based on acoustic signals, with the signal content adapted to a particular hydraulic situation, can be more robust and accurate. The cavitation vortex effects on the water flow profile downstream of the hydraulic turbine runner are responsible for modifications of the signal content. Basic signal techniques use narrow-band signals traveling inside the flow from an emitting transducer to a receiving one (active sensors). Emission of wide-band signals into the flow during the onset and development of the vortex embeds changes in the received signals. Signal processing methods are used to estimate the cavitation onset and evolution. Tests done in a reduced-scale facility showed that, due to the increasing flow rate, the signal-vortex interaction is seen as modifications of the received signal's high-order statistics and bandwidth. Wide-band acoustic transducers have a higher dynamic range than mechanical elements; the system's reaction time is reduced, resulting in faster detection of the unwanted effects. The paper presents an example of this new investigation technique on a vortex generator in the test facility of ICPE-CA.

  9. Statistical optimisation techniques in fatigue signal editing problem

    NASA Astrophysics Data System (ADS)

    Nopiah, Z. M.; Osman, M. H.; Baharin, N.; Abdullah, S.

    2015-02-01

    Success in fatigue signal editing is determined by the level of length reduction achieved without compromising statistical constraints. A great reduction rate can be achieved by removing small-amplitude cycles from the recorded signal. The long recorded signal sometimes renders the cycle-to-cycle editing process daunting. This has encouraged researchers to focus on the segment-based approach. This paper discusses the joint application of the Running Damage Extraction (RDE) technique and a single-constrained Genetic Algorithm (GA) in fatigue signal editing optimisation. In the first section, the RDE technique is used to restructure and summarise the fatigue strain. This technique combines the overlapping window and fatigue strain-life models. It is designed to identify and isolate the fatigue events that exist in the variable-amplitude strain data into different segments, whereby the retention of statistical parameters and the vibration energy are considered. In the second section, the fatigue data editing problem is formulated as a constrained single optimisation problem that can be solved using the GA method. The GA produces the shortest edited fatigue signal by selecting appropriate segments from a pool of labelled segments. Challenges arise due to constraints on the segment selection by deviation level over three signal properties, namely cumulative fatigue damage, root mean square and kurtosis values. Experimental results over several case studies show that the idea of solving fatigue signal editing within a framework of optimisation is effective and automatic, and that the GA is robust for constrained segment selection.

  10. Statistical optimisation techniques in fatigue signal editing problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nopiah, Z. M.; Osman, M. H.; Baharin, N.

    Success in fatigue signal editing is determined by the level of length reduction achieved without compromising statistical constraints. A great reduction rate can be achieved by removing small-amplitude cycles from the recorded signal. The long recorded signal sometimes renders the cycle-to-cycle editing process daunting. This has encouraged researchers to focus on the segment-based approach. This paper discusses the joint application of the Running Damage Extraction (RDE) technique and a single-constrained Genetic Algorithm (GA) in fatigue signal editing optimisation. In the first section, the RDE technique is used to restructure and summarise the fatigue strain. This technique combines the overlapping window and fatigue strain-life models. It is designed to identify and isolate the fatigue events that exist in the variable-amplitude strain data into different segments, whereby the retention of statistical parameters and the vibration energy are considered. In the second section, the fatigue data editing problem is formulated as a constrained single optimisation problem that can be solved using the GA method. The GA produces the shortest edited fatigue signal by selecting appropriate segments from a pool of labelled segments. Challenges arise due to constraints on the segment selection by deviation level over three signal properties, namely cumulative fatigue damage, root mean square and kurtosis values. Experimental results over several case studies show that the idea of solving fatigue signal editing within a framework of optimisation is effective and automatic, and that the GA is robust for constrained segment selection.

  11. Computed tomography of x-ray images using neural networks

    NASA Astrophysics Data System (ADS)

    Allred, Lloyd G.; Jones, Martin H.; Sheats, Matthew J.; Davis, Anthony W.

    2000-03-01

    Traditional CT reconstruction is done using the technique of filtered backprojection (FB). While this technique is widely employed in industrial and medical applications, it is not generally understood that FB has a fundamental flaw. The Gibbs phenomenon states that any Fourier reconstruction will produce errors in the vicinity of all discontinuities, and that the error will equal 28 percent of the discontinuity. A number of years back, one of the authors proposed a biological perception model whereby biological neural networks perceive 3D images from stereo vision. The perception model posits an internal hard-wired neural network which emulates the external physical process. A process is repeated whereby erroneous unknown internal values are used to generate an emulated signal which is compared to externally sensed data, generating an error signal. Feedback from the error signal is then used to update the erroneous internal values. The process is repeated until the error signal no longer decreases. It was soon realized that the same method could be used to obtain CT from x-rays without having to do Fourier transforms. Neural networks have the additional potential for handling non-linearities and missing data. The technique has been applied to some coral images collected at the Los Alamos high-energy x-ray facility. The initial images show considerable promise, in some instances showing more detail than the FB images obtained from the same data. Although routine production using this new method would require a massively parallel computer, the method shows promise, especially where refined detail is required.
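    The error-feedback loop described above (internal values generate an emulated signal, the mismatch with the sensed data drives an update, and iteration stops when the error no longer decreases) resembles a Landweber-style iteration, sketched below on a toy linear forward model; this illustrates the general idea only and is not the authors' neural-network implementation.

    ```python
    import numpy as np

    def iterative_reconstruct(A, b, lr=1e-3, max_iter=5000, tol=1e-9):
        """Generic error-feedback reconstruction: A is a forward model mapping image
        values to simulated projections, b is the measured data. The image estimate
        is updated from the residual until the error stops decreasing."""
        x = np.zeros(A.shape[1])                  # unknown internal (image) values
        prev_err = np.inf
        for _ in range(max_iter):
            residual = b - A @ x                  # emulated signal vs. sensed data
            err = np.dot(residual, residual)
            if prev_err - err < tol:              # stop when error no longer decreases
                break
            prev_err = err
            x += lr * (A.T @ residual)            # feedback update of internal values
        return x

    # Tiny synthetic example: a random forward operator and a known "image".
    rng = np.random.default_rng(0)
    A = rng.standard_normal((120, 64))
    x_true = rng.random(64)
    b = A @ x_true + 0.01 * rng.standard_normal(120)
    x_hat = iterative_reconstruct(A, b)
    ```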

  12. Split-spectrum processing technique for SNR enhancement of ultrasonic guided wave.

    PubMed

    Pedram, Seyed Kamran; Fateri, Sina; Gan, Lu; Haig, Alex; Thornicroft, Keith

    2018-02-01

    Ultrasonic guided wave (UGW) systems are broadly used in several branches of industry where structural integrity is of concern. In those systems, signal interpretation can often be challenging due to the multi-modal and dispersive propagation of UGWs. This results in degradation of the signals in terms of signal-to-noise ratio (SNR) and spatial resolution. This paper employs the split-spectrum processing (SSP) technique in order to enhance the SNR and spatial resolution of UGW signals using optimized filter bank parameters in a real-time pipe-inspection scenario. The SSP technique has already been developed for other applications, such as conventional ultrasonic testing, for SNR enhancement. In this work, an investigation is provided to clarify the sensitivity of SSP performance to the filter bank parameter values for UGWs, such as processing bandwidth, filter bandwidth, filter separation and number of filters. As a result, the optimum values are estimated to significantly improve the SNR and spatial resolution of UGWs. The proposed method is compared synthetically and experimentally with conventional approaches employing different SSP recombination algorithms. The Polarity Thresholding (PT) and PT with Minimization (PTM) algorithms were found to be the best recombination algorithms, substantially improving the SNR by up to 36.9 dB and 38.9 dB, respectively. The outcome of the work presented in this paper paves the way to enhancing the reliability of UGW inspections. Copyright © 2017 Elsevier B.V. All rights reserved.
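    For readers unfamiliar with SSP, a compact sketch of the Gaussian filter bank plus the PT and PTM recombination rules is given below; the filter count, bandwidth, and test signal are illustrative assumptions, not the optimized parameters reported in the paper.

    ```python
    import numpy as np

    def split_spectrum_pt(x, fs, f_lo, f_hi, n_filters=16, bw_frac=0.08):
        """Split-spectrum processing sketch: split the band [f_lo, f_hi] into
        overlapping Gaussian sub-bands in the frequency domain, return each sub-band
        to the time domain, and keep a sample only when all sub-bands agree in sign
        (PT); the kept value for PTM is the minimum sub-band magnitude."""
        n = len(x)
        freqs = np.fft.rfftfreq(n, d=1.0 / fs)
        X = np.fft.rfft(x)
        centres = np.linspace(f_lo, f_hi, n_filters)
        sigma = bw_frac * (f_hi - f_lo)                       # Gaussian filter width (Hz)
        bands = np.array([np.fft.irfft(X * np.exp(-0.5 * ((freqs - fc) / sigma) ** 2), n)
                          for fc in centres])                 # shape (n_filters, n)
        agree = np.all(bands > 0, axis=0) | np.all(bands < 0, axis=0)
        pt = np.where(agree, x, 0.0)                          # polarity thresholding
        ptm = np.where(agree, np.sign(bands[0]) * np.abs(bands).min(axis=0), 0.0)
        return pt, ptm

    # Toy guided-wave-like record: a windowed tone burst buried in noise.
    fs = 1.0e6
    t = np.arange(8192) / fs
    burst = np.sin(2 * np.pi * 50e3 * t) * np.exp(-((t - 2e-3) / 2e-4) ** 2)
    noisy = burst + 0.5 * np.random.default_rng(0).standard_normal(t.size)
    pt, ptm = split_spectrum_pt(noisy, fs, 20e3, 80e3)
    ```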

  13. Advanced Signal Processing Analysis of Laser-Induced Breakdown Spectroscopy Data for the Discrimination of Obsidian Sources

    DTIC Science & Technology

    2012-02-09

    ...different sources [12,13], but the analytical techniques needed for such analysis (XRD, INAA, and ICP-MS) are time consuming and require expensive... partial least-squares discriminant analysis (PLSDA) that used the SIMPLS solving method [33]. In the experiment design, a leave-one-sample-out (LOSO) para...

  14. Microwave bale moisture sensing: Field trial continued

    USDA-ARS?s Scientific Manuscript database

    A microwave moisture measurement technique was developed at the USDA, ARS Cotton Production and Processing Research Unit for moisture sensing of cotton bales after the bale press. The technique measures the propagation delay of a microwave signal that is transmitted through the cotton bale. This res...

  15. Instrumentation and signal processing for the detection of heavy water using off axis-integrated cavity output spectroscopy technique

    NASA Astrophysics Data System (ADS)

    Gupta, A.; Singh, P. J.; Gaikwad, D. Y.; Udupa, D. V.; Topkar, A.; Sahoo, N. K.

    2018-02-01

    An experimental setup is developed for the trace-level detection of heavy water (HDO) using the off-axis integrated cavity output spectroscopy technique. The absorption spectrum of water samples is recorded in the spectral range of 7190.7 cm-1-7191.5 cm-1 with a diode laser as the light source. The heavy water concentration is determined from the HDO and water lines in the recorded water vapor absorption spectrum. The effect of cavity gain nonlinearity with per-pass absorption is studied. The signal processing and data fitting procedure is devised to obtain linear calibration curves by including nonlinear cavity gain effects in the calculation. Initial calibration of mirror reflectivity is performed by measurements on the natural water sample. The signal processing and data fitting method has been validated by measuring the HDO concentration in water samples over a wide range, from 20 ppm to 2280 ppm, showing a linear calibration curve. The average measurement time is about 30 s. The experimental technique presented in this paper could be applied to the development of a portable instrument for the fast measurement of water isotopic composition in heavy water plants and for the detection of heavy water leaks in pressurized heavy water reactors.

  16. Directional dual-tree complex wavelet packet transforms for processing quadrature signals.

    PubMed

    Serbes, Gorkem; Gulcur, Halil Ozcan; Aydin, Nizamettin

    2016-03-01

    Quadrature signals containing in-phase and quadrature-phase components are used in many signal processing applications in every field of science and engineering. Specifically, Doppler ultrasound systems used to evaluate cardiovascular disorders noninvasively also produce signals in quadrature format. In order to obtain directional blood flow information, the quadrature outputs have to be preprocessed using methods such as asymmetrical and symmetrical phasing filter techniques. The resulting directional signals can be used to detect asymptomatic embolic signals in the cerebral circulation caused by small emboli, which are indicators of a possible future stroke. Various transform-based methods such as Fourier and wavelet transforms have frequently been used in processing embolic signals. However, most of the time the Fourier and discrete wavelet transforms are not appropriate for the analysis of embolic signals due to their non-stationary time-frequency behavior. Alternatively, the discrete wavelet packet transform can perform an adaptive decomposition of the time-frequency axis. In this study, directional discrete wavelet packet transforms, which have the ability to map directional information while processing quadrature signals and have less computational complexity than existing wavelet packet-based methods, are introduced. The performance of the proposed methods is examined in detail using single-frequency, synthetic narrow-band, and embolic quadrature signals.
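
    The directional wavelet packet transforms themselves are not reproduced in the abstract; as background to the preprocessing step mentioned above, the sketch below separates a quadrature Doppler signal into forward- and reverse-flow components by masking positive and negative frequencies, which is equivalent in effect to the classical phasing-filter approach. The test tones and sampling rate are invented for illustration.

      import numpy as np

      def separate_directions(i_sig, q_sig):
          """Split a quadrature Doppler signal into forward/reverse flow parts.

          Treating z = I + jQ as a complex signal, forward flow maps to positive
          frequencies and reverse flow to negative frequencies, so a frequency-
          domain mask reproduces the effect of the phasing-filter technique.
          """
          z = i_sig + 1j * q_sig
          Z = np.fft.fft(z)
          f = np.fft.fftfreq(z.size)
          forward = np.fft.ifft(np.where(f > 0, Z, 0))
          reverse = np.fft.ifft(np.where(f < 0, Z, 0))
          return forward, reverse

      # Synthetic example (assumed): 1 kHz forward and 300 Hz reverse Doppler tones.
      fs = 10_000
      t = np.arange(0, 0.1, 1 / fs)
      i_sig = np.cos(2 * np.pi * 1000 * t) + np.cos(2 * np.pi * 300 * t)
      q_sig = np.sin(2 * np.pi * 1000 * t) - np.sin(2 * np.pi * 300 * t)
      fwd, rev = separate_directions(i_sig, q_sig)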

  17. Channel modeling, signal processing and coding for perpendicular magnetic recording

    NASA Astrophysics Data System (ADS)

    Wu, Zheng

    With the increasing areal density in magnetic recording systems, perpendicular recording has replaced longitudinal recording to overcome the superparamagnetic limit. Studies on perpendicular recording channels including aspects of channel modeling, signal processing and coding techniques are presented in this dissertation. To optimize a high density perpendicular magnetic recording system, one needs to know the tradeoffs between various components of the system including the read/write transducers, the magnetic medium, and the read channel. We extend the work by Chaichanavong on the parameter optimization for systems via design curves. Different signal processing and coding techniques are studied. Information-theoretic tools are utilized to determine the acceptable region for the channel parameters when optimal detection and linear coding techniques are used. Our results show that a considerable gain can be achieved by the optimal detection and coding techniques. The read-write process in perpendicular magnetic recording channels includes a number of nonlinear effects. Nonlinear transition shift (NLTS) is one of them. The signal distortion induced by NLTS can be reduced by write precompensation during data recording. We numerically evaluate the effect of NLTS on the read-back signal and examine the effectiveness of several write precompensation schemes in combating NLTS in a channel characterized by both transition jitter noise and additive white Gaussian electronics noise. We also present an analytical method to estimate the bit-error-rate and use it to help determine the optimal write precompensation values in multi-level precompensation schemes. We propose a mean-adjusted pattern-dependent noise predictive (PDNP) detection algorithm for use on the channel with NLTS. We show that this detector can offer significant improvements in bit-error-rate (BER) compared to conventional Viterbi and PDNP detectors. Moreover, the system performance can be further improved by combining the new detector with a simple write precompensation scheme. Soft-decision decoding for algebraic codes can improve performance for magnetic recording systems. In this dissertation, we propose two soft-decision decoding methods for tensor-product parity codes. We also present a list decoding algorithm for generalized error locating codes.

  18. Surface-geophysical techniques used to detect existing and infilled scour holes near bridge piers

    USGS Publications Warehouse

    Placzek, Gary; Haeni, F.P.

    1995-01-01

    Surface-geophysical techniques were used with a position-recording system to study riverbed scour near bridge piers. From May 1989 to May 1993, Fathometers, fixed- and swept-frequency continuous seismic-reflection profiling (CSP) systems, and a ground-penetrating radar (GPR) system were used with a laser-positioning system to measure the depth and extent of existing and infilled scour holes near bridge piers. Equipment was purchased commercially and modified when necessary to interface the components and (or) to improve their performance. Three 200-kHz black-and-white chart-recording Fathometers produced profiles of the riverbed that included existing scour holes and exposed pier footings. The Fathometers were used in conjunction with other geophysical techniques to help interpret the geophysical data. A 20-kHz color Fathometer delineated scour-hole geometry and, in some cases, the thickness of fill material in the hole. The signal provided subbottom information as deep as 10 ft in fine-grained materials and resolved layers of fill material as thin as 1 ft. Fixed-frequency and swept-frequency CSP systems were evaluated. The fixed-frequency system used a 3.5-, 7.0-, or 14-kHz signal. The 3.5-kHz signal penetrated up to 50 ft of fine-grained material and resolved layers as thin as 2.5 ft. The 14-kHz signal penetrated up to 20 ft of fine-grained material and resolved layers as thin as 1 ft. The swept-frequency system used a signal that swept from 2 to 16 kHz. With this system, up to 50 ft of penetration was achieved, and fill material as thin as 1 ft was resolved. Scour-hole geometry, exposed pier footings, and fill thickness in scour holes were detected with both CSP systems. The GPR system used an 80-, 100-, or 300-MHz signal. The technique produced records in water up to 15 ft deep that had a specific conductance less than 200 µS/cm. The 100-MHz signal penetrated up to 40 ft of resistive granular material and resolved layers as thin as 2 ft. Scour-hole geometry, the thickness of fill material in scour holes, and riverbed deposition were detected using this technique. Processing techniques were applied after data collection to assist with the interpretation of the data. Data were transferred from the color Fathometer, CSP, and GPR systems to a personal computer, and a commercially available software package designed to process GPR data was used to process the GPR and CSP data. Digital filtering, predictive-deconvolution, and migration algorithms were applied to some of the data. The processed data were displayed and printed as color amplitude or wiggle-trace plots. These processing methods eased and improved the interpretation of some of the data, but some interference from side echoes from bridge piers and multiple reflections remained in the data. The surface-geophysical techniques were applied at six bridge sites in Connecticut. Each site had different water depths, specific conductances, and riverbed materials. Existing and infilled scour holes, exposed pier footings, and riverbed deposition were detected by the surveys. The interpretations of the geophysical data were confirmed by comparing the data with lithologic and (or) probing data.

  19. The feasibility of the auto tuning respiratory compensation system with ultrasonic image tracking technique.

    PubMed

    Chuang, Ho-Chiao; Hsu, Hsiao-Yu; Nieh, Shu-Kan; Tien, Der-Chi

    2015-01-01

    The purpose of this study is to assess the feasibility of using ultrasound image analysis in combination with an automatic tumor localization system. During respiration, breathing in and out causes organ displacement in the lower lobe of the lung, and the maximum displacement occurs in the superior-inferior (SI) direction. Therefore, in this study all tumor positioning under respiratory compensation is in the SI direction, and the compensation is applied to organs in and adjacent to the lower lobe of the lung. Because of the processes of ultrasound image generation, image analysis and signal transmission, there is a time delay when the captured respiratory signals are sent to the automatic tumor localization system. The total delay time of the entire signal transmission process was 0.254 ± 0.023 seconds (with the lowest standard deviation) after a series of analyses. To compensate for this signal delay time, a phase lead compensator (PLC) was designed and built into the automatic tumor localization system. By analyzing the impact of the delay time and of respiratory waveforms at different frequencies on the phase lead compensator, an overall system delay time can be configured. Results showed that as the respiratory frequency increased, the variable "a" and the subsequent gain "k" in the controller became larger. Moreover, "a" and "k" increased as the system delay time increased when the respiratory frequency was fixed. The relationship of "a" and "k" to the respiratory frequency can be obtained by curve fitting to compensate for respiratory motion during tumor localization. By comparing the uncompensated and compensated signals produced by the automatic tumor localization system on simulated respiratory signals, the feasibility of combining ultrasound image analysis with the developed localization system can be evaluated. The results show that simulated respiratory signals at frequencies of 0.5, 0.333, 0.25, 0.2 and 0.167 Hz were improved and stabilized with the phase lead compensator. The compensation rate increased by 7.04-18.82%, and the final compensation rate is about 97%. Therefore, the automatic tumor localization system combined with ultrasound image analysis is feasible. The developed approach has four advantages: (1) it is a non-invasive way (ultrasound images) to monitor the entire compensation process during active respiration, instead of using a C-arm (invasive) to observe organ motion; (2) during radiation therapy, the whole treatment process can be continuous, which saves overall treatment time; (3) it is an independent system that can be mounted onto any treatment couch; and (4) users can operate the system easily without a complicated prior training process.
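
    The compensator gains in the study are obtained by curve fitting against respiratory frequency; purely as an illustration of the phase lead compensator structure, the sketch below discretises a first-order lead filter C(s) = k(s + a)/(s + b) with scipy and applies it to a delayed sinusoidal breathing waveform. The gain, zero and pole values are arbitrary placeholders, not the fitted ones.

      import numpy as np
      from scipy import signal

      def phase_lead_compensator(k, a, b, fs):
          """Discretise C(s) = k*(s + a)/(s + b); choosing b > a gives phase lead."""
          bz, az = signal.bilinear([k, k * a], [1.0, b], fs=fs)
          return bz, az

      # Simulated 0.25 Hz respiratory waveform with a ~0.254 s system delay (assumed).
      fs = 50.0
      t = np.arange(0, 30, 1 / fs)
      reference = np.sin(2 * np.pi * 0.25 * t)
      delayed = np.sin(2 * np.pi * 0.25 * (t - 0.254))
      bz, az = phase_lead_compensator(k=1.2, a=1.0, b=5.0, fs=fs)
      compensated = signal.lfilter(bz, az, delayed)   # partially restores the lost phase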

  20. Advanced ultrasonic measurement methodology for non-invasive interrogation and identification of fluids in sealed containers

    NASA Astrophysics Data System (ADS)

    Tucker, Brian J.; Diaz, Aaron A.; Eckenrode, Brian A.

    2006-03-01

    Government agencies and homeland security related organizations have identified the need to develop and establish a wide range of unprecedented capabilities for providing scientific and technical forensic services to investigations involving hazardous chemical, biological, and radiological materials, including extremely dangerous chemical and biological warfare agents. Pacific Northwest National Laboratory (PNNL) has developed a portable, hand-held, hazardous materials acoustic inspection prototype that provides noninvasive container interrogation and material identification capabilities using nondestructive ultrasonic velocity and attenuation measurements. Due to the wide variety of fluids as well as container sizes and materials encountered in various law enforcement inspection activities, the need for high measurement sensitivity and advanced ultrasonic measurement techniques was identified. The prototype was developed using a versatile electronics platform, advanced ultrasonic wave propagation methods, and advanced signal processing techniques. This paper primarily focuses on the ultrasonic measurement methods and signal processing techniques incorporated into the prototype. High-bandwidth ultrasonic transducers combined with an advanced pulse compression technique allowed researchers to 1) obtain high signal-to-noise ratios and 2) obtain accurate and consistent time-of-flight (TOF) measurements through a variety of highly attenuative containers and fluid media. Results of work conducted in the laboratory have demonstrated that the prototype measurement technique also provided information regarding container properties, which will be utilized in future container-independent measurements of hidden liquids.

  1. Prototype instrument for noninvasive ultrasonic inspection and identification of fluids in sealed containers

    NASA Astrophysics Data System (ADS)

    Tucker, Brian J.; Diaz, Aaron A.; Eckenrode, Brian A.

    2006-05-01

    Government agencies and homeland security related organizations have identified the need to develop and establish a wide range of unprecedented capabilities for providing scientific and technical forensic services to investigations involving hazardous chemical, biological, and radiological materials, including extremely dangerous chemical and biological warfare agents. Pacific Northwest National Laboratory (PNNL) has developed a portable, handheld, hazardous materials acoustic inspection prototype that provides noninvasive container interrogation and material identification capabilities using nondestructive ultrasonic velocity and attenuation measurements. Due to the wide variety of fluids as well as container sizes and materials encountered in various law enforcement inspection activities, the need for high measurement sensitivity and advanced ultrasonic measurement techniques was identified. The prototype was developed using a versatile electronics platform, advanced ultrasonic wave propagation methods, and advanced signal processing techniques. This paper primarily focuses on the ultrasonic measurement methods and signal processing techniques incorporated into the prototype. High-bandwidth ultrasonic transducers combined with an advanced pulse compression technique allowed researchers to 1) obtain high signal-to-noise ratios and 2) obtain accurate and consistent time-of-flight (TOF) measurements through a variety of highly attenuative containers and fluid media. Results of work conducted in the laboratory have demonstrated that the prototype measurement technique also provided information regarding container properties, which will be utilized in future container-independent measurements of hidden liquids.
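
    As a rough illustration of the pulse-compression idea behind the TOF measurements above, the sketch below cross-correlates a noisy received waveform with a transmitted linear chirp, which sharpens the echo and makes the arrival time easy to pick. The sampling rate, chirp band and echo delay are invented; the prototype's actual excitation and electronics are not described in the abstract.

      import numpy as np
      from scipy.signal import chirp, correlate

      fs = 20e6                                    # 20 MS/s sampling (assumed)
      t = np.arange(0, 50e-6, 1 / fs)
      tx = chirp(t, f0=0.5e6, f1=2.5e6, t1=t[-1], method="linear")   # transmitted chirp

      # Simulated received signal: weak echo delayed by 12 us plus noise.
      delay = int(12e-6 * fs)
      rx = np.zeros(t.size + delay)
      rx[delay:delay + t.size] += 0.05 * tx
      rx += 0.02 * np.random.randn(rx.size)

      # Pulse compression = cross-correlation with the transmitted waveform.
      compressed = correlate(rx, tx, mode="valid")
      tof = np.argmax(np.abs(compressed)) / fs     # estimated time of flight (~12 us)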

  2. Invariance algorithms for processing NDE signals

    NASA Astrophysics Data System (ADS)

    Mandayam, Shreekanth; Udpa, Lalita; Udpa, Satish S.; Lord, William

    1996-11-01

    Signals that are obtained in a variety of nondestructive evaluation (NDE) processes capture information not only about the characteristics of the flaw, but also reflect variations in the specimen's material properties. Such signal changes may be viewed as anomalies that could obscure defect related information. An example of this situation occurs during in-line inspection of gas transmission pipelines. The magnetic flux leakage (MFL) method is used to conduct noninvasive measurements of the integrity of the pipe-wall. The MFL signals contain information both about the permeability of the pipe-wall and the dimensions of the flaw. Similar operational effects can be found in other NDE processes. This paper presents algorithms to render NDE signals invariant to selected test parameters, while retaining defect related information. Wavelet transform based neural network techniques are employed to develop the invariance algorithms. The invariance transformation is shown to be a necessary pre-processing step for subsequent defect characterization and visualization schemes. Results demonstrating the successful application of the method are presented.

  3. Applications of ICA and fractal dimension in sEMG signal processing for subtle movement analysis: a review.

    PubMed

    Naik, Ganesh R; Arjunan, Sridhar; Kumar, Dinesh

    2011-06-01

    Surface electromyography (sEMG) signal separation and decomposition have long been an interesting research topic in the field of rehabilitation and medical research. Subtle myoelectric control is an advanced technique concerned with the detection, processing, classification, and application of myoelectric signals to control human-assisting robots or rehabilitation devices. This paper reviews recent research and development in independent component analysis and fractal dimension analysis for sEMG pattern recognition, and presents state-of-the-art achievements in terms of their type, structure, and potential application. Directions for future research are also briefly outlined.

  4. Micromechanical Signal Processors

    NASA Astrophysics Data System (ADS)

    Nguyen, Clark Tu-Cuong

    Completely monolithic high-Q micromechanical signal processors constructed of polycrystalline silicon and integrated with CMOS electronics are described. The signal processors implemented include an oscillator, a bandpass filter, and a mixer + filter--all of which are components commonly required for up- and down-conversion in communication transmitters and receivers, and all of which take full advantage of the high Q of micromechanical resonators. Each signal processor is designed, fabricated, then studied with particular attention to the performance consequences associated with miniaturization of the high-Q element. The fabrication technology which realizes these components merges planar integrated circuit CMOS technologies with those of polysilicon surface micromachining. The technologies are merged in a modular fashion, where the CMOS is processed in the first module, the microstructures in a following separate module, and at no point in the process sequence are steps from each module intermixed. Although the advantages of such modularity include flexibility in accommodating new module technologies, the developed process constrained the CMOS metallization to a high temperature refractory metal (tungsten metallization with TiSi2 contact barriers) and constrained the micromachining process to long-term temperatures below 835 °C. Rapid-thermal annealing (RTA) was used to relieve residual stress in the mechanical structures. To reduce the complexity involved with developing this merged process, capacitively transduced resonators are utilized. High-Q single resonator and spring-coupled micromechanical resonator filters are also investigated, with particular attention to noise performance, bandwidth control, and termination design. The noise in micromechanical filters is found to be fairly high due to poor electromechanical coupling on the micro-scale with present-day technologies. Solutions to this high series resistance problem are suggested, including smaller electrode-to-resonator gaps to increase the coupling capacitance. Active Q-control techniques are demonstrated which control the bandwidth of micromechanical filters and simulate filter terminations with little passband distortion. Noise analysis shows that these active techniques are relatively quiet when compared with other resistive techniques. Modulation techniques are investigated whereby a single resonator or a filter constructed from several such resonators can provide both a mixing and a filtering function, or a filtering and amplitude modulation function. These techniques center around the placement of a carrier signal on the micromechanical resonator. Finally, micro oven stabilization is investigated in an attempt to null the temperature coefficient of a polysilicon micromechanical resonator. Here, surface micromachining procedures are utilized to fabricate a polysilicon resonator on a microplatform--two levels of suspension--equipped with heater and temperature sensing resistors, which are then imbedded in a feedback loop to control the platform (and resonator) temperature. (Abstract shortened by UMI.)

  5. Wavelet Filter Banks for Super-Resolution SAR Imaging

    NASA Technical Reports Server (NTRS)

    Sheybani, Ehsan O.; Deshpande, Manohar; Memarsadeghi, Nargess

    2011-01-01

    This paper discusses innovative wavelet-based filter banks designed to enhance the analysis of super-resolution Synthetic Aperture Radar (SAR) images using parametric spectral methods and signal classification algorithms. SAR finds applications in many of NASA's earth science fields, such as deformation, ecosystem structure, dynamics of ice, snow and cold land processes, and surface water and ocean topography. Traditionally, standard methods such as the Fast Fourier Transform (FFT) and Inverse Fast Fourier Transform (IFFT) have been used to extract images from SAR radar data. Due to the non-parametric nature of these methods, their resolution limitations and their dependence on observation time, the use of spectral estimation and wavelet-based signal pre- and post-processing techniques to process SAR radar data has been proposed. Multi-resolution wavelet transforms and advanced spectral estimation techniques have proven to offer efficient solutions to this problem.
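
    The specific filter banks designed in the paper are not given in the abstract; as a generic illustration of multi-resolution wavelet pre-processing, the sketch below soft-thresholds the detail sub-bands of a noisy image with PyWavelets. The wavelet family, decomposition level and universal-threshold rule are assumptions.

      import numpy as np
      import pywt

      def wavelet_denoise(image, wavelet="db4", level=2):
          """Soft-threshold the detail sub-bands of a 2-D wavelet decomposition."""
          coeffs = pywt.wavedec2(image, wavelet, level=level)
          # Noise estimate from the finest diagonal detail band (universal threshold).
          sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
          thresh = sigma * np.sqrt(2 * np.log(image.size))
          new_coeffs = [coeffs[0]] + [
              tuple(pywt.threshold(d, thresh, mode="soft") for d in detail)
              for detail in coeffs[1:]
          ]
          return pywt.waverec2(new_coeffs, wavelet)

      noisy = np.random.rand(128, 128) + 0.3 * np.random.randn(128, 128)
      denoised = wavelet_denoise(noisy)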

  6. Spectral imaging applications: Remote sensing, environmental monitoring, medicine, military operations, factory automation and manufacturing

    NASA Technical Reports Server (NTRS)

    Gat, N.; Subramanian, S.; Barhen, J.; Toomarian, N.

    1996-01-01

    This paper reviews the activities at OKSI related to imaging spectroscopy, presenting current and future applications of the technology. The authors discuss the development of several systems, including hardware, signal processing, data classification algorithms and benchmarking techniques to determine algorithm performance. Signal processing for each application is tailored by incorporating the phenomenology appropriate to the process into the algorithms. Pixel signatures are classified using techniques such as principal component analysis, generalized eigenvalue analysis and novel very fast neural network methods. The major hyperspectral imaging systems developed at OKSI include the Intelligent Missile Seeker (IMS) demonstration project for real-time target/decoy discrimination, and the Thermal InfraRed Imaging Spectrometer (TIRIS) for detection and tracking of toxic plumes and gases. In addition, systems for applications in medical photodiagnosis, manufacturing technology, and crop monitoring are also under development.

  7. Ku-band signal design study. [space shuttle orbiter data processing network

    NASA Technical Reports Server (NTRS)

    Rubin, I.

    1978-01-01

    Analytical tools, methods and techniques for assessing the design and performance of the space shuttle orbiter data processing system (DPS) are provided. The computer data processing network is evaluated in the key areas of queueing behavior, synchronization, and network reliability. The structure of the data processing network is described, as well as the system operation principles and the network configuration. The characteristics of the computer systems are indicated. System reliability measures are defined and studied. System and network invulnerability measures are computed. Communication path and network failure analysis techniques are included.

  8. Millimeter-wave signal generation for a wireless transmission system based on on-chip photonic integrated circuit structures.

    PubMed

    Guzmán, R; Carpintero, G; Gordon, C; Orbe, L

    2016-10-15

    We demonstrate and compare two different photonic-based signal sources for generating the carrier wave in a wireless communication link operating in the millimeter-wave range. The first signal source uses the optical heterodyne technique to generate a 113 GHz carrier wave frequency, while the second employs a different technique based on a pulsed mode-locked source with 100 GHz repetition rate frequency. The two optical sources were fabricated in a multi-project wafer run from an active/passive generic integration platform process using standardized building blocks, including multimode interference reflectors which allow us to define the structures on chip, without the need for cleaved facet mirrors. We highlight the superior performance of the mode-locked sources over an optical heterodyne technique. Error-free transmission was achieved in this experiment.

  9. A comparison of high-throughput techniques for assaying circadian rhythms in plants.

    PubMed

    Tindall, Andrew J; Waller, Jade; Greenwood, Mark; Gould, Peter D; Hartwell, James; Hall, Anthony

    2015-01-01

    Over the last two decades, the development of high-throughput techniques has enabled us to probe the plant circadian clock, a key coordinator of vital biological processes, in ways previously impossible. With the circadian clock increasingly implicated in key fitness and signalling pathways, this has opened up new avenues for understanding plant development and signalling. Our tool-kit has been constantly improving through continual development and novel techniques that increase throughput, reduce costs and allow higher resolution on the cellular and subcellular levels. With circadian assays becoming more accessible and relevant than ever to researchers, in this paper we offer a review of the techniques currently available before considering the horizons in circadian investigation at ever higher throughputs and resolutions.

  10. Review of progress in quantitative NDE. [Nondestructive Evaluation (NDE)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1991-01-01

    This booklet is composed of abstracts from papers submitted at a meeting on quantitative NDE. A multitude of topics are discussed including analysis of composite materials, NMR uses, x-ray instruments and techniques, manufacturing uses, neural networks, eddy currents, stress measurements, magnetic materials, adhesive bonds, signal processing, NDE of mechanical structures, tomography, defect sizing, NDE of plastics and ceramics, new techniques, optical and electromagnetic techniques, and nonlinear techniques. (GHH)

  11. Methods and Apparatus for Reducing Multipath Signal Error Using Deconvolution

    NASA Technical Reports Server (NTRS)

    Kumar, Rajendra (Inventor); Lau, Kenneth H. (Inventor)

    1999-01-01

    A deconvolution approach to adaptive signal processing has been applied to the elimination of signal multipath errors, as embodied in one preferred embodiment in a global positioning system receiver. The method and receiver of the present invention estimate and then compensate for multipath effects in a comprehensive manner. Application of deconvolution, along with other adaptive identification and estimation techniques, results in a completely novel GPS (Global Positioning System) receiver architecture.
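
    The patented receiver architecture is not described in detail in the abstract; as a generic illustration of deconvolving a multipath channel, the sketch below applies frequency-domain Wiener deconvolution to a toy signal. The channel taps and noise level are invented, and the invention estimates the multipath response adaptively rather than assuming it is known.

      import numpy as np

      def wiener_deconvolve(received, channel, noise_to_signal=1e-2):
          """Frequency-domain Wiener deconvolution with a known channel response."""
          n = received.size
          H = np.fft.fft(channel, n)
          G = np.conj(H) / (np.abs(H) ** 2 + noise_to_signal)   # Wiener inverse filter
          return np.real(np.fft.ifft(np.fft.fft(received) * G))

      # Direct path plus two delayed, attenuated multipath replicas (assumed taps).
      channel = np.zeros(32)
      channel[0], channel[5], channel[11] = 1.0, 0.4, 0.2
      sent = np.sign(np.random.randn(512))                      # toy transmitted symbols
      received = np.convolve(sent, channel)[:512] + 0.05 * np.random.randn(512)
      recovered = wiener_deconvolve(received, channel)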

  12. Application of Data Mining and Knowledge Discovery Techniques to Enhance Binary Target Detection and Decision-Making for Compromised Visual Images

    DTIC Science & Technology

    2004-11-01

    affords exciting opportunities in target detection. The input signal may be a sum of sine waves, it could be an auditory signal, or possibly a visual... rendering of a scene. Since image processing is an area in which the original data are stationary in some sense (auditory signals suffer from... Example 1 of SR - Identification of a Subliminal Signal below a Threshold... Example 2 of SR

  13. The DCU: the detector control unit for SPICA-SAFARI

    NASA Astrophysics Data System (ADS)

    Clénet, Antoine; Ravera, Laurent; Bertrand, Bernard; den Hartog, Roland H.; Jackson, Brian D.; van Leeuven, Bert-Joost; van Loon, Dennis; Parot, Yann; Pointecouteau, Etienne; Sournac, Anthony

    2014-08-01

    IRAP is developing the warm electronics, the so-called Detector Control Unit (DCU), in charge of the readout of the SPICA-SAFARI TES-type detectors. The architecture of the electronics used to read out the 3 500 sensors of the 3 focal plane arrays is based on the frequency domain multiplexing (FDM) technique. In each of the 24 detection channels, the data of up to 160 pixels are multiplexed in the frequency domain between 1 and 3.3 MHz. The DCU provides the AC signals to voltage-bias the detectors; it demodulates the detector data, which are read out at the cold stage by a SQUID; and it computes a feedback signal for the SQUID to linearize the detection chain in order to optimize its dynamic range. The feedback is computed with a specific technique, so-called baseband feedback (BBFB), which ensures that the loop is stable even with long propagation and processing delays (i.e. several µs) and with fast signals (i.e. frequency carriers at 3.3 MHz). This digital signal processing is complex and has to be done at the same time for the 3 500 pixels. It thus requires an optimisation of the power consumption. We took advantage of the relatively narrow science signal bandwidth (i.e. 20-40 Hz) to decouple the signal sampling frequency (10 MHz) from the data processing rate. Thanks to this method we managed to reduce the total number of operations per second, and thus the power consumption of the digital processing circuit, by a factor of 10. Moreover, we used time multiplexing techniques to share the resources of the circuit (e.g. a single BBFB module processes 32 pixels). The current version of the firmware is under validation in a Xilinx Virtex 5 FPGA; the final version will be developed in a space-qualified digital ASIC. Beyond the firmware architecture, the optimization of the instrument concerns the characterization routines and the definition of the optimal parameters. Indeed, the operation of the detection and readout chains requires the proper definition of more than 17 500 parameters (about 5 parameters per pixel). Thus it is mandatory to work out an automatic procedure to set up these optimal values. We defined a fast algorithm which characterizes the phase correction to be applied by the BBFB firmware and the pixel resonance frequencies. We also defined a technique to set the AC-carrier initial phases in such a way that the amplitude of their sum is minimized (for a better use of the DAC dynamic range).

  14. Real-time windowing in imaging radar using FPGA technique

    NASA Astrophysics Data System (ADS)

    Ponomaryov, Volodymyr I.; Escamilla-Hernandez, Enrique

    2005-02-01

    Imaging radar uses high-frequency electromagnetic waves reflected from different objects to estimate their parameters. Pulse compression is a standard signal processing technique used to minimize the peak transmission power, maximize the SNR, and obtain better resolution. Usually, pulse compression is achieved using a matched filter. The level of the side-lobes in imaging radar can be reduced using special weighting-function processing. Many different weighting functions are widely used in signal processing applications: Hamming, Hanning, Blackman, Chebyshev, Blackman-Harris, Kaiser-Bessel, etc. Field Programmable Gate Arrays (FPGAs) offer great benefits such as rapid implementation, dynamic reconfiguration and field programmability; this reconfigurability makes FPGAs a better solution than custom-made integrated circuits. This work aims at demonstrating a reasonably flexible implementation of linear-FM signal generation and pulse compression using Matlab, Simulink, and System Generator. Employing an FPGA and the mentioned software, we have proposed a pulse compression design on FPGA using classical and novel windowing techniques to reduce the side-lobe level. This increases the ability to detect small or closely spaced targets in imaging radar. The parallelism of the FPGA permits real-time implementation of the proposed algorithms. The paper also presents experimental results of the proposed windowing procedure in a marine radar with the following parameters: linear FM (chirp) signal; frequency deviation DF of 9.375 MHz; pulse width T of 3.2 μs; 800 taps in the matched filter; and a sampling frequency of 253.125 MHz. Side-lobe levels were reduced in real time, permitting better resolution of small targets.
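
    As a host-side illustration of the windowing effect described above (the paper implements it on an FPGA), the sketch below compares the compressed output of a linear-FM pulse with and without a Hamming-weighted matched filter; the weighted reference trades a slightly wider main lobe for much lower side-lobes. The pulse parameters loosely follow the values quoted, and the side-lobe measurement is a crude placeholder.

      import numpy as np
      from scipy.signal import chirp, correlate, windows

      fs = 253.125e6                            # sampling frequency
      T, df = 3.2e-6, 9.375e6                   # pulse width and frequency deviation
      t = np.arange(0, T, 1 / fs)
      pulse = chirp(t, f0=0.0, f1=df, t1=T, method="linear")

      unweighted = correlate(pulse, pulse, mode="full")
      weighted = correlate(pulse, pulse * windows.hamming(t.size), mode="full")

      def peak_sidelobe_db(y):
          y = np.abs(y) / np.abs(y).max()
          peak = np.argmax(y)
          mask = np.ones(y.size, bool)
          mask[max(peak - 50, 0):peak + 50] = False   # crude main-lobe exclusion (assumed)
          return 20 * np.log10(y[mask].max())

      print(peak_sidelobe_db(unweighted), peak_sidelobe_db(weighted))   # weighted is lower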

  15. Experiments with recursive estimation in astronomical image processing

    NASA Technical Reports Server (NTRS)

    Busko, I.

    1992-01-01

    Recursive estimation concepts have been applied to image enhancement problems since the 1970s. However, very few applications in the particular area of astronomical image processing are known. These concepts were derived, for 2-dimensional images, from the well-known theory of Kalman filtering in one dimension. The historic reasons for the application of these techniques to digital images are related to the images' scanned nature, in which the temporal output of a scanner device can be processed on-line by techniques borrowed directly from 1-dimensional recursive signal analysis. However, recursive estimation has particular properties that make it attractive even in modern days, when large computer memories make the full scanned image available to the processor at any given time. One particularly important aspect is the ability of recursive techniques to deal with non-stationary phenomena, that is, phenomena whose statistical properties vary in time (or position in a 2-D image). Many image processing methods make underlying stationarity assumptions, either for the stochastic field being imaged, for the imaging system properties, or both. They will underperform, or even fail, when applied to images that deviate significantly from stationarity. Recursive methods, on the contrary, make it feasible to perform adaptive processing, that is, to process the image with a processor whose properties are tuned to the image's local statistical properties. Recursive estimation can be used to build estimates of images degraded by such phenomena as noise and blur. We show examples of recursive adaptive processing of astronomical images, using several local statistical properties to drive the adaptive processor, such as average signal intensity, signal-to-noise ratio, and the autocorrelation function. Software was developed under IRAF, and as such will be made available to interested users.
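
    The IRAF software itself is not shown here; as a toy illustration of recursive estimation applied along the scan direction of an image, the sketch below runs a scalar random-walk Kalman filter across each row. The process and noise variances are fixed placeholders, whereas an adaptive processor of the kind described would retune them from local statistics.

      import numpy as np

      def recursive_row_estimate(image, process_var=1e-3, noise_var=0.04):
          """Run a scalar random-walk Kalman filter along each image row."""
          out = np.empty_like(image, dtype=float)
          for r, row in enumerate(image):
              x, p = row[0], 1.0                  # state estimate and its variance
              for c, z in enumerate(row):
                  p += process_var                # predict (random-walk model)
                  k = p / (p + noise_var)         # Kalman gain
                  x += k * (z - x)                # update with the new pixel
                  p *= 1.0 - k
                  out[r, c] = x
          return out

      noisy = np.clip(np.random.rand(64, 64) + 0.2 * np.random.randn(64, 64), 0, 1)
      smoothed = recursive_row_estimate(noisy)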

  16. Smart Sensors: Why and when the origin was and why and where the future will be

    NASA Astrophysics Data System (ADS)

    Corsi, C.

    2013-12-01

    Smart sensing is a technique developed in the 1970s, when processing capabilities based on readout integrated with signal processing were still far from the complexity needed in advanced IR surveillance and warning systems, because of the enormous amount of noise and unwanted signals emitted by the operating scenario, especially in military applications. Smart sensor technology was kept restricted within a closed military environment, then exploded in applications and performance in the 1990s thanks to the impressive improvements in integrated signal readout and processing achieved by CCD-CMOS technologies in focal plane arrays (FPAs). In fact, the rapid advances of very-large-scale integration (VLSI) processor technology and mosaic EO detector array technology allowed the development of new generations of smart sensors with much improved signal processing, by integrating microcomputers and other VLSI signal processors inside the sensor structure and achieving some basic functions of living eyes (dynamic stare, non-uniformity compensation, spatial and temporal filtering). New and future technologies (nanotechnology, bio-organic electronics, bio-computing) are enabling a new generation of smart sensors, extending smartness from the space-time domain to spectroscopic, functional, multi-domain signal processing. The history and future prospects of smart sensors are reported.

  17. Signal Decomposition of High Resolution Time Series River Data to Separate Local and Regional Components of Conductivity

    EPA Science Inventory

    Signal processing techniques were applied to high-resolution time series data obtained from conductivity loggers placed upstream and downstream of an oil and gas wastewater treatment facility along a river. Data was collected over 14-60 days. The power spectral density was us...

  18. A Preliminary Assessment of Soviet Development of Optimum Signal Discrimination Techniques: Optimum Space-Time Processing

    DTIC Science & Technology

    1982-10-01

    thermal noise and radioastronomy is probably the application Shirman had in mind for that work. Kuriksha considers a wide class of two-dimensional... this point has been discussed in terms of EM wave propagation, signal detection, and parameter estimation in such fields as radar and radioastronomy

  19. Phase editing as a signal pre-processing step for automated bearing fault detection

    NASA Astrophysics Data System (ADS)

    Barbini, L.; Ompusunggu, A. P.; Hillis, A. J.; du Bois, J. L.; Bartic, A.

    2017-07-01

    Scheduled maintenance and inspection of bearing elements in industrial machinery contributes significantly to the operating costs. Savings can be made through automatic vibration-based damage detection and prognostics, to permit condition-based maintenance. However automation of the detection process is difficult due to the complexity of vibration signals in realistic operating environments. The sensitivity of existing methods to the choice of parameters imposes a requirement for oversight from a skilled operator. This paper presents a novel approach to the removal of unwanted vibrational components from the signal: phase editing. The approach uses a computationally-efficient full-band demodulation and requires very little oversight. Its effectiveness is tested on experimental data sets from three different test-rigs, and comparisons are made with two state-of-the-art processing techniques: spectral kurtosis and cepstral pre-whitening. The results from the phase editing technique show a 10% improvement in damage detection rates compared to the state-of-the-art while simultaneously improving on the degree of automation. This outcome represents a significant contribution in the pursuit of fully automatic fault detection.
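
    The phase-editing algorithm itself is not spelled out in the abstract; for reference, the sketch below implements cepstral pre-whitening, one of the two benchmark techniques named above, which flattens the magnitude spectrum while preserving the phase so that impulsive bearing-fault content stands out. The toy vibration signal is invented for illustration.

      import numpy as np

      def cepstral_prewhitening(x, eps=1e-12):
          """Cepstral pre-whitening: keep the spectral phase, discard the magnitude."""
          X = np.fft.rfft(x)
          return np.fft.irfft(X / (np.abs(X) + eps), n=x.size)

      # Toy vibration record (assumed): strong 50 Hz tone masking weak repetitive impacts.
      fs = 10_000
      t = np.arange(0, 1.0, 1 / fs)
      impacts = np.sin(2 * np.pi * 3000 * t) * (np.mod(t, 1 / 97.0) < 0.001)
      x = 5 * np.sin(2 * np.pi * 50 * t) + 0.2 * impacts + 0.05 * np.random.randn(t.size)
      whitened = cepstral_prewhitening(x)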

  20. SETI - A preliminary search for narrowband signals at microwave frequencies

    NASA Technical Reports Server (NTRS)

    Cuzzi, J. N.; Clark, T. A.; Tarter, J. C.; Black, D. C.

    1977-01-01

    In the search for intelligent signals of extraterrestrial origin, certain forms of signals merit immediate and special attention. Extremely narrowband signals of spectral width similar to our own television transmissions are most favored energetically and least likely to be confused with natural celestial emission. A search of selected stars has been initiated using observational and data processing techniques optimized for the detection of such signals. These techniques allow simultaneous observation of 10^5 to 10^6 channels within the observed spectral range. About two hundred nearby (within 80 light-years) solar-type stars have been observed at frequencies near the main microwave transitions of the hydroxyl radical. In addition, several molecular (hydroxyl) masers and other non-thermal sources have been observed in this way in order to uncover any possible fine spectral structure of natural origin and to investigate the potential of such an instrument for radioastronomy.
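
    As a toy illustration of the kind of multichannel spectral search described above, the sketch below channelises a baseband record with a 2^20-point FFT and flags channels whose power stands well above a robust noise-floor estimate. The sampling rate, test tone and threshold rule are invented; the actual SETI instrumentation is far more elaborate.

      import numpy as np

      def narrowband_search(x, fs, n_channels=2**20, threshold=6.0):
          """Channelise a record with a long FFT and flag unusually strong channels."""
          spec = np.abs(np.fft.rfft(x[:n_channels])) ** 2
          noise = np.median(spec)                   # robust noise-floor estimate
          scale = noise / np.log(2)                 # mean bin power for exponential noise
          hits = np.flatnonzero(spec > noise + threshold * scale)
          return hits * fs / n_channels             # candidate frequencies in Hz

      # Synthetic record (assumed): weak 123.456 kHz tone in noise, 1 MS/s sampling.
      fs, n = 1_000_000, 2**20
      t = np.arange(n) / fs
      x = 0.02 * np.sin(2 * np.pi * 123_456 * t) + np.random.randn(n)
      print(narrowband_search(x, fs))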

  1. Machine learning based Intelligent cognitive network using fog computing

    NASA Astrophysics Data System (ADS)

    Lu, Jingyang; Li, Lun; Chen, Genshe; Shen, Dan; Pham, Khanh; Blasch, Erik

    2017-05-01

    In this paper, a Cognitive Radio Network (CRN) based on artificial intelligence is proposed to distribute the limited radio spectrum resources more efficiently. The CRN framework can analyze time-sensitive signal data close to the signal source using fog computing with different types of machine learning techniques. Depending on the computational capabilities of the fog nodes, different features and machine learning techniques are chosen to optimize spectrum allocation. In addition, the computing nodes send a periodic signal summary, which is much smaller than the original signal, to the cloud so that the overall system's spectrum resource allocation strategies are dynamically updated. By applying fog computing, the system is more adaptive to the local environment and robust to spectrum changes. As most of the signal data is processed at the fog level, this further strengthens system security by reducing the communication burden of the network.

  2. Filter design for cancellation of baseline-fluctuation in needle EMG recordings.

    PubMed

    Rodríguez-Carreño, I; Malanda-Trigueros, A; Gila-Useros, L; Navallas-Irujo, J; Rodríguez-Falces, J

    2006-01-01

    Appropriate cancellation of the baseline fluctuation (BLF) is an important issue when recording EMG signals, as the BLF may degrade signal quality and distort qualitative and quantitative analysis. We present a novel filter-design approach for automatic cancellation of the BLF based on several signal processing techniques used sequentially. The methodology is to estimate the spectral content of the BLF, and then to use this estimate to design a high-pass FIR filter that cancels the BLF present in the signal. Two merit figures are devised for measuring the degree of BLF present in an EMG record. These figures are used to compare our method with the conventional approach, which naively considers the baseline to be a constant potential shift without any fluctuation. Application of the technique to real and simulated EMG signals shows the superior performance of our approach in terms of both visual inspection and the merit figures.
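
    The authors' estimation procedure is not detailed in the abstract; the sketch below only illustrates the overall recipe, locating the low-frequency peak of the Welch spectrum and designing a linear-phase high-pass FIR just above it with scipy. The cut-off rule, filter length and simulated drift are placeholders.

      import numpy as np
      from scipy.signal import filtfilt, firwin, welch

      def remove_baseline(emg, fs, numtaps=501):
          """Estimate the BLF band from the low-frequency PSD, then high-pass filter."""
          f, pxx = welch(emg, fs=fs, nperseg=min(4096, emg.size))
          low = (f > 0) & (f < 30)                  # search the 0-30 Hz region (assumption)
          cutoff = min(f[low][np.argmax(pxx[low])] + 5.0, 25.0)
          taps = firwin(numtaps, cutoff, pass_zero=False, fs=fs)   # linear-phase high-pass
          return filtfilt(taps, [1.0], emg)

      fs = 10_000
      t = np.arange(0, 2, 1 / fs)
      drift = 0.5 * np.sin(2 * np.pi * 1.5 * t)     # simulated baseline fluctuation
      emg = drift + 0.1 * np.random.randn(t.size)
      cleaned = remove_baseline(emg, fs)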

  3. High-Speed, capacitance-based tip clearance sensing

    NASA Astrophysics Data System (ADS)

    Haase, W. C.; Haase, Z. S.

    This paper discusses recent advances in tip clearance measurement systems for turbine engines using capacitive probes. Real-time measurements of individual blade pulses are generated using wideband signal processing providing 3 dB bandwidths of typically 5 MHz. Subsequent mixed-signal processing circuitry provides real-time measurements of maximum, minimum, and average clearance with latencies of one blade-to-blade time interval. Both guarded and unguarded probe configurations are possible with the system. Calibration techniques provide high-accuracy measurements.

  4. The impact of environmental factors on the performance of millimeter wave seekers in smart munitions

    NASA Astrophysics Data System (ADS)

    Hager, R.

    1987-08-01

    An assessment has been made of the degradation in performance of horizontal-glide smart munitions incorporating millimeter wave seekers operating in three types of environments. Atmospheric effects are shown to degrade performance appreciably only in very severe weather conditions. Electromagnetic line-of-sight masking due to foliage (forest canopy and tree-lined roads) will limit submunition usage and may be a potential problem. The most serious problem involves the confident detection of military vehicles in the presence of land clutter. Standard signal processing techniques involving signal amplitude and signal averaging are not likely to be adequate for detection. Observations regarding more sophisticated techniques and the current state of research are included.

  5. Epileptic seizure detection in EEG signal using machine learning techniques.

    PubMed

    Jaiswal, Abeg Kumar; Banka, Haider

    2018-03-01

    Epilepsy is a well-known nervous system disorder characterized by seizures. Electroencephalograms (EEGs), which capture brain neural activity, can be used to detect epilepsy. Traditional methods for analyzing an EEG signal for epileptic seizure detection are time-consuming. Recently, several automated seizure detection frameworks using machine learning techniques have been proposed to replace these traditional methods. The two basic steps involved in machine learning are feature extraction and classification. Feature extraction reduces the input pattern space by keeping informative features, and the classifier assigns the appropriate class label. In this paper, we propose two effective approaches involving subpattern-based PCA (SpPCA) and cross-subpattern correlation-based PCA (SubXPCA) with a Support Vector Machine (SVM) for automated seizure detection in EEG signals. Feature extraction was performed using SpPCA and SubXPCA. Both techniques explore the subpattern correlation of EEG signals, which helps in the decision-making process. The SVM is used for classification of seizure and non-seizure EEG signals and was trained with a radial basis function kernel. All the experiments were carried out on a benchmark epilepsy EEG dataset consisting of 500 EEG signals recorded under different scenarios. Seven different experimental cases for classification were conducted. The classification accuracy was evaluated using tenfold cross-validation, and the classification results of the proposed approaches were compared with those of existing techniques in the literature to establish the claim.
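
    The SpPCA and SubXPCA variants are not specified in the abstract; the sketch below shows only the plain PCA + RBF-SVM pipeline they build on, using scikit-learn with random stand-in data shaped like the 500-segment benchmark. The component count and SVM settings are illustrative.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      # Stand-in data: 500 "EEG segments" of 4097 samples with binary seizure labels.
      rng = np.random.default_rng(0)
      X = rng.standard_normal((500, 4097))
      y = rng.integers(0, 2, 500)

      # PCA for feature reduction, then an RBF-kernel SVM, scored by tenfold CV.
      clf = make_pipeline(StandardScaler(), PCA(n_components=30), SVC(kernel="rbf", C=1.0))
      print(cross_val_score(clf, X, y, cv=10).mean())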

  6. Motor unit action potential conduction velocity estimated from surface electromyographic signals using image processing techniques.

    PubMed

    Soares, Fabiano Araujo; Carvalho, João Luiz Azevedo; Miosso, Cristiano Jacques; de Andrade, Marcelino Monteiro; da Rocha, Adson Ferreira

    2015-09-17

    In surface electromyography (surface EMG, or S-EMG), conduction velocity (CV) refers to the velocity at which motor unit action potentials (MUAPs) propagate along the muscle fibers during contractions. The CV is related to the type and diameter of the muscle fibers, ion concentration, pH, and firing rate of the motor units (MUs). The CV can be used in the evaluation of contractile properties of MUs and of muscle fatigue. The most popular methods for CV estimation are those based on maximum likelihood estimation (MLE). This work proposes an algorithm for estimating CV from S-EMG signals using digital image processing techniques. The proposed approach is demonstrated and evaluated using both simulated and experimentally acquired multichannel S-EMG signals. We show that the proposed algorithm is as precise and accurate as the MLE method in typical conditions of noise and CV. The proposed method is not susceptible to errors associated with MUAP propagation direction or inadequate initialization parameters, which are common with the MLE algorithm. Image processing-based approaches may be useful in S-EMG analysis to extract different physiological parameters from multichannel S-EMG signals. Other new methods based on image processing could also be developed to help solve other tasks in EMG analysis, such as estimation of the CV of individual MUs, localization and tracking of innervation zones, and study of MU recruitment strategies.
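
    Neither the image-processing algorithm nor the MLE method is reproduced here; as a baseline illustration of the quantity being estimated, the sketch below derives CV from the inter-channel delay of a simulated MUAP found by cross-correlation, assuming a 10 mm inter-electrode distance and a 4 m/s propagation velocity.

      import numpy as np
      from scipy.signal import correlate

      def conduction_velocity(ch1, ch2, fs, spacing_m=0.01):
          """Estimate CV from the inter-channel delay found by cross-correlation."""
          xc = correlate(ch2, ch1, mode="full")
          lag = np.argmax(xc) - (ch1.size - 1)      # samples by which ch2 lags ch1
          delay = lag / fs
          return spacing_m / delay if delay > 0 else np.nan

      # Synthetic MUAP propagating at 4 m/s over 10 mm, i.e. a 2.5 ms delay.
      fs = 4096
      t = np.arange(0, 0.2, 1 / fs)
      muap = np.exp(-((t - 0.05) / 0.004) ** 2) * np.sin(2 * np.pi * 120 * (t - 0.05))
      delayed = np.roll(muap, int(0.0025 * fs))
      print(conduction_velocity(muap, delayed, fs))  # ~4 m/s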

  7. Evaluating Acoustic Emission Signals as an in situ process monitoring technique for Selective Laser Melting (SLM)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fisher, Karl A.; Candy, Jim V.; Guss, Gabe

    2016-10-14

    In situ real-time monitoring of the Selective Laser Melting (SLM) process has significant implications for the AM community. The ability to adjust SLM process parameters during a build (in real time) can save time and money and eliminate expensive material waste. A feedback loop in the process would allow the system to potentially 'fix' problem regions before the next powder layer is added. In this study we have investigated acoustic emission (AE) phenomena generated during the SLM process and evaluated the results, in terms of a single process parameter, as an in situ process monitoring technique.

  8. The treatment of tendon injury with electromagnetic fields evidenced by advanced ultrasound image processing.

    PubMed

    Parker, Richard; Markov, Marko

    2015-09-01

    This article presents a novel modality for accelerating the repair of tendon and ligament lesions by means of a specifically designed electromagnetic field in an equine model. This novel therapeutic approach employs a delivery system that induces a specific electrical signal from an external magnetic field derived from Superconductive QUantum Interference Device (SQUID) measurements of injured vs. healthy tissue. Evaluation of this therapy technique is enabled by a proposed new technology described as Predictive Analytical Imagery (PAI™). This technique examines an ultrasound grayscale image and seeks to evaluate it by means of look-ahead predictive algorithms and digital signal processing. The net result is a significant reduction in background noise and the production of a high-resolution grayscale or digital image.

  9. A portable detection instrument based on DSP for beef marbling

    NASA Astrophysics Data System (ADS)

    Zhou, Tong; Peng, Yankun

    2014-05-01

    Beef marbling is one of the most important indices for assessing beef quality. Marbling is graded by measuring the fat distribution density in the rib-eye region. However, in most beef slaughtering houses and businesses, quality grades depend on trained personnel using visual inspection or comparing the beef slice to Chinese standard sample cards. Manual grading not only demands great labor but also lacks objectivity and accuracy. To meet the needs of beef slaughtering houses and businesses, a beef marbling detection instrument was designed. The instrument employs Charge-Coupled Device (CCD) imaging, digital image processing, Digital Signal Processor (DSP) control and processing, and Liquid Crystal Display (LCD) techniques. The TMS320DM642 digital signal processor from Texas Instruments (TI) is the core, combining high-speed data processing capabilities and real-time processing features. All processes, such as image acquisition, data transmission, image processing algorithms and display, were implemented on this instrument for quick, efficient, and non-invasive detection of beef marbling. The structure of the system, working principle, hardware and software are introduced in detail. The device is compact and easy to transport, and can determine the grade of beef marbling reliably and correctly.

  10. Application of wavelet filtering and Barker-coded pulse compression hybrid method to air-coupled ultrasonic testing

    NASA Astrophysics Data System (ADS)

    Zhou, Zhenggan; Ma, Baoquan; Jiang, Jingtao; Yu, Guang; Liu, Kui; Zhang, Dongmei; Liu, Weiping

    2014-10-01

    The air-coupled ultrasonic testing (ACUT) technique has been viewed as a viable solution for defect detection in advanced composites used in the aerospace and aviation industries. However, the large mismatch of acoustic impedance at the air-solid interface makes the transmission efficiency of ultrasound low and leads to a poor signal-to-noise ratio (SNR) in the received signal. The use of signal-processing techniques in non-destructive testing is therefore highly valuable. This paper presents a hybrid wavelet filtering and phase-coded pulse compression method to improve the SNR and output power of the received signal. The wavelet transform is utilised to filter insignificant components from the noisy ultrasonic signal, and the pulse compression process is used to improve the power of the correlated signal based on a cross-correlation algorithm. For reasonable parameter selection, different families of wavelets (Daubechies, Symlet and Coiflet) and decomposition levels in the discrete wavelet transform are analysed, and different Barker codes (5-13 bits) are also analysed to acquire a higher main-to-side-lobe ratio. The performance of the hybrid method was verified on a honeycomb composite sample. Experimental results demonstrated that the proposed method is very efficient in improving the SNR and signal strength, and it appears to be a very promising tool for evaluating the integrity of high-attenuation composite materials using ACUT.
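
    As a rough illustration of the pulse-compression half of the hybrid method (the wavelet-filtering stage resembles the earlier wavelet examples in this list), the sketch below correlates a noisy received record with a Barker-13 phase-coded tone burst and picks the echo time. The carrier frequency, cycles per chip and echo delay are assumptions, not the parameters used in the paper.

      import numpy as np
      from scipy.signal import correlate

      barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], dtype=float)

      fs, fc, cycles_per_chip = 10e6, 400e3, 4      # sampling, carrier, chip length (assumed)
      chip_len = int(cycles_per_chip * fs / fc)
      t_chip = np.arange(chip_len) / fs
      carrier = np.sin(2 * np.pi * fc * t_chip)
      tx = np.concatenate([b * carrier for b in barker13])   # phase-coded excitation burst

      # Received record: weak echo delayed by 80 us buried in heavy noise.
      delay = int(80e-6 * fs)
      rx = np.zeros(delay + 4 * tx.size)
      rx[delay:delay + tx.size] += 0.05 * tx
      rx += 0.2 * np.random.randn(rx.size)

      compressed = correlate(rx, tx, mode="valid")            # pulse compression
      echo_time = np.argmax(np.abs(compressed)) / fs          # ~80 us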

  11. High-Speed Data Acquisition and Digital Signal Processing System for PET Imaging Techniques Applied to Mammography

    NASA Astrophysics Data System (ADS)

    Martinez, J. D.; Benlloch, J. M.; Cerda, J.; Lerche, Ch. W.; Pavon, N.; Sebastia, A.

    2004-06-01

    This paper is framed within the Positron Emission Mammography (PEM) project, whose aim is to develop an innovative gamma-ray sensor for early breast cancer diagnosis. Currently, breast cancer is detected using low-energy X-ray screening. However, functional imaging techniques such as PET/FDG could be employed to detect breast cancer and track disease changes with greater sensitivity. Furthermore, a small and less expensive PET camera can be utilized, minimizing the main problems of whole-body PET. To accomplish these objectives, we are developing a new gamma-ray sensor based on a newly released photodetector. However, a dedicated PEM detector requires an adequate data acquisition (DAQ) and processing system. The characterization of gamma events needs a free-running analog-to-digital converter (ADC) with sampling rates of more than 50 MS/s and must achieve event count rates up to 10 MHz. Moreover, comprehensive data processing must be carried out to obtain the event parameters necessary for performing the image reconstruction. A new-generation digital signal processor (DSP) has been used to comply with these requirements. This device enables us to manage the DAQ system at up to 80 MS/s and to execute intensive calculations on the detector signals. This paper describes our DAQ and processing architecture, whose main features are: very high-speed data conversion, multichannel synchronized acquisition with zero dead time, a digital triggering scheme, and high data throughput with extensive optimization of the signal processing algorithms.

  12. Software algorithms for false alarm reduction in LWIR hyperspectral chemical agent detection

    NASA Astrophysics Data System (ADS)

    Manolakis, D.; Model, J.; Rossacci, M.; Zhang, D.; Ontiveros, E.; Pieper, M.; Seeley, J.; Weitz, D.

    2008-04-01

    The long-wave infrared (LWIR) hyperspectral sensing modality is often used for the detection and identification of chemical warfare agents (CWAs) in both military and civilian situations. The inherent nature and complexity of background clutter dictate a need for sophisticated and robust statistical models, which are then used in the design of optimum signal processing algorithms that provide the best exploitation of hyperspectral data to ultimately make decisions on the absence or presence of potentially harmful CWAs. This paper describes the basic elements of an automated signal processing pipeline developed at MIT Lincoln Laboratory. In addition to describing this signal processing architecture in detail, we briefly describe the key signal models that form the foundation of these algorithms as well as some spatial processing techniques used for false alarm mitigation. Finally, we apply this processing pipeline to real data measured by the Telops FIRST hyperspectral sensor to demonstrate its practical utility for the user community.

  13. Smart signal processing for an evolving electric grid

    NASA Astrophysics Data System (ADS)

    Silva, Leandro Rodrigues Manso; Duque, Calos Augusto; Ribeiro, Paulo F.

    2015-12-01

    Electric grids are interconnected complex systems consisting of generation, transmission, distribution, and active loads, recently called prosumers as they both produce and consume electric energy. Additionally, these systems encompass a vast array of equipment such as machines, power transformers, capacitor banks, power electronic devices, and motors, whose demand characteristics are continuously evolving. Given these conditions, signal processing is becoming an essential assessment tool enabling engineers and researchers to understand, plan, design, and operate the complex and smart electric grid of the future. This paper focuses on recent developments in signal processing applied to power system analysis in terms of characterization and diagnostics. The following techniques are reviewed and their characteristics and applications discussed: active power system monitoring, sparse representation of power system signals, real-time resampling, and time-frequency analysis (i.e., wavelets) applied to power fluctuations.

  14. Optical signal processing techniques and applications of optical phase modulation in high-speed communication systems

    NASA Astrophysics Data System (ADS)

    Deng, Ning

    In recent years, optical phase modulation has attracted much research attention in the field of fiber optic communications. Compared with the traditional optical intensity-modulated signal, one of the main merits of the optical phase-modulated signal is its better transmission performance. For optical phase modulation, despite the comprehensive study of its transmission performance, relatively little research has been carried out on its functions, applications and signal processing for future optical networks. These issues are systematically investigated in this thesis. The research findings suggest that optical phase modulation and its signal processing can greatly facilitate flexible network functions and high bandwidth for end users. In the thesis, the most important physical-layer technologies, signal processing and multiplexing, are investigated with optical phase-modulated signals. Novel and advantageous signal processing and multiplexing approaches are proposed and studied, and experimental investigations are reported and discussed. Optical time-division multiplexing and demultiplexing. With the ever-increasing demand for communication bandwidth, optical time-division multiplexing (OTDM) is an effective approach to upgrade the capacity of each wavelength channel in current optical systems. OTDM multiplexing can be realized simply; demultiplexing, however, requires relatively complicated signal processing and stringent timing control, which hinders its practicability. To tackle this problem, a new OTDM scheme with hybrid DPSK and OOK signals is proposed in this thesis. Experimental investigation shows that this scheme can greatly enhance the tolerance to demultiplexing timing misalignment and improve the demultiplexing performance, making OTDM more practical and cost effective. All-optical signal processing. In current and future optical communication systems and networks, the data rate per wavelength is approaching the speed limitation of electronics. Thus, all-optical signal processing techniques are highly desirable to support the necessary optical switching functionalities in future ultrahigh-speed optical packet-switching networks. To cope with the wide use of optical phase-modulated signals, an all-optical logic gate for DPSK or PSK input signals is developed in this thesis for the first time. Based on four-wave mixing in a semiconductor optical amplifier, the logic gate is simple, compact, and capable of supporting ultrafast operation. In addition to general logic processing, a simple label recognition scheme, as a specific signal processing function, is proposed for phase-modulated label signals. The proposed scheme can recognize any incoming label pattern according to the local pattern, and is potentially capable of handling variable-length label patterns. Optical access network with multicast overlay and centralized light sources. In the arena of optical access networks, wavelength-division-multiplexing passive optical network (WDM-PON) is a promising technology for delivering high-speed data traffic. However, most proposed WDM-PONs support only conventional point-to-point services and cannot meet the increasing demand for broadcast and multicast services. In this thesis, a simple network upgrade based on the traditional PON architecture is proposed to support both point-to-point and multicast services, with the two service signals modulated on the same lightwave carrier. The upstream signal is also remodulated on the same carrier at the optical network unit, which significantly relaxes the requirement on wavelength management at the network unit.

  15. Estimation of Target Angular Position Under Mainbeam Jamming Conditions,

    DTIC Science & Technology

    1995-12-01

    technique, Multiple Signal Classification (MUSIC), is used to estimate the target Direction Of Arrival (DOA) from the processed data vectors. The model...used in the MUSIC technique takes into account the fact that the jammer has been cancelled in the target data vector. The performance of this algorithm
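
    The DTIC snippet above is truncated; for readers unfamiliar with MUSIC, the following self-contained sketch runs the algorithm on a synthetic half-wavelength uniform linear array. The array geometry, source angles, and SNR are assumptions for illustration, not the cited radar and jammer scenario.

      import numpy as np
      from scipy.signal import find_peaks

      def music_spectrum(snapshots, n_sources, scan_deg):
          """Classic MUSIC pseudo-spectrum for a half-wavelength-spaced ULA."""
          n_sensors = snapshots.shape[0]
          R = snapshots @ snapshots.conj().T / snapshots.shape[1]   # sample covariance
          _, eigvecs = np.linalg.eigh(R)                            # ascending eigenvalues
          noise_sub = eigvecs[:, :n_sensors - n_sources]            # noise subspace
          p = []
          for theta in scan_deg:
              a = np.exp(1j * np.pi * np.arange(n_sensors) * np.sin(np.radians(theta)))
              p.append(1.0 / np.linalg.norm(noise_sub.conj().T @ a) ** 2)
          return np.array(p)

      # Two sources at -10 and 25 degrees, 8-element array, 200 noisy snapshots.
      rng = np.random.default_rng(2)
      n_sensors, n_snap = 8, 200
      doas = np.array([-10.0, 25.0])
      A = np.exp(1j * np.pi * np.outer(np.arange(n_sensors), np.sin(np.radians(doas))))
      S = rng.normal(size=(2, n_snap)) + 1j * rng.normal(size=(2, n_snap))
      noise = rng.normal(size=(n_sensors, n_snap)) + 1j * rng.normal(size=(n_sensors, n_snap))
      X = A @ S + 0.1 * noise
      scan = np.arange(-90.0, 90.5, 0.5)
      P = music_spectrum(X, n_sources=2, scan_deg=scan)
      peaks, _ = find_peaks(P)
      top_two = peaks[np.argsort(P[peaks])[-2:]]
      print("estimated DOAs (deg):", np.sort(scan[top_two]))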

  16. Reflectometric measurement of plasma imaging and applications

    NASA Astrophysics Data System (ADS)

    Mase, A.; Ito, N.; Oda, M.; Komada, Y.; Nagae, D.; Zhang, D.; Kogi, Y.; Tobimatsu, S.; Maruyama, T.; Shimazu, H.; Sakata, E.; Sakai, F.; Kuwahara, D.; Yoshinaga, T.; Tokuzawa, T.; Nagayama, Y.; Kawahata, K.; Yamaguchi, S.; Tsuji-Iio, S.; Domier, C. W.; Luhmann, N. C., Jr.; Park, H. K.; Yun, G.; Lee, W.; Padhi, S.; Kim, K. W.

    2012-01-01

    Progress in microwave and millimeter-wave technologies has made possible advanced diagnostics applicable to various fields, such as plasma diagnostics, radio astronomy, alien substance detection, airborne and spaceborne imaging radars (synthetic aperture radars), and living-body measurements. Transmission, reflection, scattering, and radiation processes of electromagnetic waves are utilized as diagnostic tools. In this report we focus on reflectometric measurements and their applications to biological signals (vital signal detection and breast cancer detection) as well as plasma diagnostics, specifically by use of imaging and ultra-wideband radar techniques.

  17. Inherent secure communications using lattice based waveform design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pugh, Matthew Owen

    2013-12-01

    The wireless communications channel is innately insecure due to the broadcast nature of the electromagnetic medium. Many techniques have been developed and implemented in order to combat insecurities and ensure the privacy of transmitted messages. Traditional methods include encrypting the data via cryptographic methods, hiding the data in the noise floor as in wideband communications, or nulling the signal in the spatial direction of the adversary using array processing techniques. This work analyzes the design of signaling constellations, i.e. modulation formats, to prevent eavesdroppers from correctly decoding transmitted messages. It has been shown that in certain channel models the ability of an adversary to decode the transmitted messages can be degraded by a clever signaling constellation based on lattice theory. This work attempts to optimize certain lattice parameters in order to maximize the security of the data transmission. These techniques are of interest because they are orthogonal to, and can be used in conjunction with, traditional security techniques to create a more secure communication channel.

  18. Dual-polarization phase shift processing with the Python ARM Radar Toolkit

    NASA Astrophysics Data System (ADS)

    Collis, S. M.; Lang, T. J.; Mühlbauer, K.; Helmus, J.; North, K.

    2016-12-01

    Weather radars that measure backscatter returns at two orthogonal polarizations can give unique insight into storm macro- and microphysics. Phase shift between the two polarizations caused by anisotropy in the liquid water path can be used as a constraint in rainfall rate and drop size distribution retrievals, and has the added benefit of being robust to attenuation and radar calibration. The measurement is complicated, however, by the impact of phase shift on backscatter in the presence of large drops and when the pulse volume is not filled uniformly by scatterers (known as partial beam filling). This has led to a signal processing challenge of separating the underlying desired signal from the transient signal, a challenge that has attracted many diverse solutions. To this end, the Python ARM Radar Toolkit (Py-ART) [1] becomes increasingly important. By providing an open architecture for the implementation of retrieval techniques, Py-ART has attracted three very different approaches to the phase processing problem: a fully variational technique, a finite impulse response filter technique [2], and a technique based on linear programming [3]. These either exist within the toolkit or in another open source package that uses the Py-ART architecture. This presentation will provide an overview of differential phase and specific differential phase observed at C- and S-band frequencies, the signal processing behind the three aforementioned techniques, and some examples of their application. The goal of this presentation is to highlight the importance of open source architectures such as Py-ART for geophysical retrievals. [1] Helmus, J.J. & Collis, S.M. (2016). The Python ARM Radar Toolkit (Py-ART), a Library for Working with Weather Radar Data in the Python Programming Language. JORS, 4(1), p.e25. DOI: http://doi.org/10.5334/jors.119 [2] Lang, T.J., Ahijevych, D.A., Nesbitt, S.W., Carbone, R.E., Rutledge, S.A., and Cifelli, R. (2007). Radar-Observed Characteristics of Precipitating Systems during NAME 2004. J. Climate, 20, 1713-1733. DOI: http://dx.doi.org/10.1175/JCLI4082.1 [3] Giangrande, S.E., McGraw, R., and Lei, L. (2013). An Application of Linear Programming to Polarimetric Radar Differential Phase Processing. JTECH, 30, 1716-1729. DOI: 10.1175/JTECH-D-12-00147.1.
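
    Py-ART's actual phase-processing routines are more elaborate than can be shown here; purely as an illustration of the finite-impulse-response approach mentioned above, the sketch below smooths a synthetic differential-phase profile with an FIR moving average and differentiates it to estimate specific differential phase. The gate spacing, window length, and synthetic ray are assumptions, not Py-ART code.

      import numpy as np

      def kdp_from_phidp(phidp_deg, gate_spacing_m, window_len=21):
          """Estimate specific differential phase KDP (deg/km) from a noisy
          differential-phase (PhiDP) ray: FIR moving-average smoothing followed
          by a centred derivative, KDP = 0.5 * d(PhiDP)/dr."""
          taps = np.ones(window_len) / window_len          # simple FIR low-pass
          smooth = np.convolve(phidp_deg, taps, mode="same")
          return 0.5 * np.gradient(smooth, gate_spacing_m / 1000.0)

      # Synthetic ray: 40 degrees of phase accumulated between gates 100 and 300.
      rng = np.random.default_rng(3)
      n_gates, spacing_m = 500, 150.0
      phidp = np.zeros(n_gates)
      phidp[100:300] = np.linspace(0.0, 40.0, 200)
      phidp[300:] = 40.0
      phidp += rng.normal(0, 2, n_gates)
      kdp = kdp_from_phidp(phidp, spacing_m)
      print("peak KDP (deg/km):", round(float(kdp[100:300].max()), 2))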

  19.   Ultrasonic monitoring of fish thawing process optimal time of thawing and effect of freezing/thawing.

    PubMed

    El Kadi, Youssef Ait; Moudden, Ali; Faiz, Bouazza; Maze, Gerard; Decultot, Dominique

    2013-01-01

    Fish quality is traditionally controlled by chemical and microbiological analysis. Non-destructive control presents an enormous professional interest thanks to the technical contribution and precision of the analysis to which it leads. This paper presents the results obtained from a characterisation of the fish thawing process by the ultrasonic technique, with monitoring of thermal processing from the frozen to the defrosted state. The study was carried out on red drum and salmon cut into fillets of 15 mm thickness. After being frozen at -20°C, each sample was enclosed in a plexiglas vessel with parallel walls at an ambient temperature of 30°C and excited at perpendicular incidence at 0.5 MHz by an ultrasonic pulser-receiver (Sofranel 5052PR). The measurement technique consists of studying the signals reflected by the fish during thawing; specific signal processing techniques are implemented to deduce information characterizing the state of the fish and its thawing process by examining the evolution of the positions of the echoes reflected by the sample and the viscoelastic parameters of the fish during thawing. The results show a relationship between the thermal state of the fish and its acoustic properties, which allowed the optimal time of the first thawing to be deduced in order to restrict the growth of microbial flora. For salmon, the results show a decrease of 36% in the time of the second thawing and an increase of 10.88% in the phase velocity, with a decrease of 65.5% in the peak-to-peak voltage of the reflected signal, and thus a decrease of the acoustic impedance. This study shows an optimal time and an evolution rate of thawing specific to each type of fish, and a correlation between the acoustic behavior of the fish and its thermal state, which confirms that this ultrasonic monitoring technique can substitute for destructive chemical analysis in order to monitor the thawing process and to determine whether a fish has suffered accidental thawing.

  20. Flow-based analysis using microfluidics-chemiluminescence systems.

    PubMed

    Al Lawati, Haider A J

    2013-01-01

    This review will discuss various approaches and techniques in which analysis using microfluidics-chemiluminescence systems (MF-CL) has been reported. A variety of applications is examined, including environmental, pharmaceutical, biological, food and herbal analysis. Reported uses of CL reagents, sample introduction techniques, sample pretreatment methods, CL signal enhancement and detection systems are discussed. A hydrodynamic pumping system is predominately used for these applications. However, several reports are available in which electro-osmotic (EO) pumping has been implemented. Various sample pretreatment methods have been used, including liquid-liquid extraction, solid-phase extraction and molecularly imprinted polymers. A wide range of innovative techniques has been reported for CL signal enhancement. Most of these techniques are based on enhancement of the mixing process in the microfluidics channels, which leads to enhancement of the CL signal. However, other techniques are also reported, such as mirror reaction, liquid core waveguide, on-line pre-derivatization and the use of an opaque white chip with a thin transparent seal. Photodetectors are the most commonly used detectors; however, other detection systems have also been used, including integrated electrochemiluminescence (ECL) and organic photodiodes (OPDs). Copyright © 2012 John Wiley & Sons, Ltd.

  1. Muscle activity characterization by laser Doppler Myography

    NASA Astrophysics Data System (ADS)

    Scalise, Lorenzo; Casaccia, Sara; Marchionni, Paolo; Ercoli, Ilaria; Primo Tomasini, Enrico

    2013-09-01

    Electromyography (EMG) is the gold-standard technique used for the evaluation of muscle activity. This technique is used in biomechanics, sport medicine, neurology and rehabilitation therapy, and it records the electrical activity produced by skeletal muscles. Among the quantities measured with EMG, important ones are signal amplitude and duration of muscle contraction, muscle fatigue, and maximum muscle power. Recently, a new measurement procedure, named Laser Doppler Myography (LDMi), has been proposed for the non-contact assessment of muscle activity by measuring the vibro-mechanical behaviour of the muscle. The aim of this study is to present the LDMi technique and to evaluate its capacity to measure some characteristic features of the muscle. In this paper LDMi is compared with standard surface EMG (sEMG), which requires the application of sensors on the skin of each patient. sEMG and LDMi signals have been simultaneously acquired and processed to test correlations. Three parameters have been analyzed to compare these techniques: muscle activation timing, signal amplitude and muscle fatigue. LDMi appears to be a reliable and promising measurement technique, allowing measurements without contact with the patient's skin.

  2. SIG-VISA: Signal-based Vertically Integrated Seismic Monitoring

    NASA Astrophysics Data System (ADS)

    Moore, D.; Mayeda, K. M.; Myers, S. C.; Russell, S.

    2013-12-01

    Traditional seismic monitoring systems rely on discrete detections produced by station processing software; however, while such detections may constitute a useful summary of station activity, they discard large amounts of information present in the original recorded signal. We present SIG-VISA (Signal-based Vertically Integrated Seismic Analysis), a system for seismic monitoring through Bayesian inference on seismic signals. By directly modeling the recorded signal, our approach incorporates additional information unavailable to detection-based methods, enabling higher sensitivity and more accurate localization using techniques such as waveform matching. SIG-VISA's Bayesian forward model of seismic signal envelopes includes physically-derived models of travel times and source characteristics as well as Gaussian process (kriging) statistical models of signal properties that combine interpolation of historical data with extrapolation of learned physical trends. Applying Bayesian inference, we evaluate the model on earthquakes as well as the 2009 DPRK test event, demonstrating a waveform matching effect as part of the probabilistic inference, along with results on event localization and sensitivity. In particular, we demonstrate increased sensitivity from signal-based modeling, in which the SIG-VISA signal model finds statistical evidence for arrivals even at stations for which the IMS station processing failed to register any detection.

  3. A Baseline-Free Defect Imaging Technique in Plates Using Time Reversal of Lamb Waves

    NASA Astrophysics Data System (ADS)

    Hyunjo, Jeong; Sungjong, Cho; Wei, Wei

    2011-06-01

    We present an analytical investigation of baseline-free imaging of a defect in plate-like structures using the time reversal of Lamb waves. We first consider flexural wave (A0 mode) propagation in a plate containing a defect, and the reception and time reversal of the output signal at the receiver. The received output signal is composed of two parts: a directly propagated wave and a wave scattered from the defect. Time reversal of these waves recovers the original input signal and produces two additional sidebands that contain the time-of-flight information on the defect location. One of the sideband signals is then extracted as a pure defect signal. A defect localization image is then constructed with a beamforming technique based on the time-frequency analysis of the sideband signal for each transducer pair in a network of sensors. The simulation results show that the proposed scheme enables accurate, baseline-free imaging of a defect.

  4. A Novel AMARS Technique for Baseline Wander Removal Applied to Photoplethysmogram.

    PubMed

    Timimi, Ammar A K; Ali, M A Mohd; Chellappan, K

    2017-06-01

    A new digital filter, AMARS (aligning minima of alternating random signal), has been derived using trigonometry to regulate signal pulsations inline. The pulses occur randomly in continuous signals comprising a frequency band lower than the signal's mean rate. Frequency-selective filters are conventionally employed to reject frequencies undesired by specific applications. However, these conventional filters only reduce the effects of the rejected range, producing a signal superimposed on some residual baseline wander (BW). In this work, filters of different ranges and techniques were independently configured to preprocess a photoplethysmogram, an optical biosignal of blood volume dynamics, producing wave shapes with several BWs. The application of AMARS effectively removed the encountered BWs and assembled similarly aligned trends. The removal was found to be repeatable in both ear and finger photoplethysmograms, emphasizing the importance of BW removal in biosignal processing for retaining the signal's structural, functional and physiological properties. We also believe that AMARS may be relevant to other biological and continuous signals modulated by similar types of baseline volatility.
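
    The trigonometric derivation of AMARS is not given in the abstract; the sketch below illustrates only the general idea named in the title, aligning the minima (pulse feet) of a pulsatile signal to remove baseline wander, on a synthetic photoplethysmogram. The trough detector, sampling rate, and waveform model are assumptions, not the published filter.

      import numpy as np
      from scipy.signal import find_peaks
      from scipy.interpolate import interp1d

      def align_minima(signal, fs, min_pulse_hz=0.7):
          """Remove baseline wander by detecting each pulse minimum, interpolating
          a baseline through those minima, and subtracting it, so that all pulse
          feet end up aligned to a common level."""
          min_distance = int(fs / (2.0 * min_pulse_hz))       # crude refractory period
          troughs, _ = find_peaks(-signal, distance=min_distance)
          baseline = interp1d(troughs, signal[troughs], kind="linear",
                              bounds_error=False,
                              fill_value=(signal[troughs[0]], signal[troughs[-1]]))
          return signal - baseline(np.arange(signal.size))

      # Synthetic photoplethysmogram: 1.2 Hz pulses on a slow respiratory drift.
      fs = 100.0
      t = np.arange(0, 30, 1 / fs)
      pulses = np.clip(np.sin(2 * np.pi * 1.2 * t), 0, None) ** 2
      drift = 0.5 * np.sin(2 * np.pi * 0.2 * t)
      clean = align_minima(pulses + drift, fs)
      feet, _ = find_peaks(-clean, distance=40)
      print("spread of pulse-foot levels after alignment:",
            round(float(np.ptp(clean[feet])), 3))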

  5. Application of filtering techniques in preprocessing magnetic data

    NASA Astrophysics Data System (ADS)

    Liu, Haijun; Yi, Yongping; Yang, Hongxia; Hu, Guochuang; Liu, Guoming

    2010-08-01

    High-precision magnetic exploration is a popular geophysical technique because of its simplicity and effectiveness. Interpretation in high-precision magnetic exploration is always difficult because of noise and disturbance factors, so it is necessary to find an effective preprocessing method to remove the effect of interference factors before further processing. The common way to do this is by filtering, and there are many kinds of filtering methods. In this paper we introduce in detail three popular filtering techniques: the regularized filter, the sliding-average filter, and the compensation smoothing filter. We then designed the work flow of a filtering program based on these techniques and implemented it in DELPHI. To check it, we applied it to preprocess magnetic data from a site in China. Comparing the initial contour map with the filtered contour map, the effect of our program is clearly visible: the contour map processed by our program is very smooth and the high-frequency parts of the data have been removed. After filtering, we separated useful signals from noisy signals, minor anomalies from major anomalies, and local anomalies from regional anomalies, making it easy to focus on the useful information. Our program can be used to preprocess magnetic data, and the results show its effectiveness.
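
    The paper's DELPHI implementation is not reproduced here; as an illustration of the sliding-average filtering it describes, the following sketch smooths a synthetic magnetic profile. The profile geometry, anomaly shape, and noise level are invented for illustration.

      import numpy as np

      def sliding_average(data, window=5):
          """Sliding-average filter: each sample is replaced by the mean of the
          surrounding window (edges handled by repeating the end values)."""
          half = window // 2
          padded = np.pad(data, half, mode="edge")
          kernel = np.ones(window) / window
          return np.convolve(padded, kernel, mode="valid")

      # Synthetic magnetic profile: regional trend + local anomaly + noise (nT).
      rng = np.random.default_rng(4)
      x = np.linspace(0, 1000, 501)                       # metres along profile
      regional = 0.02 * x
      anomaly = 30 * np.exp(-((x - 600) / 40.0) ** 2)
      noisy = regional + anomaly + rng.normal(0, 5, x.size)
      smoothed = sliding_average(noisy, window=11)
      print("rms deviation from noise-free profile before/after:",
            round(float((noisy - regional - anomaly).std()), 1),
            round(float((smoothed - regional - anomaly).std()), 1))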

  6. Implementation of an Antenna Array Signal Processing Breadboard for the Deep Space Network

    NASA Technical Reports Server (NTRS)

    Navarro, Robert

    2006-01-01

    The Deep Space Network Large Array will replace/augment 34- and 70-meter antenna assets. The array will mainly be used to support NASA's deep space telemetry, radio science, and navigation requirements. The array project will deploy three complexes, in the western U.S., Australia, and at a European longitude, each with 400 12-m downlink antennas, and a DSN central facility at JPL. This facility will remotely conduct all real-time monitoring and control for the network. Signal processing objectives include: provide a means to evaluate the performance of the Breadboard Array's antenna subsystem; design and build prototype hardware; demonstrate and evaluate proposed signal processing techniques; and gain experience with various technologies that may be used in the Large Array. Results are summarized.

  7. Parachute deploy/Release mechanism

    NASA Technical Reports Server (NTRS)

    Robelen, D. B.

    1979-01-01

    Mechanism operated by signals from single radio-control channel deploys and releases small drogue parachute from flying aircraft. Technique has uses in industrial process control and in recreational hobby applications.

  8. Validation of nonlinear interferometric vibrational imaging as a molecular OCT technique by the use of Raman microscopy

    NASA Astrophysics Data System (ADS)

    Benalcazar, Wladimir A.; Jiang, Zhi; Marks, Daniel L.; Geddes, Joseph B.; Boppart, Stephen A.

    2009-02-01

    We validate a molecular imaging technique called Nonlinear Interferometric Vibrational Imaging (NIVI) by comparing vibrational spectra with those acquired from Raman microscopy. This broadband coherent anti-Stokes Raman scattering (CARS) technique uses heterodyne detection and OCT acquisition and design principles to interfere a CARS signal generated by a sample with a local oscillator signal generated separately by a four-wave mixing process. These are mixed and demodulated by spectral interferometry. Its confocal configuration allows the acquisition of 3D images based on endogenous molecular signatures. Images from both phantoms and mammary tissues have been acquired by this instrument, and their spectra are compared with spontaneous Raman signatures.

  9. Broadband quantitative NQR for authentication of vitamins and dietary supplements

    NASA Astrophysics Data System (ADS)

    Chen, Cheng; Zhang, Fengchao; Bhunia, Swarup; Mandal, Soumyajit

    2017-05-01

    We describe hardware, pulse sequences, and algorithms for nuclear quadrupole resonance (NQR) spectroscopy of medicines and dietary supplements. Medicine and food safety is a pressing problem that has drawn more and more attention. NQR is an ideal technique for authenticating these substances because it is a non-invasive method for chemical identification. We have recently developed a broadband NQR front-end that can excite and detect 14N NQR signals over a wide frequency range; its operating frequency can be rapidly set by software, while sensitivity is comparable to conventional narrowband front-ends over the entire range. This front-end improves the accuracy of authentication by enabling multiple-frequency experiments. We have also developed calibration and signal processing techniques to convert measured NQR signal amplitudes into nuclear spin densities, thus enabling its use as a quantitative technique. Experimental results from several samples are used to illustrate the proposed methods.

  10. High-Speed Imaging Optical Pyrometry for Study of Boron Nitride Nanotube Generation

    NASA Technical Reports Server (NTRS)

    Inman, Jennifer A.; Danehy, Paul M.; Jones, Stephen B.; Lee, Joseph W.

    2014-01-01

    A high-speed imaging optical pyrometry system is designed for making in-situ measurements of boron temperature during the boron nitride nanotube synthesis process. Spectrometer measurements show molten boron emission to be essentially graybody in nature, lacking spectral emission fine structure over the visible range of the electromagnetic spectrum. Camera calibration experiments are performed and compared with theoretical calculations to quantitatively establish the relationship between observed signal intensity and temperature. The one-color pyrometry technique described herein involves measuring temperature based upon the absolute signal intensity observed through a narrowband spectral filter, while the two-color technique uses the ratio of the signals through two spectrally separated filters. The present study calibrated both the one- and two-color techniques at temperatures between 1,173 K and 1,591 K using a pco.dimax HD CMOS-based camera along with three such filters having transmission peaks near 550 nm, 632.8 nm, and 800 nm.
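
    As a worked illustration of the two-color principle described above, the sketch below inverts the ratio of two graybody intensities for temperature using the Wien approximation. The 632.8 nm and 800 nm bands are taken from the abstract, while the use of the Wien form and the neglect of filter bandwidth and camera response terms are simplifying assumptions, not the paper's calibration.

      import numpy as np

      C2 = 1.4388e-2          # second radiation constant, m*K

      def wien_intensity(wavelength_m, temp_k):
          """Graybody spectral intensity in the Wien approximation (arbitrary scale)."""
          return wavelength_m ** -5 * np.exp(-C2 / (wavelength_m * temp_k))

      def two_color_temperature(ratio, lam1, lam2):
          """Invert the intensity ratio I(lam1)/I(lam2) for temperature; graybody
          emission is assumed so that emissivity cancels from the ratio."""
          return C2 * (1 / lam1 - 1 / lam2) / (5 * np.log(lam2 / lam1) - np.log(ratio))

      # Round-trip check with the 632.8 nm and 800 nm filter bands at 1500 K.
      lam1, lam2, t_true = 632.8e-9, 800e-9, 1500.0
      ratio = wien_intensity(lam1, t_true) / wien_intensity(lam2, t_true)
      print("recovered temperature (K):",
            round(two_color_temperature(ratio, lam1, lam2), 1))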

  11. IMS R and D program at Canada customs

    NASA Technical Reports Server (NTRS)

    Pilon, Pierre; Mungham, Tony; Ng, Lay-Keow; Lawrence, Andre

    1995-01-01

    Over the last few years, Revenue Canada, in collaboration with Barringer Instruments Limited, has been involved in the development of a field-usable ion mobility spectrometer (IMS) for the detection of drugs of abuse. This work has culminated in the manufacturing and commercialization by Barringer of the Ionscan 350 instruments, now in use by various law enforcement agencies worldwide. Although IMS exhibits a very strong and distinctive response toward some nitrogen-containing drugs, e.g., cocaine, like all separation techniques it has inherent limitations, namely moderate resolution and a low chemical signal-to-noise ratio, which may affect the reliability of IMS-based drug detectors. A program is in place at the Laboratory and Scientific Services Directorate (LSSD) to investigate the applicability of various digital signal processing (DSP) techniques to IMS output signals. The application of neural network techniques to overlapping IMS peaks is presented.

  12. Advanced Multispectral Scanner (AMS) study. [aircraft remote sensing

    NASA Technical Reports Server (NTRS)

    1978-01-01

    The status of aircraft multispectral scanner technology was assessed in order to develop preliminary design specifications for an advanced instrument to be used for remote sensing data collection by aircraft in the 1980 time frame. The system designed provides a no-moving-parts multispectral scanning capability through the exploitation of linear-array charge-coupled-device technology and advanced electronic signal processing techniques. Major advantages include: 10:1 V/H rate capability; 120 deg FOV at V/H = 0.25 rad/sec; 1 to 2 mrad resolution; high sensitivity; large dynamic range capability; geometric fidelity; roll compensation; modularity; long life; and 24-channel data acquisition capability. The field-flattening techniques of the optical design allow a wide field of view to be achieved at fast f-numbers for both the long and short wavelength regions. The digital signal averaging technique permits maximization of signal-to-noise performance over the entire V/H rate range.

  13. Sound Is Sound: Film Sound Techniques and Infrasound Data Array Processing

    NASA Astrophysics Data System (ADS)

    Perttu, A. B.; Williams, R.; Taisne, B.; Tailpied, D.

    2017-12-01

    A multidisciplinary collaboration between earth scientists and a sound designer/composer was established to explore the possibilities of audification analysis of infrasound array data. Through the process of audification of the infrasound, we began to experiment with techniques and processes borrowed from cinema to manipulate the noise content of the signal. The results posed the question: "Would the accuracy of infrasound data array processing be enhanced by employing these techniques?" A new area of research was thus born from this collaboration, highlighting the value of such interactions and the unintended paths that can arise from them. Using a reference event database, infrasound data were processed using these new techniques and the results were compared with existing techniques to assess whether there was any improvement to the detection capability of the array. With just under one thousand volcanoes, and a high probability of eruption, Southeast Asia offers a unique opportunity to develop and test techniques for regional monitoring of volcanoes with different technologies. While these volcanoes are monitored locally (e.g. seismometer, infrasound, geodetic and geochemistry networks) and remotely (e.g. satellite and infrasound), there are challenges and limitations to the current monitoring capability. Not only is there a high fraction of cloud cover in the region, making plume observation via satellite more difficult, there have been examples of local monitoring networks and telemetry being destroyed early in an eruptive sequence. The success of local infrasound studies in identifying explosions at volcanoes, and calculating plume heights from these signals, has led to an interest in retrieving source parameters for the purpose of ash modeling with a regional network independent of cloud cover.

  14. ICC '86; Proceedings of the International Conference on Communications, Toronto, Canada, June 22-25, 1986, Conference Record. Volumes 1, 2, & 3

    NASA Astrophysics Data System (ADS)

    Papers are presented on ISDN, mobile radio systems and techniques for digital connectivity, centralized and distributed algorithms in computer networks, communications networks, quality assurance and impact on cost, adaptive filters in communications, the spread spectrum, signal processing, video communication techniques, and digital satellite services. Topics discussed include performance evaluation issues for integrated protocols, packet network operations, the computer network theory and multiple-access, microwave single sideband systems, switching architectures, fiber optic systems, wireless local communications, modulation, coding, and synchronization, remote switching, software quality, transmission, and expert systems in network operations. Consideration is given to wide area networks, image and speech processing, office communications application protocols, multimedia systems, customer-controlled network operations, digital radio systems, channel modeling and signal processing in digital communications, earth station/on-board modems, computer communications system performance evaluation, source encoding, compression, and quantization, and adaptive communications systems.

  15. Detecting blast-induced infrasound in wind noise.

    PubMed

    Howard, Wheeler B; Dillion, Kevin L; Shields, F Douglas

    2010-03-01

    Current efforts seek to monitor and investigate such naturally occurring events as volcanic eruptions, hurricanes, bolides entering the atmosphere, earthquakes, and tsunamis by the infrasound they generate. Often, detection of the infrasound signal is limited by the masking effect of wind noise. This paper describes the use of a distributed array to detect infrasound signals from four atmospheric detonations at White Sands Missile Range in New Mexico, USA in 2006. Three of the blasts occurred during times of low wind noise and were easily observed with array processing techniques. One blast was obscured by high wind conditions. The results of signal processing are presented that allowed localization of the blast-induced signals in the presence of wind noise in the array response.

  16. Digital signal processing methods for biosequence comparison.

    PubMed Central

    Benson, D C

    1990-01-01

    A method is discussed for DNA or protein sequence comparison using a finite field fast Fourier transform, a digital signal processing technique; and statistical methods are discussed for analyzing the output of this algorithm. This method compares two sequences of length N in computing time proportional to N log N, compared to N^2 for methods currently used. This method makes it feasible to compare very long sequences. An example is given to show that the method correctly identifies sites of known homology. PMID:2349096
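
    The paper uses a finite-field FFT, which is not reproduced here; the sketch below shows the closely related standard-FFT formulation of the same idea, counting base matches at every alignment offset in O(N log N) via correlation of per-base indicator sequences. The sequences and the implanted motif are synthetic.

      import numpy as np

      def match_counts(seq_a, seq_b, alphabet="ACGT"):
          """Count base matches between seq_a and seq_b at every alignment offset
          using FFT-based correlation of per-base indicator vectors."""
          n = len(seq_a) + len(seq_b) - 1
          nfft = 1 << (n - 1).bit_length()                 # next power of two
          total = np.zeros(n)
          for base in alphabet:
              a = np.array([c == base for c in seq_a], dtype=float)
              b = np.array([c == base for c in seq_b], dtype=float)
              fa = np.fft.rfft(a, nfft)
              fb = np.fft.rfft(b[::-1], nfft)              # reversing b turns the
              total += np.fft.irfft(fa * fb, nfft)[:n]     # convolution into correlation
          return np.rint(total).astype(int)

      # Toy example: a short motif hidden inside a longer random sequence.
      rng = np.random.default_rng(5)
      long_seq = "".join(rng.choice(list("ACGT"), 200))
      motif = long_seq[80:100]                             # known homologous site
      counts = match_counts(long_seq, motif)
      best = int(np.argmax(counts))
      print("best offset:", best - (len(motif) - 1), "matches:", counts[best])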

  17. SNMR pulse sequence phase cycling

    DOEpatents

    Walsh, David O; Grunewald, Elliot D

    2013-11-12

    Technologies applicable to SNMR pulse sequence phase cycling are disclosed, including SNMR acquisition apparatus and methods, SNMR processing apparatus and methods, and combinations thereof. SNMR acquisition may include transmitting two or more SNMR pulse sequences and applying a phase shift to a pulse in at least one of the pulse sequences, according to any of a variety of cycling techniques. SNMR processing may include combining SNMR data from a plurality of pulse sequences comprising pulses of different phases, so that desired signals are preserved and undesired signals are canceled.
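
    As a toy illustration of the phase-cycling idea in this patent abstract (not the patented apparatus), the sketch below combines two synthetic acquisitions whose transmit pulse phases differ by 180 degrees, so that the phase-following spin signal adds while a phase-independent ringdown term cancels. The signal model and all numbers are invented.

      import numpy as np

      def acquire(pulse_phase_deg, t, rng):
          """Toy acquisition: the spin signal follows the transmit pulse phase,
          while coherent instrument ringdown and noise do not."""
          phase = np.radians(pulse_phase_deg)
          spin = 0.5 * np.exp(-t / 0.15) * np.cos(2 * np.pi * 2000 * t + phase)
          ringdown = 0.8 * np.exp(-t / 0.05) * np.cos(2 * np.pi * 2000 * t)   # unwanted
          return spin + ringdown + rng.normal(0, 0.02, t.size)

      rng = np.random.default_rng(6)
      t = np.arange(0, 0.5, 1 / 20000.0)
      rec_0 = acquire(0, t, rng)          # pulse phase 0
      rec_180 = acquire(180, t, rng)      # pulse phase shifted by 180 degrees

      # Sign-corrected combination: spin signal adds, phase-independent ringdown cancels.
      combined = 0.5 * (rec_0 - rec_180)
      print("early-time energy before vs after cycling:",
            round(float(np.sum(rec_0[:200] ** 2)), 2),
            round(float(np.sum(combined[:200] ** 2)), 2))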

  18. Developments in signal processing and interpretation in laser tapping

    NASA Astrophysics Data System (ADS)

    Perton, M.; Neron, C.; Blouin, A.; Monchalin, J.-P.

    2013-01-01

    A novel technique, called laser tapping, based on laser thermoelastic excitation as in laser-ultrasonics, has previously been introduced for inspecting honeycomb and foam core structures. If the top skin is delaminated or detached from the substrate, the detached layer is driven into vibration. The interpretation of the vibrations in terms of Lamb wave resonances is first discussed for a flat-bottom-hole configuration and then used to determine appropriate signal processing for samples such as honeycomb structures.

  19. Dynamic single sideband modulation for realizing parametric loudspeaker

    NASA Astrophysics Data System (ADS)

    Sakai, Shinichi; Kamakura, Tomoo

    2008-06-01

    A parametric loudspeaker, which presents remarkably narrow directivity compared with a conventional loudspeaker, was newly produced and examined. To operate the loudspeaker optimally, we digitally prototyped a single-sideband modulator based on the Weaver method and appropriate signal processing. The processing techniques are to change the carrier amplitude dynamically depending on the envelope of the audio signal, and then to apply the square root or fourth root to the carrier amplitude to improve input-output acoustic linearity. The usefulness of the present modulation scheme has been verified experimentally.

  20. pySPACE—a signal processing and classification environment in Python

    PubMed Central

    Krell, Mario M.; Straube, Sirko; Seeland, Anett; Wöhrle, Hendrik; Teiwes, Johannes; Metzen, Jan H.; Kirchner, Elsa A.; Kirchner, Frank

    2013-01-01

    In neuroscience large amounts of data are recorded to provide insights into cerebral information processing and function. The successful extraction of the relevant signals becomes more and more challenging due to increasing complexities in acquisition techniques and questions addressed. Here, automated signal processing and machine learning tools can help to process the data, e.g., to separate signal and noise. With the presented software pySPACE (http://pyspace.github.io/pyspace), signal processing algorithms can be compared and applied automatically on time series data, either with the aim of finding a suitable preprocessing, or of training supervised algorithms to classify the data. pySPACE originally has been built to process multi-sensor windowed time series data, like event-related potentials from the electroencephalogram (EEG). The software provides automated data handling, distributed processing, modular build-up of signal processing chains and tools for visualization and performance evaluation. Included in the software are various algorithms like temporal and spatial filters, feature generation and selection, classification algorithms, and evaluation schemes. Further, interfaces to other signal processing tools are provided and, since pySPACE is a modular framework, it can be extended with new algorithms according to individual needs. In the presented work, the structural hierarchies are described. It is illustrated how users and developers can interface the software and execute offline and online modes. Configuration of pySPACE is realized with the YAML format, so that programming skills are not mandatory for usage. The concept of pySPACE is to have one comprehensive tool that can be used to perform complete signal processing and classification tasks. It further allows to define own algorithms, or to integrate and use already existing libraries. PMID:24399965

  1. pySPACE-a signal processing and classification environment in Python.

    PubMed

    Krell, Mario M; Straube, Sirko; Seeland, Anett; Wöhrle, Hendrik; Teiwes, Johannes; Metzen, Jan H; Kirchner, Elsa A; Kirchner, Frank

    2013-01-01

    In neuroscience large amounts of data are recorded to provide insights into cerebral information processing and function. The successful extraction of the relevant signals becomes more and more challenging due to increasing complexities in acquisition techniques and questions addressed. Here, automated signal processing and machine learning tools can help to process the data, e.g., to separate signal and noise. With the presented software pySPACE (http://pyspace.github.io/pyspace), signal processing algorithms can be compared and applied automatically on time series data, either with the aim of finding a suitable preprocessing, or of training supervised algorithms to classify the data. pySPACE originally has been built to process multi-sensor windowed time series data, like event-related potentials from the electroencephalogram (EEG). The software provides automated data handling, distributed processing, modular build-up of signal processing chains and tools for visualization and performance evaluation. Included in the software are various algorithms like temporal and spatial filters, feature generation and selection, classification algorithms, and evaluation schemes. Further, interfaces to other signal processing tools are provided and, since pySPACE is a modular framework, it can be extended with new algorithms according to individual needs. In the presented work, the structural hierarchies are described. It is illustrated how users and developers can interface the software and execute offline and online modes. Configuration of pySPACE is realized with the YAML format, so that programming skills are not mandatory for usage. The concept of pySPACE is to have one comprehensive tool that can be used to perform complete signal processing and classification tasks. It further allows to define own algorithms, or to integrate and use already existing libraries.

  2. An additional study and implementation of tone calibrated technique of modulation

    NASA Technical Reports Server (NTRS)

    Rafferty, W.; Bechtel, L. K.; Lay, N. E.

    1985-01-01

    The Tone Calibrated Technique (TCT) was shown to be theoretically free from an error floor and is only limited, in practice, by implementation constraints. The concept of the TCT transmission scheme, along with a baseband implementation of a suitable demodulator, is introduced. Two techniques for the generation of the TCT signal are considered: a Manchester source encoding scheme (MTCT) and a subcarrier-based technique (STCT). The results of the TCT link computer simulation are summarized. The hardware implementation of the MTCT system is addressed and the digital signal processing design considerations involved in satisfying the modulator/demodulator requirements are outlined. The program findings are discussed and future directions are suggested based on conclusions made regarding the suitability of the TCT system for the transmission channel presently under consideration.

  3. Electrical alternans during rest and exercise as predictors of vulnerability to ventricular arrhythmias

    NASA Technical Reports Server (NTRS)

    Estes, N. A. 3rd; Michaud, G.; Zipes, D. P.; El-Sherif, N.; Venditti, F. J.; Rosenbaum, D. S.; Albrecht, P.; Wang, P. J.; Cohen, R. J.

    1997-01-01

    This investigation was performed to evaluate the feasibility of detecting repolarization alternans with the heart rate elevated by a bicycle exercise protocol. Sensitive spectral signal-processing techniques are able to detect beat-to-beat alternation of the amplitude of the T wave that is not visible on the standard electrocardiogram. Previous animal and human investigations using atrial or ventricular pacing have demonstrated that T-wave alternans is a marker of vulnerability to ventricular arrhythmias. Using a spectral analysis technique incorporating noise-reduction signal-processing software, we evaluated electrical alternans at rest and with the heart rate elevated during a bicycle exercise protocol. In this study we defined optimal criteria for electrical alternans to separate patients with inducible arrhythmias from those without. Alternans and signal-averaged electrocardiographic results were compared with vulnerability to ventricular arrhythmias as defined by induction of sustained ventricular tachycardia or fibrillation at electrophysiologic evaluation. In 27 patients, alternans recorded at rest and with exercise had a sensitivity of 89%, specificity of 75%, and overall clinical accuracy of 80% (p < 0.003). In this patient population the signal-averaged electrocardiogram was not a significant predictor of arrhythmia vulnerability. This is the first study to report that repolarization alternans can be detected with the heart rate elevated by a bicycle exercise protocol. Alternans measured using this technique is an accurate predictor of arrhythmia inducibility.
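
    The clinical system uses proprietary noise-reduction software; the sketch below shows only the core spectral idea, testing the beat-to-beat series of T-wave amplitudes for power at 0.5 cycles/beat relative to a nearby noise band. The synthetic beat series, the chosen noise band, and the k-score form are illustrative assumptions, not the study's processing chain.

      import numpy as np

      def alternans_score(t_wave_amplitudes):
          """Spectral test for T-wave alternans: power of the beat series at
          0.5 cycles/beat compared with the noise power at nearby frequencies."""
          x = np.asarray(t_wave_amplitudes, dtype=float)
          x = x - x.mean()
          spec = np.abs(np.fft.rfft(x)) ** 2 / len(x)
          freqs = np.fft.rfftfreq(len(x), d=1.0)            # cycles per beat
          alt_power = spec[np.argmin(np.abs(freqs - 0.5))]
          noise_band = spec[(freqs > 0.40) & (freqs < 0.46)]
          return (alt_power - noise_band.mean()) / noise_band.std()

      # 128 beats with a 25 microvolt A-B-A-B alternation buried in noise.
      rng = np.random.default_rng(7)
      beats = 300 + 12.5 * (-1.0) ** np.arange(128) + rng.normal(0, 20, 128)
      print("alternans k-score:", round(float(alternans_score(beats)), 1))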

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, Yanmei; Li, Xinli; Bai, Yan

    The measurement of multiphase flow parameters is of great importance in a wide range of industries. In multiphase measurement, the signals from the sensors are extremely weak and often buried in strong background noise. It is thus desirable to develop effective signal processing techniques that can detect the weak signal in the sensor outputs. In this paper, two methods, the lock-in amplifier (LIA) and an improved Duffing chaotic oscillator, are compared for detecting and processing the weak signal. For a sinusoidal signal buried in noise, correlation detection with a sinusoidal reference signal is simulated using the LIA. The improved Duffing chaotic oscillator method, which is based on the Wigner transformation, can restore the signal waveform and detect its frequency. The two methods are combined to detect and extract the weak signal. Simulation results show the effectiveness and accuracy of the proposed improved method. The comparative analysis shows that the improved Duffing chaotic oscillator method strongly suppresses noise since it is sensitive to initial conditions.
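
    The record does not give the LIA implementation; the following minimal digital lock-in sketch shows the correlation detection with a sinusoidal reference that the abstract describes, applied to a synthetic weak sinusoid in noise. The frequencies, amplitudes, and noise level are invented for illustration.

      import numpy as np

      def lock_in(signal, fs, f_ref):
          """Digital lock-in amplifier: correlate with quadrature references at the
          known frequency and average to estimate amplitude and phase."""
          t = np.arange(signal.size) / fs
          i_comp = np.mean(signal * np.cos(2 * np.pi * f_ref * t))   # in-phase
          q_comp = np.mean(signal * np.sin(2 * np.pi * f_ref * t))   # quadrature
          amplitude = 2 * np.hypot(i_comp, q_comp)
          phase = np.arctan2(-q_comp, i_comp)
          return amplitude, phase

      # A 0.05 V sinusoid at 37 Hz buried in noise about 20 times stronger.
      rng = np.random.default_rng(8)
      fs, dur, f_ref = 5000.0, 10.0, 37.0
      t = np.arange(0, dur, 1 / fs)
      weak = 0.05 * np.cos(2 * np.pi * f_ref * t + 0.6)
      noisy = weak + rng.normal(0, 1.0, t.size)
      amp, ph = lock_in(noisy, fs, f_ref)
      print("recovered amplitude and phase:", round(float(amp), 3), round(float(ph), 2))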

  5. Advanced Ultrasonic Measurement Methodology for Non-Invasive Interrogation and Identification of Fluids in Sealed Containers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tucker, Brian J.; Diaz, Aaron A.; Eckenrode, Brian A.

    2006-03-16

    The Hazardous Materials Response Unit (HMRU) and the Counterterrorism and Forensic Science Research Unit (CTFSRU), Laboratory Division, Federal Bureau of Investigation (FBI) have been mandated to develop and establish a wide range of unprecedented capabilities for providing scientific and technical forensic services to investigations involving hazardous chemical, biological, and radiological materials, including extremely dangerous chemical and biological warfare agents. Pacific Northwest National Laboratory (PNNL) has developed a portable, hand-held, hazardous materials acoustic inspection device (HAZAID) that provides noninvasive container interrogation and material identification capabilities using nondestructive ultrasonic velocity and attenuation measurements. Due to the wide variety of fluids as well as container sizes and materials, the need for high measurement sensitivity and advanced ultrasonic measurement techniques was identified. The HAZAID prototype was developed using a versatile electronics platform, advanced ultrasonic wave propagation methods, and advanced signal processing techniques. This paper primarily focuses on the ultrasonic measurement methods and signal processing techniques incorporated into the HAZAID prototype. High-bandwidth ultrasonic transducers combined with an advanced pulse compression technique allowed researchers to 1) impart large amounts of energy, 2) obtain high signal-to-noise ratios, and 3) obtain accurate and consistent time-of-flight (TOF) measurements through a variety of highly attenuative containers and fluid media. Results of this feasibility study demonstrated that the HAZAID measurement technique also provides information regarding container properties, which will be utilized in future container-independent measurements of hidden liquids.
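
    HAZAID's internals are not public in this abstract; the sketch below illustrates generic pulse compression of the kind mentioned, cross-correlating a noisy, attenuated echo with a transmitted linear chirp to recover time of flight. The sampling rate, chirp band, delay, and attenuation are assumptions, not instrument parameters.

      import numpy as np
      from scipy.signal import chirp, correlate

      fs = 5e6                                   # 5 MHz sampling (illustrative)
      t_pulse = np.arange(0, 200e-6, 1 / fs)     # 200 microsecond excitation
      tx = chirp(t_pulse, f0=0.2e6, f1=1.0e6, t1=t_pulse[-1], method="linear")

      # Simulated echo: delayed by 150 microseconds, heavily attenuated, noisy.
      rng = np.random.default_rng(9)
      delay_samples = int(150e-6 * fs)
      rx = np.zeros(4000)
      rx[delay_samples:delay_samples + tx.size] += 0.02 * tx
      rx += rng.normal(0, 0.05, rx.size)

      # Pulse compression: cross-correlate the received trace with the transmit chirp.
      compressed = correlate(rx, tx, mode="valid")
      tof = np.argmax(np.abs(compressed)) / fs
      print("estimated time of flight (us):", round(tof * 1e6, 1))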

  6. An Intelligent Computer-Based System for Sign Language Tutoring

    ERIC Educational Resources Information Center

    Ritchings, Tim; Khadragi, Ahmed; Saeb, Magdy

    2012-01-01

    A computer-based system for sign language tutoring has been developed using a low-cost data glove and a software application that processes the movement signals for signs in real-time and uses Pattern Matching techniques to decide if a trainee has closely replicated a teacher's recorded movements. The data glove provides 17 movement signals from…

  7. Improved Target Detection in Urban Structures Using Distributed Sensing and Fast Data Acquisition Techniques

    DTIC Science & Technology

    2013-04-01

    Trans. Signal Process., vol. 57, no. 6, pp. 2275-2284, 2009. [83] A. Gurbuz, J. McClellan, and W. Scott, "Compressive sensing for subsurface imaging using ground penetrating radar," Signal Process., vol. 89, no. 10, pp. 1959-1972, 2009. [84] A. Gurbuz, J. McClellan, and W. Scott, "A

  8. Optical Sensing And Imaging Opportunities

    DTIC Science & Technology

    2016-02-12

    Functional Materials Workshops, supported by AFOSR. Potentially useful new research areas: plasmonics (infrared antennae); IV-VI (lead salt) infrared photodetectors and focal plane arrays; hexagonal ferrite thin films for Q-band signal processing devices. Plasmonics: new techniques for transmitting optical signals through nano-scale

  9. Tolerance analysis program

    NASA Technical Reports Server (NTRS)

    Watson, H. K.

    1971-01-01

    Digital computer program determines tolerance values of an end-to-end signal chain or flow path, given a preselected probability value. Technique is useful in the synthesis and analysis phases of subsystem design processes.

  10. A sequential method for spline approximation with variable knots. [recursive piecewise polynomial signal processing

    NASA Technical Reports Server (NTRS)

    Mier Muth, A. M.; Willsky, A. S.

    1978-01-01

    In this paper we describe a method for approximating a waveform by a spline. The method is quite efficient, as the data are processed sequentially. The basis of the approach is to view the approximation problem as a question of estimation of a polynomial in noise, with the possibility of abrupt changes in the highest derivative. This allows us to bring several powerful statistical signal processing tools into play. We also present some initial results on the application of our technique to the processing of electrocardiograms, where the knot locations themselves may be some of the most important pieces of diagnostic information.

  11. Real-time optical fiber digital speckle pattern interferometry for industrial applications

    NASA Astrophysics Data System (ADS)

    Chan, Robert K.; Cheung, Y. M.; Lo, C. H.; Tam, T. K.

    1997-03-01

    There is current interest, especially in the industrial sector, in using the digital speckle pattern interferometry (DSPI) technique to measure surface stress. Indeed, the many publications on the subject are evidence of the growing interest in the field. However, bringing the technology to industrial use requires the integration of several emerging technologies, viz. optics, feedback control, electronics, image processing and digital signal processing. Due to the highly interdisciplinary nature of the technique, successful implementation and development require expertise in all of these fields. At Baptist University, under the funding of a major industrial grant, we are developing the technology for the industrial sector. Our system fully exploits optical fibers and diode lasers in the design to enable practical and rugged systems suited to industrial applications. Besides the development in optics, we have broken away from reliance on a microcomputer PC platform for both image capture and processing, and have developed a digital signal processing array system that can handle simultaneous and independent image capture/processing with feedback control. The system, named CASPA for 'cascadable architecture signal processing array', is a third-generation development system that utilizes up to 7 digital signal processors and has proved to be very powerful. With CASPA we are now in a better position to develop novel optical measurement systems for industrial applications that may require different measurement systems to operate concurrently and to exchange information between them. Applications in mind, such as simultaneous in-plane and out-of-plane DSPI image capture/processing, vibration analysis with interactive DSPI, and phase-shifting control of optical systems, are a few good examples of the potential.

  12. A New Methodology for Vibration Error Compensation of Optical Encoders

    PubMed Central

    Lopez, Jesus; Artes, Mariano

    2012-01-01

    Optical encoders are sensors based on grating interference patterns. Tolerances inherent to the manufacturing process can induce errors in position accuracy as the measurement signals depart from ideal conditions. If the encoder is working under vibration, the oscillating movement of the scanning head is registered by the encoder system as a displacement, introducing an error into the counter that adds to graduation, system and installation errors. Behavior can be improved with different techniques that compensate for the error by processing the measurement signals. In this work a new "ad hoc" methodology is presented to compensate for the error of the encoder when it is working under the influence of vibration. The methodology is based on fitting techniques applied to the Lissajous figure of the deteriorated measurement signals and the use of a look-up table, resulting in a compensation procedure that yields higher sensor accuracy. PMID:22666067

  13. Mathematical models utilized in the retrieval of displacement information encoded in fringe patterns

    NASA Astrophysics Data System (ADS)

    Sciammarella, Cesar A.; Lamberti, Luciano

    2016-02-01

    All techniques that measure displacements, whether in the range of visible optics or in any other form of field method, require the presence of a carrier signal. A carrier signal is a wave form modulated (modified) by an input, the deformation of the medium. The carrier is tagged to the medium under analysis and deforms with it. The wave form must be known in both the unmodulated and the modulated conditions. There are two basic mathematical models that can be utilized to decode the information contained in the carrier, phase modulation and frequency modulation, and the two are closely connected. Basic problems connected to the detection and recovery of displacement information that are common to all optical techniques are analyzed in this paper, focusing on the general theory common to all the methods independently of the type of signal utilized. The aspects discussed are those that have practical impact on the process of data gathering and data processing.

  14. Enhancing speech recognition using improved particle swarm optimization based hidden Markov model.

    PubMed

    Selvaraj, Lokesh; Ganesan, Balakrishnan

    2014-01-01

    Enhancing speech recognition is the primary intention of this work. In this paper a novel speech recognition method based on vector quantization and improved particle swarm optimization (IPSO) is suggested. The suggested methodology contains four stages, namely, (i) denoising, (ii) feature extraction, (iii) vector quantization, and (iv) an IPSO-based hidden Markov model (HMM) technique (IP-HMM). At first, the speech signals are denoised using a median filter. Next, characteristics such as peak, pitch spectrum, Mel frequency cepstral coefficients (MFCC), mean, standard deviation, and minimum and maximum of the signal are extracted from the denoised signal. Following that, to accomplish the training process, the extracted characteristics are given to genetic-algorithm-based codebook generation in vector quantization. The initial populations are created by selecting random code vectors from the training set for the codebooks in the genetic algorithm process, and IP-HMM performs the recognition. The novelty here lies mainly in the crossover genetic operation. The proposed speech recognition technique offers 97.14% accuracy.

  15. The impact of global signal regression on resting state correlations: Are anti-correlated networks introduced?

    PubMed Central

    Murphy, Kevin; Birn, Rasmus M.; Handwerker, Daniel A.; Jones, Tyler B.; Bandettini, Peter A.

    2009-01-01

    Low-frequency fluctuations in fMRI signal have been used to map several consistent resting state networks in the brain. Using the posterior cingulate cortex as a seed region, functional connectivity analyses have found not only positive correlations in the default mode network but negative correlations in another resting state network related to attentional processes. The interpretation is that the human brain is intrinsically organized into dynamic, anti-correlated functional networks. Global variations of the BOLD signal are often considered nuisance effects and are commonly removed using a general linear model (GLM) technique. This global signal regression method has been shown to introduce negative activation measures in standard fMRI analyses. The topic of this paper is whether such a correction technique could be the cause of anti-correlated resting state networks in functional connectivity analyses. Here we show that, after global signal regression, correlation values to a seed voxel must sum to a negative value. Simulations also show that small phase differences between regions can lead to spurious negative correlation values. A combination breath holding and visual task demonstrates that the relative phase of global and local signals can affect connectivity measures and that, experimentally, global signal regression leads to bell-shaped correlation value distributions, centred on zero. Finally, analyses of negatively correlated networks in resting state data show that global signal regression is most likely the cause of anti-correlations. These results call into question the interpretation of negatively correlated regions in the brain when using global signal regression as an initial processing step. PMID:18976716
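
    A minimal sketch of the global signal regression step analyzed in this paper, and of the resulting shift of seed correlations toward zero and negative values, using synthetic 'voxel' time series rather than fMRI data (the voxel count, time points, and shared-signal model are illustrative assumptions):

      import numpy as np

      def regress_out_global(data):
          """Remove the global mean time course from every voxel by least squares
          (the GLM 'global signal regression' preprocessing step)."""
          g = data.mean(axis=0)                       # global signal, shape (time,)
          g = g - g.mean()
          betas = data @ g / (g @ g)                  # per-voxel regression weights
          return data - np.outer(betas, g)

      def seed_corrs(d, seed_ts):
          return np.array([np.corrcoef(seed_ts, v)[1, 0] for v in d[1:]])

      # Synthetic 'voxels': a shared fluctuation plus independent noise.
      rng = np.random.default_rng(10)
      n_vox, n_t = 200, 300
      shared = rng.normal(size=n_t)
      data = 0.8 * shared + rng.normal(size=(n_vox, n_t))

      before = seed_corrs(data, data[0])
      clean = regress_out_global(data)
      after = seed_corrs(clean, clean[0])
      print("mean seed correlation before vs after GSR:",
            round(float(before.mean()), 3), round(float(after.mean()), 3))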

  16. The impact of global signal regression on resting state correlations: are anti-correlated networks introduced?

    PubMed

    Murphy, Kevin; Birn, Rasmus M; Handwerker, Daniel A; Jones, Tyler B; Bandettini, Peter A

    2009-02-01

    Low-frequency fluctuations in fMRI signal have been used to map several consistent resting state networks in the brain. Using the posterior cingulate cortex as a seed region, functional connectivity analyses have found not only positive correlations in the default mode network but negative correlations in another resting state network related to attentional processes. The interpretation is that the human brain is intrinsically organized into dynamic, anti-correlated functional networks. Global variations of the BOLD signal are often considered nuisance effects and are commonly removed using a general linear model (GLM) technique. This global signal regression method has been shown to introduce negative activation measures in standard fMRI analyses. The topic of this paper is whether such a correction technique could be the cause of anti-correlated resting state networks in functional connectivity analyses. Here we show that, after global signal regression, correlation values to a seed voxel must sum to a negative value. Simulations also show that small phase differences between regions can lead to spurious negative correlation values. A combination breath holding and visual task demonstrates that the relative phase of global and local signals can affect connectivity measures and that, experimentally, global signal regression leads to bell-shaped correlation value distributions, centred on zero. Finally, analyses of negatively correlated networks in resting state data show that global signal regression is most likely the cause of anti-correlations. These results call into question the interpretation of negatively correlated regions in the brain when using global signal regression as an initial processing step.

  17. Ultrafast Laser-Based Spectroscopy and Sensing: Applications in LIBS, CARS, and THz Spectroscopy

    PubMed Central

    Leahy-Hoppa, Megan R.; Miragliotta, Joseph; Osiander, Robert; Burnett, Jennifer; Dikmelik, Yamac; McEnnis, Caroline; Spicer, James B.

    2010-01-01

    Ultrafast pulsed lasers find application in a range of spectroscopy and sensing techniques including laser induced breakdown spectroscopy (LIBS), coherent Raman spectroscopy, and terahertz (THz) spectroscopy. Whether based on absorption or emission processes, the characteristics of these techniques are heavily influenced by the use of ultrafast pulses in the signal generation process. Depending on the energy of the pulses used, the essential laser interaction process can primarily involve lattice vibrations, molecular rotations, or a combination of excited states produced by laser heating. While some of these techniques are currently confined to sensing at close ranges, others can be implemented for remote spectroscopic sensing owing principally to the laser pulse duration. We present a review of ultrafast laser-based spectroscopy techniques and discuss the use of these techniques to current and potential chemical and environmental sensing applications. PMID:22399883

  18. Objective fitting of hemoglobin dynamics in traumatic bruises based on temperature depth profiling

    NASA Astrophysics Data System (ADS)

    Vidovič, Luka; Milanič, Matija; Majaron, Boris

    2014-02-01

    Pulsed photothermal radiometry (PPTR) allows noninvasive measurement of laser-induced temperature depth profiles. The obtained profiles provide information on the depth distribution of absorbing chromophores, such as melanin and hemoglobin. We apply this technique to objectively characterize the mass diffusion and decomposition rate of extravasated hemoglobin during the bruise healing process. In the present study, we introduce objective fitting of PPTR data obtained over the course of the bruise healing process. By applying Monte Carlo simulation of laser energy deposition and simulation of the corresponding PPTR signal, quantitative analysis of the underlying bruise healing processes is possible. Introduction of objective fitting enables an objective comparison between the simulated and experimental PPTR signals. In this manner, we avoid reconstruction of laser-induced depth profiles and thus the inherent loss of information in that process. This approach enables us to determine the value of hemoglobin mass diffusivity, which is controversial in the existing literature. Such information will be a valuable addition to existing bruise age determination techniques.

  19. On the use of fractional order PK-PD models

    NASA Astrophysics Data System (ADS)

    Ionescu, Clara; Copot, Dana

    2017-01-01

    Quantifying and controlling depth of anesthesia is a challenging process due to the lack of measurement technology for the direct effects of drug supply into the body. Efforts are being made to develop new sensor techniques, and new horizons are being explored for modeling this intricate process. This paper introduces emerging tools available on the ‘engineering market’ imported from the area of fractional calculus. A novel interpretation of the classical drug-effect curve is given, enabling linear control. This broadens the horizon of signal processing and control techniques and suggests future research lines.

  20. Bladed disc crack diagnostics using blade passage signals

    NASA Astrophysics Data System (ADS)

    Hanachi, Houman; Liu, Jie; Banerjee, Avisekh; Koul, Ashok; Liang, Ming; Alavi, Elham

    2012-12-01

    One of the major potential faults in a turbofan engine is crack initiation and propagation in bladed discs under cyclic loads, which could result in the breakdown of the engine if not detected at an early stage. Reliable fault detection techniques are therefore in demand to reduce maintenance cost and prevent catastrophic failures. Although a number of approaches have been reported in the literature, it remains very challenging to develop a reliable technique to accurately estimate the health condition of a rotating bladed disc. Correspondingly, this paper presents a novel technique for bladed disc crack detection through two sequential signal processing stages: (1) signal preprocessing that aims to eliminate the noises in the blade passage signals; (2) signal postprocessing that intends to identify the crack location. In the first stage, physics-based modeling and interpretation are established to help characterize the noises. The crack initiation can be determined based on the calculated health monitoring index derived from the sinusoidal effects. In the second stage, the crack is located through advanced detrended fluctuation analysis of the preprocessed data. The proposed technique is validated using a set of spin rig test data (i.e. tip clearance and time of arrival) that was acquired during a test conducted on a bladed military engine fan disc. The test results have demonstrated that the developed technique is an effective approach for identifying and locating the incipient crack that occurs at the root of a bladed disc.

  1. The neural basis of functional neuroimaging signal with positron and single-photon emission tomography.

    PubMed

    Sestini, S

    2007-07-01

    Functional imaging techniques such as positron and single-photon emission tomography exploit the relationship between neural activity, energy demand and cerebral blood flow to functionally map the brain. Despite the fact that neurobiological processes are not completely understood, several results have revealed the signals that trigger the metabolic and vascular changes accompanying variations in neural activity. Advances in this field have demonstrated that release of the major excitatory neurotransmitter glutamate initiates diverse signaling processes between neurons, astrocytes and blood perfusion, and that this signaling is crucial for the occurrence of brain imaging signals. Better understanding of the neural sites of energy consumption and the temporal correlation between energy demand, energy consumption and associated cerebrovascular hemodynamics gives novel insight into the potential of these imaging tools in the study of metabolic neurodegenerative disorders.

  2. Thermal Radiometer Signal Processing Using Radiation Hard CMOS Application Specific Integrated Circuits for Use in Harsh Planetary Environments

    NASA Technical Reports Server (NTRS)

    Quilligan, G.; DuMonthier, J.; Aslam, S.; Lakew, B.; Kleyner, I.; Katz, R.

    2015-01-01

    Thermal radiometers such as proposed for the Europa Clipper flyby mission require low noise signal processing for thermal imaging with immunity to Total Ionizing Dose (TID) and Single Event Latchup (SEL). Described is a second generation Multi-Channel Digitizer (MCD2G) Application Specific Integrated Circuit (ASIC) that accurately digitizes up to 40 thermopile pixels with TID immunity greater than 50 Mrad (Si) and SEL immunity to 174 MeV·cm²/mg. The MCD2G ASIC uses Radiation Hardened By Design (RHBD) techniques with a 180 nm CMOS process node.

  3. Thermal Radiometer Signal Processing using Radiation Hard CMOS Application Specific Integrated Circuits for use in Harsh Planetary Environments

    NASA Astrophysics Data System (ADS)

    Quilligan, G.; DuMonthier, J.; Aslam, S.; Lakew, B.; Kleyner, I.; Katz, R.

    2015-10-01

    Thermal radiometers such as proposed for the Europa Clipper flyby mission [1] require low noise signal processing for thermal imaging with immunity to Total Ionizing Dose (TID) and Single Event Latchup (SEL). Described is a second generation Multi-Channel Digitizer (MCD2G) Application Specific Integrated Circuit (ASIC) that accurately digitizes up to 40 thermopile pixels with TID immunity greater than 50 Mrad (Si) and SEL immunity to 174 MeV·cm²/mg. The MCD2G ASIC uses Radiation Hardened By Design (RHBD) techniques with a 180 nm CMOS process node.

  4. Digital CODEC for real-time processing of broadcast quality video signals at 1.8 bits/pixel

    NASA Technical Reports Server (NTRS)

    Shalkhauser, Mary JO; Whyte, Wayne A., Jr.

    1989-01-01

    Advances in very large-scale integration and recent work in the field of bandwidth efficient digital modulation techniques have combined to make digital video processing technically feasible and potentially cost competitive for broadcast quality television transmission. A hardware implementation was developed for a DPCM-based digital television bandwidth compression algorithm which processes standard NTSC composite color television signals and produces broadcast quality video in real time at an average of 1.8 bits/pixel. The data compression algorithm and the hardware implementation of the CODEC are described, and performance results are provided.

  5. Digital CODEC for real-time processing of broadcast quality video signals at 1.8 bits/pixel

    NASA Technical Reports Server (NTRS)

    Shalkhauser, Mary JO; Whyte, Wayne A.

    1991-01-01

    Advances in very large scale integration and recent work in the field of bandwidth efficient digital modulation techniques have combined to make digital video processing technically feasible and potentially cost competitive for broadcast quality television transmission. A hardware implementation was developed for a DPCM (differential pulse code modulation)-based digital television bandwidth compression algorithm which processes standard NTSC composite color television signals and produces broadcast quality video in real time at an average of 1.8 bits/pixel. The data compression algorithm and the hardware implementation of the codec are described, and performance results are provided.
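
    Both codec records above are built on differential PCM. As a rough illustration of the principle only (a generic DPCM sketch with an assumed quantizer and step size, not the NASA codec hardware), the example below encodes a sample stream as quantized prediction errors against the previously reconstructed sample, which is where the bit-rate saving comes from.

    ```python
    import numpy as np

    def dpcm_encode(x, step=0.05, levels=15):
        """Encode samples as quantized prediction errors (previous reconstructed sample as predictor)."""
        codes, recon = [], []
        prev, half = 0.0, levels // 2
        for s in x:
            err = s - prev
            q = int(np.clip(np.round(err / step), -half, half))  # quantized prediction error
            codes.append(q)
            prev = prev + q * step                                # decoder-tracked reconstruction
            recon.append(prev)
        return np.array(codes), np.array(recon)

    def dpcm_decode(codes, step=0.05):
        return np.cumsum(codes) * step

    t = np.linspace(0, 1, 1000)
    x = 0.8 * np.sin(2 * np.pi * 5 * t)                  # smooth stand-in for one scan line
    codes, _ = dpcm_encode(x)
    xhat = dpcm_decode(codes)
    print("RMS reconstruction error: %.4f" % np.sqrt(np.mean((x - xhat) ** 2)))
    ```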

  6. Incorporating signal-dependent noise for hyperspectral target detection

    NASA Astrophysics Data System (ADS)

    Morman, Christopher J.; Meola, Joseph

    2015-05-01

    The majority of hyperspectral target detection algorithms are developed from statistical data models employing stationary background statistics or white Gaussian noise models. Stationary background models are inaccurate as a result of two separate physical processes. First, varying background classes often exist in the imagery that possess different clutter statistics. Many algorithms can account for this variability through the use of subspaces or clustering techniques. The second physical process, which is often ignored, is a signal-dependent sensor noise term. For photon counting sensors that are often used in hyperspectral imaging systems, sensor noise increases as the measured signal level increases as a result of Poisson random processes. This work investigates the impact of this sensor noise on target detection performance. A linear noise model is developed describing sensor noise variance as a linear function of signal level. The linear noise model is then incorporated for detection of targets using data collected at Wright Patterson Air Force Base.
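
    The linear signal-dependent noise model described above (variance growing linearly with signal level, as expected for photon-counting sensors) can be estimated from calibration data in a few lines. The sketch below is a synthetic stand-in, not the authors' sensor model: it simulates Poisson counts plus read noise at several signal levels and fits variance = a + b·signal by least squares.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    levels = np.linspace(50, 2000, 20)                    # calibration signal levels (mean photon counts)
    means, variances = [], []
    for lvl in levels:
        meas = rng.poisson(lvl, size=2000).astype(float)  # shot noise: variance equals the mean
        meas += rng.normal(0.0, 5.0, size=meas.size)      # additive, signal-independent read noise
        means.append(meas.mean())
        variances.append(meas.var())
    means, variances = np.array(means), np.array(variances)

    # Fit var = a + b * signal  (b ~ 1 for shot-noise-dominated counts, a ~ read-noise variance)
    A = np.vstack([np.ones_like(means), means]).T
    (a, b), *_ = np.linalg.lstsq(A, variances, rcond=None)
    print("fitted noise model: variance = %.1f + %.3f * signal" % (a, b))
    ```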

  7. A MUSIC-based method for SSVEP signal processing.

    PubMed

    Chen, Kun; Liu, Quan; Ai, Qingsong; Zhou, Zude; Xie, Sheng Quan; Meng, Wei

    2016-03-01

    Research on brain computer interfaces (BCIs) has become a hotspot in recent years because BCIs enable disabled people to communicate with the outside world. Steady state visual evoked potential (SSVEP)-based BCIs are more widely used because of their higher signal-to-noise ratio and greater information transfer rate compared with other BCI techniques. In this paper, a multiple signal classification based method is proposed for multi-dimensional SSVEP feature extraction. Two-second data epochs from four electrodes achieved excellent accuracy rates, including idle state detection. In some asynchronous mode experiments, the recognition accuracy reached up to 100%. The experimental results showed that the proposed method attained good frequency resolution. In most situations, the recognition accuracy was higher than that of canonical correlation analysis, which is a typical method for multi-channel SSVEP signal processing. Also, a virtual keyboard was successfully controlled by different subjects in an unshielded environment, which proved the feasibility of the proposed method for multi-dimensional SSVEP signal processing in practical applications.
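
    A minimal single-channel sketch in the spirit of the multiple signal classification (MUSIC) approach above, using synthetic SSVEP-like data and assumed parameters rather than the authors' implementation: an autocorrelation matrix is built from a delay embedding, the noise subspace is taken from its smallest eigenvectors, and the pseudospectrum peaks at the stimulation frequency.

    ```python
    import numpy as np

    fs, dur, f_ssvep = 250.0, 2.0, 12.0                 # 2-second epoch, 12 Hz target
    t = np.arange(0, dur, 1 / fs)
    x = np.sin(2 * np.pi * f_ssvep * t) + 0.8 * np.random.default_rng(2).standard_normal(t.size)

    # Delay-embedding covariance matrix
    m = 40                                              # embedding (subspace) dimension
    X = np.lib.stride_tricks.sliding_window_view(x, m)  # shape (N - m + 1, m)
    R = X.T @ X / X.shape[0]

    # Noise subspace: eigenvectors of the smallest eigenvalues (a real sinusoid spans 2 dimensions)
    w, V = np.linalg.eigh(R)                            # eigenvalues in ascending order
    En = V[:, : m - 2]

    # MUSIC pseudospectrum over candidate frequencies
    freqs = np.arange(5.0, 30.0, 0.1)
    k = np.arange(m)
    pseudo = []
    for f in freqs:
        a = np.exp(2j * np.pi * f * k / fs)             # steering vector for frequency f
        pseudo.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    print("estimated SSVEP frequency: %.1f Hz" % freqs[int(np.argmax(pseudo))])
    ```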

  8. Noiseless intensity amplification of repetitive signals by coherent addition using the temporal Talbot effect

    PubMed Central

    Maram, Reza; Van Howe, James; Li, Ming; Azaña, José

    2014-01-01

    Amplification of signal intensity is essential for initiating physical processes, diagnostics, sensing, communications and measurement. During traditional amplification, the signal is amplified by multiplying the signal carriers through an active gain process, requiring the use of an external power source. In addition, the signal is degraded by noise and distortions that typically accompany active gain processes. We show noiseless intensity amplification of repetitive optical pulse waveforms with gain from 2 to ~20 without using active gain. The proposed method uses a dispersion-induced temporal self-imaging (Talbot) effect to redistribute and coherently accumulate energy of the original repetitive waveforms into fewer replica waveforms. In addition, we show how our passive amplifier performs a real-time average of the wave-train to reduce its original noise fluctuation, as well as enhances the extinction ratio of pulses to stand above the noise floor. Our technique is applicable to repetitive waveforms in any spectral region or wave system. PMID:25319207

  9. Expert system for testing industrial processes and determining sensor status

    DOEpatents

    Gross, K.C.; Singer, R.M.

    1998-06-02

    A method and system are disclosed for monitoring both an industrial process and a sensor. The method and system include determining a minimum number of sensor pairs needed to test the industrial process as well as the sensor for evaluating the state of operation of both. The technique further includes generating a first and second signal characteristic of an industrial process variable. After obtaining two signals associated with one physical variable, a difference function is obtained by determining the arithmetic difference between the pair of signals over time. A frequency domain transformation is made of the difference function to obtain Fourier modes describing a composite function. A residual function is obtained by subtracting the composite function from the difference function and the residual function (free of nonwhite noise) is analyzed by a statistical probability ratio test. 24 figs.

  10. Expert system for testing industrial processes and determining sensor status

    DOEpatents

    Gross, Kenneth C.; Singer, Ralph M.

    1998-01-01

    A method and system for monitoring both an industrial process and a sensor. The method and system include determining a minimum number of sensor pairs needed to test the industrial process as well as the sensor for evaluating the state of operation of both. The technique further includes generating a first and second signal characteristic of an industrial process variable. After obtaining two signals associated with one physical variable, a difference function is obtained by determining the arithmetic difference between the pair of signals over time. A frequency domain transformation is made of the difference function to obtain Fourier modes describing a composite function. A residual function is obtained by subtracting the composite function from the difference function and the residual function (free of nonwhite noise) is analyzed by a statistical probability ratio test.
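
    The monitoring scheme in the two patent records above can be paraphrased in a short numerical sketch (a simplified stand-in with assumed parameters, not the patented implementation): the difference of two redundant sensor signals is transformed with an FFT, the structured Fourier modes learned from healthy data form the "composite function" that is removed, and a Wald sequential probability ratio test watches the residual for a mean shift.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n = 4096
    t = np.arange(n)
    proc = np.sin(2 * np.pi * t / 256)                        # shared process variable

    def sensor_pair(fault=0.0):
        s1 = 1.00 * proc + 0.05 * rng.standard_normal(n)
        s2 = 0.97 * proc + 0.05 * rng.standard_normal(n)      # slight gain mismatch -> nonwhite difference
        s1[n // 2:] += fault                                  # optional drift of sensor 1 in the second half
        return s1 - s2                                        # difference function of the redundant pair

    # Training: find the Fourier modes that carry structured (nonwhite) content in the difference
    train = np.fft.rfft(sensor_pair(fault=0.0))
    structured = np.abs(train) > 8 * np.median(np.abs(train)) # modes kept as the "composite function"

    # Monitoring: remove the learned composite, keep the residual, run a Wald SPRT on its mean
    spec = np.fft.rfft(sensor_pair(fault=0.15))
    spec[structured] = 0.0
    residual = np.fft.irfft(spec, n)

    sigma2 = np.var(residual[: n // 4])                       # residual noise power from early samples
    mu1 = 0.1                                                 # mean shift hypothesized under the fault
    A, B = np.log(1e6), -np.log(1e6)                          # thresholds for very small error rates
    llr, alarm = 0.0, None
    for i, r in enumerate(residual):
        llr += mu1 * (r - mu1 / 2) / sigma2                   # Gaussian log-likelihood-ratio increment
        if llr <= B:
            llr = 0.0                                         # accept H0 locally and restart the test
        elif llr >= A:
            alarm = i
            break
    print("SPRT alarm at sample:", alarm, "(fault injected at sample", n // 2, ")")
    ```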

  11. Zero source insertion technique to account for undersampling in GPR imaging

    DOEpatents

    Chambers, David H; Mast, Jeffrey E; Paglieroni, David W

    2014-02-25

    A method and system for detecting the presence of subsurface objects within a medium is provided. In some embodiments, the imaging and detection system operates in a multistatic mode to collect radar return signals generated by an array of transceiver antenna pairs that is positioned across the surface and that travels down the surface. The imaging and detection system pre-processes the return signal to suppress certain undesirable effects. The imaging and detection system then generates synthetic aperture radar images from real aperture radar images generated from the pre-processed return signal. The imaging and detection system then post-processes the synthetic aperture radar images to improve detection of subsurface objects. The imaging and detection system identifies peaks in the energy levels of the post-processed image frame, which indicates the presence of a subsurface object.

  12. Estimation of Characteristics of Echo Envelope Using RF Echo Signal from the Liver

    NASA Astrophysics Data System (ADS)

    Yamaguchi, Tadashi; Hachiya, Hiroyuki; Kamiyama, Naohisa; Ikeda, Kazuki; Moriyasu, Norifumi

    2001-05-01

    To realize quantitative diagnosis of liver cirrhosis, we have been analyzing the probability density function (PDF) of echo amplitude using B-mode images. However, the B-mode image is affected by the various signal and image processing techniques used in the diagnosis equipment, so a detailed and quantitative analysis is very difficult. In this paper, we analyze the PDF of echo amplitude using RF echo signal and B-mode images of normal and cirrhotic livers, and compare both results to examine the validity of the RF echo signal.
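
    A compact sketch of the envelope-statistics analysis described above, using synthetic Gaussian speckle rather than the authors' clinical RF data: the echo envelope is taken from the analytic signal, and its amplitude histogram is compared against a fitted Rayleigh probability density.

    ```python
    import numpy as np
    from scipy.signal import hilbert
    from scipy.stats import rayleigh

    rng = np.random.default_rng(4)
    fs, f0, n = 40e6, 5e6, 20000
    t = np.arange(n) / fs

    # Synthetic RF echo: randomly phased scatterer sum modeled as Gaussian noise on a 5 MHz carrier
    i = rng.standard_normal(n)
    q = rng.standard_normal(n)
    rf = i * np.cos(2 * np.pi * f0 * t) - q * np.sin(2 * np.pi * f0 * t)

    envelope = np.abs(hilbert(rf))                    # echo envelope from the analytic signal

    # Empirical PDF of the envelope amplitude vs. a fitted Rayleigh model
    hist, edges = np.histogram(envelope, bins=60, density=True)
    loc, scale = rayleigh.fit(envelope, floc=0)
    centers = 0.5 * (edges[:-1] + edges[1:])
    err = np.mean(np.abs(hist - rayleigh.pdf(centers, loc=loc, scale=scale)))
    print("Rayleigh scale estimate: %.3f, mean PDF mismatch: %.4f" % (scale, err))
    ```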

  13. Electrochemical concentration measurements for multianalyte mixtures in simulated electrorefiner salt

    NASA Astrophysics Data System (ADS)

    Rappleye, Devin Spencer

    The development of electroanalytical techniques in multianalyte molten salt mixtures, such as those found in used nuclear fuel electrorefiners, would enable in situ, real-time concentration measurements. Such measurements are beneficial for process monitoring, optimization and control, as well as for international safeguards and nuclear material accountancy. Electroanalytical work in molten salts has been limited to single-analyte mixtures with a few exceptions. This work builds upon the knowledge of molten salt electrochemistry by performing electrochemical measurements on molten eutectic LiCl-KCl salt mixture containing two analytes, developing techniques for quantitatively analyzing the measured signals even with an additional signal from another analyte, correlating signals to concentration and identifying improvements in experimental and analytical methodologies. (Abstract shortened by ProQuest.).

  14. Detection of essential hypertension with physiological signals from wearable devices.

    PubMed

    Ghosh, Arindam; Torres, Juan Manuel Mayor; Danieli, Morena; Riccardi, Giuseppe

    2015-08-01

    Early detection of essential hypertension can support the prevention of cardiovascular disease, a leading cause of death. The traditional method of identifying hypertension involves periodic blood pressure measurement using brachial cuff-based measurement devices. While these devices are non-invasive, they require manual setup for each measurement and are not suitable for continuous monitoring. Research has shown that physiological signals such as Heart Rate Variability, a measure of cardiac autonomic activity, are correlated with blood pressure. Wearable devices capable of measuring physiological signals such as Heart Rate, Galvanic Skin Response, and Skin Temperature have recently become ubiquitous. However, these signals are not accurate and are prone to noise due to different artifacts. In this paper a) we present a data collection protocol for continuous non-invasive monitoring of physiological signals from wearable devices; b) we implement signal processing techniques for signal estimation; c) we explore how the continuous monitoring of these physiological signals can be used to identify hypertensive patients; d) we conduct a pilot study with a group of normotensive and hypertensive patients to test our techniques. We show that physiological signals extracted from wearable devices can distinguish between these two groups with high accuracy.
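
    As an illustration of the kind of physiological features referred to above, the sketch below (hypothetical RR-interval data, not the study's protocol or dataset) computes three standard heart-rate-variability statistics, SDNN, RMSSD and pNN50, of the sort commonly fed to a classifier for normotensive-versus-hypertensive screening.

    ```python
    import numpy as np

    def hrv_features(rr_ms):
        """Time-domain HRV features from a series of RR intervals in milliseconds."""
        rr = np.asarray(rr_ms, dtype=float)
        diff = np.diff(rr)
        return {
            "mean_hr_bpm": 60000.0 / rr.mean(),
            "sdnn_ms": rr.std(ddof=1),                        # overall variability
            "rmssd_ms": np.sqrt(np.mean(diff ** 2)),          # short-term (vagal) variability
            "pnn50_pct": 100.0 * np.mean(np.abs(diff) > 50),  # successive differences > 50 ms
        }

    # Hypothetical 5-minute recordings: reduced variability is typical of the hypertensive group
    rng = np.random.default_rng(5)
    rr_normo = 850 + 60 * rng.standard_normal(350)
    rr_hyper = 780 + 25 * rng.standard_normal(380)
    print("normotensive:", {k: round(v, 1) for k, v in hrv_features(rr_normo).items()})
    print("hypertensive:", {k: round(v, 1) for k, v in hrv_features(rr_hyper).items()})
    ```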

  15. All-Optical Fibre Networks For Coal Mines

    NASA Astrophysics Data System (ADS)

    Zientkiewicz, Jacek K.

    1987-09-01

    The topic of this paper is a fiber-optic integrated network (FOIN) suited to the most hostile environments existing in coal mines. The use of optical fibres for transmission of mine instrumentation data offers the prospect of improved safety and immunity to electromagnetic interference (EMI). The feasibility of optically powered sensors has opened up new opportunities for research into optical signal processing architectures. This article discusses a new fibre-optic sensor network involving a time domain multiplexing (TDM) scheme and optical signal processing techniques. The pros and cons of different FOIN topologies with respect to coal mine applications are considered. The emphasis has been placed on a recently developed all-optical fibre network using spread spectrum code division multiple access (CDMA) techniques. The all-optical networks have applications in explosive environments where electrical isolation is required.

  16. Application of time-reversal guided waves to field bridge testing for baseline-free damage diagnosis

    NASA Astrophysics Data System (ADS)

    Kim, S. B.; Sohn, H.

    2006-03-01

    There is ongoing research at Carnegie Mellon University to develop a "baseline-free" nondestructive evaluation technique. The uniqueness of this baseline-free diagnosis lies in that certain types of damage can be identified without direct comparison of test signals with previously stored baseline signals. By relaxing dependency on past baseline data, false positive indications of damage, which might take place due to varying operational and environmental conditions of in-service structures, can be minimized. This baseline-free diagnosis technique is developed based on the concept of a time reversal process (TRP). According to the TRP, an input signal at an original excitation location can be reconstructed if a response signal obtained from another point is emitted back to the original point after being reversed in the time domain. Damage diagnosis relies on the premise that time reversibility breaks down when a certain type of defect, such as nonlinear damage, exists along the wave propagation path. The defect can then be sensed by examining a reconstructed signal after the TRP. In this paper, the feasibility of the proposed NDT technique is investigated using actual test data obtained from the Buffalo Creek Bridge in Pennsylvania.

  17. Automatic Speech Recognition from Neural Signals: A Focused Review.

    PubMed

    Herff, Christian; Schultz, Tanja

    2016-01-01

    Speech interfaces have become widely accepted and are nowadays integrated in various real-life applications and devices. They have become a part of our daily life. However, speech interfaces presume the ability to produce intelligible speech, which might not be possible because of loud environments, the need not to disturb bystanders, or an inability to produce speech (i.e., patients suffering from locked-in syndrome). For these reasons it would be highly desirable not to speak but to simply envision oneself saying words or sentences. Interfaces based on imagined speech would enable fast and natural communication without the need for audible speech and would give a voice to otherwise mute people. This focused review analyzes the potential of different brain imaging techniques to recognize speech from neural signals by applying Automatic Speech Recognition technology. We argue that modalities based on metabolic processes, such as functional Near Infrared Spectroscopy and functional Magnetic Resonance Imaging, are less suited for Automatic Speech Recognition from neural signals due to their low temporal resolution but are very useful for the investigation of the underlying neural mechanisms involved in speech processes. In contrast, electrophysiologic activity is fast enough to capture speech processes and is therefore better suited for ASR. Our experimental results indicate the potential of these signals for speech recognition from neural data, with a focus on invasively measured brain activity (electrocorticography). As a first example of Automatic Speech Recognition techniques applied to neural signals, we discuss the Brain-to-text system.

  18. Time-frequency representation of a highly nonstationary signal via the modified Wigner distribution

    NASA Technical Reports Server (NTRS)

    Zoladz, T. F.; Jones, J. H.; Jong, J.

    1992-01-01

    A new signal analysis technique called the modified Wigner distribution (MWD) is presented. The new signal processing tool has been very successful in determining time frequency representations of highly non-stationary multicomponent signals in both simulations and trials involving actual Space Shuttle Main Engine (SSME) high frequency data. The MWD departs from the classic Wigner distribution (WD) in that it effectively eliminates the cross coupling among positive frequency components in a multiple component signal. This attribute of the MWD, which prevents the generation of 'phantom' spectral peaks, will undoubtedly increase the utility of the WD for real world signal analysis applications which more often than not involve multicomponent signals.

  19. Leak Detection by Acoustic Emission Monitoring. Phase 1. Feasibility Study

    DTIC Science & Technology

    1994-05-26

    various signal processing and noise discrimination techniques during the Data Processing task. C. TEST DESCRIPTION 1. Laboratory Tests Following normal...success in applying these methods to discriminating between the AE bursts generated by two close AE sources in a section of an aircraft structure

  20. Performance Comparison of Superresolution Array Processing Algorithms. Revised

    DTIC Science & Technology

    1998-06-15

    plane waves is finite is the MUSIC algorithm [16]. MUSIC, which denotes Multiple Signal Classification, is an extension of the method of Pisarenko [18... MUSIC is but one member of a class of methods based upon the decomposition of covariance data into eigenvectors and eigenvalues. Such techniques...techniques relative to the classical methods, however, results for MUSIC are included in this report. All of the techniques reviewed have application to

  1. Partial Discharge Spectral Characterization in HF, VHF and UHF Bands Using Particle Swarm Optimization.

    PubMed

    Robles, Guillermo; Fresno, José Manuel; Martínez-Tarifa, Juan Manuel; Ardila-Rey, Jorge Alfredo; Parrado-Hernández, Emilio

    2018-03-01

    The measurement of partial discharge (PD) signals in the radio frequency (RF) range has gained popularity among utilities and specialized monitoring companies in recent years. Unfortunately, on most occasions the data are hidden by noise and coupled interference that hinder their interpretation and render them useless, especially in acquisition systems in the ultra high frequency (UHF) band where the signals of interest are weak. This paper is focused on a method that uses a selective spectral signal characterization to feature each signal, type of partial discharge or interference/noise, with the power contained in the most representative frequency bands. The technique can be considered as a dimensionality reduction problem where all the energy information contained in the frequency components is condensed into a reduced number of UHF or high frequency (HF) and very high frequency (VHF) bands. In general, dimensionality reduction methods make the interpretation of results a difficult task because the inherent physical nature of the signal is lost in the process. The proposed selective spectral characterization is a preprocessing tool that facilitates further main processing. The starting point is a clustering of signals that could form the core of a PD monitoring system. Therefore, the dimensionality reduction technique should discover the frequency bands that best enhance the affinity between signals in the same cluster and the differences between signals in different clusters. This is done by maximizing the minimum Mahalanobis distance between clusters using particle swarm optimization (PSO). The tool is tested with three sets of experimental signals to demonstrate its capabilities in separating noise and PDs with low signal-to-noise ratio and separating different types of partial discharges measured in the UHF and HF/VHF bands.
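
    The objective that the particle swarm maximizes can be written compactly. The sketch below uses synthetic band-power features and omits the PSO search itself (it is not the authors' code): it evaluates the minimum pairwise Mahalanobis distance between labeled clusters for one candidate choice of frequency bands, which is the quantity a PSO would maximize over band selections.

    ```python
    import numpy as np

    def min_mahalanobis(features, labels):
        """Minimum pairwise Mahalanobis distance between cluster centroids, using the pooled covariance."""
        labels = np.asarray(labels)
        classes = np.unique(labels)
        pooled = sum(np.cov(features[labels == c].T) * (np.sum(labels == c) - 1) for c in classes)
        pooled /= len(labels) - len(classes)
        inv = np.linalg.pinv(pooled)
        dmin = np.inf
        for i, a in enumerate(classes):
            for b in classes[i + 1:]:
                diff = features[labels == a].mean(0) - features[labels == b].mean(0)
                dmin = min(dmin, float(np.sqrt(diff @ inv @ diff)))
        return dmin

    # Synthetic two-band power features for three signal classes (PD type 1, PD type 2, noise)
    rng = np.random.default_rng(10)
    f1 = rng.normal([1.0, 0.2], 0.1, size=(50, 2))
    f2 = rng.normal([0.3, 0.9], 0.1, size=(50, 2))
    f3 = rng.normal([0.2, 0.2], 0.1, size=(50, 2))
    X = np.vstack([f1, f2, f3])
    y = np.array([0] * 50 + [1] * 50 + [2] * 50)
    print("PSO objective (min Mahalanobis distance between clusters): %.2f" % min_mahalanobis(X, y))
    ```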

  2. Signal Detection and Monitoring Based on Longitudinal Healthcare Data

    PubMed Central

    Suling, Marc; Pigeot, Iris

    2012-01-01

    Post-marketing detection and surveillance of potential safety hazards are crucial tasks in pharmacovigilance. To uncover such safety risks, a wide set of techniques has been developed for spontaneous reporting data and, more recently, for longitudinal data. This paper gives a broad overview of the signal detection process and introduces some types of data sources typically used. The most commonly applied signal detection algorithms are presented, covering simple frequentistic methods like the proportional reporting rate or the reporting odds ratio, more advanced Bayesian techniques for spontaneous and longitudinal data, e.g., the Bayesian Confidence Propagation Neural Network or the Multi-item Gamma-Poisson Shrinker and methods developed for longitudinal data only, like the IC temporal pattern detection. Additionally, the problem of adjustment for underlying confounding is discussed and the most common strategies to automatically identify false-positive signals are addressed. A drug monitoring technique based on Wald’s sequential probability ratio test is presented. For each method, a real-life application is given, and a wide set of literature for further reading is referenced. PMID:24300373

  3. Acoustic emission signal processing for rolling bearing running state assessment using compressive sensing

    NASA Astrophysics Data System (ADS)

    Liu, Chang; Wu, Xing; Mao, Jianlin; Liu, Xiaoqin

    2017-07-01

    In the signal processing domain, there has been growing interest in using acoustic emission (AE) signals instead of vibration signals for fault diagnosis and condition assessment, an approach that has been advocated as an effective technique for identifying fracture, crack or damage. The AE signal contains high frequencies, up to several MHz, which helps avoid interference from signals generated by the parts of the bearing (i.e. rolling elements, ring and so on) and other rotating parts of the machine. However, the acoustic emission signal necessitates advanced signal sampling capabilities and requires the ability to deal with large amounts of sampled data. In this paper, compressive sensing (CS) is introduced as a processing framework, and a compressive feature extraction method is proposed. We use it to extract compressive features directly from compressively-sensed data, and also prove its energy preservation properties. First, we study AE signals under the CS framework. The sparsity of the AE signal of the rolling bearing is checked. The observation and reconstruction of the signal are also studied. Second, we present a method of extracting the AE compressive feature (AECF) directly from compressively-sensed data. We demonstrate the energy preservation properties and the processing of the extracted AECF. We assess the running state of the bearing using the AECF trend. The AECF trend of the running state of rolling bearings is consistent with the trend of traditional features. Thus, the method is an effective way to evaluate the running trend of rolling bearings. The results of the experiments have verified that signal processing and condition assessment based on the AECF are simpler, the amount of data required is smaller, and the amount of computation is greatly reduced.
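
    The energy-preservation property behind the AECF can be illustrated with a toy random-projection example (a generic compressive-sensing sketch under a Gaussian measurement assumption, not the authors' AE pipeline): for a suitably normalized measurement matrix, the energy of the compressed measurements tracks the energy of the original signal, so a feature computed directly in the compressed domain can follow the bearing's energy trend.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    n, m = 4096, 512                                    # original length and number of measurements

    def aecf_like_energy(x, phi):
        """Energy feature computed directly from compressively sensed data y = phi @ x."""
        y = phi @ x
        return np.sum(y ** 2)

    phi = rng.standard_normal((m, n)) / np.sqrt(m)      # normalized Gaussian measurement matrix

    # Sparse AE-like bursts of growing amplitude emulate a degrading bearing state
    energies_full, energies_cs = [], []
    for amplitude in [0.5, 1.0, 2.0, 4.0]:
        x = np.zeros(n)
        idx = rng.choice(n, size=40, replace=False)
        x[idx] = amplitude * rng.standard_normal(40)    # sparse burst content
        energies_full.append(np.sum(x ** 2))
        energies_cs.append(aecf_like_energy(x, phi))

    for ef, ec in zip(energies_full, energies_cs):
        print("signal energy %8.2f  compressed-domain energy %8.2f" % (ef, ec))
    ```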

  4. The Application of Coherent Local Time for Optical Time Transfer and the Quantification of Systematic Errors in Satellite Laser Ranging

    NASA Astrophysics Data System (ADS)

    Schreiber, K. Ulrich; Kodet, Jan

    2018-02-01

    Highly precise time and stable reference frequencies are fundamental requirements for space geodesy. Satellite laser ranging (SLR) is one of these techniques, which differs from all other applications like Very Long Baseline Interferometry (VLBI), Global Navigation Satellite Systems (GNSS) and finally Doppler Orbitography and Radiopositioning Integrated by Satellite (DORIS) by the fact that it is an optical two-way measurement technique. That means that there is no need for a clock synchronization process between both ends of the distance covered by the measurement technique. Under the assumption of isotropy for the speed of light, SLR establishes the only practical realization of the Einstein Synchronization process so far. Therefore it is a powerful time transfer technique. However, in order to transfer time between two remote clocks, it is also necessary to tightly control all possible signal delays in the ranging process. This paper discusses the role of time and frequency in SLR as well as the error sources before it addresses the transfer of time between ground and space. The need for improved signal delay control led to a major redesign of the local time and frequency distribution at the Geodetic Observatory Wettzell. Closure measurements can now be used to identify and remove systematic errors in SLR measurements.

  5. Optimal space communications techniques. [using digital and phase locked systems for signal processing

    NASA Technical Reports Server (NTRS)

    Schilling, D. L.

    1974-01-01

    Digital multiplication of two waveforms using delta modulation (DM) is discussed. It is shown that while conventional multiplication of two N bit words requires N² complexity, multiplication using DM requires complexity which increases linearly with N. Bounds on the signal-to-quantization noise ratio (SNR) resulting from this multiplication are determined and compared with the SNR obtained using standard multiplication techniques. The phase locked loop (PLL) system, consisting of a phase detector, voltage controlled oscillator, and a linear loop filter, is discussed in terms of its design and system advantages. Areas requiring further research are identified.
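
    A minimal delta-modulation sketch (illustrative only; the step size and test signal are assumed, and the multiplication scheme itself is not reproduced): each sample is encoded as a single ±1 bit against a staircase approximation, which is the one-bit-per-sample representation on which the linear-complexity multiplication operates.

    ```python
    import numpy as np

    def dm_encode(x, step=0.05):
        """Delta modulation: one bit per sample tracking the signal with a fixed-step staircase."""
        bits, approx = [], 0.0
        for s in x:
            b = 1 if s >= approx else -1
            bits.append(b)
            approx += b * step
        return np.array(bits)

    def dm_decode(bits, step=0.05):
        return np.cumsum(bits) * step

    t = np.linspace(0, 1, 2000)
    x = 0.6 * np.sin(2 * np.pi * 3 * t)
    bits = dm_encode(x)
    xhat = dm_decode(bits)
    snr = 10 * np.log10(np.sum(x ** 2) / np.sum((x - xhat) ** 2))
    print("DM reconstruction SNR: %.1f dB" % snr)
    ```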

  6. Fiber Fabry-Perot interferometer sensor for measuring resonances of piezoelectric elements

    NASA Astrophysics Data System (ADS)

    da Silva, Ricardo E.; Oliveira, Roberson A.; Pohl, Alexandre A. P.

    2011-05-01

    The development of a fiber extrinsic Fabry-Perot interferometer for measuring vibration amplitude and resonances of piezoelectric elements is reported. The signal demodulation method, based on the use of an optical spectrum analyzer, allows the measurement of displacements and resonances with high resolution. The technique basically consists of monitoring changes in the intensity or the wavelength of a single interferometric fringe at a point of high sensitivity in the sensor response curve. For sensor calibration, three signal processing techniques were employed. Vibration amplitude measurement with 0.84 nm/V sensitivity and the characterization of the piezo resonance are demonstrated.

  7. Innovative signal processing for Johnson Noise thermometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ezell, N. Dianne Bull; Britton, Jr, Charles L.; Roberts, Michael

    This report summarizes the newly developed algorithm that subtracts electromagnetic interference (EMI). The EMI performance is very important to this measurement because any interference in the form of pickup from external signal sources, such as fluorescent lighting ballasts, motors, etc., can skew the measurement. Two methods of removing EMI were developed and tested at various locations. This report also summarizes the testing performed at different facilities outside Oak Ridge National Laboratory using both EMI removal techniques. The first EMI removal technique was reviewed in previous milestone reports; therefore, this report details the second method.

  8. Energy Measurement Studies for CO2 Measurement with a Coherent Doppler Lidar System

    NASA Technical Reports Server (NTRS)

    Beyon, Jeffrey Y.; Koch, Grady J.; Vanvalkenburg, Randal L.; Yu, Jirong; Singh, Upendra N.; Kavaya, Michael J.

    2010-01-01

    Accurate measurement of energy is critical in the application of lidar systems to CO2 measurement. Different techniques for estimating the energy in the online and offline pulses are investigated for post-processing of lidar returns. The cornerstone of these techniques is the accurate estimation of the spectrum of the lidar signal and the background noise. Since the background noise is not ideal white Gaussian noise, a simple estimate of the average noise level is not well suited to the energy estimation of the lidar signal and noise. A brief review of the methods is presented in this paper.

  9. An optimal filter for short photoplethysmogram signals

    PubMed Central

    Liang, Yongbo; Elgendi, Mohamed; Chen, Zhencheng; Ward, Rabab

    2018-01-01

    A photoplethysmogram (PPG) contains a wealth of cardiovascular system information, and with the development of wearable technology, it has become the basic technique for evaluating cardiovascular health and detecting diseases. However, due to the varying environments in which wearable devices are used and, consequently, their varying susceptibility to noise interference, effective processing of PPG signals is challenging. Thus, the aim of this study was to determine the optimal filter and filter order to be used for PPG signal processing to make the systolic and diastolic waves more salient in the filtered PPG signal using the skewness quality index. Nine types of filters with 10 different orders were used to filter 219 (2.1s) short PPG signals. The signals were divided into three categories by PPG experts according to their noise levels: excellent, acceptable, or unfit. Results show that the Chebyshev II filter can improve the PPG signal quality more effectively than other types of filters and that the optimal order for the Chebyshev II filter is the 4th order. PMID:29714722
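
    A minimal sketch of the filtering and quality-index step described above, with a synthetic PPG-like waveform and assumed sampling rate and band edges rather than the study's dataset: a 4th-order Chebyshev II band-pass filter is applied and the skewness signal-quality index is computed before and after filtering.

    ```python
    import numpy as np
    from scipy.signal import cheby2, sosfiltfilt
    from scipy.stats import skew

    fs = 125.0                                          # assumed PPG sampling rate (Hz)
    t = np.arange(0, 2.1, 1 / fs)                       # 2.1 s epoch as in the study

    # Synthetic PPG-like pulse train (systolic + diastolic components) plus drift and noise
    ppg = np.sin(2 * np.pi * 1.2 * t) + 0.35 * np.sin(2 * np.pi * 2.4 * t + 1.0)
    raw = ppg + 0.6 * np.sin(2 * np.pi * 0.15 * t) + 0.25 * np.random.default_rng(7).standard_normal(t.size)

    # 4th-order Chebyshev II band-pass, 0.5-8 Hz, 20 dB stop-band attenuation (assumed design values)
    sos = cheby2(4, 20, [0.5, 8.0], btype="bandpass", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, raw)

    print("skewness SQI before filtering: %.3f" % skew(raw))
    print("skewness SQI after  filtering: %.3f" % skew(filtered))
    ```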

  10. Focus issue: teaching tools and learning opportunities.

    PubMed

    Gough, Nancy R

    2010-04-27

    Science Signaling provides authoring experience for students and resources for educators. Students experience the writing and revision process involved in authoring short commentary articles that are published in the Journal Club section. By publishing peer-reviewed teaching materials, Science Signaling provides instructors with feedback that improves their materials and an outlet to share their tips and techniques and digital resources with other teachers.

  11. Model-based ultrasound temperature visualization during and following HIFU exposure.

    PubMed

    Ye, Guoliang; Smith, Penny Probert; Noble, J Alison

    2010-02-01

    This paper describes the application of signal processing techniques to improve the robustness of ultrasound feedback for displaying changes in temperature distribution in treatment using high-intensity focused ultrasound (HIFU), especially at the low signal-to-noise ratios that might be expected in in vivo abdominal treatment. Temperature estimation is based on the local displacements in ultrasound images taken during HIFU treatment, and a method to improve robustness to outliers is introduced. The main contribution of the paper is in the application of a Kalman filter, a statistical signal processing technique, which uses a simple analytical temperature model of heat dispersion to improve the temperature estimation from the ultrasound measurements during and after HIFU exposure. To reduce the sensitivity of the method to previous assumptions on the material homogeneity and signal-to-noise ratio, an adaptive form is introduced. The method is illustrated using data from HIFU exposure of ex vivo bovine liver. A particular advantage of the stability it introduces is that the temperature can be visualized not only in the intervals between HIFU exposure but also, for some configurations, during the exposure itself.

  12. The R-package eseis - A toolbox to weld geomorphic, seismologic, spatial, and time series analysis

    NASA Astrophysics Data System (ADS)

    Dietze, Michael

    2017-04-01

    Environmental seismology is the science of investigating the seismic signals that are emitted by Earth surface processes. This emerging field provides unique opportunities to identify, locate, track and inspect a wide range of the processes that shape our planet. Modern broadband seismometers are sensitive enough to detect signals from sources as weak as wind interacting with the ground and as powerful as collapsing mountains. This places the field of environmental seismology at the seams of many geoscientific disciplines and requires integration of a series of specialised analysis techniques. R provides the perfect environment for this challenge. The package eseis builds on the foundations laid by a series of existing packages and data types tailored to solve specialised problems (e.g., signal, sp, rgdal, Rcpp, matrixStats) and thus provides efficient handling of large streams of seismic data (> 300 million samples per station per day). It supports standard data formats (mseed, sac), preparation techniques (deconvolution, filtering, rotation), processing methods (spectra, spectrograms, event picking, migration for localisation) and data visualisation. Thus, eseis provides a seamless approach to the entire workflow of environmental seismology and passes the output to related analysis fields with temporal, spatial and modelling focus in R.

  13. Adaptive Temporal Matched Filtering for Noise Suppression in Fiber Optic Distributed Acoustic Sensing.

    PubMed

    Ölçer, İbrahim; Öncü, Ahmet

    2017-06-05

    Distributed vibration sensing based on phase-sensitive optical time domain reflectometry (ϕ-OTDR) is being widely used in several applications. However, one of the main challenges in coherent detection-based ϕ-OTDR systems is the fading noise, which impacts the detection performance. In addition, typical signal averaging and differentiating techniques are not suitable for detecting high frequency events. This paper presents a new approach for reducing the effect of fading noise in fiber optic distributed acoustic vibration sensing systems without any impact on the frequency response of the detection system. The method is based on temporal adaptive processing of ϕ-OTDR signals. The fundamental theory underlying the algorithm, which is based on signal-to-noise ratio (SNR) maximization, is presented, and the efficacy of our algorithm is demonstrated with laboratory experiments and field tests. With the proposed digital processing technique, the results show that more than 10 dB of SNR values can be achieved without any reduction in the system bandwidth and without using additional optical amplifier stages in the hardware. We believe that our proposed adaptive processing approach can be effectively used to develop fiber optic-based distributed acoustic vibration sensing systems.

  14. Adaptive Temporal Matched Filtering for Noise Suppression in Fiber Optic Distributed Acoustic Sensing

    PubMed Central

    Ölçer, İbrahim; Öncü, Ahmet

    2017-01-01

    Distributed vibration sensing based on phase-sensitive optical time domain reflectometry (ϕ-OTDR) is being widely used in several applications. However, one of the main challenges in coherent detection-based ϕ-OTDR systems is the fading noise, which impacts the detection performance. In addition, typical signal averaging and differentiating techniques are not suitable for detecting high frequency events. This paper presents a new approach for reducing the effect of fading noise in fiber optic distributed acoustic vibration sensing systems without any impact on the frequency response of the detection system. The method is based on temporal adaptive processing of ϕ-OTDR signals. The fundamental theory underlying the algorithm, which is based on signal-to-noise ratio (SNR) maximization, is presented, and the efficacy of our algorithm is demonstrated with laboratory experiments and field tests. With the proposed digital processing technique, the results show that more than 10 dB of SNR values can be achieved without any reduction in the system bandwidth and without using additional optical amplifier stages in the hardware. We believe that our proposed adaptive processing approach can be effectively used to develop fiber optic-based distributed acoustic vibration sensing systems. PMID:28587240

  15. Seismic and acoustic signal identification algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    LADD,MARK D.; ALAM,M. KATHLEEN; SLEEFE,GERARD E.

    2000-04-03

    This paper will describe an algorithm for detecting and classifying seismic and acoustic signals for unattended ground sensors. The algorithm must be computationally efficient and continuously process a data stream in order to establish whether or not a desired signal has changed state (turned-on or off). The paper will focus on describing a Fourier based technique that compares the running power spectral density estimate of the data to a predetermined signature in order to determine if the desired signal has changed state. How to establish the signature and the detection thresholds will be discussed as well as the theoretical statistics of the algorithm for the Gaussian noise case with results from simulated data. Actual seismic data results will also be discussed along with techniques used to reduce false alarms due to the inherent nonstationary noise environments found with actual data.
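
    A compact sketch of the Fourier-based detector described above, with synthetic data and an assumed band signature rather than the Sandia implementation: a running Welch power-spectral-density estimate is compared block by block against a stored spectral signature, and a state change is declared when the in-band power crosses a calibrated threshold.

    ```python
    import numpy as np
    from scipy.signal import welch

    rng = np.random.default_rng(8)
    fs, block = 1000.0, 1024

    def band_power(x, lo=55.0, hi=65.0):
        """Power in the signature band from a Welch PSD estimate of one data block."""
        f, p = welch(x, fs=fs, nperseg=256)
        sel = (f >= lo) & (f <= hi)
        return p[sel].sum() * (f[1] - f[0])

    # Stream of data blocks; the monitored source "turns on" at block 10
    threshold, states = None, []
    for k in range(20):
        x = 0.3 * rng.standard_normal(block)                  # ambient noise
        if k >= 10:
            x += 0.5 * np.sin(2 * np.pi * 60.0 * np.arange(block) / fs)
        bp = band_power(x)
        if threshold is None:
            threshold = 5.0 * bp                              # calibrate from the first (source-off) block
        states.append(bp > threshold)
    print("detected on/off state per block:", ["on" if s else "off" for s in states])
    ```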

  16. Synchronization trigger control system for flow visualization

    NASA Technical Reports Server (NTRS)

    Chun, K. S.

    1987-01-01

    The use of cinematography or holographic interferometry for dynamic flow visualization in an internal combustion engine requires a control device that globally synchronizes camera and light source timing at a predefined shaft encoder angle. The device is capable of 0.35 deg resolution for rotational speeds of up to 73 240 rpm. This was achieved by implementing the shaft encoder signal addressed look-up table (LUT) and appropriate latches. The developed digital signal processing technique achieves 25 nsec of high speed triggering angle detection by using direct parallel bit comparison of the shaft encoder digital code with a simulated angle reference code, instead of using angle value comparison which involves more complicated computation steps. In order to establish synchronization to an AC reference signal whose magnitude is variant with the rotating speed, a dynamic peak followup synchronization technique has been devised. This method scrutinizes the reference signal and provides the right timing within 40 nsec. Two application examples are described.

  17. Photonics-based real-time ultra-high-range-resolution radar with broadband signal generation and processing.

    PubMed

    Zhang, Fangzheng; Guo, Qingshui; Pan, Shilong

    2017-10-23

    Real-time and high-resolution target detection is highly desirable in modern radar applications. Electronic techniques have encountered grave difficulties in the development of such radars, which strictly rely on a large instantaneous bandwidth. In this article, a photonics-based real-time high-range-resolution radar is proposed with optical generation and processing of broadband linear frequency modulation (LFM) signals. A broadband LFM signal is generated in the transmitter by photonic frequency quadrupling, and the received echo is de-chirped to a low frequency signal by photonic frequency mixing. The system can operate at a high frequency and a large bandwidth while enabling real-time processing by low-speed analog-to-digital conversion and digital signal processing. A conceptual radar is established. Real-time processing of an 8-GHz LFM signal is achieved with a sampling rate of 500 MSa/s. Accurate distance measurement is implemented with a maximum error of 4 mm within a range of ~3.5 meters. Detection of two targets is demonstrated with a range resolution as high as 1.875 cm. We believe the proposed radar architecture is a reliable solution to overcome the limitations of current radar on operation bandwidth and processing speed, and we hope it will be used in future radars for real-time and high-resolution target detection and imaging.
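
    The de-chirp step that lets the received echo be digitized at low speed can be sketched numerically (a baseband simulation with assumed parameters, not the photonic hardware): mixing the echo with the reference chirp collapses the target return to a single beat tone whose frequency is proportional to range.

    ```python
    import numpy as np

    c = 3e8
    B, T = 8e9, 100e-6                                  # 8 GHz LFM bandwidth over a 100 us sweep
    k = B / T                                           # chirp rate (Hz/s)
    fs = 100e6                                          # sampling rate after de-chirping (beat band only)
    t = np.arange(0, T, 1 / fs)

    R = 2.5                                             # target range in metres
    tau = 2 * R / c                                     # round-trip delay

    ref = np.exp(1j * np.pi * k * t ** 2)               # complex baseband reference chirp
    echo = np.exp(1j * np.pi * k * (t - tau) ** 2)      # delayed echo (unit amplitude)
    beat = echo * np.conj(ref)                          # de-chirped signal: tone at f_b = k * tau

    spec = np.abs(np.fft.fft(beat, 1 << 16))
    freqs = np.fft.fftfreq(1 << 16, 1 / fs)
    f_beat = abs(freqs[np.argmax(spec)])
    print("estimated range: %.4f m (true %.4f m)" % (f_beat * c * T / (2 * B), R))
    print("range resolution c/(2B): %.4f m" % (c / (2 * B)))
    ```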

  18. Broadband true time delay for microwave signal processing, using slow light based on stimulated Brillouin scattering in optical fibers.

    PubMed

    Chin, Sanghoon; Thévenaz, Luc; Sancho, Juan; Sales, Salvador; Capmany, José; Berger, Perrine; Bourderionnet, Jérôme; Dolfi, Daniel

    2010-10-11

    We experimentally demonstrate a novel technique to process broadband microwave signals, using all-optically tunable true time delay in optical fibers. The configuration to achieve true time delay basically consists of two main stages: photonic RF phase shifter and slow light, based on stimulated Brillouin scattering in fibers. Dispersion properties of fibers are controlled, separately at optical carrier frequency and in the vicinity of microwave signal bandwidth. This way time delay induced within the signal bandwidth can be manipulated to correctly act as true time delay with a proper phase compensation introduced to the optical carrier. We completely analyzed the generated true time delay as a promising solution to feed phased array antenna for radar systems and to develop dynamically reconfigurable microwave photonic filters.

  19. Intelligent approach for analysis of respiratory signals and oxygen saturation in the sleep apnea/hypopnea syndrome.

    PubMed

    Moret-Bonillo, Vicente; Alvarez-Estévez, Diego; Fernández-Leal, Angel; Hernández-Pereira, Elena

    2014-01-01

    This work deals with the development of an intelligent approach for clinical decision making in the diagnosis of the Sleep Apnea/Hypopnea Syndrome, SAHS, from the analysis of respiratory signals and oxygen saturation in arterial blood, SaO2. In order to accomplish the task the proposed approach makes use of different artificial intelligence techniques and reasoning processes being able to deal with imprecise data. These reasoning processes are based on fuzzy logic and on temporal analysis of the information. The developed approach also takes into account the possibility of artifacts in the monitored signals. Detection and characterization of signal artifacts allows detection of false positives. Identification of relevant diagnostic patterns and temporal correlation of events is performed through the implementation of temporal constraints.

  20. Improved signal recovery for flow cytometry based on ‘spatially modulated emission’

    NASA Astrophysics Data System (ADS)

    Quint, S.; Wittek, J.; Spang, P.; Levanon, N.; Walther, T.; Baßler, M.

    2017-09-01

    Recently, the technique of ‘spatially modulated emission’ has been introduced (Baßler et al 2008 US Patent 0080181827A1; Kiesel et al 2009 Appl. Phys. Lett. 94 041107; Kiesel et al 2011 Cytometry A 79A 317-24) improving the signal-to-noise ratio (SNR) for detecting bio-particles in the field of flow cytometry. Based on this concept, we developed two advanced signal processing methods which further enhance the SNR and selectivity for cell detection. The improvements are achieved by adapting digital filtering methods from RADAR technology and mainly address inherent offset elimination, increased signal dynamics and moreover reduction of erroneous detections due to processing artifacts. We present a comprehensive theory on SNR gain and provide experimental results of our concepts.

  1. Intelligent Approach for Analysis of Respiratory Signals and Oxygen Saturation in the Sleep Apnea/Hypopnea Syndrome

    PubMed Central

    Moret-Bonillo, Vicente; Alvarez-Estévez, Diego; Fernández-Leal, Angel; Hernández-Pereira, Elena

    2014-01-01

    This work deals with the development of an intelligent approach for clinical decision making in the diagnosis of the Sleep Apnea/Hypopnea Syndrome, SAHS, from the analysis of respiratory signals and oxygen saturation in arterial blood, SaO2. In order to accomplish the task the proposed approach makes use of different artificial intelligence techniques and reasoning processes being able to deal with imprecise data. These reasoning processes are based on fuzzy logic and on temporal analysis of the information. The developed approach also takes into account the possibility of artifacts in the monitored signals. Detection and characterization of signal artifacts allows detection of false positives. Identification of relevant diagnostic patterns and temporal correlation of events is performed through the implementation of temporal constraints. PMID:25035712

  2. Wnt/β-catenin signaling pathway inhibits the proliferation and apoptosis of U87 glioma cells via different mechanisms

    PubMed Central

    Gao, Liyang; Chen, Bing; Li, Jinhong; Yang, Fan; Cen, Xuecheng; Liao, Zhuangbing; Long, Xiao’ao

    2017-01-01

    The Wnt signaling pathway is necessary for the development of the central nervous system and is associated with tumorigenesis in various cancers. However, the mechanism of the Wnt signaling pathway in glioma cells has yet to be elucidated. Small-molecule Wnt modulators such as ICG-001 and AZD2858 were used to inhibit and stimulate the Wnt/β-catenin signaling pathway. Techniques including cell proliferation assay, colony formation assay, Matrigel cell invasion assay, cell cycle assay and Genechip microarray were used. Gene Ontology Enrichment Analysis and Gene Set Enrichment Analysis identified enrichment of many biological processes and signaling pathways. Both inhibiting and stimulating the Wnt/β-catenin signaling pathway could influence the cell cycle and, moreover, reduce the proliferation and survival of U87 glioma cells. However, the Affymetrix expression microarray indicated that the biological processes and networks of signaling pathways affected by stimulating versus inhibiting the Wnt/β-catenin signaling pathway largely differ. We propose that the Wnt/β-catenin signaling pathway might prove to be a valuable therapeutic target for glioma. PMID:28837560

  3. Guided wave technique for non-destructive testing of StifPipe

    NASA Astrophysics Data System (ADS)

    Amjad, Umar; Yadav, Susheel K.; Nguyen, Chi H.; Ehsani, Mohammad; Kundu, Tribikram

    2015-03-01

    The newly-developed StifPipe® is an effective technology for repair and strengthening of existing pipes and culverts. The wall of this pipe consists of a lightweight honeycomb core with carbon or glass fiber reinforced polymer (FRP) applied to the skin. The presence of the hollow honeycomb introduces challenges in the nondestructive testing (NDT) of this pipe. In this study, it is investigated whether guided waves excited by a PZT (lead zirconate titanate) transducer can detect damage in the honeycomb layer of the StifPipe®. Multiple signal processing techniques are used for in-depth study and understanding of the recorded signals. The experimental technique for damage detection in the StifPipe® material is described and the obtained results are presented in this paper.

  4. Automatic welding detection by an intelligent tool pipe inspection

    NASA Astrophysics Data System (ADS)

    Arizmendi, C. J.; Garcia, W. L.; Quintero, M. A.

    2015-07-01

    This work provides a model based on machine learning techniques for weld recognition, using signals obtained through an in-line inspection tool called a “smart pig” in oil and gas pipelines. The model uses a signal noise reduction phase by means of pre-processing algorithms and attribute-selection techniques. The noise reduction techniques were selected after a literature review and testing with survey data. Subsequently, the model was trained using recognition and classification algorithms, specifically artificial neural networks and support vector machines. Finally, the trained model was validated with different data sets and the performance was measured with cross validation and ROC analysis. The results show that it is possible to identify welds automatically with an efficiency between 90 and 98 percent.

  5. Design of a submillimeter laser Thomson scattering system for measurement of ion temperature in SUMMA

    NASA Technical Reports Server (NTRS)

    Praddaude, H. C.; Woskoboinikow, P.

    1978-01-01

    A thorough discussion of submillimeter laser Thomson scattering for the measurement of ion temperature in plasmas is presented. This technique is very promising and work is being actively pursued on the high power lasers and receivers necessary for its implementation. In this report we perform an overall system analysis of the Thomson scattering technique aimed to: (1) identify problem areas; (2) establish specifications for the main components of the apparatus; (3) study signal processing alternatives and identify the optimum signal handling procedure. Because of its importance for the successful implementation of this technique, we also review the work presently being carried out on the optically pumped submillimeter CH3F and D2O lasers.

  6. Investigation of FPGA-Based Real-Time Adaptive Digital Pulse Shaping for High-Count-Rate Applications

    NASA Astrophysics Data System (ADS)

    Saxena, Shefali; Hawari, Ayman I.

    2017-07-01

    Digital signal processing techniques have been widely used in radiation spectrometry to provide improved stability and performance, with a more compact physical size than traditional analog signal processing. In this paper, field-programmable gate array (FPGA)-based adaptive digital pulse shaping techniques are investigated for real-time signal processing. A National Instruments (NI) NI 5761 14-bit, 250-MS/s adapter module is used for digitizing a high-purity germanium (HPGe) detector's preamplifier pulses. Digital pulse processing algorithms are implemented on the NI PXIe-7975R reconfigurable FPGA (Kintex-7) using the LabVIEW FPGA module. Based on the time separation between successive input pulses, the adaptive shaping algorithm selects the optimum shaping parameters (rise time and flattop time of the trapezoid-shaping filter) for each incoming signal. A digital Sallen-Key low-pass filter is implemented to enhance the signal-to-noise ratio and reduce baseline drifting in trapezoid shaping. A recursive trapezoid-shaping filter algorithm is employed for pole-zero compensation of the exponentially decaying (two-decay-constant) preamplifier pulses of an HPGe detector. It allows extraction of pulse height information at the beginning of each pulse, thereby reducing pulse pileup and increasing throughput. The algorithms for the RC-CR2 timing filter, baseline restoration, pile-up rejection, and pulse height determination are digitally implemented for radiation spectroscopy. Traditionally, at high-count-rate conditions, a shorter shaping time is preferred to achieve high throughput, which deteriorates energy resolution. In this paper, experimental results are presented for varying count-rate and pulse shaping conditions. Using adaptive shaping, increased throughput is achieved while preserving the energy resolution observed with the longer shaping times.
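    The recursive trapezoid shaper mentioned above can be sketched in a few lines. The following Python snippet, offered as a hedged illustration rather than the authors' FPGA implementation, applies the textbook double-difference/accumulator recursion with a pole-zero constant M to a synthetic single-decay preamplifier pulse (the paper's pulses have two decay constants); all waveform parameters are invented for the example.

```python
import numpy as np

def trapezoid_filter(v, k, m, M):
    # Recursive trapezoidal shaper for an exponentially decaying pulse:
    # double difference -> accumulator with pole-zero gain M -> accumulator.
    n = len(v)
    l = k + m
    d = np.zeros(n)
    for i in range(n):
        d[i] = (v[i]
                - (v[i - k] if i >= k else 0.0)
                - (v[i - l] if i >= l else 0.0)
                + (v[i - k - l] if i >= k + l else 0.0))
    p = np.cumsum(d)              # first accumulator
    s = np.cumsum(p + M * d)      # pole-zero branch plus second accumulator
    return s / (k * (M + 1.0))    # normalise so the flat top ~ pulse amplitude

# Synthetic single-decay preamplifier pulse (illustrative values only)
Ts, tau, amp = 4e-9, 50e-6, 1.0
t = np.arange(4000) * Ts
v = np.where(t >= 1e-6, amp * np.exp(-(t - 1e-6) / tau), 0.0)
v += np.random.default_rng(1).normal(0, 0.01, v.size)        # electronic noise

M = 1.0 / np.expm1(Ts / tau)                                  # pole-zero constant for this decay
shaped = trapezoid_filter(v, k=250, m=100, M=M)               # 1 us rise, 0.4 us flat top
print("estimated pulse amplitude: %.3f" % shaped.max())
```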

  7. Technique minimizes the effects of dropouts on telemetry records

    NASA Technical Reports Server (NTRS)

    Anderson, T. O.; Hurd, W. J.

    1972-01-01

    Recorder deficiencies are minimized by using a two-channel system to prepare two tapes, each having the noise, wow and flutter, and dropout characteristics of the channel on which it was made. Processing the tapes by computer and combining the signals from the two channels produces a single tape free of dropouts caused by the recording process.

  8. Ultra-low-power and robust digital-signal-processing hardware for implantable neural interface microsystems.

    PubMed

    Narasimhan, S; Chiel, H J; Bhunia, S

    2011-04-01

    Implantable microsystems for monitoring or manipulating brain activity typically require on-chip real-time processing of multichannel neural data using ultra low-power, miniaturized electronics. In this paper, we propose an integrated-circuit/architecture-level hardware design framework for neural signal processing that exploits the nature of the signal-processing algorithm. First, we consider different power reduction techniques and compare the energy efficiency between the ultra-low frequency subthreshold and conventional superthreshold design. We show that the superthreshold design operating at a much higher frequency can achieve comparable energy dissipation by taking advantage of extensive power gating. It also provides significantly higher robustness of operation and yield under large process variations. Next, we propose an architecture level preferential design approach for further energy reduction by isolating the critical computation blocks (with respect to the quality of the output signal) and assigning them higher delay margins compared to the noncritical ones. Possible delay failures under parameter variations are confined to the noncritical components, allowing graceful degradation in quality under voltage scaling. Simulation results using prerecorded neural data from the sea-slug (Aplysia californica) show that the application of the proposed design approach can lead to significant improvement in total energy, without compromising the output signal quality under process variations, compared to conventional design approaches.

  9. A Wearable Real-Time and Non-Invasive Thoracic Cavity Monitoring System

    NASA Astrophysics Data System (ADS)

    Salman, Safa

    A surgery-free on-body monitoring system is proposed to evaluate the dielectric constant of internal body tissues (especially lung and heart) and effectively determine irregularities in real time. The proposed surgery-free on-body monitoring system includes a sensor, a post-processing technique, and an automated data collection circuit. Data are automatically collected from the sensor electrodes and then post-processed to extract the electrical properties of the underlying biological tissue(s). To demonstrate the imaging concept, planar and wrap-around sensors are devised. These sensors are designed to detect changes in the dielectric constant of inner tissues (lung and heart). The planar sensor focuses on a single organ while the wrap-around sensors allow for imaging of the thoracic cavity's cross section. Moreover, post-processing techniques are proposed to complement the sensors for a more complete on-body monitoring system. The idea behind the post-processing technique is to suppress interference from the outer layers (skin, fat, muscle, and bone). The sensors and post-processing techniques yield a high ratio of signal (from the inner layers) to noise (from the outer layers). Additionally, data collection circuits are proposed for a more robust and stand-alone system. The circuit design aims to sequentially activate each port of the sensor, and portions of the propagating signal are received at all passive ports in the form of a voltage at the probes. The voltages are converted to scattering parameters which are then used in the post-processing technique to obtain the relative permittivity εr. The concept of wearability is also considered through the use of electrically conductive fibers (E-fibers). These fibers show performance matching that of copper, especially at low frequencies, making them a viable substitute. For the cases considered, the proposed sensors show promising results in recovering the permittivity of deep tissues with a maximum error of 13.5%. These sensors pave the way for a new class of medical sensors through accuracy improvements and the avoidance of inverse scattering techniques.

  10. Multi-filter spectrophotometry of quasar environments

    NASA Technical Reports Server (NTRS)

    Craven, Sally E.; Hickson, Paul; Yee, Howard K. C.

    1993-01-01

    A many-filter photometric technique for determining redshifts and morphological types, by fitting spectral templates to spectral energy distributions, has good potential for application in surveys. Despite success in studies performed on simulated data, the results have not been fully reliable when applied to real, low signal-to-noise data. We are investigating techniques to improve the fitting process.

  11. Advanced digital signal processing for short haul optical fiber transmission beyond 100G

    NASA Astrophysics Data System (ADS)

    Kikuchi, Nobuhiko

    2017-01-01

    A significant increase of intra- and inter-data-center traffic is expected from the rapid spread of various network applications like SNS, IoT, mobile and cloud computing, and the need for ultra-high-speed and cost-effective short- to medium-reach optical fiber links beyond 100 Gbit/s is growing steadily. Such high-speed links typically use multilevel modulation to lower the signaling speed, which in turn faces serious challenges in limited loss budget and waveform distortion tolerance. One of the promising techniques to overcome these challenges is the use of advanced digital signal processing (DSP), and we review various DSP applications for short- to medium-reach links.

  12. Floating-point scaling technique for sources separation automatic gain control

    NASA Astrophysics Data System (ADS)

    Fermas, A.; Belouchrani, A.; Ait-Mohamed, O.

    2012-07-01

    Based on the floating-point representation and taking advantage of the scaling factor indetermination in blind source separation (BSS) processing, we propose a scaling technique applied to the separation matrix to avoid saturation or weakness in the recovered source signals. This technique performs automatic gain control in an on-line BSS environment. We demonstrate the effectiveness of this technique using the implementation of a division-free BSS algorithm with two inputs and two outputs. The proposed technique is computationally cheaper and more efficient for a hardware implementation than Euclidean normalisation.

  13. Multi-format all-optical processing based on a large-scale, hybridly integrated photonic circuit.

    PubMed

    Bougioukos, M; Kouloumentas, Ch; Spyropoulou, M; Giannoulis, G; Kalavrouziotis, D; Maziotis, A; Bakopoulos, P; Harmon, R; Rogers, D; Harrison, J; Poustie, A; Maxwell, G; Avramopoulos, H

    2011-06-06

    We investigate through numerical studies and experiments the performance of a large scale, silica-on-silicon photonic integrated circuit for multi-format regeneration and wavelength-conversion. The circuit encompasses a monolithically integrated array of four SOAs inside two parallel Mach-Zehnder structures, four delay interferometers and a large number of silica waveguides and couplers. Exploiting phase-incoherent techniques, the circuit is capable of processing OOK signals at variable bit rates, DPSK signals at 22 or 44 Gb/s and DQPSK signals at 44 Gbaud. Simulation studies reveal the wavelength-conversion potential of the circuit with enhanced regenerative capabilities for OOK and DPSK modulation formats and acceptable quality degradation for DQPSK format. Regeneration of 22 Gb/s OOK signals with amplified spontaneous emission (ASE) noise and DPSK data signals degraded with amplitude, phase and ASE noise is experimentally validated demonstrating a power penalty improvement up to 1.5 dB.

  14. SU-D-207-01: Markerless Respiratory Motion Tracking with Contrast Enhanced Thoracic Cone Beam CT Projections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chao, M; Yuan, Y; Rosenzweig, K

    2015-06-15

    Purpose: To develop a novel technique to enhance the image contrast of clinical cone beam CT projections and extract respiratory signals based on anatomical motion using the modified Amsterdam Shroud (AS) method to benefit image guided radiation therapy. Methods: Thoracic cone beam CT projections acquired prior to treatment were preprocessed to increase their contrast for better respiratory signal extraction. Air intensity on the raw images was first estimated and then applied to correct the projections, generating new attenuation images that were further enhanced by taking the logarithm and then the derivative along the superior-inferior direction. All pixels on each post-processed two-dimensional image were horizontally summed into one column, and all projections were combined side by side to create an AS image from which the patient’s respiratory signal was extracted. The impact of gantry rotation on the breathing signal rendering was also investigated. Ten projection image sets from five lung cancer patients acquired with the Varian Onboard Imager on a 21iX Clinac (Varian Medical Systems, Palo Alto, CA) were employed to assess the proposed technique. Results: Application of the air correction to raw projections showed that more than an order of magnitude of contrast enhancement was achievable. The typical contrast on the raw projections is around 0.02, while that on the attenuation images could be greater than 0.5. A clear and stable breathing signal could be reliably extracted from the new images, while the uncorrected projection sets failed to yield clear signals most of the time. Conclusion: Anatomical features play a key role in yielding a breathing signal from the projection images using the AS technique. The air correction process facilitated the contrast enhancement significantly, and the attenuation images thus obtained provide a practical solution for markerless breathing motion tracking.

  15. Investigation of digital encoding techniques for television transmission

    NASA Technical Reports Server (NTRS)

    Schilling, D. L.

    1983-01-01

    Composite color television signals are sampled at four times the color subcarrier frequency and transformed using intraframe two-dimensional Walsh functions. It is shown that by properly sampling a composite color signal and employing a Walsh transform, the YIQ time signals which sum to produce the composite color signal can be represented, in the transform domain, by three component signals in space. By suitable zonal quantization of the transform coefficients, the YIQ signals can be processed independently to achieve data compression and obtain the same results as component coding. Computer simulations of three bandwidth compressors operating at 1.09, 1.53 and 1.8 bits/sample are presented. The above results can also be applied to the PAL color system.

  16. Graphical Environment Tools for Application to Gamma-Ray Energy Tracking Arrays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Todd, Richard A.; Radford, David C.

    2013-12-30

    Highly segmented, position-sensitive germanium detector systems are being developed for nuclear physics research where traditional electronic signal processing with mixed analog and digital function blocks would be enormously complex and costly. Future systems will be constructed using pipelined processing of high-speed digitized signals as is done in the telecommunications industry. Techniques which provide rapid algorithm and system development for future systems are desirable. This project has used digital signal processing concepts and existing graphical system design tools to develop a set of re-usable modular functions and libraries targeted for the nuclear physics community. Researchers working with complex nuclear detector arrays such as the Gamma-Ray Energy Tracking Array (GRETA) have been able to construct advanced data processing algorithms for implementation in field programmable gate arrays (FPGAs) through application of these library functions using intuitive graphical interfaces.

  17. System and technique for ultrasonic determination of degree of cooking

    DOEpatents

    Bond, Leonard J [Richland, WA; Diaz, Aaron A [W. Richland, WA; Judd, Kayte M [Richland, WA; Pappas, Richard A [Richland, WA; Cliff, William C [Richland, WA; Pfund, David M [Richland, WA; Morgen, Gerald P [Kennewick, WA

    2007-03-20

    A method and apparatus are described for determining the doneness of food during a cooking process. Ultrasonic signals are passed through the food during cooking. The change in transmission characteristics of the ultrasonic signal during the cooking process is measured to determine the point at which the food has been cooked to the proper level. In one aspect, a heated fluid cooks the food, and the transmission characteristics along a fluid-only ultrasonic path provide a reference for comparison with the transmission characteristics of a food-fluid ultrasonic path.

  18. Improved EEG Event Classification Using Differential Energy.

    PubMed

    Harati, A; Golmohammadi, M; Lopez, S; Obeid, I; Picone, J

    2015-12-01

    Feature extraction for automatic classification of EEG signals typically relies on time frequency representations of the signal. Techniques such as cepstral-based filter banks or wavelets are popular analysis techniques in many signal processing applications including EEG classification. In this paper, we present a comparison of a variety of approaches to estimating and postprocessing features. To further aid in discrimination of periodic signals from aperiodic signals, we add a differential energy term. We evaluate our approaches on the TUH EEG Corpus, which is the largest publicly available EEG corpus and an exceedingly challenging task due to the clinical nature of the data. We demonstrate that a variant of a standard filter bank-based approach, coupled with first and second derivatives, provides a substantial reduction in the overall error rate. The combination of differential energy and derivatives produces a 24 % absolute reduction in the error rate and improves our ability to discriminate between signal events and background noise. This relatively simple approach proves to be comparable to other popular feature extraction approaches such as wavelets, but is much more computationally efficient.
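    A minimal numpy sketch of the kind of feature pipeline described above is given below: framed spectral band energies standing in for a filter bank, first and second differences as the derivatives, and a differential-energy term taken as the max-minus-min frame energy inside a small context window. The window lengths, band count and context size are illustrative assumptions, not the values used on the TUH EEG Corpus.

```python
import numpy as np

def frame(x, win, hop):
    # Split a 1-D signal into overlapping frames.
    idx = np.arange(0, len(x) - win + 1, hop)
    return np.stack([x[i:i + win] for i in idx])

def features(eeg, fs=250, win=250, hop=125, n_bands=8):
    frames = frame(eeg, win, hop) * np.hanning(win)
    spec = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    bands = np.array_split(spec, n_bands, axis=1)
    fb = np.log(np.stack([b.sum(axis=1) for b in bands], axis=1) + 1e-12)
    d1 = np.diff(fb, axis=0, prepend=fb[:1])        # first "derivative"
    d2 = np.diff(d1, axis=0, prepend=d1[:1])        # second "derivative"
    energy = fb.sum(axis=1)
    # differential energy over a +/-4 frame context window
    diff_e = np.array([energy[max(0, i - 4):i + 5].max()
                       - energy[max(0, i - 4):i + 5].min()
                       for i in range(len(energy))])
    return np.hstack([fb, d1, d2, diff_e[:, None]])

rng = np.random.default_rng(0)
print(features(rng.normal(size=5000)).shape)        # (n_frames, 3*n_bands + 1)
```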

  19. Nonlinear ultrasonic wave modulation for online fatigue crack detection

    NASA Astrophysics Data System (ADS)

    Sohn, Hoon; Lim, Hyung Jin; DeSimio, Martin P.; Brown, Kevin; Derriso, Mark

    2014-02-01

    This study presents a fatigue crack detection technique using nonlinear ultrasonic wave modulation. Ultrasonic waves at two distinctive driving frequencies are generated and corresponding ultrasonic responses are measured using permanently installed lead zirconate titanate (PZT) transducers with a potential for continuous monitoring. Here, the input signal at the lower driving frequency is often referred to as a 'pumping' signal, and the higher frequency input is referred to as a 'probing' signal. The presence of a system nonlinearity, such as a crack formation, can provide a mechanism for nonlinear wave modulation, and create spectral sidebands around the frequency of the probing signal. A signal processing technique combining linear response subtraction (LRS) and synchronous demodulation (SD) is developed specifically to extract the crack-induced spectral sidebands. The proposed crack detection method is successfully applied to identify actual fatigue cracks grown in metallic plate and complex fitting-lug specimens. Finally, the effect of pumping and probing frequencies on the amplitude of the first spectral sideband is investigated using the first sideband spectrogram (FSS) obtained by sweeping both pumping and probing signals over specified frequency ranges.
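    The sideband-extraction idea of linear response subtraction followed by synchronous demodulation can be illustrated numerically. In the hedged sketch below, a crack is mimicked by a small pump-probe product term, the linear response is subtracted, and the residual is demodulated at the probing frequency and low-pass filtered so that the crack-induced component appears at the pumping frequency; the frequencies and modulation depth are arbitrary example values.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1.0e6
t = np.arange(0, 0.05, 1 / fs)                       # 50 ms record
f_pump, f_probe = 5e3, 200e3                         # illustrative driving frequencies

pump = np.sin(2 * np.pi * f_pump * t)
probe = np.sin(2 * np.pi * f_probe * t)
linear = pump + probe                                # response of an intact (linear) path
cracked = linear + 0.02 * pump * probe               # crack modulates the probe by the pump

residual = cracked - linear                          # LRS: remove the linear response
demod = residual * np.sin(2 * np.pi * f_probe * t)   # SD: shift the probe band to baseband
b, a = butter(4, (10 * f_pump) / (fs / 2))           # low-pass keeps only ~10x f_pump
baseband = filtfilt(b, a, demod)

spec = np.abs(np.fft.rfft(baseband))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
print("dominant demodulated frequency: %.0f Hz (pump = %.0f Hz)"
      % (freqs[np.argmax(spec[1:]) + 1], f_pump))
```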

  20. Timeseries Signal Processing for Enhancing Mobile Surveys: Learning from Field Studies

    NASA Astrophysics Data System (ADS)

    Risk, D. A.; Lavoie, M.; Marshall, A. D.; Baillie, J.; Atherton, E. E.; Laybolt, W. D.

    2015-12-01

    Vehicle-based surveys using laser and other analyzers are now commonplace in research and industry. In many cases when these studies target biologically-relevant gases like methane and carbon dioxide, the minimum detection limits are often coarse (ppm) relative to the analyzer's capabilities (ppb), because of the inherent variability in the ambient background concentrations across the landscape that creates noise and uncertainty. This variation arises from localized biological sinks and sources, but also atmospheric turbulence, air pooling, and other factors. Computational processing routines are widely used in many fields to increase resolution of a target signal in temporally dense data, and offer promise for enhancing mobile surveying techniques. Signal processing routines can both help identify anomalies at very low levels, or can be used inversely to remove localized industrially-emitted anomalies from ecological data. This presentation integrates learnings from various studies in which simple signal processing routines were used successfully to isolate different temporally-varying components of 1 Hz timeseries measured with laser- and UV fluorescence-based analyzers. As illustrative datasets, we present results from industrial fugitive emission studies from across Canada's western provinces and other locations, and also an ecological study that aimed to model near-surface concentration variability across different biomes within eastern Canada. In these cases, signal processing algorithms contributed significantly to the clarity of both industrial, and ecological processes. In some instances, signal processing was too computationally intensive for real-time in-vehicle processing, but we identified workarounds for analyzer-embedded software that contributed to an improvement in real-time resolution of small anomalies. Signal processing is a natural accompaniment to these datasets, and many avenues are open to researchers who wish to enhance existing, and future datasets.
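    One simple background-separation idea in the spirit of this work is sketched below, under the assumption that the ambient background varies slowly while anomalies are short-lived: a rolling low-percentile baseline is subtracted from a synthetic 1 Hz concentration record and samples rising well above the residual scatter are flagged. The window length, percentile and threshold are arbitrary illustration values.

```python
import numpy as np

def rolling_percentile(x, win=301, q=10):
    # Slowly varying baseline from a rolling low percentile.
    half = win // 2
    pad = np.pad(x, half, mode="edge")
    return np.array([np.percentile(pad[i:i + win], q) for i in range(len(x))])

rng = np.random.default_rng(2)
t = np.arange(7200)                                           # two hours of 1 Hz data
background = 1.9 + 0.1 * np.sin(2 * np.pi * t / 3600) + rng.normal(0, 0.02, t.size)
plumes = np.zeros_like(background)
plumes[3000:3030] = 1.5 * np.exp(-np.arange(30) / 10.0)       # a short anomaly (e.g. a plume)
ch4 = background + plumes

baseline = rolling_percentile(ch4)
residual = ch4 - baseline
hits = np.flatnonzero(residual > 5 * residual.std())
print("first anomalous seconds:", hits[:5], "| samples flagged:", len(hits))
```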

  1. Gaussian Process Kalman Filter for Focal Plane Wavefront Correction and Exoplanet Signal Extraction

    NASA Astrophysics Data System (ADS)

    Sun, He; Kasdin, N. Jeremy

    2018-01-01

    Currently, the ultimate limitation of space-based coronagraphy is the ability to subtract the residual PSF after wavefront correction to reveal the planet. Called reference difference imaging (RDI), the technique consists of conducting wavefront control to collect the reference point spread function (PSF) by observing a bright star, and then extracting target planet signals by subtracting a weighted sum of reference PSFs. Unfortunately, this technique is inherently inefficient because it spends a significant fraction of the observing time on the reference star rather than the target star with the planet. Recent progress in model based wavefront estimation suggests an alternative approach. A Kalman filter can be used to estimate the stellar PSF for correction by the wavefront control system while simultaneously estimating the planet signal. Without observing the reference star, the (extended) Kalman filter directly utilizes the wavefront correction data and combines the time series observations and model predictions to estimate the stellar PSF and planet signals. Because wavefront correction is used during the entire observation with no slewing, the system has inherently better stability. In this poster we show our results aimed at further improving our Kalman filter estimation accuracy by including not only temporal correlations but also spatial correlations among neighboring pixels in the images. This technique is known as a Gaussian process Kalman filter (GPKF). We also demonstrate the advantages of using a Kalman filter rather than RDI by simulating a real space exoplanet detection mission.
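    As a hedged, much-simplified stand-in for the estimator described above (not the authors' GPKF), the sketch below runs a two-state linear Kalman filter on a single pixel: the state holds a slowly drifting stellar-speckle gain and a constant planet intensity, and a known modulation sequence u plays the role of the wavefront-control probe data that makes the two components separable. All numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 400
u = 1.0 + 0.3 * np.sin(0.2 * np.arange(n))       # known speckle modulation (probe-data stand-in)
true_gain, true_planet = 5.0, 0.5
y = true_gain * u + true_planet + rng.normal(0, 0.2, n)   # measured counts in one pixel

x = np.array([1.0, 0.0])            # state: [speckle gain, planet intensity]
P = np.diag([10.0, 10.0])           # initial state covariance
Q = np.diag([1e-4, 0.0])            # the speckle gain is allowed to drift slowly
R = 0.2 ** 2                        # measurement noise variance

for k in range(n):
    P = P + Q                                   # predict step (identity dynamics)
    H = np.array([[u[k], 1.0]])                 # frame-dependent measurement model
    S = (H @ P @ H.T).item() + R
    K = (P @ H.T / S).ravel()
    x = x + K * (y[k] - (H @ x).item())         # update with the innovation
    P = (np.eye(2) - np.outer(K, H.ravel())) @ P

print("planet estimate: %.3f (true %.1f)" % (x[1], true_planet))
```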

  2. Multimodal and Multi-tissue Measures of Connectivity Revealed by Joint Independent Component Analysis.

    PubMed

    Franco, Alexandre R; Ling, Josef; Caprihan, Arvind; Calhoun, Vince D; Jung, Rex E; Heileman, Gregory L; Mayer, Andrew R

    2008-12-01

    The human brain functions as an efficient system where signals arising from gray matter are transported via white matter tracts to other regions of the brain to facilitate human behavior. However, with a few exceptions, functional and structural neuroimaging data are typically optimized to maximize the quantification of signals arising from a single source. For example, functional magnetic resonance imaging (FMRI) is typically used as an index of gray matter functioning whereas diffusion tensor imaging (DTI) is typically used to determine white matter properties. While it is likely that these signals arising from different tissue sources contain complementary information, the signal processing algorithms necessary for the fusion of neuroimaging data across imaging modalities are still in a nascent stage. In the current paper we present a data-driven method for combining measures of functional connectivity arising from gray matter sources (FMRI resting state data) with different measures of white matter connectivity (DTI). Specifically, a joint independent component analysis (J-ICA) was used to combine these measures of functional connectivity following intensive signal processing and feature extraction within each of the individual modalities. Our results indicate that one of the most predominantly used measures of functional connectivity (activity in the default mode network) is highly dependent on the integrity of white matter connections between the two hemispheres (corpus callosum) and within the cingulate bundles. Importantly, the discovery of this complex relationship of connectivity was entirely facilitated by the signal processing and fusion techniques presented herein and could not have been revealed through separate analyses of both data types as is typically performed in the majority of neuroimaging experiments. We conclude by discussing future applications of this technique to other areas of neuroimaging and examining potential limitations of the methods.

  3. Performance Analysis of a Hardware Implemented Complex Signal Kurtosis Radio-Frequency Interference Detector

    NASA Technical Reports Server (NTRS)

    Schoenwald, Adam J.; Bradley, Damon C.; Mohammed, Priscilla N.; Piepmeier, Jeffrey R.; Wong, Mark

    2016-01-01

    Radio-frequency interference (RFI) is a known problem for passive remote sensing, as evidenced in the L-band radiometers SMOS, Aquarius and, more recently, SMAP. Various algorithms have been developed and implemented on SMAP to improve science measurements. This was achieved by the use of a digital microwave radiometer. RFI mitigation becomes more challenging for microwave radiometers operating at higher frequencies in shared allocations. At higher frequencies larger bandwidths are also desirable for lower measurement noise, further adding to processing challenges. This work focuses on finding improved RFI mitigation techniques that will be effective at additional frequencies and at higher bandwidths. To aid the development and testing of applicable detection and mitigation techniques, a wide-band RFI algorithm testing environment has been developed using the Reconfigurable Open Architecture Computing Hardware System (ROACH) built by the Collaboration for Astronomy Signal Processing and Electronics Research (CASPER) Group. The testing environment also includes various test equipment used to reproduce typical signals that a radiometer may see, both with and without RFI. The testing environment permits quick evaluations of RFI mitigation algorithms as well as showing that they are implementable in hardware. The algorithm implemented is a complex signal kurtosis detector which was modeled and simulated. The complex signal kurtosis detector showed improved performance over the real kurtosis detector under certain conditions. The real kurtosis is implemented on SMAP at 24 MHz bandwidth. The complex signal kurtosis algorithm was then implemented in hardware at 200 MHz bandwidth using the ROACH. In this work, performance of the complex signal kurtosis and the real signal kurtosis are compared. Performance evaluations and comparisons in both simulation and experimental hardware implementations were done with the use of receiver operating characteristic (ROC) curves. The complex kurtosis algorithm has the potential to reduce data rate due to onboard processing, in addition to improving RFI detection performance.
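    The complex-signal kurtosis test itself is easy to prototype offline. The numpy sketch below (not the flight or FPGA code) uses the fact that for circular complex Gaussian noise E[|z|^4]/E[|z|^2]^2 = 2, flags blocks whose estimate departs from 2 by more than a rough 3-sigma bound, and injects a continuous-wave interferer into some blocks to show the detection; block size, interferer power and threshold are illustrative.

```python
import numpy as np

def complex_kurtosis(z):
    # E[|z|^4] / E[|z|^2]^2, estimated from one block of complex samples.
    p = np.abs(z) ** 2
    return p.var() / p.mean() ** 2 + 1.0

rng = np.random.default_rng(4)
n_blocks, block = 200, 1024
noise = (rng.normal(size=(n_blocks, block))
         + 1j * rng.normal(size=(n_blocks, block))) / np.sqrt(2)   # unit-power noise

data = noise.copy()
t = np.arange(block)
data[::10] += 1.5 * np.exp(2j * np.pi * 0.12 * t)    # CW interferer in every 10th block

k = np.array([complex_kurtosis(b) for b in data])
thresh = 3.0 * np.sqrt(4.0 / block)                  # rough 3-sigma bound around 2
flagged = np.abs(k - 2.0) > thresh
print("blocks flagged:", flagged.sum(), "of", n_blocks)
```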

  4. Application of Ensemble Detection and Analysis to Modeling Uncertainty in Non Stationary Process

    NASA Technical Reports Server (NTRS)

    Racette, Paul

    2010-01-01

    Characterization of nonstationary and nonlinear processes is a challenge in many engineering and scientific disciplines. Climate change modeling and projection, retrieving information from Doppler measurements of hydrometeors, and modeling calibration architectures and algorithms in microwave radiometers are example applications that can benefit from improvements in the modeling and analysis of nonstationary processes. Analyses of measured signals have traditionally been limited to a single measurement series. Ensemble Detection is a technique whereby mixing calibrated noise produces an ensemble measurement set. The collection of ensemble data sets enables new methods for analyzing random signals and offers powerful new approaches to studying and analyzing nonstationary processes. Derived information contained in the dynamic stochastic moments of a process will enable many novel applications.

  5. Simulate different environments TDLAS On the analysis of the test signal strength

    NASA Astrophysics Data System (ADS)

    Li, Xin; Zhou, Tao; Jia, Xiaodong

    2014-12-01

    TDLAS systems exploit the wavelength-tuning characteristics of a laser diode to detect the absorption spectrum of a gas along a selected absorption line, and thereby to determine the gas temperature, pressure, flow rate and concentration in the probed space. In this work, laboratory TDLAS gas detection was used to experimentally simulate the water vapor and smoke produced by engine combustion. An optical lens system was used to receive and acquire the signal, and the interference affecting the test signal was analyzed. Water vapor and smoke were simulated as two different interference environments in the sample cell. In both experimental environments the optical signal of the gas absorption was acquired, the variation of the signal amplitude was analyzed, and the related signal data were recorded, in order to provide ideal experimental data for signal acquisition under field conditions during the engine combustion process.

  6. Convolutional neural networks for event-related potential detection: impact of the architecture.

    PubMed

    Cecotti, H

    2017-07-01

    The detection of brain responses at the single-trial level in the electroencephalogram (EEG), such as event-related potentials (ERPs), is a difficult problem that requires different processing steps to extract relevant discriminant features. While most of the signal and classification techniques for the detection of brain responses are based on linear algebra, different pattern recognition techniques such as the convolutional neural network (CNN), a type of deep learning technique, have attracted interest as they are able to process the signal after limited pre-processing. In this study, we propose to investigate the performance of CNNs in relation to their architecture and to how they are evaluated: a single system for each subject, or a system for all the subjects. More particularly, we want to address the change in performance that can be observed between specifying a neural network for a single subject and considering a neural network for a group of subjects, taking advantage of a larger number of trials from different subjects. The results support the conclusion that a convolutional neural network trained on different subjects can lead to an AUC above 0.9 by using an appropriate architecture with spatial filtering and shift-invariant layers.

  7. The Fifth NASA Symposium on VLSI Design

    NASA Technical Reports Server (NTRS)

    1993-01-01

    The fifth annual NASA Symposium on VLSI Design had 13 sessions including Radiation Effects, Architectures, Mixed Signal, Design Techniques, Fault Testing, Synthesis, Signal Processing, and other Featured Presentations. The symposium provides insights into developments in VLSI and digital systems which can be used to increase data systems performance. The presentations share insights into next generation advances that will serve as a basis for future VLSI design.

  8. Evaluation of NASA speech encoder

    NASA Technical Reports Server (NTRS)

    1976-01-01

    Techniques developed by NASA for spaceflight instrumentation were used in the design of a quantizer for speech decoding. A computer simulation of the quantizer's behavior was tested with synthesized and real speech signals, and the results were evaluated by a phonetician. Topics discussed include the relationship between the number of quantizer levels and the required sampling rate; reconstruction of signals; digital filtering; and speech recording, sampling, storage, and processing results.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taddese, Biniyam Tesfaye; Antonsen, Thomas M.; Ott, Edward

    Classical analogs of the quantum mechanical concepts of the Loschmidt Echo and quantum fidelity are developed with the goal of detecting small perturbations in a closed wave chaotic region. Sensing techniques that employ a one-recording-channel time-reversal-mirror, which in turn relies on time reversal invariance and spatial reciprocity of the classical wave equation, are introduced. In analogy with quantum fidelity, we employ scattering fidelity techniques which work by comparing response signals of the scattering region, by means of cross correlation and mutual information of signals. The performance of the sensing techniques is compared for various perturbations induced experimentally in an acoustic resonant cavity. The acoustic signals are parametrically processed to mitigate the effect of dissipation and to vary the spatial diversity of the sensing schemes. In addition to static boundary condition perturbations at specified locations, perturbations to the medium of wave propagation are shown to be detectable, opening up various real world sensing applications in which a false negative cannot be tolerated.

  10. A Full Dynamic Compound Inverse Method for output-only element-level system identification and input estimation from earthquake response signals

    NASA Astrophysics Data System (ADS)

    Pioldi, Fabio; Rizzi, Egidio

    2016-08-01

    This paper proposes a new output-only element-level system identification and input estimation technique, towards the simultaneous identification of modal parameters, input excitation time history and structural features at the element-level by adopting earthquake-induced structural response signals. The method, named Full Dynamic Compound Inverse Method (FDCIM), releases strong assumptions of earlier element-level techniques, by working with a two-stage iterative algorithm. Jointly, a Statistical Average technique, a modification process and a parameter projection strategy are adopted at each stage to achieve stronger convergence for the identified estimates. The proposed method works in a deterministic way and is completely developed in State-Space form. Further, it does not require continuous- to discrete-time transformations and does not depend on initialization conditions. Synthetic earthquake-induced response signals from different shear-type buildings are generated to validate the implemented procedure, also with noise-corrupted cases. The achieved results provide a necessary condition to demonstrate the effectiveness of the proposed identification method.

  11. Imaging Reactive Oxygen Species-Induced Modifications in Living Systems.

    PubMed

    Maulucci, Giuseppe; Bačić, Goran; Bridal, Lori; Schmidt, Harald Hhw; Tavitian, Bertrand; Viel, Thomas; Utsumi, Hideo; Yalçın, A Süha; De Spirito, Marco

    2016-06-01

    Reactive Oxygen Species (ROS) may regulate signaling, ion channels, transcription factors, and biosynthetic processes. ROS-related diseases can be due to either a shortage or an excess of ROS. Since the biological activity of ROS depends on not only concentration but also spatiotemporal distribution, real-time imaging of ROS, possibly in vivo, has become a need for scientists, with potential for clinical translation. New imaging techniques as well as new contrast agents in clinically established modalities were developed in the previous decade. An ideal imaging technique should determine ROS changes with high spatio-temporal resolution, detect physiologically relevant variations in ROS concentration, and provide specificity toward different redox couples. Furthermore, for in vivo applications, bioavailability of sensors, tissue penetration, and a high signal-to-noise ratio are additional requirements to be satisfied. None of the presented techniques fulfill all requirements for clinical translation. The obvious way forward is to incorporate anatomical and functional imaging into a common hybrid-imaging platform. Antioxid. Redox Signal. 24, 939-958.

  12. Adaptive noise cancelling and time-frequency techniques for rail surface defect detection

    NASA Astrophysics Data System (ADS)

    Liang, B.; Iwnicki, S.; Ball, A.; Young, A. E.

    2015-03-01

    Adaptive noise cancelling (ANC) is a technique that is very effective at removing additive noise from contaminated signals. It has been widely used in the fields of telecommunication, radar and sonar signal processing. However, it was seldom used for the surveillance and diagnosis of mechanical systems before the late 1990s. As a promising technique it has gradually been exploited for the purpose of condition monitoring and fault diagnosis. Time-frequency analysis is another useful tool for condition monitoring and fault diagnosis, as it retains both time and frequency information simultaneously. This paper presents an ANC and time-frequency application for railway wheel flat and rail surface defect detection. The experimental results from a scaled roller test rig show that this approach can significantly reduce unwanted interference and extract weak signals from strong background noise. The combination of ANC and time-frequency analysis may thus provide a useful tool for condition monitoring and fault diagnosis of railway vehicles.
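    A textbook LMS adaptive noise canceller captures the ANC idea discussed above; the hedged sketch below (not the authors' implementation) gives the canceller a reference channel that observes the interference source, lets the adaptive filter learn the coupling path, and recovers a weak burst-like defect signal in the error output. The filter length, step size and synthetic signals are illustrative choices.

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(5)
n, fs = 20000, 5000
t = np.arange(n) / fs
defect = 0.1 * np.sin(2 * np.pi * 180 * t) * (np.sin(2 * np.pi * 3 * t) > 0.95)  # weak bursts
interference = rng.normal(size=n)                                 # broadband noise source
primary = defect + lfilter([0.8, -0.4, 0.2], [1.0], interference) # sensor on the wheel/rail
reference = interference                                          # reference sensor near the source

L, mu = 8, 0.01
w = np.zeros(L)
clean = np.zeros(n)
for i in range(L - 1, n):
    x = reference[i - L + 1:i + 1][::-1]   # most recent L reference samples
    e = primary[i] - w @ x                 # error = primary minus estimated interference
    w += 2 * mu * e * x                    # LMS weight update
    clean[i] = e

print("interference power before: %.3f  after: %.3f"
      % (np.var(primary - defect), np.var(clean[L:] - defect[L:])))
```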

  13. Wavelet-domain de-noising technique for THz pulsed spectroscopy

    NASA Astrophysics Data System (ADS)

    Chernomyrdin, Nikita V.; Zaytsev, Kirill I.; Gavdush, Arsenii A.; Fokina, Irina N.; Karasik, Valeriy E.; Reshetov, Igor V.; Kudrin, Konstantin G.; Nosov, Pavel A.; Yurchenko, Stanislav O.

    2014-09-01

    De-noising of terahertz (THz) pulsed spectroscopy (TPS) data is an essential problem, since noise in the TPS system data prevents correct reconstruction of the sample's spectral dielectric properties and hinders study of the sample's internal structure. There are certain regions of the TPS signal's Fourier spectrum where the Fourier-domain signal-to-noise ratio is relatively small. Effective de-noising might potentially expand the range of spectrometer spectral sensitivity and reduce the time of waveform registration, which is an essential problem for biomedical applications of TPS. In this work, it is shown how recent progress in wavelet-domain signal processing can be used for de-noising TPS waveforms. The ability to perform effective de-noising of TPS data using the Fast Wavelet Transform (FWT) algorithm is demonstrated. The results of selecting the optimal wavelet basis and the wavelet-domain thresholding technique are reported. The developed technique is applied to reconstruct the spectral characteristics of in vivo healthy and diseased skin samples in the THz frequency range.
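    Wavelet-domain thresholding of a pulsed waveform can be sketched with PyWavelets as below. The basis ('db8'), decomposition level and universal soft threshold are common textbook choices used here for illustration, not necessarily the optimal basis and thresholding rule selected in the paper, and the THz-like trace is synthetic.

```python
import numpy as np
import pywt

rng = np.random.default_rng(6)
t = np.linspace(0, 50e-12, 2048)                          # 50 ps trace
pulse = np.exp(-((t - 20e-12) / 0.8e-12) ** 2) * np.cos(2 * np.pi * 1e12 * (t - 20e-12))
noisy = pulse + rng.normal(0, 0.05, t.size)

coeffs = pywt.wavedec(noisy, "db8", level=6)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745            # noise estimate from finest scale
thr = sigma * np.sqrt(2 * np.log(len(noisy)))             # universal threshold
coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, "db8")[: len(noisy)]

print("rms error before/after: %.4f / %.4f"
      % (np.sqrt(np.mean((noisy - pulse) ** 2)),
         np.sqrt(np.mean((denoised - pulse) ** 2))))
```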

  14. A novel technique for fetal heart rate estimation from Doppler ultrasound signal

    PubMed Central

    2011-01-01

    Background The currently used fetal monitoring instrumentation that is based on the Doppler ultrasound technique provides the fetal heart rate (FHR) signal with limited accuracy. This is particularly noticeable as a significant decrease of a clinically important feature - the variability of the FHR signal. The aim of our work was to develop a novel efficient technique for processing the ultrasound signal which could estimate the cardiac cycle duration with accuracy comparable to direct electrocardiography. Methods We have proposed a new technique which provides true beat-to-beat values of the FHR signal through multiple measurements of a given cardiac cycle in the ultrasound signal. The method consists of three steps: dynamic adjustment of the autocorrelation window, adaptive autocorrelation peak detection and determination of beat-to-beat intervals. The estimated fetal heart rate values and calculated indices describing the variability of FHR were compared to reference data obtained from the direct fetal electrocardiogram, as well as to another method for FHR estimation. Results The results revealed that our method increases the accuracy in comparison to currently used fetal monitoring instrumentation, and thus enables reliable parameters describing the variability of FHR to be calculated. Relating these results to the other method for FHR estimation, we showed that our approach rejected a much lower number of measured cardiac cycles as invalid. Conclusions The proposed method for fetal heart rate determination on a beat-to-beat basis offers high accuracy of the heart interval measurement, enabling reliable quantitative assessment of FHR variability while reducing the number of invalid cardiac cycle measurements. PMID:21999764
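    A much-simplified illustration of autocorrelation-based interval estimation (not the authors' three-step algorithm) is given below: the autocorrelation of a short window of a synthetic Doppler-envelope-like signal is searched for its first prominent peak inside a physiological lag range, and the peak lag gives the cardiac cycle duration. Sampling rate, window length and search range are assumed values.

```python
import numpy as np
from scipy.signal import find_peaks

fs = 1000.0                                    # envelope sampling rate, Hz
rng = np.random.default_rng(7)
t = np.arange(0, 4.0, 1 / fs)
fhr_hz = 140 / 60.0                            # simulate a 140 bpm heart rate
envelope = np.abs(np.sin(np.pi * fhr_hz * t)) ** 8 + 0.2 * rng.normal(size=t.size)

win = envelope[: int(2.0 * fs)]                # 2 s autocorrelation window
win = win - win.mean()
ac = np.correlate(win, win, mode="full")[win.size - 1:]
ac /= ac[0]

lo, hi = int(fs * 60 / 240), int(fs * 60 / 90)     # accept 90-240 bpm
peaks, _ = find_peaks(ac[lo:hi])
lag = lo + peaks[np.argmax(ac[lo:hi][peaks])]      # most prominent peak in the range
print("estimated FHR: %.1f bpm (true 140)" % (60.0 * fs / lag))
```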

  15. Towards Intelligent Interpretation of Low Strain Pile Integrity Testing Results Using Machine Learning Techniques.

    PubMed

    Cui, De-Mi; Yan, Weizhong; Wang, Xiao-Quan; Lu, Lie-Min

    2017-10-25

    Low strain pile integrity testing (LSPIT), due to its simplicity and low cost, is one of the most popular NDE methods used in pile foundation construction. While performing LSPIT in the field is generally quite simple and quick, determining the integrity of the test piles by analyzing and interpreting the test signals (reflectograms) is still a manual process performed by experienced experts only. For foundation construction sites where the number of piles to be tested is large, it may take days before the expert can complete interpreting all of the piles and delivering the integrity assessment report. Techniques that can automate test signal interpretation, thus shortening LSPIT's turnaround time, are of great business value and in great demand. Motivated by this need, in this paper, we develop a computer-aided reflectogram interpretation (CARI) methodology that can interpret a large number of LSPIT signals quickly and consistently. The methodology, built on advanced signal processing and machine learning technologies, can be used to assist the experts in performing both qualitative and quantitative interpretation of LSPIT signals. Specifically, the methodology can ease experts' interpretation burden by screening all test piles quickly and identifying a small number of suspected piles for experts to perform manual, in-depth interpretation. We demonstrate the methodology's effectiveness using the LSPIT signals collected from a number of real-world pile construction sites. The proposed methodology can potentially enhance LSPIT and make it even more efficient and effective in quality control of deep foundation construction.

  16. Error reduction study employing a pseudo-random binary sequence for use in acoustic pyrometry of gases

    NASA Astrophysics Data System (ADS)

    Ewan, B. C. R.; Ireland, S. N.

    2000-12-01

    Acoustic pyrometry uses the temperature dependence of the speed of sound in materials to measure temperature. This is normally achieved by measuring the transit time of a sound signal over a known path length and applying the material relation between temperature and velocity to extract an "average" temperature. Sources of error associated with the measurement of mean transit time are discussed for implementing the technique in gases, one of the principal causes being background noise in typical industrial environments. A number of transmitted-signal and processing strategies which can be used in this area are examined, and the expected error in mean transit time associated with each technique is quantified. Transmitted signals included pulses, pure frequencies, chirps, and pseudorandom binary sequences (PRBS), while processing involved edge detection and correlation. Errors arise through the misinterpretation of the positions of edge arrivals or correlation peaks due to instantaneous deviations associated with background noise, and these become more severe as signal-to-noise amplitude ratios decrease. Population errors in the mean transit time are estimated for the different measurement strategies and it is concluded that PRBS combined with correlation can provide the lowest errors when operating in high-noise environments. The operation of an instrument based on PRBS transmitted signals is described and test results under controlled noise conditions are presented. These confirm the value of the strategy and demonstrate that measurements can be made with signal-to-noise amplitude ratios down to 0.5.
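    The PRBS-plus-correlation strategy is straightforward to simulate. In the hedged sketch below, a maximal-length sequence is "transmitted", delayed, and buried in noise at an amplitude signal-to-noise ratio of 0.5, and the transit time is recovered from the position of the cross-correlation peak; the sequence length, sampling rate and delay are arbitrary example values.

```python
import numpy as np
from scipy.signal import max_len_seq

rng = np.random.default_rng(8)
fs = 50_000.0
prbs = 2.0 * max_len_seq(10)[0] - 1.0          # 1023-chip +/-1 maximal-length sequence
delay = 317                                    # true transit time in samples

received = np.zeros(4096)
received[delay:delay + prbs.size] += prbs      # delayed copy of the transmitted PRBS
received += 2.0 * rng.normal(size=received.size)   # amplitude SNR of 0.5

xc = np.correlate(received, prbs, mode="valid")
est = int(np.argmax(xc))                       # correlation peak marks the transit time
print("true delay %d samples, estimated %d (%.3f ms at fs=%g Hz)"
      % (delay, est, 1e3 * est / fs, fs))
```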

  17. Equalizer design techniques for dispersive cables with application to the SPS wideband kicker

    NASA Astrophysics Data System (ADS)

    Platt, Jason; Hofle, Wolfgang; Pollock, Kristin; Fox, John

    2017-10-01

    A wide-band vertical instability feedback control system in development at CERN requires 1-1.5 GHz of bandwidth for the entire processing chain, from the beam pickups through the feedback signal digital processing to the back-end power amplifiers and kicker structures. Dispersive effects in cables, amplifiers, pickup and kicker elements can result in distortions in the time domain signal as it proceeds through the processing system, and deviations from linear phase response reduce the allowable bandwidth for the closed-loop feedback system. We have developed an equalizer analog circuit that compensates for these dispersive effects. Here we present a design technique for the construction of an analog equalizer that incorporates the effect of parasitic circuit elements in the equalizer to increase the fidelity of the implemented equalizer. Finally, we show results from the measurement of an assembled backend equalizer that corrects for dispersive elements in the cables over a bandwidth of 10-1000 MHz.

  18. Simultaneous spectrophotometric determination of indacaterol and glycopyrronium in a newly approved pharmaceutical formulation using different signal processing techniques of ratio spectra

    NASA Astrophysics Data System (ADS)

    Abdel Ghany, Maha F.; Hussein, Lobna A.; Magdy, Nancy; Yamani, Hend Z.

    2016-03-01

    Three spectrophotometric methods have been developed and validated for the determination of indacaterol (IND) and glycopyrronium (GLY) in their binary mixtures and in a novel pharmaceutical dosage form. The proposed methods are considered to be the first to determine the investigated drugs simultaneously. The developed methods are based on different signal processing techniques applied to ratio spectra, namely Numerical Differentiation (ND), Savitzky-Golay (SG) and Fourier Transform (FT). The developed methods showed linearity over the concentration ranges 1-30 and 10-35 μg/mL for IND and GLY, respectively. The accuracy, calculated as percentage recovery, was in the range of 99.00%-100.49%, with low RSD values (<1.5%), demonstrating the excellent accuracy of the proposed methods. The developed methods were proved to be specific, sensitive and precise for quality control of the investigated drugs in their pharmaceutical dosage form without the need for any separation process.
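    The ratio-spectra idea with a Savitzky-Golay derivative can be demonstrated on simulated spectra: dividing the mixture spectrum by the unit spectrum of one component turns that component into a constant, so the derivative of the ratio depends only on the other component and a single standard provides the calibration. The Gaussian bands, concentrations and SG parameters below are invented for illustration and are not the reported method parameters.

```python
import numpy as np
from scipy.signal import savgol_filter

wl = np.linspace(225, 270, 451)                       # wavelength axis, nm
band = lambda c, w: np.exp(-0.5 * ((wl - c) / w) ** 2)
eps_ind = band(259, 12) + 0.4 * band(295, 10)         # made-up unit absorption spectra
eps_gly = band(236, 9)

c_ind, c_gly = 12.0, 20.0                             # "unknown" concentrations, ug/mL
rng = np.random.default_rng(9)
mixture = c_ind * eps_ind + c_gly * eps_gly + rng.normal(0, 1e-3, wl.size)

ratio = mixture / eps_gly                             # divide by the GLY unit spectrum
deriv = savgol_filter(ratio, window_length=21, polyorder=3, deriv=1)  # SG derivative removes the constant GLY term

# single-point calibration with a 5 ug/mL IND standard processed identically
std_ratio = (5.0 * eps_ind) / eps_gly
std_deriv = savgol_filter(std_ratio, window_length=21, polyorder=3, deriv=1)
i = np.argmax(np.abs(std_deriv))                      # measure where the standard responds most
print("estimated IND concentration: %.2f ug/mL (true %.1f)"
      % (5.0 * deriv[i] / std_deriv[i], c_ind))
```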

  19. Paper-based microreactor array for rapid screening of cell signaling cascades.

    PubMed

    Huang, Chia-Hao; Lei, Kin Fong; Tsang, Ngan-Ming

    2016-08-07

    Investigation of cell signaling pathways is important for the study of the pathogenesis of cancer. However, the related operations used in these studies are time consuming and labor intensive. Thus, the development of effective therapeutic strategies may be hampered. In this work, gel-free cell culture and subsequent immunoassay have been successfully integrated and conducted in a paper-based microreactor array. Study of the activation level of different kinases of cells stimulated by different conditions, i.e., IL-6 stimulation, starvation, and hypoxia, was demonstrated. Moreover, rapid screening of cell signaling cascades after stimulation with HGF, doxorubicin, and UVB irradiation, respectively, was conducted to simultaneously screen 40 kinases and transcription factors. Activation of multiple signaling pathways could be identified and the correlation between signaling pathways was discussed to provide further information for investigating the entire signaling network. The present technique integrates most of the tedious operations on a single paper substrate, reduces sample and reagent consumption, and shortens the time required by the entire process. Therefore, it provides a first-tier rapid screening tool for the study of complicated signaling cascades. It is expected that the technique can be developed into a routine protocol for conventional biological research laboratories.

  20. Signal processing methods for in-situ creep specimen monitoring

    NASA Astrophysics Data System (ADS)

    Guers, Manton J.; Tittmann, Bernhard R.

    2018-04-01

    Previous work investigated using guided waves for monitoring creep deformation during accelerated life testing. The basic objective was to relate observed changes in the time-of-flight to changes in the environmental temperature and specimen gage length. The work presented in this paper investigated several signal processing strategies for possible application in the in-situ monitoring system. Signal processing methods for both group velocity (wave-packet envelope) and phase velocity (peak tracking) time-of-flight were considered. Although the Analytic Envelope found via the Hilbert transform is commonly applied for group velocity measurements, erratic behavior in the indicated time-of-flight was observed when this technique was applied to the in-situ data. The peak tracking strategies tested had generally linear trends, and tracking local minima in the raw waveform ultimately showed the most consistent results.
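    The two time-of-flight readings compared in this record can be reproduced on a synthetic tone burst, as in the hedged sketch below: a group-type arrival is taken from the peak of the analytic (Hilbert) envelope, and a phase-type arrival from tracking the largest raw-waveform peak near it. The burst frequency, arrival time and noise level are arbitrary assumptions.

```python
import numpy as np
from scipy.signal import hilbert

fs = 10e6
t = np.arange(0, 400e-6, 1 / fs)
f0, t0 = 200e3, 150e-6                               # tone-burst centre frequency and arrival
burst = np.exp(-((t - t0) / 15e-6) ** 2) * np.cos(2 * np.pi * f0 * (t - t0))
signal = burst + np.random.default_rng(10).normal(0, 0.02, t.size)

envelope = np.abs(hilbert(signal))                   # analytic envelope
tof_group = t[np.argmax(envelope)]                   # wave-packet (group) arrival

# peak tracking: follow the largest positive cycle of the raw waveform
win = np.abs(t - tof_group) < 1.5 / f0               # search near the envelope peak
tof_phase = t[win][np.argmax(signal[win])]

print("envelope ToF %.3f us, peak-tracking ToF %.3f us"
      % (tof_group * 1e6, tof_phase * 1e6))
```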

  1. Computational Analysis and Simulation of Empathic Behaviors: a Survey of Empathy Modeling with Behavioral Signal Processing Framework.

    PubMed

    Xiao, Bo; Imel, Zac E; Georgiou, Panayiotis; Atkins, David C; Narayanan, Shrikanth S

    2016-05-01

    Empathy is an important psychological process that facilitates human communication and interaction. Enhancement of empathy has profound significance in a range of applications. In this paper, we review emerging directions of research on computational analysis of empathy expression and perception as well as empathic interactions, including their simulation. We summarize the work on empathic expression analysis by the targeted signal modalities (e.g., text, audio, and facial expressions). We categorize empathy simulation studies into theory-based emotion space modeling or application-driven user and context modeling. We summarize challenges in computational study of empathy including conceptual framing and understanding of empathy, data availability, appropriate use and validation of machine learning techniques, and behavior signal processing. Finally, we propose a unified view of empathy computation and offer a series of open problems for future research.

  2. Ultra-high throughput real-time instruments for capturing fast signals and rare events

    NASA Astrophysics Data System (ADS)

    Buckley, Brandon Walter

    Wide-band signals play important roles in the most exciting areas of science, engineering, and medicine. To keep up with the demands of exploding internet traffic, modern data centers and communication networks are employing increasingly faster data rates. Wide-band techniques such as pulsed radar jamming and spread spectrum frequency hopping are used on the battlefield to wrestle control of the electromagnetic spectrum. Neurons communicate with each other using transient action potentials that last for only milliseconds at a time. And in the search for rare cells, biologists flow large populations of cells single file down microfluidic channels, interrogating them one-by-one, tens of thousands of times per second. Studying and enabling such high-speed phenomena pose enormous technical challenges. For one, parasitic capacitance inherent in analog electrical components limits their response time. Additionally, converting these fast analog signals to the digital domain requires enormous sampling speeds, which can lead to significant jitter and distortion. State-of-the-art imaging technologies, essential for studying biological dynamics and cells in flow, are limited in speed and sensitivity by finite charge transfer and read rates, and by the small numbers of photo-electrons accumulated in short integration times. And finally, ultra-high throughput real-time digital processing is required at the backend to analyze the streaming data. In this thesis, I discuss my work in developing real-time instruments, employing ultrafast optical techniques, which overcome some of these obstacles. In particular, I use broadband dispersive optics to slow down fast signals to speeds accessible to high-bit depth digitizers and signal processors. I also apply telecommunication multiplexing techniques to boost the speeds of confocal fluorescence microscopy. The photonic time stretcher (TiSER) uses dispersive Fourier transformation to slow down analog signals before digitization and processing. The act of time-stretching effectively boosts the performance of the back-end electronics and digital signal processors. The slowed down signals reach the back-end electronics with reduced bandwidth, and are therefore less affected by high-frequency roll-off and distortion. Time-stretching also increases the effective sampling rate of analog-to-digital converters and reduces aperture jitter, thereby improving resolution. Finally, the instantaneous throughputs of digital signal processors are enhanced by the stretch factor to otherwise unattainable speeds. Leveraging these unique capabilities, TiSER becomes the ideal tool for capturing high-speed signals and characterizing rare phenomena. For this thesis, I have developed techniques to improve the spectral efficiency, bandwidth, and resolution of TiSER using polarization multiplexing, all-optical modulation, and coherent dispersive Fourier transformation. To reduce the latency and improve the data handling capacity, I have also designed and implemented a real-time digital signal processing electronic backend, achieving 1.5 tera-bit per second instantaneous processing throughput. Finally, I will present results from experiments highlighting TiSER's impact in real-world applications. Confocal fluorescence microscopy is the most widely used method for unveiling the molecular composition of biological specimens. 
However, the weak optical emission of fluorescent probes and the tradeoff between imaging speed and sensitivity is problematic for acquiring blur-free images of fast phenomena and cells flowing at high speed. Here I introduce a new fluorescence imaging modality, which leverages techniques from wireless communication to reach record pixel and frame rates. Termed Fluorescence Imaging using Radio-frequency tagged Emission (FIRE), this new imaging modality is capable of resolving never before seen dynamics in living cells - such as action potentials in neurons and metabolic waves in astrocytes - as well as performing high-content image assays of cells and particles in high-speed flow.

  3. Wab-InSAR: a new wavelet based InSAR time series technique applied to volcanic and tectonic areas

    NASA Astrophysics Data System (ADS)

    Walter, T. R.; Shirzaei, M.; Nankali, H.; Roustaei, M.

    2009-12-01

    Modern geodetic techniques such as InSAR and GPS provide valuable observations of the deformation field. Because of the variety of environmental interferences (e.g., atmosphere, topography distortion) and incompleteness of the models (assumption of the linear model for deformation), those observations are usually tainted by various systematic and random errors. Therefore we develop and test new methods to identify and filter unwanted periodic or episodic artifacts to obtain accurate and precise deformation measurements. Here we present and implement a new wavelet based InSAR (Wab-InSAR) time series approach. Because wavelets are excellent tools for identifying hidden patterns and capturing transient signals, we utilize wavelet functions for reducing the effect of atmospheric delay and digital elevation model inaccuracies. Wab-InSAR is a model free technique, reducing digital elevation model errors in individual interferograms using a 2D spatial Legendre polynomial wavelet filter. Atmospheric delays are reduced using a 3D spatio-temporal wavelet transform algorithm and a novel technique for pixel selection. We apply Wab-InSAR to several targets, including volcano deformation processes at Hawaii Island, and mountain building processes in Iran. Both targets are chosen to investigate large and small amplitude signals, variable and complex topography and atmospheric effects. In this presentation we explain different steps of the technique, validate the results by comparison to other high resolution processing methods (GPS, PS-InSAR, SBAS) and discuss the geophysical results.

  4. Efficient dynamic events discrimination technique for fiber distributed Brillouin sensors.

    PubMed

    Galindez, Carlos A; Madruga, Francisco J; Lopez-Higuera, Jose M

    2011-09-26

    A technique to detect real-time variations of temperature or strain in Brillouin-based distributed fiber sensors is proposed and investigated in this paper. The technique is based on anomaly detection methods such as the RX algorithm. Detection and isolation of dynamic events from static ones are demonstrated by proper processing of the Brillouin gain values obtained using a standard BOTDA system. Results also suggest that better signal-to-noise ratio, dynamic range and spatial resolution can be obtained. For a pump pulse of 5 ns the spatial resolution is enhanced (from 0.541 m obtained by direct gain measurement to 0.418 m obtained with the technique presented here), since the analysis is concentrated on the variation of the Brillouin gain and not only on the averaging of the signal over time. © 2011 Optical Society of America
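
    The RX (Reed-Xiaoli) detector referred to above can be summarized as a Mahalanobis-distance test against background statistics. The minimal sketch below uses synthetic data; the block sizes, injected event, and 99th-percentile threshold are assumptions, and this is not the authors' exact processing chain.

      import numpy as np

      def rx_scores(observations):
          """Reed-Xiaoli (RX) anomaly score: squared Mahalanobis distance of each
          observation vector from the global background mean and covariance.
          observations: (n_observations, n_features) array."""
          centered = observations - observations.mean(axis=0)
          cov = np.cov(centered, rowvar=False) + 1e-9 * np.eye(observations.shape[1])
          cov_inv = np.linalg.inv(cov)
          return np.einsum("ij,jk,ik->i", centered, cov_inv, centered)

      # Synthetic example: Brillouin gain over 200 acquisitions at 50 fiber
      # positions; a dynamic strain event perturbs positions 30-32 in the last scans.
      rng = np.random.default_rng(1)
      gain = rng.normal(1.0, 0.02, size=(200, 50))
      gain[190:, 30:33] += 0.15                 # hypothetical dynamic event

      scores = rx_scores(gain)
      threshold = np.percentile(scores, 99)     # illustrative threshold choice
      print("acquisitions flagged as dynamic:", np.nonzero(scores > threshold)[0])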

  5. A leakage-free resonance sparse decomposition technique for bearing fault detection in gearboxes

    NASA Astrophysics Data System (ADS)

    Osman, Shazali; Wang, Wilson

    2018-03-01

    Most rotating machinery deficiencies are related to defects in rolling element bearings. Reliable bearing fault detection remains a challenging task, especially for bearings in gearboxes, as bearing-defect-related features are nonstationary and modulated by gear mesh vibration. A new leakage-free resonance sparse decomposition (LRSD) technique is proposed in this paper for early bearing fault detection in gearboxes. In the proposed LRSD technique, a leakage-free filter is suggested to remove strong gear mesh and shaft running signatures. A kurtosis and cosine distance measure is suggested to select the appropriate redundancy r and quality factor Q. The signal residual is processed by signal sparse decomposition for highpass and lowpass resonance analysis to extract representative features for bearing fault detection. The effectiveness of the proposed technique is verified by a succession of experimental tests corresponding to different gearbox and bearing conditions.

  6. Electro-optic Mach-Zehnder Interferometer based Optical Digital Magnitude Comparator and 1's Complement Calculator

    NASA Astrophysics Data System (ADS)

    Kumar, Ajay; Raghuwanshi, Sanjeev Kumar

    2016-06-01

    The optical switching activity is one of the most essential phenomena in the optical domain. Electro-optic effect-based switching can be applied to generate effective combinational and sequential logic circuits. Digital computation in the optical domain inherits considerable advantages of optical communication technology, e.g. immunity to electromagnetic interference, compact size, signal security, parallel computing and larger bandwidth. The paper describes an efficient technique to implement a single-bit magnitude comparator and a 1's complement calculator using the electro-optic effect. The proposed techniques are simulated in MATLAB, and their suitability is verified using the highly reliable Opti-BPM software. The circuits are analyzed in order to specify optimized device parameters for performance-affecting quantities, e.g. crosstalk, extinction ratio, and signal losses through the curved and straight waveguide sections.

  7. A cost-effective line-based light-balancing technique using adaptive processing.

    PubMed

    Hsia, Shih-Chang; Chen, Ming-Huei; Chen, Yu-Min

    2006-09-01

    Camera imaging systems are widely used; however, the displayed image often exhibits an unequal light distribution. This paper presents novel light-balancing techniques based on adaptive signal processing to compensate for uneven illumination. For text image processing, we first estimate the background level and then process each pixel with a nonuniform gain. This algorithm can balance the light distribution while keeping a high contrast in the image. For graph image processing, adaptive section control using a piecewise nonlinear gain is proposed to equalize the histogram. Simulations show that the light-balancing performance is better than that of other methods. Moreover, we employ line-based processing to efficiently reduce the memory requirement and the computational cost, making the approach applicable in real-time systems.
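
    A minimal sketch of the background-estimation-plus-nonuniform-gain idea for text images is given below (Python/NumPy/SciPy; the smoothing window, target level, and synthetic test page are assumptions, and the authors' actual line-based algorithm and graph-image branch are not reproduced).

      import numpy as np
      from scipy.ndimage import uniform_filter1d

      def line_based_light_balance(image, target=200.0, window=64):
          """For each image row, estimate a slowly varying background level and
          apply a nonuniform gain pulling the background toward a common target."""
          img = image.astype(float)
          balanced = np.empty_like(img)
          for r, row in enumerate(img):
              background = uniform_filter1d(row, size=window, mode="nearest")
              gain = target / np.maximum(background, 1.0)    # avoid divide-by-zero
              balanced[r] = np.clip(row * gain, 0, 255)
          return balanced.astype(np.uint8)

      # Synthetic text-like page with a left-to-right illumination falloff.
      rng = np.random.default_rng(0)
      page = np.full((128, 256), 220.0)
      page[rng.random((128, 256)) < 0.05] = 40.0             # dark "text" pixels
      page *= np.linspace(1.0, 0.45, 256)[None, :]           # uneven lighting

      balanced = line_based_light_balance(page)
      cols = [0, 128, 255]
      print("column means before:", page.mean(axis=0)[cols].round(1))
      print("column means after: ", balanced.mean(axis=0)[cols].round(1))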

  8. Velocity interferometer signal de-noising using modified Wiener filter

    NASA Astrophysics Data System (ADS)

    Rav, Amit; Joshi, K. D.; Roy, Kallol; Kaushik, T. C.

    2017-05-01

    The accuracy and precision of the non-contact velocity interferometer system for any reflector (VISAR) depend not only on good optical design and a linear optical-to-electrical conversion system, but also on accurate and robust post-processing techniques. The performance of these techniques, such as the phase unwrapping algorithm, depends on the signal-to-noise ratio (SNR) of the recorded signal. In the present work, a novel method of improving the SNR of the recorded VISAR signal, based on knowledge of the noise characteristic of the signal conversion and recording system, is presented. The proposed method uses a modified Wiener filter, for which the signal power spectrum estimate is obtained using a spectral subtraction method (SSM), and the noise power spectrum estimate is obtained by averaging the recorded signal during the period when no target movement is expected. Since the noise power spectrum estimate is dynamic in nature, and is obtained for each experimental record individually, the resulting improvement in signal quality is high. The proposed method is applied to simulated standard signals, and is not only found to be better than the SSM, but is also less sensitive to the selection of the noise floor during signal power spectrum estimation. Finally, the proposed method is applied to the recorded experimental signal and an improvement in the SNR is reported.
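
    The following is a simplified sketch of the modified-Wiener idea (not the authors' exact estimator): the noise power spectrum is taken from a pre-trigger segment with no target motion, the signal power spectrum is obtained by spectral subtraction, and the resulting Wiener gain is applied in the frequency domain. The sampling rate, chirp test signal, and segment lengths are assumptions.

      import numpy as np

      def wiener_denoise(recorded, quiet_segment):
          """Wiener filtering with the noise PSD estimated from a quiet (pre-motion)
          segment and the signal PSD estimated by spectral subtraction."""
          n = len(recorded)
          # Scale the short-segment noise PSD so it is comparable to a length-n record.
          noise_psd = np.abs(np.fft.rfft(quiet_segment, n)) ** 2 * (n / len(quiet_segment))
          recorded_psd = np.abs(np.fft.rfft(recorded)) ** 2
          signal_psd = np.maximum(recorded_psd - noise_psd, 0.0)   # spectral subtraction
          gain = signal_psd / (signal_psd + noise_psd + 1e-12)     # Wiener gain
          return np.fft.irfft(gain * np.fft.rfft(recorded), n)

      # Synthetic record: a quiet pre-trigger interval followed by a noisy chirp.
      rng = np.random.default_rng(2)
      fs, n = 1e9, 4096
      t = np.arange(n) / fs
      clean = np.where(t > 1e-6, np.sin(2 * np.pi * (20e6 + 40e12 * t) * t), 0.0)
      recorded = clean + 0.5 * rng.standard_normal(n)

      denoised = wiener_denoise(recorded, quiet_segment=recorded[: int(0.8e-6 * fs)])
      print("RMS error before filtering:", round(float(np.sqrt(np.mean((recorded - clean) ** 2))), 3))
      print("RMS error after filtering: ", round(float(np.sqrt(np.mean((denoised - clean) ** 2))), 3))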

  9. Electromagnetic Test-Facility characterization: an identification approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zicker, J.E.; Candy, J.V.

    The response of an object subjected to high-energy, transient electromagnetic (EM) fields, sometimes called electromagnetic pulses (EMP), is an important issue in the survivability of electronic systems (e.g., aircraft), especially when the field has been generated by a high altitude nuclear burst. The characterization of transient response information is a matter of national concern. In this report we discuss techniques to: (1) improve signal processing at a test facility; and (2) parameterize a particular object response. First, we discuss the application of identification-based signal processing techniques to improve signal levels at the Lawrence Livermore National Laboratory (LLNL) EM Transient Test Facility. We identify models of test equipment and then use these models to deconvolve the input/output sequences for the object under test. A parametric model of the object is identified from this data. The model can be used to extrapolate the response to threat-level EMP. Also discussed is the development of a facility simulator (EMSIM) useful for experimental design and calibration and a deconvolution algorithm (DECONV) useful for removing probe effects from the measured data.

  10. Modified ADALINE algorithm for harmonic estimation and selective harmonic elimination in inverters

    NASA Astrophysics Data System (ADS)

    Vasumathi, B.; Moorthi, S.

    2011-11-01

    In digital signal processing, algorithms are very well developed for the estimation of harmonic components. In power electronic applications, an objective like fast response of a system is of primary importance. An effective method for the estimation of instantaneous harmonic components, along with a conventional harmonic elimination technique, is presented in this article. The primary function is to eliminate undesirable higher harmonic components from the selected signal (current or voltage), and it requires only knowledge of the frequency of the component to be eliminated. A signal processing technique using a modified ADALINE algorithm has been proposed for harmonic estimation. The proposed method remains effective as it converges to a minimum error and yields a finer estimation. A conventional control based on pulse-width modulation for selective harmonic elimination is used to eliminate harmonic components after their estimation. This method can be applied to a wide range of equipment. The validity of the proposed method for estimating and eliminating voltage harmonics is proved with a dc/ac inverter as a simulation example. The results are then compared with the existing ADALINE algorithm to illustrate its effectiveness.
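
    A minimal ADALINE sketch is shown below (the standard Widrow-Hoff LMS update, not the authors' modified algorithm; the learning rate, sampling rate, and test waveform are assumptions). The regressors are sine/cosine pairs at the harmonics of interest, so the converged weights give the harmonic amplitudes directly.

      import numpy as np

      def adaline_harmonics(signal, fs, f0, harmonics, lr=0.05):
          """Track harmonic amplitudes with an ADALINE: the regressors are sine and
          cosine terms at each harmonic of f0 and the weights follow the
          Widrow-Hoff (LMS) update. Returns the final amplitude per harmonic."""
          w = np.zeros(2 * len(harmonics))
          for k in range(len(signal)):
              tk = k / fs
              x = np.concatenate([[np.sin(2 * np.pi * h * f0 * tk),
                                   np.cos(2 * np.pi * h * f0 * tk)] for h in harmonics])
              error = signal[k] - w @ x
              w += lr * error * x                       # LMS weight update
          return {h: round(float(np.hypot(w[2 * i], w[2 * i + 1])), 3)
                  for i, h in enumerate(harmonics)}

      # Inverter-like test waveform: 50 Hz fundamental plus 5th and 7th harmonics.
      fs, f0 = 10_000, 50.0
      t = np.arange(0, 0.5, 1 / fs)
      v = (1.0 * np.sin(2 * np.pi * f0 * t)
           + 0.20 * np.sin(2 * np.pi * 5 * f0 * t)
           + 0.14 * np.sin(2 * np.pi * 7 * f0 * t))
      print(adaline_harmonics(v, fs, f0, harmonics=[1, 5, 7]))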

  11. Motor imaginary-based brain-machine interface design using programmable logic controllers for the disabled.

    PubMed

    Jeyabalan, Vickneswaran; Samraj, Andrews; Loo, Chu Kiong

    2010-10-01

    Aiming at the implementation of brain-machine interfaces (BMI) for the aid of disabled people, this paper presents a system design for real-time communication between the BMI and programmable logic controllers (PLCs) to control an electrical actuator that could be used in devices to help the disabled. Motor imaginary signals extracted from the brain’s motor cortex using an electroencephalogram (EEG) were used as a control signal. The EEG signals were pre-processed by means of adaptive recursive band-pass filtration (ARBF) and classified using simplified fuzzy adaptive resonance theory mapping (ARTMAP), in which the classified signals are then translated into control signals used for machine control via the PLC. A real-time test system was designed using MATLAB for signal processing, KEP-Ware V4 OLE for process control (OPC), a wireless local area network router, an Omron Sysmac CPM1 PLC and a 5 V/0.3 A motor. This paper explains the signal processing techniques, the PLC's hardware configuration, the OPC configuration and real-time data exchange between MATLAB and the PLC using the MATLAB OPC toolbox. The test results indicate that real-time data exchange can be attained between the BMI and the PLC through the OPC server, and prove that it is an effective and feasible method to be applied to devices such as wheelchairs or electronic equipment.

  12. A Novel Technique Applying Spectral Estimation to Johnson Noise Thermometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ezell, N. Dianne Bull; Britton, Chuck; Ericson, Nance

    Johnson noise thermometry is one of many important measurement techniques used to monitor the safety levels and stability in a nuclear reactor. However, this measurement is highly dependent on a minimal electromagnetic interference environment. Properly removing unwanted electromagnetic interference (EMI) is critical for accurate drift-free temperature measurements. The two techniques developed by Oak Ridge National Laboratory (ORNL) to remove transient and periodic EMI are briefly discussed here. Spectral estimation is a key component in the signal processing algorithm used for EMI removal and temperature calculation. The cross-power spectral density is a key component in the Johnson noise temperature computation. Applying either technique requires the simple addition of electronics and signal processing to existing resistive thermometers. With minimal installation changes, the system discussed here can be installed on existing nuclear power plants. The Johnson noise system developed is tested at three locations: ORNL, Sandia National Laboratory, and the Tennessee Valley Authority’s Kingston Fossil Plant. Each of these locations enabled improvement of the EMI removal algorithm. Finally, the conclusions drawn from the results at each of these locations are discussed, as well as possible future work.
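
    To illustrate why the cross-power spectral density matters here (a hedged, schematic simulation in arbitrary units, not the ORNL electronics or algorithm): two measurement channels share the resistor's Johnson noise but add independent amplifier noise, so averaging the cross-spectrum retains the common, temperature-dependent part while the uncorrelated part averages down.

      import numpy as np
      from scipy.signal import csd, welch

      # Two measurement channels share the resistor's Johnson noise (arbitrary
      # units) but add independent amplifier noise of larger power.
      rng = np.random.default_rng(3)
      fs, n = 1e6, 2_000_000
      johnson = rng.standard_normal(n)                 # common thermal noise
      ch1 = johnson + 2.0 * rng.standard_normal(n)     # channel 1 amplifier chain
      ch2 = johnson + 2.0 * rng.standard_normal(n)     # channel 2 amplifier chain

      f, p11 = welch(ch1, fs=fs, nperseg=4096)
      _, p12 = csd(ch1, ch2, fs=fs, nperseg=4096)

      band = (f > 1e4) & (f < 4e5)
      print(f"single-channel PSD, in-band mean:  {p11[band].mean():.2e}")
      print(f"cross-PSD magnitude, in-band mean: {np.abs(p12[band]).mean():.2e}")
      # With more averaging, the cross-spectrum approaches the Johnson-only level.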

  13. A Novel Technique Applying Spectral Estimation to Johnson Noise Thermometry

    DOE PAGES

    Ezell, N. Dianne Bull; Britton, Chuck; Ericson, Nance; ...

    2018-03-30

    Johnson noise thermometry is one of many important measurement techniques used to monitor the safety levels and stability in a nuclear reactor. However, this measurement is highly dependent on a minimal electromagnetic interference environment. Properly removing unwanted electromagnetic interference (EMI) is critical for accurate drift-free temperature measurements. The two techniques developed by Oak Ridge National Laboratory (ORNL) to remove transient and periodic EMI are briefly discussed here. Spectral estimation is a key component in the signal processing algorithm used for EMI removal and temperature calculation. The cross-power spectral density is a key component in the Johnson noise temperature computation. Applying either technique requires the simple addition of electronics and signal processing to existing resistive thermometers. With minimal installation changes, the system discussed here can be installed on existing nuclear power plants. The Johnson noise system developed is tested at three locations: ORNL, Sandia National Laboratory, and the Tennessee Valley Authority’s Kingston Fossil Plant. Each of these locations enabled improvement of the EMI removal algorithm. Finally, the conclusions drawn from the results at each of these locations are discussed, as well as possible future work.

  14. Photoacoustic spectroscopy of condensed matter

    NASA Technical Reports Server (NTRS)

    Somoano, R. B.

    1978-01-01

    Photoacoustic spectroscopy is a new analytical tool that provides a simple nondestructive technique for obtaining information about the electronic absorption spectrum of samples such as powders, semisolids, gels, and liquids. It can also be applied to samples which cannot be examined by conventional optical methods. Numerous applications of this technique in the field of inorganic and organic semiconductors, biology, and catalysis have been described. Among the advantages of photoacoustic spectroscopy, the signal is almost insensitive to light scattering by the sample and information can be obtained about nonradiative deactivation processes. Signal saturation, which can modify the intensity of individual absorption bands in special cases, is a drawback of the method.

  15. Strong and Long Makes Short: Strong-Pump Strong-Probe Spectroscopy.

    PubMed

    Gelin, Maxim F; Egorova, Dassia; Domcke, Wolfgang

    2011-01-20

    We propose a new time-domain spectroscopic technique that is based on strong pump and probe pulses. The strong-pump strong-probe (SPSP) technique provides temporal resolution that is not limited by the durations of the pump and probe pulses. By numerically exact simulations of SPSP signals for a multilevel vibronic model, we show that the SPSP signals exhibit electronic and vibrational beatings on time scales which are significantly shorter than the pulse durations. This suggests the possible application of SPSP spectroscopy for the real-time investigation of molecular processes that cannot be temporally resolved by pump-probe spectroscopy with weak pump and probe pulses.

  16. Automated processing of label-free Raman microscope images of macrophage cells with standardized regression for high-throughput analysis.

    PubMed

    Milewski, Robert J; Kumagai, Yutaro; Fujita, Katsumasa; Standley, Daron M; Smith, Nicholas I

    2010-11-19

    Macrophages represent the front lines of our immune system; they recognize and engulf pathogens or foreign particles, thus initiating the immune response. Imaging macrophages presents unique challenges, as most optical techniques require labeling or staining of the cellular compartments in order to resolve organelles, and such stains or labels have the potential to perturb the cell, particularly in cases where incomplete information exists regarding the precise cellular reaction under observation. Label-free imaging techniques such as Raman microscopy are thus valuable tools for studying the transformations that occur in immune cells upon activation, both on the molecular and organelle levels. Due to extremely low signal levels, however, Raman microscopy requires sophisticated image processing techniques for noise reduction and signal extraction. To date, efficient, automated algorithms for resolving sub-cellular features in noisy, multi-dimensional image sets have not been explored extensively. We show that hybrid z-score normalization and standard regression (Z-LSR) can highlight the spectral differences within the cell and provide image contrast dependent on spectral content. In contrast to typical Raman image processing methods using multivariate analysis, such as singular value decomposition (SVD), our implementation of the Z-LSR method can operate nearly in real-time. In spite of its computational simplicity, Z-LSR can automatically remove background and bias in the signal, improve the resolution of spatially distributed spectral differences and enable sub-cellular features to be resolved in Raman microscopy images of mouse macrophage cells. Significantly, the Z-LSR processed images automatically exhibited subcellular architectures whereas SVD, in general, requires human assistance in selecting the components of interest. The computational efficiency of Z-LSR enables automated resolution of sub-cellular features in large Raman microscopy data sets without compromise in image quality or information loss in associated spectra. These results motivate further use of label-free microscopy techniques in real-time imaging of live immune cells.
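
    The sketch below gives a simplified flavor of z-score normalization followed by least-squares regression for contrast generation (it is not a faithful reimplementation of the published Z-LSR method; the synthetic data cube, reference spectrum, and band positions are assumptions).

      import numpy as np

      def zscore_regression_map(cube, reference):
          """Z-score each pixel spectrum, then compute its least-squares regression
          coefficient against a z-scored reference spectrum.
          cube: (rows, cols, bands) array."""
          rows, cols, bands = cube.shape
          pixels = cube.reshape(-1, bands).astype(float)
          pixels = (pixels - pixels.mean(axis=1, keepdims=True)) / (pixels.std(axis=1, keepdims=True) + 1e-12)
          ref = (reference - reference.mean()) / (reference.std() + 1e-12)
          beta = pixels @ ref / (ref @ ref)            # per-pixel regression slope
          return beta.reshape(rows, cols)

      # Synthetic Raman-like cube: noisy background plus a blob enriched in a
      # hypothetical band near channel 60.
      rng = np.random.default_rng(4)
      channels = np.arange(128)
      band_shape = np.exp(-((channels - 60) / 4.0) ** 2)
      cube = rng.normal(0.0, 0.3, size=(64, 64, 128))
      yy, xx = np.mgrid[0:64, 0:64]
      blob = ((xx - 32) ** 2 + (yy - 32) ** 2) < 10 ** 2
      cube[blob] += band_shape

      contrast = zscore_regression_map(cube, reference=band_shape)
      print("mean score inside blob: ", round(float(contrast[blob].mean()), 2))
      print("mean score outside blob:", round(float(contrast[~blob].mean()), 2))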

  17. Review of Microwave Photonics Technique to Generate the Microwave Signal by Using Photonics Technology

    NASA Astrophysics Data System (ADS)

    Raghuwanshi, Sanjeev Kumar; Srivastav, Akash

    2017-12-01

    Microwave photonics systems provide the high-bandwidth capabilities of fiber-optic systems and also offer interconnect transmission properties that are virtually independent of length. The low-loss, wide-bandwidth capability of optoelectronic systems makes them attractive for the transmission and processing of microwave signals, while the development of high-capacity optical communication systems has required the use of microwave techniques in optical transmitters and receivers. These two strands have led to the development of the research area of microwave photonics. Microwave photonics can therefore be considered the field that studies the interaction between microwave and optical waves for applications such as communications, radar, sensors and instrumentation. In this paper we thoroughly review microwave signal generation techniques based on photonics technology.

  18. A Background Noise Reduction Technique Using Adaptive Noise Cancellation for Microphone Arrays

    NASA Technical Reports Server (NTRS)

    Spalt, Taylor B.; Fuller, Christopher R.; Brooks, Thomas F.; Humphreys, William M., Jr.; Brooks, Thomas F.

    2011-01-01

    Background noise in wind tunnel environments poses a challenge to acoustic measurements due to possible low or negative Signal to Noise Ratios (SNRs) present in the testing environment. This paper overviews the application of time domain Adaptive Noise Cancellation (ANC) to microphone array signals with an intended application of background noise reduction in wind tunnels. An experiment was conducted to simulate background noise from a wind tunnel circuit measured by an out-of-flow microphone array in the tunnel test section. A reference microphone was used to acquire a background noise signal which interfered with the desired primary noise source signal at the array. The technique's efficacy was investigated using frequency spectra from the array microphones, array beamforming of the point source region, and subsequent deconvolution using the Deconvolution Approach for the Mapping of Acoustic Sources (DAMAS) algorithm. Comparisons were made with the conventional SNR-improvement techniques of spectral and cross-spectral matrix subtraction. The method was seen to recover the primary signal level in SNRs as low as -29 dB and outperform the conventional methods. A second processing approach using the center array microphone as the noise reference was investigated for more general applicability of the ANC technique. It outperformed the conventional methods at the -29 dB SNR but yielded less accurate results when coherence over the array dropped. This approach could possibly improve conventional testing methodology but must be investigated further under more realistic testing conditions.
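
    A minimal time-domain ANC sketch in the spirit described above is given below (standard LMS adaptation; the filter length, step size, coupling path, and test signals are assumptions, and the experiment's actual processing is not reproduced): an FIR filter driven by the noise reference is adapted so that its output cancels the noise component of the primary microphone.

      import numpy as np

      def lms_anc(primary, reference, taps=64, mu=1e-3):
          """Adaptive noise cancellation: an FIR filter driven by the noise
          reference is adapted with LMS so its output matches the noise in the
          primary channel; the error output is the recovered signal."""
          w = np.zeros(taps)
          cleaned = np.zeros_like(primary)
          for n in range(taps, len(primary)):
              x = reference[n - taps + 1:n + 1][::-1]   # latest reference samples
              cleaned[n] = primary[n] - w @ x           # error = signal estimate
              w += mu * cleaned[n] * x                  # LMS weight update
          return cleaned

      rng = np.random.default_rng(5)
      fs, n = 25_600, 50_000
      t = np.arange(n) / fs
      tone = 0.1 * np.sin(2 * np.pi * 1_000 * t)        # weak source of interest
      background = rng.standard_normal(n)               # background noise reference
      coupled = np.convolve(background, [0.6, 0.3, 0.1])[:n]   # noise path to array mic
      primary = tone + coupled

      cleaned = lms_anc(primary, background)
      before = 10 * np.log10(np.var(tone) / np.var(coupled))
      after = 10 * np.log10(np.var(tone) / np.var(cleaned[n // 2:] - tone[n // 2:]))
      print(f"SNR before ANC: {before:.1f} dB, after ANC: {after:.1f} dB")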

  19. Surface-enhanced FAST CARS: en route to quantum nano-biophotonics

    NASA Astrophysics Data System (ADS)

    Voronine, Dmitri V.; Zhang, Zhenrong; Sokolov, Alexei V.; Scully, Marlan O.

    2018-02-01

    Quantum nano-biophotonics as the science of nanoscale light-matter interactions in biological systems requires developing new spectroscopic tools for addressing the challenges of detecting and disentangling weak congested optical signals. Nanoscale bio-imaging addresses the challenge of the detection of weak resonant signals from a few target biomolecules in the presence of the nonresonant background from many undesired molecules. In addition, the imaging must be performed rapidly to capture the dynamics of biological processes in living cells and tissues. Label-free non-invasive spectroscopic techniques are required to minimize the external perturbation effects on biological systems. Various approaches were developed to satisfy these requirements by increasing the selectivity and sensitivity of biomolecular detection. Coherent anti-Stokes Raman scattering (CARS) and surface-enhanced Raman scattering (SERS) spectroscopies provide many orders of magnitude enhancement of chemically specific Raman signals. Femtosecond adaptive spectroscopic techniques for CARS (FAST CARS) were developed to suppress the nonresonant background and optimize the efficiency of the coherent optical signals. This perspective focuses on the application of these techniques to nanoscale bio-imaging, discussing their advantages and limitations as well as the promising opportunities and challenges of the combined coherence and surface enhancements in surface-enhanced coherent anti-Stokes Raman scattering (SECARS) and tip-enhanced coherent anti-Stokes Raman scattering (TECARS) and the corresponding surface-enhanced FAST CARS techniques. Laser pulse shaping of near-field excitations plays an important role in achieving these goals and increasing the signal enhancement.

  20. A biological inspired fuzzy adaptive window median filter (FAWMF) for enhancing DNA signal processing.

    PubMed

    Ahmad, Muneer; Jung, Low Tan; Bhuiyan, Al-Amin

    2017-10-01

    Digital signal processing techniques commonly employ fixed-length window filters to process the signal contents. DNA signals differ in characteristics from common digital signals since they carry nucleotides as contents. The nucleotides carry genetic code context and exhibit fuzzy behavior due to their special structure and order in the DNA strand. Employing conventional fixed-length window filters for DNA signal processing produces spectral leakage and hence results in signal noise. A biological-context-aware adaptive window filter is required to process DNA signals. This paper introduces a biologically inspired fuzzy adaptive window median filter (FAWMF) which computes the fuzzy membership strength of nucleotides in each window slide and filters nucleotides based on median filtering with a combination of s-shaped and z-shaped filters. Since coding regions exhibit 3-base periodicity caused by an unbalanced nucleotide distribution and a relatively high bias in nucleotide usage, this fundamental characteristic has been exploited in the FAWMF to suppress the signal noise. Along with the adaptive response of the FAWMF, a strong correlation between median nucleotides and the Π-shaped filter was observed, which produced enhanced discrimination between coding and non-coding regions compared with fixed-length conventional window filters. The proposed FAWMF attains a significant enhancement in coding region identification, i.e. 40% to 125%, as compared to other conventional window filters tested over more than 250 benchmarked and randomly taken DNA datasets of different organisms. This study proves that conventional fixed-length window filters applied to DNA signals do not achieve significant results since the nucleotides carry genetic code context. The proposed FAWMF algorithm is adaptive and significantly outperforms them in processing DNA signal contents. Applied to a variety of DNA datasets, the algorithm produced noteworthy discrimination between coding and non-coding regions compared with fixed-length conventional window filters. Copyright © 2017 Elsevier B.V. All rights reserved.
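
    As background to the 3-base periodicity the paper exploits, the sketch below (not the FAWMF itself; the window length, step, and toy sequence are assumptions) maps a DNA string to binary indicator sequences and evaluates the period-3 spectral component in a sliding window, which tends to rise over coding-like regions.

      import numpy as np

      def period3_measure(sequence, window=351, step=10):
          """Map a DNA string to four binary indicator sequences and evaluate the
          period-3 spectral component in a sliding window."""
          seq = sequence.upper()
          indicators = {b: np.array([1.0 if c == b else 0.0 for c in seq]) for b in "ACGT"}
          kernel = np.exp(-2j * np.pi / 3 * np.arange(window))     # DFT bin at 2*pi/3
          centers, scores = [], []
          for start in range(0, len(seq) - window + 1, step):
              s = sum(abs(np.dot(indicators[b][start:start + window], kernel)) ** 2
                      for b in "ACGT")
              centers.append(start + window // 2)
              scores.append(s)
          return np.array(centers), np.array(scores)

      # Toy sequence: random flanks around an exaggerated 3-periodic "coding" core.
      rng = np.random.default_rng(6)
      random_block = lambda length: "".join(rng.choice(list("ACGT"), size=length))
      dna = random_block(1200) + "ATG" * 400 + random_block(1200)

      centers, scores = period3_measure(dna)
      print("period-3 measure peaks near position:", centers[np.argmax(scores)])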

  1. Labview Based ECG Patient Monitoring System for Cardiovascular Patient Using SMTP Technology.

    PubMed

    Singh, Om Prakash; Mekonnen, Dawit; Malarvili, M B

    2015-01-01

    This paper presents the development of a Labview-based ECG patient monitoring system for cardiovascular patients using Simple Mail Transfer Protocol (SMTP) technology. The designed device is divided into three parts. The first part is an ECG amplifier circuit, built using an instrumentation amplifier (AD620) followed by a signal conditioning circuit with an operational amplifier (LM741). Secondly, a DAQ card is used to convert the analog signal into digital form for further processing. The data are then processed in Labview, where digital filtering techniques have been implemented to remove noise from the acquired signal. After processing, an algorithm was developed to calculate the heart rate and to analyze the arrhythmia condition. Finally, SMTP technology has been added to our work to make the device more communicative and a much more cost-effective solution in telemedicine, addressing a key problem in realizing the telediagnosis and monitoring of ECG signals. The technology can also be easily implemented over the already existing Internet.
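
    A minimal sketch of the heart-rate step is shown below (Python/SciPy rather than Labview; the filter band, thresholds, and synthetic ECG are assumptions and this is not the authors' algorithm): band-pass around the QRS energy, detect R-peaks with a threshold and refractory period, and convert the mean R-R interval to beats per minute.

      import numpy as np
      from scipy.signal import butter, filtfilt, find_peaks

      def heart_rate_bpm(ecg, fs):
          """Band-pass the ECG around the QRS energy, find R-peaks with an amplitude
          threshold and a refractory period, and convert the mean R-R interval to bpm."""
          b, a = butter(3, [5 / (fs / 2), 15 / (fs / 2)], btype="band")
          filtered = filtfilt(b, a, ecg)
          peaks, _ = find_peaks(filtered,
                                height=0.5 * filtered.max(),
                                distance=int(0.3 * fs))    # >= 300 ms between beats
          rr_intervals = np.diff(peaks) / fs
          return 60.0 / rr_intervals.mean()

      # Synthetic ECG stand-in: a spike every 0.8 s (75 bpm) plus mild noise.
      fs, duration = 250, 30
      n = fs * duration
      ecg = np.zeros(n)
      ecg[100::int(0.8 * fs)] = 1.0
      ecg += 0.02 * np.random.default_rng(7).standard_normal(n)
      print(f"estimated heart rate: {heart_rate_bpm(ecg, fs):.1f} bpm")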

  2. Labview Based ECG Patient Monitoring System for Cardiovascular Patient Using SMTP Technology

    PubMed Central

    Singh, Om Prakash; Mekonnen, Dawit; Malarvili, M. B.

    2015-01-01

    This paper presents the development of a Labview-based ECG patient monitoring system for cardiovascular patients using Simple Mail Transfer Protocol (SMTP) technology. The designed device is divided into three parts. The first part is an ECG amplifier circuit, built using an instrumentation amplifier (AD620) followed by a signal conditioning circuit with an operational amplifier (LM741). Secondly, a DAQ card is used to convert the analog signal into digital form for further processing. The data are then processed in Labview, where digital filtering techniques have been implemented to remove noise from the acquired signal. After processing, an algorithm was developed to calculate the heart rate and to analyze the arrhythmia condition. Finally, SMTP technology has been added to our work to make the device more communicative and a much more cost-effective solution in telemedicine, addressing a key problem in realizing the telediagnosis and monitoring of ECG signals. The technology can also be easily implemented over the already existing Internet. PMID:27006940

  3. A real-time GNSS-R system based on software-defined radio and graphics processing units

    NASA Astrophysics Data System (ADS)

    Hobiger, Thomas; Amagai, Jun; Aida, Masanori; Narita, Hideki

    2012-04-01

    Reflected signals of the Global Navigation Satellite System (GNSS) from the sea or land surface can be utilized to deduce and monitor physical and geophysical parameters of the reflecting area. Unlike most other remote sensing techniques, GNSS-Reflectometry (GNSS-R) operates as a passive radar that takes advantage of the increasing number of navigation satellites that broadcast their L-band signals. To date, most GNSS-R receiver architectures have been based on dedicated hardware solutions. Software-defined radio (SDR) technology has advanced in recent years and enables signal processing in real time, which makes it an ideal candidate for the realization of a flexible GNSS-R system. Additionally, modern commodity graphics cards, which offer massive parallel computing performance, allow the whole signal processing chain to be handled without burdening the PC's CPU. Thus, this paper describes a GNSS-R system which has been developed on the principles of software-defined radio supported by General Purpose Graphics Processing Units (GPGPUs), and presents results from initial field tests which confirm the anticipated capability of the system.

  4. Optimal Sensor Management and Signal Processing for New EMI Systems

    DTIC Science & Technology

    2010-09-01

    adaptive techniques that would improve the speed of data collection and increase the mobility of a TEMTADS system. Although an active learning technique...data, SIG has simulated the active selection based on the data already collected at Camp SLO. In this setup, the active learning approach was constrained...to work only on a 5x5 grid (corresponding to twenty five transmitters and co-located receivers). The first technique assumes that active learning will

  5. Performance Analysis of a Hardware Implemented Complex Signal Kurtosis Radio-Frequency Interference Detector

    NASA Technical Reports Server (NTRS)

    Schoenwald, Adam J.; Bradley, Damon C.; Mohammed, Priscilla N.; Piepmeier, Jeffrey R.; Wong, Mark

    2016-01-01

    Radio-frequency interference (RFI) is a known problem for passive remote sensing, as evidenced in the L-band radiometers SMOS, Aquarius and, more recently, SMAP. Various algorithms have been developed and implemented on SMAP to improve science measurements. This was achieved by the use of a digital microwave radiometer. RFI mitigation becomes more challenging for microwave radiometers operating at higher frequencies in shared allocations. At higher frequencies, larger bandwidths are also desirable for lower measurement noise, further adding to processing challenges. This work focuses on finding improved RFI mitigation techniques that will be effective at additional frequencies and at higher bandwidths. To aid the development and testing of applicable detection and mitigation techniques, a wide-band RFI algorithm testing environment has been developed using the Reconfigurable Open Architecture Computing Hardware System (ROACH) built by the Collaboration for Astronomy Signal Processing and Electronics Research (CASPER) Group. The testing environment also includes various test equipment used to reproduce typical signals that a radiometer may see, including those with and without RFI. The testing environment permits quick evaluation of RFI mitigation algorithms as well as demonstration that they are implementable in hardware. The algorithm implemented is a complex signal kurtosis detector, which was modeled and simulated. The complex signal kurtosis detector showed improved performance over the real kurtosis detector under certain conditions. The real kurtosis is implemented on SMAP at 24 MHz bandwidth. The complex signal kurtosis algorithm was then implemented in hardware at 200 MHz bandwidth using the ROACH. In this work, the performance of the complex signal kurtosis and the real signal kurtosis are compared. Performance evaluations and comparisons in both simulation and experimental hardware implementations were done with the use of receiver operating characteristic (ROC) curves.
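
    The statistic itself is compact; the sketch below (a simulation-only illustration with assumed block sizes, RFI waveforms, and threshold, not the ROACH firmware) computes the complex-signal kurtosis E[|x|^4]/E[|x|^2]^2, which is 2 for RFI-free circular Gaussian noise, and flags blocks that deviate from that value.

      import numpy as np

      def complex_kurtosis(block):
          """Kurtosis of complex samples, E[|x|^4] / (E[|x|^2])^2.
          Circular complex Gaussian noise gives 2; CW-like RFI pulls the value
          below 2, while pulsed RFI pushes it above 2."""
          p = np.abs(block) ** 2
          return np.mean(p ** 2) / np.mean(p) ** 2

      rng = np.random.default_rng(8)
      n_blocks, block_len = 200, 4096
      data = (rng.standard_normal((n_blocks, block_len))
              + 1j * rng.standard_normal((n_blocks, block_len))) / np.sqrt(2)

      # Inject hypothetical RFI: a CW tone in blocks 50-59, short pulses in 120-124.
      t = np.arange(block_len)
      data[50:60] += 1.0 * np.exp(2j * np.pi * 0.11 * t)
      data[120:125, :64] += 5.0

      k = np.array([complex_kurtosis(b) for b in data])
      flagged = np.nonzero(np.abs(k - 2.0) > 0.12)[0]    # ~4-sigma for 4096 samples
      print("blocks flagged for RFI:", flagged)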

  6. Nonuniform sampling and non-Fourier signal processing methods in multidimensional NMR

    PubMed Central

    Mobli, Mehdi; Hoch, Jeffrey C.

    2017-01-01

    Beginning with the introduction of Fourier Transform NMR by Ernst and Anderson in 1966, time domain measurement of the impulse response (the free induction decay, FID) consisted of sampling the signal at a series of discrete intervals. For compatibility with the discrete Fourier transform (DFT), the intervals are kept uniform, and the Nyquist theorem dictates the largest value of the interval sufficient to avoid aliasing. With the proposal by Jeener of parametric sampling along an indirect time dimension, extension to multidimensional experiments employed the same sampling techniques used in one dimension, similarly subject to the Nyquist condition and suitable for processing via the discrete Fourier transform. The challenges of obtaining high-resolution spectral estimates from short data records using the DFT were already well understood, however. Despite techniques such as linear prediction extrapolation, the achievable resolution in the indirect dimensions is limited by practical constraints on measuring time. The advent of non-Fourier methods of spectrum analysis capable of processing nonuniformly sampled data has led to an explosion in the development of novel sampling strategies that avoid the limits on resolution and measurement time imposed by uniform sampling. The first part of this review discusses the many approaches to data sampling in multidimensional NMR, the second part highlights commonly used methods for signal processing of such data, and the review concludes with a discussion of other approaches to speeding up data acquisition in NMR. PMID:25456315

  7. Mortar and artillery variants classification by exploiting characteristics of the acoustic signature

    NASA Astrophysics Data System (ADS)

    Hohil, Myron E.; Grasing, David; Desai, Sachi; Morcos, Amir

    2007-10-01

    Feature extraction methods based on the discrete wavelet transform and multiresolution analysis facilitate the development of a robust classification algorithm that reliably discriminates mortar and artillery variants via acoustic signals produced during the launch/impact events. Acoustic sensors are utilized to exploit the sound waveform generated by the blast for the identification of mortar and artillery variants. Distinct characteristics arise among the different mortar variants because varying HE mortar payloads and related charges emphasize concussive and shrapnel effects upon impact, employing explosions of varying magnitude. The different mortar variants are characterized by variations in the resulting waveform of the event. The waveform holds various harmonic properties distinct to a given mortar/artillery variant that, through advanced signal processing techniques, can be employed to classify a given set. The DWT and other readily available signal processing techniques will be used to extract the predominant components of these characteristics from the acoustic signatures at ranges exceeding 2 km. Exploiting these techniques will help develop a feature set highly independent of range, providing discrimination based on acoustic elements of the blast wave. Highly reliable discrimination will be achieved with a feed-forward neural network classifier trained on a feature space derived from the distribution of wavelet coefficients, the frequency spectrum, and higher-frequency details found within different levels of the multiresolution decomposition. The process described herein extends current technologies, which emphasize multimodal sensor fusion suites to provide such situational awareness. A twofold problem of energy consumption and line of sight arises with multimodal sensor suites. The process described within will exploit the acoustic properties of the event to provide variant classification as added situational awareness to the soldier.

  8. Digital pulse shape discrimination.

    PubMed

    Miller, L F; Preston, J; Pozzi, S; Flaska, M; Neal, J

    2007-01-01

    Pulse-shape discrimination (PSD) has been utilised for about 40 years as a method to obtain estimates for dose in mixed neutron and photon fields. Digitizers that operate at close to GHz rates are currently available at a reasonable cost, and they can be used to directly sample signals from photomultiplier tubes. This permits one to perform digital PSD rather than the traditional, and well-established, analogue techniques. One issue that complicates PSD for neutrons in mixed fields is that the light output characteristics of typical scintillators available for PSD, such as BC501A, vary as a function of energy deposited in the detector. This behaviour is more easily accommodated with digital processing of signals than with analogue signal processing. Results illustrate the effectiveness of digital PSD.
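
    The digital charge-comparison flavor of PSD can be sketched in a few lines (synthetic two-component pulses with assumed decay constants and gate position; real BC501A pulse shapes and calibration are not modeled): the ratio of tail charge to total charge separates neutron-like from gamma-like events.

      import numpy as np

      def tail_to_total(pulse, gate_start=20):
          """Charge-comparison PSD figure: ratio of the charge in the delayed (tail)
          gate to the total integrated charge."""
          total = pulse.sum()
          return pulse[gate_start:].sum() / total if total > 0 else 0.0

      def make_pulse(slow_fraction, n=200, rng=None):
          """Toy scintillation pulse with fast and slow exponential components."""
          t = np.arange(n, dtype=float)
          shape = (1 - slow_fraction) * np.exp(-t / 5.0) + slow_fraction * np.exp(-t / 60.0)
          return shape + 0.01 * rng.standard_normal(n)

      rng = np.random.default_rng(9)
      gammas = np.array([tail_to_total(make_pulse(0.05, rng=rng)) for _ in range(500)])
      neutrons = np.array([tail_to_total(make_pulse(0.30, rng=rng)) for _ in range(500)])
      print(f"gamma-like tail fraction:   {gammas.mean():.3f} +/- {gammas.std():.3f}")
      print(f"neutron-like tail fraction: {neutrons.mean():.3f} +/- {neutrons.std():.3f}")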

  9. Superresolution fluorescence imaging by pump-probe setup using repetitive stimulated transition process

    NASA Astrophysics Data System (ADS)

    Dake, Fumihiro; Fukutake, Naoki; Hayashi, Seri; Taki, Yusuke

    2018-02-01

    We proposed superresolution nonlinear fluorescence microscopy with a pump-probe setup that utilizes repetitive stimulated absorption and stimulated emission caused by two-color laser beams. The resulting nonlinear fluorescence that undergoes such a repetitive stimulated transition is detectable as a signal via the lock-in technique. As the nonlinear fluorescence signal is produced by the multi-ply combination of incident beams, the optical resolution can be improved. A theoretical model of the nonlinear optical process is provided using rate equations, which offers a phenomenological interpretation of the nonlinear fluorescence and an estimation of the signal properties. The proposed method is demonstrated to offer scalable optical resolution. A theoretical resolution and bead image are also estimated to validate the experimental results.

  10. Resonant fiber optic gyro based on a sinusoidal wave modulation and square wave demodulation technique.

    PubMed

    Wang, Linglan; Yan, Yuchao; Ma, Huilian; Jin, Zhonghe

    2016-04-20

    New developments are made in the resonant fiber optic gyro (RFOG), which is an optical sensor for the measurement of rotation rate. The digital signal processing system based on the phase modulation technique is capable of detecting the weak frequency difference induced by the Sagnac effect and suppressing the reciprocal noise in the circuit, which determines the detection sensitivity of the RFOG. A new technique based on the sinusoidal wave modulation and square wave demodulation is implemented, and the demodulation curve of the system is simulated and measured. Compared with the past technique using sinusoidal modulation and demodulation, it increases the slope of the demodulation curve by a factor of 1.56, improves the spectrum efficiency of the modulated signal, and reduces the occupancy of the field-programmable gate array resource. On the basis of this new phase modulation technique, the loop is successfully locked and achieves a short-term bias stability of 1.08°/h, which is improved by a factor of 1.47.

  11. Identification and quantitation of semi-crystalline microplastics using image analysis and differential scanning calorimetry.

    PubMed

    Rodríguez Chialanza, Mauricio; Sierra, Ignacio; Pérez Parada, Andrés; Fornaro, Laura

    2018-06-01

    There are several techniques used to analyze microplastics. These are often based on a combination of visual and spectroscopic techniques. Here we introduce an alternative workflow for identification and mass quantitation through a combination of optical microscopy with image analysis (IA) and differential scanning calorimetry (DSC). We studied four synthetic polymers of environmental concern: low- and high-density polyethylene (LDPE and HDPE, respectively), polypropylene (PP), and polyethylene terephthalate (PET). Selected experiments were conducted to investigate (i) particle characterization and counting procedures based on image analysis with open-source software, (ii) chemical identification of microplastics based on DSC signal processing, (iii) the dependence of the DSC signal on particle size, and (iv) quantitation of microplastics mass based on the DSC signal. We describe the potential and limitations of these techniques to increase reliability for microplastic analysis. Particle size was shown to have a particular influence on the qualitative and quantitative performance of the DSC signals. Both identification (based on characteristic onset temperature) and mass quantitation (based on heat flow) were shown to be affected by particle size. As a result, proper sample treatment, which includes sieving of suspended particles, is particularly required for this analytical approach.

  12. Photonic Materials and Devices for RF (mmW) Sensing and Imaging

    DTIC Science & Technology

    2012-12-31

    wave encoding thereby eliminating the need for bulky LO distribution cables. Also, optical processing techniques can be utilized to provide simple... optical powers, can be close to unity and low -noise photodetectors make the detection of exceedingly low power millimeter-waves practical. In... optically -filtering the modulated signal to pass only a single sideband and detecting the resultant optical signal with a low -noise photodetector we have

  13. Rapid sonic characterisation of sewer change and obstructions.

    PubMed

    Podd, F J; Ali, M T B; Horoshenkov, K V; Wood, A S; Tait, S J; Boot, J C; Long, R; Saul, A J

    2007-01-01

    This paper reports on the development of a low-cost, rapidly deployable sensor for surveying live sewers for blockages and structural failures. The anticipated cost is an order of magnitude lower than that of current techniques. The technology is based on acoustic normal mode decomposition. The instrument emits short coded acoustic signals which are reflected from any sewer wall defect. The acoustic signals can be short Gaussian pulses or longer sinusoidal sweeps and pseudo-random noise. The processing algorithms used on the reflected signal can predict the extent and geometry of the pipe deformation, and the locations and approximate size of common blockages. The effect of the water level on the frequency of the fundamental mode has also been investigated. It is shown that the technique can be adapted to work reliably in relatively large 600 mm diameter sewer pipes.

  14. Novel Oversampling Technique for Improving Signal-to-Quantization Noise Ratio on Accelerometer-Based Smart Jerk Sensors in CNC Applications.

    PubMed

    Rangel-Magdaleno, Jose J; Romero-Troncoso, Rene J; Osornio-Rios, Roque A; Cabal-Yepez, Eduardo

    2009-01-01

    Monitoring of jerk, defined as the first derivative of acceleration, has become a major issue in computer numerically controlled (CNC) machines. Several works highlight the necessity of measuring jerk in a reliable way for improving production processes. Nowadays, the computation of jerk is done by finite differences of the acceleration signal, computed at the Nyquist rate, which leads to a low signal-to-quantization noise ratio (SQNR) during the estimation. The novelty of this work is the development of a smart sensor for jerk monitoring from a standard accelerometer, which has improved SQNR. The proposal is based on oversampling techniques that give a better estimation of jerk than that produced by a Nyquist-rate differentiator. Simulations and experimental results are presented to show the overall methodology performance.
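
    The premise can be illustrated numerically (a hedged sketch with assumed sampling rates, quantizer step, and motion profile, not the authors' smart-sensor implementation): differentiating an oversampled, quantized acceleration signal and then low-pass decimating leaves far less quantization noise in the jerk estimate than finite differences taken at the base rate.

      import numpy as np
      from scipy.signal import decimate

      fs_base, oversample = 128, 32                    # assumed rates
      fs_hi = fs_base * oversample
      q = 0.01                                         # assumed quantizer step (1 LSB)
      rng = np.random.default_rng(12)
      t_hi = np.arange(0.0, 4.0, 1.0 / fs_hi)
      accel = np.sin(2 * np.pi * 5.0 * t_hi) + 0.005 * rng.standard_normal(len(t_hi))

      def quantize(x, step=q):
          return step * np.round(x / step)

      def base_rate_jerk(a_hi, quantized):
          a = a_hi[::oversample]                       # samples at the base rate
          return np.diff(quantize(a) if quantized else a) * fs_base

      def oversampled_jerk(a_hi, quantized):
          d = np.diff(quantize(a_hi) if quantized else a_hi) * fs_hi
          return decimate(d, oversample, ftype="fir", zero_phase=True)

      # Quantization-induced error of each path (same path with and without quantizer).
      err_base = base_rate_jerk(accel, True) - base_rate_jerk(accel, False)
      err_over = oversampled_jerk(accel, True) - oversampled_jerk(accel, False)
      print(f"quantization noise RMS, base-rate jerk:   {err_base.std():.3f}")
      print(f"quantization noise RMS, oversampled jerk: {err_over.std():.3f}")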

  15. Network Anomaly Detection Based on Wavelet Analysis

    NASA Astrophysics Data System (ADS)

    Lu, Wei; Ghorbani, Ali A.

    2008-12-01

    Signal processing techniques have been applied recently for analyzing and detecting network anomalies due to their potential to find novel or unknown intrusions. In this paper, we propose a new network signal modelling technique for detecting network anomalies, combining the wavelet approximation and system identification theory. In order to characterize network traffic behaviors, we present fifteen features and use them as the input signals in our system. We then evaluate our approach with the 1999 DARPA intrusion detection dataset and conduct a comprehensive analysis of the intrusions in the dataset. Evaluation results show that the approach achieves high detection rates in terms of both attack instances and attack types. Furthermore, we conduct a full day's evaluation in a real large-scale WiFi ISP network where five attack types are successfully detected from over 30 million flows.

  16. Nanosensors based on functionalized nanoparticles and surface enhanced raman scattering

    DOEpatents

    Talley, Chad E.; Huser, Thomas R.; Hollars, Christopher W.; Lane, Stephen M.; Satcher, Jr., Joe H.; Hart, Bradley R.; Laurence, Ted A.

    2007-11-27

    Surface-Enhanced Raman Spectroscopy (SERS) is a vibrational spectroscopic technique that utilizes metal surfaces to provide signal enhancements of several orders of magnitude. When molecules of interest are attached to designed metal nanoparticles, a SERS signal is attainable with single molecule detection limits. This provides an ultrasensitive means of detecting the presence of molecules. By using selective chemistries, metal nanoparticles can be functionalized to provide a unique signal upon analyte binding. Moreover, by using measurement techniques such as ratiometric received SERS spectra, such metal nanoparticles can be used to monitor dynamic processes in addition to static binding events. Accordingly, such nanoparticles can be used as nanosensors for a wide range of chemicals in fluid, gaseous and solid form, environmental sensors for pH, ion concentration, temperature, etc., and biological sensors for proteins, DNA, RNA, etc.

  17. Design of a portable noninvasive photoacoustic glucose monitoring system integrated laser diode excitation with annular array detection

    NASA Astrophysics Data System (ADS)

    Zeng, Lvming; Liu, Guodong; Yang, Diwu; Ren, Zhong; Huang, Zhen

    2008-12-01

    A near-infrared photoacoustic glucose monitoring system, which integrates dual-wavelength pulsed laser diode excitation with an eight-element planar annular array detection technique, is designed and fabricated in this study. It is noninvasive, inexpensive, and portable, and offers accurate localization and a high signal-to-noise ratio. In the system, the exciting source is based on two laser diodes with wavelengths of 905 nm and 1550 nm, respectively, with optical pulse energies of 20 μJ and 6 μJ. The laser beam is optically focused and jointly projected to a confocal point with a diameter of approximately 0.7 mm. A 7.5 MHz 8-element annular array transducer with a hollow structure is machined to capture the photoacoustic signal in backward mode. The captured signals excited from blood glucose are processed with a synthetic focusing algorithm to obtain a high signal-to-noise ratio and accurate location over a range of axial detection depths. The custom-made transducer with equal-area elements is coaxially collimated with the laser source to improve the photoacoustic excite/receive efficiency. In the paper, we introduce the photoacoustic theory, receive/process technique, and design method of the portable noninvasive photoacoustic glucose monitoring system, which can potentially be developed as a powerful diagnosis and treatment tool for diabetes mellitus.

  18. Smart Sound Processing for Defect Sizing in Pipelines Using EMAT Actuator Based Multi-Frequency Lamb Waves

    PubMed Central

    García-Gómez, Joaquín; Rosa-Zurera, Manuel; Romero-Camacho, Antonio; Jiménez-Garrido, Jesús Antonio; García-Benavides, Víctor

    2018-01-01

    Pipeline inspection is a topic of particular interest to companies. Especially important is defect sizing, which allows them to avoid subsequent costly repairs to their equipment. A solution for this issue is using ultrasonic waves sensed through Electro-Magnetic Acoustic Transducer (EMAT) actuators. The main advantage of this technology is the absence of the need to have direct contact with the surface of the material under investigation, which must be conductive. Specifically interesting is meander-line-coil based Lamb wave generation, since the directivity of the waves allows a study based on the circumferential wrap-around received signal. However, the variety of defect sizes changes the behavior of the signal when it passes through the pipeline. Because of that, it is necessary to apply advanced techniques based on Smart Sound Processing (SSP). These methods involve extracting useful information from the signals sensed with EMAT at different frequencies to obtain nonlinear estimations of the depth of the defect, and to select the features that best estimate the profile of the pipeline. The proposed technique has been tested using both simulated and real signals in steel pipelines, obtaining good results in terms of Root Mean Square Error (RMSE). PMID:29518927

  19. Robust frequency diversity based algorithm for clutter noise reduction of ultrasonic signals using multiple sub-spectrum phase coherence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gongzhang, R.; Xiao, B.; Lardner, T.

    2014-02-18

    This paper presents a robust frequency-diversity-based algorithm for clutter reduction in ultrasonic A-scan waveforms. The performance of conventional spectral-temporal techniques like Split Spectrum Processing (SSP) is highly dependent on parameter selection, especially when the signal-to-noise ratio (SNR) is low. Although spatial beamforming offers noise reduction with less sensitivity to parameter variation, phased array techniques are not always available. The proposed algorithm first selects an ascending series of frequency bands. A signal is reconstructed for each selected band, in which a defect is taken to be present where all frequency components have a uniform sign. Combining all reconstructed signals through averaging gives a probability profile of potential defect positions. To facilitate data collection and validate the proposed algorithm, Full Matrix Capture is applied to austenitic steel and high nickel alloy (HNA) samples with 5 MHz transducer arrays. When processing A-scan signals with unrefined parameters, the proposed algorithm enhances SNR by 20 dB for both samples and, consequently, defects are more visible in B-scan images created from the large number of A-scan traces. Importantly, the proposed algorithm is considered robust, while SSP is shown to fail on the austenitic steel data and achieves less SNR enhancement on the HNA data.
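
    One simplified reading of the band-reconstruction-and-averaging step is sketched below (assumed sub-bands, sampling rate, and synthetic A-scan; this is an interpretation for illustration, not the published algorithm): the A-scan is reconstructed in several sub-bands, samples where every sub-band agrees in sign are kept, and the reconstructions are averaged into a defect-likelihood profile.

      import numpy as np

      def sign_coherence_profile(ascan, fs, bands):
          """Reconstruct the A-scan in several frequency bands and keep energy only
          where every band agrees in sign; average the reconstructions."""
          spectrum = np.fft.rfft(ascan)
          freqs = np.fft.rfftfreq(len(ascan), 1.0 / fs)
          subs = []
          for f_lo, f_hi in bands:
              masked = np.where((freqs >= f_lo) & (freqs < f_hi), spectrum, 0.0)
              subs.append(np.fft.irfft(masked, len(ascan)))
          subs = np.array(subs)
          coherent = np.all(subs > 0, axis=0) | np.all(subs < 0, axis=0)
          return coherent * subs.mean(axis=0)

      # Synthetic A-scan: a broadband defect echo at 20 us buried in clutter.
      rng = np.random.default_rng(10)
      fs, n = 100e6, 4000
      t = np.arange(n) / fs
      echo = np.exp(-((t - 20e-6) / 0.3e-6) ** 2) * np.cos(2 * np.pi * 5e6 * (t - 20e-6))
      ascan = echo + 0.3 * rng.standard_normal(n)

      bands = [(3e6 + 0.5e6 * k, 5e6 + 0.5e6 * k) for k in range(5)]   # ascending bands
      profile = sign_coherence_profile(ascan, fs, bands)
      print("profile peak at t =", round(float(t[np.argmax(np.abs(profile))] * 1e6), 2), "us")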

  20. Single baseline GLONASS observations with VLBI: data processing and first results

    NASA Astrophysics Data System (ADS)

    Tornatore, V.; Haas, R.; Duev, D.; Pogrebenko, S.; Casey, S.; Molera Calvés, G.; Keimpema, A.

    2011-07-01

    Several tests to observe signals transmitted by GLONASS (GLObal NAvigation Satellite System) satellites have been performed using the geodetic VLBI (Very Long Baseline Interferometry) technique. The radio telescopes involved in these experiments were Medicina (Italy) and Onsala (Sweden), both equipped with L-band receivers. Observations at the stations were performed using the standard Mark4 VLBI data acquisition rack and Mark5A disk-based recorders. The goals of the observations were to develop and test the scheduling, signal acquisition and processing routines to verify the full tracking pipeline, foreseeing the cross-correlation of the recorded data on the baseline Onsala-Medicina. The natural radio source 3c286 was used as a calibrator before the starting of the satellite observation sessions. Delay models, including the tropospheric and ionospheric corrections, which are consistent for both far- and near-field sources are under development. Correlation of the calibrator signal has been performed using the DiFX software, while the satellite signals have been processed using the narrow band approach with the Metsaehovi software and analysed with a near-field delay model. Delay models both for the calibrator signals and the satellites signals, using the same geometrical, tropospheric and ionospheric models, are under investigation to make a correlation of the satellite signals possible.

  1. Development of signal processing algorithms for ultrasonic detection of coal seam interfaces

    NASA Technical Reports Server (NTRS)

    Purcell, D. D.; Ben-Bassat, M.

    1976-01-01

    A pattern recognition system is presented for determining the thickness of coal remaining on the roof and floor of a coal seam. The system was developed to recognize pulse-echo signals that are generated by an acoustical transducer and reflected from the coal seam interface. The flexibility of the system, however, should enable it to identify pulse-echo signals generated by radar or other techniques. The main difference would be the specific features extracted from the recorded data as a basis for pattern recognition.

  2. Impulse Response Measurements Over Space-Earth Paths Using the GPS Coarse/Acquisition Codes

    NASA Technical Reports Server (NTRS)

    Lemmon, J. J.; Papazian, P. B.

    1995-01-01

    The impulse responses of radio transmission channels over space-earth paths were measured using the coarse/acquisition code signals from the Global Positioning System of satellites. The data acquisition system and signal processing techniques used to develop the impulse responses are described. Examples of impulse response measurements are presented. The results indicate that this measurement approach enables detection of multipath signals that are 20 dB or more below the power of the direct arrival. Channel characteristics that could be investigated with additional measurements and analyses are discussed.

  3. Smith predictor with sliding mode control for processes with large dead times

    NASA Astrophysics Data System (ADS)

    Mehta, Utkal; Kaya, İbrahim

    2017-11-01

    The paper discusses the Smith Predictor scheme with Sliding Mode Controller (SP-SMC) for processes with large dead times. This technique gives improved load-disturbance rejection with optimum input control signal variations. A power rate reaching law is incorporated in the sporadic part of the sliding mode control such that the overall performance improves meaningfully. The proposed scheme obtains parameter values by satisfying a new performance index based on a bi-objective constraint. In a simulation study, the efficiency of the method is evaluated for robustness and transient performance against reported techniques.

  4. Wavelet-Based Signal and Image Processing for Target Recognition

    NASA Astrophysics Data System (ADS)

    Sherlock, Barry G.

    2002-11-01

    The PI visited NSWC Dahlgren, VA, for six weeks in May-June 2002 and collaborated with scientists in the G33 TEAMS facility, and with Marilyn Rudzinsky of T44 Technology and Photonic Systems Branch. During this visit the PI also presented six educational seminars to NSWC scientists on various aspects of signal processing. Several items from the grant proposal were completed, including (1) wavelet-based algorithms for interpolation of 1-d signals and 2-d images; (2) Discrete Wavelet Transform domain based algorithms for filtering of image data; (3) wavelet-based smoothing of image sequence data originally obtained for the CRITTIR (Clutter Rejection Involving Temporal Techniques in the Infra-Red) project. The PI visited the University of Stellenbosch, South Africa to collaborate with colleagues Prof. B.M. Herbst and Prof. J. du Preez on the use of wavelet image processing in conjunction with pattern recognition techniques. The University of Stellenbosch has offered the PI partial funding to support a sabbatical visit in Fall 2003, the primary purpose of which is to enable the PI to develop and enhance his expertise in Pattern Recognition. During the first year, the grant supported publication of 3 referred papers, presentation of 9 seminars and an intensive two-day course on wavelet theory. The grant supported the work of two students who functioned as research assistants.

  5. Timing Recovery Strategies in Magnetic Recording Systems

    NASA Astrophysics Data System (ADS)

    Kovintavewat, Piya

    At some point in a digital communications receiver, the received analog signal must be sampled. Good performance requires that these samples be taken at the right times. The process of synchronizing the sampler with the received analog waveform is known as timing recovery. Conventional timing recovery techniques perform well only when operating at high signal-to-noise ratio (SNR). Nonetheless, iterative error-control codes allow reliable communication at very low SNR, where conventional techniques fail. This paper provides a detailed review on the timing recovery strategies based on per-survivor processing (PSP) that are capable of working at low SNR. We also investigate their performance in magnetic recording systems because magnetic recording is a primary method of storage for a variety of applications, including desktop, mobile, and server systems. Results indicate that the timing recovery strategies based on PSP perform better than the conventional ones and are thus worth being employed in magnetic recording systems.

  6. Speckle reduction in echocardiography by temporal compounding and anisotropic diffusion filtering

    NASA Astrophysics Data System (ADS)

    Giraldo-Guzmán, Jader; Porto-Solano, Oscar; Cadena-Bonfanti, Alberto; Contreras-Ortiz, Sonia H.

    2015-01-01

    Echocardiography is a medical imaging technique based on ultrasound signals that is used to evaluate heart anatomy and physiology. Echocardiographic images are affected by speckle, a type of multiplicative noise that obscures details of the structures and reduces overall image quality. This paper presents an approach to enhance echocardiographic images using two processing techniques: temporal compounding and anisotropic diffusion filtering. Twenty echocardiographic videos containing one or three cardiac cycles were used to test the algorithms. Two images from each cycle were aligned in space and averaged to obtain the compound images. These images were then processed with anisotropic diffusion filters to further improve their quality. The resulting images were evaluated using quality metrics and visual assessment by two medical doctors. The average total improvement in signal-to-noise ratio was up to 100.29% for videos with three cycles, and up to 32.57% for videos with one cycle.
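
    A minimal sketch of the two processing stages is given below: temporal compounding by averaging already-aligned frames, followed by Perona-Malik anisotropic diffusion. The toy image, speckle model, and diffusion parameters are assumptions for illustration; the registration step and the clinical evaluation are omitted.

    ```python
    import numpy as np

    # Temporal compounding (averaging aligned frames) followed by Perona-Malik
    # anisotropic diffusion on a toy speckled image. Parameters are illustrative.
    def anisotropic_diffusion(img, n_iter=20, kappa=1.0, step=0.15):
        img = img.astype(float).copy()
        for _ in range(n_iter):
            # Differences to the four nearest neighbours.
            dn = np.roll(img, -1, axis=0) - img
            ds = np.roll(img, 1, axis=0) - img
            de = np.roll(img, -1, axis=1) - img
            dw = np.roll(img, 1, axis=1) - img
            # Conduction coefficients: small across strong edges, large in flat areas.
            cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
            ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
            img += step * (cn * dn + cs * ds + ce * de + cw * dw)
        return img

    rng = np.random.default_rng(3)
    clean = np.zeros((128, 128))
    clean[40:90, 30:100] = 1.0                               # toy "structure"
    frames = [clean * rng.rayleigh(scale=0.8, size=clean.shape) for _ in range(2)]

    compound = np.mean(frames, axis=0)          # temporal compounding of aligned frames
    filtered = anisotropic_diffusion(compound)  # anisotropic diffusion filtering
    print("noise std inside structure: single %.3f, compounded %.3f, filtered %.3f"
          % (frames[0][45:85, 35:95].std(),
             compound[45:85, 35:95].std(),
             filtered[45:85, 35:95].std()))
    ```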

  7. High Accuracy Evaluation of the Finite Fourier Transform Using Sampled Data

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    1997-01-01

    Many system identification and signal processing procedures can be done advantageously in the frequency domain. A required preliminary step for this approach is the transformation of sampled time domain data into the frequency domain. The analytical tool used for this transformation is the finite Fourier transform. Inaccuracy in the transformation can degrade system identification and signal processing results. This work presents a method for evaluating the finite Fourier transform using cubic interpolation of sampled time domain data for high accuracy, and the chirp z-transform for arbitrary frequency resolution. The accuracy of the technique is demonstrated in example cases where the transformation can be evaluated analytically. Arbitrary frequency resolution is shown to be important for capturing details of the data in the frequency domain. The technique is demonstrated using flight test data from a longitudinal maneuver of the F-18 High Alpha Research Vehicle.
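
    The idea of arbitrary frequency resolution can be sketched with a simple rectangular-rule evaluation of the finite Fourier transform on a user-chosen frequency grid, as below. The paper's higher-accuracy cubic interpolation and chirp z-transform implementation are not reproduced here, and the test signal is an assumption.

    ```python
    import numpy as np

    # Rectangular-rule sketch of the finite Fourier transform
    #   X(f) = integral of x(t) * exp(-j*2*pi*f*t) dt
    # evaluated on an arbitrary, finely spaced frequency grid, independent of the
    # FFT bin spacing 1/T. Only the idea of arbitrary resolution is illustrated.
    dt = 0.02                                   # 50 Hz sampling
    t = np.arange(0, 10, dt)
    x = np.sin(2 * np.pi * 1.3 * t)             # test signal: 1.3 Hz tone

    freqs = np.arange(0.0, 5.0, 0.01)           # 0.01 Hz grid, chosen by the user
    kernel = np.exp(-2j * np.pi * np.outer(freqs, t))
    X = kernel @ x * dt                         # rectangular-rule finite Fourier transform

    peak = freqs[np.argmax(np.abs(X))]
    print("spectral peak located at %.2f Hz (true 1.3 Hz)" % peak)
    ```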

  8. SoundView: an auditory guidance system based on environment understanding for the visually impaired people.

    PubMed

    Nie, Min; Ren, Jie; Li, Zhengjun; Niu, Jinhai; Qiu, Yihong; Zhu, Yisheng; Tong, Shanbao

    2009-01-01

    Without visual information, blind people face many hardships in shopping, reading, and finding objects. Therefore, we developed a portable auditory guidance system, called SoundView, for visually impaired people. This prototype system consists of a mini-CCD camera, a digital signal processing unit and an earphone, working with built-in customizable auditory coding algorithms. Employing environment understanding techniques, SoundView processes the images from the camera and detects objects tagged with barcodes. The recognized objects in the environment are then encoded into stereo speech signals delivered to the user through an earphone. The user is able to recognize the type, motion state and location of objects of interest with the help of SoundView. Compared with other visual assistive techniques, SoundView is object-oriented and has the advantages of low cost, small size, light weight, low power consumption and easy customization.

  9. Sensor Fusion Techniques for Phased-Array Eddy Current and Phased-Array Ultrasound Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arrowood, Lloyd F.

    Sensor (or Data) fusion is the process of integrating multiple data sources to produce more consistent, accurate and comprehensive information than is provided by a single data source. Sensor fusion may also be used to combine multiple signals from a single modality to improve the performance of a particular inspection technique. Industrial nondestructive testing may utilize multiple sensors to acquire inspection data depending upon the object under inspection and the anticipated types of defects that can be identified. Sensor fusion can be performed at various levels of signal abstraction with each having its strengths and weaknesses. A multimodal data fusion strategy first proposed by Heideklang and Shokouhi that combines spatially scattered detection locations to improve detection performance of surface-breaking and near-surface cracks in ferromagnetic metals is shown using a surface inspection example and is then extended for volumetric inspections. Utilizing data acquired from an Olympus Omniscan MX2 from both phased array eddy current and ultrasound probes on test phantoms, single and multilevel fusion techniques are employed to integrate signals from the two modalities. Preliminary results demonstrate how confidence in defect identification and interpretation benefit from sensor fusion techniques. Lastly, techniques for integrating data into radiographic and volumetric imagery from computed tomography are described and results are presented.
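
    A hypothetical decision-level fusion sketch is given below: detection locations reported by the two modalities are converted into smooth confidence maps and averaged, so that indications supported by both probes survive a threshold while single-modality false calls are suppressed. This only illustrates the general idea; it is not the Heideklang-Shokouhi algorithm or the multilevel fusion used in this work, and the detection coordinates are invented.

    ```python
    import numpy as np

    # Toy decision-level fusion of detection locations from two modalities
    # (stand-ins for phased-array eddy current and phased-array ultrasound hits).
    def confidence_map(points, shape, sigma=2.0):
        yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
        conf = np.zeros(shape)
        for (py, px) in points:
            conf += np.exp(-((yy - py) ** 2 + (xx - px) ** 2) / (2 * sigma ** 2))
        return conf / conf.max()

    shape = (64, 64)
    eddy_current_hits = [(20, 20), (40, 45), (10, 55)]   # (row, col) detections
    ultrasound_hits   = [(21, 19), (41, 44), (50, 12)]   # same defects plus a false call

    fused = 0.5 * (confidence_map(eddy_current_hits, shape)
                   + confidence_map(ultrasound_hits, shape))
    detections = np.argwhere(fused > 0.6)                # keep mutually supported regions
    print("%d pixels exceed the fusion threshold (only defects seen by both probes)"
          % len(detections))
    ```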

  10. High resolution beamforming on large aperture vertical line arrays: Processing synthetic data

    NASA Astrophysics Data System (ADS)

    Tran, Jean-Marie Q.; Hodgkiss, William S.

    1990-09-01

    This technical memorandum studies the beamforming of large aperture line arrays deployed vertically in the water column. The work concentrates on the use of high resolution techniques. Two processing strategies are envisioned: (1) full aperture coherent processing, which in theory offers the best processing gain; and (2) subaperture processing, which consists of extracting subapertures from the array and recombining the angular spectra estimated from these subarrays. The conventional beamformer, the minimum variance distortionless response (MVDR) processor, the multiple signal classification (MUSIC) algorithm and the minimum norm method are used in this study. To validate the various processing techniques, the ATLAS normal mode program is used to generate synthetic data that constitute a realistic signal environment. A deep-water, range-independent sound velocity profile environment, characteristic of the North-East Pacific, is studied for two different 128-sensor arrays: a very long one cut for 30 Hz and operating at 20 Hz, and a shorter one cut for 107 Hz and operating at 100 Hz. The simulated sound source is 5 m deep. The full aperture and subaperture processing are implemented with curved and plane wavefront replica vectors. The beamforming results are examined and compared to the ray-theory results produced by the Generic Sonar Model.
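
    A minimal plane-wave MVDR sketch for a line array is shown below. The synthetic data here are simple plane-wave arrivals in white noise rather than the ATLAS normal-mode fields, and the array size, arrival angles, and diagonal loading are illustrative assumptions.

    ```python
    import numpy as np

    # Plane-wave MVDR beamforming sketch for a uniform line array: form the sample
    # covariance, invert it, and scan replica vectors over angle.
    rng = np.random.default_rng(4)
    n_sensors, d_over_lambda, n_snapshots = 64, 0.5, 200
    n = np.arange(n_sensors)

    def steering(angle_deg):
        # Plane-wave replica vector, angle measured from broadside.
        return np.exp(2j * np.pi * d_over_lambda * n * np.sin(np.radians(angle_deg)))

    # Two arrivals at -10 and 12 degrees plus spatially white noise.
    signals = np.vstack([steering(-10.0), steering(12.0)])
    amps = rng.standard_normal((2, n_snapshots)) + 1j * rng.standard_normal((2, n_snapshots))
    data = signals.T @ amps + 0.3 * (rng.standard_normal((n_sensors, n_snapshots))
                                     + 1j * rng.standard_normal((n_sensors, n_snapshots)))

    R = data @ data.conj().T / n_snapshots                          # sample covariance
    R += 1e-3 * np.trace(R).real / n_sensors * np.eye(n_sensors)    # diagonal loading
    Rinv = np.linalg.inv(R)

    angles = np.arange(-30.0, 30.0, 0.25)
    p_mvdr = np.array([1.0 / np.real(steering(a).conj() @ Rinv @ steering(a))
                       for a in angles])
    peaks = angles[np.argsort(p_mvdr)[-2:]]
    print("MVDR peaks near:", np.sort(np.round(peaks, 1)), "degrees")
    ```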

  11. Flexible organic TFT bio-signal amplifier using reliable chip component assembly process with conductive adhesive.

    PubMed

    Yoshimoto, Shusuke; Uemura, Takafumi; Akiyama, Mihoko; Ihara, Yoshihiro; Otake, Satoshi; Fujii, Tomoharu; Araki, Teppei; Sekitani, Tsuyoshi

    2017-07-01

    This paper presents a flexible organic thin-film transistor (OTFT) amplifier for bio-signal monitoring and describes the chip component assembly process. Using a conductive adhesive and a chip mounter, the chip components are mounted on a flexible film substrate that carries OTFT circuits. This study first investigates the reliability of the assembly technique for chip components on the flexible substrate. It then specifically examines heart pulse wave monitoring conducted using the proposed flexible amplifier circuit and a flexible piezoelectric film. We connected the amplifier to a Bluetooth device for a wearable-device demonstration.

  12. A novel approach to estimate emissions from large transportation networks: Hierarchical clustering-based link-driving-schedules for EPA-MOVES using dynamic time warping measures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aziz, H. M. Abdul; Ukkusuri, Satish V.

    EPA-MOVES (Motor Vehicle Emission Simulator) is often integrated with traffic simulators to assess emission levels of large-scale urban networks with signalized intersections. High variations in speed profiles exist in congested urban networks with signalized intersections. The traditional average-speed-based emission estimation technique with EPA-MOVES executes quickly but underestimates emissions in most cases because it ignores the speed variation in congested networks with signalized intersections. In contrast, the atomic second-by-second speed profile (i.e., the trajectory of each vehicle) based technique provides accurate emissions at the cost of excessive computational power and time. We addressed this issue by developing a novel method to determine the link-driving-schedules (LDSs) for the EPA-MOVES tool. Our research developed a hierarchical clustering technique with dynamic time warping similarity measures (HC-DTW) to find LDSs for EPA-MOVES that produce emission estimates better than the average-speed-based technique with execution time faster than the atomic speed profile approach. We applied HC-DTW to sample data from a signalized corridor and found that it can significantly reduce computational time without compromising accuracy. The developed technique can substantially contribute to the EPA-MOVES-based emission estimation process for large-scale urban transportation networks by reducing computational time while keeping estimates reasonably accurate. The method is most appropriate for transportation networks with high variation in speed, such as those with signalized intersections. Experimental results show error differences ranging from 2% to 8% for most pollutants except PM10.
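
    The clustering step can be sketched as below: a dynamic-programming DTW distance between synthetic second-by-second speed profiles feeds standard hierarchical clustering, separating cruising links from stop-and-go links. The speed profiles, linkage settings, and cluster count are assumptions; the EPA-MOVES coupling and the paper's exact LDS construction are not reproduced.

    ```python
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import squareform

    # Hierarchical clustering of speed profiles with a DTW distance (illustrative).
    def dtw_distance(a, b):
        # Classic O(len(a)*len(b)) dynamic-programming DTW with unit step pattern.
        n, m = len(a), len(b)
        cost = np.full((n + 1, m + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = abs(a[i - 1] - b[j - 1])
                cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
        return cost[n, m]

    rng = np.random.default_rng(5)
    t = np.arange(60)
    cruising = [50 + rng.normal(0, 2, t.size) for _ in range(5)]                 # km/h
    stop_and_go = [25 + 20 * np.sin(2 * np.pi * t / 20 + rng.uniform(0, 6))
                   + rng.normal(0, 2, t.size) for _ in range(5)]
    profiles = cruising + stop_and_go

    n_prof = len(profiles)
    dist = np.zeros((n_prof, n_prof))
    for i in range(n_prof):
        for j in range(i + 1, n_prof):
            dist[i, j] = dist[j, i] = dtw_distance(profiles[i], profiles[j])

    labels = fcluster(linkage(squareform(dist), method="average"), t=2, criterion="maxclust")
    print("cluster labels (first 5 cruising, last 5 stop-and-go):", labels)
    ```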

  13. A novel approach to estimate emissions from large transportation networks: Hierarchical clustering-based link-driving-schedules for EPA-MOVES using dynamic time warping measures

    DOE PAGES

    Aziz, H. M. Abdul; Ukkusuri, Satish V.

    2017-06-29

    EPA-MOVES (Motor Vehicle Emission Simulator) is often integrated with traffic simulators to assess emission levels of large-scale urban networks with signalized intersections. High variations in speed profiles exist in congested urban networks with signalized intersections. The traditional average-speed-based emission estimation technique with EPA-MOVES executes quickly but underestimates emissions in most cases because it ignores the speed variation in congested networks with signalized intersections. In contrast, the atomic second-by-second speed profile (i.e., the trajectory of each vehicle) based technique provides accurate emissions at the cost of excessive computational power and time. We addressed this issue by developing a novel method to determine the link-driving-schedules (LDSs) for the EPA-MOVES tool. Our research developed a hierarchical clustering technique with dynamic time warping similarity measures (HC-DTW) to find LDSs for EPA-MOVES that produce emission estimates better than the average-speed-based technique with execution time faster than the atomic speed profile approach. We applied HC-DTW to sample data from a signalized corridor and found that it can significantly reduce computational time without compromising accuracy. The developed technique can substantially contribute to the EPA-MOVES-based emission estimation process for large-scale urban transportation networks by reducing computational time while keeping estimates reasonably accurate. The method is most appropriate for transportation networks with high variation in speed, such as those with signalized intersections. Experimental results show error differences ranging from 2% to 8% for most pollutants except PM10.

  14. Automated Identification and Shape Analysis of Chorus Elements in the Van Allen Radiation Belts

    NASA Astrophysics Data System (ADS)

    Sen Gupta, Ananya; Kletzing, Craig; Howk, Robin; Kurth, William; Matheny, Morgan

    2017-12-01

    An important goal of the Van Allen Probes mission is to understand wave-particle interaction by chorus emissions in the terrestrial Van Allen radiation belts. To test models, statistical characterization of chorus properties, such as amplitude variation and sweep rates, is an important scientific goal. The Electric and Magnetic Field Instrument Suite and Integrated Science (EMFISIS) instrumentation suite provides measurements of wave electric and magnetic fields as well as DC magnetic fields for the Van Allen Probes mission. However, manual inspection across terabytes of EMFISIS data is not feasible and is prone to human confirmation bias. We present signal processing techniques for automated identification, shape analysis, and sweep rate characterization of high-amplitude whistler-mode chorus elements in the Van Allen radiation belts. Specifically, we develop signal processing techniques based on the Radon transform that disambiguate chorus elements with a dominant sweep rate from hiss-like chorus. We present representative results validating our techniques and also provide statistical characterization of detected chorus elements across a case study of a 6 s epoch.
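
    A Radon-style slant-stack sketch of sweep-rate estimation is given below: a synthetic spectrogram containing one rising chorus-like element over hiss is sheared at candidate sweep rates, and the rate that concentrates the ridge into the sharpest projection is selected. The synthetic data and rate grid are assumptions; the EMFISIS detection pipeline and shape analysis are not reproduced.

    ```python
    import numpy as np

    # Radon-style slant-stack for estimating the dominant sweep rate of a
    # chorus-like ridge in a synthetic time-frequency map.
    rng = np.random.default_rng(6)
    n_freq, n_time = 200, 200                 # frequency bins x time bins
    spec = 0.3 * rng.random((n_freq, n_time)) # hiss-like background
    true_rate = 0.6                           # frequency bins per time bin
    for t in range(n_time):
        f = int(20 + true_rate * t)
        if f < n_freq:
            spec[f, t] += 1.0                 # rising chorus-like element

    def slant_stack(spec, rate):
        # Shear the spectrogram so a ridge with this sweep rate becomes horizontal,
        # then sum over time; a matched rate produces a sharp peak over frequency.
        sheared = np.zeros_like(spec)
        for t in range(spec.shape[1]):
            sheared[:, t] = np.roll(spec[:, t], -int(round(rate * t)))
        return sheared.sum(axis=1)

    rates = np.arange(-1.0, 1.01, 0.05)
    sharpness = [slant_stack(spec, r).max() for r in rates]
    best = rates[int(np.argmax(sharpness))]
    print("estimated sweep rate: %.2f bins/bin (true %.2f)" % (best, true_rate))
    ```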

  15. High resolution imaging of objects located within a wall

    NASA Astrophysics Data System (ADS)

    Greneker, Eugene F.; Showman, Gregory A.; Trostel, John M.; Sylvester, Vincent

    2006-05-01

    Researchers at Georgia Tech Research Institute have developed a high resolution imaging radar technique that allows large sections of a test wall to be scanned in the X and Y dimensions. The resulting images provide information on what, if anything, is inside the wall. The scanning homodyne radar operates at a frequency of 24.1 GHz with an output power level of approximately 10 milliwatts. The imaging technique is currently being used to study the detection of toxic mold on the back surface of wallboard using radar as a sensor. The moisture associated with the mold can easily be detected. In addition to mold, the technique will image objects as small as a 4 millimeter sphere on the front or rear of the wallboard and will penetrate both sides of a wall made of studs and wallboard. Signal processing is performed on the resulting data to further sharpen the image. Photos of the scanner and images produced by the scanner are presented. The signal processing approach and the associated technical challenges are also discussed.

  16. Enhancement of signal-to-noise ratio in Brillouin optical time domain analyzers by dual-probe detection

    NASA Astrophysics Data System (ADS)

    Iribas, Haritz; Loayssa, Alayn; Sauser, Florian; Llera, Miguel; Le Floch, Sébastien

    2017-04-01

    We demonstrate a simple technique to enhance the signal-to-noise ratio (SNR) in Brillouin optical time-domain analysis (BOTDA) sensors by combining gain and loss processes. The technique is based on shifting the pump pulse optical frequency in a double-sideband probe system, so that the gain and loss processes take place at different frequencies. In this manner, the loss and the gain do not cancel each other out, which makes it possible to exploit both measurements at the same time and yields a 3 dB improvement in SNR. Furthermore, the technique does not require optical filtering, which allows a larger SNR improvement and simplifies the setup. The method is experimentally demonstrated on a 101 km fiber spool, obtaining a measurement uncertainty of 2.6 MHz (2σ) at the worst-contrast position for 2 m spatial resolution. To the best of our knowledge, this yields the highest figure of merit for a BOTDA sensor that does not use pulse coding or Raman amplification.

  17. Discrimination techniques employing both reflective and thermal multispectral signals. [for remote sensor technology

    NASA Technical Reports Server (NTRS)

    Malila, W. A.; Crane, R. B.; Richardson, W.

    1973-01-01

    Recent improvements in remote sensor technology carry implications for data processing. Multispectral line scanners now exist that can collect data simultaneously and in registration in multiple channels at both reflective and thermal (emissive) wavelengths. Progress in dealing with two resultant recognition processing problems is discussed: (1) more channels mean higher processing costs; to combat these costs, a new and faster procedure for selecting subsets of channels has been developed; (2) differences between thermal and reflective characteristics influence recognition processing; to illustrate the magnitude of these differences, some explanatory calculations are presented. Also introduced is a different way to process multispectral scanner data, namely radiation balance mapping and related procedures. Techniques and potentials are discussed and examples are presented.
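
    As a hypothetical illustration of faster channel-subset selection, the sketch below greedily adds the channel that most increases a simple two-class Fisher separability score instead of evaluating every subset. The criterion, data, and subset size are assumptions; this is not the specific procedure developed in the paper.

    ```python
    import numpy as np

    # Greedy forward selection of spectral channels using a two-class Fisher
    # separability score (illustrative stand-in for an exhaustive subset search).
    def fisher_score(x_a, x_b):
        # Ratio of between-class to within-class scatter, summed over channels.
        diff = x_a.mean(axis=0) - x_b.mean(axis=0)
        within = x_a.var(axis=0) + x_b.var(axis=0) + 1e-9
        return float(np.sum(diff ** 2 / within))

    rng = np.random.default_rng(7)
    n_channels, n_pix = 12, 300
    class_a = rng.normal(0.0, 1.0, (n_pix, n_channels))
    class_b = rng.normal(0.0, 1.0, (n_pix, n_channels))
    class_b[:, [2, 7, 9]] += 1.5              # only a few channels carry separability

    selected, remaining = [], list(range(n_channels))
    for _ in range(3):                        # pick a 3-channel subset
        best = max(remaining,
                   key=lambda ch: fisher_score(class_a[:, selected + [ch]],
                                               class_b[:, selected + [ch]]))
        selected.append(best)
        remaining.remove(best)

    print("selected channels:", sorted(selected), "(informative channels were [2, 7, 9])")
    ```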

  18. Multitaper spectral analysis of atmospheric radar signals

    NASA Astrophysics Data System (ADS)

    Anandan, V.; Pan, C.; Rajalakshmi, T.; Ramachandra Reddy, G.

    2004-11-01

    Multitaper spectral analysis using sinusoidal tapers has been carried out on the backscattered signals received from the troposphere and lower stratosphere by the Gadanki Mesosphere-Stratosphere-Troposphere (MST) radar under various signal-to-noise ratio conditions. A comparison is made between sinusoidal multitapers of order three and the single Hanning and rectangular tapers to understand the relative merits of processing under each scheme. Power spectrum plots show that echoes are better identified with the multitaper estimate, especially in regions of weak signal-to-noise ratio. Further analysis is carried out to obtain the three lower-order moments from the three estimation techniques. The results show that multitaper analysis gives a better signal-to-noise ratio, i.e. higher detectability. The spectral estimates from the multitaper and single tapers are also examined for consistency of measurement. The results show that the multitaper estimate is more consistent in Doppler measurements than the single-taper estimates. Doppler width measurements with the different approaches were also studied, and the results show that the multitaper technique gives better temporal resolution and estimation accuracy.
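
    A minimal sketch of the multitaper estimate with sinusoidal tapers of order three, compared against a single Hanning taper, is given below. A synthetic Doppler tone in noise stands in for the radar returns, and all signal parameters are illustrative.

    ```python
    import numpy as np

    # Multitaper spectral estimate with sinusoidal (Riedel-Sidorenko) tapers of
    # order three, compared with a single Hanning-tapered periodogram, on a
    # synthetic complex Doppler tone in white noise.
    rng = np.random.default_rng(8)
    N, fs = 512, 1000.0
    t = np.arange(N) / fs
    x = np.exp(2j * np.pi * 123.0 * t) + (rng.standard_normal(N)
                                          + 1j * rng.standard_normal(N))

    def sine_taper(k, N):
        n = np.arange(N)
        return np.sqrt(2.0 / (N + 1)) * np.sin(np.pi * (k + 1) * (n + 1) / (N + 1))

    # Multitaper estimate: average the periodograms of the data under each taper.
    K = 3
    eigenspectra = [np.abs(np.fft.fft(x * sine_taper(k, N))) ** 2 for k in range(K)]
    mt_spectrum = np.mean(eigenspectra, axis=0)

    hann_spectrum = np.abs(np.fft.fft(x * np.hanning(N))) ** 2
    freqs = np.fft.fftfreq(N, 1.0 / fs)
    print("multitaper peak at %.1f Hz, Hanning peak at %.1f Hz"
          % (freqs[np.argmax(mt_spectrum)], freqs[np.argmax(hann_spectrum)]))
    ```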

  19. A luminescence lifetime assisted ratiometric fluorimeter for biological applications

    PubMed Central

    Lam, Hung; Kostov, Yordan; Rao, Govind; Tolosa, Leah

    2009-01-01

    In general, the most difficult task in developing devices for fluorescence ratiometric sensing is the isolation of signals from overlapping emission wavelengths. Wavelength discrimination can be achieved by using monochromators or bandpass filters, which often lead to decreased signal intensities. The result is a device that is both complex and expensive. Here we present an alternative system—a low-cost standalone optical fluorimeter based on luminescence lifetime assisted ratiometric sensing (LARS). This paper describes the principle of this technique and the overall design of the sensor device. The most significant innovation of LARS is the ability to discriminate between two overlapping luminescence signals based on differences in their luminescence decay rates. Thus, minimal filtering is required and the two signals can be isolated despite significant overlap of luminescence spectra. The result is a device that is both simple and inexpensive. The electronic circuit employs the lock-in amplification technique for the signal processing and the system is controlled by an onboard microcontroller. In addition, the system is designed to communicate with external devices via Bluetooth. PMID:20059156
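
    The lock-in discrimination can be sketched as below: two emitters with different luminescence lifetimes acquire different phase lags under sinusoidal excitation, and demodulating against quadrature references followed by averaging recovers their individual amplitudes. The modulation frequency, lifetimes, and amplitudes are illustrative assumptions, not the device's actual operating values.

    ```python
    import numpy as np

    # Digital lock-in sketch: separate two luminescence components by their
    # lifetime-dependent phase lags under sinusoidal excitation.
    fs, f_mod, T = 100_000.0, 1_000.0, 0.1      # sample rate, modulation freq, duration
    t = np.arange(0, T, 1.0 / fs)
    rng = np.random.default_rng(9)

    # Phase lag of a single-exponential emitter: tan(phi) = 2*pi*f_mod*tau.
    tau_ref, tau_ind = 2e-6, 200e-6             # short- and long-lifetime emitters
    phi_ref = np.arctan(2 * np.pi * f_mod * tau_ref)
    phi_ind = np.arctan(2 * np.pi * f_mod * tau_ind)
    a_ref, a_ind = 1.0, 0.6                     # emission amplitudes to recover

    signal = (a_ref * np.cos(2 * np.pi * f_mod * t - phi_ref)
              + a_ind * np.cos(2 * np.pi * f_mod * t - phi_ind)
              + 0.05 * rng.standard_normal(t.size))

    # Lock-in demodulation: mix with quadrature references and average over an
    # integer number of modulation periods to get in-phase and quadrature parts.
    i_comp = 2.0 * np.mean(signal * np.cos(2 * np.pi * f_mod * t))
    q_comp = 2.0 * np.mean(signal * np.sin(2 * np.pi * f_mod * t))

    # Solve the two-component phasor mixture for the individual amplitudes.
    A = np.array([[np.cos(phi_ref), np.cos(phi_ind)],
                  [np.sin(phi_ref), np.sin(phi_ind)]])
    a_est = np.linalg.solve(A, np.array([i_comp, q_comp]))
    print("recovered amplitudes: %.2f (reference), %.2f (indicator)" % tuple(a_est))
    ```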

  20. A luminescence lifetime assisted ratiometric fluorimeter for biological applications.

    PubMed

    Lam, Hung; Kostov, Yordan; Rao, Govind; Tolosa, Leah

    2009-12-01

    In general, the most difficult task in developing devices for fluorescence ratiometric sensing is the isolation of signals from overlapping emission wavelengths. Wavelength discrimination can be achieved by using monochromators or bandpass filters, which often lead to decreased signal intensities. The result is a device that is both complex and expensive. Here we present an alternative system--a low-cost standalone optical fluorimeter based on luminescence lifetime assisted ratiometric sensing (LARS). This paper describes the principle of this technique and the overall design of the sensor device. The most significant innovation of LARS is the ability to discriminate between two overlapping luminescence signals based on differences in their luminescence decay rates. Thus, minimal filtering is required and the two signals can be isolated despite significant overlap of luminescence spectra. The result is a device that is both simple and inexpensive. The electronic circuit employs the lock-in amplification technique for the signal processing and the system is controlled by an onboard microcontroller. In addition, the system is designed to communicate with external devices via Bluetooth.
